C# Class Lucene.Net.Analysis.Util.CharTokenizer

An abstract base class for simple, character-oriented tokenizers.

You must specify the required LuceneVersion compatibility when creating CharTokenizer:

A new CharTokenizer API was introduced in Lucene 3.1. It moved from UTF-16 code units to UTF-32 codepoints in order to eventually support supplementary characters. The old char-based API has been deprecated and should be replaced with the int-based methods IsTokenChar(int) and Normalize(int).

As of Lucene 3.1, each CharTokenizer constructor expects a LuceneVersion argument. Based on the given LuceneVersion, either the new API or a backwards-compatibility layer is used at runtime. For LuceneVersion < 3.1 the backwards-compatibility layer ensures correct behavior even for indexes built with previous versions of Lucene. If a LuceneVersion >= 3.1 is used, CharTokenizer requires the new API to be implemented by the instantiated class; the old char-based API is no longer required, even when backwards compatibility must be preserved. CharTokenizer subclasses implementing the new API remain fully backwards compatible when instantiated with LuceneVersion < 3.1.

Note: If you use a subclass of CharTokenizer with LuceneVersion >= 3.1 on an index built with a version < 3.1, created tokens might not be compatible with the terms in your index.
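
The sketch below shows how a subclass typically plugs in. It is a hypothetical example (the name SimpleWhitespaceTokenizer is not part of Lucene.NET) written against the constructor and IsTokenChar(char) signatures documented on this page; exact type and member names differ between Lucene.NET releases.

using System.IO;
using Lucene.Net.Analysis.Util;

// Hypothetical subclass: treats any non-whitespace character as part of a token,
// so whitespace acts as the token boundary. Signatures follow the member listing
// below; newer Lucene.NET releases use the codepoint-based IsTokenChar(int) instead.
public sealed class SimpleWhitespaceTokenizer : CharTokenizer
{
    public SimpleWhitespaceTokenizer(Lucene.Net.Util.Version matchVersion, TextReader input)
        : base(matchVersion, input)
    {
    }

    // A character belongs to a token unless it is whitespace.
    protected override bool IsTokenChar(char c)
    {
        return !char.IsWhiteSpace(c);
    }
}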

Inheritance: Tokenizer
Open project: paulirwin/lucene.net

Public Methods

Method / Description
CharTokenizer ( Lucene.Net.Util.Version matchVersion, AttributeFactory factory, TextReader input )

Creates a new CharTokenizer instance

End ( ) : void
IncrementToken ( ) : bool
Reset ( ) : void

Protected Methods

Method / Description
CharTokenizer ( Lucene.Net.Util.Version matchVersion, TextReader input )

Creates a new CharTokenizer instance

IsTokenChar ( char c ) : bool

Returns true iff a codepoint should be included in a token. The tokenizer emits as tokens the maximal runs of adjacent codepoints for which this predicate returns true; codepoints for which it returns false define token boundaries and are not included in any token.

Normalize ( int c ) : int

Called on each token character to normalize it before it is added to the token. The default implementation does nothing. Subclasses may use this to, e.g., lowercase tokens.

Method Details

CharTokenizer() public method

Creates a new CharTokenizer instance
public CharTokenizer ( Lucene.Net.Util.Version matchVersion, AttributeFactory factory, TextReader input )
matchVersion Lucene.Net.Util.Version: Lucene version to match
factory AttributeFactory: the attribute factory to use for this Tokenizer
input System.IO.TextReader: the input to split up into tokens

CharTokenizer() protected method

Creates a new CharTokenizer instance
protected CharTokenizer ( Lucene.Net.Util.Version matchVersion, TextReader input )
matchVersion Lucene.Net.Util.Version: Lucene version to match
input System.IO.TextReader: the input to split up into tokens

End() public method

public End ( ) : void
Return void

IncrementToken() public method

public IncrementToken ( ) : bool
Return bool
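
A typical consumption loop over a CharTokenizer subclass drives these members in the usual TokenStream order. The sketch below reuses the hypothetical SimpleWhitespaceTokenizer from the earlier example; whether Reset() must be called before the first IncrementToken(), and which term attribute interface exposes the token text, depends on the Lucene.NET release.

using System;
using System.IO;

// Hypothetical usage sketch: count the tokens produced from a short string.
var tokenizer = new SimpleWhitespaceTokenizer(
    Lucene.Net.Util.Version.LUCENE_30,              // version value is an assumption
    new StringReader("Hello CharTokenizer world"));

int count = 0;
tokenizer.Reset();                                  // no-op in older releases, required before the loop in newer ones
while (tokenizer.IncrementToken())
{
    count++;                                        // token text is available via the term attribute,
                                                    // e.g. GetAttribute<ICharTermAttribute>() in current Lucene.NET
}
tokenizer.End();                                    // records the final offset state
tokenizer.Dispose();                                // releases the underlying TextReader
Console.WriteLine(count);                           // expected: 3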

IsTokenChar() protected abstract method

Returns true iff a codepoint should be included in a token. The tokenizer emits as tokens the maximal runs of adjacent codepoints for which this predicate returns true; codepoints for which it returns false define token boundaries and are not included in any token.
protected abstract IsTokenChar ( char c ) : bool
c char
Return bool
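
As a further illustration of the predicate (beyond the whitespace example earlier), a hypothetical subclass that tokenizes only letters and digits could implement it like this:

// Hypothetical override: letters and digits form tokens; punctuation,
// whitespace and symbols act as token boundaries.
protected override bool IsTokenChar(char c)
{
    return char.IsLetterOrDigit(c);
}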

Normalize() protected method

Called on each token character to normalize it before it is added to the token. The default implementation does nothing. Subclasses may use this to, e.g., lowercase tokens.
protected Normalize ( int c ) : int
c int
Return int
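
For example, a lower-casing subclass could override Normalize as sketched below. This is an illustrative simplification, not the actual LowerCaseTokenizer source, and it only handles codepoints that fit in a single char.

// Illustrative override: lower-case each codepoint before it is appended to the token.
// Supplementary codepoints (above char.MaxValue) are passed through unchanged here.
protected override int Normalize(int c)
{
    return c <= char.MaxValue ? char.ToLowerInvariant((char)c) : c;
}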

Reset() public method

public Reset ( ) : void
Return void