C# Class Lucene.Net.Analysis.Util.CharTokenizer

An abstract base class for simple, character-oriented tokenizers.

You must specify the required LuceneVersion compatibility when creating CharTokenizer:

A new CharTokenizer API was introduced with Lucene 3.1. This API moved from UTF-16 code units to UTF-32 codepoints in order to eventually add support for supplementary characters. The old char-based API has been deprecated and should be replaced with the int-based methods IsTokenChar(int) and Normalize(int).

As of Lucene 3.1 each CharTokenizer constructor expects a LuceneVersion argument. Based on the given LuceneVersion, either the new API or a backwards-compatibility layer is used at runtime. For LuceneVersion < 3.1 the backwards-compatibility layer ensures correct behavior even for indexes built with previous versions of Lucene. If a LuceneVersion >= 3.1 is used, CharTokenizer requires the new API to be implemented by the instantiated class; the old char-based API is then no longer required, even if backwards compatibility must be preserved. CharTokenizer subclasses implementing the new API are fully backwards compatible if instantiated with LuceneVersion < 3.1.

Note: If you use a subclass of CharTokenizer with LuceneVersion >= 3.1 on an index built with a version < 3.1, the created tokens might not be compatible with the terms in your index.
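For illustration, here is a minimal sketch of a CharTokenizer subclass, assuming the member signatures documented on this page (IsTokenChar(char) and Normalize(int)). The class name SimpleLetterTokenizer is hypothetical, and exact parameter types may differ between Lucene.Net releases. It keeps runs of letters as tokens and lowercases them.

    using System.IO;
    using Lucene.Net.Analysis.Util;

    // Hypothetical example: a letter-only tokenizer built on CharTokenizer.
    // Signatures follow the members documented on this page; they may differ
    // slightly between Lucene.Net releases.
    public sealed class SimpleLetterTokenizer : CharTokenizer
    {
        public SimpleLetterTokenizer(Lucene.Net.Util.Version matchVersion, TextReader input)
            : base(matchVersion, input)
        {
        }

        // Only letters belong to a token; every other character is a boundary.
        protected override bool IsTokenChar(char c)
        {
            return char.IsLetter(c);
        }

        // Lowercase each token character before it is appended to the token
        // (simplified: assumes BMP code points).
        protected override int Normalize(int c)
        {
            return char.ToLowerInvariant((char)c);
        }
    }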

Inheritance: Tokenizer
Open project: paulirwin/lucene.net

Public Methods

Method | Description
CharTokenizer ( Lucene.Net.Util.Version matchVersion, AttributeFactory factory, TextReader input )

Creates a new CharTokenizer instance

End ( ) : void
IncrementToken ( ) : bool
Reset ( ) : void

Protected Methods

Method | Description
CharTokenizer ( Lucene.Net.Util.Version matchVersion, TextReader input )

Creates a new CharTokenizer instance

IsTokenChar ( char c ) : bool

Returns true iff a codepoint should be included in a token. This tokenizer generates tokens from adjacent sequences of codepoints that satisfy this predicate. Codepoints for which this returns false define token boundaries and are not included in tokens.

Normalize ( int c ) : int

Called on each token character to normalize it before it is added to the token. The default implementation does nothing. Subclasses may use this to, e.g., lowercase tokens.

Method Details

CharTokenizer() public method

Creates a new CharTokenizer instance
public CharTokenizer ( Lucene.Net.Util.Version matchVersion, AttributeFactory factory, TextReader input )
matchVersion Lucene.Net.Util.Version: Lucene version to match
factory AttributeFactory: the attribute factory to use for this Tokenizer
input System.IO.TextReader: the input to split up into tokens

CharTokenizer() protected method

Creates a new CharTokenizer instance
protected CharTokenizer ( Lucene.Net.Util.Version matchVersion, TextReader input )
matchVersion Lucene.Net.Util.Version: Lucene version to match
input System.IO.TextReader: the input to split up into tokens
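As a sketch of how a subclass typically surfaces both constructors, the hypothetical DigitTokenizer below forwards to both base overloads: the two-argument form uses the default attribute factory, while the three-argument form lets the caller supply one. The AttributeFactory type is written with the short name used on this page; in some Lucene.Net releases it is the nested AttributeSource.AttributeFactory type.

    using System.IO;
    using Lucene.Net.Analysis.Util;

    // Hypothetical subclass forwarding both documented base constructors.
    public sealed class DigitTokenizer : CharTokenizer
    {
        // Uses the default attribute factory.
        public DigitTokenizer(Lucene.Net.Util.Version matchVersion, TextReader input)
            : base(matchVersion, input)
        {
        }

        // Lets the caller supply a custom AttributeFactory
        // (AttributeSource.AttributeFactory in some releases).
        public DigitTokenizer(Lucene.Net.Util.Version matchVersion, AttributeFactory factory, TextReader input)
            : base(matchVersion, factory, input)
        {
        }

        // Runs of decimal digits form the tokens.
        protected override bool IsTokenChar(char c)
        {
            return char.IsDigit(c);
        }
    }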

End() public method

public End ( ) : void
Return: void

IncrementToken() public method

public IncrementToken ( ) : bool
Return: bool
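A typical consumption loop for any CharTokenizer subclass pairs IncrementToken with Reset and End, as sketched below. The attribute interface and its namespace differ between releases; ICharTermAttribute from Lucene.Net.Analysis.TokenAttributes (Lucene.Net 4.x naming) is assumed here.

    using System;
    using Lucene.Net.Analysis;
    using Lucene.Net.Analysis.TokenAttributes; // attribute namespace as in Lucene.Net 4.x (assumption)

    static class TokenizerDemo
    {
        // Standard TokenStream consumption loop: Reset, IncrementToken until
        // it returns false, then End. Works for any CharTokenizer subclass.
        public static void PrintTokens(Tokenizer tokenizer)
        {
            var termAtt = tokenizer.AddAttribute<ICharTermAttribute>();
            tokenizer.Reset();
            while (tokenizer.IncrementToken())
            {
                Console.WriteLine(termAtt.ToString());
            }
            tokenizer.End();
            tokenizer.Dispose(); // Close() in some older releases
        }
    }

With the SimpleLetterTokenizer sketch above, PrintTokens(new SimpleLetterTokenizer(matchVersion, new StringReader("Hello World"))) would print hello and world.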

IsTokenChar() protected abstract method

Returns true iff a codepoint should be included in a token. This tokenizer generates tokens from adjacent sequences of codepoints that satisfy this predicate. Codepoints for which this returns false define token boundaries and are not included in tokens.
protected abstract IsTokenChar ( char c ) : bool
c char
Return: bool

Normalize() protected method

Called on each token character to normalize it before it is added to the token. The default implementation does nothing. Subclasses may use this to, e.g., lowercase tokens.
protected Normalize ( int c ) : int
c int
Return: int

Reset() public method

public Reset ( ) : void
Return: void