Property | Type | Description
---|---|---
GLOBAL_REUSE_STRATEGY | ReuseStrategy |
PER_FIELD_REUSE_STRATEGY | ReuseStrategy |
Method | Description
---|---
Analyzer ( ) : Lucene.Net.Util | Creates a new Analyzer, reusing the same set of components per thread across calls to #tokenStream(String, Reader).
Analyzer ( ReuseStrategy reuseStrategy ) : Lucene.Net.Util | Expert: creates a new Analyzer with a custom ReuseStrategy. NOTE: if you just want to reuse on a per-field basis, it is easier to use a subclass of AnalyzerWrapper such as PerFieldAnalyzerWrapper instead.
CreateComponents ( string fieldName, TextReader reader ) : TokenStreamComponents | Creates a new TokenStreamComponents instance for this analyzer.
Dispose ( ) : void | Frees persistent resources used by this Analyzer.
GetOffsetGap ( string fieldName ) : int | Just like #getPositionIncrementGap, except for Token offsets instead. By default this returns 1. This method is only called if the field produced at least one token for indexing.
GetPositionIncrementGap ( string fieldName ) : int | Invoked before indexing an IndexableField instance if terms have already been added to that field. This allows custom analyzers to place an automatic position increment gap between IndexableField instances using the same field name. The default position increment gap is 0. With a 0 position increment gap and the typical default token position increment of 1, all terms in a field, including terms from different IndexableField instances, are in successive positions, allowing exact PhraseQuery matches, for instance, across IndexableField instance boundaries.
InitReader ( string fieldName, TextReader reader ) : TextReader | Override this if you want to add a CharFilter chain. The default implementation returns the reader unmodified.
TokenStream ( string fieldName, TextReader reader ) : TokenStream | Returns a TokenStream suitable for fieldName that tokenizes the contents of reader. This method uses #createComponents(String, Reader) to obtain an instance of TokenStreamComponents. It returns the sink of the components and stores the components internally. Subsequent calls to this method will reuse the previously stored components after resetting them through TokenStreamComponents#setReader(Reader). NOTE: after calling this method, the consumer must follow the workflow described in TokenStream to properly consume its contents. See the Lucene.Net.Analysis package documentation for some examples demonstrating this.
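The per-thread reuse contract described for TokenStream above can be sketched without the Lucene.Net types. The classes below (`Components`, `ReusingAnalyzer`) are illustrative stand-ins, not the library's API: components are built once per thread, and every later call only resets the stored components with the new input, mirroring what TokenStreamComponents#setReader(Reader) does.

```csharp
using System;
using System.Threading;

// Illustrative stand-in for TokenStreamComponents (not the real Lucene.Net type).
class Components
{
    public string CurrentText { get; private set; }
    public int BuildCount { get; }
    public Components(int buildCount) { BuildCount = buildCount; }
    // Analogous to TokenStreamComponents#SetReader: point the existing
    // component chain at new input instead of rebuilding it.
    public void SetReader(string text) { CurrentText = text; }
}

// Illustrative analyzer that caches its components per thread.
class ReusingAnalyzer
{
    private readonly ThreadLocal<Components> stored = new ThreadLocal<Components>();
    private int builds;

    public Components GetTokenStream(string fieldName, string text)
    {
        var c = stored.Value;
        if (c == null)             // first call on this thread: create components
        {
            c = new Components(++builds);
            stored.Value = c;
        }
        c.SetReader(text);         // subsequent calls: just reset the stored components
        return c;
    }
}

class Program
{
    static void Main()
    {
        var analyzer = new ReusingAnalyzer();
        var first = analyzer.GetTokenStream("body", "hello");
        var second = analyzer.GetTokenStream("body", "world");
        Console.WriteLine(ReferenceEquals(first, second)); // True: same components reused
        Console.WriteLine(second.BuildCount);              // 1: built only once
    }
}
```

This is why a consumer must fully reset and consume each returned stream before requesting another one for the same thread: the second call hands back the very same object.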
public Analyzer ( ReuseStrategy reuseStrategy ) : Lucene.Net.Util
reuseStrategy | ReuseStrategy |
Return | Lucene.Net.Util
public abstract CreateComponents ( string fieldName, TextReader reader ) : TokenStreamComponents
fieldName | string | the name of the fields content passed to the TokenStreamComponents sink as a reader
reader | TextReader | the reader passed to the Tokenizer constructor
Return | TokenStreamComponents
public GetOffsetGap ( string fieldName ) : int
fieldName | string | the field just indexed
Return | int
public GetPositionIncrementGap ( string fieldName ) : int
fieldName | string | IndexableField name being indexed
Return | int
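The effect of the position increment gap can be shown with plain arithmetic, outside the library. The sketch below (names like `AssignPositions` are illustrative, not Lucene.Net API) assigns absolute positions to tokens from several values of the same field, inserting `gap` extra increments between consecutive values, which is what GetPositionIncrementGap controls during indexing.

```csharp
using System;
using System.Collections.Generic;

class PositionGapDemo
{
    // Assigns absolute positions to tokens from several values of one field,
    // adding `gap` extra increments between consecutive values. Each token
    // otherwise advances by the typical default increment of 1.
    public static List<int> AssignPositions(List<string[]> fieldValues, int gap)
    {
        var positions = new List<int>();
        int pos = -1;
        for (int i = 0; i < fieldValues.Count; i++)
        {
            if (i > 0) pos += gap;   // the gap between IndexableField instances
            foreach (var _ in fieldValues[i])
            {
                pos += 1;            // default token position increment of 1
                positions.Add(pos);
            }
        }
        return positions;
    }

    static void Main()
    {
        var values = new List<string[]>
        {
            new[] { "quick", "fox" },
            new[] { "lazy", "dog" }
        };
        // Gap 0: positions run on consecutively, so the phrase "fox lazy"
        // would match exactly across the value boundary.
        Console.WriteLine(string.Join(",", AssignPositions(values, 0)));   // 0,1,2,3
        // Gap 100: a hole separates the two values, preventing such matches.
        Console.WriteLine(string.Join(",", AssignPositions(values, 100))); // 0,1,102,103
    }
}
```

With the default gap of 0, a PhraseQuery cannot tell where one field value ends and the next begins; returning a large gap from GetPositionIncrementGap is how an analyzer opts out of that behavior.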
public InitReader ( string fieldName, TextReader reader ) : TextReader
fieldName | string | IndexableField name being indexed
reader | TextReader | original Reader
Return | TextReader
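The CharFilter idea behind InitReader can be sketched with plain System.IO types. `StripCharFilter` below is an illustrative stand-in, not a Lucene.Net class: it wraps the incoming TextReader and drops a chosen character (here the soft hyphen) before tokenization would ever see it, which is the kind of chain an InitReader override would install.

```csharp
using System;
using System.IO;

// Illustrative stand-in for a CharFilter: a TextReader that silently
// drops one chosen character from the underlying stream.
class StripCharFilter : TextReader
{
    private readonly TextReader input;
    private readonly char toStrip;

    public StripCharFilter(TextReader input, char toStrip)
    {
        this.input = input;
        this.toStrip = toStrip;
    }

    public override int Read()
    {
        int c;
        // Skip every occurrence of the filtered character.
        while ((c = input.Read()) == toStrip) { }
        return c;
    }
}

class Program
{
    // Shaped like an InitReader override: wrap the original reader in a
    // filter chain and return the outermost reader.
    static TextReader InitReader(string fieldName, TextReader reader)
        => new StripCharFilter(reader, '\u00AD'); // strip soft hyphens

    static void Main()
    {
        var raw = new StringReader("co\u00ADop\u00ADer\u00ADate");
        var filtered = InitReader("body", raw);
        Console.WriteLine(filtered.ReadToEnd()); // cooperate
    }
}
```

Because the filtering happens at the character level, every Tokenizer and TokenFilter downstream sees only the cleaned text.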
public TokenStream ( string fieldName, TextReader reader ) : TokenStream
fieldName | string | the name of the field the created TokenStream is used for
reader | TextReader | the reader the streams source reads from
Return | TokenStream
public static ReuseStrategy GLOBAL_REUSE_STRATEGY
Return | ReuseStrategy