C# (CSharp) org.apache.lucene.analysis.ngram Namespace

Classes

Name Description
EdgeNGramTokenizer Tokenizes the input from an edge into n-grams of given size(s).

This Tokenizer creates n-grams from the beginning edge or ending edge of an input token.

The behavior of this tokenizer changed in Lucene 4.4. Although highly discouraged, it is still possible to use the old behavior through Lucene43EdgeNGramTokenizer.
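The front-edge n-gram behavior can be sketched in a few lines of Python. This is an illustrative stand-in for what the tokenizer emits, not the Lucene.Net API:

```python
def edge_ngrams(token, min_gram, max_gram):
    """Emit n-grams anchored at the beginning edge of the token,
    from min_gram up to max_gram characters (capped at the token length)."""
    return [token[:n] for n in range(min_gram, min(max_gram, len(token)) + 1)]

print(edge_ngrams("apache", 1, 3))  # → ['a', 'ap', 'apa']
```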

Lucene43EdgeNGramTokenizer Old version of EdgeNGramTokenizer, kept only to reproduce the pre-4.4 behavior.
Lucene43NGramTokenizer Old version of NGramTokenizer, kept only to reproduce the pre-4.4 behavior.
NGramFilterFactory Factory for NGramTokenFilter.
 <fieldType name="text_ngrm" class="solr.TextField" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
     <filter class="solr.NGramFilterFactory" minGramSize="1" maxGramSize="2"/>
   </analyzer>
 </fieldType>
NGramTokenFilter Tokenizes the input into n-grams of the given size(s).

You must specify the required Version compatibility when creating an NGramTokenFilter. As of Lucene 4.4, this token filter:

  • handles supplementary characters correctly,
  • emits all n-grams for the same token at the same position,
  • does not modify offsets,
  • sorts n-grams by their offset in the original token first, then increasing length (meaning that "abc" will give "a", "ab", "abc", "b", "bc", "c").

You can make this filter use the old behavior by providing a version older than Version#LUCENE_44 in the constructor, but this is not recommended: it will lead to broken TokenStreams that cause highlighting bugs.

If you were using this TokenFilter to perform partial highlighting, this will no longer work since the filter doesn't update offsets. You should modify your analysis chain to use NGramTokenizer and, if pre-tokenization is required, override NGramTokenizer#isTokenChar(int).
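The emission order described above (by offset within the original token first, then by increasing length) can be sketched as follows. This models only the filter's output order, not the actual Lucene.Net implementation; all grams of one input token share the same position, and the token's original offsets are left untouched:

```python
def ngram_filter_terms(token, min_gram, max_gram):
    """Return n-grams sorted by start offset first, then by increasing
    length, matching the Lucene 4.4 NGramTokenFilter ordering."""
    grams = []
    for start in range(len(token)):
        for n in range(min_gram, max_gram + 1):
            if start + n <= len(token):
                grams.append(token[start:start + n])
    return grams

print(ngram_filter_terms("abc", 1, 3))  # → ['a', 'ab', 'abc', 'b', 'bc', 'c']
```

The output reproduces the "abc" example from the list above.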

NGramTokenFilter.PositionIncrementAttributeAnonymousInnerClassHelper
NGramTokenFilter.PositionLengthAttributeAnonymousInnerClassHelper
NGramTokenizer Tokenizes the input into n-grams of the given size(s).

In contrast to NGramTokenFilter, this class sets offsets so that the characters between startOffset and endOffset in the original stream are the same as the term's chars.

For example, "abcde" would be tokenized as (minGram=2, maxGram=3):

Term                 ab      abc     bc      bcd     cd      cde     de
Position increment   1       1       1       1       1       1       1
Position length      1       1       1       1       1       1       1
Offsets              [0,2[   [0,3[   [1,3[   [1,4[   [2,4[   [2,5[   [3,5[
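The table above can be reproduced with a short sketch of the post-4.4 emission order (increasing start offset). This is a hypothetical helper for illustration, not the Lucene.Net API:

```python
def ngram_tokens(text, min_gram, max_gram):
    """Emit (term, start, end) triples by increasing start offset, so the
    characters between start and end in the input equal the term chars."""
    tokens = []
    for start in range(len(text)):
        for n in range(min_gram, max_gram + 1):
            end = start + n
            if end <= len(text):
                tokens.append((text[start:end], start, end))
    return tokens

for term, start, end in ngram_tokens("abcde", 2, 3):
    print(term, f"[{start},{end}[")
```

Running this prints the seven terms and offsets from the table, in the same order.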

This tokenizer changed a lot in Lucene 4.4 in order to:

  • tokenize in a streaming fashion to support streams which are larger than 1024 chars (limit of the previous version),
  • count grams based on unicode code points instead of java chars (and never split in the middle of surrogate pairs),
  • give the ability to pre-tokenize the stream before computing n-grams by overriding #isTokenChar(int).

Additionally, this class doesn't trim trailing whitespace and emits tokens in a different order: tokens are now emitted by increasing start offset, whereas they used to be emitted by increasing length (which prevented supporting large input streams).

Although highly discouraged, it is still possible to use the old behavior through Lucene43NGramTokenizer.

NGramTokenizerFactory Factory for NGramTokenizer.
 <fieldType name="text_ngrm" class="solr.TextField" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.NGramTokenizerFactory" minGramSize="1" maxGramSize="2"/>
   </analyzer>
 </fieldType>