C# Class Lucene.Net.Search.Highlight.TokenSources

Hides implementation issues associated with obtaining a TokenStream for use with the highlighter: the stream can be obtained either from TermFreqVectors stored with offsets and (optionally) positions, or from an Analyzer re-parsing the stored content.
Project: synhershko/lucene.net

Public Methods

Method and Description
GetAnyTokenStream ( IndexReader reader, int docId, String field, Analyzer analyzer ) : TokenStream

A convenience method that tries a number of approaches to get a token stream. The cost of discovering that there are no term vectors in the index is minimal (1000 invocations still register 0 ms), so this "lazy" (flexible) approach to coding is probably acceptable.

GetAnyTokenStream ( IndexReader reader, int docId, String field, Document doc, Analyzer analyzer ) : TokenStream

A convenience method that first tries to get a TermPositionVector for the specified docId and, failing that, falls back to using the passed-in Document to retrieve the TokenStream. This is useful when you already have the document but would prefer to use the vector first.

GetTokenStream ( Document doc, String field, Analyzer analyzer ) : TokenStream
GetTokenStream ( IndexReader reader, int docId, String field, Analyzer analyzer ) : TokenStream
GetTokenStream ( IndexReader reader, int docId, System field ) : TokenStream
GetTokenStream ( String field, String contents, Analyzer analyzer ) : TokenStream
GetTokenStream ( TermPositionVector tpv ) : TokenStream
GetTokenStream ( TermPositionVector tpv, bool tokenPositionsGuaranteedContiguous ) : TokenStream

Low-level API. Returns a token stream, or null if no offset info is available in the index. This can be used to feed the highlighter with a pre-parsed token stream. In my tests, the speeds to recreate 1000 token streams using this method are:

- with TermVector offset-only data stored: 420 milliseconds
- with TermVector offset AND position data stored: 271 milliseconds (NB: timings for TermVector with position data are based on a tokenizer with contiguous positions, i.e. no overlaps or gaps)

The cost of not using TermPositionVector to store pre-parsed content, and instead using an analyzer to re-parse the original content:

- re-analyzing the original content: 980 milliseconds

The re-analyze timings will typically vary depending on:

1) The complexity of the analyzer code (timings above were using a stemmer/lowercaser/stopword combo)
2) The number of other fields (Lucene reads ALL fields off the disk when accessing just one document field, which can cost dearly!)
3) Use of compression on field storage, which could be faster due to compression (less disk IO) or slower (more CPU burn) depending on the content.

Method Details

GetAnyTokenStream() public static method

A convenience method that tries a number of approaches to get a token stream. The cost of discovering that there are no term vectors in the index is minimal (1000 invocations still register 0 ms), so this "lazy" (flexible) approach to coding is probably acceptable.
public static GetAnyTokenStream ( IndexReader reader, int docId, String field, Analyzer analyzer ) : TokenStream
reader IndexReader
docId int
field String
analyzer Analyzer
Returns TokenStream
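A minimal usage sketch, assuming a Lucene.Net 3.x index in an "index" directory with a stored field named "content" (both names are illustrative, not part of the API):

```csharp
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Search.Highlight;
using Lucene.Net.Store;
using Version = Lucene.Net.Util.Version;

class HighlightExample
{
    static void Main()
    {
        // Open the index read-only; TokenSources picks the cheapest source:
        // term vectors if they were stored, otherwise re-analysis of the
        // stored field with the supplied analyzer.
        using (IndexReader reader = IndexReader.Open(
                   FSDirectory.Open(new DirectoryInfo("index")), true))
        {
            Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_30);
            TokenStream ts = TokenSources.GetAnyTokenStream(reader, 0, "content", analyzer);
        }
    }
}
```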

GetAnyTokenStream() public static method

A convenience method that first tries to get a TermPositionVector for the specified docId and, failing that, falls back to using the passed-in Document to retrieve the TokenStream. This is useful when you already have the document but would prefer to use the vector first.
Throws: if there was an error loading.
public static GetAnyTokenStream ( IndexReader reader, int docId, String field, Document doc, Analyzer analyzer ) : TokenStream
reader IndexReader The IndexReader to use to try to get the vector from
docId int The docId to retrieve
field String The field to retrieve on the document
doc Document The document to fall back on
analyzer Analyzer The analyzer to use for creating the TokenStream if the vector doesn't exist
Returns TokenStream
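A sketch of this fallback overload; `reader`, `docId`, and `analyzer` are assumed to be an open IndexReader, a valid document id, and an Analyzer (hypothetical setup), and "content" is an illustrative field name:

```csharp
// The document is fetched once and handed to TokenSources: the term vector
// is still preferred, but if re-analysis is needed, the stored content comes
// from this Document rather than being fetched from disk a second time.
Document doc = reader.Document(docId);
TokenStream ts = TokenSources.GetAnyTokenStream(reader, docId, "content", doc, analyzer);
```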

GetTokenStream() public static method

public static GetTokenStream ( Document doc, String field, Analyzer analyzer ) : TokenStream
doc Document
field String
analyzer Analyzer
Returns TokenStream

GetTokenStream() public static method

public static GetTokenStream ( IndexReader reader, int docId, String field, Analyzer analyzer ) : TokenStream
reader IndexReader
docId int
field String
analyzer Analyzer
Returns TokenStream

GetTokenStream() public static method

public static GetTokenStream ( IndexReader reader, int docId, System field ) : TokenStream
reader IndexReader
docId int
field System
Returns TokenStream

GetTokenStream() public static method

public static GetTokenStream ( String field, String contents, Analyzer analyzer ) : TokenStream
field String
contents String
analyzer Analyzer
Returns TokenStream
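A sketch for this overload, which analyzes raw text without touching an index; useful when the content is held outside Lucene (the field name and text are illustrative):

```csharp
// No IndexReader involved: the contents string is tokenized on the spot
// by the given analyzer.
Analyzer analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
TokenStream ts = TokenSources.GetTokenStream("content", "text to be highlighted", analyzer);
```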

GetTokenStream() public static method

public static GetTokenStream ( TermPositionVector tpv ) : TokenStream
tpv TermPositionVector
Returns TokenStream

GetTokenStream() public static method

Low-level API. Returns a token stream, or null if no offset info is available in the index. This can be used to feed the highlighter with a pre-parsed token stream. In my tests, the speeds to recreate 1000 token streams using this method are:

- with TermVector offset-only data stored: 420 milliseconds
- with TermVector offset AND position data stored: 271 milliseconds (NB: timings for TermVector with position data are based on a tokenizer with contiguous positions, i.e. no overlaps or gaps)

The cost of not using TermPositionVector to store pre-parsed content, and instead using an analyzer to re-parse the original content:

- re-analyzing the original content: 980 milliseconds

The re-analyze timings will typically vary depending on:

1) The complexity of the analyzer code (timings above were using a stemmer/lowercaser/stopword combo)
2) The number of other fields (Lucene reads ALL fields off the disk when accessing just one document field, which can cost dearly!)
3) Use of compression on field storage, which could be faster due to compression (less disk IO) or slower (more CPU burn) depending on the content.
public static GetTokenStream ( TermPositionVector tpv, bool tokenPositionsGuaranteedContiguous ) : TokenStream
tpv TermPositionVector
tokenPositionsGuaranteedContiguous bool True if the token position numbers have no overlaps or gaps. If looking to eke out the last drops of performance, set to true. If in doubt, set to false.
Returns TokenStream
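A sketch of the low-level path, assuming an open IndexReader `reader`, a valid `docId`, and a "content" field indexed with term vectors that include offsets (the GetTermFreqVector call and the TermPositionVector cast follow the Lucene.Net 3.x API; the names are illustrative):

```csharp
// GetTermFreqVector returns null if no term vector was stored for the field;
// the 'as' cast yields null if position data was not stored alongside it.
var tpv = reader.GetTermFreqVector(docId, "content") as TermPositionVector;
if (tpv != null)
{
    // Pass true only if the tokenizer guarantees contiguous positions
    // (no overlaps or gaps); when in doubt, false is the safe choice.
    TokenStream ts = TokenSources.GetTokenStream(tpv, false);
}
```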