C# Class Lucene.Net.Index.LiveIndexWriterConfig

Holds all the configuration used by IndexWriter, with a few setters for settings that can be changed "live" on an IndexWriter instance. Since: 4.0

Protected Properties

Property Type Description
Commit IndexCommit
MatchVersion Version
PerThreadHardLimitMB int
checkIntegrityAtMerge bool
codec Codec
delPolicy IndexDeletionPolicy
flushPolicy Lucene.Net.Index.FlushPolicy
indexerThreadPool Lucene.Net.Index.DocumentsWriterPerThreadPool
indexingChain IndexingChain
infoStream Lucene.Net.Util.InfoStream
mergePolicy MergePolicy
mergeScheduler Lucene.Net.Index.MergeScheduler
openMode OpenMode_e?
readerPooling bool
similarity Similarity
useCompoundFile bool
writeLockTimeout long

Public Methods

Method Description
SetCheckIntegrityAtMerge ( bool checkIntegrityAtMerge ) : LiveIndexWriterConfig

Sets if IndexWriter should call AtomicReader#checkIntegrity() on existing segments before merging them into a new one.

Use true to enable this safety check, which can help reduce the risk of propagating index corruption from older segments into new ones, at the expense of slower merging.

SetMaxBufferedDeleteTerms ( int maxBufferedDeleteTerms ) : LiveIndexWriterConfig

Determines the maximum number of delete-by-term operations that will be buffered before both the buffered in-memory delete terms and queries are applied and flushed.

Disabled by default (writer flushes by RAM usage).

NOTE: this setting won't trigger a segment flush.

Takes effect immediately, but only the next time a document is added, updated or deleted. Also, if you only delete-by-query, this setting has no effect, i.e. delete queries are buffered until the next segment is flushed.

SetMaxBufferedDocs ( int maxBufferedDocs ) : LiveIndexWriterConfig

Determines the minimum number of documents required before the buffered in-memory documents are flushed as a new segment. Large values generally give faster indexing.

When this is set, the writer will flush every maxBufferedDocs added documents. Pass in IndexWriterConfig#DISABLE_AUTO_FLUSH to prevent triggering a flush due to number of buffered documents. Note that if flushing by RAM usage is also enabled, then the flush will be triggered by whichever comes first.

Disabled by default (writer flushes by RAM usage).

Takes effect immediately, but only the next time a document is added, updated or deleted.

SetMergedSegmentWarmer ( IndexReaderWarmer mergeSegmentWarmer ) : LiveIndexWriterConfig

Set the merged segment warmer. See IndexReaderWarmer.

Takes effect on the next merge.

SetRAMBufferSizeMB ( double ramBufferSizeMB ) : LiveIndexWriterConfig

Determines the amount of RAM that may be used for buffering added documents and deletions before they are flushed to the Directory. Generally for faster indexing performance it's best to flush by RAM usage instead of document count and use as large a RAM buffer as you can.

When this is set, the writer will flush whenever buffered documents and deletions use this much RAM. Pass in IndexWriterConfig#DISABLE_AUTO_FLUSH to prevent triggering a flush due to RAM usage. Note that if flushing by document count is also enabled, then the flush will be triggered by whichever comes first.

The maximum RAM limit is inherently determined by the runtime's available memory. Yet, an IndexWriter session can consume significantly more memory than the given RAM limit, since this limit is just an indicator of when to flush memory-resident documents to the Directory. Flushes are likely to happen concurrently while other threads are adding documents to the writer. For application stability, the available memory should be significantly larger than the RAM buffer used for indexing.

NOTE: the accounting of RAM usage for pending deletions is only approximate. Specifically, if you delete by Query, Lucene currently has no way to measure the RAM usage of individual Queries, so the accounting will underestimate; you should compensate by either calling commit() periodically yourself, or by using #setMaxBufferedDeleteTerms(int) to flush and apply buffered deletes by count instead of RAM usage (for each buffered delete Query, a constant number of bytes is used to estimate RAM usage). Note that enabling #setMaxBufferedDeleteTerms(int) will not trigger any segment flushes.

NOTE: It's not guaranteed that all memory-resident documents are flushed once this limit is exceeded. Depending on the configured FlushPolicy, only a subset of the buffered documents are flushed, and therefore only part of the RAM buffer is released.

The default value is IndexWriterConfig#DEFAULT_RAM_BUFFER_SIZE_MB.

Takes effect immediately, but only the next time a document is added, updated or deleted.

SetReaderTermsIndexDivisor ( int divisor ) : LiveIndexWriterConfig

Sets the termsIndexDivisor passed to any readers that IndexWriter opens, for example when applying deletes or creating a near-real-time reader in DirectoryReader#open(IndexWriter, boolean). If you pass -1, the terms index won't be loaded by the readers. This is only useful in advanced situations when you will only .Next() through all terms; attempts to seek will hit an exception.

Takes effect immediately, but only applies to readers opened after this call.

NOTE: divisor settings > 1 do not apply to all PostingsFormat implementations, including the default one in this release. It only makes sense for terms indexes that can efficiently re-sample terms at load time.

SetTermIndexInterval ( int interval ) : LiveIndexWriterConfig

Expert: set the interval between indexed terms. Large values cause less memory to be used by IndexReader, but slow random-access to terms. Small values cause more memory to be used by an IndexReader, and speed random-access to terms.

This parameter determines the amount of computation required per query term, regardless of the number of documents that contain that term. In particular, it is the maximum number of other terms that must be scanned before a term is located and its frequency and position information may be processed. In a large index with user-entered query terms, query processing time is likely to be dominated not by term lookup but rather by the processing of frequency and positional data. In a small index, or when many uncommon query terms are generated (e.g., by wildcard queries), term lookup may become a dominant cost.

In particular, numUniqueTerms/interval terms are read into memory by an IndexReader, and, on average, interval/2 terms must be scanned for each random term access.

Takes effect immediately, but only applies to newly flushed/merged segments.

NOTE: this parameter does not apply to all PostingsFormat implementations, including the default one in this release. It only makes sense for term indexes that are implemented as a fixed gap between terms. For example, Lucene41PostingsFormat instead implements the term index based upon how terms share prefixes. To configure its parameters (the minimum and maximum size for a block), you would instead use Lucene41PostingsFormat#Lucene41PostingsFormat(int, int), which can also be configured on a per-field basis:

    // Customize Lucene41PostingsFormat, passing minBlockSize=50, maxBlockSize=100.
    // C# sketch: subclass the codec to route one field to the tweaked postings
    // format (the Java original used an anonymous Lucene45Codec subclass).
    sealed class TweakedCodec : Lucene45Codec
    {
        private readonly PostingsFormat tweakedPostings = new Lucene41PostingsFormat(50, 100);

        public override PostingsFormat GetPostingsFormatForField(string field)
        {
            return field == "fieldWithTonsOfTerms"
                ? tweakedPostings
                : base.GetPostingsFormatForField(field);
        }
    }

    // ...
    iwc.SetCodec(new TweakedCodec());
Note that other implementations may have their own parameters, or no parameters at all.

SetUseCompoundFile ( bool useCompoundFile ) : LiveIndexWriterConfig

Sets if the IndexWriter should pack newly written segments in a compound file. Default is true.

Use false for batch indexing with very large ram buffer settings.

Note: To control compound file usage during segment merges, see MergePolicy#setNoCFSRatio(double) and MergePolicy#setMaxCFSSegmentSizeMB(double). This setting only applies to newly created segments.

ToString ( ) : string

Private Methods

Method Description
LiveIndexWriterConfig ( Analyzer analyzer, Version matchVersion )
LiveIndexWriterConfig ( IndexWriterConfig config )

Creates a new config that handles the live IndexWriter settings.

Method Details

SetCheckIntegrityAtMerge() public method

Sets if IndexWriter should call AtomicReader#checkIntegrity() on existing segments before merging them into a new one.

Use true to enable this safety check, which can help reduce the risk of propagating index corruption from older segments into new ones, at the expense of slower merging.

public SetCheckIntegrityAtMerge ( bool checkIntegrityAtMerge ) : LiveIndexWriterConfig
checkIntegrityAtMerge bool
return LiveIndexWriterConfig
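
A minimal sketch of flipping this check on a live writer. This assumes an existing writer variable of type IndexWriter whose Config property exposes the LiveIndexWriterConfig; the names are illustrative:

```csharp
// Enable the pre-merge integrity check on a live writer: merges get
// slower, but corruption in older segments is caught before it
// propagates into newly merged ones.
writer.Config.SetCheckIntegrityAtMerge(true);
```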

SetMaxBufferedDeleteTerms() public method

Determines the maximum number of delete-by-term operations that will be buffered before both the buffered in-memory delete terms and queries are applied and flushed.

Disabled by default (writer flushes by RAM usage).

NOTE: this setting won't trigger a segment flush.

Takes effect immediately, but only the next time a document is added, updated or deleted. Also, if you only delete-by-query, this setting has no effect, i.e. delete queries are buffered until the next segment is flushed.

Throws an exception if maxBufferedDeleteTerms is enabled but smaller than 1.
public SetMaxBufferedDeleteTerms ( int maxBufferedDeleteTerms ) : LiveIndexWriterConfig
maxBufferedDeleteTerms int
return LiveIndexWriterConfig

SetMaxBufferedDocs() public method

Determines the minimum number of documents required before the buffered in-memory documents are flushed as a new segment. Large values generally give faster indexing.

When this is set, the writer will flush every maxBufferedDocs added documents. Pass in IndexWriterConfig#DISABLE_AUTO_FLUSH to prevent triggering a flush due to number of buffered documents. Note that if flushing by RAM usage is also enabled, then the flush will be triggered by whichever comes first.

Disabled by default (writer flushes by RAM usage).

Takes effect immediately, but only the next time a document is added, updated or deleted.

Throws an exception if maxBufferedDocs is enabled but smaller than 2, or if it disables maxBufferedDocs when ramBufferSize is already disabled.
public SetMaxBufferedDocs ( int maxBufferedDocs ) : LiveIndexWriterConfig
maxBufferedDocs int
return LiveIndexWriterConfig
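
A minimal sketch of flushing by document count only. Here config is an assumed IndexWriterConfig instance and the count of 1000 is illustrative; DISABLE_AUTO_FLUSH turns off the competing RAM-based trigger:

```csharp
// Flush a new segment every 1000 buffered documents, and turn off the
// RAM-based trigger so document count alone decides when to flush.
config.SetMaxBufferedDocs(1000);
config.SetRAMBufferSizeMB(IndexWriterConfig.DISABLE_AUTO_FLUSH);
```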

SetMergedSegmentWarmer() public method

Set the merged segment warmer. See IndexReaderWarmer.

Takes effect on the next merge.

public SetMergedSegmentWarmer ( IndexReaderWarmer mergeSegmentWarmer ) : LiveIndexWriterConfig
mergeSegmentWarmer IndexReaderWarmer
return LiveIndexWriterConfig

SetRAMBufferSizeMB() public method

Determines the amount of RAM that may be used for buffering added documents and deletions before they are flushed to the Directory. Generally for faster indexing performance it's best to flush by RAM usage instead of document count and use as large a RAM buffer as you can.

When this is set, the writer will flush whenever buffered documents and deletions use this much RAM. Pass in IndexWriterConfig#DISABLE_AUTO_FLUSH to prevent triggering a flush due to RAM usage. Note that if flushing by document count is also enabled, then the flush will be triggered by whichever comes first.

The maximum RAM limit is inherently determined by the runtime's available memory. Yet, an IndexWriter session can consume significantly more memory than the given RAM limit, since this limit is just an indicator of when to flush memory-resident documents to the Directory. Flushes are likely to happen concurrently while other threads are adding documents to the writer. For application stability, the available memory should be significantly larger than the RAM buffer used for indexing.

NOTE: the accounting of RAM usage for pending deletions is only approximate. Specifically, if you delete by Query, Lucene currently has no way to measure the RAM usage of individual Queries, so the accounting will underestimate; you should compensate by either calling commit() periodically yourself, or by using #setMaxBufferedDeleteTerms(int) to flush and apply buffered deletes by count instead of RAM usage (for each buffered delete Query, a constant number of bytes is used to estimate RAM usage). Note that enabling #setMaxBufferedDeleteTerms(int) will not trigger any segment flushes.

NOTE: It's not guaranteed that all memory-resident documents are flushed once this limit is exceeded. Depending on the configured FlushPolicy, only a subset of the buffered documents are flushed, and therefore only part of the RAM buffer is released.

The default value is IndexWriterConfig#DEFAULT_RAM_BUFFER_SIZE_MB.

Takes effect immediately, but only the next time a document is added, updated or deleted.

Throws an exception if ramBufferSize is enabled but non-positive, or if it disables ramBufferSize when maxBufferedDocs is already disabled.
public SetRAMBufferSizeMB ( double ramBufferSizeMB ) : LiveIndexWriterConfig
ramBufferSizeMB double
return LiveIndexWriterConfig
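
Conversely, a sketch of flushing by RAM usage only. Again config is an assumed IndexWriterConfig instance, and the 256 MB figure is illustrative:

```csharp
// Use a 256 MB RAM buffer and disable the document-count trigger;
// larger buffers generally index faster, within available runtime memory.
config.SetRAMBufferSizeMB(256.0);
config.SetMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH);
```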

SetReaderTermsIndexDivisor() public method

Sets the termsIndexDivisor passed to any readers that IndexWriter opens, for example when applying deletes or creating a near-real-time reader in DirectoryReader#open(IndexWriter, boolean). If you pass -1, the terms index won't be loaded by the readers. This is only useful in advanced situations when you will only .Next() through all terms; attempts to seek will hit an exception.

Takes effect immediately, but only applies to readers opened after this call.

NOTE: divisor settings > 1 do not apply to all PostingsFormat implementations, including the default one in this release. It only makes sense for terms indexes that can efficiently re-sample terms at load time.

public SetReaderTermsIndexDivisor ( int divisor ) : LiveIndexWriterConfig
divisor int
return LiveIndexWriterConfig
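
A sketch of the two modes described above, with config as an assumed IndexWriterConfig instance and illustrative divisor values:

```csharp
// Sample only every 4th indexed term into reader memory: a smaller
// terms index, at the cost of slower term seeks in readers the writer opens.
config.SetReaderTermsIndexDivisor(4);

// Or pass -1 to skip loading the terms index entirely; readers can then
// only enumerate terms sequentially, and any seek attempt throws.
// config.SetReaderTermsIndexDivisor(-1);
```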

SetTermIndexInterval() public method

Expert: set the interval between indexed terms. Large values cause less memory to be used by IndexReader, but slow random-access to terms. Small values cause more memory to be used by an IndexReader, and speed random-access to terms.

This parameter determines the amount of computation required per query term, regardless of the number of documents that contain that term. In particular, it is the maximum number of other terms that must be scanned before a term is located and its frequency and position information may be processed. In a large index with user-entered query terms, query processing time is likely to be dominated not by term lookup but rather by the processing of frequency and positional data. In a small index, or when many uncommon query terms are generated (e.g., by wildcard queries), term lookup may become a dominant cost.

In particular, numUniqueTerms/interval terms are read into memory by an IndexReader, and, on average, interval/2 terms must be scanned for each random term access.

Takes effect immediately, but only applies to newly flushed/merged segments.

NOTE: this parameter does not apply to all PostingsFormat implementations, including the default one in this release. It only makes sense for term indexes that are implemented as a fixed gap between terms. For example, Lucene41PostingsFormat instead implements the term index based upon how terms share prefixes. To configure its parameters (the minimum and maximum size for a block), you would instead use Lucene41PostingsFormat#Lucene41PostingsFormat(int, int), which can also be configured on a per-field basis:

    // Customize Lucene41PostingsFormat, passing minBlockSize=50, maxBlockSize=100.
    // C# sketch: subclass the codec to route one field to the tweaked postings
    // format (the Java original used an anonymous Lucene45Codec subclass).
    sealed class TweakedCodec : Lucene45Codec
    {
        private readonly PostingsFormat tweakedPostings = new Lucene41PostingsFormat(50, 100);

        public override PostingsFormat GetPostingsFormatForField(string field)
        {
            return field == "fieldWithTonsOfTerms"
                ? tweakedPostings
                : base.GetPostingsFormatForField(field);
        }
    }

    // ...
    iwc.SetCodec(new TweakedCodec());
Note that other implementations may have their own parameters, or no parameters at all.
public SetTermIndexInterval ( int interval ) : LiveIndexWriterConfig
interval int
return LiveIndexWriterConfig

SetUseCompoundFile() public method

Sets if the IndexWriter should pack newly written segments in a compound file. Default is true.

Use false for batch indexing with very large ram buffer settings.

Note: To control compound file usage during segment merges, see MergePolicy#setNoCFSRatio(double) and MergePolicy#setMaxCFSSegmentSizeMB(double). This setting only applies to newly created segments.

public SetUseCompoundFile ( bool useCompoundFile ) : LiveIndexWriterConfig
useCompoundFile bool
return LiveIndexWriterConfig
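
A sketch for the batch-indexing case mentioned above, with config as an assumed IndexWriterConfig instance:

```csharp
// Keep newly flushed segments as separate files instead of packing
// them into a compound file; useful for batch indexing with very
// large RAM buffers. Merged segments are governed by the MergePolicy.
config.SetUseCompoundFile(false);
```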

ToString() public method

public ToString ( ) : string
return string

Property Details

Commit protected property

IndexCommit that IndexWriter is opened on.
protected IndexCommit Commit
return IndexCommit

MatchVersion protected property

Version that IndexWriter should emulate.
protected Version MatchVersion
return Version

PerThreadHardLimitMB protected property

Hard upper bound on RAM usage for a single segment, after which the segment is forced to flush.
protected int PerThreadHardLimitMB
return int

checkIntegrityAtMerge protected property

True if merging should check the integrity of segments before merge.
protected bool checkIntegrityAtMerge
return bool

codec protected property

Codec used to write new segments.
protected Codec codec
return Codec

delPolicy protected property

IndexDeletionPolicy controlling when commit points are deleted.
protected IndexDeletionPolicy delPolicy
return IndexDeletionPolicy

flushPolicy protected property

FlushPolicy to control when segments are flushed.
protected FlushPolicy flushPolicy
return Lucene.Net.Index.FlushPolicy

indexerThreadPool protected property

DocumentsWriterPerThreadPool to control how threads are allocated to DocumentsWriterPerThread.
protected DocumentsWriterPerThreadPool indexerThreadPool
return Lucene.Net.Index.DocumentsWriterPerThreadPool

indexingChain protected property

IndexingChain that determines how documents are indexed.
protected IndexingChain indexingChain
return IndexingChain

infoStream protected property

InfoStream for debugging messages.
protected InfoStream infoStream
return Lucene.Net.Util.InfoStream

mergePolicy protected property

MergePolicy for selecting merges.
protected MergePolicy mergePolicy
return MergePolicy

mergeScheduler protected property

MergeScheduler to use for running merges.
protected MergeScheduler mergeScheduler
return Lucene.Net.Index.MergeScheduler

openMode protected property

OpenMode that IndexWriter is opened with.
protected OpenMode_e? openMode
return OpenMode_e?

readerPooling protected property

True if readers should be pooled.
protected bool readerPooling
return bool

similarity protected property

Similarity to use when encoding norms.
protected Similarity similarity
return Similarity

useCompoundFile protected property

True if segment flushes should use compound file format.
protected bool useCompoundFile
return bool

writeLockTimeout protected property

Timeout when trying to obtain the write lock on init.
protected long writeLockTimeout
return long