Namespace Lucene.Net.Index
Classes
AbstractAllTermDocs
Base class for enumerating all but deleted docs.
NOTE: this class is meant only to be used internally by Lucene; it's only public so it can be shared across packages. This means the API is freely subject to change, and the class could be removed entirely, in any Lucene release. Use directly at your own risk!
ByteBlockPool
ByteBlockPool.Allocator
ByteSliceReader
ByteSliceWriter
Class to write byte streams into slices of shared byte[]. This is used by DocumentsWriter to hold the posting list for many terms in RAM.
CheckIndex
Basic tool and API to check the health of an index and write a new segments file that removes reference to problematic segments.
As this tool checks every byte in the index, on a large index it can take quite a long time to run.
WARNING: this tool and API are new and experimental and subject to sudden change in the next release. Please make a complete backup of your index before using this to fix your index!
CheckIndex.Status
Returned from CheckIndex_Renamed_Method() detailing the health and status of the index.
WARNING: this API is new and experimental and subject to sudden change in the next release.
CheckIndex.Status.FieldNormStatus
Status from testing field norms.
CheckIndex.Status.SegmentInfoStatus
Holds the status of each segment in the index. See SegmentInfos.
WARNING: this API is new and experimental and subject to sudden change in the next release.
CheckIndex.Status.StoredFieldStatus
Status from testing stored fields.
CheckIndex.Status.TermIndexStatus
Status from testing term index.
CheckIndex.Status.TermVectorStatus
Status from testing term vectors.
CompoundFileReader
Class for accessing a compound stream. This class implements a directory, but is limited to only read operations. Directory methods that would normally modify data throw an exception.
CompoundFileReader.CSIndexInput
Implementation of an IndexInput that reads from a portion of the compound file. The visibility is left as "package" only because this helps with testing since JUnit test cases in a different class can then access package fields of this class.
CompoundFileWriter
Combines multiple files into a single compound file.
The file format:
The fileCount integer indicates how many files are contained in this compound file. The directory that follows has that many entries. Each directory entry contains a long pointer to the start of this file's data section and a String with that file's name.
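For orientation, a sketch of that layout (illustrative only, not a byte-accurate specification of the .cfs format):

```csharp
// Compound file layout as described above (illustrative, not byte-accurate):
//
//   fileCount                      -- number of sub-files in this compound file
//   directory[fileCount]:
//       long   dataOffset          -- pointer to the start of this file's data section
//       String fileName            -- the sub-file's name
//   fileData[fileCount]            -- the concatenated contents of each sub-file
```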
ConcurrentMergeScheduler
A MergeScheduler that runs each merge using a separate thread, up to a maximum number of threads (MaxThreadCount). When that many merges are already running and another is needed, the thread(s) that are updating the index will pause until one or more merges complete. This is a simple way to use concurrency in the indexing process without having to create and manage application-level threads.
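A hedged sketch of installing a scheduler on a writer; SetMergeScheduler is the assumed Lucene.Net 3.x entry point, and ConcurrentMergeScheduler is already the default, so a call like this is only needed to replace or tune it:

```csharp
using Lucene.Net.Index;

public static class MergeSchedulerExample
{
    // Replace the default ConcurrentMergeScheduler with SerialMergeScheduler,
    // which runs each merge on the indexing thread instead of in the background.
    public static void UseSerialMerges(IndexWriter writer)
    {
        writer.SetMergeScheduler(new SerialMergeScheduler());
    }
}
```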
ConcurrentMergeScheduler.MergeThread
CorruptIndexException
This exception is thrown when Lucene detects an inconsistency in the index.
DirectoryReader
An IndexReader which reads indexes with multiple segments.
DocumentsWriter
This class accepts multiple added documents and directly writes a single segment file. It does this more efficiently than creating a single segment per document (with DocumentWriter) and doing standard merges on those segments.
Each added document is passed to the Lucene.Net.Index.DocConsumer, which in turn processes the document and interacts with other consumers in the indexing chain. Certain consumers, like Lucene.Net.Index.StoredFieldsWriter and Lucene.Net.Index.TermVectorsTermsWriter, digest a document and immediately write bytes to the "doc store" files (ie, they do not consume RAM per document, except while they are processing the document).
Other consumers, eg Lucene.Net.Index.FreqProxTermsWriter and Lucene.Net.Index.NormsWriter, buffer bytes in RAM and flush only when a new segment is produced. Once we have used our allowed RAM buffer, or the number of added docs is large enough (in the case we are flushing by doc count instead of RAM usage), we create a real segment and flush it to the Directory.
Threads:
Multiple threads are allowed into addDocument at once. There is an initial synchronized call to getThreadState which allocates a ThreadState for this thread. The same thread will get the same ThreadState over time (thread affinity) so that if there are consistent patterns (for example each thread is indexing a different content source) then we make better use of RAM. Then processDocument is called on that ThreadState without synchronization (most of the "heavy lifting" is in this call). Finally the synchronized "finishDocument" is called to flush changes to the directory.
When flush is called by IndexWriter we forcefully idle all threads and flush only once they are all idle. This means you can call flush with a given thread even while other threads are actively adding/deleting documents.
Exceptions:
Because this class directly updates in-memory posting lists, and flushes stored fields and term vectors directly to files in the directory, there are certain limited times when an exception can corrupt this state. For example, a disk full while flushing stored fields leaves this file in a corrupt state. Or, an OOM exception while appending to the in-memory posting lists can corrupt that posting list. We call such exceptions "aborting exceptions". In these cases we must call abort() to discard all docs added since the last flush.
All other exceptions ("non-aborting exceptions") can still partially update the index structures. These updates are consistent, but, they represent only a part of the document seen up until the exception was hit. When this happens, we immediately mark the document as deleted so that the document is always atomically ("all or none") added to the index.
FieldInfo
FieldInfos
Access to the Fieldable Info file that describes document fields and whether or not they are indexed. Each segment has a separate Fieldable Info file. Objects of this class are thread-safe for multiple readers, but only one thread can be adding documents at a time, with no other reader or writer threads accessing this object.
FieldInvertState
This class tracks the number and position / offset parameters of terms being added to the index. The information collected in this class is also used to calculate the normalization factor for a field.
WARNING: This API is new and experimental, and may suddenly change.
FieldReaderException
FieldSortedTermVectorMapper
For each Field, store a sorted collection of TermVectorEntrys.
This is not thread-safe.
FieldsReader
Class responsible for access to stored document fields.
It uses the <segment>.fdt and <segment>.fdx files.
FilterIndexReader
A FilterIndexReader
contains another IndexReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality. The class
FilterIndexReader
itself simply implements all abstract methods
of IndexReader
with versions that pass all requests to the
contained index reader. Subclasses of FilterIndexReader
may
further override some of these methods and may also provide additional
methods and fields.
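A minimal pass-through subclass might look like the following sketch (the class name is hypothetical; every request is delegated to the wrapped reader until you override something):

```csharp
using Lucene.Net.Index;

// A do-nothing subclass: all requests pass through to the wrapped reader.
// Override individual members to transform data on the way through.
public class PassThroughReader : FilterIndexReader
{
    public PassThroughReader(IndexReader reader) : base(reader)
    {
    }
}
```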
FilterIndexReader.FilterTermDocs
Base class for filtering TermDocs implementations.
FilterIndexReader.FilterTermEnum
Base class for filtering TermEnum implementations.
FilterIndexReader.FilterTermPositions
Base class for filtering TermPositions() implementations.
IndexCommit
Expert: represents a single commit into an index as seen by the IndexDeletionPolicy or IndexReader.
Changes to the content of an index are made visible only after the writer who made that change commits by writing a new segments file (segments_N). This point in time, when the action of writing a new segments file to the directory is completed, is an index commit.
Each index commit point has a unique segments file associated with it. The segments file associated with a later index commit point would have a larger N.
WARNING: This API is new and experimental and may suddenly change.
IndexFileDeleter
This class keeps track of each SegmentInfos instance that is still "live", either because it corresponds to a segments_N file in the Directory (a "commit", i.e. a committed SegmentInfos) or because it's an in-memory SegmentInfos that a writer is actively updating but has not yet committed. This class uses simple reference counting to map the live SegmentInfos instances to individual files in the Directory.
The same directory file may be referenced by more than one IndexCommit, i.e. more than one SegmentInfos. Therefore we count how many commits reference each file. When all the commits referencing a certain file have been deleted, the refcount for that file becomes zero, and the file is deleted.
A separate deletion policy interface (IndexDeletionPolicy) is consulted on creation (onInit) and once per commit (onCommit), to decide when a commit should be removed.
It is the business of the IndexDeletionPolicy to choose when to delete commit points. The actual mechanics of file deletion, retrying, etc., derived from the deletion of commit points, are the business of the IndexFileDeleter.
The current default deletion policy is KeepOnlyLastCommitDeletionPolicy, which removes all prior commits when a new commit has completed. This matches the behavior before 2.2.
Note that you must hold the write.lock before instantiating this class. It opens segments_N file(s) directly with no retry logic.
IndexFileNameFilter
Filename filter that accepts only filenames and extensions created by Lucene.
IndexFileNames
Useful constants representing filenames and extensions used by Lucene.
IndexReader
IndexReader is an abstract class, providing an interface for accessing an index. Search of an index is done entirely through this abstract interface, so that any subclass which implements it is searchable.
Concrete subclasses of IndexReader are usually constructed with a call to one of the static open() methods, e.g. Open(Directory, Boolean).
For efficiency, in this API documents are often referred to via document numbers, non-negative integers which each name a unique document in the index. These document numbers are ephemeral--they may change as documents are added to and deleted from an index. Clients should thus not rely on a given document having the same number between sessions.
An IndexReader can be opened on a directory for which an IndexWriter is already open, but it cannot then be used to delete documents from the index.
NOTE: for backwards API compatibility, several methods are not listed as abstract, but have no useful implementations in this base class and instead always throw UnsupportedOperationException. Subclasses are strongly encouraged to override these methods, but in many cases may not need to.
NOTE: as of 2.4, it's possible to open a read-only IndexReader using the static open methods that accept the boolean readOnly parameter. Such a reader has better concurrency, as it's not necessary to synchronize on the isDeleted method. You must explicitly specify false if you want to make changes with the resulting IndexReader.
NOTE: IndexReader instances are completely thread safe, meaning multiple threads can call any of their methods, concurrently. If your application requires external synchronization, you should not synchronize on the IndexReader instance; use your own (non-Lucene) objects instead.
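A minimal sketch of opening a read-only reader, assuming a Lucene.Net 3.x-style API (the index path is hypothetical):

```csharp
using Lucene.Net.Index;
using Lucene.Net.Store;

// Open a read-only reader (readOnly = true, better concurrency as noted above).
// On some Lucene.Net versions Close() is replaced by Dispose().
Directory dir = FSDirectory.Open(new System.IO.DirectoryInfo("/path/to/index"));
IndexReader reader = IndexReader.Open(dir, true);
try
{
    int liveDocs = reader.NumDocs();  // document count, excluding deletions
}
finally
{
    reader.Close();
}
```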
IndexReader.FieldOption
Constants describing field properties, for example used for GetFieldNames(IndexReader.FieldOption).
IndexWriter
An IndexWriter
creates and maintains an index.
The create
argument to the
IndexWriter(Directory, Analyzer, Boolean, IndexWriter.MaxFieldLength) determines
whether a new index is created, or whether an existing index is
opened. Note that you can open an index with create=true
even while readers are using the index. The old readers will
continue to search the "point in time" snapshot they had opened,
and won't see the newly created index until they re-open. There are
also IndexWriter(Directory, Analyzer, IndexWriter.MaxFieldLength)
with no create
argument which will create a new index
if there is not already an index at the provided path and otherwise
open the existing index.
In either case, documents are added with AddDocument(Document) and removed with DeleteDocuments(Term) or DeleteDocuments(Query). A document can be updated with UpdateDocument(Term, Document) (which just deletes and then adds the entire document). When finished adding, deleting and updating documents, Close() should be called.
These changes are buffered in memory and periodically flushed to the Directory (during the above method calls). A flush is triggered when there are enough buffered deletes (see SetMaxBufferedDeleteTerms(Int32)) or enough added documents since the last flush, whichever is sooner. For the added documents, flushing is triggered either by RAM usage of the documents (see SetRAMBufferSizeMB(Double)) or the number of added documents. The default is to flush when RAM usage hits 16 MB. For best indexing speed you should flush by RAM usage with a large RAM buffer. Note that flushing just moves the internal buffered state in IndexWriter into the index, but these changes are not visible to IndexReader until either Commit() or Close() is called. A flush may also trigger one or more segment merges which by default run with a background thread so as not to block the addDocument calls (see below for changing the MergeScheduler).
If an index will not have more documents added for a while and optimal search performance is desired, then either the full Optimize() method or partial Optimize(Int32) method should be called before the index is closed.
Opening an IndexWriter creates a lock file for the directory in use. Trying to open another IndexWriter on the same directory will lead to a LockObtainFailedException. The LockObtainFailedException is also thrown if an IndexReader on the same directory is used to delete documents from the index.
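A minimal indexing round-trip, assuming Lucene.Net 3.x names (the path, field name, and version constant are assumptions to adjust for your setup):

```csharp
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;

// Create a new index, add one document, commit, and close.
Directory dir = FSDirectory.Open(new System.IO.DirectoryInfo("/path/to/index"));
Analyzer analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);

// create = true starts a new index even if one already exists at this location.
IndexWriter writer = new IndexWriter(dir, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);

Document doc = new Document();
doc.Add(new Field("title", "Hello Lucene", Field.Store.YES, Field.Index.ANALYZED));
writer.AddDocument(doc);

writer.Commit();   // make buffered changes visible to newly opened readers
writer.Close();    // flushes, commits, and releases the write lock
```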
IndexWriter.IndexReaderWarmer
If GetReader() has been called (ie, this writer is in near real-time mode), then after a merge completes, this class can be invoked to warm the reader on the newly merged segment, before the merge commits. This is not required for near real-time search, but will reduce search latency on opening a new near real-time reader after a merge completes.
NOTE: This API is experimental and might change in incompatible ways in the next release.
NOTE: warm is called before any deletes have been carried over to the merged segment.
IndexWriter.MaxFieldLength
Specifies maximum field length (in number of tokens/terms) in IndexWriter constructors. SetMaxFieldLength(Int32) overrides the value set by the constructor.
KeepOnlyLastCommitDeletionPolicy
This IndexDeletionPolicy implementation keeps only the most recent commit and immediately removes all prior commits after a new commit is done. This is the default deletion policy.
LogByteSizeMergePolicy
This is a LogMergePolicy that measures the size of a segment as the total byte size of the segment's files.
LogDocMergePolicy
This is a LogMergePolicy that measures the size of a segment as the number of documents (not taking deletions into account).
LogMergePolicy
This class implements a MergePolicy that tries to merge segments into levels of exponentially increasing size, where each level has fewer segments than the value of the merge factor. Whenever extra segments (beyond the merge factor upper bound) are encountered, all segments within the level are merged. You can get or set the merge factor via the MergeFactor property.
This class is abstract and requires a subclass to define the Size(SegmentInfo) method which specifies how a segment's size is determined. LogDocMergePolicy is one subclass that measures size by document count in the segment. LogByteSizeMergePolicy is another subclass that measures size as the total byte size of the file(s) for the segment.
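A hedged sketch of configuring one of these subclasses, assuming the Lucene 3.x constructor that takes the writer:

```csharp
using Lucene.Net.Index;

public static class MergePolicyExample
{
    // Measure segment size in bytes and merge once 10 same-level segments
    // accumulate. MergeFactor = 10 is the default; larger values favor
    // indexing speed over search speed.
    public static void ConfigureByteSizePolicy(IndexWriter writer)
    {
        LogByteSizeMergePolicy policy = new LogByteSizeMergePolicy(writer);
        policy.MergeFactor = 10;
        writer.SetMergePolicy(policy);
    }
}
```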
MergePolicy
Expert: a MergePolicy determines the sequence of primitive merge operations to be used for overall merge and optimize operations.
Whenever the segments in an index have been altered by IndexWriter, either the addition of a newly flushed segment, addition of many segments from addIndexes* calls, or a previous merge that may now need to cascade, IndexWriter invokes FindMerges(SegmentInfos) to give the MergePolicy a chance to pick merges that are now required. This method returns a MergePolicy.MergeSpecification instance describing the set of merges that should be done, or null if no merges are necessary. When IndexWriter.optimize is called, it calls FindMergesForOptimize(SegmentInfos, Int32, ISet<SegmentInfo>) and the MergePolicy should then return the necessary merges.
Note that the policy can return more than one merge at a time. In this case, if the writer is using SerialMergeScheduler, the merges will be run sequentially, but if it is using ConcurrentMergeScheduler they will be run concurrently.
The default MergePolicy is LogByteSizeMergePolicy.
NOTE: This API is new and still experimental (subject to change suddenly in the next release)
NOTE: This class typically requires access to package-private APIs (e.g. SegmentInfos) to do its job; if you implement your own MergePolicy, you'll need to put it in package Lucene.Net.Index in order to use these APIs.
MergePolicy.MergeAbortedException
MergePolicy.MergeException
Exception thrown if there are any problems while executing a merge.
MergePolicy.MergeSpecification
A MergeSpecification instance provides the information necessary to perform multiple merges. It simply contains a list of MergePolicy.OneMerge instances.
MergePolicy.OneMerge
OneMerge provides the information necessary to perform an individual primitive merge operation, resulting in a single new segment. The merge spec includes the subset of segments to be merged as well as whether the new segment should use the compound file format.
MergeScheduler
Expert: IndexWriter uses an instance implementing this interface to execute the merges selected by a MergePolicy. The default MergeScheduler is ConcurrentMergeScheduler.
NOTE: This API is new and still experimental (subject to change suddenly in the next release)
NOTE: This class typically requires access to package-private APIs (eg, SegmentInfos) to do its job; if you implement your own MergePolicy, you'll need to put it in package Lucene.Net.Index in order to use these APIs.
MultipleTermPositions
Allows you to iterate over the TermPositions for multiple Terms as a single TermPositions.
MultiReader
An IndexReader which reads multiple indexes, appending their content.
ParallelReader
An IndexReader which reads multiple, parallel indexes. Each index added must have the same number of documents, but typically each contains different fields. Each document contains the union of the fields of all documents with the same document number. When searching, matches for a query term are from the first index added that has the field.
This is useful, e.g., with collections that have large fields which change rarely and small fields that change more frequently. The smaller fields may be re-indexed in a new index and both indexes may be searched together.
Warning: It is up to you to make sure all indexes are created and modified the same way. For example, if you add documents to one index, you need to add the same documents in the same order to the other indexes. Failure to do so will result in undefined behavior.
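A sketch of the pattern described above, with hypothetical paths (both indexes must contain the same documents in the same order):

```csharp
using Lucene.Net.Index;
using Lucene.Net.Store;

// Combine a rarely re-indexed "big fields" index with a frequently re-indexed
// "small fields" index and search them as one.
Directory bigDir = FSDirectory.Open(new System.IO.DirectoryInfo("/path/to/big-fields"));
Directory smallDir = FSDirectory.Open(new System.IO.DirectoryInfo("/path/to/small-fields"));

ParallelReader pr = new ParallelReader();
pr.Add(IndexReader.Open(bigDir, true));
pr.Add(IndexReader.Open(smallDir, true));
// pr now presents each document as the union of its fields in both indexes.
```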
Payload
A Payload is metadata that can be stored together with each occurrence of a term. This metadata is stored inline in the posting list of the specific term.
To store payloads in the index a TokenStream has to be used that produces payload data.
Use PayloadLength and GetPayload(Byte[], Int32) to retrieve the payloads from the index.
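A hedged sketch of reading payloads back through TermPositions, using the PayloadLength and GetPayload members named above (field and term values are hypothetical; member forms vary slightly across Lucene.Net versions):

```csharp
using Lucene.Net.Index;

public static class PayloadExample
{
    // Read the payload stored with each occurrence of a term.
    public static void ReadPayloads(IndexReader reader)
    {
        TermPositions tp = reader.TermPositions(new Term("body", "apache"));
        while (tp.Next())
        {
            for (int i = 0; i < tp.Freq; i++)
            {
                int position = tp.NextPosition();
                if (tp.IsPayloadAvailable)
                {
                    byte[] payload = tp.GetPayload(new byte[tp.PayloadLength], 0);
                    // payload holds the metadata stored with this occurrence
                }
            }
        }
    }
}
```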
PositionBasedTermVectorMapper
For each Field, store position-by-position information. It ignores frequency information.
This is not thread-safe.
PositionBasedTermVectorMapper.TVPositionInfo
Container for a term at a position.
ReadOnlyDirectoryReader
ReadOnlySegmentReader
SegmentInfo
Information about a segment such as its name, directory, and files related to the segment.
NOTE: This API is new and still experimental (subject to change suddenly in the next release)
SegmentInfos
A collection of SegmentInfo objects with methods for operating on those segments in relation to the file system.
NOTE: This API is new and still experimental (subject to change suddenly in the next release)
SegmentInfos.FindSegmentsFile
Utility class for executing code that needs to do something with the current segments file. This is necessary with lock-less commits because from the time you locate the current segments file name, until you actually open it, read its contents, or check modified time, etc., it could have been deleted due to a writer commit finishing.
SegmentMerger
The SegmentMerger class combines two or more Segments, represented by an IndexReader (Add(IndexReader)), into a single Segment. After adding the appropriate readers, call the merge method to combine the segments.
If the compoundFile flag is set, then the segments will be merged into a compound file.
SegmentReader
NOTE: This API is new and still experimental (subject to change suddenly in the next release)
SegmentReader.CoreReaders
SegmentReader.Norm
Byte[] referencing is used because a new norm object needs to be created for each clone, and the byte array is all that is needed for sharing between cloned readers. The current norm referencing is for sharing between readers whereas the byte[] referencing is for copy on write which is independent of reader references (i.e. incRef, decRef).
SegmentReader.Ref
SerialMergeScheduler
A MergeScheduler that simply does each merge sequentially, using the current thread.
SnapshotDeletionPolicy
An IndexDeletionPolicy that wraps around any other IndexDeletionPolicy and adds the ability to hold and later release a single "snapshot" of an index. While the snapshot is held, the IndexWriter will not remove any files associated with it even if the index is otherwise being actively, arbitrarily changed. Because we wrap another arbitrary IndexDeletionPolicy, this gives you the freedom to continue using whatever IndexDeletionPolicy you would normally want to use with your index. Note that you can re-use a single instance of SnapshotDeletionPolicy across multiple writers as long as they are against the same index Directory. Any snapshot held when a writer is closed will "survive" when the next writer is opened.
WARNING: This API is new and experimental and may suddenly change.
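A hedged backup sketch, assuming the Lucene 3.x Snapshot()/Release() methods and the IndexWriter constructor that accepts a deletion policy:

```csharp
using Lucene.Net.Analysis;
using Lucene.Net.Index;
using Lucene.Net.Store;

public static class SnapshotExample
{
    // Copy the files of the current commit while a snapshot protects them.
    public static void Backup(Directory dir, Analyzer analyzer)
    {
        SnapshotDeletionPolicy sdp =
            new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
        IndexWriter writer =
            new IndexWriter(dir, analyzer, sdp, IndexWriter.MaxFieldLength.UNLIMITED);
        try
        {
            IndexCommit snapshot = sdp.Snapshot();  // commit files are now protected
            foreach (string fileName in snapshot.FileNames)
            {
                // copy fileName to the backup location
            }
        }
        finally
        {
            sdp.Release();   // allow the snapshotted files to be deleted again
            writer.Close();  // Dispose() on versions where Close() was replaced
        }
    }
}
```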
SortedTermVectorMapper
Store a sorted collection of TermVectorEntrys. Collects all term information into a single SortedSet.
NOTE: This Mapper ignores all Field information for the Document. This means that if you are using offset/positions you will not know what Fields they correlate with.
This is not thread-safe.
StaleReaderException
This exception is thrown when an IndexReader tries to make changes to the index (via DeleteDocument(Int32), UndeleteAll() or SetNorm(Int32, String, Byte)) but changes have already been committed to the index since this reader was instantiated. When this happens you must open a new reader on the current index to make the changes.
Term
A Term represents a word from text. This is the unit of search. It is composed of two elements: the text of the word, as a string, and the name of the field that the text occurred in, an interned string. Note that terms may represent not only words from text fields, but also things like dates, email addresses, urls, etc.
TermEnum
Abstract class for enumerating terms.
Term enumerations are always ordered by Term.compareTo(). Each term in the enumeration is greater than all that precede it.
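A minimal enumeration sketch, assuming Lucene.Net 3.x property forms (older versions expose Term(), Field(), and Text() as methods):

```csharp
using Lucene.Net.Index;

public static class TermEnumExample
{
    // Print every term in the index in sorted order.
    public static void DumpTerms(IndexReader reader)
    {
        TermEnum te = reader.Terms();
        while (te.Next())
        {
            Term t = te.Term;
            System.Console.WriteLine(t.Field + ":" + t.Text);
        }
        te.Close();  // Dispose() on versions where Close() was replaced
    }
}
```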
TermVectorEntry
Convenience class for holding TermVector information.
TermVectorEntryFreqSortedComparator
Compares TermVectorEntrys first by frequency and then by the term (case-sensitive).
TermVectorMapper
The TermVectorMapper can be used to map Term Vectors into your own structure instead of the parallel array structure used by GetTermFreqVector(Int32, String).
It is up to the implementation to make sure it is thread-safe.
Structs
TermVectorOffsetInfo
The TermVectorOffsetInfo class holds information pertaining to a Term in a TermPositionVector's offset information. This offset information is the character offset as set during the Analysis phase (and thus may not be the actual offset in the original content).
Interfaces
IndexDeletionPolicy
Expert: policy for deletion of stale IndexCommits.
Implement this interface, and pass it to one of the IndexWriter or IndexReader constructors, to customize when older IndexCommits are deleted from the index directory. The default deletion policy is KeepOnlyLastCommitDeletionPolicy, which always removes old commits as soon as a new commit is done (this matches the behavior before 2.2).
One expected use case for this (and the reason why it was first created) is to work around problems with an index directory accessed via filesystems like NFS because NFS does not provide the "delete on last close" semantics that Lucene's "point in time" search normally relies on. By implementing a custom deletion policy, such as "a commit is only removed once it has been stale for more than X minutes", you can give your readers time to refresh to the new commit before IndexWriter removes the old commits. Note that doing so will increase the storage requirements of the index. See LUCENE-710 for details.
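A sketch of the simplest possible custom policy, one that never deletes a commit; the generic OnInit/OnCommit signatures are an assumption from Lucene.Net 3.x and differ in older versions:

```csharp
using System.Collections.Generic;
using Lucene.Net.Index;

// A policy that never deletes any commit point, so readers on filesystems
// like NFS always find the files they need (at the cost of disk space).
public class KeepAllDeletionPolicy : IndexDeletionPolicy
{
    public void OnInit<T>(IList<T> commits) where T : IndexCommit
    {
        // never call Delete() on any commit
    }

    public void OnCommit<T>(IList<T> commits) where T : IndexCommit
    {
        // likewise: keep every commit point
    }
}
```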
ITermFreqVector
Provides access to the stored term vector of a document field. The vector consists of the name of the field, an array of the terms that occur in the field of the Document, and a parallel array of frequencies. Thus, getTermFrequencies()[5] corresponds with the frequency of getTerms()[5], assuming there are at least 6 terms in the Document.
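A minimal sketch of reading a term vector, assuming term vectors were stored at indexing time (docId and the field name are hypothetical):

```csharp
using Lucene.Net.Index;

public static class TermVectorExample
{
    // Read the stored term vector of one document's field; the two arrays
    // are parallel, as described above.
    public static void DumpTermVector(IndexReader reader, int docId)
    {
        ITermFreqVector tfv = reader.GetTermFreqVector(docId, "body");
        if (tfv == null) return;   // no term vector stored for this field

        string[] terms = tfv.GetTerms();
        int[] freqs = tfv.GetTermFrequencies();
        for (int i = 0; i < terms.Length; i++)
            System.Console.WriteLine(terms[i] + " x" + freqs[i]);
    }
}
```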
TermDocs
TermDocs provides an interface for enumerating <document, frequency> pairs for a term.
The document portion names each document containing the term. Documents are indicated by number. The frequency portion gives the number of times the term occurred in each document.
The pairs are ordered by document number.
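A minimal sketch of the enumeration idiom, with hypothetical field and term values (Doc and Freq are assumed to be properties, as in Lucene.Net 3.x):

```csharp
using Lucene.Net.Index;

public static class TermDocsExample
{
    // Enumerate the <document, frequency> pairs for one term, in document
    // number order.
    public static void DumpPostings(IndexReader reader)
    {
        TermDocs td = reader.TermDocs(new Term("body", "lucene"));
        while (td.Next())
        {
            System.Console.WriteLine("doc=" + td.Doc + " freq=" + td.Freq);
        }
        td.Close();  // Dispose() on versions where Close() was replaced
    }
}
```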
TermPositions
TermPositions provides an interface for enumerating the <document, frequency, <position>* > tuples for a term.
The document and frequency are the same as for a TermDocs. The positions portion lists the ordinal positions of each occurrence of a term in a document.
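The same idiom extended with positions, as a sketch under the same version assumptions:

```csharp
using Lucene.Net.Index;

public static class TermPositionsExample
{
    // Each matching document carries Freq ordinal positions, read one at a
    // time via NextPosition().
    public static void DumpPositions(IndexReader reader, Term term)
    {
        TermPositions tp = reader.TermPositions(term);
        while (tp.Next())
        {
            for (int i = 0; i < tp.Freq; i++)
            {
                int pos = tp.NextPosition();
                System.Console.WriteLine("doc=" + tp.Doc + " pos=" + pos);
            }
        }
    }
}
```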
TermPositionVector
Extends TermFreqVector
to provide additional information about
positions in which each of the terms is found. A TermPositionVector not necessarily
contains both positions and offsets, but at least one of these arrays exists.