Namespace Lucene.Net.Search
Classes
BooleanClause
A clause in a BooleanQuery.
BooleanQuery
A Query that matches documents matching boolean combinations of other queries, e.g. TermQuerys, PhraseQuerys or other BooleanQuerys.
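A minimal sketch of assembling such a combination (Lucene.Net 3.x-style API; the field and terms are hypothetical):
BooleanQuery bq = new BooleanQuery();
bq.Add(new TermQuery(new Term("body", "lucene")), Occur.MUST);        // required clause
bq.Add(new TermQuery(new Term("body", "search")), Occur.SHOULD);      // optional clause
bq.Add(new TermQuery(new Term("body", "deprecated")), Occur.MUST_NOT); // excluded clause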
BooleanQuery.BooleanWeight
Expert: the Weight for BooleanQuery, used to normalize, score and explain these queries.
NOTE: this API and implementation are subject to change suddenly in the next release.
BooleanQuery.TooManyClauses
Thrown when an attempt is made to add more than MaxClauseCount clauses. This typically happens if a PrefixQuery, FuzzyQuery, WildcardQuery, or TermRangeQuery is expanded to many terms during search.
BooleanScorer
CacheEntry
EXPERT: A unique Identifier/Description for each item in the FieldCache. Can be useful for logging/debugging.
EXPERIMENTAL API: This API is considered extremely advanced and experimental. It may be removed or altered w/o warning in future releases of Lucene.
CachingSpanFilter
Wraps another SpanFilter's result and caches it. The purpose is to allow filters to simply filter, and then wrap with this class to add caching.
CachingWrapperFilter
Wraps another filter's result and caches it. The purpose is to allow filters to simply filter, and then wrap with this class to add caching.
Collector
Expert: Collectors are primarily meant to be used to gather raw results from a search, and implement sorting or custom result filtering, collation, etc.
Lucene's core collectors are derived from Collector. Your application can likely use one of these classes, or subclass TopDocsCollector<T>, instead of implementing Collector directly.
Collector decouples the score from the collected doc: the score computation is skipped entirely if it's not needed. Collectors that do need the score should implement the SetScorer(Scorer) method, to hold onto the passed Scorer instance, and call Score() within the collect method to compute the current hit's score. If your collector may request the score for a single hit multiple times, you should use ScoreCachingWrappingScorer.
NOTE: The doc that is passed to the collect method is relative to the current reader. If your collector needs to resolve this to the docID space of the Multi*Reader, you must re-base it by recording the docBase from the most recent SetNextReader call. Here's a simple example showing how to collect docIDs into a BitArray:
Searcher searcher = new IndexSearcher(indexReader);
BitArray bits = new BitArray(indexReader.MaxDoc);
searcher.Search(query, new BitSetCollector(bits));

class BitSetCollector : Collector
{
    private readonly BitArray bits;
    private int docBase;
    public BitSetCollector(BitArray bits) { this.bits = bits; }
    // ignore scorer
    public override void SetScorer(Scorer scorer) { }
    // accept docs out of order (for a BitArray it doesn't matter)
    public override bool AcceptsDocsOutOfOrder { get { return true; } }
    public override void Collect(int doc) { bits.Set(doc + docBase, true); }
    public override void SetNextReader(IndexReader reader, int docBase) { this.docBase = docBase; }
}
Not all collectors will need to rebase the docID. For example, a collector that simply counts the total number of hits would skip it.
NOTE: Prior to 2.9, Lucene silently filtered out hits with score <= 0. As of 2.9, the core Collectors no longer do that. It's very unusual to have such hits (a negative query boost, or a function query returning negative custom scores, could cause it to happen). If you need that behavior, use PositiveScoresOnlyCollector.
NOTE: This API is experimental and might change in incompatible ways in the next release.
ComplexExplanation
Expert: Describes the score computation for document and query, and can distinguish a match independent of a positive value.
ConstantScoreQuery
A query that wraps a filter and simply returns a constant score equal to the query boost for every document in the filter.
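A minimal sketch (Lucene.Net 3.x-style API; the field name is hypothetical):
Query q = new ConstantScoreQuery(new PrefixFilter(new Term("category", "book")));
q.Boost = 2.0f; // every matching document receives a constant score of 2.0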
ConstantScoreQuery.ConstantScorer
ConstantScoreQuery.ConstantWeight
CreationPlaceholder
Expert: Maintains caches of term values.
Created: May 19, 2004 11:13:14 AM
DefaultSimilarity
Expert: Default scoring implementation.
DisjunctionMaxQuery
A query that generates the union of documents produced by its subqueries, and that scores each document with the maximum score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries. This is useful when searching for a word in multiple fields with different boost factors (so that the fields cannot be combined equivalently into a single search field). We want the primary score to be the one associated with the highest boost, not the sum of the field scores (as BooleanQuery would give). If the query is "albino elephant" this ensures that "albino" matching one field and "elephant" matching another gets a higher score than "albino" matching both fields. To get this result, use both BooleanQuery and DisjunctionMaxQuery: for each term a DisjunctionMaxQuery searches for it in each field, while the set of these DisjunctionMaxQuery's is combined into a BooleanQuery. The tie breaker capability allows results that include the same term in multiple fields to be judged better than results that include this term in only the best of those multiple fields, without confusing this with the better case of two different terms in the multiple fields.
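A hedged sketch of the "albino elephant" pattern described above (field names are illustrative; assumes the Lucene.Net 3.x API):
// One DisjunctionMaxQuery per term, searching both fields; 0.1f is the tie-breaker increment.
DisjunctionMaxQuery albino = new DisjunctionMaxQuery(0.1f);
albino.Add(new TermQuery(new Term("title", "albino")));
albino.Add(new TermQuery(new Term("body", "albino")));
DisjunctionMaxQuery elephant = new DisjunctionMaxQuery(0.1f);
elephant.Add(new TermQuery(new Term("title", "elephant")));
elephant.Add(new TermQuery(new Term("body", "elephant")));
// Combine the per-term disjunctions with a BooleanQuery.
BooleanQuery query = new BooleanQuery();
query.Add(albino, Occur.SHOULD);
query.Add(elephant, Occur.SHOULD);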
DisjunctionMaxQuery.DisjunctionMaxWeight
Expert: the Weight for DisjunctionMaxQuery, used to normalize, score and explain these queries.
NOTE: this API and implementation are subject to change suddenly in the next release.
DocIdSet
A DocIdSet contains a set of doc ids. Implementing classes must only implement Iterator() to provide access to the set.
DocIdSet.AnonymousClassDocIdSet
DocIdSet.AnonymousClassDocIdSet.AnonymousClassDocIdSetIterator
DocIdSetIterator
This abstract class defines methods to iterate over a set of non-decreasing doc ids. Note that this class assumes it iterates on doc Ids, and therefore NO_MORE_DOCS is set to Int32.MaxValue in order to be used as a sentinel object. Implementations of this class are expected to consider Int32.MaxValue as an invalid value.
Explanation
Expert: Describes the score computation for document and query.
Explanation.IDFExplanation
Small Util class used to pass both an idf factor as well as an explanation for that factor.
This class will likely be held on a Weight, so be aware before storing any large or un-serializable fields.
FieldCacheRangeFilter
A range filter built on top of a cached single term field (in FieldCache).
FieldCacheRangeFilter builds a single cache for the field the first time it is used. Each subsequent FieldCacheRangeFilter on the same field then reuses this cache, even if the range itself changes.
This means that FieldCacheRangeFilter is much faster (sometimes more than 100x as fast) than building a TermRangeFilter when using NewStringRange(String, String, String, Boolean, Boolean). However, if the range never changes, it is slower (around 2x as slow) than building a CachingWrapperFilter on top of a single TermRangeFilter.
For numeric data types, this filter may be significantly faster than NumericRangeFilter<T>. Furthermore, it does not need the numeric values encoded by NumericField. But it has the problem that it only works with exactly one value per document (see below).
As with all FieldCache based functionality, FieldCacheRangeFilter is only valid for fields which contain exactly one term for each document (except for NewStringRange(String, String, String, Boolean, Boolean), where 0 terms are also allowed). Due to a restriction of FieldCache, for numeric ranges all terms that do not have a numeric value are assumed to be 0.
Thus it works on dates, prices and other single value fields but will not work on regular text fields. It is preferable to use a NOT_ANALYZED field to ensure that there is only a single term.
This class does not have a constructor; use one of the static factory methods, which create a correct instance for the different data types supported by FieldCache.
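For illustration, a sketch using the static factories (field names are hypothetical; NewIntRange is assumed to follow the same shape as the NewStringRange factory named above):
// Numeric range over a single-valued int field; null bounds would make the range open-ended.
Filter priceFilter = FieldCacheRangeFilter.NewIntRange("price", 10, 100, true, true);
// String range over a single-valued keyword field.
Filter nameFilter = FieldCacheRangeFilter.NewStringRange("surname", "a", "m", true, false);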
FieldCacheRangeFilter<T>
FieldCacheTermsFilter
A Filter that only accepts documents whose single term value in the specified field is contained in the provided set of allowed terms.
This is the same functionality as TermsFilter (from contrib/queries), except this filter requires that the field contains only a single term for all documents. Because of drastically different implementations, they also have different performance characteristics, as described below.
The first invocation of this filter on a given field will be slower, since a StringIndex must be created. Subsequent invocations using the same field will re-use this cache. However, as with all functionality based on FieldCache, persistent RAM is consumed to hold the cache, and is not freed until the IndexReader is closed. In contrast, TermsFilter has no persistent RAM consumption.
With each search, this filter translates the specified set of Terms into a private Lucene.Net.Util.OpenBitSet keyed by term number per unique IndexReader (normally one reader per segment). Then, during matching, the term number for each docID is retrieved from the cache and then checked for inclusion using the Lucene.Net.Util.OpenBitSet. Since all testing is done using RAM resident data structures, performance should be very fast, most likely fast enough to not require further caching of the DocIdSet for each possible combination of terms. However, because docIDs are simply scanned linearly, an index with a great many small documents may find this linear scan too costly.
In contrast, TermsFilter builds up an Lucene.Net.Util.OpenBitSet, keyed by docID, every time it's created, by enumerating through all matching docs using TermDocs to seek and scan through each term's docID list. While there is no linear scan of all docIDs, besides the allocation of the underlying array in the Lucene.Net.Util.OpenBitSet, this approach requires a number of "disk seeks" in proportion to the number of terms, which can be exceptionally costly when there are cache misses in the OS's IO cache.
Generally, this filter will be slower on the first invocation for a given field, but subsequent invocations, even if you change the allowed set of Terms, should be faster than TermsFilter, especially as the number of Terms being matched increases. If you are matching only a very small number of terms, and those terms in turn match a very small number of documents, TermsFilter may perform faster.
Which filter is best is very application dependent.
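A small usage sketch (field and terms are hypothetical):
// Accept only documents whose single "category" term is one of the allowed values.
Filter filter = new FieldCacheTermsFilter("category", "sports", "politics", "science");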
FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet
FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator
FieldComparator
Expert: a FieldComparator compares hits so as to determine their sort order when collecting the top results with TopFieldCollector . The concrete public FieldComparator classes here correspond to the SortField types.
This API is designed to achieve high performance sorting, by exposing a tight interaction with FieldValueHitQueue as it visits hits. Whenever a hit is competitive, it's enrolled into a virtual slot, which is an int ranging from 0 to numHits-1. The FieldComparator is made aware of segment transitions during searching in case any internal state it's tracking needs to be recomputed during these transitions.
A comparator must define these functions: Compare (compare the hits in two slots), SetBottom (record the weakest, or bottom, slot), CompareBottom (compare a new hit against the bottom), Copy (copy a new hit into a slot), SetNextReader (invoked on segment transitions), and Value (return the sort value stored in a slot).
NOTE: This API is experimental and might change in incompatible ways in the next release.
FieldComparator.ByteComparator
Parses field's values as byte (using FieldCache.GetBytes(IndexReader, String)) and sorts by ascending value.
FieldComparator.DocComparator
Sorts by ascending docID
FieldComparator.DoubleComparator
Parses field's values as double (using FieldCache.GetDoubles(IndexReader, String)) and sorts by ascending value.
FieldComparator.FloatComparator
Parses field's values as float (using FieldCache.GetFloats(IndexReader, String)) and sorts by ascending value.
FieldComparator.IntComparator
Parses field's values as int (using FieldCache.GetInts(IndexReader, String)) and sorts by ascending value.
FieldComparator.LongComparator
Parses field's values as long (using FieldCache.GetLongs(IndexReader, String)) and sorts by ascending value.
FieldComparator.RelevanceComparator
Sorts by descending relevance. NOTE: if you are sorting only by descending relevance and then secondarily by ascending docID, performance is faster using TopScoreDocCollector directly (which Search(Query, Int32) uses when no Sort is specified).
FieldComparator.ShortComparator
Parses field's values as short (using FieldCache.GetShorts(IndexReader, String)) and sorts by ascending value.
FieldComparator.StringComparatorLocale
Sorts by a field's value using the Collator for a given Locale.
FieldComparator.StringOrdValComparator
Sorts by field's natural String sort order, using ordinals. This is functionally equivalent to FieldComparator.StringValComparator, but it first resolves the strings to their relative ordinal positions (using the index returned by GetStringIndex(IndexReader, String)), and does most comparisons using the ordinals. For medium to large results, this comparator will be much faster than FieldComparator.StringValComparator. For very small result sets it may be slower.
FieldComparator.StringValComparator
Sorts by field's natural String sort order. All comparisons are done using String.compareTo, which is slow for medium to large result sets but possibly very fast for very small result sets.
FieldComparatorSource
Provides a FieldComparator for custom field sorting.
NOTE: This API is experimental and might change in incompatible ways in the next release.
FieldDoc
Expert: A ScoreDoc which also contains information about how to sort the referenced document. In addition to the document number and score, this object contains an array of values for the document from the field(s) used to sort. For example, if the sort criteria was to sort by fields "a", "b" then "c", the fields object array will have three elements, corresponding respectively to the term values for the document in fields "a", "b" and "c". The class of each element in the array will be either Integer, Float or String depending on the type of values in the terms of each field.
Created: Feb 11, 2004 1:23:38 PM
FieldValueHitQueue
Expert: A hit queue for sorting hits by terms in more than one field. Uses FieldCache.DEFAULT for maintaining internal term lookup tables.
NOTE: This API is experimental and might change in incompatible ways in the next release.
FieldValueHitQueue.Entry
Filter
Abstract base class for restricting which documents may be returned during searching.
FilteredDocIdSet
Abstract decorator class for a DocIdSet implementation that provides on-demand filtering/validation mechanism on a given DocIdSet.
Technically, this same functionality could be achieved with ChainedFilter (under contrib/misc), however the benefit of this class is it never materializes the full bitset for the filter. Instead, the Match(Int32) method is invoked on-demand, per docID visited during searching. If you know few docIDs will be visited, and the logic behind Match(Int32) is relatively costly, this may be a better way to filter than ChainedFilter.
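A sketch of the decorator pattern (the even-docID predicate is purely illustrative; exact member visibility may differ by version):
class EvenDocIdSet : FilteredDocIdSet
{
    public EvenDocIdSet(DocIdSet inner) : base(inner) { }
    // Called on demand per candidate docID; no full bitset is materialized.
    public override bool Match(int docid)
    {
        return docid % 2 == 0; // illustrative predicate only
    }
}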
FilteredDocIdSetIterator
Abstract decorator class of a DocIdSetIterator implementation that provides on-demand filter/validation mechanism on an underlying DocIdSetIterator. See FilteredDocIdSet .
FilteredQuery
A query that applies a filter to the results of another query.
Note: the bits are retrieved from the filter each time this query is used in a search - use a CachingWrapperFilter to avoid regenerating the bits every time.
Created: Apr 20, 2004 8:58:29 AM
FilteredTermEnum
Abstract class for enumerating a subset of all terms.
Term enumerations are always ordered by Term.compareTo(). Each term in the enumeration is greater than all that precede it.
FilterManager
Filter caching singleton. It can be used to save filters locally for reuse. This class makes it possible to cache Filters even when using RMI, as it keeps the cache on the searcher side of the RMI connection.
Also could be used as a persistent storage for any filter as long as the filter provides a proper hashCode(), as that is used as the key in the cache.
The cache is periodically cleaned up from a separate thread to ensure the cache doesn't exceed the maximum size.
FilterManager.FilterCleaner
Keeps the cache from getting too big. If we were using Java 1.5, we could use LinkedHashMap and we would not need this thread to clean out the cache.
The SortedSet sortedFilterItems is used only to sort the items from the cache, so when it's time to clean up we have the TreeSet sort the FilterItems by timestamp.
Removes 1.5 * the number of items to make the cache smaller. For example: if the cache clean size is 10 and the cache is at 15, we would remove (15 - 10) * 1.5 = 7.5, rounded up to 8 items. This way we clean the cache a bit more aggressively, and avoid having the cache cleaner run too frequently.
FilterManager.FilterItem
Holds the filter and the last time the filter was used, to make LRU-based cache cleaning possible. TODO: Clean this up when we switch to Java 1.5
FuzzyQuery
Implements the fuzzy search query. The similarity measurement is based on the Levenshtein (edit distance) algorithm.
Warning: this query is not very scalable with its default prefix length of 0 - in this case, every term will be enumerated and cause an edit score calculation.
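A sketch that avoids the worst case by setting a non-zero prefix length (term is hypothetical; 0.6f is the minimum similarity, 2 the required common prefix):
Query q = new FuzzyQuery(new Term("name", "levenshtein"), 0.6f, 2);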
FuzzyQuery.ScoreTerm
FuzzyTermEnum
Subclass of FilteredTermEnum for enumerating all terms that are similar to the specified filter term.
Term enumerations are always ordered by Term.compareTo(). Each term in the enumeration is greater than all that precede it.
HitQueue
IndexSearcher
Implements search over a single IndexReader.
Applications usually need only call the inherited Search(Query, Int32) or Search(Query, Filter, Int32) methods. For performance reasons it is recommended to open only one IndexSearcher and use it for all of your searches.
NOTE: IndexSearcher instances are completely thread safe, meaning multiple threads can call any of its methods, concurrently. If your application requires external synchronization, you should not synchronize on the IndexSearcher instance; use your own (non-Lucene) objects instead.
MatchAllDocsQuery
A query that matches all documents.
MultiPhraseQuery
MultiPhraseQuery is a generalized version of PhraseQuery, with an added method Add(Term[]). To use this class to search for the phrase "Microsoft app*", first use Add(Term) on the term "Microsoft", then find all terms that have "app" as a prefix using IndexReader.Terms(Term), and use MultiPhraseQuery.Add(Term[] terms) to add them to the query.
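For illustration, a rough sketch of that recipe (field name hypothetical; the prefix expansion is hard-coded rather than enumerated from the index):
MultiPhraseQuery mpq = new MultiPhraseQuery();
mpq.Add(new Term("body", "microsoft"));
// In a real application, enumerate the index for all terms with prefix "app".
mpq.Add(new Term[] { new Term("body", "apple"), new Term("body", "application") });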
MultiSearcher
Implements search over a set of Searchables.
Applications usually need only call the inherited Search(Query, Int32) or Search(Query, Filter, Int32) methods.
MultiTermQuery
An abstract Query that matches documents containing a subset of terms provided by a FilteredTermEnum enumeration.
This query cannot be used directly; you must subclass it and define GetEnum(IndexReader) to provide a FilteredTermEnum that iterates through the terms to be matched.
NOTE: if RewriteMethod is either CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE or SCORING_BOOLEAN_QUERY_REWRITE , you may encounter a BooleanQuery.TooManyClauses exception during searching, which happens when the number of terms to be searched exceeds MaxClauseCount . Setting RewriteMethod to CONSTANT_SCORE_FILTER_REWRITE prevents this.
The recommended rewrite method is CONSTANT_SCORE_AUTO_REWRITE_DEFAULT : it doesn't spend CPU computing unhelpful scores, and it tries to pick the most performant rewrite method given the query.
Note that Lucene.Net.QueryParsers.QueryParser produces MultiTermQueries using CONSTANT_SCORE_AUTO_REWRITE_DEFAULT by default.
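For example, a sketch of switching the rewrite method (this assumes RewriteMethod is exposed as a settable property, as in the Lucene.Net 3.x API):
WildcardQuery wq = new WildcardQuery(new Term("name", "app*"));
// Avoid BooleanQuery.TooManyClauses on very large term expansions.
wq.RewriteMethod = MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE;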
MultiTermQuery.AnonymousClassConstantScoreAutoRewrite
MultiTermQuery.ConstantScoreAutoRewrite
A rewrite method that tries to pick the best constant-score rewrite method based on term and document counts from the query. If both the number of terms and documents is small enough, then CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE is used. Otherwise, CONSTANT_SCORE_FILTER_REWRITE is used.
MultiTermQueryWrapperFilter<T>
A wrapper for MultiTermQuery, that exposes its functionality as a Filter.
MultiTermQueryWrapperFilter is not designed to be used by itself. Normally you subclass it to provide a Filter counterpart for a MultiTermQuery subclass. For example, TermRangeFilter and PrefixFilter extend MultiTermQueryWrapperFilter.
This class also provides the functionality behind CONSTANT_SCORE_FILTER_REWRITE; this is why it is not abstract.
NumericRangeFilter
NumericRangeFilter<T>
A Filter that only accepts numeric values within a specified range. To use this, you must first index the numeric values using NumericField (expert: NumericTokenStream ).
You create a new NumericRangeFilter with the static factory methods, e.g.:
Filter f = NumericRangeFilter.NewFloatRange("weight", 0.03f, 0.10f, true, true);
accepts all documents whose float valued "weight" field ranges from 0.03 to 0.10, inclusive. See NumericRangeQuery<T> for details on how Lucene indexes and searches numeric valued fields.
NOTE: This API is experimental and might change in incompatible ways in the next release.
NumericRangeQuery
NumericRangeQuery<T>
A Query that matches numeric values within a specified range. To use this, you must first index the numeric values using NumericField (expert: NumericTokenStream ). If your terms are instead textual, you should use TermRangeQuery. NumericRangeFilter<T> is the filter equivalent of this query.
You create a new NumericRangeQuery with the static factory methods, e.g.:
Query q = NumericRangeQuery.NewFloatRange("weight", 0.03f, 0.10f, true, true);
matches all documents whose float valued "weight" field ranges from 0.03 to 0.10, inclusive.
The performance of NumericRangeQuery is much better than the corresponding TermRangeQuery because the number of terms that must be searched is usually far fewer, thanks to trie indexing, described below.
You can optionally specify a precisionStep when creating this query. This is necessary if you've changed this configuration from its default (4) during indexing. Lower values consume more disk space but speed up searching. Suitable values are between 1 and 8. A good starting point to test is 4, which is the default value for all Numeric* classes. See below for details.
This query defaults to CONSTANT_SCORE_AUTO_REWRITE_DEFAULT for 32 bit (int/float) ranges with precisionStep <8 and 64 bit (long/double) ranges with precisionStep <6. Otherwise it uses CONSTANT_SCORE_FILTER_REWRITE as the number of terms is likely to be high. With precision steps of <4, this query can be run with one of the BooleanQuery rewrite methods without changing BooleanQuery's default max clause count.
NOTE: This API is experimental and might change in incompatible ways in the next release.
How it works
See the publication about panFMP, where this algorithm was described (referred to as TrieRangeQuery):
Schindler, U, Diepenbroek, M, 2008. Generic XML-based Framework for Metadata Portals. Computers & Geosciences 34 (12), 1947-1955. doi:10.1016/j.cageo.2008.02.023
A quote from this paper: Because Apache Lucene is a full-text search engine and not a conventional database, it cannot handle numerical ranges (e.g., field value is inside user defined bounds, even dates are numerical values). We have developed an extension to Apache Lucene that stores the numerical values in a special string-encoded format with variable precision: all numerical values like doubles, longs, floats, and ints are converted to lexicographic sortable string representations and stored with different precisions (for a more detailed description of how the values are stored, see Lucene.Net.Util.NumericUtils). A range is then divided recursively into multiple intervals for searching: the center of the range is searched only with the lowest possible precision in the trie, while the boundaries are matched more exactly. This reduces the number of terms dramatically.
For the variant that stores long values in 8 different precisions (each reduced by 8 bits) and that uses a lowest precision of 1 byte, the index contains only a maximum of 256 distinct values in the lowest precision. Overall, a range could consist of a theoretical maximum of 7*255*2 + 255 = 3825 distinct terms (when there is a term for every distinct value of an 8-byte-number in the index and the range covers almost all of them; a maximum of 255 distinct values is used because it would always be possible to reduce the full 256 values to one term with degraded precision). In practice, we have seen up to 300 terms in most cases (index with 500,000 metadata records and a uniform value distribution).
Precision Step
You can choose any precisionStep when encoding values. Lower step values mean more precisions and so more terms in the index (and the index gets larger). On the other hand, the maximum number of terms to match is reduced, which optimizes query speed. The formula to calculate the maximum term count is:
n = [ (bitsPerValue/precisionStep - 1) * (2^precisionStep - 1) * 2 ] + (2^precisionStep - 1)
(this formula is only correct when bitsPerValue/precisionStep is an integer; in other cases, the value must be rounded up and the last summand must contain the modulo of the division as precision step). For longs stored using a precision step of 4, n = 15*15*2 + 15 = 465, and for a precision step of 2, n = 31*3*2 + 3 = 189. But the faster search speed is reduced by more seeking in the term enum of the index. Because of this, the ideal precisionStep value can only be found out by testing. Important: you can index with a lower precision step value and test search speed using a multiple of the original step value.
Good values for precisionStep depend on usage and data type.
Comparisons of the different types of RangeQueries on an index with about 500,000 docs showed that TermRangeQuery in boolean rewrite mode (with raised BooleanQuery clause count) took about 30-40 secs to complete, TermRangeQuery in constant score filter rewrite mode took 5 secs and executing this class took <100ms to complete (on an Opteron64 machine, Java 1.5, 8 bit precision step). This query type was developed for a geographic portal, where the performance for e.g. bounding boxes or exact date/time stamps is important.
OccurExtensions
ParallelMultiSearcher
Implements parallel search over a set of Searchables.
Applications usually need only call the inherited Search(Query, Int32) or Search(Query, Filter, Int32) methods.
PhraseQuery
A Query that matches documents containing a particular sequence of terms.
A PhraseQuery is built by QueryParser for input like "new york".
This query may be combined with other terms or queries with a BooleanQuery.
PositiveScoresOnlyCollector
A Collector implementation which wraps another Collector and makes sure only documents with scores > 0 are collected.
PrefixFilter
A Filter that restricts search results to values that have a matching prefix in a given field.
PrefixQuery
A Query that matches documents containing terms with a specified prefix. A PrefixQuery is built by QueryParser for input like app*.
This query uses the CONSTANT_SCORE_AUTO_REWRITE_DEFAULT rewrite method.
PrefixTermEnum
Subclass of FilteredTermEnum for enumerating all terms that match the specified prefix filter term.
Term enumerations are always ordered by Term.compareTo(). Each term in the enumeration is greater than all that precede it.
Query
The abstract base class for queries.
Instantiable subclasses include TermQuery, PhraseQuery, MultiPhraseQuery, BooleanQuery, WildcardQuery, PrefixQuery, FuzzyQuery, TermRangeQuery, NumericRangeQuery<T>, DisjunctionMaxQuery, and MatchAllDocsQuery.
A parser for queries is contained in Lucene.Net.QueryParsers.QueryParser.
QueryTermVector
QueryWrapperFilter
Constrains search results to only match those which also match a provided query.
This could be used, for example, with a TermRangeQuery on a suitably formatted date field to implement date filtering. One could re-use a single QueryFilter that matches, e.g., only documents modified within the last week. The QueryFilter and TermRangeQuery would only need to be reconstructed once per day.
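The date-filtering idea above, as a sketch (field name and date format are hypothetical):
// Wrap a range query over a yyyyMMdd-formatted date field as a reusable filter.
Filter lastWeek = new QueryWrapperFilter(
    new TermRangeQuery("modified", "20040413", "20040420", true, true));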
RewriteMethod
Abstract class that defines how the query is rewritten.
ScoreCachingWrappingScorer
A Scorer which wraps another scorer and caches the score of the current document. Successive calls to Score() will return the same result and will not invoke the wrapped Scorer's score() method, unless the current document has changed.
This class might be useful due to the changes done to the Collector interface, in which the score is not computed for a document by default, only if the collector requests it. Some collectors may need to use the score in several places; however, all they have in hand is a Scorer object, and might end up computing the score of a document more than once.
ScoreDoc
Expert: Returned by low-level search implementations.
Scorer
Expert: Common scoring functionality for different types of queries.
A Scorer iterates over documents matching a query in increasing order of doc Id. Document scores are computed using a given Similarity implementation.
NOTE: The values Float.NaN, Float.NEGATIVE_INFINITY and Float.POSITIVE_INFINITY are not valid scores. Certain collectors (e.g. TopScoreDocCollector) will not properly collect hits with these scores.
Searcher
An abstract base class for search implementations. Implements the main search methods.
Note that you can only access hits from a Searcher as long as it is not yet closed, otherwise an IOException will be thrown.
Similarity
Expert: Scoring API.
Subclasses implement search scoring.
The score of query q for document d correlates to the cosine-distance or dot-product between document and query vectors in a Vector Space Model (VSM) of Information Retrieval. A document whose vector is closer to the query vector in that model is scored higher. The score is computed as follows:
score(q,d) = coord(q,d) * queryNorm(q) * SUM over t in q of [ tf(t in d) * idf(t)^2 * t.Boost * norm(t,d) ]
where coord(q,d) is a score factor based on how many of the query terms are found in the document, queryNorm(q) is a normalizing factor used to make scores between queries comparable, tf(t in d) is the term's frequency factor in the document, idf(t) is the inverse document frequency of the term, t.Boost is the search-time boost of term t, and norm(t,d) encapsulates the indexing-time boosts and length normalization.
SimilarityDelegator
Expert: Delegating scoring implementation. Useful in GetSimilarity(Searcher) implementations, to override only certain methods of a Searcher's Similarity implementation.
SingleTermEnum
Subclass of FilteredTermEnum for enumerating a single term.
This can be used by MultiTermQuerys that need only visit one term, but want to preserve MultiTermQuery semantics such as RewriteMethod.
Sort
Encapsulates sort criteria for returned hits.
The fields used to determine sort order must be carefully chosen. Documents must contain a single term in such a field, and the value of the term should indicate the document's relative position in a given sort order. The field must be indexed, but should not be tokenized, and does not need to be stored (unless you happen to want it back with the rest of your document data). In other words:
document.Add(new Field("byNumber", x.ToString(), Field.Store.NO, Field.Index.NOT_ANALYZED));
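On the search side, a sketch of sorting by that field (assumes the Search(Query, Filter, Int32, Sort) overload referenced elsewhere in this namespace):
Sort sort = new Sort(new SortField("byNumber", SortField.INT));
TopDocs hits = searcher.Search(query, null, 10, sort);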
Valid Types of Values
There are four possible kinds of term values which may be put into sorting fields: Integers, Longs, Floats, or Strings. Unless SortField objects are specified, the type of value in the field is determined by parsing the first term in the field.
Integer term values should contain only digits and an optional preceding negative sign. Values must be base 10 and in the range Integer.MIN_VALUE through Integer.MAX_VALUE inclusive. Documents which should appear first in the sort should have low value integers, later documents high values (i.e. the documents should be numbered 1..n where 1 is the first and n the last).
Long term values should contain only digits and an optional preceding negative sign. Values must be base 10 and in the range Long.MIN_VALUE through Long.MAX_VALUE inclusive. Documents which should appear first in the sort should have low values, later documents high values.
Float term values should conform to values accepted by Float.valueOf(String) (except that NaN and Infinity are not supported). Documents which should appear first in the sort should have low values, later documents high values.
String term values can contain any valid String, but should not be tokenized. The values are sorted according to their natural order.
Object Reuse
One of these objects can be used multiple times and the sort order changed between usages.
This class is thread safe.
Memory Usage
Sorting uses caches of term values maintained by the internal HitQueue(s). The cache is static and contains an integer or float array of length IndexReader.MaxDoc for each field name for which a sort is performed. In other words, the size of the cache in bytes is:
4 * IndexReader.MaxDoc * (# of different fields actually used to sort)
For String fields, the cache is larger: in addition to the above array, the value of every term in the field is kept in memory. If there are many unique terms in the field, this could be quite large.
Note that the size of the cache is not affected by how many fields are in the index and might be used to sort - only by the ones actually used to sort a result set.
Created: Feb 12, 2004 10:53:57 AM
SortField
Stores information about how to sort documents by terms in an individual field. Fields must be indexed in order to sort by them.
Created: Feb 11, 2004 1:25:29 PM
SpanFilter
Abstract base class providing a mechanism to restrict searches to a subset of an index while also maintaining and returning position information. This is useful if you want to compare the positions from a SpanQuery with the positions of items in a filter. For instance, if you had a SpanFilter that marked all the occurrences of the word "foo" in documents, and then you entered a new SpanQuery containing "bar", you could not only filter by the word "foo", but you could then compare position information for post processing.
SpanFilterResult
The results of a SpanQueryFilter. Wraps the BitSet and the position information from the SpanQuery
NOTE: This API is still experimental and subject to change.
SpanFilterResult.PositionInfo
SpanFilterResult.StartEnd
SpanQueryFilter
Constrains search results to only match those which also match a provided query. Also provides position information about where each document matches, at the cost of extra space compared with the QueryWrapperFilter: namely, the position information for each matching document is stored.
This filter does not cache. See the CachingSpanFilter for a wrapper that caches.
StringIndex
Expert: Stores term text values and document ordering data.
TermQuery
A Query that matches documents containing a term. This may be combined with other terms with a BooleanQuery.
TermRangeFilter
A Filter that restricts search results to a range of values in a given field.
This filter matches documents looking for terms that fall into the supplied range according to String.CompareTo(String), unless a Collator is provided.
If you construct a large number of range filters with different ranges but on the same field, FieldCacheRangeFilter may have significantly better performance.
TermRangeQuery
A Query that matches documents within an exclusive range of terms.
This query matches documents looking for terms that fall into the supplied range according to String.CompareTo(String), unless a Collator is provided. It is not intended for numerical ranges; use NumericRangeQuery<T> for that.
This query uses the CONSTANT_SCORE_AUTO_REWRITE_DEFAULT rewrite method.
TermRangeTermEnum
Subclass of FilteredTermEnum for enumerating all terms that match the specified range parameters.
Term enumerations are always ordered by Term.compareTo(). Each term in the enumeration is greater than all that precede it.
TermScorer
Expert: A Scorer for documents matching a Term.
TimeLimitingCollector
The TimeLimitingCollector is used to timeout search requests that take longer than the maximum allowed search time limit. After this time is exceeded, the search thread is stopped by throwing a TimeLimitingCollector.TimeExceededException.
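A sketch of wrapping a collector with a time limit (the 1000 ms budget is arbitrary):
// Stop collecting after roughly 1000 ms; partial results remain in the wrapped collector.
TopScoreDocCollector inner = TopScoreDocCollector.Create(10, true);
Collector limited = new TimeLimitingCollector(inner, 1000);
try
{
    searcher.Search(query, limited);
}
catch (TimeLimitingCollector.TimeExceededException)
{
    // Timed out; inner still holds the hits gathered so far.
}
TopDocs results = inner.TopDocs();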
TimeLimitingCollector.TimeExceededException
Thrown when elapsed search time exceeds allowed search time.
TopDocs
Represents hits returned by Search(Query, Filter, Int32) and Search(Query, Int32).
TopDocsCollector<T>
A base class for all collectors that return a TopDocs output. This
collector allows easy extension by providing a single constructor which
accepts a Lucene.Net.Util.PriorityQueue<T> as well as protected members for that
priority queue and a counter of the number of total hits.
Extending classes can override TopDocs(Int32, Int32) and
TotalHits in order to provide their own implementation.
TopFieldCollector
A Collector that sorts by SortField using FieldComparators.
See the Create(Sort, Int32, Boolean, Boolean, Boolean, Boolean) method for instantiating a TopFieldCollector.
NOTE: This API is experimental and might change in incompatible ways in the next release.
TopFieldDocs
Represents hits returned by Search(Query, Filter, Int32, Sort).
TopScoreDocCollector
A Collector implementation that collects the top-scoring hits, returning them as a TopDocs. This is used by IndexSearcher to implement TopDocs-based search. Hits are sorted by score descending and then (when the scores are tied) docID ascending. When you create an instance of this collector you should know in advance whether documents are going to be collected in doc Id order or not.
NOTE: The values Float.NaN and Float.NEGATIVE_INFINITY are not valid scores. This collector will not properly collect hits with such scores.
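A usage sketch (assumes the Create factory takes the hit count and an in-order flag, as in the Lucene.Net 3.x API):
TopScoreDocCollector collector = TopScoreDocCollector.Create(10, true);
searcher.Search(query, collector);
TopDocs top = collector.TopDocs();
foreach (ScoreDoc hit in top.ScoreDocs)
{
    // hit.Doc is the docID, hit.Score its relevance score.
}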
Weight
Expert: Calculate query weights and build query scorers.
The purpose of Weight is to ensure searching does not modify a Query, so that a Query instance can be reused. Searcher dependent state of the query should reside in the Weight. IndexReader dependent state should reside in the Scorer(IndexReader, Boolean, Boolean).
A Weight is used in the following way: a Weight is constructed by a top-level query, given a Searcher; the sum of squared weights is then computed to obtain the query normalization factor, which is passed to Normalize(Single); once weighting is complete, a Scorer is constructed by Scorer(IndexReader, Boolean, Boolean).
WildcardQuery
Implements the wildcard search query. Supported wildcards are *, which matches any character sequence (including the empty one), and ?, which matches any single character. Note this query can be slow, as it needs to iterate over many terms. In order to prevent extremely slow WildcardQueries, a wildcard term should not start with one of the wildcards * or ?.
This query uses the CONSTANT_SCORE_AUTO_REWRITE_DEFAULT rewrite method.
WildcardTermEnum
Subclass of FilteredTermEnum for enumerating all terms that match the specified wildcard filter term.
Term enumerations are always ordered by Term.compareTo(). Each term in the enumeration is greater than all that precede it.
Structs
FieldCache_Fields
Interfaces
ByteParser
Interface to parse bytes from document fields.
DoubleParser
Interface to parse doubles from document fields.
FieldCache
FloatParser
Interface to parse floats from document fields.
IntParser
Interface to parse ints from document fields.
LongParser
Interface to parse longs from document fields.
Parser
Marker interface as super-interface to all parsers. It is used to specify a custom parser to SortField(String, Parser).
Searchable
The interface for search implementations.
Searchable is the abstract network protocol for searching. Implementations provide search over a single index, over multiple indices, and over indices on remote servers.
Queries, filters and sort criteria are designed to be compact so that they may be efficiently passed to a remote index, with only the top-scoring hits being returned, rather than every matching hit.
NOTE: this interface is kept public for convenience. Since it is not expected to be implemented directly, it may be changed unexpectedly between releases.
ShortParser
Interface to parse shorts from document fields.