Namespace Lucene.Net.Analysis
Classes
Analyzer
An Analyzer builds TokenStreams, which analyze text. It thus represents a policy for extracting index terms from text.
Typical implementations first build a Tokenizer, which breaks the stream of characters from the Reader into raw Tokens. One or more TokenFilters may then be applied to the output of the Tokenizer.
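For a concrete picture of that policy, here is a minimal sketch (not taken from this page) of an Analyzer that wires a Tokenizer to two TokenFilters. It assumes the Lucene.Net 3.x TokenStream(String, TextReader) override and the WhitespaceTokenizer, LowerCaseFilter, and LengthFilter classes listed in this namespace; the analyzer class itself is hypothetical.
using System.IO;
using Lucene.Net.Analysis;

// Hypothetical analyzer: whitespace tokenization, then lower-casing,
// then dropping terms shorter than 2 or longer than 20 characters.
class LowercaseLengthAnalyzer : Analyzer
{
    public override TokenStream TokenStream(string fieldName, TextReader reader)
    {
        TokenStream stream = new WhitespaceTokenizer(reader);
        stream = new LowerCaseFilter(stream);
        stream = new LengthFilter(stream, 2, 20);
        return stream;
    }
}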
ASCIIFoldingFilter
This class converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if one exists.
Characters from a wide range of Latin-related Unicode blocks are converted; however, only those characters with reasonable ASCII alternatives are converted.
See: http://en.wikipedia.org/wiki/Latin_characters_in_Unicode
The set of character conversions supported by this class is a superset of those supported by Lucene's ISOLatin1AccentFilter, which strips accents from Latin-1 characters. For example, 'à' will be replaced by 'a'.
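As a quick illustration (a sketch, not taken from this page), the filter is simply wrapped around another TokenStream:
using System.IO;
using Lucene.Net.Analysis;

// "café naïve São Paulo" is emitted as "cafe naive Sao Paulo";
// case is preserved, only the accents/diacritics are folded away.
TokenStream stream = new ASCIIFoldingFilter(
    new WhitespaceTokenizer(new StringReader("café naïve São Paulo")));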
BaseCharFilter
Base utility class for implementing a CharFilter. You subclass this, record mappings by calling AddOffCorrectMap(Int32, Int32), and then invoke the Correct(Int32) method to correct an offset.
CachingTokenFilter
This class can be used if the token attributes of a TokenStream are intended to be consumed more than once. It caches all token attribute states locally in a List.
CachingTokenFilter implements the optional method Reset(), which repositions the stream to the first Token.
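A hedged sketch of two passes over the same tokens; the ITermAttribute interface and the generic AddAttribute<T>() call are assumptions about the Lucene.Net 3.x attribute API:
using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Tokenattributes;

Analyzer analyzer = new WhitespaceAnalyzer();
TokenStream source = analyzer.TokenStream("body", new StringReader("cached twice"));
CachingTokenFilter cached = new CachingTokenFilter(source);
ITermAttribute term = cached.AddAttribute<ITermAttribute>();

while (cached.IncrementToken())        // first pass consumes the input and fills the cache
    Console.WriteLine(term.Term);

cached.Reset();                        // reposition to the first token
while (cached.IncrementToken())        // second pass replays the cached states
    Console.WriteLine(term.Term);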
ChainedFilter
Allows multiple Filters to be chained. Logical operations such as NOT and XOR are applied between filters. One operation can be used for all filters, or a specific operation can be declared for each filter.
The order in which filters are called depends on the position of the filter in the chain. It is probably more efficient to place the most restrictive / least computationally-intensive filters first.
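A heavily hedged sketch of per-filter logic: the ChainedFilter constructor overloads and the OR/AND/ANDNOT constants below are assumptions carried over from the contrib version, not confirmed by this page. The core Filter, TermQuery, and QueryWrapperFilter types are standard.
using Lucene.Net.Index;
using Lucene.Net.Search;

Filter[] chain = new Filter[]
{
    new QueryWrapperFilter(new TermQuery(new Term("type", "book"))),     // start set
    new QueryWrapperFilter(new TermQuery(new Term("lang", "en"))),       // intersect
    new QueryWrapperFilter(new TermQuery(new Term("status", "deleted"))) // subtract
};
// Assumed logic constants: OR the first result in, AND the second, AND NOT the third.
int[] logic = { ChainedFilter.OR, ChainedFilter.AND, ChainedFilter.ANDNOT };
Filter combined = new ChainedFilter(chain, logic);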
CharArraySet
A simple class that stores Strings as char[]'s in a hash table. Note that this is not a general purpose class. For example, it cannot remove items from the set, nor does it resize its hash table to be smaller, etc. It is designed to be quick to test if a char[] is in the set without the necessity of converting it to a String first.
Please note: this class implements only part of the usual set contract; note the restrictions described above.
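A small usage sketch; the (startSize, ignoreCase) constructor and the char[] overload of Contains are assumed from the 3.x API rather than confirmed by this page:
using Lucene.Net.Analysis;

CharArraySet stopSet = new CharArraySet(16, true);   // second argument: ignoreCase
stopSet.Add("the");
stopSet.Add("and");

char[] buffer = "The".ToCharArray();
bool isStop = stopSet.Contains(buffer, 0, buffer.Length);   // true, and no String is allocated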
CharArraySet.CharArraySetEnumerator
The IEnumerator<String> for this set. Strings are constructed on the fly, so use nextCharArray for more efficient access.
CharFilter
Subclasses of CharFilter can be chained to filter a CharStream. They can be used as a Reader with additional offset correction.
CharReader
CharReader is a Reader wrapper. It reads chars from the Reader and outputs a CharStream, defining an identity CorrectOffset(Int32) method that simply returns the provided offset.
CharStream
CharStream adds CorrectOffset(Int32) functionality over a Reader. All Tokenizers accept a CharStream instead of a Reader as input, which enables arbitrary character-based filtering before tokenization.
CharTokenizer
An abstract base class for simple, character-oriented tokenizers.
ISOLatin1AccentFilter
A filter that replaces accented characters in the ISO Latin 1 character set (ISO-8859-1) by their unaccented equivalents. The case will not be altered.
For instance, 'à' will be replaced by 'a'.
KeywordAnalyzer
"Tokenizes" the entire stream as a single token. This is useful for data like zip codes, ids, and some product names.
KeywordTokenizer
Emits the entire input as a single token.
LengthFilter
Removes words that are too long or too short from the stream.
LetterTokenizer
A LetterTokenizer is a tokenizer that divides text at non-letters. That is, it defines tokens as maximal strings of adjacent letters, as determined by the java.lang.Character.isLetter() predicate. Note: this does a decent job for most European languages, but a terrible job for some Asian languages, where words are not separated by spaces.
LowerCaseFilter
Normalizes token text to lower case.
LowerCaseTokenizer
LowerCaseTokenizer performs the function of LetterTokenizer and LowerCaseFilter together. It divides text at non-letters and converts the resulting tokens to lower case. While it is functionally equivalent to the combination of LetterTokenizer and LowerCaseFilter, there is a performance advantage to doing the two tasks at once, hence this (redundant) implementation.
Note: this does a decent job for most European languages, but does a terrible job for some Asian languages, where words are not separated by spaces.
MappingCharFilter
Simplistic CharFilter that applies the mappings contained in a NormalizeCharMap to the character stream, correcting the resulting changes to the offsets.
NormalizeCharMap
Holds a map of String input to String output, to be used with MappingCharFilter.
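A hedged sketch of the pair in use; CharReader.Get and the MappingCharFilter constructor shown here are assumptions about the 3.x API:
using System.IO;
using Lucene.Net.Analysis;

NormalizeCharMap map = new NormalizeCharMap();
map.Add("ß", "ss");     // every "ß" in the input becomes "ss" before tokenization

CharStream stream = new MappingCharFilter(map, CharReader.Get(new StringReader("Straße")));
Tokenizer tokenizer = new WhitespaceTokenizer(stream);
// Token offsets are mapped back to positions in the original text via CorrectOffset.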
NumericTokenStream
Expert: This class provides a TokenStream for indexing numeric values that can be used by NumericRangeQuery<T> or NumericRangeFilter<T>.
Note that for simple usage, NumericField is recommended. NumericField disables norms and term freqs, as they are not usually needed during searching. If you need to change these settings, you should use this class.
See NumericField for capabilities of fields indexed numerically.
Here's an example usage, for an int
field:
Field field = new Field(name, new NumericTokenStream(precisionStep).setIntValue(value));
field.setOmitNorms(true);
field.setOmitTermFreqAndPositions(true);
document.add(field);
For optimal performance, re-use the TokenStream and Field instance for more than one document:
NumericTokenStream stream = new NumericTokenStream(precisionStep);
Field field = new Field(name, stream);
field.setOmitNorms(true);
field.setOmitTermFreqAndPositions(true);
Document document = new Document();
document.add(field);
for(all documents) {
stream.setIntValue(value);
writer.addDocument(document);
}
This stream is not intended to be used in analyzers; it's more for iterating the different precisions during indexing a specific numeric value.
NOTE: as token streams are only consumed once
the document is added to the index, if you index more
than one numeric field, use a separate NumericTokenStream
instance for each.
See NumericRangeQuery<T> for more details on the
precisionStep
parameter as well as how numeric fields work under the hood.
NOTE: This API is experimental and might change in incompatible ways in the next release. (Available since 2.9.)
PerFieldAnalyzerWrapper
This analyzer is used to facilitate scenarios where different fields require different analysis techniques. Use AddAnalyzer(String, Analyzer) to add a non-default analyzer on a field name basis.
Example usage:
PerFieldAnalyzerWrapper aWrapper =
new PerFieldAnalyzerWrapper(new StandardAnalyzer());
aWrapper.addAnalyzer("firstname", new KeywordAnalyzer());
aWrapper.addAnalyzer("lastname", new KeywordAnalyzer());
In this example, StandardAnalyzer will be used for all fields except "firstname" and "lastname", for which KeywordAnalyzer will be used.
A PerFieldAnalyzerWrapper can be used like any other analyzer, for both indexing and query parsing.
PorterStemFilter
Transforms the token stream as per the Porter stemming algorithm. Note: the input to the stemming filter must already be in lower case, so you will need to use LowerCaseFilter or LowerCaseTokenizer farther down the Tokenizer chain in order for this to work properly!
To use this filter with other analyzers, you'll want to write an Analyzer class that sets up the TokenStream chain as you want it. To use this with LowerCaseTokenizer, for example, you'd write an analyzer like this:
class MyAnalyzer extends Analyzer {
public final TokenStream tokenStream(String fieldName, Reader reader) {
return new PorterStemFilter(new LowerCaseTokenizer(reader));
}
}
SimpleAnalyzer
An Analyzer that filters LetterTokenizer with LowerCaseFilter.
StopAnalyzer
Filters LetterTokenizer with LowerCaseFilter and StopFilter.
You must specify the required Lucene.Net.Util.Version compatibility when creating StopAnalyzer.
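For example (the second constructor, taking a caller-supplied stop set, is an assumption about the 3.x overloads):
using System.Collections.Generic;
using Lucene.Net.Analysis;
using Version = Lucene.Net.Util.Version;

// Default English stop words, with the matchVersion stated explicitly.
StopAnalyzer analyzer = new StopAnalyzer(Version.LUCENE_30);

// Assumed overload: the same analyzer with a caller-supplied stop word set.
StopAnalyzer custom = new StopAnalyzer(Version.LUCENE_30,
    new HashSet<string> { "foo", "bar" });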
StopFilter
Removes stop words from a token stream.
TeeSinkTokenFilter
This TokenFilter provides the ability to set aside attribute states that have already been analyzed. This is useful in situations where multiple fields share many common analysis steps and then go their separate ways.
It is also useful for doing things like entity extraction or proper noun analysis as part of the analysis workflow and saving off those tokens for use in another field.
TeeSinkTokenFilter source1 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader1));
TeeSinkTokenFilter.SinkTokenStream sink1 = source1.newSinkTokenStream();
TeeSinkTokenFilter.SinkTokenStream sink2 = source1.newSinkTokenStream();
TeeSinkTokenFilter source2 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader2));
source2.addSinkTokenStream(sink1);
source2.addSinkTokenStream(sink2);
TokenStream final1 = new LowerCaseFilter(source1);
TokenStream final2 = source2;
TokenStream final3 = new EntityDetect(sink1);
TokenStream final4 = new URLDetect(sink2);
d.add(new Field("f1", final1));
d.add(new Field("f2", final2));
d.add(new Field("f3", final3));
d.add(new Field("f4", final4));
In this example, sink1 and sink2 will both get tokens from both reader1 and reader2 after the whitespace tokenizer. Any of these streams can then be further wrapped in extra analysis, and more "sources" can be inserted if desired.
It is important that tees are consumed before sinks (in the above example, the field names must be less than the sink's field names). If you are not sure which stream is consumed first, you can simply add another sink and then pass all tokens to the sinks at once using ConsumeAllTokens(). This TokenFilter is exhausted after that. For the example above, this means changing it to:
...
TokenStream final1 = new LowerCaseFilter(source1.newSinkTokenStream());
TokenStream final2 = source2.newSinkTokenStream();
sink1.consumeAllTokens();
sink2.consumeAllTokens();
...
In this case, the fields can be added in any order, because the sources are not used anymore and all sinks are ready.
Note, the EntityDetect and URLDetect TokenStreams are for the example and do not currently exist in Lucene.
TeeSinkTokenFilter.AnonymousClassSinkFilter
TeeSinkTokenFilter.SinkFilter
A filter that decides which Lucene.Net.Util.AttributeSource states to store in the sink.
TeeSinkTokenFilter.SinkTokenStream
Token
A Token is an occurrence of a term from the text of a field. It consists of a term's text, the start and end offset of the term in the text of the field, and a type string.
The start and end offsets permit applications to re-associate a token with its source text, e.g., to display highlighted query terms in a document browser, or to show matching text fragments in a KWIC display, etc.
The type is a string, assigned by a lexical analyzer
(a.k.a. tokenizer), naming the lexical or syntactic class that the token
belongs to. For example an end of sentence marker token might be implemented
with type "eos". The default token type is "word".
A Token can optionally have metadata (a.k.a. Payload) in the form of a variable length byte array. Use PayloadLength and GetPayload(Byte[], Int32) to retrieve the payloads from the index.
Token.TokenAttributeFactory
Expert: Creates an AttributeFactory that returns Token as the instance for the basic attributes and, for all other attributes, calls the given delegate factory.
TokenFilter
A TokenFilter is a TokenStream whose input is another TokenStream.
This is an abstract class; subclasses must override IncrementToken().
Tokenizer
A Tokenizer is a TokenStream whose input is a Reader.
This is an abstract class; subclasses must override IncrementToken()
NOTE: Subclasses overriding IncrementToken() must call Lucene.Net.Util.AttributeSource.ClearAttributes before setting attributes.
TokenStream
A TokenStream
enumerates the sequence of tokens, either from
Fields of a Document or from query text.
This is an abstract class. Concrete subclasses are Tokenizer, a TokenStream whose input is a Reader, and TokenFilter, a TokenStream whose input is another TokenStream.
The TokenStream API was introduced with Lucene 2.9. It has moved from being Token based to Lucene.Net.Util.IAttribute based. While Token still exists in 2.9 as a convenience class, the preferred way to store the information of a Token is to use Lucene.Net.Util.Attributes.
TokenStream now extends Lucene.Net.Util.AttributeSource, which provides access to all of the token Lucene.Net.Util.IAttributes for the TokenStream. Note that only one instance per Lucene.Net.Util.Attribute is created and reused for every token. This approach reduces object creation and allows local caching of references to the Lucene.Net.Util.Attributes. See IncrementToken() for further details.
The workflow of the new TokenStream API is as follows:
1. Instantiation of the TokenStream/TokenFilter chain, which adds and gets attributes to/from the AttributeSource.
2. The consumer calls Reset().
3. The consumer retrieves attributes from the stream and stores local references to the attributes it wants to access.
4. The consumer calls IncrementToken() until it returns false, consuming the attributes after each call.
5. The consumer calls End() so that any end-of-stream operations can be performed.
6. Finally, the consumer calls Close() to release any resources when finished using the TokenStream.
You can find some example code for the new API in the analysis package level Javadoc.
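A hedged sketch of that consumer workflow; the attribute interfaces (ITermAttribute, IOffsetAttribute) and the generic AddAttribute<T>() call are assumptions about the Lucene.Net 3.x attribute API:
using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Tokenattributes;

Analyzer analyzer = new WhitespaceAnalyzer();
TokenStream stream = analyzer.TokenStream("content", new StringReader("some sample text"));

stream.Reset();                                    // step 2

// Step 3: store local references to the attributes you want to read.
ITermAttribute termAtt = stream.AddAttribute<ITermAttribute>();
IOffsetAttribute offsetAtt = stream.AddAttribute<IOffsetAttribute>();

while (stream.IncrementToken())                    // step 4
{
    Console.WriteLine("{0} [{1}-{2}]", termAtt.Term,
        offsetAtt.StartOffset, offsetAtt.EndOffset);
}
stream.End();                                      // step 5
stream.Close();                                    // step 6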
Sometimes it is desirable to capture the current state of a TokenStream, e.g. for buffering purposes (see CachingTokenFilter, TeeSinkTokenFilter). For this use case, Lucene.Net.Util.AttributeSource.CaptureState and Lucene.Net.Util.AttributeSource.RestoreState(Lucene.Net.Util.AttributeSource.State) can be used.
WhitespaceAnalyzer
An Analyzer that uses WhitespaceTokenizer.
WhitespaceTokenizer
A WhitespaceTokenizer is a tokenizer that divides text at whitespace. Adjacent sequences of non-whitespace characters form tokens.
WordlistLoader
Loader for text files that represent a list of stopwords.
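For illustration, a one-line sketch; the GetWordSet overload used here and its return type are assumptions about the 3.x API rather than confirmed signatures:
using System.IO;
using Lucene.Net.Analysis;

// Reads one stop word per line from the file into a set usable by StopFilter/StopAnalyzer.
var stopWords = WordlistLoader.GetWordSet(new FileInfo("stopwords.txt"));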