Class Token
A Token is an occurrence of a term from the text of a field. It consists of a term's text, the start and end offset of the term in the text of the field, and a type string.
The start and end offsets permit applications to re-associate a token with its source text, e.g., to display highlighted query terms in a document browser, or to show matching text fragments in a KWIC (KeyWord In Context) display, etc.
The type is a string, assigned by a lexical analyzer (a.k.a. tokenizer), naming the lexical or syntactic class that the token belongs to. For example, an end-of-sentence marker token might be implemented with type "eos". The default token type is "word".
A Token can optionally have metadata (a.k.a. payload) in the form of a variable length byte array. Use GetPayload() to retrieve the payloads from the index.
NOTE: As of 2.9, Token implements all IAttribute interfaces that are part of core Lucene and can be found in the Lucene.Net.Analysis.TokenAttributes namespace. Even though it is no longer necessary to use Token with the new TokenStream API, it can be used as a convenience class that implements all IAttributes, which is especially useful for easily switching from the old to the new TokenStream API.
Tokenizers and TokenFilters should try to re-use a Token instance when possible for best performance, by implementing the IncrementToken() API. Failing that, to create a new Token you should first use one of the constructors that starts with null text. To load the token from a char[] use CopyBuffer(Char[], Int32, Int32). To load from a string, use SetEmpty() followed by Append(String).
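A minimal sketch of this guidance, using only members listed on this page (SetEmpty()/Append are inherited from CharTermAttribute; exact signatures may vary by version):

```csharp
using Lucene.Net.Analysis;

// Construct a Token with null text plus start/end offsets,
// then load the term text without allocating a string.
var token = new Token(0, 5);
char[] buffer = { 'h', 'e', 'l', 'l', 'o' };
token.CopyBuffer(buffer, 0, buffer.Length); // load term text from a char[]

// Loading from a string instead: clear the term, then append.
token.SetEmpty().Append("hello");
```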
Typical Token reuse patterns:
- Copying text from a string (type is reset to DEFAULT_TYPE if not specified):
return reusableToken.Reinit(string, startOffset, endOffset[, type]);
- Copying some text from a string (type is reset to DEFAULT_TYPE if not specified):
return reusableToken.Reinit(string, 0, string.Length, startOffset, endOffset[, type]);
- Copying text from char[] buffer (type is reset to DEFAULT_TYPE if not specified):
return reusableToken.Reinit(buffer, 0, buffer.Length, startOffset, endOffset[, type]);
- Copying some text from a char[] buffer (type is reset to DEFAULT_TYPE if not specified):
return reusableToken.Reinit(buffer, start, end - start, startOffset, endOffset[, type]);
- Copying from one Token to another (type is reset to DEFAULT_TYPE if not specified):
return reusableToken.Reinit(source.Buffer, 0, source.Length, source.StartOffset, source.EndOffset[, source.Type]);
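For instance, the first pattern above might look like this in a tokenizer that owns a reusable Token (hypothetical text and offsets; the Reinit overloads are those listed on this page):

```csharp
using Lucene.Net.Analysis;

Token reusableToken = new Token();
string text = "lucene";
int startOffset = 10, endOffset = 16;

// Re-populate the existing instance instead of allocating a new Token.
// Because no type argument is given, the type is reset to the default ("word").
reusableToken.Reinit(text, startOffset, endOffset);
```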
- Clear() initializes all of the fields to default values. This was changed in contrast to Lucene 2.4, but should affect no one.
- Because TokenStreams can be chained, one cannot assume that the Token's current type is correct.
- The startOffset and endOffset represent the start and end offset in the source text, so be careful when adjusting them.
- When caching a reusable token, clone it. When injecting a cached token into a stream that can be reset, clone it again.
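The caching rule above can be sketched as follows (a hedged example; Clone() is assumed to return object, as is usual for Lucene.Net attribute implementations, so a cast is shown):

```csharp
using System.Collections.Generic;
using Lucene.Net.Analysis;

var cache = new List<Token>();

Token current = new Token("term", 0, 4);
cache.Add((Token)current.Clone()); // clone when caching the reusable token

foreach (Token cached in cache)
{
    // Clone again before injecting into a stream that can be reset,
    // so replays never share state with the cached copy.
    Token replayed = (Token)cached.Clone();
    // ... hand "replayed" to the consuming stream
}
```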
Please note: With Lucene 3.1, the ToString() method had to be changed to match the ICharSequence interface introduced by ICharTermAttribute. This method now prints only the term text, with no additional information.
Assembly: DistributedLucene.Net.dll
Syntax
public class Token : CharTermAttribute, ICharTermAttribute, ICharSequence, ITermToBytesRefAttribute, ITypeAttribute, IPositionIncrementAttribute, IFlagsAttribute, IOffsetAttribute, IPayloadAttribute, IPositionLengthAttribute, IAttribute
Constructors
Name | Description |
---|---|
Token() | Constructs a Token with null text. |
Token(Char[], Int32, Int32, Int32, Int32) | Constructs a Token with the given term buffer (offset & length), start and end offsets |
Token(Int32, Int32) | Constructs a Token with null text and start & end offsets. |
Token(Int32, Int32, Int32) | Constructs a Token with null text and start & end offsets plus flags. NOTE: flags is EXPERIMENTAL. |
Token(Int32, Int32, String) | Constructs a Token with null text and start & end offsets plus the Token type. |
Token(String, Int32, Int32) | Constructs a Token with the given term text, and start & end offsets. The type defaults to "word." NOTE: for better indexing speed you should instead use the char[] termBuffer methods to set the term text. |
Token(String, Int32, Int32, Int32) | Constructs a Token with the given term text, start and end offsets, & flags. NOTE: flags is EXPERIMENTAL. NOTE: for better indexing speed you should instead use the char[] termBuffer methods to set the term text. |
Token(String, Int32, Int32, String) | Constructs a Token with the given text, start and end offsets, & type. NOTE: for better indexing speed you should instead use the char[] termBuffer methods to set the term text. |
Fields
Name | Description |
---|---|
TOKEN_ATTRIBUTE_FACTORY | Convenience factory that returns Token as the implementation for the basic attributes and returns the default implementation (with "Impl" appended) for all other attributes. @since 3.0 |
Properties
Name | Description |
---|---|
EndOffset | Returns this Token's ending offset, one greater than the position of the last character corresponding to this token in the source text. The length of the token in the source text is (EndOffset - StartOffset). |
Flags | Get the bitset for any bits that have been set. This is completely distinct from Type, although they do share similar purposes. The flags can be used to encode information about the token for use by other TokenFilters. |
Payload | Gets or Sets this Token's payload. |
PositionIncrement | Gets or Sets the position increment (the distance from the prior term). The default value is one. |
PositionLength | Gets or Sets the position length of this Token (how many positions this token spans). The default value is one. |
StartOffset | Returns this Token's starting offset, the position of the first character corresponding to this token in the source text. Note that the difference between EndOffset and StartOffset may not be equal to termText.Length, as the term text may have been altered by a stemmer or some other filter. |
Type | Gets or Sets this Token's lexical type. Defaults to "word". |