Class TokenSources
Hides implementation issues associated with obtaining a TokenStream for use with the Highlighter: the stream can be obtained either from term vectors with offsets and positions, or from an Analyzer re-parsing the stored content.
See also: TokenStreamFromTermVector
Inheritance
System.Object
TokenSources
Assembly: Lucene.Net.Highlighter.dll
Syntax
public class TokenSources : object
Methods
GetAnyTokenStream(IndexReader, Int32, String, Analyzer)
A convenience method that tries a number of approaches to getting a token stream. The cost of discovering that there are no term vectors in the index is minimal (1000 invocations still register 0 ms), so this "lazy" (flexible?) approach to coding is probably acceptable.
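A minimal usage sketch under assumptions not stated on this page (Lucene.Net 4.8 APIs, a hypothetical field name "body", and a reader, query, and docId supplied by the caller, e.g. from an IndexSearcher search):

using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Search;
using Lucene.Net.Search.Highlight;
using Lucene.Net.Util;

public static class GetAnyTokenStreamExample
{
    // Highlights the best fragments of one hit. The reader, query and docId
    // are assumed to come from the caller's own search code.
    public static string[] HighlightHit(IndexReader reader, Query query, int docId)
    {
        const string field = "body"; // hypothetical field name
        Analyzer analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
        Highlighter highlighter = new Highlighter(new QueryScorer(query));

        // TokenSources decides whether the stream comes from stored term
        // vectors or from re-analyzing the stored field content.
        TokenStream tokenStream = TokenSources.GetAnyTokenStream(reader, docId, field, analyzer);
        string storedText = reader.Document(docId).Get(field);
        return highlighter.GetBestFragments(tokenStream, storedText, 3);
    }
}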
GetAnyTokenStream(IndexReader, Int32, String, Document, Analyzer)
A convenience method that tries first to get a TermPositionVector for the specified docId and, failing that, falls back to using the passed-in Document to retrieve the TokenStream. This is useful when you already have the document, but would prefer to use the vector first.
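A sketch of the case where the Document has already been loaded (hedged: the field name "body" and this calling pattern are illustrative, not part of the API contract):

using Lucene.Net.Analysis;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Search.Highlight;

public static class AlreadyLoadedDocExample
{
    public static TokenStream StreamForLoadedDoc(
        IndexReader reader, int docId, Document alreadyLoadedDoc, Analyzer analyzer)
    {
        // The term vector for docId is tried first; if it is absent, the
        // already-loaded Document supplies the stored text for re-analysis,
        // so the stored fields do not have to be fetched a second time.
        return TokenSources.GetAnyTokenStream(reader, docId, "body", alreadyLoadedDoc, analyzer);
    }
}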
GetTokenStream(Document, String, Analyzer)
GetTokenStream(IndexReader, Int32, String, Analyzer)
GetTokenStream(String, String, Analyzer)
GetTokenStream(Terms)
GetTokenStream(Terms, Boolean)
Low level api. Returns a token stream generated from a Terms. This can be used to feed the highlighter with a pre-parsed token stream. The Terms must have offsets available.
In my tests the times to recreate 1000 token streams using this method were:
- with TermVector offset data only stored: 420 milliseconds
- with TermVector offset AND position data stored: 271 milliseconds
  (nb: timings for TermVector with position data are based on a tokenizer producing contiguous positions - no overlaps or gaps)
The cost of not using TermPositionVector to store pre-parsed content, and instead using an analyzer to re-parse the original content:
- re-analyzing the original content: 980 milliseconds
The re-analysis timings will typically vary depending on:
- the complexity of the analyzer code (the timings above used a stemmer/lowercaser/stopword combination)
- the number of other fields (Lucene reads ALL fields off the disk when accessing just one document field, which can be costly)
- the use of compression on field storage, which could be faster due to compression (less disk IO) or slower (more CPU burn) depending on the content
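A low-level sketch (hedged: it assumes the field was indexed with term vectors that include offsets; the field name "body" is illustrative):

using Lucene.Net.Analysis;
using Lucene.Net.Index;
using Lucene.Net.Search.Highlight;

public static class TermVectorStreamExample
{
    public static TokenStream FromTermVector(IndexReader reader, int docId)
    {
        // Read the pre-parsed term vector for one document's field.
        Terms vector = reader.GetTermVector(docId, "body"); // hypothetical field
        if (vector == null)
        {
            return null; // no term vector stored for this document/field
        }
        // false = token positions are not guaranteed to be contiguous.
        return TokenSources.GetTokenStream(vector, false);
    }
}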
GetTokenStreamWithOffsets(IndexReader, Int32, String)
Returns a TokenStream with positions and offsets constructed from field term vectors. If the field has no term vectors, or offsets are not included in the term vector, returns null. See GetTokenStream(Terms) for an explanation of what happens when positions aren't present.
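A sketch of a common fallback pattern around this method (hedged: the fallback to the analyzer-based overload and the field name are illustrative choices, not prescribed by the API):

using Lucene.Net.Analysis;
using Lucene.Net.Index;
using Lucene.Net.Search.Highlight;

public static class OffsetsOrReanalyzeExample
{
    public static TokenStream StreamWithOffsets(IndexReader reader, int docId, Analyzer analyzer)
    {
        const string field = "body"; // hypothetical field name

        // Prefer the pre-parsed term vector; null means no vector or no offsets.
        TokenStream ts = TokenSources.GetTokenStreamWithOffsets(reader, docId, field);
        if (ts != null)
        {
            return ts;
        }
        // Fall back to re-analyzing the stored content of the document.
        return TokenSources.GetTokenStream(reader.Document(docId), field, analyzer);
    }
}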
Extension Methods