Uses of TokenStream in org.apache.lucene.analysis.de

Classes derived from org.apache.lucene.analysis.TokenStream | |
GermanStemFilter | A filter that stems German words. |
Constructors with parameter type org.apache.lucene.analysis.TokenStream | |
Construct a token stream filtering the given input. | |
Builds a GermanStemFilter that uses an exclusion table. | |
Builds a GermanStemFilter that uses an exclusion table. |
Methods with return type org.apache.lucene.analysis.TokenStream | |
TokenStream | GermanAnalyzer.tokenStream(String fieldName, Reader reader) Creates a TokenStream which tokenizes all the text in the provided Reader. |
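As a sketch of how the entries above combine in practice (this is the Lucene 1.x-era API, in which TokenStream.next() returns Token objects; the field name "contents" is an arbitrary choice for illustration), GermanAnalyzer.tokenStream can be consumed like this:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.de.GermanAnalyzer;

public class GermanTokenDemo {
    // Collects the stemmed terms that GermanAnalyzer produces for a string.
    public static List<String> terms(String text) throws Exception {
        TokenStream stream = new GermanAnalyzer()
                .tokenStream("contents", new StringReader(text));
        List<String> out = new ArrayList<String>();
        // next() returns null when the stream is exhausted.
        for (Token t = stream.next(); t != null; t = stream.next()) {
            out.add(t.termText());
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // German stop words such as "und" are dropped by the analyzer.
        System.out.println(terms("Häuser und Gärten"));
    }
}
```

The exact stemmed forms depend on GermanStemmer's rules, so no particular output is assumed here beyond stop-word removal.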
Uses of TokenStream in org.apache.lucene.analysis.ru

Classes derived from org.apache.lucene.analysis.TokenStream | |
RussianLetterTokenizer | A RussianLetterTokenizer is a tokenizer that extends LetterTokenizer by additionally looking up letters in a given "russian charset". |
RussianLowerCaseFilter | Normalizes token text to lower case, analyzing a given ("russian") charset. |
RussianStemFilter | A filter that stems Russian words. |
Constructors with parameter type org.apache.lucene.analysis.TokenStream | |
Methods with return type org.apache.lucene.analysis.TokenStream | |
TokenStream | RussianAnalyzer.tokenStream(String fieldName, Reader reader) Creates a TokenStream which tokenizes all the text in the provided Reader. |
Uses of TokenStream in org.apache.lucene.analysis

Classes derived from org.apache.lucene.analysis.TokenStream | |
CharTokenizer | An abstract base class for simple, character-oriented tokenizers. |
LetterTokenizer | A LetterTokenizer is a tokenizer that divides text at non-letters. |
LowerCaseFilter | Normalizes token text to lower case. |
LowerCaseTokenizer | LowerCaseTokenizer performs the function of LetterTokenizer and LowerCaseFilter together. |
PorterStemFilter | Transforms the token stream as per the Porter stemming algorithm. |
StopFilter | Removes stop words from a token stream. |
TokenFilter | A TokenFilter is a TokenStream whose input is another token stream. |
Tokenizer | A Tokenizer is a TokenStream whose input is a Reader. |
WhitespaceTokenizer | A WhitespaceTokenizer is a tokenizer that divides text at whitespace. |
Constructors with parameter type org.apache.lucene.analysis.TokenStream | |
Construct a token stream filtering the given input. | |
Construct a token stream filtering the given input. | |
Constructs a filter which removes words from the input TokenStream that are named in the Hashtable. | |
Constructs a filter which removes words from the input TokenStream that are named in the Set. | |
Constructs a filter which removes words from the input TokenStream that are named in the array of words. | |
Construct a token stream filtering the given input. |
Fields of type org.apache.lucene.analysis.TokenStream | |
TokenStream | TokenFilter.input The source of tokens for this filter. |
Methods with return type org.apache.lucene.analysis.TokenStream | |
TokenStream | Analyzer.tokenStream(Reader reader) Creates a TokenStream which tokenizes all the text in the provided Reader. |
TokenStream | Analyzer.tokenStream(String fieldName, Reader reader) Creates a TokenStream which tokenizes all the text in the provided Reader. |
TokenStream | PerFieldAnalyzerWrapper.tokenStream(String fieldName, Reader reader) |
TokenStream | SimpleAnalyzer.tokenStream(String fieldName, Reader reader) Creates a TokenStream which tokenizes all the text in the provided Reader. |
TokenStream | StopAnalyzer.tokenStream(String fieldName, Reader reader) Filters LowerCaseTokenizer with StopFilter. |
TokenStream | WhitespaceAnalyzer.tokenStream(String fieldName, Reader reader) |
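The TokenFilter entries above compose by wrapping: a Tokenizer reads from a Reader at the bottom of the chain, and each filter takes another TokenStream as its constructor argument. A minimal hand-assembled sketch of the kind of chain StopAnalyzer is described as building (lower-casing followed by stop-word removal), here built from WhitespaceTokenizer, LowerCaseFilter, and the array-of-words StopFilter constructor:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

public class FilterChainDemo {
    public static List<String> analyze(String text, String[] stopWords) throws Exception {
        // The Tokenizer at the bottom of the chain reads from a Reader...
        TokenStream stream = new WhitespaceTokenizer(new StringReader(text));
        // ...and each TokenFilter wraps the stream below it.
        stream = new LowerCaseFilter(stream);
        stream = new StopFilter(stream, stopWords);
        List<String> out = new ArrayList<String>();
        for (Token t = stream.next(); t != null; t = stream.next()) {
            out.add(t.termText());
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Lower-casing runs before stop filtering, so "The" matches "the".
        System.out.println(analyze("The QUICK brown fox", new String[] { "the" }));
        // prints [quick, brown, fox]
    }
}
```

Order matters in the chain: placing LowerCaseFilter before StopFilter ensures case-insensitive stop-word matching against a lower-cased stop list.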
Uses of TokenStream in org.apache.lucene.analysis.standard

Classes derived from org.apache.lucene.analysis.TokenStream | |
StandardFilter | Normalizes tokens extracted with StandardTokenizer. |
StandardTokenizer | A grammar-based tokenizer constructed with JavaCC. |
Constructors with parameter type org.apache.lucene.analysis.TokenStream | |
Construct a StandardFilter filtering the given input. |
Methods with return type org.apache.lucene.analysis.TokenStream | |
TokenStream | StandardAnalyzer.tokenStream(String fieldName, Reader reader) |
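StandardAnalyzer.tokenStream chains StandardTokenizer with StandardFilter, lower-casing, and stop filtering internally. A hedged usage sketch (the field name "body" is an arbitrary choice; possessive 's is stripped by StandardFilter and text is lower-cased, but the exact token set for punctuation-heavy input is not assumed here):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class StandardAnalyzerDemo {
    // Returns the terms StandardAnalyzer produces for a string.
    public static List<String> terms(String text) throws Exception {
        TokenStream stream = new StandardAnalyzer()
                .tokenStream("body", new StringReader(text));
        List<String> out = new ArrayList<String>();
        for (Token t = stream.next(); t != null; t = stream.next()) {
            out.add(t.termText());
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        for (String term : terms("Lucene's grammar-based tokenizer")) {
            System.out.println(term);
        }
    }
}
```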