Contains functions and classes related to fields.
Represents the collection of fields in an index. Maps field names to FieldType objects which define the behavior of each field.
Low-level parts of the index use field numbers instead of field names for compactness. This class has several methods for converting between the field name, field number, and field object itself.
All keyword arguments to the constructor are treated as fieldname = fieldtype pairs. The fieldtype can be an instantiated FieldType object, or a FieldType sub-class (in which case the Schema will instantiate it with the default constructor before adding it).
For example:
s = Schema(content = TEXT,
           title = TEXT(stored = True),
           tags = KEYWORD(stored = True))
Adds a field to this schema.
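As a minimal sketch, assuming add() takes the field name followed by a FieldType instance or subclass (as with the constructor):
from whoosh.fields import Schema, TEXT, KEYWORD

schema = Schema(content=TEXT)
# As with the constructor, the field type may be an instance or a
# FieldType subclass, which will be instantiated with its defaults.
schema.add("title", TEXT(stored=True))
schema.add("tags", KEYWORD)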
Returns a shallow copy of the schema. The field instances are not deep copied, so they are shared between schema copies.
Returns True if any of the fields in this schema store term vectors.
Returns a list of (“fieldname”, field_object) pairs for the fields in this schema.
Returns a list of the names of the fields in this schema.
Parameters: check_names – (optional) sequence of field names to check whether the schema accepts them as (dynamic) field names; acceptable names will also be included in the result list. Note: you may also include static field names in check_names; these will not create duplicates in the result list. Unsupported names will not appear in the result list.
Returns a list of the names of fields that store field lengths.
Returns a list of the names of fields that require special handling for generating spelling graphs, either because they store graphs but aren't indexed, or because the analyzer performs morphological transformations (such as stemming).
Returns a list of the names of fields that are stored.
Returns a list of the names of fields that store vectors.
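To illustrate these introspection methods, a short sketch (the printed values are indicative only):
from whoosh.fields import Schema, TEXT, ID, KEYWORD

schema = Schema(path=ID(stored=True),
                title=TEXT(stored=True),
                body=TEXT,
                tags=KEYWORD(stored=True))

print(schema.names())           # e.g. ['body', 'path', 'tags', 'title']
print(schema.stored_names())    # fields whose values are stored
print(schema.scorable_names())  # fields that record field lengths
for name, field in schema.items():
    print(name, type(field).__name__)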
Allows you to define a schema using declarative syntax, similar to Django models:
class MySchema(SchemaClass):
    path = ID
    date = DATETIME
    content = TEXT
You can use inheritance to share common fields between schemas:
class Parent(SchemaClass):
    path = ID(stored=True)
    date = DATETIME

class Child1(Parent):
    content = TEXT(positions=False)

class Child2(Parent):
    tags = KEYWORD
This class overrides __new__ so instantiating your sub-class always results in an instance of Schema.
>>> class MySchema(SchemaClass):
...     title = TEXT(stored=True)
...     content = TEXT
...
>>> s = MySchema()
>>> type(s)
<class 'whoosh.fields.Schema'>
All keyword arguments to the constructor are treated as fieldname = fieldtype pairs. The fieldtype can be an instantiated FieldType object, or a FieldType sub-class (in which case the Schema will instantiate it with the default constructor before adding it).
For example:
s = Schema(content = TEXT,
           title = TEXT(stored = True),
           tags = KEYWORD(stored = True))
Represents a field configuration.
The FieldType object supports attributes such as format, vector, scorable, and stored, which configure how content in the field is indexed and stored.
The constructor for the base field type simply lets you supply your own configured field format, vector format, and scorable and stored values. Subclasses may configure some or all of this for you.
Clears any cached information in the field and any child objects.
Returns True if this field by default performs morphological transformations on its terms, e.g. stemming.
Returns an iterator of (btext, frequency, weight, encoded_value) tuples for each unique word in the input value.
The default implementation uses the analyzer attribute to tokenize the value into strings, then encodes them into bytes using UTF-8.
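As a rough sketch of calling index() directly (the exact byte values depend on the field's format):
from whoosh.fields import TEXT

field = TEXT()
# Each item is a (btext, frequency, weight, encoded_value) tuple, where
# btext is the UTF-8 encoded term text.
for btext, freq, weight, value in field.index(u"hello hello world"):
    print(btext, freq, weight)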
When self_parsing() returns True, the query parser will call this method to parse basic query text.
When self_parsing() returns True, the query parser will call this method to parse range query text. If this method returns None instead of a query object, the parser will fall back to parsing the start and end terms using process_text().
Analyzes the given string and returns an iterator of token texts.
>>> field = fields.TEXT()
>>> list(field.process_text("The ides of March"))
["ides", "march"]
Subclasses should override this method to return True if they want the query parser to call the field’s parse_query() method instead of running the analyzer on text in this field. This is useful where the field needs full control over how queries are interpreted, such as in the numeric field type.
Returns True if this field requires special handling of the words that go into the field’s word graph.
The default behavior is to return True if the field is “spelled” but not indexed, or if the field is indexed but the analyzer has morphological transformations (e.g. stemming). Exotic field types may need to override this behavior.
This method should return False if the field does not support spelling (i.e. the spelling attribute is False).
Returns an iterator of the “sortable” tokens in the given reader and field. These values can be used for sorting. The default implementation simply returns all tokens in the field.
This can be overridden by field types such as NUMERIC where some values in a field are not useful for sorting.
Returns an iterator of each unique word (in sorted order) in the input value, suitable for inclusion in the field’s word graph.
The default behavior is to call the field analyzer with the keyword argument no_morph=True, which should make the analyzer skip any morphological transformation filters (e.g. stemming) to preserve the original form of the words. Exotic field types may need to override this behavior.
Returns True if the underlying format supports the given posting value type.
>>> field = TEXT()
>>> field.supports("positions")
True
>>> field.supports("characters")
False
Returns a bytes representation of the given value, appropriate to be written to disk. The default implementation assumes a unicode value and encodes it using UTF-8.
Returns an object suitable to be inserted into the document values column for this field. The default implementation simply calls self.to_bytes(value).
Analyzes the given string and returns an iterator of Token objects (note: for performance reasons, actually the same token yielded over and over with different attributes).
Configured field type that indexes the entire value of the field as one token. This is useful for data you don’t want to tokenize, such as the path of a file.
Parameters: stored – Whether the value of this field is stored with the document.
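A brief sketch of typical use; the unique keyword argument shown here (useful with update_document()) is assumed in addition to stored:
from whoosh.fields import Schema, ID, TEXT

# The whole path is indexed as a single term, so exact lookups work but
# the value is never split into words.
schema = Schema(path=ID(stored=True, unique=True), body=TEXT)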
Configured field type for fields containing IDs separated by whitespace and/or punctuation (or anything else, using the expression param).
Configured field type for fields you want to store but not index.
Configured field type for fields containing space-separated or comma-separated keyword-like data (such as tags). The default is to not store positional information (so phrase searching is not allowed in this field) and to not make the field scorable.
Configured field type for text fields (for example, the body text of an article). The default is to store positional information to allow phrase searching. This field type is always scorable.
Special field type that lets you index integer or floating point numbers in relatively short fixed-width terms. The field converts numbers to sortable bytes for you before indexing.
You specify the numeric type of the field (int or float) when you create the NUMERIC object. The default is int. For int, you can specify a size in bits (32 or 64). For both int and float you can specify a signed keyword argument (default is True).
>>> schema = Schema(path=STORED, position=NUMERIC(int, 64, signed=False))
>>> ix = storage.create_index(schema)
>>> with ix.writer() as w:
...     w.add_document(path="/a", position=5820402204)
...
You can also use the NUMERIC field to store Decimal instances by specifying a type of int or long and the decimal_places keyword argument. This simply multiplies each number by (10 ** decimal_places) before storing it as an integer. Of course this may throw away decimal precision (by truncating, not rounding) and imposes the same maximum value limits as int/long, but these may be acceptable for certain applications.
>>> from decimal import Decimal
>>> schema = Schema(path=STORED, position=NUMERIC(int, decimal_places=4))
>>> ix = storage.create_index(schema)
>>> with ix.writer() as w:
...     w.add_document(path="/a", position=Decimal("123.45"))
...
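As a hedged sketch of searching such a field, a NumericRange query from whoosh.query matches a span of values (continuing the ix index from the example above):
>>> from whoosh.query import NumericRange
>>> with ix.searcher() as s:
...     # Inclusive range over the numeric field defined above
...     results = s.search(NumericRange("position", 1000, 2000))
...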
Special field type that lets you index datetime objects. The field converts the datetime objects to sortable text for you before indexing.
Since this field is based on Python’s datetime module it shares all the limitations of that module, such as the inability to represent dates before year 1 in the proleptic Gregorian calendar. However, since this field stores datetimes as an integer number of microseconds, it could easily represent a much wider range of dates if the Python datetime implementation ever supports them.
>>> schema = Schema(path=STORED, date=DATETIME)
>>> ix = storage.create_index(schema)
>>> w = ix.writer()
>>> w.add_document(path="/a", date=datetime.now())
>>> w.commit()
Special field type that lets you index boolean values (True and False). The field converts the boolean values to text for you before indexing.
>>> schema = Schema(path=STORED, done=BOOLEAN)
>>> ix = storage.create_index(schema)
>>> w = ix.writer()
>>> w.add_document(path="/a", done=False)
>>> w.commit()
Parameters: stored – Whether the value of this field is stored with the document.
Configured field that indexes text as N-grams. For example, with a field type NGRAM(3,4), the value “hello” will be indexed as tokens “hel”, “hell”, “ell”, “ello”, “llo”. This field type chops the entire text into N-grams, including whitespace and punctuation. See NGRAMWORDS for a field type that breaks the text into words first before chopping the words into N-grams.
Configured field that chops text into words using a tokenizer, lowercases the words, and then chops the words into N-grams.