qruise-kb¶
qruise.kb.create_session(profile=None, profiles=None, logger=LOGGER.info, check=False, connect_elasticsearch=True, configuration_manager=None, **kwargs)
¶
Create a session to a QruiseOS knowledge database.
This function initializes a session to interact with the QruiseOS knowledge database. It supports various configurations through profiles and a configuration manager. The session can optionally connect to Elasticsearch for enhanced search capabilities.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
profile | str | The name of the profile to use. By default, the default profile from the profiles file is used. | None |
profiles | str | The path to the profiles file. By default, the default profiles file is used. | None |
logger | callable | The logger to use. By default, it uses `LOGGER.info`. | `LOGGER.info` |
check | bool | Whether to check the connection. | False |
connect_elasticsearch | bool | Whether to connect to Elasticsearch. | True |
configuration_manager | QruiseConfigurationManager | The configuration manager to use. Cannot be specified together with `profile` or `profiles`. | None |
**kwargs | Any | Additional keyword arguments. | {} |
Returns:
Type | Description |
---|---|
Session | The session to the QruiseOS knowledge database. |
Raises:
Type | Description |
---|---|
ValueError | If both a profile (`profile`/`profiles`) and `configuration_manager` are specified. |
Examples:
Create a session using the default profile:
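A minimal sketch:
>>> from qruise.kb import create_session
>>> session = create_session()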
Create a session with a specific profile:
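A sketch; the profile name is illustrative:
>>> session = create_session(profile="lab")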
Create a session with a custom profiles file:
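A sketch; the path and file format are illustrative:
>>> session = create_session(profiles="/path/to/profiles.yaml")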
Create a session with a custom logger:
>>> import logging
>>> custom_logger = logging.getLogger('custom_logger')
>>> session = create_session(logger=custom_logger.info)
Create a session without connecting to Elasticsearch:
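A minimal sketch:
>>> session = create_session(connect_elasticsearch=False)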
Create a session with a configuration manager:
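A sketch, assuming config_mgr is an already constructed QruiseConfigurationManager instance:
>>> session = create_session(configuration_manager=config_mgr)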
qruise.kb.Session
¶
Session class for interacting with QruiseOS knowledge base.
This class provides methods to interact with the QruiseOS knowledge base, allowing for the loading, saving, and querying of documents. It supports various configurations and can optionally connect to a search client for enhanced search capabilities.
branch
property
writable
¶
changed_documents
property
¶
Get the documents that have been modified in the current session.
Returns:
Type | Description |
---|---|
Dict[str, DocumentTemplate] | A dictionary mapping changed document IDs to DocumentTemplate objects. |
Examples:
Access the changed_documents property to get the modified documents:
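A minimal sketch:
>>> changed = session.changed_documents
>>> list(changed.keys())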
client
property
¶
database
property
¶
default_load_type = default_load_type
instance-attribute
¶
The default document type to load if no type is specified.
organization
property
¶
ref
property
writable
¶
Get or set the current commit reference used for reads.
This property allows you to retrieve the current commit reference that the client is using for reads. It also allows you to set the commit reference, enabling time travel to a specific commit.
Examples:
For time travel, inspect the commit history, pick an earlier commit to go back to (say, 5 commits back), and later reset to the latest commit on the current branch, as sketched below:
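A hedged sketch; the commit-history call, the dictionary key, and the reset step are assumptions about the underlying client, not documented here:
>>> history = session.client.get_commit_history()   # assumption: commit log via the underlying TerminusDB client
>>> session.ref = history[5]["commit"]               # go back 5 commits (key name is an assumption)
>>> session.ref = None                               # assumption: clearing the reference resets reads to the branch head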
Returns:
Type | Description |
---|---|
str | The current commit reference used for reads. |
schema
property
¶
The schema of the database.
This property retrieves the current schema of the database. If the schema is not already loaded, it will be loaded from the database.
Returns:
Type | Description |
---|---|
Schema | The current schema of the database. |
Examples:
Access the schema property to get the current schema, and check whether a specific document type exists in it:
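A minimal sketch ('Qubit' is an illustrative type name):
>>> schema = session.schema
>>> "Qubit" in session.types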
types
property
¶
Get the types defined in the current schema of the database.
Returns:
Type | Description |
---|---|
Mapping[str, DocumentTemplate] | A mapping of type names to DocumentTemplate classes. |
Examples:
Access the types property to get the available document types:
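A minimal sketch:
>>> list(session.types)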
__init__(client, default_load_type='Entity', store=None, blob_store=None, search_client=None)
¶
Create a new Qruise knowledge base session.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
client | Client | The client to use for interacting with the database. | required |
default_load_type | Optional[str] | The default document type to load if no type is specified. | 'Entity' |
store | Optional[QruiseStore] | The object store to use for storing documents. | None |
blob_store | Optional[BlobStore] | The blob store to use for storing blobs. | None |
search_client | Optional[QruiseSearchClient] | The search client to use for searching documents. | None |
See Also
create_session : A helper function to create a Session instance from a profile.
delete_document(document, graph_type=GraphType.INSTANCE, commit_msg=None, last_data_version=None)
¶
Delete a document from the database.
This method deletes a document or multiple documents from the database. It supports specifying the graph type, commit message, and the last data version.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
document | Union[str, list, dict, Iterable] | The document(s) to delete: a single document ID, a list of document IDs, a dictionary representing a document, or an iterable of such items. | required |
graph_type | GraphType | The type of graph from which to delete the document(s). | INSTANCE |
commit_msg | Optional[str] | The commit message describing the deletion. | None |
last_data_version | Optional[str] | The last data version for concurrency control. | None |
Examples:
Delete a single document by its ID:
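For instance (document ID and commit message are illustrative):
>>> session.delete_document(document="doc:Qubit/Q1", commit_msg="Deleted Qubit Q1")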
Delete multiple documents by their IDs:
>>> session.delete_document(document=["doc:Qubit/Q1", "doc:Qubit/Q2"], commit_msg="Deleted multiple Qubits")
Delete a document represented as a dictionary:
>>> document_dict = {"id": "doc:Qubit/Q1", "name": "Q1"}
>>> session.delete_document(document=document_dict, commit_msg="Deleted Qubit Q1")
Delete documents using an iterable of document IDs:
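A sketch using a generator of document IDs (IDs are illustrative):
>>> ids = (f"doc:Qubit/Q{i}" for i in range(1, 4))
>>> session.delete_document(document=ids, commit_msg="Deleted Qubits Q1-Q3")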
load(doc_type=None, skip=0, limit=None)
¶
Load all documents of a given type from the database.
This method retrieves documents of the specified type from the database. If the schema is not already loaded, it will be loaded from the database, and the client will be locked to a specific commit to ensure consistent reads.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
doc_type | Union[str, List[str], None] | The type of document to load. If not specified, the default document type will be used. | None |
skip | int | The number of documents to skip. | 0 |
limit | Optional[int] | The maximum number of documents to load. If None, all documents will be loaded. | None |
Returns:
Type | Description |
---|---|
List[DocumentTemplate] | A list of documents of the given type. |
Examples:
When a session is first created, nothing is loaded into memory. For example, to read parameters from Qubit documents, load the 'Qubit' type, list the available qubits, and inspect the values each one holds, as sketched below:
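A sketch; the 'Qubit' type and the attribute names are illustrative:
>>> qubits = session.load("Qubit")
>>> [q.name for q in qubits]   # list the available qubits (attribute name is an assumption)
>>> vars(qubits[0])            # inspect the values the first qubit holds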
Load documents of multiple types:
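A sketch (type names are illustrative):
>>> docs = session.load(doc_type=["Qubit", "Resonator"])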
Skip the first 10 documents and load the next 5:
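A sketch (the 'Qubit' type is illustrative):
>>> docs = session.load("Qubit", skip=10, limit=5)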
See Also
load_documents : for additional query and ordering possibilities
load_document : for loading a specific document from its identifier
load_document(id)
¶
Load a document by its IRI ID from the database.
This method retrieves a document from the database using its IRI ID. If the document is found, it is imported into the session's schema and returned. If no document is found, None is returned.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | Optional[str] | The IRI ID of the document to load. If None, the method returns None. | required |
Returns:
Type | Description |
---|---|
Optional[DocumentTemplate] | The loaded document as a DocumentTemplate object, or None if no document was found. |
Examples:
Load a document by its IRI ID:
>>> document = session.load_document("doc:Qubit/Q1")
>>> if document:
...     print(document.id, document.name)
Attempt to load a document with a non-existent ID:
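A minimal sketch (the ID is illustrative):
>>> session.load_document("doc:Qubit/DoesNotExist") is None
True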
load_documents(doc_type=None, skip=0, limit=10, where=None, order_by=None)
¶
Load documents of a specified type from the database and add them to the current session.
This method retrieves documents of the specified type from the database, applying optional filters and sorting. It supports pagination through the `skip` and `limit` parameters.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
doc_type | Union[str, type] | The type of document to load. | None |
skip | int | The number of documents to skip. | 0 |
limit | int | The number of documents to load. | 10 |
where | Sequence[Tuple[str, str, Any]] | The where clause as triples of (property name, operator, value). Supported operators: 'eq' (equal), 'neq' (not equal). Default is None (no where clause). | None |
order_by | Sequence[Tuple[str, str]] | The sort order as pairs of (property name, sort order), where sort order is 'asc' or 'desc'. Default is None (unsorted). | None |
Examples:
Load qubit with name 'Q3':
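A sketch using a where clause (the 'name' property is illustrative):
>>> q3 = session.load_documents(doc_type="Qubit", where=(("name", "eq", "Q3"),))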
Load last 5 experiments of type AmplitudeRabi sorted by timestamp descending, skipping the last experiment:
>>> last_experiments = session.load_documents(doc_type="AmplitudeRabi",
...                                           order_by=(("timestamp", "desc"),),
...                                           limit=5,
...                                           skip=1)
Returns:
Type | Description |
---|---|
List[DocumentTemplate] | A list of documents of the given type found in the database. |
See Also
load : for a simpler loading function
load_document : for loading a specific document from its identifier
load_schema()
¶
Load the schema from the database and pin the client to the specific commit to ensure consistent reads.
Notes
- This method ensures that the schema is loaded only once.
- It locks the client to a specific commit to provide consistent reads.
- Logs or prints the commit reference and branch information based on the environment.
Examples:
Load the schema and lock the client to a specific commit (the pinned commit reference is logged, e.g. "Locked to commit: ..."):
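A minimal sketch; the exact wording of the logged message is illustrative:
>>> session.load_schema()   # logs the pinned commit, e.g. "Locked to commit: <ref>"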
save(commit_msg, post_commit_hook=None)
¶
Save all changes to the database and reset the client reference.
This method commits all changes made in the session to the database with the provided commit message. Optionally, a post-commit hook can be executed after the commit to perform additional actions such as updating a search index.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
commit_msg | str | The commit message describing the changes. | required |
post_commit_hook | Optional[PostCommitHook] | A hook function to be called after the commit. | None |
Examples:
Save changes with a commit message:
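For instance (the commit message is illustrative):
>>> session.save(commit_msg="Updated qubit parameters")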
Save changes with a post-commit hook:
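A hedged sketch; the hook's call signature is an assumption, so it accepts anything here (consult PostCommitHook for the exact interface):
>>> def update_search_index(*args, **kwargs):   # hypothetical hook
...     pass
>>> session.save(commit_msg="Updated qubit parameters", post_commit_hook=update_search_index)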
save_schema(commit_msg, full_replace=None)
¶
Save the schema to the database and set the session reference to the new commit.
This method commits the current schema to the database with the provided commit message. It updates the session's reference to the new commit created by this operation. Optionally, the schema can be fully replaced.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
commit_msg | str | The commit message describing the changes to the schema. | required |
full_replace | Optional[bool] | Whether to fully replace the schema. If None, the schema is not fully replaced. | None |
Examples:
Save the schema with a commit message:
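For instance (the commit message is illustrative):
>>> session.save_schema(commit_msg="Add Resonator document type")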
Save the schema with a full replace:
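A sketch:
>>> session.save_schema(commit_msg="Rebuild schema from scratch", full_replace=True)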
qruise.kb.session.Schema
¶
changed_documents
property
¶
Get the documents that have been changed in the current schema.
This property retrieves a list of documents that have been modified in the current schema. It iterates over all types in the schema and collects instances that have been marked as changed.
Returns:
Type | Description |
---|---|
list | A list of changed document instances. |
Examples:
Access the changed_documents property to get the modified documents:
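A minimal sketch, using the schema attached to an existing session:
>>> schema = session.schema
>>> schema.changed_documents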
context
property
writable
¶
Get the context of the schema.
This property constructs and returns the context dictionary for the schema, including the title, description, authors, schema reference, and base reference.
Returns:
Type | Description |
---|---|
dict | A dictionary representing the context of the schema. |
Examples:
Access the context property to get the schema context:
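A minimal sketch:
>>> session.schema.context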
ref
property
¶
Get the reference for the current data version.
This property converts the current data version to a reference format using the data_version_to_ref utility function.
Returns:
Type | Description |
---|---|
str | The reference string corresponding to the current data version. |
Examples:
Access the ref property to get the reference for the current data version:
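A minimal sketch:
>>> session.schema.ref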
__call__(title=None, description=None, authors=None, is_library=DEFAULT_IS_LIBRARY)
¶
Create a derived schema from the current schema.
__init__(title=None, description=None, authors=None, use_weak_ref=False, extends=None, is_library=DEFAULT_IS_LIBRARY)
¶
Initialize a Schema object.
This constructor initializes a Schema object with optional title, description, authors, and other parameters. It can also extend an existing schema.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
title | str | The title of the schema. | None |
description | str | The description of the schema. | None |
authors | List[str] | A list of authors of the schema. | None |
use_weak_ref | bool | Whether to use weak references for objects. | False |
extends | Schema | An existing schema to extend. | None |
is_library | bool | Indicates whether the schema is a library schema that does not extend the current schema context. Must be set to True if the schema is an abstract base schema. | DEFAULT_IS_LIBRARY |
Examples:
Create a basic schema:
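For instance (title, description, and authors are illustrative):
>>> from qruise.kb.session import Schema
>>> schema = Schema(title="My Schema", description="Device knowledge", authors=["Alice"])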
Create a schema that extends another schema:
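A sketch extending an existing schema:
>>> base = Schema(title="Base Schema")
>>> derived = Schema(title="Derived Schema", extends=base)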
add_enum_class(class_name, class_values)
¶
Construct a TerminusDB Enum class by providing class name and member values, then add it to the schema.
This method creates an Enum class with the specified name and member values, and adds it to the schema.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
class_name | str | The name of the Enum class to be constructed. | required |
class_values | list | A list of values to be included in the Enum class. | required |
Returns:
Type | Description |
---|---|
EnumMetaTemplate | An Enum object with the specified name and members. |
Examples:
Create an Enum class with the name 'Color' and values 'Red', 'Green', and 'Blue':
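A minimal sketch:
>>> schema = Schema(title="My Schema")
>>> color = schema.add_enum_class("Color", ["Red", "Green", "Blue"])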
commit(client, commit_msg=None, full_replace=None)
¶
Commit the schema to the database.
This method commits the current schema to the database using the provided client.
It updates the schema context with the appropriate prefixes if they are not already set.
Depending on the full_replace flag, it either fully replaces the old schema graph or updates it.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
client | Client | A client that is connected to a database. | required |
commit_msg | str | Commit message; by default "Schema object insert/update by Python client." | None |
full_replace | Optional[bool] | If True, the commit fully wipes out the old schema graph; by default False. | None |
Examples:
Commit the schema with a custom commit message:
>>> schema = Schema(title="My Schema")
>>> client = Client()
>>> schema.commit(client, commit_msg="Initial schema commit")
Commit the schema with full replacement:
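A sketch, continuing from the schema and client above:
>>> schema.commit(client, commit_msg="Replace schema", full_replace=True)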
flush()
¶
Mark all changed documents clean.
This method iterates over all documents that have been marked as changed in the current schema and marks them as clean, indicating that they have been saved or committed.
Examples:
Mark all changed documents as clean:
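A minimal sketch, given a schema instance:
>>> schema.flush()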
from_db(client, select=None)
¶
Load classes from the database schema into the current schema.
This method retrieves all existing classes from the database schema using the provided client. It then updates the current schema with these classes. Optionally, a subset of classes can be selected for import.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
client | Client | A client that is connected to the database. | required |
select | list of str | A list of class names to be imported. If None, all classes will be imported. | None |
Returns:
Type | Description |
---|---|
List[Dict[str, Any]] | A list of dictionaries representing all existing classes in the database schema. |
Examples:
Load all classes from the database schema:
>>> schema = Schema(title="My Schema")
>>> client = Client()
>>> all_classes = schema.from_db(client)
>>> print(all_classes)
Load specific classes from the database schema:
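A sketch, continuing from the schema and client above (class names are illustrative):
>>> selected_classes = schema.from_db(client, select=["Qubit", "Resonator"])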
from_dict(schema_documents, select=None)
¶
Load schema classes from a dictionary.
This method processes a dictionary of schema documents and constructs the corresponding classes. Optionally, a subset of classes can be selected for import.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
schema_documents | dict | A dictionary containing schema documents. | required |
select | list of str | A list of class names to be imported. If None, all classes will be imported. | None |
Examples:
Load all classes from a dictionary of schema documents:
>>> schema = Schema(title="My Schema")
>>> schema_documents = {
...     "@context": {"@type": "@context", "@documentation": {"@title": "My Schema"}},
...     "Class1": {"@id": "Class1", "@type": "Class"},
...     "Class2": {"@id": "Class2", "@type": "Class"},
... }
>>> schema.from_dict(schema_documents)
Load specific classes from a dictionary of schema documents:
>>> schema = Schema(title="My Schema")
>>> schema_documents = {
...     "@context": {"@type": "@context", "@documentation": {"@title": "My Schema"}},
...     "Class1": {"@id": "Class1", "@type": "Class"},
...     "Class2": {"@id": "Class2", "@type": "Class"},
... }
>>> schema.from_dict(schema_documents, select=["Class1"])
from_json_schema(name, json_schema, pipe=False, subdocument=False)
¶
Load a class object from a JSON schema (http://json-schema.org/) and, if pipe mode is off, add it to the schema. All referenced objects will be treated as subdocuments.
This method processes a JSON schema and constructs the corresponding class object. If pipe mode is enabled, it returns the schema in TerminusDB dictionary format without loading it into the schema object. If subdocument is set to True, the class object will be added as a subdocument class.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
name | str | Name of the class object. | required |
json_schema | Union[dict, str, StringIO] | JSON Schema in dictionary or JSON-able string format, or a JSON file stream. | required |
pipe | bool | Pipe mode; if True, returns the schema in TerminusDB dictionary format without loading it into the schema object. | False |
subdocument | bool | If not in pipe mode, the class object will be added as a subdocument class. | False |
Examples:
Load a class object from a JSON schema string:
>>> schema = Schema(title="My Schema")
>>> json_schema = '{"properties": {"name": {"type": "string"}}}'
>>> schema.from_json_schema(name="MyClass", json_schema=json_schema)
Load a class object from a JSON schema dictionary:
>>> schema = Schema(title="My Schema")
>>> json_schema = {"properties": {"name": {"type": "string"}}}
>>> schema.from_json_schema(name="MyClass", json_schema=json_schema)
Load a class object from a JSON schema file:
>>> schema = Schema(title="My Schema")
>>> with open("schema.json", "r") as file:
...     schema.from_json_schema(name="MyClass", json_schema=file)
Returns:
Type | Description |
---|---|
dict or None | If pipe mode is enabled, the schema of the class object in TerminusDB dictionary format; otherwise None. |
import_objects(obj_dict, blob_store=None, is_load=False)
¶
Import a list of documents in JSON format to Python objects.
This method imports documents from a JSON format into Python objects. The schema of these documents must be present in the current schema. Optionally, a blob store can be used for handling binary data, and the import can be marked as a load operation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
obj_dict | Union[List[dict], dict] | A list or dictionary of documents in JSON format to be imported. | required |
blob_store | optional | An optional blob store for handling binary data. | None |
is_load | bool | A flag indicating whether the import is a load operation. | False |
Returns:
Type | Description |
---|---|
Any | The result of the document import operation. |
Examples:
Import a list of documents:
>>> schema = Schema(title="My Schema")
>>> documents = [{"@type": "Class1", "name": "Document1"}, {"@type": "Class2", "name": "Document2"}]
>>> result = schema.import_objects(documents)
>>> print(result)
Import a single document with a blob store:
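A sketch, assuming blob_store is an already constructed blob store instance:
>>> document = {"@type": "Class1", "name": "Document3"}
>>> result = schema.import_objects(document, blob_store=blob_store)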
prepare_migration(old_schema, module='schema', filepath=None, dir=None, replace=False)
¶
Prepare to migrate the schema to the database.
This method prepares the schema migration by loading the old schema and updating it with the new schema defined in the specified module. It can optionally replace the old schema entirely.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
old_schema | List[Dict] | A list of dictionaries representing the old schema. | required |
module | str | The module name to load. | 'schema' |
filepath | str | The file path to the module. If not provided, the module will be loaded from the current directory. | None |
dir | str | The directory containing the module. If not provided, the current working directory will be used. | None |
replace | bool | If True, the old schema will be replaced entirely. | False |
Returns:
Type | Description |
---|---|
SchemaMigrator | An instance of SchemaMigrator that handles the migration process. |
Examples:
Prepare to migrate the schema with the default module:
>>> schema = Schema(title="My Schema")
>>> old_schema = [{"@id": "Class1", "@type": "Class"}]
>>> migrator = schema.prepare_migration(old_schema)
Prepare to migrate the schema with a custom module and directory:
>>> schema = Schema(title="My Schema")
>>> old_schema = [{"@id": "Class1", "@type": "Class"}]
>>> migrator = schema.prepare_migration(old_schema, module="custom_schema", dir="/path/to/directory")
Prepare to migrate the schema and replace the old schema entirely:
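A sketch, continuing from the old_schema above:
>>> migrator = schema.prepare_migration(old_schema, replace=True)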
to_dict()
¶
Return the schema in the TerminusDB dictionary format.
This method converts the schema into a list of dictionaries representing the classes, prepended with the context information. The classes are sorted by their names (@id).
Returns:
Type | Description |
---|---|
List[Dict[str, Any]] | A list of dictionaries representing the classes, including the context information. |
Examples:
Convert the schema to a dictionary format:
>>> schema = Schema(title="My Schema")
>>> schema_dict = schema.to_dict()
>>> print(schema_dict)
[
{
'@type': '@context',
'@documentation': {
'@title': 'My Schema',
'@description': '',
'@authors': None
},
'@schema': None,
'@base': None
},
{
'@id': 'Class1',
'@type': 'Class',
'property1': 'xsd:string'
},
{
'@id': 'Class2',
'@type': 'Class',
'property2': 'xsd:integer'
}
]
to_json_schema(class_object)
¶
Return the schema in the JSON schema (http://json-schema.org/) format as a dictionary for the class object.
This method converts a class object from the schema into a JSON schema format. The class object can be specified either by its name or as a dictionary representation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
class_object | Union[str, dict] | Name of the class object, or the class object represented as a dictionary. | required |
Returns:
Type | Description |
---|---|
dict | A dictionary representing the class object in JSON schema format. |
Examples:
Convert a class object to JSON schema format by name:
>>> schema = Schema(title="My Schema")
>>> json_schema = schema.to_json_schema("MyClass")
>>> print(json_schema)
Convert a class object to JSON schema format by dictionary:
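A sketch (the class dictionary is illustrative):
>>> class_dict = {"@id": "MyClass", "@type": "Class", "name": "xsd:string"}
>>> json_schema = schema.to_json_schema(class_dict)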
update_from(module='schema', filepath=None, dir=None)
¶
Update schema from a module. Adds methods and adds/updates properties of schema classes.
This method loads a Python module and updates the schema with methods and properties defined in the module. It can load the module from a specified file path or directory.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
module | str | The module name to load. | 'schema' |
filepath | str | The file path to the module. If not provided, the module will be loaded from the current directory. | None |
dir | str | The directory containing the module. If not provided, the current working directory will be used. | None |
Returns:
Type | Description |
---|---|
ModuleType | The loaded module instance. |
Examples:
Update the schema from a module named "schema" in the current directory:
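A minimal sketch, loading the module named "schema" from the current directory:
>>> schema = Schema(title="My Schema")
>>> module = schema.update_from()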
Update the schema from a module named "custom_schema" in a specific directory:
>>> schema = Schema(title="My Schema")
>>> module = schema.update_from(module="custom_schema", dir="/path/to/directory")
Update the schema from a specific file path:
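A sketch (the module name and path are illustrative):
>>> module = schema.update_from(module="custom_schema", filepath="/path/to/custom_schema.py")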