API Reference#

Structure#

This module provides functions for creating and manipulating atomic structures. It includes functions for creating crystals, general lattices, dislocations, grain boundaries, and reading structures from files. The module also provides functionality for adding interstitial impurities, substituting atoms, deleting atoms, and adding vacancies. The structures can be converted to RDF graphs using the atomrdf library. The main object in this module is the System class, which extends the functionality of the pyscal3.core.System class and provides additional methods for working with atomic structures.

class atomrdf.structure.System(filename=None, format='lammps-dump', compressed=False, customkeys=None, species=None, source=None, graph=None, names=False, warn_read_in=True)[source]#
add_gb(gb_dict)[source]#

Add GB details which will be annotated using PLDO

Parameters:

gb_dict (dict) – A dictionary containing details about the grain boundary. It should have the following keys:
  • “GBType” (str): The type of grain boundary. Possible values are “Twist”, “Tilt”, “Symmetric Tilt”, and “Mixed”.

  • “sigma” (int): The sigma value of the grain boundary.

  • “GBPlane” (str): The plane of the grain boundary.

  • “RotationAxis” (list): The rotation axis of the grain boundary.

  • “MisorientationAngle” (float): The misorientation angle of the grain boundary.

Return type:

None

Notes

This method adds grain boundary details to the structure and annotates it using PLDO ontology. The grain boundary type, sigma value, GB plane, rotation axis, and misorientation angle are stored as attributes of the grain boundary node in the graph.
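A minimal sketch, assuming struct is an existing System with an associated graph; the values are illustrative of a Σ5 symmetric tilt boundary:

>>> gb_dict = {"GBType": "Symmetric Tilt",
...            "sigma": 5,
...            "GBPlane": "(3 1 0)",
...            "RotationAxis": [0, 0, 1],
...            "MisorientationAngle": 36.87}
>>> struct.add_gb(gb_dict)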

add_interstitial_impurities(element, void_type='tetrahedral', lattice_constant=None, threshold=0.01)[source]#

Add interstitial impurities to the System

Parameters:
  • element (string or list) – Chemical symbol of the element(s) to be added. element = 'Al' will add one interstitial, while element = ['Al', 'Al'] or element = ['Al', 'Li'] will add two impurities.

  • void_type (string) – type of void to be added. {tetrahedral, octahedral}

  • lattice_constant (float, optional) – lattice constant of the system. Required only for octahedral voids

  • threshold (float, optional) – threshold for the distance from the lattice constant for octahedral voids to account for fluctuations in atomic positions

Returns:

system with the added impurities

Return type:

System

Notes

The validity of the void positions is not checked! This means that temperature, the presence of vacancies, or other interstitials could affect the addition.
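A minimal usage sketch, assuming struct is an existing System; the elements and lattice constant are illustrative:

>>> # one carbon atom in a tetrahedral void
>>> struct_c = struct.add_interstitial_impurities('C', void_type='tetrahedral')
>>> # two impurities in octahedral voids; lattice_constant is required here
>>> struct_al_li = struct.add_interstitial_impurities(['Al', 'Li'], void_type='octahedral', lattice_constant=2.87)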

add_vacancy(concentration, number=None)[source]#

Add Vacancy details which will be annotated by PODO

Parameters:
  • concentration (float) – vacancy concentration; the value should be between 0 and 1

  • number (int) – Number of atoms that were deleted, optional

Return type:

None
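A minimal sketch, assuming two atoms were already deleted from a 1000-atom System struct:

>>> struct.add_vacancy(0.002, number=2)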

delete(ids=None, indices=None, condition=None, selection=False)[source]#

Delete atoms from the structure.

Parameters:
  • ids (list, optional) – A list of atom IDs to delete. Default is None.

  • indices (list, optional) – A list of atom indices to delete. Default is None.

  • condition (str, optional) – A condition to select atoms to delete. Default is None.

  • selection (bool, optional) – If True, delete atoms based on the current selection. Default is False.

Return type:

None

Notes

Deletes atoms from the structure based on the provided IDs, indices, condition, or selection. If the structure has a graph associated with it, the graph will be updated accordingly.
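A minimal sketch, assuming struct is an existing System; the IDs are illustrative:

>>> struct.delete(ids=[1, 2, 3])
>>> struct.delete(selection=True)  # or remove the currently selected atoms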

substitute_atoms(substitution_element, ids=None, indices=None, condition=None, selection=False)[source]#

Substitute atoms in the structure with a given element.

Parameters:
  • substitution_element (str) – The element to substitute the atoms with.

  • ids (list, optional) – A list of atom IDs to consider for substitution. Defaults to None.

  • indices (list, optional) – A list of atom indices to consider for substitution. Defaults to None.

  • condition (callable, optional) – A callable that takes an atom as input and returns a boolean indicating whether the atom should be considered for substitution. Defaults to None.

  • selection (bool, optional) – If True, only selected atoms will be considered for substitution. Defaults to False.

Return type:

None

Notes

  • This method substitutes atoms in the structure with a given element.

  • The substitution is performed based on the provided IDs, indices, condition, and selection parameters.

  • The substituted atoms will have their species and types updated accordingly.

  • If the graph is not None, the method also operates on the graph by removing existing elements and adding new ones based on the composition of the substituted atoms.

  • The method also cleans up items in the file associated with the graph.

Examples

>>> # Substitute selected atoms with nitrogen
>>> structure.substitute_atoms("N", ids=[1, 3, 5])

to_file(outfile, format='lammps-dump', customkeys=None, customvals=None, compressed=False, timestep=0, species=None, add_sample_id=True, input_data=None, pseudopotentials=None, kspacing=None, kpts=None, koffset=(0, 0, 0), crystal_coordinates=False)[source]#

Write the structure to a file in the specified format.

Parameters:
  • outfile (str) – The path to the output file.

  • format (str, optional) – The format of the output file. Defaults to ‘lammps-dump’.

  • customkeys (list, optional) – A list of custom keys to include in the output file. Defaults to None. Only valid if format is ‘lammps-dump’.

  • customvals (list, optional) – A list of custom values corresponding to the custom keys. Defaults to None. Only valid if format is ‘lammps-dump’.

  • compressed (bool, optional) – Whether to compress the output file. Defaults to False.

  • timestep (int, optional) – The timestep value to include in the output file. Defaults to 0. Only valid if format is ‘lammps-dump’.

  • species (list, optional) – A list of species to include in the output file. Defaults to None. Only used when writing via ASE, if species is not otherwise specified.

  • add_sample_id (bool, optional) – Whether to add a sample ID to the output file. Defaults to True. Only valid for poscar and quantum-espresso formats.

  • input_data (str, optional) – Additional input data to include in the output file. Defaults to None. Only valid for quantum-espresso format. See ASE write docs for more information.

  • pseudopotentials (str, optional) – The path to the pseudopotentials file. Defaults to None. Only valid for quantum-espresso format. See ASE write docs for more information.

  • kspacing (float, optional) – The k-spacing value to include in the output file. Defaults to None. Only valid for quantum-espresso format. See ASE write docs for more information.

  • kpts (list, optional) – A list of k-points to include in the output file. Defaults to None. Only valid for quantum-espresso format. See ASE write docs for more information.

  • koffset (tuple, optional) – The k-offset values to include in the output file. Defaults to (0, 0, 0). Only valid for quantum-espresso format. See ASE write docs for more information.

  • crystal_coordinates (bool, optional) – Whether to include crystal coordinates in the output file. Defaults to False. Only valid for quantum-espresso format. See ASE write docs for more information.

Return type:

None
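A minimal sketch, assuming struct is an existing System; the file names are illustrative:

>>> struct.to_file('out.dump')  # LAMMPS dump, the default format
>>> struct.to_file('POSCAR', format='poscar', add_sample_id=False)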

to_graph()[source]#

Converts the structure object to a graph representation.

Return type:

None

KnowledgeGraph#

The graph module contains the basic KnowledgeGraph object in atomrdf. This object takes a structure as input and annotates it with the CMSO ontology (and with PLDO and PODO as needed). The annotated object is stored as RDF triples.

Notes

  • To ensure domain and range checking works as expected, always add type before adding further properties!

Classes#

  • KnowledgeGraph: Represents a knowledge graph that stores and annotates structure objects.

Attributes#

  • defstyledict (dict): A dictionary containing default styles for visualizing the graph.

class atomrdf.graph.KnowledgeGraph(graph_file=None, store='Memory', store_file=None, identifier='http://default_graph', ontology=None, structure_store=None, enable_log=False)[source]#

Represents a knowledge graph.

Parameters:
  • graph_file (str, optional) – The path to the graph file to be parsed. Default is None.

  • store (str, optional) – The type of store to use. Default is “Memory”.

  • store_file (str, optional) – The path to the store file. Default is None.

  • identifier (str, optional) – The identifier for the graph. Default is “http://default_graph”.

  • ontology (Ontology, optional) – The ontology object to be used. Default is None.

  • structure_store (StructureStore, optional) – The structure store object to be used. Default is None.

  • enable_log (bool, optional) – Whether to enable logging. Default is False. If true, a log file named atomrdf.log will be created in the current working directory.

graph#

The RDF graph.

Type:

rdflib.Graph

sgraph#

The structure graph for a single chosen sample

Type:

rdflib.Graph

ontology#

The ontology object.

Type:

Ontology

terms#

The dictionary of ontology terms.

Type:

dict

store#

The type of store used.

Type:

str

add_structure(structure)[source]#

Add a structure to the knowledge graph.

add(triple, validate=True)[source]#

Add a triple to the knowledge graph.

triples(triple)[source]#

Return the triples in the knowledge graph that match the given triple pattern.

property activity_ids#

Returns a list of all activity IDs in the graph

add(triple, validate=True)[source]#

Add a triple to the knowledge graph.

Parameters:
  • triple (tuple) – The triple to be added in the form (subject, predicate, object).

  • validate (bool, optional) – Whether to validate the triple against the domain and range. Default is True.

Return type:

None

Notes

This method adds a triple to the knowledge graph. The triple should be provided as a tuple in the form (subject, predicate, object). By default, the triple is validated against the domain and range. If the validate parameter is set to False, the validation is skipped.

Examples

>>> graph = KnowledgeGraph()
>>> graph.add(("Alice", "likes", "Bob"))
>>> graph.add(("Bob", "age", 25), validate=False)
add_calculated_quantity(sample, propertyname, value, unit=None)[source]#

Add a calculated quantity to a sample.

Parameters:
  • sample (URIRef) – The URIRef of the sample to which the calculated quantity is being added.

  • propertyname (str) – The name of the calculated property.

  • value (str) – The value of the calculated property.

  • unit (str, optional) – The unit of the calculated property. Default is None. The unit should be from QUDT. See http://qudt.org/vocab/unit/

Return type:

None

Notes

This method adds a calculated quantity to a sample in the knowledge graph. The calculated quantity is represented as a triple with the sample as the subject, the calculated property as the predicate, and the value as the object. The calculated property is created as a node in the graph with the given name and value. If a unit is provided, it is also added as a property of the calculated property node.

Examples

>>> graph = KnowledgeGraph()
>>> sample = graph.create_node("Sample1", CMSO.Sample)
>>> graph.add_calculated_quantity(sample, "energy", "10.5", "eV")
add_structure(structure)[source]#

Add a structure to the knowledge graph.

Parameters:

structure (Structure) – The structure object to be added.

Return type:

None

Notes

This method adds a structure object to the knowledge graph. The structure object should be an instance of the Structure class. The structure object is assigned to the graph and converted to RDF format.
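A minimal sketch, assuming conf.dump is an existing LAMMPS dump file:

>>> from atomrdf.graph import KnowledgeGraph
>>> from atomrdf.structure import System
>>> kg = KnowledgeGraph()
>>> struct = System('conf.dump', format='lammps-dump')
>>> kg.add_structure(struct)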

archive(package_name, format='turtle', compress=True, add_simulations=False)[source]#

Publish a dataset from graph including per atom quantities.

Parameters:
  • package_name (str) – The name of the package to be created.

  • format (str, optional) – The format in which the dataset should be written. Default is “turtle”.

  • compress (bool, optional) – Whether to compress the package into a tarball. Default is True.

Raises:

ValueError – If the package_name already exists or if the tarball already exists.

Notes

This method creates a package containing a dataset from the graph, including per-atom quantities. The package consists of a folder named package_name, which contains the dataset and related files. If compress is True, the package is compressed into a tarball.

The method performs the following steps:

  1. Checks if the package_name already exists. If it does, raises a ValueError.

  2. If compress is True, checks if the tarball already exists. If it does, raises a ValueError.

  3. Creates a folder named package_name.

  4. Creates a subfolder named rdf_structure_store within the package folder.

  5. Copies the files associated with each sample to the rdf_structure_store folder, while fixing the paths.

  6. Updates the paths in the graph to point to the copied files.

  7. Writes the dataset to a file named “triples” within the package folder.

  8. If compress is True, compresses the package folder into a tarball.

  9. Removes the package folder.
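A minimal sketch, assuming kg is an existing KnowledgeGraph; the package name is illustrative:

>>> kg.archive('fe_gb_dataset', format='turtle', compress=True)
>>> # the package can later be read back; the exact file name depends on whether it was compressed
>>> kg2 = KnowledgeGraph.unarchive('fe_gb_dataset.tar.gz')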

auto_query(source, destination, return_query=False, enforce_types=None, return_df=True)[source]#

Automatically generates and executes a query based on the provided parameters.

Parameters:
  • source (OntoTerm) – The source of the query.

  • destination (OntoTerm) – The destination of the query.

  • return_query (bool, optional) – If True, returns the generated query instead of executing it. Defaults to False.

  • enforce_types (bool, optional) – If provided, enforces the specified type for the query. Defaults to None.

  • return_df (bool, optional) – if True, returns the results as a pandas DataFrame. Default is True.

Returns:

The result of the query execution. If return_query is True, returns the generated query as a string. Otherwise, returns the result of the query execution as a pandas DataFrame.

Return type:

pandas DataFrame or str
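A minimal sketch, assuming kg is an existing KnowledgeGraph with samples, that CMSO is importable from atomrdf.namespace, and that CMSO.AtomicScaleSample and CMSO.hasNumberOfAtoms resolve to terms in the loaded ontology:

>>> from atomrdf.namespace import CMSO  # assumed import path
>>> df = kg.auto_query(CMSO.AtomicScaleSample, CMSO.hasNumberOfAtoms)
>>> q = kg.auto_query(CMSO.AtomicScaleSample, CMSO.hasNumberOfAtoms, return_query=True)  # inspect the generated SPARQL instead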

close(filename, format='json-ld')[source]#

Close the graph and write to a file

Parameters:

filename (string) – name of output file

Return type:

None

create_node(namestring, classtype, label=None)[source]#

Create a new node in the graph.

Parameters:
  • namestring (str) – The name of the node.

  • classtype (Object from a given ontology) – The class type of the node.

Returns:

The newly created node.

Return type:

URIRef

get_sample(sample, no_atoms=False)[source]#

Get the Sample as a KnowledgeGraph

Parameters:
  • sample (string) – sample id

  • no_atoms (bool, optional) – if True, returns the number of atoms in the sample

Returns:

  • sgraph (RDFGraph) – the RDFGraph of the queried sample

  • na (int) – the number of atoms; only returned if no_atoms is True

get_system_from_sample(sample)[source]#

Get a pyscal atomrdf.structure.System from the selected sample

Parameters:

sample (string) – sample id

Returns:

system – corresponding system

Return type:

atomrdf.structure.System

inspect_sample(sample)[source]#

Inspects a sample and retrieves information about its atoms, material, defects, composition, crystal structure, space group, calculated properties, and units.

Parameters:

sample (The sample to inspect.)

Returns:

A string containing the information about the sample.

Return type:

string

iterate_graph(item, create_new_graph=False)[source]#

Iterate through the graph starting from the given item.

Parameters:
  • item (object) – The item to start the iteration from.

  • create_new_graph (bool, optional) – If True, create a new KnowledgeGraph object to store the iteration results. Default is False. The results are stored in self.sgraph.

Return type:

None

property n_samples#

Number of samples in the Graph

query(inquery, return_df=True)[source]#

Query the graph using SPARQL

Parameters:
  • inquery (string) – SPARQL query to be executed

  • return_df (bool, optional) – if True, returns the results as a pandas DataFrame. Default is True.

Returns:

res – pandas dataframe results

Return type:

pandas DataFrame
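A minimal sketch of a raw SPARQL query, assuming kg is an existing KnowledgeGraph; the pattern is deliberately generic and does not rely on any particular ontology term:

>>> q = '''
... SELECT ?s ?p ?o
... WHERE { ?s ?p ?o . }
... LIMIT 10
... '''
>>> df = kg.query(q)
>>> res = kg.query(q, return_df=False)  # skip the DataFrame conversion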

query_sample(destination, return_query=False, enforce_types=None)[source]#

Query the knowledge graph for atomic scale samples.

Parameters:
  • destination (OntoTerm) – The destination of the query.

  • return_query (bool, optional) – If True, returns the generated query instead of executing it. Defaults to False.

  • enforce_types (bool, optional) – If provided, enforces the specified type for the query. Defaults to None.

Returns:

The result of the query execution. If return_query is True, returns the generated query as a string. Otherwise, returns the result of the query execution as a pandas DataFrame.

Return type:

pandas DataFrame or str

remove(triple)[source]#

Remove a triple from the knowledge graph.

Parameters:

triple (tuple) – The triple to be removed in the form (subject, predicate, object).

Return type:

None

Notes

This method removes a triple from the knowledge graph. The triple should be provided as a tuple in the form (subject, predicate, object).

Examples

>>> graph = KnowledgeGraph()
>>> graph.add(("Alice", "likes", "Bob"))
>>> graph.remove(("Alice", "likes", "Bob"))
property sample_ids#

Returns a list of all Samples in the graph

property sample_names#

Returns a list of all Sample names in the graph

to_file(sample, filename=None, format='poscar', add_sample_id=True, input_data=None, pseudopotentials=None, kspacing=None, kpts=None, koffset=(0, 0, 0), crystal_coordinates=False)[source]#

Save a given sample to a file

Parameters:
  • sample – ID of the sample

  • filename (string) – name of output file

  • format (string, {'lammps-dump', 'lammps-data', 'poscar', 'cif', 'quantum-espresso'}) – or any other format supported by ASE

  • input_data (str, optional) – Additional input data to include in the output file. Defaults to None. Only valid for quantum-espresso format. See ASE write docs for more information.

  • pseudopotentials (str, optional) – The path to the pseudopotentials file. Defaults to None. Only valid for quantum-espresso format. See ASE write docs for more information.

  • kspacing (float, optional) – The k-spacing value to include in the output file. Defaults to None. Only valid for quantum-espresso format. See ASE write docs for more information.

  • kpts (list, optional) – A list of k-points to include in the output file. Defaults to None. Only valid for quantum-espresso format. See ASE write docs for more information.

  • koffset (tuple, optional) – The k-offset values to include in the output file. Defaults to (0, 0, 0). Only valid for quantum-espresso format. See ASE write docs for more information.

  • crystal_coordinates (bool, optional) – Whether to include crystal coordinates in the output file. Defaults to False. Only valid for quantum-espresso format. See ASE write docs for more information.

Return type:

None
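A minimal sketch, assuming kg is an existing KnowledgeGraph containing at least one sample:

>>> sample = kg.sample_ids[0]
>>> kg.to_file(sample, filename='POSCAR', format='poscar')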

triples(triple)[source]#

Return the triples in the knowledge graph that match the given triple pattern.

Parameters:

triple (tuple) – The triple pattern to match in the form (subject, predicate, object).

Returns:

A generator that yields the matching triples.

Return type:

generator

Examples

>>> graph = KnowledgeGraph()
>>> graph.add(("Alice", "likes", "Bob"))
>>> graph.add(("Alice", "dislikes", "Charlie"))
>>> graph.add(("Bob", "likes", "Alice"))
>>> for triple in graph.triples(("Alice", None, None)):
...     print(triple)
('Alice', 'likes', 'Bob')
('Alice', 'dislikes', 'Charlie')
classmethod unarchive(package_name, compress=True, store='Memory', store_file=None, identifier='http://default_graph', ontology=None)[source]#

Unarchives a package and returns an instance of the Graph class.

Parameters:
  • package_name (str) – The name of the package to unarchive.

  • compress (bool, optional) – Whether to compress the package. Defaults to True.

  • store (str, optional) – The type of store to use. Defaults to “Memory”.

  • store_file (str, optional) – The file to use for the store. Defaults to None.

  • identifier (str, optional) – The identifier for the graph. Defaults to “http://default_graph”.

  • ontology (str, optional) – The ontology to use. Defaults to None.

Returns:

An instance of the Graph class.

Return type:

Graph

Raises:
  • FileNotFoundError – If the package file is not found.

  • tarfile.TarError – If there is an error while extracting the package.

value(arg1, arg2)[source]#

Get the value of a triple in the knowledge graph.

Parameters:
  • arg1 (object) – The subject of the triple.

  • arg2 (object) – The predicate of the triple.

Returns:

The value of the triple if it exists, otherwise None.

Return type:

object or None

Notes

This method retrieves the value of a triple in the knowledge graph. The triple is specified by providing the subject and predicate as arguments. If the triple exists in the graph, the corresponding value is returned. If the triple does not exist, None is returned.

Examples

>>> graph = KnowledgeGraph()
>>> graph.add(("Alice", "likes", "Bob"))
>>> value = graph.value("Alice", "likes")
>>> print(value)
Bob
visualise(styledict=None, rankdir='BT', hide_types=False, workflow_view=False, sample_view=False, size=None, layout='neato')[source]#

Visualize the RDF tree of the Graph.

Parameters:
  • styledict (dict, optional) – If provided, allows customization of color and other properties.

  • rankdir (str, optional) – The direction of the graph layout. Default is “BT” (bottom to top).

  • hide_types (bool, optional) – Whether to hide the types in the visualization. Default is False.

  • workflow_view (bool, optional) – Whether to enable the workflow view. Default is False.

  • sample_view (bool, optional) – Whether to enable the sample view. Default is False.

  • size (tuple, optional) – The size of the visualization. Default is None.

  • layout (str, optional) – The name of the layout algorithm for the graph. Default is “neato”.

Returns:

The visualization of the RDF tree.

Return type:

graphviz.dot.Digraph

Notes

The styledict parameter allows customization of the visualization style. It has the following options:

BNode:
  • color (str) – The color of the BNode boxes.

  • shape (str) – The shape of the BNode boxes.

  • style (str) – The style of the BNode boxes.

URIRef:
  • color (str) – The color of the URIRef boxes.

  • shape (str) – The shape of the URIRef boxes.

  • style (str) – The style of the URIRef boxes.

Literal:
  • color (str) – The color of the Literal boxes.

  • shape (str) – The shape of the Literal boxes.

  • style (str) – The style of the Literal boxes.
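A minimal sketch of a custom style using the keys described above, assuming kg is an existing KnowledgeGraph; the colors and shapes are illustrative:

>>> styledict = {
...     "BNode": {"color": "lightblue", "shape": "box", "style": "filled"},
...     "URIRef": {"color": "lightgrey", "shape": "box", "style": "filled"},
...     "Literal": {"color": "white", "shape": "ellipse", "style": "solid"},
... }
>>> dot = kg.visualise(styledict=styledict, hide_types=True, layout='dot')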

visualize(*args, **kwargs)[source]#

Visualizes the graph using the specified arguments.

This method is a wrapper around the visualise method and passes the same arguments to it.

Parameters:
  • *args (Variable length argument list.)

  • **kwargs (Arbitrary keyword arguments.)

Returns:

dot – The visualization of the RDF tree.

Return type:

graphviz.dot.Digraph

write(filename, format='json-ld')[source]#

Write the serialised version of the graph to a file

Parameters:
  • filename (string) – name of output file

  • format (string, {'turtle', 'xml', 'json-ld', 'ntriples', 'n3'}) – output format to be written to

Return type:

None

Workflow#

Workflow aspects for non-automated annotation of structures.

This consists of a workflow class which implements the necessary methods to serialise triples as needed. Custom workflow solutions can be implemented; an example provided here is pyiron. A custom workflow environment should implement the following functions (a skeletal sketch is given below):

  • _check_if_job_is_valid

  • _add_structure

  • _identify_method

  • extract_calculated_properties

  • inform_graph

See atomrdf.workflow.pyiron for more details
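A skeletal sketch of such a module, following the function names listed above; the signatures and comments are assumptions and should be checked against atomrdf.workflow.pyiron:

def _check_if_job_is_valid(job):
    # decide whether this workflow environment can parse the given job object
    raise NotImplementedError

def _add_structure(job):
    # return the input and output structures associated with the job
    raise NotImplementedError

def _identify_method(job):
    # return a description of the simulation method and its parameters
    raise NotImplementedError

def extract_calculated_properties(job):
    # return calculated quantities (e.g. total energy) to attach to the sample
    raise NotImplementedError

def inform_graph(project, kg):
    # make the workflow manager aware of the KnowledgeGraph, e.g. to set up storage
    raise NotImplementedError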

Network#

class atomrdf.network.network.OntologyNetwork(infile=None, delimiter='/')[source]#

Network representation of the ontology.

add_namespace(namespace_name, namespace_iri)[source]#

Add a new namespace.

Parameters:
  • namespace_name (str) – The name of the namespace to add.

  • namespace_iri (str) – The IRI of the namespace.

Raises:

KeyError – If the namespace already exists.
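A minimal sketch, assuming network is an existing OntologyNetwork; the name and IRI are illustrative:

>>> network.add_namespace('ex', 'http://example.org/')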

add_path(triple)[source]#

Add a triple as path.

Note that all attributes of the triple should already exist in the graph. The ontology itself is not modified. Only the graph representation of it is. The expected use is to bridge between two (or more) different ontologies.

Parameters:

triple (tuple) – A tuple representing the triple to be added. The tuple should contain three elements: subject, predicate, and object.

Raises:

ValueError – If the subject or object of the triple is not found in the attributes of the ontology.

add_term(uri, node_type, namespace=None, dm=(), rn=(), data_type=None, node_id=None, delimiter='/')[source]#

Add a node.

Parameters:
  • uri (str) – The URI of the node.

  • node_type (str) – The type of the node.

  • namespace (str, optional) – The namespace of the node.

  • dm (list, optional) – The domain metadata of the node.

  • rn (list, optional) – The range metadata of the node.

  • data_type (str, optional) – The data type of the node.

  • node_id (str, optional) – The ID of the node.

  • delimiter (str, optional) – The delimiter used for parsing the URI.

Raises:

ValueError – If the namespace is not found.

create_query(source, destinations, enforce_types=True)[source]#

Create a SPARQL query string based on the given source, destinations, and enforce_types.

Parameters:
  • source (Node) – The source node from which the query starts.

  • destinations (list or Node) – The destination node(s) to which the query should reach. If a single node is provided, it will be converted to a list.

  • enforce_types (bool, optional) – Whether to enforce the types of the source and destination nodes in the query. Defaults to True.

Returns:

The generated SPARQL query string.

Return type:

str

draw(styledict={'class': {'shape': 'box'}, 'data_property': {'shape': 'ellipse'}, 'literal': {'shape': 'parallelogram'}, 'object_property': {'shape': 'ellipse'}})[source]#

Draw the network graph using graphviz.

Parameters:

styledict (dict, optional) – A dictionary specifying the styles for different node types. The keys of the dictionary are the node types, and the values are dictionaries specifying the shape for each node type. If not provided, the default shown in the signature is used.

Returns:

The graph object representing the network graph.

Return type:

graphviz.Digraph

Example

>>> styledict = {
...     "class": {"shape": "box"},
...     "object_property": {"shape": "ellipse"},
...     "data_property": {"shape": "ellipse"},
...     "literal": {"shape": "parallelogram"},
... }
>>> network.draw(styledict)

get_path_from_sample(target)[source]#

Get the shortest path from the ‘cmso:ComputationalSample’ node to the target node.

Parameters:

target (OntoTerm) – The target node to find the shortest path to.

Returns:

A list of triples representing the shortest path from ‘cmso:ComputationalSample’ to the target node.

Return type:

list

get_shortest_path(source, target, triples=False)[source]#

Compute the shortest path between two nodes in the graph.

Parameters:
  • source (node) – The starting node for the path.

  • target (node) – The target node for the path.

  • triples (bool, optional) – If True, returns the path as a list of triples. Each triple consists of three consecutive nodes in the path. If False, returns the path as a list of nodes.

Returns:

path – The shortest path between the source and target nodes. If triples is True, the path is returned as a list of triples. If triples is False, the path is returned as a list of nodes.

Return type:

list

Namespace#

This module provides the Namespace class for managing namespaces in the AtomRDF library.

The Namespace class extends the rdflib.Namespace class and provides additional functionality for working with namespaces.

Classes#

Namespace

A class representing a namespace in the AtomRDF library.

class atomrdf.namespace.Namespace(value: str | bytes)[source]#

A class representing a namespace in the AtomRDF library.

This class extends the rdflib.Namespace classes.

Parameters:
  • infile (str) – The input file path.

  • delimiter (str, optional) – The delimiter used in the input file. Defaults to “/”.

network#

The ontology network associated with the namespace.

Type:

OntologyNetwork

name#

The name of the namespace.

Type:

str

Stores#

atomrdf.stores.create_store(kg, store, identifier, store_file=None, structure_store=None)[source]#

Create a store based on the given parameters.

Parameters:
  • kg (KnowledgeGraph) – The knowledge graph object.

  • store (str or Project) – The type of store to create. It can be either “Memory”, “SQLAlchemy”, or a pyiron Project object.

  • identifier (str) – The identifier for the store.

  • store_file (str, optional) – The file path to store the data (only applicable for certain store types).

  • structure_store (str, optional) – The structure store to use (only applicable for certain store types).

Raises:

ValueError – If an unknown store type is provided.

atomrdf.stores.store_alchemy(kg, store, identifier, store_file=None, structure_store=None)[source]#

Store the knowledge graph using SQLAlchemy.

Parameters:
  • kg (KnowledgeGraph) – The knowledge graph to be stored.

  • store (str) – The type of store to be used.

  • identifier (str) – The identifier for the graph.

  • store_file (str, optional) – The file path for the store. Required if store is not ‘memory’.

  • structure_store (str, optional) – The structure store to be used.

Raises:

ValueError – If store_file is None and store is not ‘memory’.

Return type:

None

atomrdf.stores.store_memory(kg, store, identifier, store_file=None, structure_store=None)[source]#

Store the knowledge graph in memory.

Parameters:
  • kg (KnowledgeGraph) – The knowledge graph to be stored.

  • store (str) – The type of store to use for storing the graph.

  • identifier (str) – The identifier for the graph.

  • store_file (str, optional) – The file to store the graph in. Defaults to None.

  • structure_store (str, optional) – The structure store to use. Defaults to None.

Return type:

None