Client
- class labelbox.client.Client(api_key=None, endpoint='https://api.labelbox.com/graphql', enable_experimental=False, app_url='https://app.labelbox.com', rest_endpoint='https://api.labelbox.com/api/v1')[source]
Bases:
object
A Labelbox client.
Provides functions for querying and creating top-level data objects (Projects, Datasets).
- __init__(api_key=None, endpoint='https://api.labelbox.com/graphql', enable_experimental=False, app_url='https://app.labelbox.com', rest_endpoint='https://api.labelbox.com/api/v1')[source]
Creates and initializes a Labelbox Client.
Logging defaults to level WARNING. To receive more verbose console output, set the logging level to the appropriate value.
>>> logging.basicConfig(level=logging.INFO)
>>> client = Client("<APIKEY>")
- Parameters:
api_key (str) – API key. If None, the key is obtained from the “LABELBOX_API_KEY” environment variable.
endpoint (str) – URL of the Labelbox server to connect to.
enable_experimental (bool) – Indicates whether to use experimental features.
app_url (str) – Host URL for all links to the web app.
- Raises:
AuthenticationError – If no api_key is provided as an argument or via the environment variable.
- assign_global_keys_to_data_rows(global_key_to_data_row_inputs: List[Dict[str, str]], timeout_seconds=60) Dict[str, str | List[Any]] [source]
Assigns global keys to data rows.
- Parameters:
global_key_to_data_row_inputs (List[Dict[str, str]]) – A list of dicts, each containing a data_row_id and a global_key.
- Returns:
Dictionary containing ‘status’, ‘results’ and ‘errors’.
’Status’ contains the outcome of this job. It can be one of ‘Success’, ‘Partial Success’, or ‘Failure’.
’Results’ contains the successful global_key assignments, including global_keys that have been sanitized to Labelbox standards.
’Errors’ contains global_key assignments that failed, along with the reasons for failure.
Examples
>>> global_key_data_row_inputs = [
>>>     {"data_row_id": "cl7asgri20yvo075b4vtfedjb", "global_key": "key1"},
>>>     {"data_row_id": "cl7asgri10yvg075b4pz176ht", "global_key": "key2"},
>>> ]
>>> job_result = client.assign_global_keys_to_data_rows(global_key_data_row_inputs)
>>> print(job_result['status'])
Partial Success
>>> print(job_result['results'])
[{'data_row_id': 'cl7tv9wry00hlka6gai588ozv', 'global_key': 'gk', 'sanitized': False}]
>>> print(job_result['errors'])
[{'data_row_id': 'cl7tpjzw30031ka6g4evqdfoy', 'global_key': 'gk"', 'error': 'Invalid global key'}]
- static build_catalog_query(data_rows: UniqueIds | GlobalKeys)[source]
Given a list of data rows, builds a query that can be used to fetch the associated data rows from the catalog.
- Parameters:
data_rows – A list of data rows. Can be either UniqueIds or GlobalKeys.
- Returns:
A query that can be used to fetch the associated data rows from the catalog.
- clear_global_keys(global_keys: List[str], timeout_seconds=60) Dict[str, str | List[Any]] [source]
Clears global keys for the data rows that correspond to the global keys provided.
- Parameters:
global_keys (List[str]) – A list of global keys to clear.
- Returns:
Dictionary containing ‘status’, ‘results’ and ‘errors’.
’Status’ contains the outcome of this job. It can be one of ‘Success’, ‘Partial Success’, or ‘Failure’.
’Results’ contains a list of global keys that were successfully cleared.
’Errors’ contains a list of global_keys corresponding to data rows that could not be modified, could not be accessed by the user, or were not found.
Examples
>>> job_result = client.clear_global_keys(["key1","key2","notfoundkey"])
>>> print(job_result['status'])
Partial Success
>>> print(job_result['results'])
['key1', 'key2']
>>> print(job_result['errors'])
[{'global_key': 'notfoundkey', 'error': 'Failed to find data row matching provided global key'}]
- create_dataset(iam_integration='DEFAULT', **kwargs) Dataset [source]
Creates a Dataset object on the server.
Attribute values are passed as keyword arguments.
- Parameters:
iam_integration (IAMIntegration) – Uses the default integration. Optionally specify another integration, or set to None to not use delegated access.
**kwargs – Keyword arguments with Dataset attribute values.
- Returns:
A new Dataset object.
- Raises:
InvalidAttributeError – If the Dataset type does not contain any of the attribute names given in kwargs.
Examples
Create a dataset
>>> dataset = client.create_dataset(name="<dataset_name>")
Create a dataset with description
>>> dataset = client.create_dataset(name="<dataset_name>", description="<dataset_description>")
- create_embedding(name: str, dims: int) Embedding [source]
Create a new embedding. You must provide a name and the number of dimensions the embedding has. Once an embedding has been created, you can upload the vector data associated with the embedding id.
- Parameters:
name – The name of the embedding.
dims – The number of dimensions.
- Returns:
A new Embedding object.
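A minimal usage sketch; the embedding name and dimension count below are placeholders:
>>> embedding = client.create_embedding(name="my-embedding", dims=512)
>>> print(embedding.id)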
- create_feature_schema(normalized)[source]
- Creates a feature schema from normalized data.
>>> normalized = {'tool': 'polygon', 'name': 'cat', 'color': 'black'}
>>> feature_schema = client.create_feature_schema(normalized)
- Or use the Tool or Classification objects. It is especially useful for complex tools.
>>> normalized = Tool(tool=Tool.Type.BBOX, name="cat", color='black').asdict()
>>> feature_schema = client.create_feature_schema(normalized)
- Subclasses are also supported
>>> normalized = Tool(
>>>     tool=Tool.Type.SEGMENTATION,
>>>     name="cat",
>>>     classifications=[
>>>         Classification(class_type=Classification.Type.TEXT, name="name")
>>>     ]
>>> )
>>> feature_schema = client.create_feature_schema(normalized)
- More details can be found here:
https://github.com/Labelbox/labelbox-python/blob/develop/examples/basics/ontologies.ipynb
- Parameters:
normalized (dict) – A normalized tool or classification payload. See above for details
- Returns:
The created FeatureSchema.
- create_model(name, ontology_id) Model [source]
Creates a Model object on the server.
>>> model = client.create_model(<model_name>, <ontology_id>)
- Parameters:
name (string) – Name of the model
ontology_id (string) – ID of the related ontology
- Returns:
A new Model object.
- Raises:
InvalidAttributeError – If the Model type does not contain any of the attribute names given in kwargs.
- create_model_config(name: str, model_id: str, inference_params: dict) ModelConfig [source]
- Creates a new model config with the given params.
Model configs are scoped to organizations, and can be reused between projects.
- Parameters:
name (str) – Name of the model config
model_id (str) – ID of model to configure
inference_params (dict) – JSON of model configuration parameters.
- Returns:
The created ModelConfig.
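A minimal usage sketch; the model id and inference parameters below are placeholders:
>>> model_config = client.create_model_config(
>>>     name="<model_config_name>",
>>>     model_id="<model_id>",
>>>     inference_params={"temperature": 0.2})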
- create_model_evaluation_project(name: str, description: str | None = None, quality_modes: Set[QualityMode] | None = {QualityMode.Benchmark, QualityMode.Consensus}, is_benchmark_enabled: bool | None = None, is_consensus_enabled: bool | None = None, dataset_id: str | None = None, dataset_name: str | None = None, data_row_count: int | None = None) Project [source]
Use this method exclusively to create a chat model evaluation project.
- Parameters:
dataset_name – When creating a new dataset, pass the name
dataset_id – When using an existing dataset, pass the id
data_row_count – The number of data row assets to use for the project
See create_project for additional parameters.
- Returns:
The created project
- Return type:
Project
Examples
>>> client.create_model_evaluation_project(name=project_name, dataset_name="new data set")
This creates a new dataset with a default number of rows (100), creates a new project, and assigns a batch of the newly created data rows to the project.
>>> client.create_model_evaluation_project(name=project_name, dataset_name="new data set", data_row_count=10)
This creates a new dataset with 10 data rows, creates a new project, and assigns a batch of the newly created data rows to the project.
>>> client.create_model_evaluation_project(name=project_name, dataset_id="clr00u8j0j0j0")
This creates a new project, adds 100 data rows to the dataset with id "clr00u8j0j0j0", and assigns a batch of the newly created data rows to the project.
>>> client.create_model_evaluation_project(name=project_name, dataset_id="clr00u8j0j0j0", data_row_count=10)
This creates a new project, adds 10 data rows to the dataset with id "clr00u8j0j0j0", and assigns a batch of the newly created data rows to the project.
>>> client.create_model_evaluation_project(name=project_name)
This creates a new project with no data rows.
- create_offline_model_evaluation_project(name: str, description: str | None = None, quality_modes: Set[QualityMode] | None = {QualityMode.Benchmark, QualityMode.Consensus}, is_benchmark_enabled: bool | None = None, is_consensus_enabled: bool | None = None) Project [source]
Creates a project for offline model evaluation. See create_project for parameters.
- Returns:
The created project
- Return type:
Project
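A minimal usage sketch (the project name is a placeholder):
>>> project = client.create_offline_model_evaluation_project(name="<project_name>")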
- create_ontology(name, normalized, media_type: MediaType | None = None, ontology_kind: OntologyKind | None = None) Ontology [source]
- Creates an ontology from normalized data
>>> normalized = {"tools" : [{'tool': 'polygon', 'name': 'cat', 'color': 'black'}], "classifications" : []} >>> ontology = client.create_ontology("ontology-name", normalized)
- Or use the ontology builder. It is especially useful for complex ontologies
>>> normalized = OntologyBuilder(tools=[Tool(tool=Tool.Type.BBOX, name="cat", color='black')]).asdict()
>>> ontology = client.create_ontology("ontology-name", normalized)
To reuse existing feature schemas, use create_ontology_from_feature_schemas(). More details can be found here: https://github.com/Labelbox/labelbox-python/blob/develop/examples/basics/ontologies.ipynb
- Parameters:
name (str) – Name of the ontology
normalized (dict) – A normalized ontology payload. See above for details.
media_type (MediaType or None) – Media type of a new ontology
ontology_kind (OntologyKind or None) – set to OntologyKind.ModelEvaluation if the ontology is for chat evaluation or OntologyKind.ResponseCreation if ontology is for response creation, leave as None otherwise.
- Returns:
The created Ontology
NOTE: For chat evaluation, we currently force media_type to Conversational, and for response creation, we force media_type to Text.
- create_ontology_from_feature_schemas(name, feature_schema_ids, media_type: MediaType | None = None, ontology_kind: OntologyKind | None = None) Ontology [source]
Creates an ontology from a list of feature schema ids
- Parameters:
name (str) – Name of the ontology
feature_schema_ids (List[str]) – List of feature schema ids corresponding to top level tools and classifications to include in the ontology
media_type (MediaType or None) – Media type of a new ontology.
ontology_kind (OntologyKind or None) – set to OntologyKind.ModelEvaluation if the ontology is for chat evaluation, leave as None otherwise.
- Returns:
The created Ontology
NOTE: For chat evaluation, we currently force media_type to Conversational, and for response creation, we force media_type to Text.
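A minimal usage sketch; the feature schema ids are placeholders and MediaType.Image is an illustrative choice:
>>> ontology = client.create_ontology_from_feature_schemas(
>>>     name="ontology-name",
>>>     feature_schema_ids=["<feature_schema_id_1>", "<feature_schema_id_2>"],
>>>     media_type=MediaType.Image)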
- create_project(name: str, media_type: MediaType, description: str | None = None, quality_modes: Set[QualityMode] | None = {QualityMode.Benchmark, QualityMode.Consensus}, is_benchmark_enabled: bool | None = None, is_consensus_enabled: bool | None = None) Project [source]
Creates a Project object on the server.
Attribute values are passed as keyword arguments.
>>> project = client.create_project(
>>>     name="<project_name>",
>>>     description="<project_description>",
>>>     media_type=MediaType.Image,
>>> )
- Parameters:
name (str) – A name for the project
description (str) – A short summary for the project
media_type (MediaType) – The type of assets that this project will accept
quality_modes (Optional[List[QualityMode]]) – The quality modes to use (e.g. Benchmark, Consensus). Defaults to Benchmark.
is_benchmark_enabled (Optional[bool]) – Whether the project supports benchmark. Defaults to None.
is_consensus_enabled (Optional[bool]) – Whether the project supports consensus. Defaults to None.
- Returns:
A new Project object.
- Raises:
ValueError – If inputs are invalid.
- create_prompt_response_generation_project(name: str, media_type: MediaType, description: str | None = None, auto_audit_percentage: float | None = None, auto_audit_number_of_labels: int | None = None, quality_modes: Set[QualityMode] | None = {QualityMode.Benchmark, QualityMode.Consensus}, is_benchmark_enabled: bool | None = None, is_consensus_enabled: bool | None = None, dataset_id: str | None = None, dataset_name: str | None = None, data_row_count: int = 100) Project [source]
Use this method exclusively to create a prompt and response generation project.
- Parameters:
dataset_name – When creating a new dataset, pass the name
dataset_id – When using an existing dataset, pass the id
data_row_count – The number of data row assets to use for the project
media_type – The type of assets that this project will accept. Limited to LLMPromptCreation and LLMPromptResponseCreation
See create_project for additional parameters.
- Returns:
The created project
- Return type:
Project
NOTE: Only one of dataset_name or dataset_id should be included.
Examples
>>> client.create_prompt_response_generation_project(name=project_name, dataset_name="new data set", media_type=MediaType.LLMPromptResponseCreation)
This creates a new dataset with a default number of rows (100), creates a new prompt and response creation project, and assigns a batch of the newly created data rows to the project.
>>> client.create_prompt_response_generation_project(name=project_name, dataset_name="new data set", data_row_count=10, media_type=MediaType.LLMPromptCreation)
This creates a new dataset with 10 data rows, creates a new prompt creation project, and assigns a batch of the newly created data rows to the project.
>>> client.create_prompt_response_generation_project(name=project_name, dataset_id="clr00u8j0j0j0", media_type=MediaType.LLMPromptCreation)
This creates a new prompt creation project, adds 100 data rows to the dataset with id "clr00u8j0j0j0", and assigns a batch of the newly created data rows to the project.
>>> client.create_prompt_response_generation_project(name=project_name, dataset_id="clr00u8j0j0j0", data_row_count=10, media_type=MediaType.LLMPromptResponseCreation)
This creates a new prompt and response creation project, adds 10 data rows to the dataset with id "clr00u8j0j0j0", and assigns a batch of the newly created data rows to the project.
- create_response_creation_project(name: str, description: str | None = None, quality_modes: Set[QualityMode] | None = {QualityMode.Benchmark, QualityMode.Consensus}, is_benchmark_enabled: bool | None = None, is_consensus_enabled: bool | None = None) Project [source]
Creates a project for response creation. See create_project for parameters.
- Returns:
The created project
- Return type:
Project
- delete_feature_schema_from_ontology(ontology_id: str, feature_schema_id: str) DeleteFeatureFromOntologyResult [source]
Deletes or archives a feature schema from an ontology. If the feature schema is a root level node with associated labels, it will be archived. If the feature schema is a nested node in the ontology and does not have associated labels, it will be deleted. If the feature schema is a nested node in the ontology and has associated labels, it will not be deleted.
- Parameters:
ontology_id (str) – The ID of the ontology.
feature_schema_id (str) – The ID of the feature schema.
- Returns:
The result of the feature schema removal.
- Return type:
DeleteFeatureFromOntologyResult
Example
>>> client.delete_feature_schema_from_ontology(<ontology_id>, <feature_schema_id>)
- delete_model_config(id: str) bool [source]
Deletes an existing model config with the given id
- Parameters:
id (str) – ID of existing model config
- Returns:
bool, indicates if the operation was a success.
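A minimal usage sketch (the model config id is a placeholder):
>>> success = client.delete_model_config("<model_config_id>")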
- delete_unused_feature_schema(feature_schema_id: str) None [source]
Deletes a feature schema if it is not used by any ontologies or annotations.
- Parameters:
feature_schema_id (str) – The id of the feature schema to delete
Example
>>> client.delete_unused_feature_schema("cleabc1my012ioqvu5anyaabc")
- delete_unused_ontology(ontology_id: str) None [source]
Deletes an ontology if it is not used by any annotations.
- Parameters:
ontology_id (str) – The id of the ontology to delete
Example
>>> client.delete_unused_ontology("cleabc1my012ioqvu5anyaabc")
- execute(query=None, params=None, data=None, files=None, timeout=60.0, experimental=False, error_log_key='message', raise_return_resource_not_found=False, error_handlers: Dict[str, Callable[[Response], None]] | None = None) Dict[str, Any] [source]
Executes a GraphQL query.
- Parameters:
query (str) – The query to execute.
params (dict) – Variables to pass to the query.
data (dict) – Includes the query and variables as well as the map for file upload multipart/form-data requests as per GraphQL multipart request specification.
files (dict) – File descriptors to pass to the query for file upload multipart/form-data requests.
timeout (float) – Timeout for the request.
experimental (bool) – Whether to use experimental features.
error_log_key (str) – Key to use for error logging.
raise_return_resource_not_found (bool) – If True, raise a ResourceNotFoundError if the query returns None.
error_handlers (dict) – A dictionary mapping GraphQL error codes to handler functions. Allows a caller to handle specific errors in a custom way or produce more user-friendly, readable messages.
- Returns:
The response from the server.
- Return type:
dict
See UserGroupV2.upload_members for an example of how to use this method for file upload.
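A minimal sketch of a raw query; the query string and the shape of the returned dictionary are illustrative, not a documented schema:
>>> res = client.execute("query { organization { id name } }")
>>> print(res["organization"]["name"])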
- get_catalog_slice(slice_id) CatalogSlice [source]
Fetches a Catalog Slice by ID.
- Parameters:
slice_id (str) – The ID of the Slice
- Returns:
CatalogSlice
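A minimal usage sketch (the slice id is a placeholder):
>>> catalog_slice = client.get_catalog_slice("<slice_id>")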
- get_data_row(data_row_id)[source]
- Returns:
A single data row given the data row id.
- Return type:
DataRow
- get_data_row_by_global_key(global_key: str) DataRow [source]
- Returns:
A single data row given the global key.
- Return type:
DataRow
- get_data_row_ids_for_external_ids(external_ids: List[str]) Dict[str, List[str]] [source]
Returns a list of data row ids for a list of external ids. There is a max of 1500 items returned at a time.
- Parameters:
external_ids – List of external ids to fetch data row ids for
- Returns:
A dict of external ids as keys and values as a list of data row ids that correspond to that external id.
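A minimal usage sketch (the external ids are placeholders):
>>> data_row_ids = client.get_data_row_ids_for_external_ids(["file_1.jpg", "file_2.jpg"])
>>> print(data_row_ids["file_1.jpg"])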
- get_data_row_ids_for_global_keys(global_keys: List[str], timeout_seconds=60) Dict[str, str | List[Any]] [source]
Gets data row ids for a list of global keys.
- Parameters:
global_keys (List[str]) – A list of global keys to fetch data row ids for.
- Returns:
Dictionary containing ‘status’, ‘results’ and ‘errors’.
’Status’ contains the outcome of this job. It can be one of ‘Success’, ‘Partial Success’, or ‘Failure’.
’Results’ contains a list of the fetched corresponding data row ids in the input order. For data rows that cannot be fetched due to an error, or data rows that do not exist, empty string is returned at the position of the respective global_key. More error information can be found in the ‘Errors’ section.
’Errors’ contains a list of global_keys that could not be fetched, along with the failure reason
Examples
>>> job_result = client.get_data_row_ids_for_global_keys(["key1","key2"]) >>> print(job_result['status']) Partial Success >>> print(job_result['results']) ['cl7tv9wry00hlka6gai588ozv', 'cl7tv9wxg00hpka6gf8sh81bj'] >>> print(job_result['errors']) [{'global_key': 'asdf', 'error': 'Data Row not found'}]
- get_data_row_metadata_ontology() DataRowMetadataOntology [source]
- Returns:
The ontology for Data Row Metadata for an organization
- Return type:
DataRowMetadataOntology
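A minimal usage sketch:
>>> mdo = client.get_data_row_metadata_ontology()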
- get_dataset(dataset_id) Dataset [source]
Gets a single Dataset with the given ID.
>>> dataset = client.get_dataset("<dataset_id>")
- Parameters:
dataset_id (str) – Unique ID of the Dataset.
- Returns:
The sought Dataset.
- Raises:
ResourceNotFoundError – If there is no Dataset with the given ID.
- get_datasets(where=None) PaginatedCollection [source]
Fetches one or more datasets.
>>> datasets = client.get_datasets(where=(Dataset.name == "<dataset_name>") & (Dataset.description == "<dataset_description>"))
- Parameters:
where (Comparison, LogicalOperation or None) – The where clause for filtering.
- Returns:
PaginatedCollection of all datasets the user has access to or datasets matching the criteria specified.
- get_embedding_by_id(id: str) Embedding [source]
Return the embedding for the provided embedding id.
- Parameters:
id – The embedding ID.
- Returns:
The embedding object.
- get_embedding_by_name(name: str) Embedding [source]
Return the embedding for the provided embedding name.
- Parameters:
name – The embedding name
- Returns:
The embedding object.
- get_embeddings() List[Embedding] [source]
Return a list of all embeddings for the current organization.
- Returns:
A list of embedding objects.
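A minimal usage sketch combining the embedding lookups above (the embedding name is a placeholder):
>>> embeddings = client.get_embeddings()
>>> embedding = client.get_embedding_by_name("<embedding_name>")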
- get_feature_schema(feature_schema_id)[source]
Fetches a feature schema. Only supports top level feature schemas.
- Parameters:
feature_schema_id (str) – The id of the feature schema to query for
- Returns:
FeatureSchema
- get_feature_schemas(name_contains) PaginatedCollection [source]
Fetches top level feature schemas with names that match the name_contains string
- Parameters:
name_contains (str) – Search filter for the name of a root feature schema. If present, results in a case-insensitive ‘like’ search for feature schemas; if None, returns all top level feature schemas.
- Returns:
PaginatedCollection of FeatureSchemas with names that match name_contains
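A minimal usage sketch (the search string is a placeholder):
>>> feature_schemas = client.get_feature_schemas("<feature_schema_name>")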
- get_labeling_service_dashboards(search_query: List[OrganizationFilter | WorkspaceFilter | SharedWithOrganizationFilter | TagFilter | ProjectStageFilter | WorkforceRequestedDateFilter | WorkforceStageUpdatedFilter | WorkforceRequestedDateRangeFilter | WorkforceStageUpdatedRangeFilter | TaskCompletedCountFilter | TaskRemainingCountFilter] | None = None) PaginatedCollection [source]
Get all labeling service dashboards for a given org.
- Parameters:
search_query (optional) – A list of search filters representing the search.
Note
Retrieves all projects for the organization or as filtered by the search query, INCLUDING those not requesting labeling services.
Sorted by project created date in ascending order.
Examples
Retrieves all labeling service dashboards for a given workspace id:
>>> workspace_filter = WorkspaceFilter(
>>>     operation=OperationType.Workspace,
>>>     operator=IdOperator.Is,
>>>     values=[workspace_id])
>>> labeling_service_dashboard = [
>>>     ld for ld in project.client.get_labeling_service_dashboards(search_query=[workspace_filter])]
Retrieves all labeling service dashboards requested less than 7 days ago:
>>> seven_days_ago = (datetime.now() - timedelta(days=7)).strftime("%Y-%m-%d")
>>> workforce_requested_filter_before = WorkforceRequestedDateFilter(
>>>     operation=OperationType.WorforceRequestedDate,
>>>     value=DateValue(operator=RangeDateTimeOperatorWithSingleValue.GreaterThanOrEqual,
>>>         value=seven_days_ago))
>>> labeling_service_dashboard = [ld for ld in project.client.get_labeling_service_dashboards(search_query=[workforce_requested_filter_before])]
See libs/labelbox/src/labelbox/schema/search_filters.py and libs/labelbox/tests/unit/test_unit_search_filters.py for more examples.
- get_model(model_id) Model [source]
Gets a single Model with the given ID.
>>> model = client.get_model("<model_id>")
- Parameters:
model_id (str) – Unique ID of the Model.
- Returns:
The sought Model.
- Raises:
ResourceNotFoundError – If there is no Model with the given ID.
- get_model_run(model_run_id: str) ModelRun [source]
Gets a single ModelRun with the given ID.
>>> model_run = client.get_model_run("<model_run_id>")
- Parameters:
model_run_id (str) – Unique ID of the ModelRun.
- Returns:
A ModelRun object.
- get_model_slice(slice_id) ModelSlice [source]
Fetches a Model Slice by ID.
- Parameters:
slice_id (str) – The ID of the Slice
- Returns:
ModelSlice
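A minimal usage sketch (the slice id is a placeholder):
>>> model_slice = client.get_model_slice("<slice_id>")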
- get_models(where=None) List[Model] [source]
Fetches all the models the user has access to.
>>> models = client.get_models(where=(Model.name == "<model_name>"))
- Parameters:
where (Comparison, LogicalOperation or None) – The where clause for filtering.
- Returns:
An iterable of Models (typically a PaginatedCollection).
- get_ontologies(name_contains) PaginatedCollection [source]
Fetches all ontologies with names that match the name_contains string.
- Parameters:
name_contains (str) – the string to search ontology names by
- Returns:
PaginatedCollection of Ontologies with names that match name_contains
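A minimal usage sketch (the search string is a placeholder):
>>> ontologies = client.get_ontologies("<ontology_name>")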
- get_ontology(ontology_id) Ontology [source]
Fetches an Ontology by id.
- Parameters:
ontology_id (str) – The id of the ontology to query for
- Returns:
Ontology
- get_organization() Organization [source]
Gets the Organization DB object of the current user.
>>> organization = client.get_organization()
- get_project(project_id) Project [source]
Gets a single Project with the given ID.
>>> project = client.get_project("<project_id>")
- Parameters:
project_id (str) – Unique ID of the Project.
- Returns:
The sought Project.
- Raises:
ResourceNotFoundError – If there is no Project with the given ID.
- get_projects(where=None) PaginatedCollection [source]
Fetches all the projects the user has access to.
>>> projects = client.get_projects(where=(Project.name == "<project_name>") & (Project.description == "<project_description>"))
- Parameters:
where (Comparison, LogicalOperation or None) – The where clause for filtering.
- Returns:
PaginatedCollection of all projects the user has access to or projects matching the criteria specified.
- get_roles() Dict[str, Role] [source]
- Returns:
Provides information on available roles within an organization. Roles are used for user management.
- Return type:
Roles
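A minimal usage sketch; the role key shown is illustrative:
>>> roles = client.get_roles()
>>> print(roles["LABELER"])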
- get_task_by_id(task_id: str) Task | DataUpsertTask [source]
Fetches a task by ID.
- Parameters:
task_id (str) – The ID of the task.
- Returns:
Task or DataUpsertTask
- Throws:
ResourceNotFoundError: If the task does not exist.
NOTE: Export task is not supported yet
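A minimal usage sketch (the task id is a placeholder):
>>> task = client.get_task_by_id("<task_id>")
>>> print(task.status)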
- get_unused_feature_schemas(after: str | None = None) List[str] [source]
Returns a list of unused feature schema ids.
- Parameters:
after (str) – The cursor to use for pagination
- Returns:
A list of unused feature schema ids
Example
To get the first page of unused feature schema ids (100 at a time)
>>> client.get_unused_feature_schemas()
To get the next page of unused feature schema ids
>>> client.get_unused_feature_schemas("cleabc1my012ioqvu5anyaabc")
- get_unused_ontologies(after: str | None = None) List[str] [source]
Returns a list of unused ontology ids.
- Parameters:
after (str) – The cursor to use for pagination
- Returns:
A list of unused ontology ids
Example
To get the first page of unused ontology ids (100 at a time)
>>> client.get_unused_ontologies()
To get the next page of unused ontology ids
>>> client.get_unused_ontologies("cleabc1my012ioqvu5anyaabc")
- get_users(where=None) PaginatedCollection [source]
Fetches all the users.
>>> users = client.get_users(where=User.email == "<user_email>")
- Parameters:
where (Comparison, LogicalOperation or None) – The where clause for filtering.
- Returns:
An iterable of Users (typically a PaginatedCollection).
- insert_feature_schema_into_ontology(feature_schema_id: str, ontology_id: str, position: int) None [source]
Inserts a feature schema into an ontology. If the feature schema is already in the ontology, it will be moved to the new position.
- Parameters:
feature_schema_id (str) – The feature schema id to upsert
ontology_id (str) – The id of the ontology to insert the feature schema into
position (int) – The position number of the feature schema in the ontology
Example
>>> client.insert_feature_schema_into_ontology("cleabc1my012ioqvu5anyaabc", "clefdvwl7abcgefgu3lyvcde", 2)
- is_feature_schema_archived(ontology_id: str, feature_schema_id: str) bool [source]
Returns true if a feature schema is archived in the specified ontology, returns false otherwise.
- Parameters:
feature_schema_id (str) – The ID of the feature schema
ontology_id (str) – The ID of the ontology
- Returns:
bool
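A minimal usage sketch (the ids are placeholders):
>>> is_archived = client.is_feature_schema_archived("<ontology_id>", "<feature_schema_id>")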
- run_foundry_app(model_run_name: str, data_rows: UniqueIds | GlobalKeys, app_id: str) Task [source]
Run a foundry app
- Parameters:
model_run_name (str) – Name of a new model run to store app predictions in
data_rows (DataRowIds or GlobalKeys) – Data row identifiers to run predictions on
app_id (str) – Foundry app to run predictions with
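A minimal usage sketch, assuming GlobalKeys is imported from the SDK's identifiables module; the keys and app id are placeholders:
>>> task = client.run_foundry_app(
>>>     model_run_name="foundry-predictions",
>>>     data_rows=GlobalKeys(["key1", "key2"]),
>>>     app_id="<foundry_app_id>")
>>> task.wait_till_done()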
- send_to_annotate_from_catalog(destination_project_id: str, task_queue_id: str | None, batch_name: str, data_rows: UniqueIds | GlobalKeys, params: Dict[str, Any])[source]
Sends data rows from catalog to a specified project for annotation.
- Example usage:
>>> task = client.send_to_annotate_from_catalog(
>>>     destination_project_id=DESTINATION_PROJECT_ID,
>>>     task_queue_id=TASK_QUEUE_ID,
>>>     batch_name="batch_name",
>>>     data_rows=UniqueIds([DATA_ROW_ID]),
>>>     params={
>>>         "source_project_id": SOURCE_PROJECT_ID,
>>>         "override_existing_annotations_rule": ConflictResolutionStrategy.OverrideWithAnnotations
>>>     })
>>> task.wait_till_done()
- Parameters:
destination_project_id – The ID of the project to send the data rows to.
task_queue_id – The ID of the task queue to send the data rows to. If not specified, the data rows will be sent to the Done workflow state.
batch_name – The name of the batch to create. If more than one batch is created, additional batches will be named with a monotonically increasing numerical suffix, starting at “_1”.
data_rows – The data rows to send to the project.
params – Additional parameters to configure the job. See SendToAnnotateFromCatalogParams for more details.
- Returns:
The created task for this operation.
- unarchive_feature_schema_node(ontology_id: str, root_feature_schema_id: str) None [source]
Unarchives a feature schema node in an ontology. Only root level feature schema nodes can be unarchived.
- Parameters:
ontology_id (str) – The ID of the ontology
root_feature_schema_id (str) – The ID of the root level feature schema
- Returns:
None
- update_feature_schema_title(feature_schema_id: str, title: str) FeatureSchema [source]
Updates the title of a feature schema.
- Parameters:
feature_schema_id (str) – The id of the feature schema to update
title (str) – The new title of the feature schema
- Returns:
The updated feature schema
Example
>>> client.update_feature_schema_title("cleabc1my012ioqvu5anyaabc", "New Title")
- upsert_feature_schema(feature_schema: Dict) FeatureSchema [source]
Upserts a feature schema.
- Parameters:
feature_schema (Dict) – Dict representing the feature schema to upsert
- Returns:
The upserted feature schema
Example
Insert a new feature schema
>>> tool = Tool(name="tool", tool=Tool.Type.BOUNDING_BOX, color="#FF0000")
>>> client.upsert_feature_schema(tool.asdict())
Update an existing feature schema
>>> tool = Tool(feature_schema_id="cleabc1my012ioqvu5anyaabc", name="tool", tool=Tool.Type.BOUNDING_BOX, color="#FF0000")
>>> client.upsert_feature_schema(tool.asdict())
- upsert_label_feedback(label_id: str, feedback: str, scores: Dict[str, float]) List[LabelScore] [source]
Submits label feedback, which consists of a free-form text comment and numeric label scores.
- Parameters:
label_id – Target label ID
feedback – Free text comment regarding the label
scores – A dict of scores where the key is a score name and the value is the score value
- Returns:
A list of LabelScore instances
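A minimal usage sketch; the label id, feedback text, and score name are placeholders:
>>> label_scores = client.upsert_label_feedback(
>>>     label_id="<label_id>",
>>>     feedback="Label looks accurate overall",
>>>     scores={"quality": 4.0})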