Model Run

class labelbox.schema.model_run.DataSplit(value)[source]

Bases: Enum

An enumeration.

class labelbox.schema.model_run.ModelRun(client, field_values)[source]

Bases: DbObject

class Status(value)[source]

Bases: Enum

An enumeration.

add_predictions(name: str, predictions: str | Path | Iterable[Dict] | Iterable[Label]) MEAPredictionImport[source]

Uploads predictions to a new Editor project.

Parameters:
  • name (str) – name of the AnnotationImport job

  • predictions (str or Path or Iterable) – a URL publicly accessible by Labelbox that contains an ndjson file, OR a local path to an ndjson file, OR an iterable of annotation rows

Returns:

AnnotationImport
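A minimal sketch of assembling an in-memory iterable of prediction rows for add_predictions. The exact ndjson row schema depends on the annotation type; the field names and ids below are illustrative placeholders, not a guaranteed contract, and the client setup is shown commented out because it requires an API key.

```python
# Hedged sketch: building a predictions iterable for add_predictions.
# All ids and the row schema here are hypothetical placeholders.
from pathlib import Path

predictions = [
    {
        "uuid": "9fd9a92e-2560-4e77-81d4-b2e955800092",  # hypothetical uuid
        "dataRow": {"id": "ckrb1sf1i1g7i0ybcdc6oc8ct"},  # hypothetical data row id
        "name": "vehicle",
        "bbox": {"top": 48, "left": 58, "height": 65, "width": 12},
    }
]

# With a configured client the upload would look like (not executed here):
# import labelbox as lb
# client = lb.Client(api_key="<YOUR_API_KEY>")
# model_run = client.get_model_run("<MODEL_RUN_ID>")
# import_job = model_run.add_predictions(
#     name="prediction-import-1", predictions=predictions)

# add_predictions also accepts a URL string or a local Path to an ndjson file:
local_file = Path("predictions.ndjson")  # would be passed instead of the list
```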

delete()[source]

Deletes the specified Model Run.

Returns:

Query execution success.

delete_model_run_data_rows(data_row_ids: List[str])[source]

Deletes data rows from the Model Run.

Parameters:

data_row_ids (list) – List of data row ids to delete from the Model Run.

Returns:

Query execution success.
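A sketch of deleting data rows from a Model Run in fixed-size chunks, which can be useful for very large id lists. The chunking helper and chunk size are illustrative assumptions, not part of the SDK; delete_model_run_data_rows itself takes a flat list of ids.

```python
# Hedged sketch: batching ids before calling delete_model_run_data_rows.
# The helper below is an illustrative assumption, not an SDK utility.

def chunked(items, size):
    """Yield successive slices of `items` of at most `size` elements."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

data_row_ids = [f"datarow-{i}" for i in range(5)]  # hypothetical ids
batches = list(chunked(data_row_ids, 2))

# With a configured model_run the calls would be (not executed here):
# for batch in batches:
#     model_run.delete_model_run_data_rows(data_row_ids=batch)
```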

export(task_name: str | None = None, params: ModelRunExportParams | None = None) ExportTask[source]

Creates a model run export task with the given params and returns the task.

>>>    export_task = export("my_export_task", params={"media_attributes": True})
export_v2(task_name: str | None = None, params: ModelRunExportParams | None = None) Task | ExportTask[source]

Creates a model run export task with the given params and returns the task.

>>>    export_task = export_v2("my_export_task", params={"media_attributes": True})
get_config() Dict[str, Any][source]

Gets the Model Run’s training metadata.

Returns:

Training metadata as a dictionary.

reset_config() Dict[str, Any][source]

Resets the Model Run’s training metadata config.

Returns:

Model Run id and reset training metadata.

send_to_annotate_from_model(destination_project_id: str, task_queue_id: str | None, batch_name: str, data_rows: UniqueIds | GlobalKeys, params: SendToAnnotateFromModelParams) Task[source]

Sends data rows from a model run to a project for annotation.

Example Usage:
>>> task = model_run.send_to_annotate_from_model(
>>>     destination_project_id=DESTINATION_PROJECT_ID,
>>>     batch_name="batch",
>>>     data_rows=UniqueIds([DATA_ROW_ID]),
>>>     task_queue_id=TASK_QUEUE_ID,
>>>     params={})
>>> task.wait_till_done()
Parameters:
  • destination_project_id – The ID of the project to send the data rows to.

  • task_queue_id – The ID of the task queue to send the data rows to. If not specified, the data rows will be sent to the Done workflow state.

  • batch_name – The name of the batch to create. If more than one batch is created, additional batches will be named with a monotonically increasing numerical suffix, starting at “_1”.

  • data_rows – The data rows to send to the project.

  • params – Additional parameters for this operation. See SendToAnnotateFromModelParams for details.

Returns:

The created task for this operation.

update_config(config: Dict[str, Any]) Dict[str, Any][source]

Updates the Model Run’s training metadata config.

Parameters:
  • config (dict) – A dictionary of keys and values.

Returns:

Model Run id and updated training metadata
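A sketch of a read-modify-write cycle over the training metadata config using get_config, update_config, and reset_config. The config keys below ("learning_rate", "epochs") are hypothetical; the SDK treats the config as an arbitrary dictionary, and the API calls are shown commented out since they require a configured client.

```python
# Hedged sketch: merging new keys into an existing training metadata config.
# The keys are illustrative placeholders, not a required schema.

current = {"learning_rate": 0.001}  # stand-in for model_run.get_config()
patch = {"epochs": 10}

updated = {**current, **patch}      # merged dict to pass to update_config

# With a configured model_run (not executed here):
# model_run.update_config(updated)  # returns Model Run id and updated config
# model_run.reset_config()          # resets the training metadata config
```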

upsert_data_rows(data_row_ids=None, global_keys=None, timeout_seconds=3600)[source]

Adds data rows to a Model Run without any associated labels.

Parameters:
  • data_row_ids (list) – data row ids to add to the model run

  • global_keys (list) – global keys for data rows to add to the model run

  • timeout_seconds (float) – Max waiting time, in seconds.

Returns:

ID of newly generated async task
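A sketch of choosing between data_row_ids and global_keys when calling upsert_data_rows. The keyword-building helper and both identifier lists are hypothetical, shown only to illustrate that the method accepts either kind of identifier.

```python
# Hedged sketch: selecting which identifier list to pass to upsert_data_rows.
# The helper is an illustrative assumption, not an SDK utility.

def upsert_kwargs(data_row_ids=None, global_keys=None):
    """Build keyword arguments, passing only the identifier list given."""
    kwargs = {}
    if data_row_ids:
        kwargs["data_row_ids"] = data_row_ids
    if global_keys:
        kwargs["global_keys"] = global_keys
    return kwargs

kwargs = upsert_kwargs(global_keys=["image-001.jpg", "image-002.jpg"])

# With a configured model_run (not executed here):
# task_id = model_run.upsert_data_rows(**kwargs, timeout_seconds=3600)
```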

upsert_labels(label_ids: List[str] | None = None, project_id: str | None = None, timeout_seconds=3600)[source]

Adds data rows and labels to a Model Run

Parameters:
  • label_ids (list) – label ids to insert

  • project_id (string) – project uuid; all project labels will be uploaded. Either label_ids OR project_id is required, but not both.

  • timeout_seconds (float) – Max waiting time, in seconds.

Returns:

ID of newly generated async task
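The documented constraint that upsert_labels takes either label_ids or project_id, but not both, can be sketched as a small pre-flight check. The validation helper is an illustrative assumption, not part of the SDK, which performs its own validation server-side or in the method body.

```python
# Hedged sketch: enforcing the label_ids XOR project_id constraint before
# calling upsert_labels. The helper is an illustrative assumption.

def validate_upsert_labels_args(label_ids=None, project_id=None):
    """Raise if neither or both identifier arguments are supplied."""
    if bool(label_ids) == bool(project_id):
        raise ValueError("Pass exactly one of label_ids or project_id")

validate_upsert_labels_args(label_ids=["label-a", "label-b"])  # valid: one arg

# With a configured model_run (not executed here):
# task_id = model_run.upsert_labels(label_ids=["label-a", "label-b"])
```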

upsert_predictions_and_send_to_project(name: str, predictions: str | Path | Iterable[Dict], project_id: str, priority: int | None = 5) MEAPredictionImport[source]
Provides a convenient way to execute the following steps in a single function call:
  1. Upload predictions to a Model

  2. Create a batch from data rows that had predictions associated with them

  3. Attach the batch to a project

  4. Add those same predictions to the project as MAL annotations

Note that partial successes are possible. If it is important that all stages succeed, check the status of each individual task with task.errors, e.g.:

>>>    mea_import_job, batch, mal_import_job = upsert_predictions_and_send_to_project(name, predictions, project_id)
>>>    # handle mea import job successfully created (check for job failure or partial failures)
>>>    print(mea_import_job.status, mea_import_job.errors)
>>>    if batch is None:
>>>        # Handle batch creation failure
>>>    if mal_import_job is None:
>>>        # Handle mal_import_job creation failure
>>>    else:
>>>        # handle mal import job successfully created (check for job failure or partial failures)
>>>        print(mal_import_job.status, mal_import_job.errors)
Parameters:
  • name (str) – name of the AnnotationImport job as well as the name of the batch import

  • predictions (str or Path or Iterable) – url, local path, or iterable of annotation rows

  • project_id (str) – id of the project to import into

  • priority (int) – priority of the job

Returns:

Tuple[MEAPredictionImport, Batch, MEAToMALPredictionImport]. If any of these steps fails, the corresponding element of the tuple will be None.

class labelbox.schema.model_run.ModelRunDataRow(client, model_id, *args, **kwargs)[source]

Bases: DbObject