AppApi

class AppApi[source]

Bases: supervisely.api.task_api.TaskApi

AppApi

Methods

create_task_detached

delete_unused_checkpoints

rtype: Dict

deploy_model

deploy_model_async

rtype: int

download_git_archive

download_git_file

download_import_file

get_context

Get context information by task ID.

get_ecosystem_module_id

Returns ecosystem module id by slug.

get_ecosystem_module_info

Returns ModuleInfo object by module id and version.

get_field

get_fields

get_import_files_list

get_info

get_info_by_id

Returns application info by numeric ID.

get_list

rtype: List[AppInfo]

get_list_all_pages

Get list of all or limited quantity entities from the Supervisely server.

get_list_all_pages_generator

This generator function retrieves a list of all or a limited quantity of entities from the Supervisely server, yielding batches of entities as they are retrieved.

get_list_ecosystem_modules

get_list_idx_page_async

Get the list of items for a given page number.

get_list_page_generator_async

Yields list of images in dataset asynchronously page by page.

get_sessions

rtype: List[SessionInfo]

get_status

rtype: Status

get_training_metrics

get_url

info_sequence

info_tuple_name

initialize

is_ready_for_api_calls

Checks if app is ready for API calls.

list_checkpoints

raise_for_status

Raise error if Task status is ERROR.

run_dtl

run_inference

run_train

send_request

set_field

rtype: Dict

set_fields

rtype: Dict

set_fields_from_dict

rtype: Dict

set_output_archive

rtype: Dict

set_output_directory

set_output_error

Set custom error message to the task output.

set_output_file_download

rtype: Dict

set_output_project

rtype: Dict

set_output_report

rtype: Dict

set_output_text

Set custom text message to the task output.

start

rtype: SessionInfo

stop

rtype: Status

submit_logs

rtype: None

update_meta

Update given task metadata.

update_status

Sets the specified status for the task.

upload_dtl_archive

upload_files

wait

wait_until_ready_for_api_calls

Waits until app is ready for API calls.

Attributes

MAX_WAIT_ATTEMPTS

Maximum number of attempts that will be made to wait for a certain condition to be met.

WAIT_ATTEMPT_TIMEOUT_SEC

Number of seconds for intervals between attempts.

InfoType

alias of supervisely.api.module_api.AppInfo

class PluginTaskType

Bases: supervisely.collection.str_enum.StrEnum

PluginTaskType

class RestartPolicy

Bases: supervisely.collection.str_enum.StrEnum

RestartPolicy

class Status

Bases: supervisely.collection.str_enum.StrEnum

Status

class Workflow[source]

Bases: object

The workflow functionality is used to create connections between the states of projects and tasks (application sessions) that interact with them in some way. By assigning connections to various entities, the workflow tab allows tracking the history of project changes. The active task always acts as a node, for which input and output elements are defined. There can be multiple input and output elements. A task can also be used as an input or output element. For example, an inference task takes a deployed model and a project as inputs, and the output is a new state of the project. This functionality uses versioning optionally.

If instances are not compatible with the workflow features, the functionality will be disabled.

Parameters
api : supervisely.api.api.Api

Supervisely API object

min_instance_version : str

Minimum version of the instance that supports workflow features
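
Usage example (a minimal hedged sketch; the IDs below are placeholders and the Workflow helper is assumed to be exposed as api.app.workflow):

import supervisely as sly

api = sly.Api.from_env()  # assumes SERVER_ADDRESS and API_TOKEN are set in the environment

project_id = 12345  # placeholder: project the app works with
task_id = 67890     # placeholder: task ID of the running app session

if api.app.workflow.is_enabled():
    # register the project as an input of the current task node
    api.app.workflow.add_input_project(project_id, task_id=task_id)

    # ... the app processes the project here ...

    # register the resulting project state as an output of the same node
    api.app.workflow.add_output_project(project_id, task_id=task_id)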

add_input_dataset(dataset, task_id=None, meta=None)[source]

Add input type “dataset” to the workflow node. This type is used to show that the application has used the specified dataset. Customization of the dataset node is not supported and will be ignored. You can only customize the main node with this method.

Parameters
dataset : Union[int, DatasetInfo]

Dataset ID or DatasetInfo object.

task_id : Optional[int]

Task ID. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict
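
Usage example (a hedged sketch; the dataset ID is a placeholder and the helper is assumed to be exposed as api.app.workflow):

import supervisely as sly

api = sly.Api.from_env()

dataset_id = 54321  # placeholder: dataset the app reads from
# a DatasetInfo object (e.g. from api.dataset.get_info_by_id) would also be accepted
response = api.app.workflow.add_input_dataset(dataset_id)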

add_input_file(file, model_weight=False, task_id=None, meta=None)[source]

Add input type “file” to the workflow node. This type is used to show that the application has used the specified file.

Parameters
file : Union[int, FileInfo, str]

File ID, FileInfo object or file path in team Files.

model_weight : bool

Flag to indicate if the file is a model weight.

task_id : Optional[int]

Task ID. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict
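
Usage example (a hedged sketch; the Team Files path is a placeholder and the helper is assumed to be exposed as api.app.workflow):

import supervisely as sly

api = sly.Api.from_env()

weights_path = "/my-models/checkpoints/best.pt"  # placeholder path in Team Files
# mark the file as model weights used by the app
response = api.app.workflow.add_input_file(weights_path, model_weight=True)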

add_input_folder(path, task_id=None, meta=None)[source]

Add input type “folder” to the workflow node. Path to the folder is a path in Team Files. This type is used to show that the application has used files from the specified folder.

Parameters
path : str

Path to the folder in Team Files.

task_id : Optional[int]

Task ID. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict

add_input_job(id, task_id=None, meta=None)[source]

Add input type “job” to the workflow node. Job is a Labeling Job. This type indicates that the application has utilized a labeling job during its operation.

Parameters
id : int

Labeling Job ID.

task_id : Optional[int]

Task ID. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict
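
Usage example (a hedged sketch; the job ID is a placeholder and the helper is assumed to be exposed as api.app.workflow):

import supervisely as sly

api = sly.Api.from_env()

job_id = 777  # placeholder: Labeling Job used by the app
response = api.app.workflow.add_input_job(job_id)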

add_input_project(project=None, version_id=None, version_num=None, task_id=None, meta=None)[source]

Add input type “project” to the workflow node. The project version can be specified to indicate that this specific version was used for the task. Arguments project and version_id are mutually exclusive. If both are specified, version_id will be used. Argument version_num can only be used in conjunction with project. This type is used to show that the application has used the specified project. Customization of the project node is not supported and will be ignored. You can only customize the main node with this method.

Parameters
project : Optional[Union[int, ProjectInfo]]

Project ID or ProjectInfo object.

version_id : Optional[int]

Version ID of the project.

version_num : Optional[int]

Version number of the project. This argument can only be used in conjunction with the project.

task_id : Optional[int]

Task ID. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict
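
Usage example (a hedged sketch showing the mutually exclusive arguments; all IDs are placeholders and the helper is assumed to be exposed as api.app.workflow):

import supervisely as sly

api = sly.Api.from_env()

project_id = 12345  # placeholder project ID
version_num = 3     # placeholder project version number

# version_num can only be used together with project
api.app.workflow.add_input_project(project=project_id, version_num=version_num)

# alternatively, reference a version record directly;
# if both project and version_id are passed, version_id takes precedence
version_id = 98765  # placeholder project version ID
api.app.workflow.add_input_project(version_id=version_id)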

add_input_task(input_task_id, task_id=None, meta=None)[source]

Add input type “task” to the workflow node. This type usually indicates that one application has used another application for its work.

Parameters
input_task_id : int

Task ID that is used as input.

task_id : Optional[int]

Task ID of the node. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict

add_output_app(task_id=None, meta=None)[source]

Add output type “app_session” to the workflow node. This type is used to show that the application has an offline session in which you can find the result of its work.

Parameters
task_id : Optional[int]

App Task ID. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict
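
Usage example (a hedged sketch; the helper is assumed to be exposed as api.app.workflow):

import supervisely as sly

api = sly.Api.from_env()

# mark that the current app session stores its results in an offline session;
# the task ID is determined automatically when not passed
response = api.app.workflow.add_output_app()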

add_output_dataset(dataset, task_id=None, meta=None)[source]

Add output type “dataset” to the workflow node. This type is used to show that the application has created a dataset with the result of its work. Customization of the dataset node is not supported and will be ignored. You can only customize the main node with this method.

Parameters
dataset : Union[int, DatasetInfo]

Dataset ID or DatasetInfo object.

task_id : Optional[int]

Task ID. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict

add_output_file(file, model_weight=False, task_id=None, meta=None)[source]

Add output type “file” to the workflow node. This type is used to show that the application has created a file with the result of its work.

Parameters
file : Union[int, FileInfo]

File ID or FileInfo object.

model_weight : bool

Flag to indicate if the file is a model weight.

task_id : Optional[int]

Task ID. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict
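
Usage example (a hedged sketch; the team ID and Team Files path are placeholders, the helper is assumed to be exposed as api.app.workflow, and api.file.get_info_by_path is assumed to be available to resolve the FileInfo):

import supervisely as sly

api = sly.Api.from_env()

team_id = 111                           # placeholder team ID
remote_path = "/results/model/best.pt"  # placeholder path in Team Files
file_info = api.file.get_info_by_path(team_id, remote_path)
response = api.app.workflow.add_output_file(file_info, model_weight=True)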

add_output_folder(path, task_id=None, meta=None)[source]

Add output type “folder” to the workflow node. Path to the folder is a path in Team Files. This type is used to show that the application has created a folder with the result files of its work.

Parameters
path : str

Path to the folder.

task_id : Optional[int]

Task ID. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict

add_output_job(id, task_id=None, meta=None)[source]

Add output type “job” to the workflow node. Job is a Labeling Job. This type is used to show that the application has created a labeling job with the result of its work.

Parameters
id : int

Labeling Job ID.

task_id : Optional[int]

Task ID. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict

add_output_project(project, version_id=None, task_id=None, meta=None)[source]

Add output type “project” to the workflow node. The project version can be specified with the version_id argument to indicate that the version was created specifically as a result of this task. This type is used to show that the application has created a project with the result of its work. Customization of the project node is not supported and will be ignored. You can only customize the main node with this method.

Parameters
project : Union[int, ProjectInfo]

Project ID or ProjectInfo object.

version_id : Optional[int]

Version ID of the project.

task_id : Optional[int]

Task ID. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict

add_output_task(output_task_id, task_id=None, meta=None)[source]

Add output type “task” to the workflow node. This type is used to show that the application has created a task with the result of its work.

Parameters
output_task_id : int

Created task ID.

task_id : Optional[int]

Task ID of the node. If not specified, the task ID will be determined automatically.

meta : Optional[Union[WorkflowMeta, dict]]

Additional data for node customization.

Returns

Response from the API.

Return type

dict
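
Usage example (a hedged sketch; the task ID is a placeholder and the helper is assumed to be exposed as api.app.workflow):

import supervisely as sly

api = sly.Api.from_env()

spawned_task_id = 222333  # placeholder: task created by this app
response = api.app.workflow.add_output_task(spawned_task_id)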

check_instance_compatibility()[source]

Decorator to check instance compatibility with workflow features. If the instance is not compatible, the function will not be executed.

Parameters
min_instance_version : Optional[str]

Minimum instance version that supports the workflow method. If not specified, the minimum version is “6.9.31”.

disable()[source]

Disable the workflow functionality.

enable()[source]

Enable the workflow functionality.

is_enabled()[source]

Check if the workflow functionality is enabled.

Return type

bool

create_task_detached(workspace_id, task_type=None)[source]
delete_unused_checkpoints(task_id)
Return type

Dict

deploy_model(agent_id, model_id)[source]
deploy_model_async(agent_id, model_id)
Return type

int

download_git_archive(ecosystem_item_id, app_id, version, save_path, log_progress=True, ext_logger=None)[source]
download_git_file(app_id, version, file_path, save_path)[source]
download_import_file(id, file_path, save_path)[source]
get_context(id)

Get context information by task ID.

Parameters
id : int

Task ID in Supervisely.

Returns

Context information in dict format

Return type

dict

Usage example
import os

import supervisely as sly

task_id = 121230

os.environ['SERVER_ADDRESS'] = 'https://app.supervisely.com'
os.environ['API_TOKEN'] = 'Your Supervisely API Token'
api = sly.Api.from_env()

context = api.task.get_context(task_id)
print(context)
# Output: {
#     "team": {
#         "id": 16087,
#         "name": "alexxx"
#     },
#     "workspace": {
#         "id": 23821,
#         "name": "my_super_workspace"
#     }
# }
get_ecosystem_module_id(slug)[source]

Returns ecosystem module id by slug, e.g. slug = “supervisely-ecosystem/export-to-supervisely-format”. The slug can be obtained from the application URL in the browser.

Parameters
slug : str

module slug, starts with “supervisely-ecosystem/”

Returns

ID of the module

Return type

int

Raises
  • KeyError – if a module with the given slug is not found

  • KeyError – if multiple modules share the same slug

Usage example
import os
from dotenv import load_dotenv

import supervisely as sly

# Load secrets and create API object from .env file (recommended)
# Learn more here: https://developer.supervisely.com/getting-started/basics-of-authentication
load_dotenv(os.path.expanduser("~/supervisely.env"))
api = sly.Api.from_env()

slug = "supervisely-ecosystem/export-to-supervisely-format"
module_id = api.app.get_ecosystem_module_id(slug)
print(f"Module {slug} has id {module_id}")
# Module supervisely-ecosystem/export-to-supervisely-format has id 81
get_ecosystem_module_info(module_id, version=None)[source]

Returns ModuleInfo object by module id and version.

Parameters
module_id : int

ID of the module

version : Optional[str]

version of the module, e.g. “v1.0.0”

Returns

ModuleInfo object

Return type

ModuleInfo

Usage example
import os
from dotenv import load_dotenv

import supervisely as sly

# Load secrets and create API object from .env file (recommended)
# Learn more here: https://developer.supervisely.com/getting-started/basics-of-authentication
load_dotenv(os.path.expanduser("~/supervisely.env"))
api = sly.Api.from_env()

module_id = 81
module_info = api.app.get_ecosystem_module_info(module_id)
get_field(task_id, field)
get_fields(task_id, fields)
get_import_files_list(id)[source]
get_info(module_id, version=None)[source]
get_info_by_id(id)[source]
Parameters
id : int

Application ID in Supervisely.

Return type

AppInfo

Returns

Application info by numeric ID.
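
Usage example (a hedged sketch; the app ID below is a placeholder):

import supervisely as sly

api = sly.Api.from_env()  # assumes SERVER_ADDRESS and API_TOKEN are set in the environment

app_id = 81  # placeholder application ID
app_info = api.app.get_info_by_id(app_id)
print(app_info)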

get_list(team_id, filter=None, context=None, repository_key=None, show_disabled=False, integrated_into=None, session_tags=None, only_running=False, with_shared=True)[source]
Return type

List[AppInfo]
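
Usage example (a hedged sketch; the team ID is a placeholder):

import supervisely as sly

api = sly.Api.from_env()

team_id = 16087  # placeholder team ID
apps = api.app.get_list(team_id)
print(f"Found {len(apps)} apps in team {team_id}")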

get_list_all_pages(method, data, progress_cb=None, convert_json_info_cb=None, limit=None, return_first_response=False)

Get list of all or limited quantity entities from the Supervisely server.

Parameters
method : str

Request method name

data : dict

Dictionary with request body info

progress_cb : Progress, optional

Function for tracking download progress.

convert_json_info_cb : Callable, optional

Function to convert JSON info.

limit : int, optional

Number of entities to retrieve.

return_first_response : bool, optional

Specify whether to return only the first response.
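
Usage example (a hedged sketch that reuses the method name and request body from the async example below; the dataset ID is a placeholder):

import supervisely as sly

api = sly.Api.from_env()

method = 'images.list'
data = {
    'datasetId': 123456
}

# retrieve all pages of the listing at once
image_infos = api.image.get_list_all_pages(method, data)
print(f"Retrieved {len(image_infos)} images")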

get_list_all_pages_generator(method, data, progress_cb=None, convert_json_info_cb=None, limit=None, return_first_response=False)

This generator function retrieves a list of all or a limited quantity of entities from the Supervisely server, yielding batches of entities as they are retrieved.

Parameters
method : str

Request method name

data : dict

Dictionary with request body info

progress_cb : Progress, optional

Function for tracking download progress.

convert_json_info_cb : Callable, optional

Function to convert JSON info.

limit : int, optional

Number of entities to retrieve.

return_first_response : bool, optional

Specify whether to return only the first response.

async get_list_idx_page_async(method, data)

Get the list of items for a given page number. Page number is specified in the data dictionary.

Parameters
method : str

Method to call for listing items.

data : dict

Data to pass to the API method.

Returns

List of items.

Return type

Tuple[int, List[NamedTuple]]

async get_list_page_generator_async(method, data, pages_count=None, semaphore=None)

Yields list of images in dataset asynchronously page by page.

Parameters
method : str

Method to call for listing items.

data : dict

Data to pass to the API method.

pages_count : int, optional

Preferred number of pages to retrieve if used with a per_page limit. Will be automatically adjusted if the pagesCount differs from the requested number.

semaphore : asyncio.Semaphore, optional

Semaphore for limiting the number of simultaneous requests.

kwargs

Additional arguments.

Returns

List of images in dataset.

Return type

AsyncGenerator[List[ImageInfo]]

Usage example
import os
import asyncio

import supervisely as sly

os.environ['SERVER_ADDRESS'] = 'https://app.supervisely.com'
os.environ['API_TOKEN'] = 'Your Supervisely API Token'
api = sly.Api.from_env()

method = 'images.list'
data = {
    'datasetId': 123456
}

async def list_images():
    result = []
    # iterate over the async generator page by page
    async for page in api.image.get_list_page_generator_async(method, data):
        result.extend(page)
    return result

loop = sly.utils.get_or_create_event_loop()
images = loop.run_until_complete(list_images())
get_training_metrics(task_id)[source]
get_url(task_id)[source]
static info_sequence()[source]
static info_tuple_name()[source]
initialize(task_id, template, data=None, state=None)[source]
is_ready_for_api_calls(task_id)[source]

Checks if the app is ready for API calls.

Parameters
task_id : int

ID of the running task.

Return type

bool

Returns

True if the app is ready for API calls, False otherwise.

list_checkpoints(task_id)
raise_for_status(status)

Raise error if Task status is ERROR.

Parameters
status : Status

Status object.

Returns

None

Return type

NoneType
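
Usage example (a hedged sketch; the task ID is a placeholder):

import supervisely as sly

api = sly.Api.from_env()

task_id = 12345  # placeholder task ID
status = api.app.get_status(task_id)
api.app.raise_for_status(status)  # raises if the task status is ERROR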

run_dtl(workspace_id, dtl_graph, agent_id=None)[source]
run_inference(agent_id, input_project_id, input_model_id, result_project_name, inference_config=None)[source]
run_train(agent_id, input_project_id, input_model_id, result_nn_name, train_config=None)[source]
send_request(task_id, method, data, context={}, skip_response=False, timeout=60, outside_request=True, retries=10, raise_error=False)
set_field(task_id, field, payload, append=False, recursive=False)
Return type

Dict

set_fields(task_id, fields)
Return type

Dict

set_fields_from_dict(task_id, d)
Return type

Dict

set_output_archive(task_id, file_id, file_name, file_url=None)
Return type

Dict

set_output_directory(task_id, file_id, directory_path)
set_output_error(task_id, title, description=None, show_logs=True)

Set custom error message to the task output.

Parameters
task_id : int

Application task ID.

title : str

Error message to be displayed in the task output.

description : Optional[str]

Description to be displayed in the task output.

show_logs : Optional[bool], default True

If True, the link to the task logs will be displayed in the task output.

Returns

Response JSON.

Return type

Dict

Usage example
import os
from dotenv import load_dotenv

import supervisely as sly

# Load secrets and create API object from .env file (recommended)
# Learn more here: https://developer.supervisely.com/getting-started/basics-of-authentication
if sly.is_development():
   load_dotenv(os.path.expanduser("~/supervisely.env"))
api = sly.Api.from_env()

task_id = 12345
title = "Something went wrong"
description = "Please check the task logs"
show_logs = True
api.task.set_output_error(task_id, title, description, show_logs)
set_output_file_download(task_id, file_id, file_name, file_url=None, download=True)
Return type

Dict

set_output_project(task_id, project_id, project_name=None)
Return type

Dict

set_output_report(task_id, file_id, file_name)
Return type

Dict

set_output_text(task_id, title, description=None, show_logs=False, zmdi_icon='zmdi-comment-alt-text', icon_color='#33c94c', background_color='#d9f7e4')

Set custom text message to the task output.

Parameters
task_id : int

Application task ID.

title : str

Text message to be displayed in the task output.

description : Optional[str]

Description to be displayed in the task output.

show_logs : Optional[bool], default False

If True, the link to the task logs will be displayed in the task output.

zmdi_icon : Optional[str], default "zmdi-comment-alt-text"

Icon class name from Material Design Icons (ZMDI).

icon_color : Optional[str], default "#33c94c" (nearest Duron Jolly Green)

Icon color in HEX format.

background_color : Optional[str], default "#d9f7e4" (Cosmic Latte)

Background color in HEX format.

Returns

Response JSON.

Return type

Dict

Usage example

import os
from dotenv import load_dotenv

import supervisely as sly

# Load secrets and create API object from .env file (recommended)
# Learn more here: https://developer.supervisely.com/getting-started/basics-of-authentication
if sly.is_development():
    load_dotenv(os.path.expanduser("~/supervisely.env"))
api = sly.Api.from_env()

task_id = 12345
title = "Task is finished"
api.task.set_output_text(task_id, title)
stop(id)[source]
Return type

Status

submit_logs(logs)
Return type

None

update_meta(id, data, agent_storage_folder=None, relative_app_dir=None)

Update given task metadata.

Parameters
id : int

Task ID.

data : dict

Metadata to update.

update_status(task_id, status)

Sets the specified status for the task.

Parameters
task_id : int

Task ID in Supervisely.

status : One of the values from Status, e.g. Status.FINISHED, Status.ERROR, etc.

Task status to set.

Raises

ValueError – If the status value is not allowed.

Return type

None
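
Usage example (a hedged sketch; the task ID is a placeholder and Status is assumed to be the enum nested in this class):

import supervisely as sly

api = sly.Api.from_env()

task_id = 12345  # placeholder task ID
# mark the task as finished; a ValueError is raised for a status that is not allowed
api.app.update_status(task_id, api.app.Status.FINISHED)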

upload_dtl_archive(task_id, archive_path, progress_cb=None)
upload_files(task_id, abs_paths, names, progress_cb=None)[source]
wait(id, target_status, attempts=None, attempt_delay_sec=None)[source]
wait_until_ready_for_api_calls(task_id, attempts=10, attempt_delay_sec=10)[source]

Waits until app is ready for API calls.

Parameters
task_id : int

ID of the running task.

attempts : int

Number of attempts to check if app is ready for API calls.

attempt_delay_sec : int

Delay between attempts in seconds.

Return type

bool

Returns

True if app is ready for API calls, False otherwise.
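
Usage example (a hedged sketch; the task ID is a placeholder and the endpoint name passed to send_request is illustrative, since it depends on the target app):

import supervisely as sly

api = sly.Api.from_env()

task_id = 12345  # placeholder: ID of a running app session

# poll up to 10 times with a 10-second delay between attempts
if api.app.wait_until_ready_for_api_calls(task_id, attempts=10, attempt_delay_sec=10):
    # 'get_session_info' is an illustrative endpoint exposed by some serving apps
    result = api.app.send_request(task_id, "get_session_info", data={})
    print(result)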