Defining Workflow Tasks
Specify the tasks that your apps can run.
Render Workflows are in limited early access.
During the early access period, the Workflows API and Python SDK might introduce breaking changes.
After you create your first workflow, you can start defining your own tasks. This article describes supported syntax and configuration options.
Basic example
The Workflows SDK is currently available only for Python, so tasks must be defined in Python. SDKs for other languages are coming soon.
Define a task in your workflow service by applying the @task decorator to any function, like so:
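Such a definition can be sketched as follows. The one-line task stand-in below substitutes for the SDK's real decorator import, whose exact path isn't shown here:

```python
# Stand-in for the SDK's @task decorator so this sketch runs on its own;
# in a real workflow service you would import it from the Workflows SDK.
def task(func):
    return func

@task
def calculate_square(number: int) -> int:
    # Takes a single integer argument and returns its square.
    return number * number
```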
This defines a basic task named calculate_square that takes a single integer argument and returns its square.
For all supported @task options, see the Python SDK reference.
Task arguments
A task's function can define any number of arguments. This example task takes three arguments of different types:
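A sketch of such a task, with illustrative argument names chosen here for the example (the one-line task stand-in fills in for the SDK's real decorator):

```python
# Stand-in for the SDK's @task decorator so this sketch runs on its own.
def task(func):
    return func

@task
def format_greeting(name: str, age: int, formal: bool) -> str:
    # Three arguments of different JSON-serializable types.
    greeting = "Good day" if formal else "Hi"
    return f"{greeting}, {name} ({age})"
```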
Task arguments are positional. Whenever you run a task, you provide its arguments in a JSON array in the same order as they appear in the task's function signature:
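As an illustration, for a hypothetical task with the signature (name: str, age: int, formal: bool), the JSON array pairs up with the parameters positionally:

```python
import json

# Arguments for a hypothetical task defined as
#   def format_greeting(name: str, age: int, formal: bool): ...
payload = '["Ada", 36, true]'  # JSON array, same order as the signature
name, age, formal = json.loads(payload)
print(name, age, formal)  # Ada 36 True
```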
Argument and return types must be JSON-serializable. Your applications provide task arguments in a JSON array via the Render API, and a task's result is also returned as JSON.
All task arguments are required. If you attempt to run a task with missing arguments (or too many arguments), the task will fail. Argument values can be null, as long as your task logic supports this.
Retry logic
Your tasks can automatically retry if a run fails. A task run is considered to have failed if its function raises an exception instead of returning a value.
Retries are useful for tasks that might be affected by temporary failures, such as network errors or timeouts.
Default retry behavior
By default, tasks use the following retry logic:
- Retry up to 3 times (i.e., 4 total attempts)
- Wait 1 second before attempting the first retry
- Double the wait time after each subsequent retry (i.e., 1 second, 2 seconds, 4 seconds)
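The default wait schedule is simple exponential backoff, which can be computed directly (plain arithmetic, not SDK code):

```python
# Default backoff for the 3 retries: 1 second before the first retry,
# doubling after each one.
waits = [1 * 2**attempt for attempt in range(3)]
print(waits)  # [1, 2, 4]
```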
Customizing retries
You can customize retry behavior on a per-task basis by providing retry settings to the @task decorator:
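A runnable sketch of a task with custom retry settings. The decorator stand-in fills in for the SDK, and the keyword names used below are illustrative, not the SDK's actual retry options:

```python
import random

# Stand-in for the SDK's @task decorator; the retry keyword names below
# are assumptions for illustration, not the SDK's real option names.
def task(func=None, **settings):
    def wrap(f):
        f.settings = settings
        return f
    return wrap if func is None else wrap(func)

@task(max_retries=5, initial_wait_seconds=2)
def flip_coin() -> str:
    # "Tails" raises an exception, so the run fails and retries
    # according to the task's settings.
    if random.random() < 0.5:
        raise RuntimeError("Flipped tails!")
    return "heads"
```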
This contrived example defines a task named flip_coin that raises an exception when it "flips tails", causing the run to fail and retry according to its settings.
Running subtasks
A task can run other tasks defined in the same workflow. These subtasks each run in their own instance, just like their parent.
When should I run a subtask?
Subtasks are most helpful when different parts of a larger job benefit from long-running, independent compute.
For simple jobs (such as the very basic example below), it's more efficient to perform the entirety of your logic in a single task.
The simple sum_squares task below runs two calculate_square subtasks:
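A runnable sketch of this pattern. The TaskInstance class and task decorator below are minimal stand-ins for what the SDK provides; their behavior here is assumed for illustration:

```python
import asyncio

# Minimal stand-ins so this sketch runs on its own. In a real workflow
# service, the SDK provides @task and the awaitable TaskInstance.
class TaskInstance:
    """Returned when a task function is called; await it for the result."""
    def __init__(self, value):
        self._value = value

    def __await__(self):
        if asyncio.iscoroutine(self._value):
            return (yield from self._value.__await__())
        yield  # simulate the subtask running in its own instance
        return self._value

def task(func):
    def start(*args):
        return TaskInstance(func(*args))
    return start

@task
def calculate_square(number: int) -> int:
    return number * number

# The parent is async so it can await its subtasks' results.
@task
async def sum_squares(a: int, b: int) -> int:
    first = calculate_square(a)   # kicks off a subtask; returns a TaskInstance
    second = calculate_square(b)
    return await first + await second  # await to get the actual values

async def main():
    return await sum_squares(3, 4)

print(asyncio.run(main()))  # 25
```

Note that calling calculate_square returns a TaskInstance immediately; only awaiting it yields the computed value.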
When running subtasks:
- In most cases, the parent task should be defined as async.
  - Otherwise, it can't await the results of its subtasks.
- You run a subtask by calling the corresponding function (e.g., calculate_square above).
  - However, this call doesn't return the function's defined return value!
  - Instead, it kicks off a task run and returns a special TaskInstance object.
  - As shown, you can await this object to obtain the subtask's actual return value.
- Your task can call functions that are not marked as tasks. These functions run and return as normal (they do not spin up their own task instances).
A task can run another task defined in a different workflow. However:
- This requires using the Workflows SDK or Render API instead, as described in Running Workflow Tasks.
- This is not tracked as a task/subtask relationship when visualizing task execution in the Render Dashboard.
Parallelizing subtasks
When running subtasks, it's often useful to run several of them in parallel, such as to chunk a large workload into smaller independent pieces. Common examples include processing batches of images or analyzing different sections of a large document.
Use asyncio.gather to run multiple subtasks in parallel. In the example below, the process_photo_upload task runs a separate process_image subtask for each element in its image_urls argument:
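A runnable sketch of this fan-out pattern, using minimal stand-ins for the SDK's decorator and TaskInstance (their behavior is assumed for illustration, and process_image's "work" is invented for the example):

```python
import asyncio

# Stand-ins for the SDK's @task decorator and awaitable TaskInstance,
# so this sketch runs on its own.
class TaskInstance:
    def __init__(self, value):
        self._value = value

    def __await__(self):
        if asyncio.iscoroutine(self._value):
            return (yield from self._value.__await__())
        yield  # simulate the run completing elsewhere
        return self._value

def task(func):
    def start(*args):
        return TaskInstance(func(*args))
    return start

@task
def process_image(url: str) -> str:
    # Hypothetical work: produce a thumbnail URL.
    return url + "?size=thumb"

@task
async def process_photo_upload(image_urls: list) -> list:
    # Start one process_image subtask per URL, then gather the results
    # so the subtasks run in parallel rather than one at a time.
    return await asyncio.gather(*(process_image(url) for url in image_urls))

async def main():
    return await process_photo_upload(["a.jpg", "b.jpg"])

print(asyncio.run(main()))  # ['a.jpg?size=thumb', 'b.jpg?size=thumb']
```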
If you don't use asyncio.gather or a similar function, subtasks run serially. For example:
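A sketch of serial execution under the same kind of stand-ins (SDK behavior assumed for illustration; the parent's own @task decorator is omitted here for brevity):

```python
import asyncio

# Stand-ins for the SDK's @task decorator and awaitable TaskInstance.
class TaskInstance:
    def __init__(self, value):
        self._value = value

    def __await__(self):
        yield  # simulate the run completing elsewhere
        return self._value

def task(func):
    def start(*args):
        return TaskInstance(func(*args))
    return start

@task
def process_image(url: str) -> str:
    # Hypothetical work: produce a thumbnail URL.
    return url + "?size=thumb"

async def process_serially(image_urls: list) -> list:
    results = []
    for url in image_urls:
        # Each await completes before the next subtask even starts.
        results.append(await process_image(url))
    return results

print(asyncio.run(process_serially(["a.jpg", "b.jpg"])))
```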
Serial execution is helpful when running a chain of subtasks that depend on each other. However, it dramatically slows execution for subtasks that are completely independent. Parallelize wherever your use case allows.