Inference job with realtime logs

The API endpoint supports launching inference jobs and streaming their logs in real time. Our code examples on GitHub show how to make this work in TypeScript and Python.
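A minimal sketch of consuming those real-time logs with the Fetch API. The endpoint path, payload shape, and log format here are assumptions; the GitHub examples show the authoritative request.

```typescript
const clientId = "<YOUR_CLIENT_ID>";
const clientSecret = "<YOUR_CLIENT_SECRET>";

// Hypothetical: POST the workflow parameters, then read the response body
// as a stream so log chunks are printed as the server emits them.
async function inferWithLogs(params: Record<string, unknown>): Promise<void> {
    const response = await fetch("https://api.viewcomfy.com/api/workflow/infer/", {
        method: "POST",
        headers: {
            "client_id": clientId,
            "client_secret": clientSecret,
            "content-type": "application/json",
        },
        body: JSON.stringify(params),
    });
    if (!response.body) throw new Error("No response stream");

    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // Each decoded chunk is a piece of the job's real-time log output.
        console.log(decoder.decode(value, { stream: true }));
    }
}
```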

Inference jobs in batch

The API endpoint supports launching any number of jobs. After a job is scheduled, you will get a prompt_id. With the prompt_id, you can use the query endpoint to get the real-time status of the job and download the assets when it is finished. Our code examples on GitHub show how to make this work in TypeScript and Python.
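The batch flow above can be sketched as follows. The launch endpoint, payload, and response field names are assumptions; the GitHub examples show the authoritative request shape.

```typescript
const clientId = "<YOUR_CLIENT_ID>";
const clientSecret = "<YOUR_CLIENT_SECRET>";

// Hypothetical: schedule one job per parameter set and collect the
// prompt_id each launch returns, for polling via the query endpoint later.
async function scheduleBatch(paramSets: Record<string, unknown>[]): Promise<string[]> {
    const promptIds: string[] = [];
    for (const params of paramSets) {
        const response = await fetch("https://api.viewcomfy.com/api/workflow/infer/", {
            method: "POST",
            headers: {
                "client_id": clientId,
                "client_secret": clientSecret,
                "content-type": "application/json",
            },
            body: JSON.stringify(params),
        });
        const data = await response.json();
        promptIds.push(data.prompt_id); // returned when the job is scheduled
    }
    return promptIds;
}
```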

Query the status of an inference job

1 - Grab your API keys from the dashboard in the “Your Workflows” tab
2 - Make a GET request to this endpoint: https://api.viewcomfy.com/api/workflow/infer/?prompt_ids=${PROMPT_ID}
3 - The query parameters of this endpoint accept multiple prompt_ids. To send more than one, URL-encode each one as a separate prompt_ids parameter
4 - When an inference has finished, it will have the property completed = True, and its status will be either success or error
5 - If the status is success, you can grab the files from the outputs property. Our code examples on GitHub show how to make this work in TypeScript and Python
snippet.ts

const promptIds = ["123", "564"];
const clientId = "<YOUR_CLIENT_ID>";
const clientSecret = "<YOUR_CLIENT_SECRET>";

const urlParams = `?${promptIds.map(id => `prompt_ids=${encodeURIComponent(id)}`).join('&')}`;
const url = `https://api.viewcomfy.com/api/workflow/infer/${urlParams}`;

const response = await fetch(url, {
    headers: {
        "client_id": clientId,
        "client_secret": clientSecret,
        "content-type": "application/json"
    },
});
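Steps 4 and 5 can be handled once the response is parsed. The exact shape of each job object and its outputs entries is an assumption based on the properties described above; check the GitHub examples for the real schema.

```typescript
// Assumed shape of one job in the query response (fields per steps 4-5 above).
interface JobResult {
    prompt_id: string;
    completed: boolean;
    status: string; // "success" or "error" once completed
    outputs?: unknown[]; // the files to download; exact entry shape is assumed
}

// Keep only jobs that finished successfully; their files live in `outputs`.
function finishedSuccessfully(jobs: JobResult[]): JobResult[] {
    return jobs.filter(job => job.completed && job.status === "success");
}
```

For example, polling the query endpoint and passing the parsed JSON through `finishedSuccessfully` yields only the jobs whose outputs are ready to download.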

Cancel an inference job

Using the prompt_id that was returned when launching the inference job, you can cancel an ongoing job by calling the cancel endpoint. Our code examples on GitHub show how to make this work in TypeScript and Python.
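A sketch of the cancel call. The route, HTTP method, and parameter name here are assumptions, not the documented endpoint; the GitHub examples give the actual path.

```typescript
const clientId = "<YOUR_CLIENT_ID>";
const clientSecret = "<YOUR_CLIENT_SECRET>";

// Hypothetical route: replace with the path shown in the GitHub examples.
function buildCancelUrl(promptId: string): string {
    return `https://api.viewcomfy.com/api/workflow/cancel/?prompt_id=${encodeURIComponent(promptId)}`;
}

async function cancelJob(promptId: string): Promise<void> {
    await fetch(buildCancelUrl(promptId), {
        method: "POST", // assumption: the method may differ; check the examples
        headers: {
            "client_id": clientId,
            "client_secret": clientSecret,
        },
    });
}
```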