Inference jobs with real-time logs
The API endpoint supports launching inference jobs and streaming their logs in real time. Our code examples on GitHub explain how to make this work: TypeScript, Python.

Inference jobs in batch
The API endpoint supports launching any number of jobs. After a job is scheduled, you will get a prompt_id. With the prompt_id, you can use the query endpoint to get the real-time status of the job and download the assets when it is finished. Our code examples on GitHub explain how to make this work: TypeScript, Python.

Query the status of an inference job
1 - Grab your API keys from the dashboard in the “Your Workflows” tab
2 - Make a GET request to this endpoint:
https://api.viewcomfy.com/api/workflow/infer/?prompt_ids=${PROMPT_ID}
3 - The query parameters of this endpoint accept multiple prompt_ids. To send more than one, you need to URI-encode them
4 - When an inference has finished, it will have the property completed = True, and its status will be success or error
5 - If the status is success, you can grab the files from the outputs property. More information about the model can be found here: TypeScript, Python
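The steps above can be sketched in TypeScript. This is a minimal, non-authoritative example: the authentication scheme (a Bearer token here), the comma-joining of multiple prompt_ids before URI encoding, and the exact shape of the JSON response are all assumptions, not confirmed by this page.

```typescript
// Hedged sketch: build the query URL and poll until every job has finished.
// Assumptions: the API key is sent as a Bearer token, multiple prompt_ids
// are comma-joined before URI encoding, and the endpoint returns an array
// of job objects with completed / status / outputs fields as described above.

const API_BASE = "https://api.viewcomfy.com/api/workflow/infer/";

interface JobResult {
  prompt_id: string;
  completed: boolean; // step 4: true once the inference has finished
  status: string;     // step 4: "success" or "error"
  outputs?: unknown[]; // step 5: output files; exact shape is an assumption
}

// Step 3: the endpoint accepts multiple prompt_ids, so they must be
// URI-encoded before being placed in the query string.
function buildQueryUrl(promptIds: string[]): string {
  return `${API_BASE}?prompt_ids=${encodeURIComponent(promptIds.join(","))}`;
}

async function waitForJobs(
  promptIds: string[],
  apiKey: string, // step 1: from the "Your Workflows" tab in the dashboard
): Promise<JobResult[]> {
  while (true) {
    // Step 2: GET the query endpoint.
    const res = await fetch(buildQueryUrl(promptIds), {
      headers: { Authorization: `Bearer ${apiKey}` }, // auth scheme is an assumption
    });
    const jobs: JobResult[] = await res.json();
    // Step 4: stop polling once every job reports completed.
    if (jobs.every((job) => job.completed)) {
      for (const job of jobs) {
        if (job.status !== "success") {
          console.error(`Job ${job.prompt_id} finished with status ${job.status}`);
        }
      }
      return jobs; // step 5: successful jobs carry their files in outputs
    }
    await new Promise((resolve) => setTimeout(resolve, 2000)); // poll every 2 s
  }
}
```

Polling is only one option; for long batches you may prefer a longer interval or a capped number of retries so a stuck job cannot loop forever.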