This guide will walk you through submitting your first batch to Doubleword. You can submit a batch either through the [Doubleword Console](https://app.doubleword.ai/) or via the Batch API, which is fully compatible with the [OpenAI batch endpoint](https://platform.openai.com/docs/api-reference/batch), making it seamless to switch between providers.

When you submit a batch file, you're sending multiple requests to be processed in parallel. Each request is formatted as a single line in your batch file, allowing for efficient bulk processing. Batch processing is ideal for workloads that:

* Contain multiple independent requests that can run simultaneously
* Don't require immediate responses
* Would otherwise exceed rate limits if sent individually

:::tip
Don't want to go step by step? Jump straight to our interactive API reference [docs](/batches/api-reference).
:::

## 1. Prepare your batch file (.jsonl)

Each batch workload starts with a `.jsonl` file where each line contains a single, valid API request[^1]. Each line in your `.jsonl` file must include these **required** fields:

* **`custom_id`**: Your unique identifier for tracking the request (string, max 64 characters)
* **`method`**: HTTP method, always `"POST"`
* **`url`**: API endpoint, typically `"/v1/chat/completions"`
* **`body`**: The actual API request parameters (model, messages, temperature, etc.). This is the same body as your real-time request.

Below is an example batch jsonl file with 4 requests, including both text-only and multi-modal requests with image URLs and base64-encoded images[^2].
```json {"custom_id": "colorado", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "{{selectedModel.name}}", "messages": [{"role": "user", "content": "What is the capital of Colorado?"}]}} {"custom_id": "image-boardwalk", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "{{selectedModel.name}}", "messages": [{"role": "user", "content": [{"type": "text", "text": "What is in this image?"}, {"type": "image_url", "image_url": {"url": "https://images.unsplash.com/photo-1506905925346-21bda4d32df4"}}]}]}} {"custom_id": "python-code", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "{{selectedModel.name}}", "messages": [{"role": "user", "content": "Write a Python function that calculates the fibonacci sequence."}]}} {"custom_id": "blue-square", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "{{selectedModel.name}}", "messages": [{"role": "user", "content": [{"type": "text", "text": "What color is this image?"}, {"type": "image_url", "image_url": {"url": 
"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCABkAGQDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDxyiiiv3E8wKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA//Z"}}]}]}} ``` Copy the content into a file named `batch_input_example.jsonl` or download the example file [here.](https://cdn.sanity.io/files/g1zo7y59/production/fb5e131808b6c631ff1283f8774e965514b56493.jsonl) ## 2. Upload your batch input file :::caution The current limits are 200MB max file size and 50,000 max requests per file. If your file exceeds either limit, we suggest splitting it into multiple files. ::: #### Batch API Simply point the OpenAI base URL to Doubleword and include your API key[^3]. 
```python tabs=upload name=Python sync=api
from openai import OpenAI

client = OpenAI(
    api_key="{{apiKey}}",
    base_url="https://api.doubleword.ai/v1"
)

batch_input_file = client.files.create(
    file=open("batch_input_example.jsonl", "rb"),
    purpose="batch"
)

print(batch_input_file)
```

```javascript tabs=upload name=JavaScript sync=api
import OpenAI from "openai";
import fs from "fs";

const client = new OpenAI({
  apiKey: "{{apiKey}}",
  baseURL: "https://api.doubleword.ai/v1/"
});

const batchInputFile = await client.files.create({
  file: fs.createReadStream("batch_input_example.jsonl"),
  purpose: "batch"
});

console.log(batchInputFile);
```

```bash tabs=upload name=cURL sync=api
curl https://api.doubleword.ai/v1/files \
  -H "Authorization: Bearer {{apiKey}}" \
  -F purpose="batch" \
  -F file="@batch_input_example.jsonl"
```

The result should look like:

```json
{"id": "09efc84c-cd4f-4ae9-a2fd-5efbc4213225", "object": "file", "bytes": 899, "created_at": 1766154501, "filename": "batch_input_example.jsonl", "purpose": "batch", "expires_at": 1768746501}
```

:::note
Note the `id` field in the response. This is the input file id that we'll use later to trigger the batch.
:::

#### Doubleword Console

Files can also be uploaded via the [Doubleword Console](https://app.doubleword.ai/batches?batchesPage=1). To do so:

1. Navigate to the **Batches** section
2. If you want to dispatch the file for batch processing immediately, click the **Create First Batch** button. If you want to upload the file for later processing, click on [files](https://app.doubleword.ai/batches?batchesPage=1&tab=files) and then click **Upload first file**.
3. Upload your `.jsonl` file

## 3. Create the Batch

#### Batch API

Once you have uploaded the file, you can create the batch and set the requests in motion.
```python tabs=create name=Python sync=api
from openai import OpenAI

client = OpenAI(
    api_key="{{apiKey}}",
    base_url="https://api.doubleword.ai/v1/"
)

# batch_input_file is the file object returned by the upload step
batch_input_file_id = batch_input_file.id

result = client.batches.create(
    input_file_id=batch_input_file_id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
    metadata={
        "description": "daily eval job"
    }
)

print(result)
```

```javascript tabs=create name=JavaScript sync=api
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "{{apiKey}}",
  baseURL: "https://api.doubleword.ai/v1/"
});

// batchInputFile is the file object returned by the upload step
const batchInputFileId = batchInputFile.id;

const result = await client.batches.create({
  input_file_id: batchInputFileId,
  endpoint: "/v1/chat/completions",
  completion_window: "24h",
  metadata: {
    description: "daily eval job"
  }
});

console.log(result);
```

```bash tabs=create name=cURL sync=api
curl https://api.doubleword.ai/v1/batches \
  -H "Authorization: Bearer {{apiKey}}" \
  -H "Content-Type: application/json" \
  -d '{
    "input_file_id": "'"$INPUT_FILE_ID"'",
    "endpoint": "/v1/chat/completions",
    "completion_window": "24h",
    "metadata": {
      "description": "daily eval job"
    }
  }'
```

You can set the completion window to either `24h` or `1h`. `24h` gives the best price, while `1h` returns your results more quickly.

The API will respond with an object containing information about your batch.
```json
{
  "id": "29cd13af-5ce9-487e-98ff-4eb6730c12b4",
  "object": "batch",
  "endpoint": "/v1/chat/completions",
  "errors": null,
  "input_file_id": "09efc84c-cd4f-4ae9-a2fd-5efbc4213225",
  "completion_window": "24h",
  "status": "validating",
  "output_file_id": "b4213e44-3c85-4689-9cae-e6d986785143",
  "error_file_id": "49fdc045-77a8-409b-bb3e-d661df05f9fb",
  "created_at": 1714508499,
  "in_progress_at": null,
  "expires_at": 1714536634,
  "completed_at": null,
  "failed_at": null,
  "expired_at": null,
  "request_counts": {
    "total": 0,
    "completed": 0,
    "failed": 0
  },
  "metadata": null
}
```

The output and error file ids are populated immediately, allowing you to retrieve individual results as they become available.

:::note
We will use the `id`, `error_file_id`, and `output_file_id` fields later to track the progress of the batch and download the results as they complete.
:::

#### Doubleword Console

To create a batch via the Doubleword Console, simply navigate to the [Files Tab](https://app.doubleword.ai/batches?batchesPage=1&tab=files) under **Batches** and hit the play button under **Actions** to trigger the batch.

## 4. Tracking your batch's progress

:::tip
Did you know you can download partial results while the batch is processing? To do this, just download the output file that is created when the batch is submitted.
:::

#### Batch API

```python tabs=track name=Python sync=api
from openai import OpenAI

client = OpenAI(
    api_key="{{apiKey}}",
    base_url="https://api.doubleword.ai/v1/"
)

# Check the batch status (result is the batch object returned by the create step)
batch_status = client.batches.retrieve(result.id)
print(f"Status: {batch_status.status}")
```

```javascript tabs=track name=JavaScript sync=api
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "{{apiKey}}",
  baseURL: "https://api.doubleword.ai/v1/"
});

// result is the batch object returned by the create step
const batchStatus = await client.batches.retrieve(result.id);
console.log(`Status: ${batchStatus.status}`);
```

```bash tabs=track name=cURL sync=api
curl "https://api.doubleword.ai/v1/batches/29cd13af-5ce9-487e-98ff-4eb6730c12b4" \
  -H "Authorization: Bearer {{apiKey}}"
```

#### Doubleword Console

The [Doubleword Console](https://app.doubleword.ai/batches) contains tools to track the completion of the requests in your batch in real time. Simply click on a batch to view each response as requests complete, see the price of your batch over time, and view the number of completed requests.

## 5. Retrieving results

When a batch is triggered, the Doubleword Batch engine creates two files: an _Error_ file and an _Output_ file. The identifiers for these files can be found in the response to the batch creation request. The error file contains details on any request that failed, while the output file contains all the outputs from valid requests. These files are accessible as soon as the batch is triggered[^4].

To download all the completed responses for an in-progress batch, download the output or error file. If the batch is still in progress, the response headers will indicate how to resume the download from the last retrieved response.

:::note
Make sure to replace the output file id with the one returned from the batch creation request!
:::

#### Batch API

```python tabs=retrieve name=Python sync=api
import requests

# Get the output file id (result is the batch object returned by the create step)
batch_output_file_id = result.output_file_id

# Download the file content
url = f"https://api.doubleword.ai/v1/files/{batch_output_file_id}/content"
headers = {
    "Authorization": "Bearer {{apiKey}}"
}
response = requests.get(url, headers=headers)

# Check if the file is incomplete (batch still running)
is_incomplete = response.headers.get("X-Incomplete") == "true"
last_line = response.headers.get("X-Last-Line")

# Save to file
with open("batch-output.jsonl", "wb") as f:
    f.write(response.content)

if is_incomplete:
    print(f"Partial file downloaded (up to line {last_line})")
    print(f"To resume from this point: add ?offset={last_line} to the URL")
else:
    print("Complete file downloaded!")
```

```javascript tabs=retrieve name=JavaScript sync=api
import fs from "fs";

const response = await fetch(
  "https://api.doubleword.ai/v1/files/b4213e44-3c85-4689-9cae-e6d986785143/content",
  {
    headers: {
      "Authorization": "Bearer {{apiKey}}"
    }
  }
);

// Check if the file is incomplete (batch still running)
const isIncomplete = response.headers.get("X-Incomplete") === "true";
const lastLine = response.headers.get("X-Last-Line");

// Save to file
const content = await response.text();
await fs.promises.writeFile("batch-output.jsonl", content);

if (isIncomplete) {
  console.log(`Partial file downloaded (up to line ${lastLine})`);
  console.log(`To resume: add ?offset=${lastLine} to the URL`);
} else {
  console.log("Complete file downloaded!");
}
```

```bash tabs=retrieve name=cURL sync=api
curl https://api.doubleword.ai/v1/files/b4213e44-3c85-4689-9cae-e6d986785143/content \
  -H "Authorization: Bearer {{apiKey}}" \
  -D headers.txt \
  -o batch-output.jsonl

# Check completion status
if grep -q "X-Incomplete: true" headers.txt; then
  last_line=$(grep "X-Last-Line:" headers.txt | cut -d' ' -f2 | tr -d '\r')
  echo "Partial file downloaded (up to line $last_line)"
  echo ""
  echo "To resume, run:"
  echo "curl 'https://api.doubleword.ai/v1/files/b4213e44-3c85-4689-9cae-e6d986785143/content?offset=$last_line' \\"
  echo "  -H 'Authorization: Bearer {{apiKey}}' >> batch-output.jsonl"
else
  echo "Complete file downloaded!"
fi
```

#### Doubleword Console

The output file of a batch can be retrieved from the **[Batches](https://app.doubleword.ai/batches?tab=batches) Section** by clicking the Document Icon under **Results** in the table and then clicking **Download**.

## 6. Cancel a batch

You can cancel a batch at any time. Any requests that haven't started processing will immediately move to the cancelled state[^5], and you won't be charged for successfully cancelled requests. If some requests have already been processed before cancellation, you can still download and view those results.

#### Batch API

```python tabs=cancel name=Python sync=api
from openai import OpenAI

client = OpenAI(
    api_key="{{apiKey}}",
    base_url="https://api.doubleword.ai/v1"
)

client.batches.cancel("29cd13af-5ce9-487e-98ff-4eb6730c12b4")
```

```javascript tabs=cancel name=JavaScript sync=api
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "{{apiKey}}",
  baseURL: "https://api.doubleword.ai/v1"
});

await client.batches.cancel("29cd13af-5ce9-487e-98ff-4eb6730c12b4");
```

```bash tabs=cancel name=cURL sync=api
curl https://api.doubleword.ai/v1/batches/29cd13af-5ce9-487e-98ff-4eb6730c12b4/cancel \
  -H "Authorization: Bearer {{apiKey}}" \
  -X POST
```

#### Doubleword Console

In the **[Batches](https://app.doubleword.ai/batches?tab=batches)** section, simply navigate to the row in the table you would like to cancel and select the X icon under **Results** to cancel the batch.

## 7. Get a list of all batches

#### Batch API

You can see all of your batches by calling the list batches endpoint.
```python tabs=list name=Python sync=api
from openai import OpenAI

client = OpenAI(
    api_key="{{apiKey}}",
    base_url="https://api.doubleword.ai/v1"
)

print(client.batches.list(limit=10))
```

```javascript tabs=list name=JavaScript sync=api
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "{{apiKey}}",
  baseURL: "https://api.doubleword.ai/v1"
});

const list = await client.batches.list({ limit: 10 });
for await (const batch of list) {
  console.log(batch);
}
```

```bash tabs=list name=cURL sync=api
curl "https://api.doubleword.ai/v1/batches?limit=10" \
  -H "Authorization: Bearer {{apiKey}}" \
  -H "Content-Type: application/json"
```

#### Doubleword Console

You can list all batches by navigating to the **[Batches](https://app.doubleword.ai/batches?tab=batches)** section.

## What next?

After reading this guide, you should be able to submit batches to the Doubleword API, track their progress, and download their results.

:::note
For any issues you encounter while using the API, please contact [us](mailto:support@doubleword.ai).
:::

## Common Pitfalls

Troubleshooting your first batch submission: if your batch fails to submit, check that you:

- Have access to the model specified in your JSONL file[^1]
- Have created an API key[^3]
- Have enough credits in your account to run the batch. If you need to add more credits to your account, read the documentation [here](https://docs.doubleword.ai/batches/adding-credits-to-your-account).

[^1]: To understand what a jsonl file is and how to create one, check out our [JSONL Files guide](https://docs.doubleword.ai/batches/jsonl-files).

[^2]: For more information on how to format multi-modal requests see [here](https://platform.openai.com/docs/guides/images-vision?api-mode=chat&format=base64-encoded#giving-a-model-images-as-input). If you have many multimodal requests, we recommend uploading them somewhere you control (like S3) and then using the URLs.

[^3]: To generate an API key for use with the Doubleword API, navigate to the [API keys](https://app.doubleword.ai/api-keys) page in the console or read the documentation [here](https://docs.doubleword.ai/batches/creating-an-api-key).

[^4]: This is different from the OpenAI API, in which you must wait for your whole batch to complete before you can retrieve your results.

[^5]: Requests in flight are cancelled on a best-effort basis.