Lunos supports multimodal requests for models that can process non-text inputs. You can combine text with images, PDFs, audio, or video in one request and send it through the same chat-style API flow.
For output generation, use dedicated image generation endpoints.
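As a sketch of the generation side, the following Python helper targets the POST /v1/images/generations endpoint named in these docs. The request body here (model plus prompt) follows the common OpenAI-style convention and is an assumption; check the Lunos image generation documentation for the exact schema and model-specific options.

```python
import requests

# Image generation endpoint named in these docs.
API_URL = "https://api.lunos.tech/v1/images/generations"

def build_image_request(prompt: str, model: str) -> dict:
    # Minimal body following the common OpenAI-style convention
    # (model + prompt); options such as size or n are model-specific
    # and not assumed here.
    return {"model": model, "prompt": prompt}

def generate_image(prompt: str, model: str, api_key: str) -> dict:
    # Send the generation request and return the parsed JSON response.
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json=build_image_request(prompt, model),
    )
    resp.raise_for_status()
    return resp.json()
```

Call it as `generate_image("a watercolor fox", "IMAGE_MODEL_ID", "YOUR_SECRET_KEY")`, substituting an image-capable model ID from your model list.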
Most multimodal requests use:
POST /v1/chat/completions
The request body contains a messages array, and each message can include a content array with multiple content blocks.
Each content block has a type. Supported block types include:
- text
- image_url
- file (for PDFs)
- input_audio
- video_url

Use POST /v1/chat/completions for understanding existing files (image/PDF/audio/video input). Use POST /v1/images/generations when you want the model to create a new image.

Example (cURL):

curl -X POST "https://api.lunos.tech/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_SECRET_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemini-2.5-flash",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Summarize the key information from this file and image." },
          { "type": "file", "file": { "url": "https://example.com/report.pdf" } },
          { "type": "image_url", "image_url": { "url": "https://example.com/diagram.png" } }
        ]
      }
    ]
  }'
Example (Python):

import requests

url = "https://api.lunos.tech/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_SECRET_KEY",
    "Content-Type": "application/json",
}
payload = {
    "model": "google/gemini-2.5-flash",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the key information from this file and image."},
                {"type": "file", "file": {"url": "https://example.com/report.pdf"}},
                {"type": "image_url", "image_url": {"url": "https://example.com/diagram.png"}},
            ],
        }
    ],
}

response = requests.post(url, headers=headers, json=payload)
print(response.json())
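The examples above reference images by public URL. Many chat-completions-style APIs also accept images inline as base64 data URLs in the image_url.url field; that convention is an assumption here, so verify that your chosen Lunos model accepts data URLs. A minimal helper for producing one:

```python
import base64

def image_to_data_url(path: str, mime: str = "image/png") -> str:
    """Encode a local image file as a base64 data URL, e.g.
    'data:image/png;base64,iVBOR...'. The result can be used as the
    image_url.url value in a content block, if the model accepts
    inline data URLs."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

Usage: replace `"url": "https://example.com/diagram.png"` in the payload with `"url": image_to_data_url("diagram.png")`.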
Example (JavaScript):

const response = await fetch("https://api.lunos.tech/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_SECRET_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "google/gemini-2.5-flash",
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Summarize the key information from this file and image." },
          { type: "file", file: { url: "https://example.com/report.pdf" } },
          { type: "image_url", image_url: { url: "https://example.com/diagram.png" } },
        ],
      },
    ],
  }),
});

const data = await response.json();
console.log(data);
Not every model supports every modality. Before sending multimodal data:
- Call GET /v1/models and check inputModalities on your selected model.
- Select a model by capability (inputModalities) instead of hardcoding one model.
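The capability check above can be sketched in Python. The inputModalities field name comes from these docs; the exact shape of the GET /v1/models response (in particular whether the list is wrapped in a "data" field, as in many OpenAI-style APIs) is an assumption to verify.

```python
import requests

def models_supporting(required: set, models: list) -> list:
    # Filter a model listing down to the IDs of models whose
    # inputModalities cover every modality we need.
    return [
        m["id"]
        for m in models
        if required.issubset(set(m.get("inputModalities", [])))
    ]

def list_models(api_key: str) -> list:
    # Fetch the model catalog. Many OpenAI-style APIs wrap the list
    # in a "data" field; fall back to the raw body otherwise.
    resp = requests.get(
        "https://api.lunos.tech/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    resp.raise_for_status()
    body = resp.json()
    return body.get("data", body) if isinstance(body, dict) else body
```

For example, `models_supporting({"text", "image"}, list_models("YOUR_SECRET_KEY"))` returns candidate model IDs for an image-understanding request, so your code can pick by capability rather than hardcoding one model.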
