Send a conversation to a model and get a response. The SDK supports all OpenAI-compatible chat completion features.
```typescript
const response = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing in one sentence." },
  ],
  temperature: 0.7,
  max_tokens: 256,
});

console.log(response.choices[0].message.content);
```
| Parameter | Type | Description |
|---|---|---|
| `model` | `string` | Model identifier (e.g. `openai/gpt-4o`) |
| `messages` | `ChatCompletionMessageParam[]` | Conversation messages |
| `temperature` | `number` | Randomness (0–2). Default varies by model |
| `max_tokens` | `number` | Maximum response length |
| `top_p` | `number` | Nucleus sampling threshold |
| `stop` | `string \| string[]` | Stop sequences |
| `frequency_penalty` | `number` | Penalize repeated tokens |
| `presence_penalty` | `number` | Penalize tokens already present |
| `tools` | `Tool[]` | Available tools/functions |
| `tool_choice` | `string \| object` | Tool selection strategy |
| `response_format` | `ResponseFormat` | Output format constraint |
| `stream` | `boolean` | Enable streaming |
| `observability` | `boolean` | Enable request tracing |
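Setting `stream: true` makes the call return an async iterable of chunks instead of a single response. Below is a minimal helper for assembling the streamed text, assuming the OpenAI-compatible chunk shape (`choices[0].delta.content`):

```typescript
// Chunk shape for OpenAI-compatible streaming responses (an assumption).
type Chunk = { choices: { delta?: { content?: string } }[] };

// Concatenate the incremental deltas into the full response text.
async function collectStream(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}
```

With the SDK this would be used as `const text = await collectStream(await client.chat.completions.create({ ...params, stream: true }))`.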
| Role | Purpose |
|---|---|
| `system` | Sets the AI's behavior and personality |
| `user` | Your input / questions |
| `assistant` | Prior AI responses (for multi-turn context) |
| `tool` | Results from tool/function calls |
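Putting the roles together, a multi-turn conversation replays prior `assistant` replies so the model has context for follow-up questions (a sketch; the earlier answer is included verbatim):

```typescript
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "What is the capital of France?" },
  // The model's previous reply, replayed for context:
  { role: "assistant", content: "The capital of France is Paris." },
  // "its" resolves to Paris only because the turn above is present.
  { role: "user", content: "What is its population?" },
];
```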
Pass images alongside text for vision-capable models:
```typescript
const response = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        {
          type: "image_url",
          image_url: { url: "https://example.com/photo.jpg" },
        },
      ],
    },
  ],
});
```
Force the model to return JSON matching a specific schema:
```typescript
const response = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Weather in Tokyo" }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "weather",
      strict: true,
      schema: {
        type: "object",
        properties: {
          location: { type: "string" },
          temperature: { type: "number" },
        },
        required: ["location", "temperature"],
        additionalProperties: false,
      },
    },
  },
});

const data = JSON.parse(response.choices[0].message.content!);
```
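Because `strict: true` constrains the output to the schema, the parsed value can be given a matching TypeScript type. A small defensive helper (the runtime check is our own addition, not part of the SDK):

```typescript
interface Weather {
  location: string;
  temperature: number;
}

// Parse the model's JSON output and verify it matches the schema above.
function parseWeather(raw: string): Weather {
  const data = JSON.parse(raw);
  if (typeof data.location !== "string" || typeof data.temperature !== "number") {
    throw new Error("Response did not match the weather schema");
  }
  return data as Weather;
}
```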
Define tools the model can request to call:
```typescript
const response = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get current weather for a location",
        parameters: {
          type: "object",
          properties: { location: { type: "string" } },
          required: ["location"],
        },
      },
    },
  ],
});

const toolCall = response.choices[0].message.tool_calls?.[0];
if (toolCall) {
  const args = JSON.parse(toolCall.function.arguments);
  // Execute your function, then send result back with role: "tool"
}
```
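To complete the round trip, append the assistant message (which carries `tool_calls`) plus a `tool` message with your function's result, then call the API again. A sketch of building that second message list, assuming the standard OpenAI chat message shapes (`tool_call_id` links the result to the call that requested it):

```typescript
type Message = { role: string; content: string | null; [key: string]: unknown };

// Build the messages for the second request in a tool round trip.
function buildToolFollowUp(
  history: Message[],
  assistantMessage: Message, // the reply containing tool_calls
  toolCallId: string,
  result: unknown,
): Message[] {
  return [
    ...history,
    assistantMessage,
    { role: "tool", tool_call_id: toolCallId, content: JSON.stringify(result) },
  ];
}
```

With the example above: `buildToolFollowUp(messages, response.choices[0].message, toolCall.id, yourResult)`, then pass the returned array as `messages` in a second `create` call to get the model's final answer.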
Give the model access to real-time web information:
```typescript
const response = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Latest AI news this week" }],
  tools: [{ type: "web_search" }],
});
```
Enable request tracing for debugging in your Lunos dashboard:
```typescript
const response = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Debug this request" }],
  observability: true,
});
```
