
Chat Completions

Use the OpenAI-compatible Chat Completions endpoint to build chat and task workflows quickly. If you already use the OpenAI SDK, you can usually switch the base URL and continue using your existing code.

Endpoint

POST https://api.nouswise.ai/v1/chat/completions

Finding the Chat Completion Project Id

Use the UUID in the project's URL as the Project Id.
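
For example, if the project URL looks like https://app.nouswise.ai/projects/123e4567-e89b-12d3-a456-426614174000 (the exact app URL layout is an assumption here), the trailing UUID is the Project Id. A minimal sketch that pulls it out with a regular expression:

python
import re

# Hypothetical project URL; substitute the URL shown in your browser.
project_url = "https://app.nouswise.ai/projects/123e4567-e89b-12d3-a456-426614174000"

# Match a standard 8-4-4-4-12 hexadecimal UUID anywhere in the URL.
match = re.search(r"[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}", project_url)
project_id = match.group(0) if match else None
print(project_id)  # 123e4567-e89b-12d3-a456-426614174000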

Minimal request

curl
curl --request POST \
  --url https://api.nouswise.ai/v1/chat/completions \
  --header 'Authorization: Bearer <API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "standard",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Summarize the key ideas of the document."}
    ],
    "stream": false,
    "extra_body": { "projectId": "<PROJECT_ID>" }
  }'
python
from openai import OpenAI

client = OpenAI(api_key="<API_KEY>", base_url="https://api.nouswise.ai/v1")

resp = client.chat.completions.create(
  model="standard",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What changed between release A and B?"}
  ],
  stream=False,
  extra_body={"projectId": "<PROJECT_ID>"}
)

print(resp.choices[0].message.content)
js
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.NW_API_KEY, baseURL: "https://api.nouswise.ai/v1" });

const resp = await client.chat.completions.create({
  model: "standard",
  messages: [
    { role: "user", content: "List three ways to improve onboarding." }
  ],
  stream: false,
  extra_body: { projectId: process.env.NW_PROJECT_ID }
});

console.log(resp.choices[0].message?.content);

Streaming

Set stream=true to receive tokens incrementally as they are generated. The SDKs expose the stream as an iterator of chunks; raw HTTP requests receive Server-Sent Events (SSE).

curl
curl --request POST \
  --url https://api.nouswise.ai/v1/chat/completions \
  --header 'Authorization: Bearer <API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "standard",
    "messages": [
      {"role": "user", "content": "Outline a product brief for a note-taking app."}
    ],
    "stream": true,
    "extra_body": { "projectId": "<PROJECT_ID>" }
  }'
python
from openai import OpenAI

client = OpenAI(api_key="<API_KEY>", base_url="https://api.nouswise.ai/v1")

stream = client.chat.completions.create(
  model="standard",
  messages=[{"role": "user", "content": "Outline a product brief for a note-taking app."}],
  stream=True,
  extra_body={"projectId": "<PROJECT_ID>"}
)

for chunk in stream:
  delta = chunk.choices[0].delta
  if delta and delta.content:
    print(delta.content, end="")
js
import OpenAI from "openai";
const client = new OpenAI({ apiKey: process.env.NW_API_KEY, baseURL: "https://api.nouswise.ai/v1" });

const stream = await client.chat.completions.create({
  model: "standard",
  messages: [{ role: "user", content: "Draft a project roadmap in bullets." }],
  stream: true,
  extra_body: { projectId: process.env.NW_PROJECT_ID }
});

for await (const chunk of stream) {
  const delta = chunk.choices?.[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}
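
When calling the endpoint over raw HTTP instead of an SDK, each event arrives as a data: line containing a JSON chunk. The sketch below parses that stream with the requests library; it assumes the gateway follows the usual OpenAI-style SSE framing (one data: {...} line per chunk, terminated by data: [DONE]), so adjust if the actual framing differs.

python
import json
import requests

resp = requests.post(
  "https://api.nouswise.ai/v1/chat/completions",
  headers={
    "Authorization": "Bearer <API_KEY>",
    "Content-Type": "application/json",
  },
  json={
    "model": "standard",
    "messages": [{"role": "user", "content": "Draft a project roadmap in bullets."}],
    "stream": True,
    "extra_body": {"projectId": "<PROJECT_ID>"},
  },
  stream=True,  # keep the connection open and read the body incrementally
)

for line in resp.iter_lines(decode_unicode=True):
  # SSE events arrive as "data: <json>" lines; skip blanks and keep-alives.
  if not line or not line.startswith("data: "):
    continue
  payload = line[len("data: "):]
  if payload == "[DONE]":  # assumed end-of-stream sentinel (OpenAI convention)
    break
  chunk = json.loads(payload)
  delta = chunk["choices"][0]["delta"].get("content")
  if delta:
    print(delta, end="")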

Tips

  • Use the system message to establish behavior and constraints.
  • For grounded answers with citations and tools, prefer the Responses API. Chat Completions is best for general chat and text generation.
  • Provide your project identifier via extra_body.projectId for workspace routing.