Tools and MCP
The Responses API supports tools for retrieval and external actions. Two key capabilities are available:
- File Search tool: retrieves context from your connected sources to ground answers.
- MCP tools: connect to Model Context Protocol servers and optionally require approval before execution.
This page explains how to enable tools in a request and how to react to tool-related streaming events.
Enabling File Search
Add a File Search entry under tools and allow the model to call tools automatically by setting tool_choice to "auto" (or explicitly require a tool if desired).
Finding the Responses Project Id
Copy the Responses Project Id from the share dialog in the app.
```bash
curl --request POST \
  --url https://api.nouswise.ai/v1/responses \
  --header 'Authorization: Bearer <API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "standard",
    "tool_choice": "auto",
    "tools": [
      {
        "type": "file_search",
        "vector_store_ids": ["<PROJECT_ID>"],
        "max_num_results": 50
      }
    ],
    "input": [
      {
        "type": "message",
        "role": "user",
        "content": [
          { "type": "input_text", "text": "What changed in the 2024 Q3 release? Provide references." }
        ]
      }
    ],
    "stream": true
  }'
```

```python
from openai import OpenAI
import os

# Configure the OpenAI SDK to use the Nouswise Responses API
client = OpenAI(
    api_key=os.environ.get("NW_API_KEY"),
    base_url="https://api.nouswise.ai/v1",
)

with client.responses.stream(
    model="standard",
    tool_choice="auto",
    tools=[
        {
            "type": "file_search",
            "vector_store_ids": ["<PROJECT_ID>"],
            "max_num_results": 50,
        }
    ],
    input=[
        {
            "type": "message",
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "What changed in the 2024 Q3 release? Provide references.",
                }
            ],
        }
    ],
) as stream:
    for event in stream:
        if event.type == "response.output_text.delta":
            print(event.delta, end="")
        if event.type.startswith("response.file_search_call."):
            pass  # optional: update UI with tool progress
    final = stream.get_final_response()
```

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.NW_API_KEY,
  baseURL: "https://api.nouswise.ai/v1",
});

const stream = await client.responses.stream({
  model: "standard",
  tool_choice: "auto",
  tools: [
    {
      type: "file_search",
      vector_store_ids: ["<PROJECT_ID>"],
      max_num_results: 50,
    },
  ],
  input: [
    {
      type: "message",
      role: "user",
      content: [
        { type: "input_text", text: "What changed in the 2024 Q3 release? Provide references." },
      ],
    },
  ],
});

for await (const event of stream) {
  if (event.type === "response.output_text.delta") {
    process.stdout.write(event.delta);
  }
  if (event.type?.startsWith("response.file_search_call.")) {
    // optional: update UI with tool progress
  }
}
const final = await stream.finalResponse();
```

When the model uses File Search, the stream will include events such as:
- response.file_search_call.in_progress
- response.file_search_call.searching
- response.file_search_call.completed
Continue rendering output_text deltas as usual. Citations and annotations accompany the content, so you can build a rich UI.
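As a minimal sketch of consuming those annotations, the helper below walks a final response payload (as a plain dict) and collects file citations. The exact annotation shape is not shown above; this assumes OpenAI-style `file_citation` annotations attached to `output_text` content parts, so verify the field names against your actual responses.

```python
# Sketch: collect citation annotations from a final response payload.
# Assumption: annotations appear as {"type": "file_citation", ...} objects
# on "output_text" content parts inside "message" output items.
def collect_citations(response: dict) -> list[dict]:
    citations = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue  # skip tool-call items such as file_search_call
        for part in item.get("content", []):
            if part.get("type") != "output_text":
                continue
            for ann in part.get("annotations", []):
                if ann.get("type") == "file_citation":
                    citations.append(ann)
    return citations
```

You might call this with `final.model_dump()` (Python SDK) or the parsed JSON body, then render each citation next to the answer.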
Adding an MCP server
You can allow the model to call tools exposed by an MCP server you control.
```bash
curl --request POST \
  --url https://api.nouswise.ai/v1/responses \
  --header 'Authorization: Bearer <API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "standard",
    "tool_choice": "auto",
    "tools": [
      {
        "type": "mcp",
        "server_label": "internal-tools",
        "server_url": "https://mcp.example.com",
        "headers": { "X-Workspace": "A" },
        "require_approval": "always"
      }
    ],
    "input": [
      { "type": "message", "role": "user", "content": [ { "type": "input_text", "text": "Schedule a meeting with the platform team next week." } ] }
    ],
    "stream": true
  }'
```

```python
from openai import OpenAI
import os

client = OpenAI(api_key=os.environ.get("NW_API_KEY"), base_url="https://api.nouswise.ai/v1")

with client.responses.stream(
    model="standard",
    tool_choice="auto",
    tools=[
        {
            "type": "mcp",
            "server_label": "internal-tools",
            "server_url": "https://mcp.example.com",
            "headers": {"X-Workspace": "A"},
            "require_approval": "always",
        }
    ],
    input=[
        {
            "type": "message",
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "Schedule a meeting with the platform team next week.",
                }
            ],
        }
    ],
) as stream:
    for event in stream:
        if event.type == "response.output_text.delta":
            print(event.delta, end="")
        if event.type.startswith("response.mcp_call."):
            pass  # optional: show tool progress
    final = stream.get_final_response()
```

```javascript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.NW_API_KEY, baseURL: "https://api.nouswise.ai/v1" });

const stream = await client.responses.stream({
  model: "standard",
  tool_choice: "auto",
  tools: [
    {
      type: "mcp",
      server_label: "internal-tools",
      server_url: "https://mcp.example.com",
      headers: { "X-Workspace": "A" },
      require_approval: "always",
    },
  ],
  input: [
    { type: "message", role: "user", content: [{ type: "input_text", text: "Schedule a meeting with the platform team next week." }] },
  ],
});

for await (const event of stream) {
  if (event.type === "response.output_text.delta") process.stdout.write(event.delta);
  if (event.type?.startsWith("response.mcp_call.")) {
    // optional: show tool progress
  }
}
const final = await stream.finalResponse();
```

With require_approval enabled, the stream may include an approval request event that describes the tool and arguments the model intends to use. You can show this to the user and ask for consent before proceeding.
After approval in your UI, continue the conversation and include an approval response item in the next request to Responses, indicating approve=true or false, along with an optional reason.
```bash
curl --request POST \
  --url https://api.nouswise.ai/v1/responses \
  --header 'Authorization: Bearer <API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "standard",
    "input": [
      {
        "type": "mcp_approval_response",
        "approval_request_id": "<ID_FROM_EVENT>",
        "approve": true,
        "reason": "User approved via UI"
      }
    ],
    "stream": true
  }'
```

```python
from openai import OpenAI
import os

client = OpenAI(api_key=os.environ.get("NW_API_KEY"), base_url="https://api.nouswise.ai/v1")

with client.responses.stream(
    model="standard",
    input=[
        {
            "type": "mcp_approval_response",
            "approval_request_id": "<ID_FROM_EVENT>",
            "approve": True,
            "reason": "User approved via UI",
        }
    ],
) as stream:
    for event in stream:
        if event.type == "response.output_text.delta":
            print(event.delta, end="")
    final = stream.get_final_response()
```

```javascript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.NW_API_KEY, baseURL: "https://api.nouswise.ai/v1" });

const stream = await client.responses.stream({
  model: "standard",
  input: [
    {
      type: "mcp_approval_response",
      approval_request_id: "<ID_FROM_EVENT>",
      approve: true,
      reason: "User approved via UI",
    },
  ],
});

for await (const event of stream) {
  if (event.type === "response.output_text.delta") process.stdout.write(event.delta);
}
const final = await stream.finalResponse();
```

MCP Approval Flow
Ask users for consent before executing sensitive tools exposed via an MCP server. This recipe shows how to listen for approval requests and send back an approval decision.
Step 1 — Start a streaming run
Request tool usage from your MCP server and require approval.
```bash
curl --request POST \
  --url https://api.nouswise.ai/v1/responses \
  --header 'Authorization: Bearer <API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "standard",
    "tool_choice": "auto",
    "tools": [
      { "type": "mcp", "server_label": "internal", "server_url": "https://mcp.example.com", "require_approval": "always" }
    ],
    "input": [
      { "type": "message", "role": "user", "content": [ { "type": "input_text", "text": "Create a calendar event for Friday 2pm with the platform team." } ] }
    ],
    "stream": true
  }'
```

```python
from openai import OpenAI
import os

client = OpenAI(api_key=os.environ.get("NW_API_KEY"), base_url="https://api.nouswise.ai/v1")

with client.responses.stream(
    model="standard",
    tool_choice="auto",
    tools=[
        {
            "type": "mcp",
            "server_label": "internal",
            "server_url": "https://mcp.example.com",
            "require_approval": "always",
        }
    ],
    input=[
        {
            "type": "message",
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "Create a calendar event for Friday 2pm with the platform team.",
                }
            ],
        }
    ],
) as stream:
    for event in stream:
        if event.type == "response.output_text.delta":
            print(event.delta, end="")
        if event.type.startswith("response.mcp_call."):
            pass  # optional: show tool progress
    final = stream.get_final_response()
```

```javascript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.NW_API_KEY, baseURL: "https://api.nouswise.ai/v1" });

const stream = await client.responses.stream({
  model: "standard",
  tool_choice: "auto",
  tools: [
    { type: "mcp", server_label: "internal", server_url: "https://mcp.example.com", require_approval: "always" },
  ],
  input: [
    { type: "message", role: "user", content: [{ type: "input_text", text: "Create a calendar event for Friday 2pm with the platform team." }] },
  ],
});

for await (const event of stream) {
  if (event.type === "response.output_text.delta") process.stdout.write(event.delta);
}
const final = await stream.finalResponse();
```

Step 2 — Detect approval request event
While consuming SSE events, you may receive an approval request. Surface it in your UI with clear details so the end user can make an informed decision.
Pseudocode:

```javascript
if (event.type === 'response.mcp_call.in_progress' && event.approval_request) {
  showApprovalModal({
    toolName: event.approval_request.name,
    arguments: event.approval_request.arguments,
    onDecision: (approve, reason) => sendDecision(event.approval_request.id, approve, reason),
  });
}
```

Step 3 — Send approval response
Continue the conversation by posting an approval response item.
```bash
curl --request POST \
  --url https://api.nouswise.ai/v1/responses \
  --header 'Authorization: Bearer <API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "standard",
    "input": [
      {
        "type": "mcp_approval_response",
        "approval_request_id": "<ID_FROM_EVENT>",
        "approve": true,
        "reason": "User approved"
      }
    ],
    "stream": true
  }'
```

```python
from openai import OpenAI
import os

client = OpenAI(api_key=os.environ.get("NW_API_KEY"), base_url="https://api.nouswise.ai/v1")

with client.responses.stream(
    model="standard",
    input=[
        {
            "type": "mcp_approval_response",
            "approval_request_id": "<ID_FROM_EVENT>",
            "approve": True,
            "reason": "User approved",
        }
    ],
) as stream:
    for event in stream:
        if event.type == "response.output_text.delta":
            print(event.delta, end="")
    final = stream.get_final_response()
```

```javascript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.NW_API_KEY, baseURL: "https://api.nouswise.ai/v1" });

const stream = await client.responses.stream({
  model: "standard",
  input: [
    {
      type: "mcp_approval_response",
      approval_request_id: "<ID_FROM_EVENT>",
      approve: true,
      reason: "User approved",
    },
  ],
});

for await (const event of stream) {
  if (event.type === "response.output_text.delta") process.stdout.write(event.delta);
}
const final = await stream.finalResponse();
```

If approved, the stream will show the tool call arguments, progress, and completion. If declined, the assistant continues without the tool.
Tips
- Keep users informed about what a tool will do before requesting consent.
- Store an audit trail of approvals/denials in your application.
- Consider per-tool policies (e.g., always/never ask for approval) based on sensitivity.
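The per-tool policy tip above can be sketched as a small helper that builds a `require_approval` value from tool sensitivity. The tool names (`search_calendar`, `create_event`, etc.) are hypothetical, and the object form shown (`{"never": {"tool_names": [...]}}`) is an assumption modeled on OpenAI-style MCP tool definitions; confirm the accepted shapes against the Responses API before relying on it.

```python
# Sketch: derive a per-tool approval policy for an MCP tool definition.
# Assumption: besides the string values "always"/"never", the API accepts
# an object mapping a policy to a list of tool names.
SAFE_TOOLS = {"search_calendar", "list_rooms"}      # read-only: no prompt needed
SENSITIVE_TOOLS = {"create_event", "send_invite"}   # external actions: always ask

def approval_policy() -> dict:
    return {
        "never": {"tool_names": sorted(SAFE_TOOLS)},
        "always": {"tool_names": sorted(SENSITIVE_TOOLS)},
    }
```

You would then pass the result as `"require_approval": approval_policy()` in the MCP tool entry instead of the blanket `"always"`.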
Best practices
- Use tool_choice="auto" to let the model decide when to call tools; use "required" if you need a tool call before answering.
- Inform users when external actions may occur and obtain explicit consent for sensitive operations.
- Combine File Search with user-provided context to improve grounding.
- Always render output incrementally while also listening for tool events to keep users informed.
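To illustrate the grounding tip above, this sketch builds one request body that combines File Search with caller-supplied context by sending the pasted context as an extra input_text part ahead of the question. The field names mirror the requests earlier on this page; the helper itself (`build_request`) is illustrative, not part of the API.

```python
# Sketch: combine File Search grounding with user-provided context
# in a single Responses request body.
def build_request(project_id: str, question: str, context: str) -> dict:
    return {
        "model": "standard",
        "tool_choice": "auto",
        "tools": [
            {
                "type": "file_search",
                "vector_store_ids": [project_id],
                "max_num_results": 50,
            }
        ],
        "input": [
            {
                "type": "message",
                "role": "user",
                "content": [
                    # Caller-supplied context goes first so the model reads
                    # it before the question; File Search adds the rest.
                    {"type": "input_text", "text": f"Context:\n{context}"},
                    {"type": "input_text", "text": question},
                ],
            }
        ],
        "stream": True,
    }
```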