Better UI is designed from the ground up to be "AI-first," meaning its core Tool abstraction is built with AI model integration in mind. A key feature is the seamless way you can expose your defined tools to AI models, particularly using the Vercel AI SDK. This page details how to integrate your Better UI tools with AI models via a server-side API endpoint, leveraging the toAITool() method.
toAITool() Method

The Tool class in Better UI provides a dedicated method, toAITool(), which converts a Better UI Tool instance into a format directly compatible with the Vercel AI SDK (version 5 and later). This method ensures that your tool's description, input schema, and execution logic are correctly presented to the AI model.
The toAITool() method returns an object with the following structure:
- description: This field is taken directly from your tool's description property and is used by the AI model to understand the tool's purpose and decide when to call it.
- inputSchema: This is your tool's Zod input schema, which the AI model uses to generate structured arguments for the tool call.
- execute: This is a function that the AI SDK will call when the AI model decides to use your tool. Crucially, this execute function internally calls your Better UI tool's run() method with isServer: true. This guarantees that the tool's server-side implementation (the .server() handler) is invoked in a secure backend environment, even if the tool also has a client-side implementation.

For more details on the toAITool() method and other Tool class properties, refer to the [Tool API Reference].
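The structure above can be sketched in miniature. The stand-in tool below (fakeTool, a hypothetical object, not Better UI's real Tool class) shows the shape toAITool() returns and how execute delegates to run() with isServer: true:

```typescript
// Illustrative sketch only: fakeTool and its fields are hypothetical stand-ins
// for Better UI's Tool class, which is not reproduced here.
type AITool = {
  description: string;
  inputSchema: unknown; // the tool's Zod input schema in the real library
  execute: (input: unknown) => Promise<unknown>;
};

const fakeTool = {
  description: 'Adds two numbers',
  schema: {}, // placeholder for a Zod schema
  async run(input: { a: number; b: number }, opts: { isServer: boolean }) {
    // In the real library, isServer: true selects the .server() handler path.
    return { sum: input.a + input.b, ranOnServer: opts.isServer };
  },
  toAITool(): AITool {
    return {
      description: this.description,
      inputSchema: this.schema,
      // The key guarantee: execute always calls run() with isServer: true.
      execute: (input) =>
        this.run(input as { a: number; b: number }, { isServer: true }),
    };
  },
};
```

When the AI SDK invokes execute with model-generated arguments, the result is whatever the server-side handler returns, which the SDK then streams back as a tool result.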
To integrate your Better UI tools with an AI model, you typically set up a server-side API endpoint. This endpoint will receive messages from your frontend, forward them to the AI model along with your registered tools, and then stream the AI's response (including tool calls and results) back to the client.
Here's a full example from app/api/chat/route.ts demonstrating this integration:
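The original code block for this example did not survive in this copy, so the following is a hedged reconstruction of the route's shape. In a real project, openai comes from @ai-sdk/openai and streamText, stepCountIs, and convertToModelMessages come from the ai package; here they are replaced by clearly marked inline stubs with simplified signatures so the sketch is self-contained, and weatherTool is a hypothetical stand-in for a Better UI tool.

```typescript
// Sketch of app/api/chat/route.ts. Real imports would be:
//   import { openai } from '@ai-sdk/openai';
//   import { streamText, stepCountIs, convertToModelMessages } from 'ai';
//   import { weatherTool, searchTool, counterTool } from '@/lib/tools';
// The stubs below stand in for those modules so this sketch runs standalone.

type UIMessage = { role: string; parts?: { type: string; text?: string }[] };
type ModelMessage = { role: string; content: string };

// --- stubs (simplified; NOT the real AI SDK implementations) ---
const openai = (model: string) => ({ provider: 'openai', model });
const stepCountIs = (n: number) => ({ type: 'step-count', count: n });
const convertToModelMessages = (msgs: UIMessage[]): ModelMessage[] =>
  msgs.map((m) => ({
    role: m.role,
    content: (m.parts ?? []).map((p) => p.text ?? '').join(''),
  }));
const streamText = (_opts: Record<string, unknown>) => ({
  toUIMessageStreamResponse: () => new Response('streamed UI messages'),
});

// Hypothetical stand-in for a Better UI tool exposing toAITool().
const weatherTool = {
  toAITool: () => ({
    description: 'Get the current weather for a city',
    inputSchema: {}, // a Zod schema in the real library
    execute: async (input: unknown) => ({ input, ranOnServer: true }),
  }),
};

export async function POST(req: Request) {
  const { messages } = await req.json();

  // UIMessage[] from the client -> ModelMessage[] for the model.
  const modelMessages = convertToModelMessages(messages);

  const result = streamText({
    model: openai('gpt-4o-mini'),
    messages: modelMessages,
    // Register Better UI tools by mapping each through toAITool().
    tools: { weather: weatherTool.toAITool() },
    stopWhen: stepCountIs(5),
  });

  // Convert the model-oriented stream back into a UI-message stream.
  return result.toUIMessageStreamResponse();
}
```

The walkthrough below goes through each piece of this handler in turn.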
Imports:

- openai: The OpenAI model adapter from @ai-sdk/openai.
- streamText, stepCountIs, convertToModelMessages: Core utilities from the ai package for streaming AI responses and message conversion.
- weatherTool, searchTool, counterTool: Your Better UI tool instances, typically defined once in a shared library file (e.g., lib/tools.tsx). Each of these tools has its own .server() implementation for backend logic and a .view() implementation for rendering results on the frontend.

POST Function:
- The handler accepts POST requests, typically containing an array of messages from the client's chat interface.
- const { messages } = await req.json(); extracts the chat messages from the request body.
- convertToModelMessages(messages): The AI SDK distinguishes between UIMessage objects on the client (which can include text and tool parts) and ModelMessage objects used for interaction with the AI model. convertToModelMessages transforms the incoming UIMessage[] from the client into the ModelMessage[] format expected by the streamText function, ensuring proper communication with the AI model.

streamText Call:
- model: openai('gpt-4o-mini'): Specifies the AI model to use (e.g., GPT-4o Mini).
- messages: modelMessages: Provides the conversation history to the model.
- tools: { ... }: This is where your Better UI tools are registered; you simply map each Tool instance through toAITool(). When the AI model decides to call one of these tools, the execute function provided by toAITool() is invoked, triggering your tool's .server() handler.
- stopWhen: stepCountIs(5): An optional parameter to stop generation after a certain number of AI processing steps.

result.toUIMessageStreamResponse():
After the streamText call, result contains the streamed response from the AI model, which can include both text and tool calls/outputs. toUIMessageStreamResponse() converts this model-oriented stream back into a format suitable for the client's UI, allowing the client to render both the AI's textual responses and the visual outputs of executed tools.

Key benefits of this integration:

- Single definition: Tools are defined once (with tool()); you don't duplicate schemas or logic for AI integration.
- End-to-end type safety: Tool definitions provide type safety from AI model input generation to your server-side logic and frontend rendering.
- Server-side security: The toAITool() execute method ensures that tool logic always runs in its designated server environment when triggered by an AI model, preventing exposure of sensitive server code or credentials to the client.
- Rich UI: Each tool's .view() component allows AI-triggered tool results to be rendered visually and interactively within your chat interface, creating a rich, dynamic user experience.