
toVercelTools() function


@typia/vercel
```typescript
export function toVercelTools(props: {
  controllers: Array<ILlmController | IHttpLlmController>;
  prefix?: boolean | undefined;
}): Record<string, Tool>;
```

Vercel AI SDK integration for typia.

toVercelTools() converts TypeScript classes or OpenAPI documents into a Vercel AI SDK Record<string, Tool> in a single call.

Every class method becomes a tool, JSDoc comments become tool descriptions, and TypeScript types become JSON schemas — all at compile time. For OpenAPI documents, every API endpoint is converted to a Vercel tool with schemas from the specification.

Lenient JSON parsing, type coercion, and validation feedback are all embedded automatically — the complete function calling harness that turns unreliable LLM output into 100% correct structured data.
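To illustrate what "lenient JSON parsing" means in practice, here is a simplified sketch of the idea (not @typia/vercel's actual implementation): strip the markdown fences and trailing commas that LLMs commonly emit before handing the text to JSON.parse.

```typescript
// Simplified sketch of lenient JSON parsing -- an illustration of
// the concept, NOT @typia/vercel's internal implementation.
function lenientParse(raw: string): unknown {
  let text = raw.trim();
  // LLMs often wrap JSON output in markdown code fences; strip them.
  text = text.replace(/^```(?:json)?\s*/i, "").replace(/\s*```$/, "");
  // Trailing commas before } or ] are invalid JSON; remove them.
  text = text.replace(/,\s*([}\]])/g, "$1");
  return JSON.parse(text);
}
```

The real harness goes further (type coercion and validation feedback, described below), but the principle is the same: accept the messy output an LLM actually produces, then repair it into strict JSON.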

Setup

Terminal
```shell
npm install @typia/vercel ai
npm install typia
npx typia setup
```

From TypeScript Class

src/main.ts
```typescript
import { openai } from "@ai-sdk/openai";
import { toVercelTools } from "@typia/vercel";
import { generateText, GenerateTextResult, Tool } from "ai";
import typia from "typia";

import { Calculator } from "./Calculator";

const tools: Record<string, Tool> = toVercelTools({
  controllers: [
    typia.llm.controller<Calculator>("calculator", new Calculator()),
  ],
});
const result: GenerateTextResult = await generateText({
  model: openai("gpt-4o"),
  prompt: "What is 10 + 5?",
  tools,
});
```

Create controllers from TypeScript classes with typia.llm.controller<Class>(), and pass them to toVercelTools().

  • controllers: Array of controllers created via typia.llm.controller<Class>() or HttpLlm.controller()
  • prefix: When true (default), tool names are formatted as {controllerName}_{methodName}. Set to false to use bare method names
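The naming rule for prefix can be sketched with a small hypothetical helper (this mirrors the documented behavior; it is not part of @typia/vercel's public API):

```typescript
// Hypothetical helper mirroring the documented tool-naming rule.
// With prefix (the default), names are "{controllerName}_{methodName}";
// with prefix: false, the bare method name is used.
function toolName(
  controllerName: string,
  methodName: string,
  prefix: boolean = true,
): string {
  return prefix ? `${controllerName}_${methodName}` : methodName;
}

// toolName("calculator", "add")        -> "calculator_add"
// toolName("calculator", "add", false) -> "add"
```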

Type Restrictions

Every method’s parameter type must be a keyworded object type with static keys — not a primitive, array, or union. The return type must also be an object type or void. Primitive return types like number or string are not allowed; wrap them in an object (e.g., { value: number }). See typia.llm.application() Restrictions for details.
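For example, here is one way a class like the Calculator referenced above could satisfy these restrictions (a sketch, assuming the class shape; the actual example class may differ):

```typescript
// Compliant with the restrictions above: a single keyworded object
// parameter with static keys, and an object return type.
class Calculator {
  /** Adds two numbers. */
  public add(props: { x: number; y: number }): { value: number } {
    // A bare `number` return type would be rejected, so the result
    // is wrapped in an object.
    return { value: props.x + props.y };
  }
  // NOT allowed: add(x: number, y: number): number
}
```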

From OpenAPI Document

src/main.ts
```typescript
import { toVercelTools } from "@typia/vercel";
import { HttpLlm } from "@typia/utils";
import { Tool } from "ai";

const tools: Record<string, Tool> = toVercelTools({
  controllers: [
    HttpLlm.controller({
      name: "shopping",
      document: await fetch(
        "https://shopping-be.wrtn.ai/editor/swagger.json",
      ).then((r) => r.json()),
      connection: {
        host: "https://shopping-be.wrtn.ai",
        headers: { Authorization: "Bearer ********" },
      },
    }),
  ],
});
```

Create controllers from OpenAPI documents with HttpLlm.controller(), and pass them to toVercelTools().

  • name: Controller name used as prefix for tool names
  • document: Swagger/OpenAPI document (v2.0, v3.0, or v3.1)
  • connection: HTTP connection info including host and optional headers
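Conceptually, each OpenAPI operation becomes a tool whose execution issues the corresponding HTTP request against connection.host. A rough sketch of that mapping (all type and function names here are hypothetical; @typia/vercel handles this internally):

```typescript
// Rough sketch of how an OpenAPI operation could map to an HTTP
// request. Names are hypothetical, for illustration only.
interface IOperation {
  method: "get" | "post" | "put" | "delete";
  path: string; // e.g. "/sales/{id}"
}
interface IConnection {
  host: string;
  headers?: Record<string, string>;
}

function buildRequest(
  connection: IConnection,
  operation: IOperation,
  args: Record<string, string>,
): { url: string; init: { method: string; headers?: Record<string, string> } } {
  // Substitute path parameters like {id} with the LLM-provided arguments.
  const path = operation.path.replace(/\{(\w+)\}/g, (_, key) => args[key]!);
  return {
    url: `${connection.host}${path}`,
    init: {
      method: operation.method.toUpperCase(),
      headers: connection.headers,
    },
  };
}
```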

The Function Calling Harness

toVercelTools() embeds lenient JSON parsing, type coercion, and validation feedback in every tool — all automatically. When validation fails, the error is returned as text content with inline // ❌ comments at each invalid property:

```jsonc
{
  "name": "John",
  "age": "twenty", // ❌ [{"path":"$input.age","expected":"number"}]
  "email": "not-an-email", // ❌ [{"path":"$input.email","expected":"string & Format<\"email\">"}]
  "hobbies": "reading" // ❌ [{"path":"$input.hobbies","expected":"Array<string>"}]
}
```

The LLM reads this feedback and self-corrects on the next turn.
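The feedback format above can be approximated with a small helper that appends inline // ❌ comments to each invalid property (a simplified sketch; the real harness derives these annotations from typia's validation errors):

```typescript
// Simplified sketch of validation feedback: annotate invalid
// properties with inline comments. Illustrative only -- the real
// harness builds this from typia.validate()'s error list.
interface IError {
  path: string;     // e.g. "$input.age"
  expected: string; // e.g. "number"
}

function annotate(json: Record<string, unknown>, errors: IError[]): string {
  const body = Object.entries(json)
    .map(([key, value]) => {
      const hits = errors.filter((e) => e.path === `$input.${key}`);
      const line = `  "${key}": ${JSON.stringify(value)}`;
      return hits.length ? `${line} // ❌ ${JSON.stringify(hits)}` : line;
    })
    .join(",\n");
  return `{\n${body}\n}`;
}
```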

In the AutoBe project (an AI-powered backend code generator by Wrtn Technologies), qwen3-coder-next showed only a 6.75% raw function calling success rate on compiler AST types. With the complete harness, however, it reached 100% — across all four tested Qwen models.

If the harness handles compiler AST types, it can handle any type and any use case.

AutoBeTest.IExpression
```typescript
// Compiler AST may be the hardest type structure possible
//
// Unlimited union types + unlimited depth + recursive references
export type IExpression =
  | IBooleanLiteral
  | INumericLiteral
  | IStringLiteral
  | IArrayLiteralExpression // <- recursive (contains IExpression[])
  | IObjectLiteralExpression // <- recursive (contains IExpression)
  | INullLiteral
  | IUndefinedKeyword
  | IIdentifier
  | IPropertyAccessExpression // <- recursive
  | IElementAccessExpression // <- recursive
  | ITypeOfExpression // <- recursive
  | IPrefixUnaryExpression // <- recursive
  | IPostfixUnaryExpression // <- recursive
  | IBinaryExpression // <- recursive (left & right)
  | IArrowFunction // <- recursive (body is IExpression)
  | ICallExpression // <- recursive (args are IExpression[])
  | INewExpression // <- recursive
  | IConditionalPredicate // <- recursive (then & else branches)
  | ... // 30+ expression types total
```

Structured Output

Use typia.llm.parameters<T>() with Vercel’s jsonSchema() to generate structured output with validation:

src/main.ts
```typescript
import { openai } from "@ai-sdk/openai";
import { dedent, LlmJson } from "@typia/utils";
import { generateObject, jsonSchema } from "ai";
import typia, { tags } from "typia";

interface IMember {
  email: string & tags.Format<"email">;
  name: string;
  age: number & tags.Minimum<0> & tags.Maximum<100>;
  hobbies: string[];
  joined_at: string & tags.Format<"date">;
}

const { object } = await generateObject({
  model: openai("gpt-4o"),
  schema: jsonSchema<IMember>(typia.llm.parameters<IMember>(), {
    validate: (value) => {
      const result = typia.validate<IMember>(value);
      if (result.success) return { success: true, value: result.data };
      return {
        success: false,
        error: new Error(LlmJson.stringify(result)),
      };
    },
  }),
  prompt: dedent`
    I am a new member of the community.

    My name is John Doe, and I am 25 years old.
    I like playing basketball and reading books,
    and joined to this community at 2022-01-01.
  `,
});
```
Terminal
```
{
  email: 'john.doe@example.com',
  name: 'John Doe',
  age: 25,
  hobbies: [ 'playing basketball', 'reading books' ],
  joined_at: '2022-01-01'
}
```

The IMember interface is the single source of truth. typia.llm.parameters<IMember>() generates the JSON schema, and typia.validate<IMember>() validates the output — all from the same type.
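The validate callback passed to jsonSchema() must return either { success: true, value } or { success: false, error }. That contract can be seen in isolation with a hand-written validator standing in for typia.validate (a sketch, no typia involved):

```typescript
// Sketch of the validate-callback contract used with jsonSchema():
// { success: true, value } on success, { success: false, error } on
// failure. A hand-written check stands in for typia.validate here.
type ValidationResult<T> =
  | { success: true; value: T }
  | { success: false; error: Error };

function validateMember(value: unknown): ValidationResult<{ email: string }> {
  const input = value as { email?: unknown };
  if (typeof input.email === "string" && input.email.includes("@"))
    return { success: true, value: { email: input.email } };
  return {
    success: false,
    error: new Error('email must match Format<"email">'),
  };
}
```

With typia, the validator body collapses to a single typia.validate<IMember>(value) call generated from the same interface as the schema.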

Error Handling

test_vercel_class_controller_error_handling.ts
```typescript
import type { Tool } from "ai";

import { TestValidator } from "@nestia/e2e";
import { ILlmController } from "@typia/interface";
import { toVercelTools } from "@typia/vercel";
import typia from "typia";

import { Calculator } from "../structures/Calculator";

export const test_vercel_class_controller_error_handling =
  async (): Promise<void> => {
    // 1. Create class-based controller using typia.llm.controller
    const controller: ILlmController<Calculator> =
      typia.llm.controller<Calculator>("calculator", new Calculator());

    // 2. Convert to Vercel tools
    const tools: Record<string, Tool> = toVercelTools({
      controllers: [controller],
    });

    // 3. Test divide by zero (throws an error)
    const divideTool: Tool = tools["divide"]!;
    const result: unknown = await divideTool.execute!(
      { x: 10, y: 0 },
      { toolCallId: "test-1", messages: [], abortSignal: undefined as any },
    );

    // 4. Verify the result contains error
    TestValidator.predicate("result should be a failure object", () => {
      const res = result as { success?: boolean; error?: string };
      return res.success === false && typeof res.error === "string";
    });
    TestValidator.predicate("error should contain division by zero", () => {
      const res = result as { success: boolean; error: string };
      return res.error.includes("Division by zero");
    });
  };
```

When a tool execution throws a runtime error (e.g., division by zero), @typia/vercel catches the exception and returns a failure object such as { success: false, error: "Error: Division by zero is not allowed" }, as the test above verifies. This is different from validation errors — validation errors indicate wrong argument types, while runtime errors indicate the function itself failed.
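The wrapping behavior can be sketched as follows (a hypothetical helper mirroring the failure shape observed in the test above; @typia/vercel does this internally for every tool):

```typescript
// Hypothetical sketch of runtime-error wrapping: exceptions from the
// underlying function become { success: false, error } objects instead
// of propagating to the caller.
async function safeExecute<T>(
  fn: () => Promise<T> | T,
): Promise<{ success: true; value: T } | { success: false; error: string }> {
  try {
    return { success: true, value: await fn() };
  } catch (e) {
    return { success: false, error: String(e) };
  }
}

function divide(x: number, y: number): number {
  if (y === 0) throw new Error("Division by zero is not allowed");
  return x / y;
}
```

Returning the error as data rather than rethrowing lets the LLM read the failure message and decide how to proceed on its next turn, just as it does with validation feedback.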
