HttpLlm module
```typescript
export namespace HttpLlm {
  // Create IHttpLlmController from an OpenAPI document
  export function controller(props: {
    name: string;
    document:
      | SwaggerV2.IDocument
      | OpenApiV3.IDocument
      | OpenApiV3_1.IDocument
      | OpenApiV3_2.IDocument;
    connection: IHttpConnection;
    config?: Partial<IHttpLlmApplication.IConfig>;
    execute?: IHttpLlmController["execute"];
  }): IHttpLlmController;
}
```

LLM function calling from OpenAPI documents.
HttpLlm is a utility module from @typia/utils that converts OpenAPI (Swagger) documents into LLM function calling schemas. While typia.llm.application<Class>() generates schemas from TypeScript class types at compile time, HttpLlm generates them from OpenAPI documents at runtime — making any REST API instantly callable by LLMs.
It supports all OpenAPI versions: Swagger v2.0, OpenAPI v3.0, v3.1, and v3.2.
OpenAPI Conversion Pipeline
HttpLlm first upgrades any OpenAPI version to an emended OpenAPI v3.2 format, then converts each operation into an IHttpLlmFunction with parameter schemas, descriptions, and HTTP metadata. The resulting IHttpLlmController can be passed to MCP, Vercel AI SDK, or Agentica.
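As a rough illustration of that per-operation conversion, here is a self-contained sketch. The IOperation and ILlmTool types and the operationToTool function are hypothetical simplifications for this document, not the library's actual internals:

```typescript
// Hypothetical, simplified stand-ins for the real conversion:
// one OpenAPI operation becomes one LLM tool descriptor that keeps
// its HTTP metadata (method + path) for later execution.
interface IOperation {
  method: "get" | "post" | "put" | "delete";
  path: string;
  description?: string;
  parameters: Record<string, { type: string; description?: string }>;
}

interface ILlmTool {
  name: string; // derived from method + path
  description: string;
  parameters: { type: "object"; properties: Record<string, object> };
  method: string; // HTTP metadata kept for execution
  path: string;
}

function operationToTool(op: IOperation): ILlmTool {
  // e.g. "post /orders/{id}" -> "post_orders_id"
  const name = `${op.method}_${op.path
    .replace(/^\//, "")
    .replace(/[\/{}]+/g, "_")}`.replace(/_+$/, "");
  return {
    name,
    description: op.description ?? `${op.method.toUpperCase()} ${op.path}`,
    parameters: { type: "object", properties: op.parameters },
    method: op.method,
    path: op.path,
  };
}

const tool = operationToTool({
  method: "post",
  path: "/orders/{id}",
  description: "Create an order",
  parameters: { id: { type: "string" } },
});
// tool.name === "post_orders_id"
```

The real converter additionally emits full JSON schemas for path, query, and body parameters, but the shape of the mapping is the same: one operation, one callable tool.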
HttpLlm.controller()
```typescript
import { HttpLlm } from "@typia/utils";
import { IHttpLlmController } from "@typia/interface";

const controller: IHttpLlmController = HttpLlm.controller({
  name: "shopping",
  document: await fetch(
    "https://shopping-be.wrtn.ai/editor/swagger.json",
  ).then((r) => r.json()),
  connection: {
    host: "https://shopping-be.wrtn.ai",
    headers: { Authorization: "Bearer ********" },
  },
});
```

HttpLlm.controller() creates an IHttpLlmController from an OpenAPI document. Every API operation is converted to an IHttpLlmFunction with schemas, descriptions, and HTTP metadata — bundled together with the connection info so it can be both described to and executed by LLMs.
- `name`: Controller name used as a prefix for tool names
- `document`: Swagger/OpenAPI document (v2.0, v3.0, v3.1, or v3.2)
- `connection`: HTTP connection info including `host` and optional `headers`
- `config`: Optional LLM schema conversion configuration
- `execute`: Optional custom executor (defaults to `HttpLlm.execute()`)
Integrations
HttpLlm.controller() wraps an OpenAPI document into an IHttpLlmController that can be plugged into any supported framework. Every API operation becomes a tool — OpenAPI descriptions become tool descriptions, request/response schemas become JSON schemas, and validation feedback is embedded automatically.
Vercel AI SDK
```typescript
import { openai } from "@ai-sdk/openai";
import { toVercelTools } from "@typia/vercel";
import { generateText, Tool } from "ai";
import { HttpLlm } from "@typia/utils";

const tools: Record<string, Tool> = toVercelTools({
  controllers: [
    HttpLlm.controller({
      name: "shopping",
      document: await fetch(
        "https://shopping-be.wrtn.ai/editor/swagger.json",
      ).then((r) => r.json()),
      connection: {
        host: "https://shopping-be.wrtn.ai",
        headers: { Authorization: "Bearer ********" },
      },
    }),
  ],
});

const result = await generateText({
  model: openai("gpt-4o"),
  tools,
  prompt: "I wanna buy MacBook Pro",
});
```

Validation Feedback
When used through MCP, Vercel AI SDK, or Agentica, HttpLlm.controller() embeds typia.validate<T>() in every tool for automatic argument validation. When validation fails, the error is returned as text content with inline // ❌ comments at each invalid property:
```json
{
  "name": "John",
  "age": "twenty", // ❌ [{"path":"$input.age","expected":"number"}]
  "email": "not-an-email", // ❌ [{"path":"$input.email","expected":"string & Format<\"email\">"}]
  "hobbies": "reading" // ❌ [{"path":"$input.hobbies","expected":"Array<string>"}]
}
```

The LLM reads this feedback and self-corrects on the next turn.
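The annotation format above can be reproduced with a small helper. This is an illustrative sketch only: the IValidationError shape mirrors typia's { path, expected } error entries, but the annotate function itself is hypothetical:

```typescript
// Render validation errors as inline "// ❌" feedback next to the
// offending properties, so the LLM can see exactly what to fix.
interface IValidationError {
  path: string; // e.g. "$input.age"
  expected: string; // e.g. "number"
}

function annotate(
  input: Record<string, unknown>,
  errors: IValidationError[],
): string {
  const lines = Object.entries(input).map(([key, value]) => {
    const related = errors.filter((e) => e.path === `$input.${key}`);
    // note: keeps a trailing comma on every line for brevity
    const base = `  ${JSON.stringify(key)}: ${JSON.stringify(value)},`;
    return related.length ? `${base} // ❌ ${JSON.stringify(related)}` : base;
  });
  return `{\n${lines.join("\n")}\n}`;
}

const feedback = annotate({ name: "John", age: "twenty" }, [
  { path: "$input.age", expected: "number" },
]);
// "age" gets a "// ❌" annotation; "name" stays clean
```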
In the AutoBe project (an AI-powered backend code generator), qwen3-coder-next showed only a 6.75% raw function-calling success rate on compiler AST types. With validation feedback, however, it reached 100%.
If it works on compiler AST types, it works on any type and any use case.
```typescript
// Compiler AST may be the hardest type structure possible:
// unlimited union types + unlimited depth + recursive references
export type IExpression =
  | IBooleanLiteral
  | INumericLiteral
  | IStringLiteral
  | IArrayLiteralExpression // <- recursive (contains IExpression[])
  | IObjectLiteralExpression // <- recursive (contains IExpression)
  | INullLiteral
  | IUndefinedKeyword
  | IIdentifier
  | IPropertyAccessExpression // <- recursive
  | IElementAccessExpression // <- recursive
  | ITypeOfExpression // <- recursive
  | IPrefixUnaryExpression // <- recursive
  | IPostfixUnaryExpression // <- recursive
  | IBinaryExpression // <- recursive (left & right)
  | IArrowFunction // <- recursive (body is IExpression)
  | ICallExpression // <- recursive (args are IExpression[])
  | INewExpression // <- recursive
  | IConditionalPredicate // <- recursive (then & else branches)
  | ... // 30+ expression types total
```

Lenient JSON Parsing
Parsing Example
```typescript
import { dedent } from "@typia/utils";
import typia, { ILlmApplication, ILlmFunction, tags } from "typia";

const app: ILlmApplication = typia.llm.application<OrderService>();
const func: ILlmFunction = app.functions[0];

// LLM sometimes returns malformed JSON with wrong types,
// wrapped in prefix text and a markdown JSON code block.
const llmOutput = dedent`
  I'd be happy to help you with your order! 😊
  \`\`\`json
  {
    "order": {
      "payment": "{\"type\":\"card\",\"cardNumber\":\"1234-5678", // unclosed string & bracket
      "product": {
        name: "Laptop", // unquoted key
        price: "1299.99", // wrong type (string instead of number)
        quantity: 2, // trailing comma
      },
      "customer": {
        // incomplete keyword + unclosed brackets
        "name": "John Doe",
        "email": "john@example.com",
        vip: tru
  \`\`\`
`;

const result = func.parse(llmOutput);
if (result.success) console.log(result);

interface IOrder {
  payment: IPayment;
  product: {
    name: string;
    price: number & tags.Minimum<0>;
    quantity: number & tags.Type<"uint32">;
  };
  customer: {
    name: string;
    email: string & tags.Format<"email">;
    vip: boolean;
  };
}
type IPayment =
  | { type: "card"; cardNumber: string }
  | { type: "bank"; accountNumber: string };

declare class OrderService {
  /**
   * Create a new order.
   *
   * @param props Order properties
   */
  createOrder(props: { order: IOrder }): { id: string };
}
```

Each IHttpLlmFunction inherits parse(), coerce(), and validate() methods from ILlmFunction. These are specifically designed for the messy reality of LLM responses:
Lenient JSON Features:

- Unclosed brackets `{`, `[` and strings
- Trailing commas (`[1, 2, 3, ]`)
- JavaScript-style comments (`//` and `/* */`)
- Unquoted object keys (JavaScript identifier style)
- Incomplete keywords (`tru`, `fal`, `nul`)
- Markdown code block extraction (```` ```json ... ``` ````)
- Junk text prefix skipping (explanatory text LLMs often add)
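A toy repairer for a few of these quirks (junk prefix skipping, comments, trailing commas) might look like the following. It is a naive, hypothetical sketch for intuition, not the library's actual parser — for instance, the comment stripping would corrupt string values containing `//`:

```typescript
// Naive lenient-JSON repair: handles only three of the quirks above.
function repairLenient(raw: string): unknown {
  // 1. Junk prefix skipping: start at the first "{" or "[".
  const start = raw.search(/[{[]/);
  let text = start >= 0 ? raw.slice(start) : raw;
  // 2. Strip JavaScript-style comments (naive: ignores strings).
  text = text.replace(/\/\*[\s\S]*?\*\//g, "").replace(/\/\/[^\n]*/g, "");
  // 3. Drop trailing commas before "}" or "]".
  text = text.replace(/,\s*([}\]])/g, "$1");
  return JSON.parse(text);
}

const parsed = repairLenient(
  'Sure! Here is your order:\n{ "items": [1, 2, 3, ], /* ok */ "vip": true }',
);
// parsed: { items: [1, 2, 3], vip: true }
```

The production implementation is far more careful (it tracks string boundaries, closes unbalanced brackets, and completes truncated keywords), but the overall strategy — normalize first, then parse — is the same.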
Type Coercion:

LLMs frequently return wrong types — numbers as strings, booleans as strings, or even double-stringified JSON objects. IHttpLlmFunction.parse() automatically coerces these based on the function's parameter schema.

```typescript
// LLM returns:    { "count": "42", "active": "true", "data": "{\"x\": 1}" }
// After coercion: { "count": 42, "active": true, "data": { x: 1 } }
```

0% → 100% Success Rate on Union Types
The Qwen3.5 model shows a 0% success rate when handling union types with double-stringified JSON objects. With IHttpLlmFunction.parse() type coercion, the success rate jumps to 100%.
For Pre-parsed Objects, Use IHttpLlmFunction.coerce()
Some LLM SDKs (Anthropic, Vercel AI, LangChain, MCP) parse JSON internally and return JavaScript objects directly. In these cases, use IHttpLlmFunction.coerce() instead of IHttpLlmFunction.parse() to fix types without re-parsing.
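Such schema-driven coercion can be sketched as a recursive walk over the parameter schema. The Schema type and coerce function below are hypothetical simplifications for illustration, not the library's actual implementation:

```typescript
// Hypothetical schema-driven coercion over an already-parsed object,
// mimicking what the document describes for IHttpLlmFunction.coerce().
type Schema =
  | { type: "number" }
  | { type: "boolean" }
  | { type: "object"; properties: Record<string, Schema> };

function coerce(value: unknown, schema: Schema): unknown {
  if (schema.type === "number" && typeof value === "string")
    return Number(value); // "42" -> 42
  if (schema.type === "boolean" && typeof value === "string")
    return value === "true"; // "true" -> true
  if (schema.type === "object") {
    // Double-stringified objects: parse first, then recurse per property.
    const obj = (
      typeof value === "string" ? JSON.parse(value) : value
    ) as Record<string, unknown>;
    const out: Record<string, unknown> = {};
    for (const [key, sub] of Object.entries(schema.properties))
      out[key] = coerce(obj[key], sub);
    return out;
  }
  return value;
}

const fixed = coerce(
  { count: "42", active: "true", data: '{"x": 1}' },
  {
    type: "object",
    properties: {
      count: { type: "number" },
      active: { type: "boolean" },
      data: { type: "object", properties: { x: { type: "number" } } },
    },
  },
);
// fixed: { count: 42, active: true, data: { x: 1 } }
```

Because the walk is driven by the schema rather than the input, values that already have the right type pass through untouched.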
For more details, see JSON Utilities.