
application() function


typia

```typescript
export namespace llm {
  // LLM FUNCTION CALLING APPLICATION SCHEMA
  export function application<
    App extends Record<string, any>,
    Config extends Partial<ILlmSchema.IConfig> = {},
  >(
    config?: Partial<Pick<ILlmApplication.IConfig, "validate">>,
  ): ILlmApplication;

  // STRUCTURED OUTPUT
  export function parameters<
    Parameters extends Record<string, any>,
    Config extends Partial<ILlmSchema.IConfig> = {},
  >(): ILlmSchema.IParameters;

  // TYPE SCHEMA
  export function schema<T, Config extends Partial<ILlmSchema.IConfig> = {}>(
    $defs: Record<string, ILlmSchema>,
  ): ILlmSchema;
}
```

LLM function calling application schema from a native TypeScript class or interface type.

typia.llm.application<App>() is a function that composes an LLM (Large Language Model) function calling application schema from a native TypeScript class or interface type. It returns an ILlmApplication instance, a data structure representing a collection of LLM function calling schemas, each with built-in parse(), coerce(), and validate() methods.

If you deliver the LLM function schemas registered in ILlmApplication.functions to an LLM provider like OpenAI ChatGPT, the LLM selects a proper function to call and fills in its parameter values from the conversation with the user. This is "LLM function calling".
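As a rough sketch of that delivery step, the function schemas map onto OpenAI's chat-completion "tools" array. The FunctionSchema shape and toOpenAiTools helper below are simplified, hypothetical stand-ins for typia's actual types, shown only to illustrate the shape of the hand-off:

```typescript
// Hypothetical minimal shape standing in for typia's ILlmFunction
// (real schemas come from typia.llm.application<App>()).
interface FunctionSchema {
  name: string;
  description?: string;
  parameters: object; // JSON schema of the function arguments
}

// Map each function schema onto OpenAI's chat-completion "tools" format.
function toOpenAiTools(functions: FunctionSchema[]) {
  return functions.map((f) => ({
    type: "function" as const,
    function: {
      name: f.name,
      description: f.description,
      parameters: f.parameters,
    },
  }));
}

const tools = toOpenAiTools([
  {
    name: "create",
    description: "Create a new article.",
    parameters: { type: "object", properties: {}, required: [] },
  },
]);
console.log(tools[0].function.name); // "create"
```

The LLM provider then answers with the name of the chosen function and a JSON string of arguments, which is where the harness described below takes over.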

Let's build an AI chatbot with ease using the typia.llm.application<App>() function.

LLM Function Calling and Structured Output

The LLM selects a proper function and fills its arguments.

Nowadays, most LLMs (Large Language Models), including OpenAI's, support the "function calling" feature. "LLM function calling" means that the LLM automatically selects a proper function and fills its parameter values from the conversation with the user (typically chat text).

Structured output is another LLM feature: the LLM automatically transforms its conversational output into a structured data format such as JSON.

example/src/llm/application.ts
```typescript
import typia, { ILlmApplication } from "typia";

import { BbsArticleService } from "./BbsArticleService";

const app: ILlmApplication = typia.llm.application<BbsArticleService>();
console.log(app);
```


Integrations

typia.llm.controller<Class>() wraps a TypeScript class into an ILlmController that can be plugged into any supported framework. Every class method becomes a tool: JSDoc comments become descriptions, TypeScript types become JSON schemas, and validation feedback is embedded automatically.

src/main.ts
```typescript
import { openai } from "@ai-sdk/openai";
import { toVercelTools } from "@typia/vercel";
import { generateText, Tool } from "ai";
import typia from "typia";

import { BbsArticleService } from "./BbsArticleService";

const tools: Record<string, Tool> = toVercelTools({
  controllers: [
    typia.llm.controller<BbsArticleService>(
      "bbs",
      new BbsArticleService(),
    ),
  ],
});

const result = await generateText({
  model: openai("gpt-4o"),
  tools,
  prompt: "I want to create a new article about TypeScript",
});
```

The Function Calling Harness

The function calling harness is typia's three-layer pipeline that turns unreliable LLM output into 100% correct structured data:

  1. Lenient JSON Parsing: recovers broken JSON (unclosed brackets, trailing commas, markdown wrapping, etc.)
  2. Type Coercion: fixes wrong types ("42" → 42, double-stringified objects → objects, etc.)
  3. Validation Feedback: pinpoints remaining value errors with inline // ❌ annotations so the LLM can self-correct and retry

Each layer catches what the previous one didn't. Together they form a deterministic correction loop around the probabilistic LLM.

Lenient JSON Parsing & Type Coercion

examples/src/llm/application-parse.ts
```typescript
import { dedent } from "@typia/utils";
import typia, { ILlmApplication, ILlmFunction, tags } from "typia";

const app: ILlmApplication = typia.llm.application<OrderService>();
const func: ILlmFunction = app.functions[0];

// LLM sometimes returns malformed JSON with wrong types
const llmOutput = dedent`
  > LLM sometimes returns some prefix text with markdown JSON code block.
  I'd be happy to help you with your order! 😊

  \`\`\`json
  {
    "order": {
      "payment": "{\"type\":\"card\",\"cardNumber\":\"1234-5678", // unclosed string & bracket
      "product": {
        name: "Laptop", // unquoted key
        price: "1299.99", // wrong type (string instead of number)
        quantity: 2, // trailing comma
      },
      "customer": { // incomplete keyword + unclosed brackets
        "name": "John Doe",
        "email": "john@example.com",
        vip: tru
  \`\`\`
`;

const result = func.parse(llmOutput);
if (result.success) console.log(result);

interface IOrder {
  payment: IPayment;
  product: {
    name: string;
    price: number & tags.Minimum<0>;
    quantity: number & tags.Type<"uint32">;
  };
  customer: {
    name: string;
    email: string & tags.Format<"email">;
    vip: boolean;
  };
}
type IPayment =
  | { type: "card"; cardNumber: string }
  | { type: "bank"; accountNumber: string };

declare class OrderService {
  /**
   * Create a new order.
   *
   * @param props Order properties
   */
  createOrder(props: { order: IOrder }): { id: string };
}
```

Each ILlmFunction includes a parse() method for handling LLM JSON outputs. This parser is specifically designed for the messy reality of LLM responses:

Lenient JSON Features:

  • Unclosed brackets {, [ and strings
  • Trailing commas [1, 2, 3, ]
  • JavaScript-style comments (// and /* */)
  • Unquoted object keys (JavaScript identifier style)
  • Incomplete keywords (tru, fal, nul)
  • Markdown code block extraction (```json ... ```)
  • Junk text prefix skipping (explanatory text LLMs often add)

Type Coercion:

LLMs frequently return wrong types: numbers as strings, booleans as strings, or even double-stringified JSON objects. ILlmFunction.parse() automatically coerces these based on the function's parameter schema.

0% → 100% Success Rate on Union Types

Qwen3.5 model shows 0% success rate when handling union types with double-stringified JSON objects. With ILlmFunction.parse() type coercion, the success rate jumps to 100%.

For Pre-parsed Objects, Use ILlmFunction.coerce()

Some LLM SDKs (Anthropic, Vercel AI, LangChain, MCP) parse JSON internally and return JavaScript objects directly. In these cases, use ILlmFunction.coerce() instead of ILlmFunction.parse() to fix types without re-parsing.
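As a toy illustration of the kind of repair coercion performs (the real implementation is schema-driven inside typia; coerceNumber and coerceBoolean below are hypothetical helpers, not typia APIs):

```typescript
// Hypothetical helpers mimicking schema-driven coercion on leaf values.
function coerceNumber(value: unknown): number | undefined {
  if (typeof value === "number") return value;
  if (typeof value === "string" && value.trim() !== "") {
    const parsed = Number(value); // "1299.99" -> 1299.99
    if (!Number.isNaN(parsed)) return parsed;
  }
  return undefined; // not coercible; left for validation feedback
}

function coerceBoolean(value: unknown): boolean | undefined {
  if (typeof value === "boolean") return value;
  if (value === "true") return true;
  if (value === "false") return false;
  return undefined;
}

// An SDK already parsed the JSON, but the leaf types are wrong:
const raw = { price: "1299.99", vip: "true" };
console.log(coerceNumber(raw.price)); // 1299.99
console.log(coerceBoolean(raw.vip)); // true
```

The real coerce() does this recursively over the whole argument object, guided by the parameter schema, including the double-stringified object case mentioned above.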

For more details, see JSON Utilities.

Validation Feedback

examples/src/llm/application-validate.ts
```typescript
import { LlmJson } from "@typia/utils";
import typia, { ILlmApplication, ILlmFunction, IValidation, tags } from "typia";

const app: ILlmApplication = typia.llm.application<OrderService>();
const func: ILlmFunction = app.functions[0];

// LLM generated invalid data
const input = {
  order: {
    payment: { type: "card", cardNumber: 12345678 }, // should be string
    product: {
      name: "Laptop",
      price: -100, // violates Minimum<0>
      quantity: 2.5, // should be uint32
    },
    customer: {
      name: "John Doe",
      email: "invalid-email", // violates Format<"email">
      vip: "yes", // should be boolean
    },
  },
};

// Validate and format errors for LLM feedback
const result: IValidation = func.validate(input);
if (result.success === false) {
  const feedback: string = LlmJson.stringify(result);
  console.log(feedback);
}

interface IOrder {
  payment: IPayment;
  product: {
    name: string;
    price: number & tags.Minimum<0>;
    quantity: number & tags.Type<"uint32">;
  };
  customer: {
    name: string;
    email: string & tags.Format<"email">;
    vip: boolean;
  };
}
type IPayment =
  | { type: "card"; cardNumber: string }
  | { type: "bank"; accountNumber: string };

declare class OrderService {
  /**
   * Create a new order.
   *
   * @param props Order properties
   */
  createOrder(props: { order: IOrder }): { id: string };
}
```

typia.llm.application<App>() embeds typia.validate<T>() in every function for automatic argument validation. When validation fails, use LlmJson.stringify() from @typia/utils to format errors with inline // ❌ comments:

```json
{
  "order": {
    "payment": {
      "type": "card",
      "cardNumber": 12345678 // ❌ [{"path":"$input.order.payment.cardNumber","expected":"string"}]
    },
    "product": {
      "name": "Laptop",
      "price": -100, // ❌ [{"path":"$input.order.product.price","expected":"number & Minimum<0>"}]
      "quantity": 2.5 // ❌ [{"path":"$input.order.product.quantity","expected":"number & Type<\"uint32\">"}]
    },
    "customer": {
      "name": "John Doe",
      "email": "invalid-email", // ❌ [{"path":"$input.order.customer.email","expected":"string & Format<\"email\">"}]
      "vip": "yes" // ❌ [{"path":"$input.order.customer.vip","expected":"boolean"}]
    }
  }
}
```

The LLM reads this feedback and self-corrects on the next turn. Together with the lenient parsing and type coercion above, this parse → coerce → validate → feedback → retry cycle completes the harness.

In Production

In the AutoBe project (an AI-powered backend code generator by Wrtn Technologies), qwen3-coder-next showed only a 6.75% raw function calling success rate on compiler AST types. With the complete harness, it reached 100% across all four tested Qwen models.

AutoBe once shipped a build with the system prompt completely missing. Nobody noticed: output quality was identical. The types were the best prompt; the harness was the best orchestration.

Working on compiler AST means working on any type and any use case.

AutoBeTest.IExpression
```typescript
// Compiler AST may be the hardest type structure possible
//
// Unlimited union types + unlimited depth + recursive references
export type IExpression =
  | IBooleanLiteral
  | INumericLiteral
  | IStringLiteral
  | IArrayLiteralExpression // <- recursive (contains IExpression[])
  | IObjectLiteralExpression // <- recursive (contains IExpression)
  | INullLiteral
  | IUndefinedKeyword
  | IIdentifier
  | IPropertyAccessExpression // <- recursive
  | IElementAccessExpression // <- recursive
  | ITypeOfExpression // <- recursive
  | IPrefixUnaryExpression // <- recursive
  | IPostfixUnaryExpression // <- recursive
  | IBinaryExpression // <- recursive (left & right)
  | IArrowFunction // <- recursive (body is IExpression)
  | ICallExpression // <- recursive (args are IExpression[])
  | INewExpression // <- recursive
  | IConditionalPredicate // <- recursive (then & else branches)
  | ... // 30+ expression types total
```

Restrictions

typia.llm.application<App>() is subject to the restrictions described below.

For the function parameter types, it follows the restrictions of both the typia.llm.parameters<Params>() and typia.llm.schema<T>() functions. Therefore, the parameters must be a keyworded object type with static keys and no dynamic keys. Also, the object type must not be nullable or optional.

For the return value type, it likewise follows the restrictions of the typia.llm.parameters<Params>() function. Therefore, the return type must be a keyworded object type with static keys, or void. Primitive types (like number, string, boolean), array types, and union types with undefined are not allowed as return types. If you need to return a primitive or array value, wrap it in an object type (e.g., { value: number } or { items: T[] }).
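A minimal sketch of the wrapping rule in practice (CalculatorService and Calculator are hypothetical examples, not part of typia):

```typescript
// Hypothetical service illustrating the return-type restriction:
// primitives and arrays must be wrapped in a keyworded object.
interface CalculatorService {
  // ❌ not allowed: primitive return type
  // add(props: { x: number; y: number }): number;

  // ✅ allowed: primitive wrapped in an object
  add(props: { x: number; y: number }): { value: number };

  // ✅ allowed: array wrapped in an object
  list(props: { prefix: string }): { items: string[] };
}

class Calculator implements CalculatorService {
  add(props: { x: number; y: number }): { value: number } {
    return { value: props.x + props.y };
  }
  list(props: { prefix: string }): { items: string[] } {
    return { items: [props.prefix + "-1", props.prefix + "-2"] };
  }
}
console.log(new Calculator().add({ x: 1, y: 2 }).value); // 3
```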

example/src/llm/application.violation.ts
```typescript
import typia, { ILlmApplication, tags } from "typia";

const app: ILlmApplication = typia.llm.application<BbsArticleController>();
console.log(app);

interface BbsArticleController {
  /**
   * Create a new article.
   *
   * Writes a new article and archives it into the DB.
   *
   * @param props Properties of create function
   * @returns Newly created article
   */
  create(props: {
    /**
     * Information of the article to create
     */
    input: IBbsArticle.ICreate;
  }): Promise<IBbsArticle | undefined>;

  /**
   * Add two numbers.
   *
   * @param props Properties of add function
   * @returns The sum value
   */
  add(props: { x: number; y: number }): number;

  erase(id: string & tags.Format<"uuid">): Promise<void>;
}
```