
parameters() function


typia
```typescript
export namespace llm {
  // LLM FUNCTION CALLING APPLICATION SCHEMA
  export function application<
    App extends Record<string, any>,
    Config extends Partial<ILlmSchema.IConfig> = {},
  >(
    config?: Partial<Pick<ILlmApplication.IConfig, "validate">>,
  ): ILlmApplication;

  // STRUCTURED OUTPUT
  export function parameters<
    Parameters extends Record<string, any>,
    Config extends Partial<ILlmSchema.IConfig> = {},
  >(): ILlmSchema.IParameters;

  // TYPE SCHEMA
  export function schema<
    T,
    Config extends Partial<ILlmSchema.IConfig> = {},
  >(
    $defs: Record<string, ILlmSchema>,
  ): ILlmSchema;
}
```

Structured output schema of LLM (Large Language Model).

typia.llm.parameters<Parameters>() is a function that generates a structured output schema for an LLM (Large Language Model) from a TypeScript object type. It is used for the LLM function calling or structured output features provided by LLM providers such as OpenAI.

The return value type ILlmSchema.IParameters is similar to the JSON schema definition’s object type.

LLM Function Calling and Structured Output

The LLM selects a proper function and fills its arguments.

Nowadays, most LLMs (Large Language Models), such as OpenAI’s, support the “function calling” feature. “LLM function calling” means that the LLM automatically selects a proper function and fills its parameter values from the conversation with the user (usually chat text).

Structured output is another LLM feature. “Structured output” means that the LLM automatically transforms its output into a structured data format like JSON.

example/src/llm/parameters.ts
```typescript
import typia, { ILlmSchema, tags } from "typia";

const parameters: ILlmSchema.IParameters = typia.llm.parameters<IMember>();
console.log(parameters);

interface IMember {
  email: string & tags.Format<"email">;
  name: string;
  age: number;
  hobbies: string[];
  joined_at: string & tags.Format<"date">;
}
```

πŸ“– Playground Link

Structured Output

src/examples/llm.parameters.ts
```typescript
import OpenAI from "openai";
import typia, { tags } from "typia";

interface IMember {
  email: string & tags.Format<"email">;
  name: string;
  age: number;
  hobbies: string[];
  joined_at: string & tags.Format<"date">;
}

const main = async (): Promise<void> => {
  const client: OpenAI = new OpenAI({
    apiKey: "<YOUR_OPENAI_API_KEY>",
  });
  const completion: OpenAI.ChatCompletion =
    await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content: [
            "I am a new member of the community.",
            "",
            "My name is John Doe, and I am 25 years old.",
            "I like playing basketball and reading books,",
            "and joined to this community at 2022-01-01.",
          ].join("\n"),
        },
      ],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "member",
          schema: typia.llm.parameters<IMember>() as any,
        },
      },
    });
  console.log(JSON.parse(completion.choices[0].message.content!));
};
main().catch(console.error);
```
Terminal
```
{
  email: 'john.doe@example.com',
  name: 'John Doe',
  age: 25,
  hobbies: [ 'playing basketball', 'reading books' ],
  joined_at: '2022-01-01'
}
```

You can utilize the typia.llm.parameters<Parameters>() function to generate structured output as shown above.

Just configure the output mode as JSON schema, and deliver the return value of typia.llm.parameters<Parameters>() to an LLM provider like OpenAI (ChatGPT). The LLM provider will then automatically transform the output conversation into structured data of the Parameters type.

The Function Calling Harness

The function calling harness is typia’s three-layer pipeline that turns unreliable LLM output into 100% correct structured data:

  1. Lenient JSON Parsing β€” recovers broken JSON (unclosed brackets, trailing commas, markdown wrapping, etc.)
  2. Type Coercion β€” fixes wrong types ("42" β†’ 42, double-stringified objects β†’ objects, etc.)
  3. Validation Feedback β€” pinpoints remaining value errors with inline // ❌ annotations so the LLM can self-correct and retry

Each layer catches what the previous one didn’t. Together they form a deterministic correction loop around the probabilistic LLM.
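The shape of that correction loop can be sketched in plain TypeScript. This is a deliberately simplified stand-in, not the real typia / @typia/utils APIs: `lenientParse` only strips a markdown fence, and `coerceAndValidate` handles a single `age` field, just to show how the three layers chain together.

```typescript
// Hypothetical sketch of the three-layer correction loop; the helper
// functions here are simplified stand-ins, not typia's actual APIs.
type Result<T> = { success: true; data: T } | { success: false; feedback: string };

function lenientParse(text: string): unknown {
  // Layer 1 (simplified): extract a markdown JSON fence, then JSON.parse.
  const match = text.match(/```json\s*([\s\S]*?)\s*```/);
  return JSON.parse(match ? match[1] : text);
}

function coerceAndValidate(value: unknown): Result<{ age: number }> {
  // Layers 2 + 3 (simplified): coerce "42" -> 42, then validate the value.
  const obj = value as { age: unknown };
  const age = typeof obj.age === "string" ? Number(obj.age) : obj.age;
  if (typeof age === "number" && Number.isFinite(age) && age >= 0)
    return { success: true, data: { age } };
  return {
    success: false,
    feedback: `{ "age": ${JSON.stringify(obj.age)} } // ❌ expected non-negative number`,
  };
}

function correctionLoop(outputs: string[]): { age: number } | null {
  // Each element of `outputs` mocks one LLM turn.
  for (const text of outputs) {
    const result = coerceAndValidate(lenientParse(text));
    if (result.success) return result.data;
    // In a real loop, `result.feedback` would be sent back to the LLM here.
  }
  return null;
}

// First attempt is invalid (-5); the "retry" succeeds after coercion.
console.log(correctionLoop(['```json\n{"age": -5}\n```', '{"age": "25"}'])); // → { age: 25 }
```

The key design point is that every layer is deterministic, so retries converge: the only probabilistic component left in the loop is the LLM itself.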

Lenient JSON Parsing & Type Coercion

examples/src/llm/parameters-parse.ts
```typescript
import { LlmJson, dedent } from "@typia/utils";
import typia, { ILlmSchema, tags } from "typia";

const params: ILlmSchema.IParameters = typia.llm.parameters<IProps>();

// LLM sometimes returns malformed JSON with wrong types,
// along with prefix text and a markdown JSON code block.
const llmOutput = dedent`
  I'd be happy to help you with your order! 😊

  \`\`\`json
  {
    "order": {
      "payment": "{\"type\":\"card\",\"cardNumber\":\"1234-5678", // unclosed string & bracket
      "product": {
        name: "Laptop", // unquoted key
        price: "1299.99", // wrong type (string instead of number)
        quantity: 2, // trailing comma
      },
      "customer": { // incomplete keyword + unclosed brackets
        "name": "John Doe",
        "email": "john@example.com",
        vip: tru
  \`\`\`
`;

const result = LlmJson.parse(llmOutput, params);
if (result.success) console.log(result.data);

interface IProps {
  order: IOrder;
}
interface IOrder {
  payment: IPayment;
  product: {
    name: string;
    price: number & tags.Minimum<0>;
    quantity: number & tags.Type<"uint32">;
  };
  customer: {
    name: string;
    email: string & tags.Format<"email">;
    vip: boolean;
  };
}
type IPayment =
  | { type: "card"; cardNumber: string }
  | { type: "bank"; accountNumber: string };
```

Use LlmJson.parse() from @typia/utils to handle LLM JSON outputs with the parameter schema from typia.llm.parameters<T>(). This parser is specifically designed for the messy reality of LLM responses:

Lenient JSON Features:

  • Unclosed brackets {, [ and strings
  • Trailing commas [1, 2, 3, ]
  • JavaScript-style comments (// and /* */)
  • Unquoted object keys (JavaScript identifier style)
  • Incomplete keywords (tru, fal, nul)
  • Markdown code block extraction (```json ... ```)
  • Junk text prefix skipping (explanatory text LLMs often add)

Type Coercion:

LLMs frequently return wrong types β€” numbers as strings, booleans as strings, or even double-stringified JSON objects. LlmJson.parse() automatically coerces these based on the parameter schema.

0% β†’ 100% Success Rate on Union Types

The Qwen3.5 model shows a 0% success rate when handling union types with double-stringified JSON objects. With LlmJson.parse() type coercion, the success rate jumps to 100%.

For Pre-parsed Objects, Use LlmJson.coerce()

Some LLM SDKs (Anthropic, Vercel AI, LangChain, MCP) parse JSON internally and return JavaScript objects directly. In these cases, use LlmJson.coerce() instead of LlmJson.parse() to fix types without re-parsing.
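To illustrate what coercion on a pre-parsed object looks like, here is a minimal plain-TypeScript sketch. The real LlmJson.coerce() is driven by the full parameter schema from typia.llm.parameters<T>(); this toy version only fixes stringified numbers on a flat object, but the idea is the same.

```typescript
// Simplified sketch of schema-driven coercion on a pre-parsed object.
// `coerceNumbers` is illustrative only, not the @typia/utils API.
function coerceNumbers(value: unknown, numericKeys: string[]): Record<string, unknown> {
  // The SDK already parsed the JSON, but numeric fields may still be strings.
  const obj = { ...(value as Record<string, unknown>) };
  for (const key of numericKeys) {
    const v = obj[key];
    if (typeof v === "string" && v !== "" && !Number.isNaN(Number(v)))
      obj[key] = Number(v); // "1299.99" -> 1299.99
  }
  return obj;
}

// e.g. a pre-parsed SDK result with a stringified number:
console.log(coerceNumbers({ name: "Laptop", price: "1299.99" }, ["price"]));
// → { name: 'Laptop', price: 1299.99 }
```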

For more details, see JSON Utilities.

Validation Feedback

examples/src/llm/parameters-validate.ts
```typescript
import { LlmJson } from "@typia/utils";
import typia, { IValidation, tags } from "typia";

// LLM generated invalid data
const input = {
  order: {
    payment: { type: "card", cardNumber: 12345678 }, // should be string
    product: {
      name: "Laptop",
      price: -100, // violates Minimum<0>
      quantity: 2.5, // should be uint32
    },
    customer: {
      name: "John Doe",
      email: "invalid-email", // violates Format<"email">
      vip: "yes", // should be boolean
    },
  },
};

// Validate and format errors for LLM feedback
const result: IValidation<IProps> = typia.validate<IProps>(input);
if (result.success === false) {
  const feedback: string = LlmJson.stringify(result);
  console.log(feedback);
}

interface IProps {
  order: IOrder;
}
interface IOrder {
  payment: IPayment;
  product: {
    name: string;
    price: number & tags.Minimum<0>;
    quantity: number & tags.Type<"uint32">;
  };
  customer: {
    name: string;
    email: string & tags.Format<"email">;
    vip: boolean;
  };
}
type IPayment =
  | { type: "card"; cardNumber: string }
  | { type: "bank"; accountNumber: string };
```

Use typia.validate<T>() for validation feedback on structured output. When validation fails, use LlmJson.stringify() from @typia/utils to format errors with inline // ❌ comments:

{ "name": "John Doe", "age": -5, // ❌ [{"path":"$input.age","expected":"number & Minimum<0>"}] "email": "invalid-email", // ❌ [{"path":"$input.email","expected":"string & Format<\"email\">"}] "hobbies": "reading", // ❌ [{"path":"$input.hobbies","expected":"Array<string>"}] "joined_at": "not-a-date", // ❌ [{"path":"$input.joined_at","expected":"string & Format<\"date\">"}] "metadata": { "level": "platinum", // ❌ [{"path":"$input.metadata","expected":"(IMetadata & { level: \"gold\"; }) | (IMetadata & { level: \"silver\"; })"}] "points": 1500 } }

The LLM reads this feedback and self-corrects on the next turn.
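One way to drive that self-correction turn is to send the annotated feedback back as an ordinary user message. The message wording below is illustrative only, an assumption rather than a prescribed format; what matters is that the ❌-annotated JSON reaches the model verbatim.

```typescript
// Sketch: packaging validation feedback as a retry message for the LLM.
// The prompt text is a hypothetical example, not a fixed typia convention.
const feedback: string = [
  `{`,
  `  "age": -5, // ❌ [{"path":"$input.age","expected":"number & Minimum<0>"}]`,
  `}`,
].join("\n");

const retryMessage = {
  role: "user" as const,
  content: [
    "Your previous answer failed validation.",
    "Fix only the fields marked with ❌ and answer again:",
    feedback,
  ].join("\n"),
};
console.log(retryMessage.content); // the ❌ annotations survive intact
```

The same message object can then be appended to the `messages` array of a chat completion request for the next turn.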

In the AutoBe project (an AI-powered backend code generator by Wrtn Technologies), qwen3-coder-next showed only a 6.75% raw function calling success rate on compiler AST types. With the complete harness, however, it reached 100% across all four tested Qwen models.

Working on compiler AST types means the harness can handle any type and any use case.

AutoBeTest.IExpression
```typescript
// Compiler AST may be the hardest type structure possible:
// unlimited union types + unlimited depth + recursive references.
export type IExpression =
  | IBooleanLiteral
  | INumericLiteral
  | IStringLiteral
  | IArrayLiteralExpression // <- recursive (contains IExpression[])
  | IObjectLiteralExpression // <- recursive (contains IExpression)
  | INullLiteral
  | IUndefinedKeyword
  | IIdentifier
  | IPropertyAccessExpression // <- recursive
  | IElementAccessExpression // <- recursive
  | ITypeOfExpression // <- recursive
  | IPrefixUnaryExpression // <- recursive
  | IPostfixUnaryExpression // <- recursive
  | IBinaryExpression // <- recursive (left & right)
  | IArrowFunction // <- recursive (body is IExpression)
  | ICallExpression // <- recursive (args are IExpression[])
  | INewExpression // <- recursive
  | IConditionalPredicate // <- recursive (then & else branches)
  | ...; // 30+ expression types total
```

Restrictions

typia.llm.parameters<Parameters>() follows the same restrictions as the typia.llm.schema<T>() function. It also has one additional restriction: the keyworded-arguments rule.

In LLM function calling and structured output, the parameters must be a keyworded object type with static keys only, without any dynamic keys. Also, the object type must not be nullable or optional.
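If the value you actually want is not a keyworded object (say, a bare array), a common workaround is to wrap it in an object type with a static key and unwrap it after the LLM fills the arguments. This is a sketch; the `IWrapper` and `unwrap` names are illustrative, not part of typia.

```typescript
// Hypothetical sketch: satisfying the keyworded-arguments rule by wrapping.
// A bare Array<number> cannot be a parameters type, but an object holding
// it under a static key can; e.g. typia.llm.parameters<IWrapper>() would
// be valid where typia.llm.parameters<Array<number>>() is a compile error.
interface IWrapper {
  values: number[]; // the actual payload lives under a static key
}

// Unwrap after the LLM fills the arguments:
function unwrap(args: IWrapper): number[] {
  return args.values;
}

console.log(unwrap({ values: [1, 2, 3] })); // → [ 1, 2, 3 ]
```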

If you don’t follow the LLM’s keyworded-arguments rule, typia.llm.parameters<Parameters>() will throw a compilation error like below.

src/examples/llm.parameters.violation.ts
```typescript
import typia from "typia";

typia.llm.parameters<string>();
typia.llm.parameters<Record<string, boolean>>();
typia.llm.parameters<Array<number>>();
```