The expected output type
Coerces pre-parsed output to match the expected schema types.
Use this only when the SDK already parses JSON internally. For raw JSON strings from LLM output, use parse instead — it handles both lenient parsing and type coercion in one step.
LLMs often return values with incorrect types even after parsing:
"42" → 42 (when schema expects number)"true" → true (when schema expects boolean)"null" → null (when schema expects null)"{...}" → {...} (when schema expects object)"[...]" → [...] (when schema expects array)This function recursively coerces these double-stringified values based on the parameters schema.
Type validation is NOT performed — use validate after coercion.
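As a rough illustration of the coercion behavior described above (not typia's actual implementation), a recursive pass over a simplified schema shape might look like this; the `Schema` type and `coerceValue` function are hypothetical names introduced here:

```typescript
type Schema = {
  type?: string;
  properties?: Record<string, Schema>;
  items?: Schema;
};

// Recursively coerce double-stringified values toward the schema's types.
// Values that cannot be coerced are returned unchanged so that a later
// validation step can report the mismatch.
function coerceValue(value: unknown, schema: Schema): unknown {
  if (typeof value === "string") {
    if (schema.type === "number") {
      const n = Number(value);
      if (value.trim() !== "" && !Number.isNaN(n)) return n;
    } else if (schema.type === "boolean" && (value === "true" || value === "false")) {
      return value === "true";
    } else if (schema.type === "null" && value === "null") {
      return null;
    } else if (schema.type === "object" || schema.type === "array") {
      try {
        return coerceValue(JSON.parse(value), schema); // "{...}" or "[...]"
      } catch {
        return value; // not double-stringified after all
      }
    }
    return value;
  }
  const items = schema.items;
  if (Array.isArray(value) && items)
    return value.map((element) => coerceValue(element, items));
  if (value !== null && typeof value === "object" && schema.properties) {
    const out: Record<string, unknown> = {};
    for (const [key, child] of Object.entries(value as Record<string, unknown>)) {
      const sub = schema.properties[key];
      out[key] = sub ? coerceValue(child, sub) : child;
    }
    return out;
  }
  return value;
}
```

Note that coercion never throws: an uncoercible value passes through untouched, matching the rule above that type validation is a separate step.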
JSON schema for the structured output.
Pass this schema to LLM providers (OpenAI, Anthropic, Google, etc.) to
constrain the output format. The schema includes $defs for shared type
definitions and properties for the output structure.
Most LLM providers accept this directly in their structured output or response format configuration.
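For example, the generated schema can be dropped into a provider's structured output configuration. This sketch uses OpenAI's `response_format` shape; the schema object and the name "article" are illustrative stand-ins, not values from this library:

```typescript
// A minimal JSON schema standing in for the generated `schema` property:
const schema = {
  type: "object",
  properties: { title: { type: "string" } },
  required: ["title"],
  additionalProperties: false,
  $defs: {},
};

// OpenAI's chat completions API accepts such a schema in its
// response_format configuration ("json_schema" mode):
const responseFormat = {
  type: "json_schema",
  json_schema: { name: "article", strict: true, schema },
};
```

Other providers take the same schema under different keys (e.g. a tool or response-schema parameter), so the schema object itself stays provider-agnostic.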
Lenient JSON parser with schema-based type coercion.
Handles incomplete or malformed JSON commonly produced by LLMs:
- Comments (// and multi-line)
- Truncated literals (tru, fal, nul)

Also coerces double-stringified values based on the schema:
"42" → 42 (when schema expects number)"true" → true (when schema expects boolean)"null" → null (when schema expects null)"{...}" → {...} (when schema expects object)"[...]" → [...] (when schema expects array)Type validation is NOT performed — use validate after parsing.
If the SDK (e.g., LangChain, Vercel AI, MCP) already parses JSON internally and provides a pre-parsed object, use coerce instead.
Raw JSON string from LLM output
Parse result with data on success, or partial data with errors
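A simplified sketch of the lenient-parsing idea (not typia's implementation): try strict `JSON.parse` first, then attempt small repairs such as completing a truncated keyword and closing unclosed brackets. `IParse` and `lenientParse` are hypothetical names for illustration:

```typescript
interface IParse {
  success: boolean;
  data?: unknown;
  errors: string[];
}

// Strict parse first; on failure, repair a truncated keyword at the end of
// the text and close any brackets the model left open. A real implementation
// would also track string literals; this sketch ignores brackets inside strings.
function lenientParse(raw: string): IParse {
  try {
    return { success: true, data: JSON.parse(raw), errors: [] };
  } catch {
    let fixed = raw.replace(
      /\b(tru|fal|nul)$/,
      (m) => ({ tru: "true", fal: "false", nul: "null" } as Record<string, string>)[m],
    );
    const closers: string[] = [];
    for (const ch of fixed) {
      if (ch === "{") closers.push("}");
      else if (ch === "[") closers.push("]");
      else if (ch === "}" || ch === "]") closers.pop();
    }
    fixed += closers.reverse().join("");
    try {
      return { success: true, data: JSON.parse(fixed), errors: [] };
    } catch (error) {
      return { success: false, errors: [String(error)] };
    }
  }
}
```

The point of the structure is graceful degradation: a well-formed response costs one `JSON.parse`, and only malformed output pays for the repair pass.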
Validates LLM-generated output against the schema.
LLMs frequently make mistakes such as returning strings instead of numbers or omitting required properties. Use this validator to check the output before further processing.
When validation fails, use LlmJson.stringify from @typia/utils to
format the error for LLM feedback. The formatted output shows the invalid
JSON with inline error comments, helping the LLM understand and correct its
mistakes in the next turn.
The output generated by the LLM
Validation result with success status and any errors
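To illustrate the validate-then-feedback flow, here is a hand-written stand-in for a generated validator. The result shape ({ success, errors }) and the $input-style error paths follow the conventions described above, but `validateArticle` itself is purely illustrative:

```typescript
interface IValidation {
  success: boolean;
  errors: { path: string; expected: string; value: unknown }[];
}

// Stub validator for a hypothetical { title: string; views: number } output.
// Each mismatch records where it occurred, what was expected, and what the
// LLM actually produced, so the errors can be formatted back to the model.
function validateArticle(output: any): IValidation {
  const errors: IValidation["errors"] = [];
  if (typeof output?.title !== "string")
    errors.push({ path: "$input.title", expected: "string", value: output?.title });
  if (typeof output?.views !== "number")
    errors.push({ path: "$input.views", expected: "number", value: output?.views });
  return { success: errors.length === 0, errors };
}
```

On failure, the error list is what you would feed to a formatter such as LlmJson.stringify so the model sees its mistakes inline and can correct them in the next turn.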
LLM structured output schema with parsing and validation utilities.
ILlmStructuredOutput<T> is generated by typia.llm.structuredOutput<T>() to provide everything needed for handling LLM structured outputs: the JSON schema for prompting, and functions for parsing, coercing, and validating responses.

Structured outputs allow LLMs to generate data conforming to a predefined schema instead of free-form text. This is useful for:
Workflow:
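The schema-then-parse-then-validate loop might be sketched as follows. `IStructuredOutputLike` and the `article` object are hand-written stand-ins, reduced to the members discussed above, for what typia.llm.structuredOutput<T>() actually generates:

```typescript
interface IResult<T> {
  success: boolean;
  data?: T;
  errors: string[];
}

// Reduced stand-in for the generated interface:
interface IStructuredOutputLike<T> {
  schema: object;                        // 1) pass to the LLM provider
  parse(raw: string): IResult<T>;        // 2) lenient parse + coercion
  validate(output: unknown): IResult<T>; // 3) strict type validation
}

interface IArticle {
  title: string;
}

// Toy instance: strict parse and a one-property validator.
const article: IStructuredOutputLike<IArticle> = {
  schema: {
    type: "object",
    properties: { title: { type: "string" } },
    required: ["title"],
  },
  parse: (raw) => {
    try {
      return { success: true, data: JSON.parse(raw), errors: [] };
    } catch (error) {
      return { success: false, errors: [String(error)] };
    }
  },
  validate: (output) =>
    typeof (output as IArticle)?.title === "string"
      ? { success: true, data: output as IArticle, errors: [] }
      : { success: false, errors: ["$input.title: expected string"] },
};

// 4) On validation failure, format the errors and ask the LLM to correct
// its output in the next turn; on success, use the typed data.
const parsed = article.parse('{"title": "Hello"}');
const checked = parsed.success ? article.validate(parsed.data) : parsed;
```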
Author
Jeongho Nam - https://github.com/samchon