parameters() function

typia
export namespace llm {
  // LLM FUNCTION CALLING APPLICATION SCHEMA
  export function application<
    App extends Record<string, any>,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(
    options?: Partial<Pick<ILlmApplication.IOptions<Model>, "separate">>,
  ): ILlmApplication<Model>;
 
  // +VALIDATE FUNCTION EMBEDDED
  export function applicationOfValidate<
    App extends Record<string, any>,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(
    options?: Partial<Pick<ILlmApplicationOfValidate.IOptions<Model>, "separate">>,
  ): ILlmApplicationOfValidate<Model>;
 
  // STRUCTURED OUTPUT
  export function parameters<
    Parameters extends Record<string, any>,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(): ILlmSchema.ModelParameters[Model];
 
  // TYPE SCHEMA
  export function schema<
    T,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(
    ...$defs: Extract<
      ILlmSchema.ModelSchema[Model],
      { $ref: string }
    > extends never
      ? []
      : [Record<string, ILlmSchema.ModelSchema[Model]>]
  ): ILlmSchema.ModelSchema[Model];
}

Structured output schema of LLM (Large Language Model).

typia.llm.parameters<Parameters, Model>() is a function that generates an LLM (Large Language Model) structured output schema from a TypeScript object type. It is used for the LLM function calling and structured output features provided by LLM vendors like OpenAI.

The return type ILlmSchema.IParameters is similar to the JSON schema definition's object type. However, its detailed specification differs depending on the LLM provider model you've chosen (e.g., IChatGptSchema, IClaudeSchema, IGeminiSchema, ILlamaSchema). Determine one of them after carefully reading its LLM schema definition.
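For example, here is a minimal sketch of rendering the same TypeScript type for two different models. It assumes the "claude" and "gemini" model keys and the corresponding IClaudeSchema / IGeminiSchema types from @samchon/openapi:

import { IClaudeSchema, IGeminiSchema } from "@samchon/openapi";
import typia, { tags } from "typia";
 
interface IMember {
  email: string & tags.Format<"email">;
  age: number;
}
 
// the same TypeScript type, rendered per provider specification
const claude: IClaudeSchema.IParameters =
  typia.llm.parameters<IMember, "claude">();
const gemini: IGeminiSchema.IParameters =
  typia.llm.parameters<IMember, "gemini">();
console.log(claude, gemini);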

LLM Function Calling and Structured Output

LLM selects a proper function and fills the arguments.

Nowadays, most LLMs (Large Language Models) like OpenAI's support the "function calling" feature. "LLM function calling" means that the LLM automatically selects a proper function and fills the parameter values from its conversation with the user (maybe by chatting text).
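For illustration, here is a minimal sketch (not one of this document's example files) of delivering an application schema generated by typia.llm.application<App, Model>() to OpenAI's tools array. The IMemberService interface is hypothetical, and the mapping assumes each generated function exposes name, description, and parameters:

import OpenAI from "openai";
import typia from "typia";
 
// hypothetical application interface for illustration
interface IMemberService {
  /**
   * Register a new member to the community.
   */
  join(props: { email: string; name: string; age: number }): void;
}
 
const main = async (): Promise<void> => {
  const app = typia.llm.application<IMemberService, "chatgpt">();
  const client: OpenAI = new OpenAI({ apiKey: "<YOUR_OPENAI_API_KEY>" });
  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content: "Please register John Doe, 25 years old, john.doe@example.com.",
      },
    ],
    // deliver every generated function schema as a callable tool
    tools: app.functions.map((f) => ({
      type: "function" as const,
      function: {
        name: f.name,
        description: f.description,
        parameters: f.parameters as Record<string, any>,
      },
    })),
  });
  console.log(completion.choices[0].message.tool_calls);
};
main().catch(console.error);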

Structured output is another feature of LLM. "Structured output" means that the LLM automatically transforms the output conversation into a structured data format like JSON.

example/src/llm.parameters.ts
import { IChatGptSchema } from "@samchon/openapi";
import typia, { tags } from "typia";
 
interface IMember {
  email: string & tags.Format<"email">;
  name: string;
  age: number;
  hobbies: string[];
  joined_at: string & tags.Format<"date">;
}
const p: IChatGptSchema.IParameters = typia.llm.parameters<IMember, "chatgpt">();
console.log(p);

πŸ“– Playground Link

Structured Output

src/examples/llm.parameters.ts
import OpenAI from "openai";
import typia, { tags } from "typia";
 
interface IMember {
  email: string & tags.Format<"email">;
  name: string;
  age: number;
  hobbies: string[];
  joined_at: string & tags.Format<"date">;
}
 
const main = async (): Promise<void> => {
  const client: OpenAI = new OpenAI({
    apiKey: "<YOUR_OPENAI_API_KEY>",
  });
  const completion: OpenAI.ChatCompletion =
    await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content: [
            "I am a new member of the community.",
            "",
            "My name is John Doe, and I am 25 years old.",
            "I like playing basketball and reading books,",
            "and joined to this community at 2022-01-01.",
          ].join("\n"),
        },
      ],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "member",
          schema: typia.llm.parameters<IMember, "chatgpt">() as any,
        },
      },
    });
  console.log(JSON.parse(completion.choices[0].message.content!));
};
main().catch(console.error);
Terminal
{
  email: 'john.doe@example.com',
  name: 'John Doe',
  age: 25,
  hobbies: [ 'playing basketball', 'reading books' ],
  joined_at: '2022-01-01'
}

You can utilize the typia.llm.parameters<Parameters, Model>() function to generate structured output like above.

Just configure the output mode as JSON schema, and deliver the value returned by typia.llm.parameters<Parameters, Model>() to an LLM provider like OpenAI (ChatGPT). Then, the LLM provider will automatically transform the output conversation into structured data of the Parameters type.

Validation Feedback

src/examples/llm.parameters.ts
import OpenAI from "openai";
import typia, { IValidation, tags } from "typia";
 
interface IMember {
  email: string & tags.Format<"email">;
  name: string;
  age: number;
  hobbies: string[];
  joined_at: string & tags.Format<"date">;
}
 
const step = async (
  failure?: IValidation.IFailure | undefined,
): Promise<IValidation<IMember>> => {
  const client: OpenAI = new OpenAI({
    apiKey: "<YOUR_OPENAI_API_KEY>",
  });
  const completion: OpenAI.ChatCompletion =
    await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content: [
            "I am a new member of the community.",
            "",
            "My name is John Doe, and I am 25 years old.",
            "I like playing basketball and reading books,",
            "and joined to this community at 2022-01-01.",
          ].join("\n"),
        },
        ...(failure
          ? [
              {
                role: "system",
                content: [
                  "You A.I. agent had taken a mistak that",
                  "returning wrong typed structured data.",
                  "",
                  "Here is the detailed list of type errors.",
                  "Review and correct them at the next step.",
                  "",
                  "```json",
                  JSON.stringify(failure.errors, null, 2),
                  "```",
                ].join("\n"),
              } satisfies OpenAI.ChatCompletionSystemMessageParam,
            ]
          : []),
      ],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "member",
          schema: typia.llm.parameters<IMember, "chatgpt">() as any,
        },
      },
    });
  const member: IMember = JSON.parse(completion.choices[0].message.content!);
  return typia.validate<IMember>(member);
};
 
const main = async (): Promise<void> => {
  let result: IValidation<IMember> | undefined = undefined;
  for (let i: number = 0; i < 2; ++i) {
    if (result && result.success === true) break;
    result = await step(result);
  }
  console.log(result);
};
 
main().catch(console.error);
Terminal
{
  email: 'john.doe@example.com',
  name: 'John Doe',
  age: 25,
  hobbies: [ 'playing basketball', 'reading books' ],
  joined_at: '2022-01-01'
}

Is LLM Structured Output perfect? No, absolutely not.

LLM (Large Language Model) service vendors like OpenAI make a lot of type-level mistakes when composing the arguments of function calling or structured output. Even when the target schema is super simple, like an Array<string> type, the LLM often fills it with just a string typed value.

In my experience, OpenAI gpt-4o-mini (8b parameters) makes type-level mistakes about 70% of the time when filling the arguments of structured output for a shopping mall service. To overcome the imperfection of such structured output, you have to utilize the validation feedback strategy with the typia.validate<T>() function.

The key concept of the validation feedback strategy is this: let the LLM construct invalidly typed arguments first, then inform the LLM of the detailed type errors, so that it emends the wrongly typed arguments at the next turn. In this way, I could raise the success rate of structured output from 30% to 99% with just one step of validation feedback. Even when the LLM still produces a type error, it has always been caught at the next turn.

For reference, the typia.validate<T>() function creates validation logic by analyzing TypeScript source code and types at the compilation level. Therefore, it is more accurate and detailed than any other validator library. This is exactly what is needed for function calling, and I can confidently say that typia is the best library for LLM structured output.
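For example, here is a minimal sketch of the detailed error reports that typia.validate<T>() produces; the concrete input values are illustrative:

import typia, { IValidation, tags } from "typia";
 
interface IMember {
  email: string & tags.Format<"email">;
  age: number;
}
 
const result: IValidation<IMember> = typia.validate<IMember>({
  email: "not-an-email", // violates Format<"email">
  age: "25", // wrongly typed on purpose
});
if (result.success === false)
  // each error reports its access path, expected type, and actual value,
  // e.g. { path: "$input.age", expected: "number", value: "25" }
  console.log(result.errors);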

| Components | typia | TypeBox | ajv | io-ts | zod | C.V. |
|---|---|---|---|---|---|---|
| Easy to use | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (simple) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (hierarchical) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (recursive) | ✔ | ❌ | ✔ | ✔ | ✔ | ✔ |
| Object (union, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (union, explicit) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌ |
| Object (additional tags) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (template literal types) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌ |
| Object (dynamic properties) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌ |
| Array (rest tuple) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (hierarchical) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Array (recursive) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌ |
| Array (recursive, union) | ✔ | ✔ | ❌ | ✔ | ✔ | ❌ |
| Array (R+U, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated, union) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Ultimate Union Type | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |

C.V. means class-validator

Additionally, this validation feedback strategy is useful for some LLM providers that do not support the restriction properties of JSON schema, like OpenAI (IChatGptSchema) and Gemini (IGeminiSchema). For example, OpenAI and Gemini do not support the format property of JSON schema, so they cannot understand UUID-like types. Even though the typia.llm.application<App, Model>() function writes the restriction information into the description property of the JSON schema, the LLM provider does not reflect it perfectly.

Also, some LLM providers that have not specified their JSON schema version, like Claude (IClaudeSchema) and Llama (ILlamaSchema), tend to fail a lot of function calls involving the restriction properties. In fact, Llama does not formally support structured output, so you have to work around it with a prompt template, and its success rate is lower than the others.

In that case, if you give validation feedback from the ILlmFunction.validate() function to the LLM agent, the LLM agent will be able to understand the restriction information exactly and fill the arguments properly, as the sketch after the list below shows.

  • Restriction properties of JSON schema
    • string: minLength, maxLength, pattern, format, contentMediaType
    • number: minimum, maximum, exclusiveMinimum, exclusiveMaximum, multipleOf
    • array: minItems, maxItems, uniqueItems, items
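Here is a minimal sketch of that feedback loop, assuming the functions generated by typia.llm.applicationOfValidate<App, Model>() expose the validate() method mentioned above. The IMemberService interface is hypothetical:

import typia, { tags } from "typia";
 
// hypothetical service interface for illustration
interface IMemberService {
  join(props: { email: string & tags.Format<"email">; age: number }): void;
}
 
const app = typia.llm.applicationOfValidate<IMemberService, "chatgpt">();
const join = app.functions.find((f) => f.name === "join")!;
 
// suppose the LLM composed wrongly typed arguments (age as a string)
const result = join.validate({ props: { email: "john@doe.com", age: "25" } });
if (result.success === false)
  // deliver result.errors back to the LLM as a system message,
  // so that it emends the arguments at the next turn
  console.log(result.errors);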

Restrictions

typia.llm.parameters<Parameters, Model>() follows the same restrictions as the typia.llm.schema<T, Model>() function. Also, it has only one additional restriction: the keyworded argument.

In LLM function calling and structured output, the parameters must be a keyworded object type with static keys, without any dynamic keys. Also, the object type must not be nullable or optional.
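For example, here is a minimal sketch of a valid parameters type, a non-nullable object with only static keys:

import typia from "typia";
 
// a keyworded object type: static keys only, neither nullable nor optional
interface IProps {
  email: string;
  age: number;
}
typia.llm.parameters<IProps, "chatgpt">(); // OK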

If you don't follow the LLM's keyworded arguments rule, typia.llm.parameters<Parameters, Model>() will throw a compilation error like below.

src/examples/llm.parameters.violation.ts
import typia from "typia";
 
typia.llm.parameters<string, "chatgpt">(); // error: not an object type
typia.llm.parameters<Record<string, boolean>, "chatgpt">(); // error: dynamic keys are not allowed
typia.llm.parameters<Array<number>, "chatgpt">(); // error: array is not an object type