đź“– Guide DocumentsLLM Function Callingapplication() functions

application() function

typia
export namespace llm {
  // LLM FUNCTION CALLING APPLICATION SCHEMA
  export function application<
    App extends Record<string, any>,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(
    options?: Partial<Pick<ILlmApplication.IOptions<Model>, "separate">>,
  ): ILlmApplication<Model>;
 
  // +VALIDATE FUNCTION EMBEDDED
  export function applicationOfValidate<
    App extends Record<string, any>,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(
    options?: Partial<Pick<ILlmApplicationOfValidate.IOptions<Model>, "separate">>,
  ): ILlmApplicationOfValidate<Model>;
 
  // STRUCTURED OUTPUT
  export function parameters<
    Parameters extends Record<string, any>,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(): ILlmSchema.ModelParameters[Model];
 
  // TYPE SCHEMA
  export function schema<
    T,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(
    ...$defs: Extract<
      ILlmSchema.ModelSchema[Model],
      { $ref: string }
    > extends never
      ? []
      : [Record<string, ILlmSchema.ModelSchema[Model]>]
  ): ILlmSchema.ModelSchema[Model];
}

LLM function calling application schema from a native TypeScript class or interface type.

typia.llm.application<App, Model>() is a function that composes an LLM (Large Language Model) function calling application schema from a native TypeScript class or interface type. The function returns an ILlmApplication instance, a data structure representing a collection of LLM function calling schemas.

If you deliver the LLM function schema instances registered in ILlmApplication.functions to an LLM provider like OpenAI ChatGPT, the LLM will select a proper function to call and compose its parameter values from the conversation with the user. This is “LLM function calling”.

You can specify the LLM provider model with the second Model template argument, because the detailed specification of the function schema differs by provider model. Here is the list of LLM schema definitions for each model; determine one of them after carefully reading its schema definition.

Once you’ve determined the model, let’s build an A.I. chatbot super easily with the typia.llm.application<App, Model>() function.
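
For example, here is a minimal sketch composing an application schema for the “chatgpt” model. The CalculatorService interface and the file name below are hypothetical, made up only for illustration:

example/src/llm.application.simple.ts
import { ILlmApplication } from "@samchon/openapi";
import typia from "typia";
 
// hypothetical service interface, used only for illustration
interface CalculatorService {
  /**
   * Add two numbers.
   *
   * @param props Properties of the plus function
   * @returns Sum of the two numbers
   */
  plus(props: { x: number; y: number }): number;
}
 
// compose the LLM function calling application schema
// for the "chatgpt" provider model
const app: ILlmApplication<"chatgpt"> = typia.llm.application<
  CalculatorService,
  "chatgpt"
>();
 
// one function schema per method of CalculatorService
console.log(app.functions.map((f) => f.name)); // ["plus"]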

LLM Function Calling and Structured Output

The LLM selects a proper function and fills its arguments.

Nowadays, most LLMs (Large Language Models) like OpenAI’s support the “function calling” feature. “LLM function calling” means that the LLM automatically selects a proper function and fills its parameter values from the conversation with the user (usually chat text).

Structured output is another LLM feature. “Structured output” means that the LLM automatically transforms its output conversation into a structured data format like JSON.
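
For instance, the typia.llm.parameters<Parameters, Model>() function from the signature above composes such a structured output schema. Here is a minimal sketch; the IMember type and the file name are hypothetical:

example/src/llm.parameters.simple.ts
import { ILlmSchema } from "@samchon/openapi";
import typia from "typia";
 
// hypothetical output type, used only for illustration
interface IMember {
  name: string;
  age: number;
}
 
// parameters schema, usable as a structured output (JSON) schema
const params: ILlmSchema.ModelParameters["chatgpt"] =
  typia.llm.parameters<{ member: IMember }, "chatgpt">();
console.log(params);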

applicationOfValidate()

typia
export namespace llm {
  // LLM FUNCTION CALLING APPLICATION SCHEMA
  export function application<
    App extends Record<string, any>,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(
    options?: Partial<Pick<ILlmApplication.IOptions<Model>, "separate">>,
  ): ILlmApplication<Model>;
 
  // +VALIDATE FUNCTION EMBEDDED
  export function applicationOfValidate<
    App extends Record<string, any>,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(
    options?: Partial<Pick<ILlmApplicationOfValidate.IOptions<Model>, "separate">>,
  ): ILlmApplicationOfValidate<Model>;
 
  // STRUCTURED OUTPUT
  export function parameters<
    Parameters extends Record<string, any>,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(): ILlmSchema.ModelParameters[Model];
 
  // TYPE SCHEMA
  export function schema<
    T,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(
    ...$defs: Extract<
      ILlmSchema.ModelSchema[Model],
      { $ref: string }
    > extends never
      ? []
      : [Record<string, ILlmSchema.ModelSchema[Model]>]
  ): ILlmSchema.ModelSchema[Model];
}

LLM function calling application schema with validators.

typia.llm.applicationOfValidate<App, Model>() is a function combining the typia.llm.application<App, Model>() and typia.validate<T>() functions. Every function schema instance registered in ILlmApplicationOfValidate.functions has its own parameter validator ILlmFunctionOfValidate.validate(), which reports detailed information about wrongly typed parameters.

This validator function is useful when implementing the actual LLM function calling feature, because the LLM sometimes makes a mistake and composes wrongly typed arguments. In that case, you can correct the arguments by delivering the return value of the ILlmFunctionOfValidate.validate() function to the LLM provider. The LLM provider will then correct the arguments in the next function calling.

Here is actual program code correcting OpenAI (ChatGPT) function calling.

Note that if you are developing an A.I. chatbot project, such a validation feedback strategy is essential for both the LLM function calling and structured output features. In my experiments, even when the LLM composes wrongly typed structured data, it has always been corrected by just one validation feedback step.

ChatGptFunctionCaller.ts
import OpenAI from "openai";
import typia, { ILlmFunctionOfValidate, IValidation } from "typia";
 
import { ILlmMessage } from "../structures/ILlmMessage";
 
export namespace ChatGptFunctionCaller {
  export interface IProps {
    function: ILlmFunctionOfValidate<"chatgpt">;
    messages: ILlmMessage[];
  }
 
  // try the function calling up to 3 times,
  // feeding validation errors back on each failure
  export const test = async (props: IProps): Promise<void> => {
    let result: IValidation<any> | undefined = undefined;
    for (let i: number = 0; i < 3; ++i) {
      if (result && result.success === true) break;
      result = await step(props, result);
    }
  };
 
  const step = async (
    props: IProps,
    previous?: IValidation.IFailure | undefined,
  ): Promise<IValidation<any>> => {
    const client: OpenAI = new OpenAI({
      apiKey: "YOUR-SECRET-KEY",
    });
    const completion: OpenAI.ChatCompletion =
      await client.chat.completions.create({
        model: "gpt-4o",
        // if the previous call failed validation, inject the type
        // errors as feedback so the model can correct its arguments
        messages: previous
          ? [
              ...props.messages.slice(0, -1),
              {
                role: "assistant",
                content: [
                  "You A.I. assistant has composed wrong typed arguments.",
                  "",
                  "Here is the detailed list of type errors. Review and correct them at the next function calling.",
                  "",
                  "```json",
                  JSON.stringify(previous.errors, null, 2),
                  "```",
                ].join("\n"),
              } satisfies OpenAI.ChatCompletionMessageParam,
              ...props.messages.slice(-1),
            ]
          : props.messages,
        tools: [
          {
            type: "function",
            function: {
              name: props.function.name,
              description: props.function.description,
              parameters: props.function.parameters as Record<string, any>,
            },
          },
        ],
        tool_choice: "required",
        parallel_tool_calls: false,
      });
 
    const toolCalls: OpenAI.ChatCompletionMessageToolCall[] = completion.choices
      .map((c) => c.message.tool_calls ?? [])
      .flat();
    if (toolCalls.length === 0)
      throw new Error("ChatGPT has not called any function.");
 
    // validate the LLM-composed arguments with the embedded validator
    const results: IValidation<any>[] = toolCalls.map((call) => {
      const { input } = typia.assert<{ input: any }>(
        JSON.parse(call.function.arguments),
      );
      return props.function.validate(input);
    });
    return results.find((r) => r.success === true) ?? results[0];
  };
}
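
Here is a sketch of wiring the caller above to typia.llm.applicationOfValidate(). The ShoppingService interface, the message contents, and the file name are hypothetical; the sketch also assumes that ILlmMessage is shaped like OpenAI’s chat messages ({ role, content }):

main.ts
import typia, { ILlmApplicationOfValidate } from "typia";
 
import { ChatGptFunctionCaller } from "./ChatGptFunctionCaller";
 
// hypothetical service interface, used only for illustration
interface ShoppingService {
  /**
   * Search products by a keyword.
   */
  search(props: { keyword: string; limit: number }): Promise<string[]>;
}
 
const main = async (): Promise<void> => {
  // application schema with embedded parameter validators
  const app: ILlmApplicationOfValidate<"chatgpt"> =
    typia.llm.applicationOfValidate<ShoppingService, "chatgpt">();
  await ChatGptFunctionCaller.test({
    function: app.functions[0],
    messages: [
      {
        role: "user",
        content: "Find 5 products related to TypeScript books.",
      },
    ],
  });
};
main().catch(console.error);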

Parameters’ Separation

Parameter values from both LLM and Human sides.

When composing parameter arguments through LLM (Large Language Model) function calling, there can be cases where some parameters (or nested properties) must be composed not by the LLM, but by a human. File uploading, or sensitive information like a secret key (password), are the representative examples.

In that case, you can configure the LLM function calling schemas to exclude such human-side parameters (or nested properties) with the ILlmApplication.options.separate property. In exchange, you have to merge the human-composed and LLM-composed parameters into one by calling HttpLlm.mergeParameters() before executing the LLM function call (a merge sketch follows the example below).

Here is the example separating the parameter schemas.

example/src/llm.application.separate.ts
import {
  ClaudeTypeChecker,
  ILlmApplication,
  ILlmSchema,
} from "@samchon/openapi";
import typia, { tags } from "typia";
 
// IBbsArticle structure of the example repository (path assumed)
import { IBbsArticle } from "../structures/IBbsArticle";
 
const app: ILlmApplication<"claude"> = typia.llm.application<
  BbsArticleController,
  "claude"
>({
  separate: (schema: ILlmSchema<"claude">) =>
    ClaudeTypeChecker.isString(schema) && schema.contentMediaType !== undefined,
});
 
console.log(app);
 
interface BbsArticleController {
  /**
   * Create a new article.
   *
   * Writes a new article and archives it into the DB.
   *
   * @param props Properties of create function
   * @returns Newly created article
   */
  create(props: {
    /**
     * Information of the article to create
     */
    input: IBbsArticle.ICreate;
  }): Promise<IBbsArticle>;
 
  /**
   * Update an article.
   *
   * Updates an article with new content.
   *
   * @param props Properties of update function
   * @param input New content to update
   */
  update(props: {
    /**
     * Target article's {@link IBbsArticle.id}.
     */
    id: string & tags.Format<"uuid">;
 
    /**
     * New content to update.
     */
    input: IBbsArticle.IUpdate;
  }): Promise<void>;
 
  /**
   * Erase an article.
   *
   * Erases an article from the DB.
   *
   * @param props Properties of erase function
   */
  erase(props: {
    /**
     * Target article's {@link IBbsArticle.id}.
     */
    id: string & tags.Format<"uuid">;
  }): Promise<void>;
}
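
After the LLM composes its side of the arguments, merge them with the human-composed values before executing the function. Here is a hedged sketch of that merge step: HttpLlm and ILlmFunction come from @samchon/openapi, while the file name and the wrapper function are made up for illustration, and the exact props shape of HttpLlm.mergeParameters() may vary by version:

example/src/llm.application.merge.ts
import { HttpLlm, ILlmFunction } from "@samchon/openapi";
 
// merge LLM-composed and human-composed parameters into one,
// filling the separated (human-side) properties before execution
export const merge = (props: {
  function: ILlmFunction<"claude">;
  llm: object | null; // arguments composed by the LLM
  human: object | null; // arguments composed by the human
}): object =>
  HttpLlm.mergeParameters({
    function: props.function,
    llm: props.llm,
    human: props.human,
  });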

Restrictions

typia.llm.application<App, Model>() follows the same restrictions as the functions below.

About the function parameters type, it follows the restrictions of both the typia.llm.parameters<Params, Model>() and typia.llm.schema<T, Model>() functions. Therefore, the parameters must be a keyworded object type with static keys, without any dynamic keys. Also, the object type must be neither nullable nor optional.

About the return value type, it follows the restriction of the typia.llm.schema<T, Model>() function. Note that if the return type is a union type with undefined, it causes a compilation error, because the OpenAPI (JSON schema) specification does not support undefinable union types.

src/examples/llm.application.violation.ts
import { ILlmApplication } from "@samchon/openapi";
import typia, { tags } from "typia";
 
// IBbsArticle structure of the example repository (path assumed)
import { IBbsArticle } from "../structures/IBbsArticle";
 
const app: ILlmApplication<"chatgpt"> = typia.llm.application<
  BbsArticleController,
  "chatgpt"
>();
 
console.log(app);
 
interface BbsArticleController {
  /**
   * Create a new article.
   *
   * Writes a new article and archives it into the DB.
   *
   * @param props Properties of create function
   * @returns Newly created article
   */
  create(props: {
    /**
     * Information of the article to create
     */
    input: IBbsArticle.ICreate;
  }): Promise<IBbsArticle | undefined>; // compilation error: undefinable union return type
 
  // compilation error: parameters must be a keyworded object type
  erase(id: string & tags.Format<"uuid">): Promise<void>;
}