đź“– Guide DocumentsLLM Function Callingapplication() functions

application() function

typia
export namespace llm {
  // LLM FUNCTION CALLING APPLICATION SCHEMA
  export function application<
    App extends Record<string, any>,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(
    options?: Partial<Pick<ILlmApplication.IOptions<Model>, "separate">>,
  ): ILlmApplication<Model>;
 
  // STRUCTURED OUTPUT
  export function parameters<
    Parameters extends Record<string, any>,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(): ILlmSchema.ModelParameters[Model];
 
  // TYPE SCHEMA
  export function schema<
    T,
    Model extends ILlmSchema.Model,
    Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
  >(
    ...$defs: Extract<
      ILlmSchema.ModelSchema[Model],
      { $ref: string }
    > extends never
      ? []
      : [Record<string, ILlmSchema.ModelSchema[Model]>]
  ): ILlmSchema.ModelSchema[Model];
}

LLM function calling application schema from a native TypeScript class or interface type.

typia.llm.application<App, Model>() is a function that composes an LLM (Large Language Model) function calling application schema from a native TypeScript class or interface type. The function returns an ILlmApplication instance, a data structure representing a collection of LLM function calling schemas.

If you hand the LLM function schemas registered in ILlmApplication.functions to an LLM provider like OpenAI ChatGPT, the LLM will select the proper function to call and compose its parameter values from the conversation with the user. This is “LLM Function Calling”.

You can specify the LLM provider model with the second Model template argument, because the detailed specification of the function schema differs by provider model (e.g., "chatgpt" for IChatGptSchema, "claude" for IClaudeSchema, "gemini" for IGeminiSchema, "llama" for ILlamaSchema). Read the LLM schema definition of each model carefully and determine which one to use.

Once you’ve determined the model, let’s make an A.I. chatbot super easily with the typia.llm.application<App, Model>() function.
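For example, the same class type can be compiled into schemas for several provider models just by switching the Model argument. A minimal sketch, reusing the BbsArticleService example class from below:

import { ILlmApplication } from "@samchon/openapi";
import typia from "typia";
 
import { BbsArticleService } from "./BbsArticleService";
 
// only the Model template argument changes; each call emits
// the provider-specific function calling schemas at compile time
const chatgpt: ILlmApplication<"chatgpt"> =
  typia.llm.application<BbsArticleService, "chatgpt">();
const claude: ILlmApplication<"claude"> =
  typia.llm.application<BbsArticleService, "claude">();
const gemini: ILlmApplication<"gemini"> =
  typia.llm.application<BbsArticleService, "gemini">();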

LLM Function Calling and Structured Output

The LLM selects a proper function and fills its arguments.

Nowadays, most LLM (Large Language Model) providers like OpenAI support a “function calling” feature. “LLM function calling” means that the LLM automatically selects a proper function and fills its parameter values from the conversation with the user (usually chat text).

“Structured output” is another LLM feature: the LLM automatically transforms its conversational output into a structured data format like JSON.
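As a quick sketch of the structured output side, the sibling typia.llm.parameters<T, Model>() function (declared above) generates the schema you hand to the provider. The OpenAI response_format wiring below is an assumption about the SDK shape, not part of typia:

import OpenAI from "openai";
import typia from "typia";
 
interface IMember {
  name: string;
  age: number;
}
 
const main = async (): Promise<void> => {
  const client: OpenAI = new OpenAI();
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "I am John, 25 years old." }],
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "member",
        // parameters schema generated at compile time by typia
        schema: typia.llm.parameters<IMember, "chatgpt">() as any,
      },
    },
  });
  console.log(completion.choices[0].message.content); // JSON text matching IMember
};
main().catch(console.error);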

example/src/llm.application.ts
import { ILlmApplication } from "@samchon/openapi";
import typia from "typia";
 
import { BbsArticleService } from "./BbsArticleService";
 
const app: ILlmApplication<"chatgpt"> = typia.llm.application<
  BbsArticleService,
  "chatgpt"
>();
console.log(app);

đź’» Playground Link

Validation Feedback

validation-feedback-concept.ts
import { ILlmApplication, ILlmFunction, IValidation } from "@samchon/openapi";
import { FunctionCall } from "pseudo";
 
export const correctFunctionCall = (props: {
  functionCall: FunctionCall;
  application: ILlmApplication<"chatgpt">;
  retry: (reason: string, errors?: IValidation.IError[]) => Promise<unknown>;
}): Promise<unknown> => {
  // FIND FUNCTION
  const func: ILlmFunction<"chatgpt"> | undefined =
    props.application.functions.find((f) => f.name === props.functionCall.name);
  if (func === undefined) {
    // never happened in my experience
    return props.retry(
      "Unable to find the matched function name. Try it again.",
    );
  }
 
  // VALIDATE
  const result: IValidation<unknown> = func.validate(
    props.functionCall.arguments,
  );
  if (result.success === false) {
    // 1st trial: 30% (gpt-4o-mini in shopping mall chatbot)
    // 2nd trial with validation feedback: 99%
    // 3rd trial with validation feedback again: never failed
    return props.retry(
      "Type errors are detected. Correct them through the validation errors.",
      result.errors,
    );
  }
  return result.data;
};
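For reference, each IValidation.IError reports the access path, the expected type expression, and the actual value; this is the information delivered back to the LLM. A small self-contained sketch of what that feedback looks like (the argument values and the printed errors are illustrative):

import { ILlmFunction, IValidation } from "@samchon/openapi";
 
declare const func: ILlmFunction<"chatgpt">; // e.g. found in app.functions
 
const result: IValidation<unknown> = func.validate({
  input: {
    title: "Hello",
    tags: "greeting", // wrong: suppose the schema expects Array<string>
  },
});
if (result.success === false) console.log(result.errors);
// roughly:
// [
//   {
//     path: "$input.input.tags",
//     expected: "Array<string>",
//     value: "greeting",
//   },
// ]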

Is LLM Function Calling perfect? No, absolutely not.

LLM (Large Language Model) service vendors like OpenAI make a lot of type-level mistakes when composing the arguments for function calling or structured output. Even when the target schema is as simple as an Array<string> type, the LLM often fills it with just a string value.

In my experience, OpenAI gpt-4o-mini (8b parameters) makes type-level mistakes about 70% of the time when filling the arguments of function calling for a shopping mall service. To overcome this imperfection of LLM function calling, the typia.llm.application<App, Model>() function embeds the typia.validate<T>() function for the validation feedback strategy.

The key concept of the validation feedback strategy is this: let the LLM construct invalidly typed arguments first, then inform it of the detailed type errors, so that it corrects the wrongly typed arguments on the next turn. In this way, I could raise the success rate of function calling from 30% to 99% with just one step of validation feedback. Even when the LLM still produces a type error, it has always been caught on the next turn.

For reference, the embedded typia.validate<T>() function creates its validation logic by analyzing TypeScript source code and types at compile time. Therefore, it is more accurate and detailed than any other validator library. This is exactly what function calling needs, and I can confidently say that typia is the best library for LLM function calling.

| Components                      | typia | TypeBox | ajv | io-ts | zod | C.V. |
|---------------------------------|-------|---------|-----|-------|-----|------|
| Easy to use                     | ✅    | ❌      | ❌  | ❌    | ❌  | ❌   |
| Object (simple)                 | ✔     | ✔       | ✔   | ✔     | ✔   | ✔    |
| Object (hierarchical)           | ✔     | ✔       | ✔   | ✔     | ✔   | ✔    |
| Object (recursive)              | ✔     | ❌      | ✔   | ✔     | ✔   | ✔    |
| Object (union, implicit)        | ✅    | ❌      | ❌  | ❌    | ❌  | ❌   |
| Object (union, explicit)        | ✔     | ✔       | ✔   | ✔     | ✔   | ❌   |
| Object (additional tags)        | ✔     | ✔       | ✔   | ✔     | ✔   | ✔    |
| Object (template literal types) | ✔     | ✔       | ✔   | ❌    | ❌  | ❌   |
| Object (dynamic properties)     | ✔     | ✔       | ✔   | ❌    | ❌  | ❌   |
| Array (rest tuple)              | ✅    | ❌      | ❌  | ❌    | ❌  | ❌   |
| Array (hierarchical)            | ✔     | ✔       | ✔   | ✔     | ✔   | ✔    |
| Array (recursive)               | ✔     | ✔       | ✔   | ✔     | ✔   | ❌   |
| Array (recursive, union)        | ✔     | ✔       | ❌  | ✔     | ✔   | ❌   |
| Array (R+U, implicit)           | ✅    | ❌      | ❌  | ❌    | ❌  | ❌   |
| Array (repeated)                | ✅    | ❌      | ❌  | ❌    | ❌  | ❌   |
| Array (repeated, union)         | ✅    | ❌      | ❌  | ❌    | ❌  | ❌   |
| Ultimate Union Type             | ✅    | ❌      | ❌  | ❌    | ❌  | ❌   |

C.V. means class-validator

Additionally, this validation feedback strategy is useful for LLM providers that do not support the restriction properties of JSON schema, like OpenAI (IChatGptSchema) and Gemini (IGeminiSchema). For example, OpenAI and Gemini do not support the format property of JSON schema, so they cannot understand a UUID-like type. Although the typia.llm.application<App, Model>() function writes the restriction information into the description property of the JSON schema, the LLM provider does not reflect it perfectly.

Also, LLM providers that have not specified a JSON schema version, like Claude (IClaudeSchema) and Llama (ILlamaSchema), tend to fail many function calls involving the restriction properties. In fact, Llama does not formally support function calling, so you have to work around it with a prompt template, and its success rate is lower than the others.

In that case, if you give validation feedback from the ILlmFunction.validate() function to the LLM agent, the agent will be able to understand the restriction information exactly and fill the arguments properly (see the sketch after the list below).

  • Restriction properties of JSON schema
    • string: minLength, maxLength, pattern, format, contentMediaType
    • number: minimum, maximum, exclusiveMinimum, exclusiveMaximum, multipleOf
    • array: minItems, maxItems, uniqueItems, items
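In typia, these restriction properties come from type tags on the parameter types. A minimal sketch (the interface below is illustrative) showing restrictions that typia.validate<T>() and ILlmFunction.validate() check at runtime, even when the target model's schema spec cannot express them:

import typia, { tags } from "typia";
 
// illustrative parameters type carrying restriction properties
interface IArticleQuery {
  id: string & tags.Format<"uuid">;
  title: string & tags.MinLength<3> & tags.MaxLength<100>;
  score: number & tags.Minimum<0> & tags.Maximum<100>;
  keywords: Array<string> & tags.MinItems<1> & tags.UniqueItems;
}
 
// every violated restriction is reported as a detailed type error
const result = typia.validate<IArticleQuery>({
  id: "not-a-uuid",
  title: "Hi",
  score: 150,
  keywords: [],
});
console.log(result.success); // false, with per-property error records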

Parameters’ Separation

Parameter values from both LLM and Human sides.

When composing parameter arguments through LLM (Large Language Model) function calling, there can be cases where some parameters (or nested properties) must be composed not by the LLM, but by a human. File uploading, or sensitive information like a secret key (password), are representative examples.

In that case, you can configure the LLM function calling schemas to exclude such human-side parameters (or nested properties) with the ILlmApplication.options.separate property. You then have to merge the human-composed and LLM-composed parameters back into one by calling HttpLlm.mergeParameters() before executing the LLM function call.

Here is an example separating the parameter schemas.

example/src/llm.application.separate.ts
import { ClaudeTypeChecker, IClaudeSchema, ILlmApplication } from "@samchon/openapi";
import typia from "typia";
 
import { BbsArticleService } from "./BbsArticleService";
 
const app: ILlmApplication<"claude"> = typia.llm.application<
  BbsArticleService,
  "claude"
>({
  separate: (schema: IClaudeSchema) =>
    ClaudeTypeChecker.isString(schema) && schema.contentMediaType !== undefined,
});
console.log(app);

đź’» Playground Link

Restrictions

typia.llm.application<App, Model>() follows the same restrictions as the functions below.

For the function parameters type, it follows the restrictions of both the typia.llm.parameters<Parameters, Model>() and typia.llm.schema<T, Model>() functions. Therefore, the parameters must be a keyworded object type with only static keys, without any dynamic keys. Also, the object type must be neither nullable nor optional.

For the return value type, it follows the restrictions of the typia.llm.schema<T, Model>() function. Note that if the return type is a union type with undefined, a compilation error occurs, because the OpenAPI (JSON schema) specification does not support an undefinable union type.

example/src/llm.application.violation.ts
import { ILlmApplication } from "@samchon/openapi";
import typia, { tags } from "typia";
 
// `IBbsArticle` is assumed to be defined alongside BbsArticleService
import { IBbsArticle } from "./IBbsArticle";
 
const app: ILlmApplication<"chatgpt"> = typia.llm.application<
  BbsArticleController,
  "chatgpt"
>();
 
console.log(app);
 
interface BbsArticleController {
  /**
   * Create a new article.
   *
   * Writes a new article and archives it into the DB.
   *
   * @param props Properties of create function
   * @returns Newly created article
   */
  create(props: {
    /**
     * Information of the article to create
     */
    input: IBbsArticle.ICreate;
  }): Promise<IBbsArticle | undefined>; // compilation error: undefinable union return type
 
  erase(id: string & tags.Format<"uuid">): Promise<void>;
}