Function application

  • You must configure the generic argument App.

    TypeScript functions to LLM function calling application.

    Creates an LLM (Large Language Model) function calling application from a TypeScript class or interface type containing the target functions to be called by the LLM function calling feature.

    If you provide the returned ILlmApplication.functions objects to an LLM provider such as OpenAI (ChatGPT), the LLM will automatically select the proper function and fill in its arguments from the conversation (chat text) with the user (human). This is the concept of LLM function calling.

    However, some parameters (or their nested properties) must be composed by a human, not by the LLM; file uploads and sensitive information such as secret keys (passwords) are typical examples. In that case, you can separate the function parameters into LLM and human sides by configuring the ILlmApplication.IOptions.separate property. The separated parameters are assigned to the ILlmFunction.separated property.

    For reference, the actual function call is executed not by the LLM, but by you. When the LLM selects the proper function and fills its arguments, you call the function with the LLM-prepared arguments and then inform the LLM of the return value through a system prompt. The LLM continues the conversation based on that return value.

    Additionally, if you've configured ILlmApplication.IOptions.separate so that the parameters are separated into human and LLM sides, you can merge the human and LLM sides' parameters into one through HttpLlm.mergeParameters before the actual LLM function call.

    Parameters

    • Optional options: IOptions<ILlmSchema>

      Options for the LLM application construction

    Returns never

    Application of LLM function calling schemas

  • TypeScript functions to LLM function calling application.

    Creates an LLM (Large Language Model) function calling application from a TypeScript class or interface type containing the target functions to be called by the LLM function calling feature.

    If you provide the returned ILlmApplication.functions objects to an LLM provider such as OpenAI (ChatGPT), the LLM will automatically select the proper function and fill in its arguments from the conversation (chat text) with the user (human). This is the concept of LLM function calling.
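    The flow above can be sketched as follows. This is a minimal, self-contained illustration of the concept only; the FunctionSchema interface and the shape of the application object below are simplified stand-ins, not the real ILlmApplication types.

```typescript
// A hypothetical, simplified function schema (stand-in for ILlmFunction).
interface FunctionSchema {
  name: string;
  description: string;
  parameters: Record<string, string>; // parameter name -> type name
}

// The "application" is the collection of callable function schemas
// that gets forwarded to the LLM provider.
const application: { functions: FunctionSchema[] } = {
  functions: [
    {
      name: "getWeather",
      description: "Look up the current weather for a city.",
      parameters: { city: "string" },
    },
    {
      name: "sendEmail",
      description: "Send an e-mail to a recipient.",
      parameters: { to: "string", body: "string" },
    },
  ],
};

// From the schemas plus the chat history, the LLM answers with a chosen
// function name and filled arguments, e.g.:
const llmChoice = {
  name: "getWeather",
  arguments: { city: "Seoul" },
};

// On your side, you locate the matching schema before executing anything.
const selected = application.functions.find((f) => f.name === llmChoice.name);
```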

    However, some parameters (or their nested properties) must be composed by a human, not by the LLM; file uploads and sensitive information such as secret keys (passwords) are typical examples. In that case, you can separate the function parameters into LLM and human sides by configuring the ILlmApplication.IOptions.separate property. The separated parameters are assigned to the ILlmFunction.separated property.
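    The separation idea can be sketched by hand. The separateParameters helper and the { llm, human } result shape below are illustrative assumptions, not the real separated-schema types, which operate on schemas recursively rather than on plain objects.

```typescript
// Hypothetical sketch: split one parameters object into an LLM-side part
// and a human-side part, based on a caller-supplied predicate.
function separateParameters(
  params: Record<string, unknown>,
  isHumanSide: (key: string) => boolean,
): { llm: Record<string, unknown>; human: Record<string, unknown> } {
  const llm: Record<string, unknown> = {};
  const human: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(params))
    (isHumanSide(key) ? human : llm)[key] = value;
  return { llm, human };
}

// Secret keys (and file uploads) stay on the human side; everything else
// may be filled in by the LLM.
const { llm, human } = separateParameters(
  { title: "report", secretKey: "abcd-1234" },
  (key) => key === "secretKey",
);
```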

    For reference, the actual function call is executed not by the LLM, but by you. When the LLM selects the proper function and fills its arguments, you call the function with the LLM-prepared arguments and then inform the LLM of the return value through a system prompt. The LLM continues the conversation based on that return value.
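    A sketch of this execution loop, under the assumption that you keep a table of your own implementations keyed by function name (the names here are hypothetical):

```typescript
// Your actual implementations, keyed by the schema's function name.
const implementations: Record<string, (args: any) => unknown> = {
  getWeather: (args: { city: string }) => `${args.city}: 23°C, sunny`,
};

// The LLM only *selects* a function and composes its arguments;
// this code performs the call.
function executeSelected(choice: { name: string; arguments: any }): unknown {
  const fn = implementations[choice.name];
  if (fn === undefined) throw new Error(`No implementation for ${choice.name}`);
  // The return value would next be reported back to the LLM (e.g. as a
  // system or tool message) so the conversation can continue.
  return fn(choice.arguments);
}

const result = executeSelected({
  name: "getWeather",
  arguments: { city: "Seoul" },
});
```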

    Additionally, if you've configured ILlmApplication.IOptions.separate so that the parameters are separated into human and LLM sides, you can merge the human and LLM sides' parameters into one through HttpLlm.mergeParameters before the actual LLM function call.
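    The merge step can be conveyed with a hand-rolled sketch. HttpLlm.mergeParameters works recursively over the real schema types; the shallow merge below is only an assumption-laden illustration of the idea.

```typescript
// Hypothetical shallow merge of the two separated sides back into the
// single arguments object used for the actual call.
function mergeParameters(
  llm: Record<string, unknown>,
  human: Record<string, unknown>,
): Record<string, unknown> {
  // Human-side values win on conflicts, since they were deliberately
  // withheld from the LLM.
  return { ...llm, ...human };
}

const merged = mergeParameters(
  { title: "report" },          // filled by the LLM
  { secretKey: "abcd-1234" },   // supplied by the human
);
```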

    Type Parameters

    • App extends object

      Target class or interface type collecting the functions to call

    Parameters

    • Optional options: IOptions<ILlmSchema>

      Options for the LLM application construction

    Returns ILlmApplication

    Application of LLM function calling schemas