toLangChainTools() function
export function toLangChainTools(props: {
controllers: Array<ILlmController | IHttpLlmController>;
prefix?: boolean | undefined;
}): DynamicStructuredTool[];

LangChain.js integration for typia.
toLangChainTools() converts TypeScript classes or OpenAPI documents into LangChain DynamicStructuredTool[] at once.
Every class method becomes a tool, JSDoc comments become tool descriptions, and TypeScript types become JSON schemas — all at compile time. For OpenAPI documents, every API endpoint is converted to a DynamicStructuredTool with schemas from the specification.
Validation feedback is embedded automatically.
Setup
npm install @typia/langchain @langchain/core
npm install typia
npx typia setup

From TypeScript Class
LangChain Agent
import { Runnable } from "@langchain/core/runnables";
import { ChainValues } from "@langchain/core/utils/types";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { toLangChainTools } from "@typia/langchain";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import typia from "typia";
import { Calculator } from "./Calculator";
const tools: DynamicStructuredTool[] = toLangChainTools({
controllers: [
typia.llm.controller<Calculator>("calculator", new Calculator()),
],
});
const agent: Runnable = createToolCallingAgent({
llm: new ChatOpenAI({ model: "gpt-4o" }),
tools,
prompt: ChatPromptTemplate.fromMessages([
["system", "You are a helpful assistant."],
["human", "{input}"],
["placeholder", "{agent_scratchpad}"],
]),
});
const executor: AgentExecutor = new AgentExecutor({ agent, tools });
const result: ChainValues = await executor.invoke({
input: "What is 10 + 5?",
});

Create controllers from TypeScript classes with typia.llm.controller<Class>(), and pass them to toLangChainTools().
- controllers: Array of controllers created via typia.llm.controller<Class>() or HttpLlm.controller()
- prefix: When true (default), tool names are formatted as {controllerName}_{methodName}. Set to false to use bare method names.
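The naming rule behind the prefix option can be sketched as a tiny helper. This is illustrative only; buildToolName is a hypothetical function, not part of @typia/langchain:

```typescript
// Illustrative sketch of the default tool-naming rule (not the library's actual code).
function buildToolName(
  controllerName: string,
  methodName: string,
  prefix: boolean = true,
): string {
  // With prefix (default): "{controllerName}_{methodName}"; without: bare method name.
  return prefix ? `${controllerName}_${methodName}` : methodName;
}

console.log(buildToolName("calculator", "add")); // "calculator_add"
console.log(buildToolName("calculator", "add", false)); // "add"
```

With the default prefix, two controllers can safely expose methods of the same name without tool-name collisions.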
From OpenAPI Document
import { DynamicStructuredTool } from "@langchain/core/tools";
import { toLangChainTools } from "@typia/langchain";
import { HttpLlm } from "@typia/utils";
const tools: DynamicStructuredTool[] = toLangChainTools({
controllers: [
HttpLlm.controller({
name: "shopping",
document: await fetch(
"https://shopping-be.wrtn.ai/editor/swagger.json",
).then((r) => r.json()),
connection: {
host: "https://shopping-be.wrtn.ai",
headers: { Authorization: "Bearer ********" },
},
}),
],
});

Create controllers from OpenAPI documents with HttpLlm.controller(), and pass them to toLangChainTools().
- name: Controller name used as prefix for tool names
- document: Swagger/OpenAPI document (v2.0, v3.0, or v3.1)
- connection: HTTP connection info including host and optional headers
Validation Feedback
Validation Test
import { DynamicStructuredTool } from "@langchain/core/tools";
import { TestValidator } from "@nestia/e2e";
import { ILlmController, IValidation } from "@typia/interface";
import { toLangChainTools } from "@typia/langchain";
import { stringifyValidationFailure } from "@typia/utils";
import typia from "typia";
import { Calculator } from "../structures/Calculator";
export const test_langchain_class_controller_validation =
async (): Promise<void> => {
// 1. Create class-based controller using typia.llm.controller
const controller: ILlmController<Calculator> =
typia.llm.controller<Calculator>("calculator", new Calculator());
// 2. Convert to LangChain tools
const tools: DynamicStructuredTool[] = toLangChainTools({
controllers: [controller],
});
// 3. Find add tool
const addTool = tools.find((t) => t.name === "calculator_add");
if (!addTool) {
throw new Error("Missing add tool");
}
// 4. Test with invalid arguments - string instead of number
// typia validates and returns stringifyValidationFailure format
const invalidResult = await addTool.invoke({ x: "not a number", y: 5 });
// Generate expected validation error message using typia
const expected: IValidation = typia.validate<Calculator.IProps>({
x: "not a number",
y: 5,
});
if (expected.success === true) {
throw new Error("Expected validation to fail, but it succeeded.");
}
const expectedMessage: string = stringifyValidationFailure(expected);
TestValidator.equals(
"Validation failure message should match",
invalidResult,
expectedMessage,
);
// 5. Test with valid arguments - should succeed
const validResult = await addTool.invoke({ x: 10, y: 5 });
TestValidator.equals("valid args should work", validResult, "15");
};

Under the hood, this validation feedback follows a find-validate-retry pattern:

import { ILlmApplication, ILlmFunction, IValidation } from "@samchon/openapi";
import { FunctionCall } from "pseudo";
export const correctFunctionCall = (props: {
functionCall: FunctionCall;
application: ILlmApplication<"chatgpt">;
retry: (reason: string, errors?: IValidation.IError[]) => Promise<unknown>;
}): Promise<unknown> => {
// FIND FUNCTION
const func: ILlmFunction<"chatgpt"> | undefined =
props.application.functions.find((f) => f.name === props.functionCall.name);
if (func === undefined) {
// has never happened in my experience
return props.retry(
"Unable to find the matched function name. Try it again.",
);
}
// VALIDATE
const result: IValidation<unknown> = func.validate(
props.functionCall.arguments,
);
if (result.success === false) {
// 1st trial: 30% (gpt-4o-mini in shopping mall chatbot)
// 2nd trial with validation feedback: 99%
// 3rd trial with validation feedback again: has never failed
return props.retry(
"Type errors are detected. Correct it through validation errors",
{
errors: result.errors,
},
);
}
return result.data;
};

When the LLM sends { x: "not a number", y: 5 }, the validation failure message from stringifyValidationFailure() is returned directly as the tool result string. The LLM reads this message and self-corrects. When valid arguments such as { x: 10, y: 5 } are provided, the result is "15".
In my experience, OpenAI's gpt-4o-mini makes type-level mistakes about 70% of the time on complex schemas (a shopping mall service). With validation feedback, the success rate jumps from 30% on the first attempt to 99% on the second; a third attempt has never failed.
Bypassing LangChain’s Built-in Validation
LangChain internally uses @cfworker/json-schema to validate tool arguments, which throws ToolInputParsingException before custom validation can run. @typia/langchain solves this by using a passthrough Zod schema (z.record(z.unknown())), allowing typia’s much more detailed and accurate validator to handle all argument validation instead.
The embedded typia.validate<T>() creates its validation logic by analyzing TypeScript source code and types at compile time, making it more accurate and detailed than any runtime validator.
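The bypass pattern can be sketched in isolation. The following is a simplified illustration, not the library's implementation: arguments pass LangChain's schema check untouched, and a precise validator (the hand-written validateAddProps here stands in for typia's generated one) decides whether to execute the tool or to return the failure as a string for the LLM to read:

```typescript
// Simplified sketch of the passthrough-validation pattern (not the library's code).
interface IError {
  path: string;
  expected: string;
  value: unknown;
}

// Stand-in for typia's generated validator of Calculator.IProps.
function validateAddProps(input: Record<string, unknown>): IError[] {
  const errors: IError[] = [];
  if (typeof input.x !== "number")
    errors.push({ path: "$input.x", expected: "number", value: input.x });
  if (typeof input.y !== "number")
    errors.push({ path: "$input.y", expected: "number", value: input.y });
  return errors;
}

// Tool body: return the failure as the result string instead of throwing,
// so the LLM can read it and self-correct on the next attempt.
function invokeAdd(input: Record<string, unknown>): string {
  const errors = validateAddProps(input);
  if (errors.length > 0) return JSON.stringify({ success: false, errors });
  return String((input.x as number) + (input.y as number));
}

console.log(invokeAdd({ x: "not a number", y: 5 })); // JSON failure report
console.log(invokeAdd({ x: 10, y: 5 })); // "15"
```

The key design choice is that validation failure is a normal tool result, not an exception, which is what keeps the feedback loop inside the agent's conversation.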
| Components | typia | TypeBox | ajv | io-ts | zod | C.V. |
|---|---|---|---|---|---|---|
| Easy to use | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (simple) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (hierarchical) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (recursive) | ✔ | ❌ | ✔ | ✔ | ✔ | ✔ |
| Object (union, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (union, explicit) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌ |
| Object (additional tags) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (template literal types) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌ |
| Object (dynamic properties) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌ |
| Array (rest tuple) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (hierarchical) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Array (recursive) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌ |
| Array (recursive, union) | ✔ | ✔ | ❌ | ✔ | ✔ | ❌ |
| Array (R+U, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated, union) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Ultimate Union Type | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
C.V. means class-validator
This validation feedback strategy also covers restriction properties:
- string: minLength, maxLength, pattern, format, contentMediaType
- number: minimum, maximum, exclusiveMinimum, exclusiveMaximum, multipleOf
- array: minItems, maxItems, uniqueItems, items
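To illustrate how such restriction failures surface, here is a hand-written stand-in for the checks typia would generate from tags like tags.MinLength<3> and tags.Minimum<0> (hypothetical helper names, not library code):

```typescript
// Hand-written stand-in for typia-generated restriction checks (illustrative only).
interface IConstraintError {
  path: string;
  expected: string;
  value: unknown;
}

function checkRestrictions(input: { name: string; age: number }): IConstraintError[] {
  const errors: IConstraintError[] = [];
  // string restriction: minLength 3 (as for `string & tags.MinLength<3>`)
  if (input.name.length < 3)
    errors.push({ path: "$input.name", expected: "string & MinLength<3>", value: input.name });
  // number restriction: minimum 0 (as for `number & tags.Minimum<0>`)
  if (input.age < 0)
    errors.push({ path: "$input.age", expected: "number & Minimum<0>", value: input.age });
  return errors;
}

console.log(checkRestrictions({ name: "Jo", age: -1 }).length); // 2
console.log(checkRestrictions({ name: "John", age: 25 }).length); // 0
```

Each error carries the failing path, the expected type expression, and the actual value, which is exactly the shape of feedback an LLM needs to correct a single field without regenerating the whole payload.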
Structured Output
Use typia.llm.parameters<T>() with LangChain’s withStructuredOutput():
import { ChatOpenAI } from "@langchain/openai";
import { dedent, stringifyValidationFailure } from "@typia/utils";
import typia, { tags } from "typia";
interface IMember {
email: string & tags.Format<"email">;
name: string;
age: number & tags.Minimum<0> & tags.Maximum<100>;
hobbies: string[];
joined_at: string & tags.Format<"date">;
}
const model = new ChatOpenAI({ model: "gpt-4o" })
.withStructuredOutput(typia.llm.parameters<IMember>());
const member: IMember = await model.invoke(dedent`
I am a new member of the community.
My name is John Doe, and I am 25 years old.
I like playing basketball and reading books,
and joined to this community at 2022-01-01.
`);
// Validate the result
const result = typia.validate<IMember>(member);
if (!result.success) {
console.error(stringifyValidationFailure(result));
}

Terminal

{
  email: 'john.doe@example.com',
  name: 'John Doe',
  age: 25,
  hobbies: [ 'playing basketball', 'reading books' ],
  joined_at: '2022-01-01'
}
The IMember interface is the single source of truth. typia.llm.parameters<IMember>() generates the JSON schema, and typia.validate<IMember>() validates the output — all from the same type. If validation fails, feed the error back to the LLM for correction.
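That correction loop can be sketched as follows. The model call here is mocked (a hypothetical mockInvoke returning a canned invalid-then-valid sequence); in practice it would be the withStructuredOutput() model from the snippet above, and validateMember stands in for typia.validate<IMember>():

```typescript
// Sketch of a validate-and-retry loop for structured output (model call mocked).
interface IValidationResult<T> {
  success: boolean;
  data?: T;
  errors?: Array<{ path: string; expected: string; value: unknown }>;
}

type Member = { email: string; age: number };

// Stand-in validator; real code would call typia.validate<IMember>().
function validateMember(value: Member): IValidationResult<Member> {
  if (value.age < 0)
    return {
      success: false,
      errors: [{ path: "$input.age", expected: "number & Minimum<0>", value: value.age }],
    };
  return { success: true, data: value };
}

// Mocked model: the first answer is invalid, the retry (with feedback) is valid.
const answers: Member[] = [
  { email: "john.doe@example.com", age: -1 },
  { email: "john.doe@example.com", age: 25 },
];
let callCount = 0;
async function mockInvoke(_prompt: string): Promise<Member> {
  return answers[callCount++];
}

async function invokeWithFeedback(prompt: string, maxRetry: number = 3): Promise<Member> {
  let current = prompt;
  for (let i = 0; i < maxRetry; ++i) {
    const candidate = await mockInvoke(current);
    const result = validateMember(candidate);
    if (result.success) return result.data!;
    // Feed the validation errors back so the model can correct the failing fields.
    current = `${prompt}\n\nFix these validation errors:\n${JSON.stringify(result.errors)}`;
  }
  throw new Error("Validation kept failing after retries.");
}
```

With this mock, invokeWithFeedback resolves on the second attempt: the first answer fails the Minimum<0> check, the error is appended to the prompt, and the corrected answer passes.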