Tool Use

Definition

Tool Use is the capability of an LLM to call external functions, APIs, or services as part of its reasoning process, extending beyond text generation to real-world action. When a model has access to tools, it can retrieve live data, write to databases, send notifications, execute code, or interact with any external system—turning a language model into an operational agent. Tool use is what bridges the gap between an LLM as a text generator and an LLM as an autonomous actor in a software system.

Engineering Context

Tool use (also called "function calling") is implemented via structured JSON schemas that define available tools. The model selects which tool to call and with what arguments; the application executes the call and returns results to the model. Reliable tool use requires strict schema design, input validation, idempotency for side-effecting tools, and structured error feedback so the model can recover from failures. In practice, tool schemas should be minimal and unambiguous—ambiguous schemas lead to incorrect argument generation. Always validate model-generated arguments before executing any tool with external side effects, and implement per-tool timeouts and error handling.
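The loop described above can be sketched in Python. This is a minimal illustration, not any particular vendor's API: the `get_weather` tool, its schema, and the helper functions are hypothetical, and the hand-rolled validator covers only required keys, basic types, and enums (a production system would use a full JSON Schema library and per-tool timeouts).

```python
# Hypothetical tool schema in the common "function calling" JSON style.
GET_WEATHER_SCHEMA = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}


def validate_args(schema: dict, args: dict) -> list[str]:
    """Check model-generated arguments against the tool's schema.

    Returns a list of human-readable errors; empty means valid.
    """
    errors = []
    params = schema["parameters"]
    type_map = {
        "string": str, "number": (int, float), "integer": int,
        "boolean": bool, "object": dict, "array": list,
    }
    for key in params.get("required", []):
        if key not in args:
            errors.append(f"missing required argument: {key}")
    for key, value in args.items():
        spec = params["properties"].get(key)
        if spec is None:
            errors.append(f"unexpected argument: {key}")
            continue
        expected = type_map.get(spec.get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{key}: expected {spec['type']}")
        if "enum" in spec and value not in spec["enum"]:
            errors.append(f"{key}: must be one of {spec['enum']}")
    return errors


def run_tool(schema: dict, args: dict, impl) -> dict:
    """Validate model-generated args, execute, and return a structured result.

    Structured error feedback (rather than an exception) is what lets the
    model observe the failure and retry with corrected arguments.
    """
    errors = validate_args(schema, args)
    if errors:
        return {"ok": False, "errors": errors}
    try:
        return {"ok": True, "result": impl(**args)}
    except Exception as exc:  # never let a tool crash the agent loop
        return {"ok": False, "errors": [str(exc)]}
```

A stub implementation shows both paths: valid arguments execute, while a malformed call (missing `city`, invalid `unit`) comes back as structured errors the model can read and correct.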
