Tool Calling (Function Calling)
A capability that lets AI models request the execution of external functions — searching databases, calling APIs, or performing calculations — extending their abilities beyond text generation.
Tool calling (also called function calling) is a capability that lets AI models interact with external systems by requesting the execution of specific functions. Instead of just generating text, the model can search databases, call APIs, perform calculations, send emails, or interact with any software you connect.
Why tool calling matters
LLMs are fundamentally text generators. They cannot check your calendar, query your database, or send a Slack message on their own. Tool calling bridges this gap by letting the model recognise when it needs external capabilities and request them through a structured interface.
How tool calling works
- Define tools: You provide the model with descriptions of available functions — their names, what they do, and what parameters they accept.
- Model decides: When processing a user request, the model determines if a tool is needed and which one.
- Structured request: The model outputs a structured tool call (function name + parameters) instead of a text response.
- Execution: Your application executes the function and returns the result to the model.
- Final response: The model incorporates the tool result into its response to the user.
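The first step, defining tools, usually means describing each function in a JSON-schema-like structure. A minimal sketch of what such a definition might look like (the exact shape varies by provider; the names here are illustrative):

```python
# Illustrative tool definition in the JSON-schema style used by
# several chat APIs. The model reads the name, description, and
# parameter schema to decide when and how to call this tool.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "City name, e.g. 'London'",
            },
        },
        "required": ["city"],
    },
}
```

The description fields do real work here: they are the only information the model has about what the tool does, so they should be written as carefully as documentation for a human reader.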
Example flow
User: "What is the weather in London?"
- Model recognises it needs the weather tool.
- Model outputs: `get_weather(city="London")`
- Your code calls a weather API and returns the result.
- Model responds: "It is currently 15 degrees C and partly cloudy in London."
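The application-side half of this flow can be sketched as follows. Assume the model has returned a structured tool call (the exact wire format differs between providers; this shape, and the `get_weather` stub, are illustrative):

```python
import json

def get_weather(city: str) -> dict:
    # Stand-in for a real weather API call.
    return {"city": city, "temp_c": 15, "conditions": "partly cloudy"}

# Suppose the model emitted this structured tool call instead of text.
# Arguments often arrive as a JSON string that must be parsed.
tool_call = {"name": "get_weather", "arguments": json.dumps({"city": "London"})}

# Look up the requested function and execute it with the parsed arguments.
TOOLS = {"get_weather": get_weather}
args = json.loads(tool_call["arguments"])
result = TOOLS[tool_call["name"]](**args)

# `result` would now be sent back to the model, which phrases the
# final natural-language answer for the user.
```

Note that the model never executes anything itself: it only emits the request, and your code remains in full control of what actually runs.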
Common tool types
- Data retrieval: Database queries, search engines, knowledge base lookups.
- Computation: Calculators, data analysis, code execution.
- Communication: Sending emails, Slack messages, notifications.
- System interaction: File operations, API calls, browser control.
- Information verification: Fact-checking, source validation.
Multi-tool and parallel tool calling
Advanced implementations support:
- Sequential tool use: The model calls one tool, uses the result to decide the next tool, and chains multiple calls.
- Parallel tool calling: The model requests multiple independent tool calls simultaneously.
- Nested tool use: One tool's result triggers the need for another tool.
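When the model requests several independent calls at once, the application can execute them concurrently rather than one by one. A minimal sketch using a thread pool (the `(name, kwargs)` call format is an assumption for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel_calls(calls, registry):
    """Execute independent tool calls concurrently.

    `calls` is a list of (tool_name, kwargs) pairs as requested by the
    model; `registry` maps tool names to Python functions. Results are
    returned in the same order as the requests.
    """
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(registry[name], **kwargs)
                   for name, kwargs in calls]
        return [f.result() for f in futures]

# Example: two independent calls dispatched together.
registry = {"add": lambda a, b: a + b, "negate": lambda x: -x}
calls = [("add", {"a": 2, "b": 3}), ("negate", {"x": 4})]
results = run_parallel_calls(calls, registry)
```

Threads are a reasonable default because most tool calls are I/O-bound (API requests, database queries); sequential and nested tool use, by contrast, must wait for each result before the model can decide the next step.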
Tool calling and AI agents
Tool calling is the foundation of AI agents. An agent is essentially an LLM with tool-calling capabilities running in a loop — observing the environment, deciding which tools to use, executing them, and continuing until the task is complete.
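That loop can be sketched in a few lines. Here `model` is a stand-in for a chat-API call and the message/reply shapes are assumptions for illustration, not any particular provider's format:

```python
def agent_loop(model, tools, task, max_steps=10):
    """Minimal agent loop: ask the model, execute any tool it requests,
    feed the result back, and stop when it gives a final text answer.

    `model(messages, tools)` is assumed to return a dict with either a
    `tool_call` ({"name": ..., "arguments": {...}}) or a final `content`.
    """
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = model(messages, tools)
        if reply.get("tool_call") is None:
            return reply["content"]  # no tool needed: task complete
        call = reply["tool_call"]
        result = tools[call["name"]](**call["arguments"])
        messages.append(
            {"role": "tool", "name": call["name"], "content": result}
        )
    return "Stopped after max_steps without a final answer."
```

The `max_steps` cap is the important detail: without it, a confused model can loop indefinitely, calling tools forever.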
Best practices
- Write clear, specific tool descriptions — the model uses these to decide when and how to call each tool.
- Validate tool call parameters before execution.
- Handle tool failures gracefully.
- Limit available tools to what is relevant to the current context.
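Parameter validation in particular is worth doing before execution, since models occasionally hallucinate argument names or omit required ones. A lightweight sketch (a stand-in for full JSON-schema validation; the schema shape mirrors the tool-definition style above and is illustrative):

```python
def validate_args(schema, args):
    """Check a tool call's arguments against the tool's parameter schema
    before executing it: all required keys present, no unexpected keys.
    A minimal stand-in for a full JSON-schema validator.
    """
    params = schema["parameters"]
    for key in params.get("required", []):
        if key not in args:
            raise ValueError(f"missing required parameter: {key}")
    for key in args:
        if key not in params["properties"]:
            raise ValueError(f"unexpected parameter: {key}")
    return True

schema = {
    "name": "get_weather",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
```

Rejecting a malformed call and returning the error message to the model often lets it correct itself on the next turn, which is a common pattern for handling tool failures gracefully.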
Why this matters
Tool calling transforms AI from a text generator into an action-taker. It is the core mechanism behind AI assistants that can actually do things — not just talk about them. Understanding tool calling is essential for building AI applications that integrate with your existing systems and automate real workflows.