
Tools

Tools are utilities designed to be called by a model: their inputs are meant to be generated by models, and their outputs are meant to be passed back to models. Tools are needed whenever you want a model to control parts of your code or call out to external APIs.

A tool consists of:

  1. The name of the tool.
  2. A description of what the tool does.
  3. A JSON schema defining the inputs to the tool.
  4. A function (and, optionally, an async variant of the function).
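For example, here is a minimal sketch of a tool defined with the @tool decorator from langchain_core, which derives the name, description, and JSON schema from the function signature and docstring:

from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# The decorator infers the pieces listed above:
print(multiply.name)         # -> "multiply"
print(multiply.description)  # -> "Multiply two integers."
print(multiply.args)         # -> JSON schema properties for a and b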

When a tool is bound to a model, its name, description, and JSON schema are provided as context to the model. Given a list of tools and a set of instructions, a model can request to call one or more tools with specific inputs. Typical usage may look like the following:

tools = [...]  # Define a list of tools
llm_with_tools = llm.bind_tools(tools)
ai_msg = llm_with_tools.invoke("do xyz...")
# -> AIMessage(tool_calls=[ToolCall(...), ...], ...)

The AIMessage returned from the model may have tool_calls associated with it. Read this guide for more information on what the response type may look like.
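To make this concrete, here is a hedged sketch of checking for tool calls. ChatOpenAI and the multiply tool from above are used purely for illustration; any tool-calling chat model works the same way:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools([multiply])

ai_msg = llm_with_tools.invoke("What is 3 multiplied by 12?")
if ai_msg.tool_calls:
    for tool_call in ai_msg.tool_calls:
        # Each tool call is a dict with "name", "args", and "id" keys
        print(tool_call["name"], tool_call["args"], tool_call["id"])
else:
    # The model may also answer directly without calling any tool
    print(ai_msg.content)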

Once the chosen tools are invoked, the results can be passed back to the model so that it can complete whatever task it's performing. There are generally two different ways to invoke the tool and pass back the response:

Invoke with just the arguments

When you invoke a tool with just the arguments, you will get back the raw tool output (usually a string). This generally looks like:

from langchain_core.messages import ToolMessage

# First check that the model returned tool calls (ai_msg.tool_calls may be empty)
tool_call = ai_msg.tool_calls[0]
# -> ToolCall(args={...}, id=..., ...)
tool_output = tool.invoke(tool_call["args"])
tool_message = ToolMessage(
    content=tool_output,
    tool_call_id=tool_call["id"],
    name=tool_call["name"],
)

Note that the content field will generally be passed back to the model. If you do not want the raw tool response to be passed to the model but still want to keep it around, you can transform the tool output into something suitable for the model and pass the raw output along as an artifact (read more about ToolMessage.artifact here).

... # Same code as above
response_for_llm = transform(tool_output)  # transform is your own helper
tool_message = ToolMessage(
    content=response_for_llm,
    tool_call_id=tool_call["id"],
    name=tool_call["name"],
    artifact=tool_output,
)

Invoke with ToolCall

The other way to invoke a tool is to call it with the full ToolCall that was generated by the model. When you do this, the tool will return a ToolMessage. The benefit is that you don't have to write the logic yourself to transform the tool output into a ToolMessage. This generally looks like:

tool_call = ai_msg.tool_calls[0]
# -> ToolCall(args={...}, id=..., ...)
tool_message = tool.invoke(tool_call)
# -> ToolMessage(
#        content="tool result foobar...",
#        tool_call_id=...,
#        name="tool_name",
#    )

If you are invoking the tool this way and want to include an artifact in the ToolMessage, you will need to have the tool return two things: the content for the model and the artifact to keep around. Read more about defining tools that return artifacts here.
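As a sketch of that pattern, a tool declared with response_format="content_and_artifact" returns a (content, artifact) tuple; the tool and values below are illustrative:

import random

from langchain_core.tools import tool

@tool(response_format="content_and_artifact")
def generate_random_ints(low: int, high: int, size: int) -> tuple[str, list[int]]:
    """Generate size random ints in the range [low, high]."""
    array = [random.randint(low, high) for _ in range(size)]
    return f"Generated {size} random ints in [{low}, {high}].", array

# Invoking with a full ToolCall populates both fields of the ToolMessage:
tool_call = {
    "name": "generate_random_ints",
    "args": {"low": 0, "high": 9, "size": 5},
    "id": "123",
    "type": "tool_call",
}
tool_message = generate_random_ints.invoke(tool_call)
# tool_message.content  -> "Generated 5 random ints in [0, 9]."
# tool_message.artifact -> e.g. [4, 8, 1, 5, 2]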

Best practices

When designing tools to be used by a model, it is important to keep in mind that:

  • Chat models with explicit tool-calling APIs will be better at tool calling than models that have not been fine-tuned for it.
  • Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas. This is another form of prompt engineering (see the sketch after this list).
  • Simple, narrowly scoped tools are easier for models to use than complex tools.
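As one illustration of well-described inputs, argument descriptions can be made explicit with a Pydantic args_schema; the tool name and fields here are hypothetical:

from langchain_core.tools import tool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    """Inputs for the web_search tool."""

    query: str = Field(description="A search query, phrased as keywords.")
    max_results: int = Field(default=5, description="Maximum number of results to return.")

@tool("web_search", args_schema=SearchInput)
def web_search(query: str, max_results: int = 5) -> str:
    """Search the web and return a short summary of the top results."""
    # Illustrative stub; a real tool would call a search API here
    return f"Top {max_results} results for {query!r}..."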

For specifics on how to use tools, see the tools how-to guides.

To use a pre-built tool, see the tool integration docs.

