
# ChatAnthropic

This will help you get started with ChatAnthropic chat models. For detailed documentation of all ChatAnthropic features and configurations, head to the [API reference](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html).

## Overview

### Integration details

| Class | Package | Local | Serializable | PY support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatAnthropic | `@langchain/anthropic` | ❌ | ✅ | ✅ | NPM - Downloads | NPM - Version |

### Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ |

## Setup

You’ll need to sign up and obtain an Anthropic API key, and install the `@langchain/anthropic` integration package.

### Credentials

Head to Anthropic’s website to sign up for Anthropic and generate an API key. Once you’ve done this, set the `ANTHROPIC_API_KEY` environment variable:

```bash
export ANTHROPIC_API_KEY="your-api-key"
```


If you want automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting the lines below:

```bash
# export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_API_KEY="your-api-key"
```

### Installation

The LangChain ChatAnthropic integration lives in the `@langchain/anthropic` package:

```bash npm2yarn
npm i @langchain/anthropic
```

## Instantiation

Now we can instantiate our model object and generate chat completions:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-haiku-20240307",
  temperature: 0,
  maxTokens: undefined,
  maxRetries: 2,
  // other params...
});
```

## Invocation

```typescript
const aiMsg = await llm.invoke([
  [
    "system",
    "You are a helpful assistant that translates English to French. Translate the user sentence.",
  ],
  ["human", "I love programming."],
]);
aiMsg;
```

```text
AIMessage {
  "id": "msg_01M9yt3aSqKJKM1RnZF4f44Q",
  "content": "Voici la traduction en français :\n\nJ'adore la programmation.",
  "additional_kwargs": {
    "id": "msg_01M9yt3aSqKJKM1RnZF4f44Q",
    "type": "message",
    "role": "assistant",
    "model": "claude-3-haiku-20240307",
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 29,
      "output_tokens": 20
    }
  },
  "response_metadata": {
    "id": "msg_01M9yt3aSqKJKM1RnZF4f44Q",
    "model": "claude-3-haiku-20240307",
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 29,
      "output_tokens": 20
    },
    "type": "message",
    "role": "assistant"
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 29,
    "output_tokens": 20,
    "total_tokens": 49
  }
}
```

```typescript
console.log(aiMsg.content);
```

```text
Voici la traduction en français :

J'adore la programmation.
```

## Chaining

We can chain our model with a prompt template like so:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant that translates {input_language} to {output_language}.",
  ],
  ["human", "{input}"],
]);

const chain = prompt.pipe(llm);
await chain.invoke({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});
```

```text
AIMessage {
  "id": "msg_012gUKUG65teaois31W3bfGF",
  "content": "Ich liebe das Programmieren.",
  "additional_kwargs": {
    "id": "msg_012gUKUG65teaois31W3bfGF",
    "type": "message",
    "role": "assistant",
    "model": "claude-3-haiku-20240307",
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 23,
      "output_tokens": 11
    }
  },
  "response_metadata": {
    "id": "msg_012gUKUG65teaois31W3bfGF",
    "model": "claude-3-haiku-20240307",
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 23,
      "output_tokens": 11
    },
    "type": "message",
    "role": "assistant"
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 23,
    "output_tokens": 11,
    "total_tokens": 34
  }
}
```

## Multimodal inputs

Claude-3 models support image inputs. The image must be passed as base64-encoded data with the filetype as a prefix (e.g. `data:image/png;base64,{YOUR_BASE64_ENCODED_DATA}`). Here’s an example:

```typescript
import * as fs from "node:fs/promises";

import { ChatAnthropic } from "@langchain/anthropic";
import { HumanMessage } from "@langchain/core/messages";

const imageData2 = await fs.readFile("../../../../../examples/hotdog.jpg");
const llm2 = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});
const message2 = new HumanMessage({
  content: [
    {
      type: "text",
      text: "What's in this image?",
    },
    {
      type: "image_url",
      image_url: {
        url: `data:image/jpeg;base64,${imageData2.toString("base64")}`,
      },
    },
  ],
});

await llm2.invoke([message2]);
```

```text
AIMessage {
  "id": "msg_01AuGpm6xbacTwoUFdNiCnzu",
  "content": "The image shows a hot dog. It consists of a cylindrical bread roll or bun that has been sliced lengthwise, revealing the bright red hot dog sausage filling inside. The hot dog sausage appears to be made from seasoned and smoked meat. This classic fast food item is a popular snack or meal, commonly enjoyed at sporting events, cookouts, and casual eateries.",
  "additional_kwargs": {
    "id": "msg_01AuGpm6xbacTwoUFdNiCnzu",
    "type": "message",
    "role": "assistant",
    "model": "claude-3-sonnet-20240229",
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 276,
      "output_tokens": 88
    }
  },
  "response_metadata": {
    "id": "msg_01AuGpm6xbacTwoUFdNiCnzu",
    "model": "claude-3-sonnet-20240229",
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 276,
      "output_tokens": 88
    },
    "type": "message",
    "role": "assistant"
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 276,
    "output_tokens": 88,
    "total_tokens": 364
  }
}
```

See the official docs for a complete list of supported file types.
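As a quick illustration of the data URL format described above, here is a small standalone helper. Note that `toDataUrl` is a hypothetical name for this sketch, not a LangChain export:

```typescript
// Build a base64 data URL of the form the model expects,
// e.g. "data:image/jpeg;base64,<encoded bytes>".
// `toDataUrl` is our own helper name, not part of @langchain/anthropic.
function toDataUrl(data: Buffer, mimeType: string): string {
  return `data:${mimeType};base64,${data.toString("base64")}`;
}

// "hello" encodes to "aGVsbG8=", so this yields
// "data:image/png;base64,aGVsbG8="
const url = toDataUrl(Buffer.from("hello"), "image/png");
```

In the example above, the same expression appears inline as the `url` field of the `image_url` content block.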

## Agents

Anthropic models that support tool calling can be used in the Tool Calling agent. Here’s an example:

```typescript
import { z } from "zod";

import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";

import { ChatPromptTemplate } from "@langchain/core/prompts";

const llm3 = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});

// Prompt template must have "input" and "agent_scratchpad" input variables
const prompt3 = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const currentWeatherTool3 = tool(async () => "28 °C", {
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  schema: z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  }),
});

const agent3 = createToolCallingAgent({
  llm: llm3,
  tools: [currentWeatherTool3],
  prompt: prompt3,
});

const agentExecutor3 = new AgentExecutor({
  agent: agent3,
  tools: [currentWeatherTool3],
});

const input3 = "What's the weather like in SF?";
const result3 = await agentExecutor3.invoke({ input: input3 });

console.log(result3.output);
```

```text
[
  {
    index: 0,
    type: 'text',
    text: '\n\nThe current weather in San Francisco, CA is 28°C.'
  }
]
```

## Custom headers

You can pass custom headers in your requests like this:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm4 = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  maxTokens: 1024,
  clientOptions: {
    defaultHeaders: {
      "X-Api-Key": process.env.ANTHROPIC_API_KEY,
    },
  },
});

const res4 = await llm4.invoke("Why is the sky blue?");

console.log(res4);
```

```text
AIMessage {
  "id": "msg_013Ft3kN62gNtiMWRqg6xxt8",
  "content": "The sky appears blue due to a phenomenon called Rayleigh scattering. Here's a brief explanation:\n\n1) Sunlight is made up of different wavelengths of light, including the visible spectrum that we see as colors.\n\n2) As sunlight passes through the Earth's atmosphere, the different wavelengths of light interact with the gas molecules in the air.\n\n3) The shorter wavelengths of light, such as the blue and violet colors, get scattered more easily by the tiny gas molecules. This is because the wavelengths are similar in size to the molecules.\n\n4) The longer wavelengths of light, such as red and orange, get scattered much less by the gas molecules and travel more directly through the atmosphere.\n\n5) The blue wavelengths that are scattered in different directions become scattered across the entire sky, making the sky appear blue to our eyes.\n\n6) During sunrise and sunset, the sun's rays travel through more atmosphere before reaching our eyes, causing the blue light to get scattered away and allowing more of the red/orange wavelengths to pass through, giving those colors in the sky.\n\nSo in essence, the abundant scattering of blue light by the gas molecules in the atmosphere is what causes the sky to appear blue during the daytime.",
  "additional_kwargs": {
    "id": "msg_013Ft3kN62gNtiMWRqg6xxt8",
    "type": "message",
    "role": "assistant",
    "model": "claude-3-sonnet-20240229",
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 13,
      "output_tokens": 272
    }
  },
  "response_metadata": {
    "id": "msg_013Ft3kN62gNtiMWRqg6xxt8",
    "model": "claude-3-sonnet-20240229",
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 13,
      "output_tokens": 272
    },
    "type": "message",
    "role": "assistant"
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 13,
    "output_tokens": 272,
    "total_tokens": 285
  }
}
```

## Tools

The Anthropic API supports tool calling, along with multi-tool calling. The following examples demonstrate how to call tools:

### Single tool

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const calculatorSchema5 = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const tool5 = {
  name: "calculator",
  description: "A simple calculator tool",
  input_schema: zodToJsonSchema(calculatorSchema5),
};

const llm5 = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-haiku-20240307",
}).bind({
  tools: [tool5],
});

const prompt5 = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain5 = prompt5.pipe(llm5);

const response5 = await chain5.invoke({
  input: "What is 2 + 2?",
});
console.log(response5);
```

```text
AIMessage {
  "id": "msg_01XPUHrR4sNCqPr1i9zcsAsg",
  "content": [
    {
      "type": "text",
      "text": "Okay, let me use the calculator tool to find the answer:"
    },
    {
      "type": "tool_use",
      "id": "toolu_01MhUVuUedc1drBKLarhedFZ",
      "name": "calculator",
      "input": {
        "number1": 2,
        "number2": 2,
        "operation": "add"
      }
    }
  ],
  "additional_kwargs": {
    "id": "msg_01XPUHrR4sNCqPr1i9zcsAsg",
    "type": "message",
    "role": "assistant",
    "model": "claude-3-haiku-20240307",
    "stop_reason": "tool_use",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 449,
      "output_tokens": 101
    }
  },
  "response_metadata": {
    "id": "msg_01XPUHrR4sNCqPr1i9zcsAsg",
    "model": "claude-3-haiku-20240307",
    "stop_reason": "tool_use",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 449,
      "output_tokens": 101
    },
    "type": "message",
    "role": "assistant"
  },
  "tool_calls": [
    {
      "name": "calculator",
      "args": {
        "number1": 2,
        "number2": 2,
        "operation": "add"
      },
      "id": "toolu_01MhUVuUedc1drBKLarhedFZ",
      "type": "tool_call"
    }
  ],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 449,
    "output_tokens": 101,
    "total_tokens": 550
  }
}
```

### Forced tool calling

In this example we’ll provide the model with two tools:

- `calculator`
- `get_weather`

Then, when we bind the tools to the model, we’ll force it to always use the `get_weather` tool by passing the `tool_choice` arg like this:

```typescript
.bind({
  tools,
  tool_choice: {
    type: "tool",
    name: "get_weather",
  },
});
```

Finally, we’ll invoke the model, but instead of asking about the weather, we’ll ask it to do some math. Since we explicitly forced the model to use the `get_weather` tool, it will ignore the question and call that tool anyway (in this case returning `<UNKNOWN>` for the arguments, which is expected, since the input contains no location).

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const calculatorSchema6 = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const weatherSchema6 = z.object({
  city: z.string().describe("The city to get the weather from"),
  state: z.string().optional().describe("The state to get the weather from"),
});

const tools6 = [
  {
    name: "calculator",
    description: "A simple calculator tool",
    input_schema: zodToJsonSchema(calculatorSchema6),
  },
  {
    name: "get_weather",
    description:
      "Get the weather of a specific location and return the temperature in Celsius.",
    input_schema: zodToJsonSchema(weatherSchema6),
  },
];

const llm6 = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-haiku-20240307",
}).bind({
  tools: tools6,
  tool_choice: {
    type: "tool",
    name: "get_weather",
  },
});

const prompt6 = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain6 = prompt6.pipe(llm6);

const response6 = await chain6.invoke({
  input: "What is the sum of 2725 and 273639",
});

console.log(response6);
```

```text
AIMessage {
  "id": "msg_018G4mEZu8KNKtaQxZQ3o8YB",
  "content": [
    {
      "type": "tool_use",
      "id": "toolu_01DS9RwsFKdhHNYmhwPJHdHa",
      "name": "get_weather",
      "input": {
        "city": "<UNKNOWN>",
        "state": "<UNKNOWN>"
      }
    }
  ],
  "additional_kwargs": {
    "id": "msg_018G4mEZu8KNKtaQxZQ3o8YB",
    "type": "message",
    "role": "assistant",
    "model": "claude-3-haiku-20240307",
    "stop_reason": "tool_use",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 672,
      "output_tokens": 51
    }
  },
  "response_metadata": {
    "id": "msg_018G4mEZu8KNKtaQxZQ3o8YB",
    "model": "claude-3-haiku-20240307",
    "stop_reason": "tool_use",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 672,
      "output_tokens": 51
    },
    "type": "message",
    "role": "assistant"
  },
  "tool_calls": [
    {
      "name": "get_weather",
      "args": {
        "city": "<UNKNOWN>",
        "state": "<UNKNOWN>"
      },
      "id": "toolu_01DS9RwsFKdhHNYmhwPJHdHa",
      "type": "tool_call"
    }
  ],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 672,
    "output_tokens": 51,
    "total_tokens": 723
  }
}
```

The `tool_choice` argument has three possible values:

- `{ type: "tool", name: "tool_name" } | string` - Forces the model to use the specified tool. If a single string is passed instead, it is treated as the tool name.
- `"any"` - Allows the model to choose which tool to call, but forces it to call at least one.
- `"auto"` - The default value. Allows the model to call any tool, or none at all.
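These shapes can be summarized in a TypeScript union type. This is a sketch for illustration only; the `ToolChoice` alias is our own name, not an export of `@langchain/anthropic`:

```typescript
// The accepted shapes of the tool_choice argument
// (ToolChoice is a hypothetical alias for illustration):
type ToolChoice =
  | { type: "tool"; name: string } // force one specific named tool
  | string;                        // "any", "auto", or a bare tool name

const forced: ToolChoice = { type: "tool", name: "get_weather" };
const byName: ToolChoice = "get_weather"; // bare string = tool name
const atLeastOne: ToolChoice = "any";     // must call at least one tool
const modelDecides: ToolChoice = "auto";  // default: model picks freely
```

The forced example in this section corresponds to the first shape.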

## `withStructuredOutput`

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const calculatorSchema7 = z
  .object({
    operation: z
      .enum(["add", "subtract", "multiply", "divide"])
      .describe("The type of operation to execute."),
    number1: z.number().describe("The first number to operate on."),
    number2: z.number().describe("The second number to operate on."),
  })
  .describe("A simple calculator tool");

const llm7 = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-haiku-20240307",
});

// Pass the schema to the withStructuredOutput method
const modelWithTool7 = llm7.withStructuredOutput(calculatorSchema7);

const prompt7 = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain7 = prompt7.pipe(modelWithTool7);

const response7 = await chain7.invoke({
  input: "What is 2 + 2?",
});
console.log(response7);
```

```text
{ operation: 'add', number1: 2, number2: 2 }
```

You can supply a `name` field to give the model additional context about what you are trying to generate. You can also pass `includeRaw` to get the raw message back from the model as well:

```typescript
const includeRawModel7 = llm7.withStructuredOutput(calculatorSchema7, {
  name: "calculator",
  includeRaw: true,
});
const includeRawChain7 = prompt7.pipe(includeRawModel7);

const includeRawResponse7 = await includeRawChain7.invoke({
  input: "What is 2 + 2?",
});

console.log(includeRawResponse7);
```

```text
{
  raw: AIMessage {
    "id": "msg_01TrkHbEkioCYNHQhqxw5unu",
    "content": [
      {
        "type": "tool_use",
        "id": "toolu_01XMrGHXeSVTfSw1oKFZokzG",
        "name": "calculator",
        "input": {
          "number1": 2,
          "number2": 2,
          "operation": "add"
        }
      }
    ],
    "additional_kwargs": {
      "id": "msg_01TrkHbEkioCYNHQhqxw5unu",
      "type": "message",
      "role": "assistant",
      "model": "claude-3-haiku-20240307",
      "stop_reason": "tool_use",
      "stop_sequence": null,
      "usage": {
        "input_tokens": 552,
        "output_tokens": 69
      }
    },
    "response_metadata": {
      "id": "msg_01TrkHbEkioCYNHQhqxw5unu",
      "model": "claude-3-haiku-20240307",
      "stop_reason": "tool_use",
      "stop_sequence": null,
      "usage": {
        "input_tokens": 552,
        "output_tokens": 69
      },
      "type": "message",
      "role": "assistant"
    },
    "tool_calls": [
      {
        "name": "calculator",
        "args": {
          "number1": 2,
          "number2": 2,
          "operation": "add"
        },
        "id": "toolu_01XMrGHXeSVTfSw1oKFZokzG",
        "type": "tool_call"
      }
    ],
    "invalid_tool_calls": [],
    "usage_metadata": {
      "input_tokens": 552,
      "output_tokens": 69,
      "total_tokens": 621
    }
  },
  parsed: { operation: 'add', number1: 2, number2: 2 }
}
```

## API reference

For detailed documentation of all ChatAnthropic features and configurations, head to the [API reference](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html).

