Claude API Reference
Messages
Create a Message
Send a structured list of input messages with text and/or image content, and the model will generate the next message in the conversation.
The Messages API can be used for either single queries or stateless multi-turn conversations.
Example Request
curl https://api.rockapi.ru/anthropic/v1/messages \
--header "x-api-key: $ROCKAPI_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data \
'{
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "messages": [
    {"role": "user", "content": "Hello, world"}
  ]
}'
Example Response
{
  "content": [
    {
      "text": "Hi! My name is Claude.",
      "type": "text"
    }
  ],
  "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
  "model": "claude-3-5-sonnet-20240620",
  "role": "assistant",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "type": "message",
  "usage": {
    "input_tokens": 10,
    "output_tokens": 25
  }
}
Parameters
model (string, required)
The model that will complete your prompt.
messages (object[], required)
Input messages.
Our models are trained to operate on alternating user and assistant conversational turns. When creating a new Message, you specify the prior conversational turns with the messages parameter, and the model then generates the next Message in the conversation.
Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages. The first message must always use the user role.
If the final message uses the assistant role, the response content will continue immediately from the content in that message. This can be used to constrain part of the model's response.
Example with a single user message:
[{"role": "user", "content": "Hello, Claude"}]
Example with multiple conversational turns:
[
  {"role": "user", "content": "Hello there."},
  {"role": "assistant", "content": "Hi, I'm Claude. How can I help you?"},
  {"role": "user", "content": "Can you explain LLMs in plain English?"}
]
Example with a partially-filled response from Claude:
[
  {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
  {"role": "assistant", "content": "The best answer is ("}
]
Each input message content may be either a single string or an array of content blocks, where each block has a specific type. Using a string for content is shorthand for an array of one content block of type "text". The following input messages are equivalent:
{"role": "user", "content": "Hello, Claude"}
{"role": "user", "content": [{"type": "text", "text": "Hello, Claude"}]}
Starting with Claude 3 models, you can also send image content blocks:
{"role": "user", "content": [
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/jpeg",
"data": "/9j/4AAQSkZJRg..."
}
},
{"type": "text", "text": "What is in this image?"}
]}
We currently support the base64 source type for images, and the image/jpeg, image/png, image/gif, and image/webp media types.
Note that if you want to include a system prompt, you can use the top-level system parameter — there is no "system" role for input messages in the Messages API.
messages.role (enum<string>, required)
Available options: user, assistant
messages.content (string | object[], required)
max_tokens (integer, required)
The maximum number of tokens to generate before stopping.
Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.
Different models have different maximum values for this parameter. See models for details.
metadata (object)
An object describing metadata about the request.
metadata.user_id (string | null)
An external identifier for the user who is associated with the request.
This should be a uuid, hash value, or other opaque identifier. Anthropic may use this id to help detect abuse. Do not include any identifying information such as name, email address, or phone number.
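For example, a minimal sketch of deriving an opaque identifier in Python (the internal account ID is hypothetical):
import hashlib

# Hypothetical internal account ID -- never send names, emails, or phone numbers.
internal_account_id = "account-12345"
opaque_user_id = hashlib.sha256(internal_account_id.encode()).hexdigest()

# Then pass the hash in the request body:
# "metadata": {"user_id": opaque_user_id}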
stop_sequences (string[])
Custom text sequences that will cause the model to stop generating.
Our models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn".
If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence.
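For example, a minimal sketch with the Python SDK (the stop sequence here is illustrative):
import os
import anthropic

client = anthropic.Anthropic(
    api_key=os.environ["ROCKAPI_API_KEY"],
    base_url="https://api.rockapi.ru/anthropic",
)

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=256,
    stop_sequences=["###"],  # stop as soon as the model emits "###"
    messages=[{"role": "user", "content": "Write a haiku, then ### on its own line."}],
)

if response.stop_reason == "stop_sequence":
    print("Stopped on:", response.stop_sequence)  # "###"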
stream (boolean)
Whether to incrementally stream the response using server-sent events.
See streaming for details.
system (string)
System prompt.
A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role. See our guide to system prompts.
temperature (number)
Amount of randomness injected into the response.
Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice tasks, and closer to 1.0 for creative and generative tasks.
Note that even with a temperature of 0.0, the results will not be fully deterministic.
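Reusing the client from the stop_sequences sketch above, a near-deterministic request for an analytical task might look like:
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=5,
    temperature=0.0,  # low temperature suits analytical / multiple choice tasks
    messages=[{"role": "user", "content": "Is 7 prime? Answer yes or no."}],
)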
tool_choice (object)
How the model should use the provided tools. The model can use a specific tool, any available tool, or decide by itself.
tool_choice.type (enum<string>, required)
Available options: auto, any, tool
tool_choice.name (string, required)
The name of the tool to use. Only applicable when tool_choice.type is "tool".
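For example (the get_weather name refers to a tool you have provided, as in the streaming example later on this page):
- {"type": "auto"}: the model decides whether to use a tool (the default when tools are provided).
- {"type": "any"}: the model must use one of the provided tools.
- {"type": "tool", "name": "get_weather"}: the model must use the named tool.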
tools (object[])
Definitions of tools that the model may use.
If you include tools in your API request, the model may return tool_use content blocks that represent the model's use of those tools. You can then run those tools using the tool input generated by the model and then optionally return results back to the model using tool_result content blocks.
Each tool definition includes:
- name: Name of the tool.
- description: Optional, but strongly-recommended description of the tool.
- input_schema: JSON schema for the tool input shape that the model will produce in tool_use output content blocks.
For example, if you defined tools as:
[
  {
    "name": "get_stock_price",
    "description": "Get the current stock price for a given ticker symbol.",
    "input_schema": {
      "type": "object",
      "properties": {
        "ticker": {
          "type": "string",
          "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
        }
      },
      "required": ["ticker"]
    }
  }
]
And then asked the model "What's the S&P 500 at today?", the model might produce tool_use content blocks in the response like this:
[
  {
    "type": "tool_use",
    "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "name": "get_stock_price",
    "input": { "ticker": "^GSPC" }
  }
]
You might then run your get_stock_price tool with {"ticker": "^GSPC"} as an input, and return the following back to the model in a subsequent user message:
[
  {
    "type": "tool_result",
    "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "content": "259.75 USD"
  }
]
Tools can be used for workflows that include running client-side tools and functions, or more generally whenever you want the model to produce a particular JSON structure of output.
See our guide for more details.
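Putting this together, a sketch of the full round-trip with the Python SDK (the client-side get_stock_price implementation is a stand-in for a real data source):
import os
import anthropic

client = anthropic.Anthropic(
    api_key=os.environ["ROCKAPI_API_KEY"],
    base_url="https://api.rockapi.ru/anthropic",
)

tools = [{
    "name": "get_stock_price",
    "description": "Get the current stock price for a given ticker symbol.",
    "input_schema": {
        "type": "object",
        "properties": {"ticker": {"type": "string"}},
        "required": ["ticker"],
    },
}]

def get_stock_price(ticker: str) -> str:
    # Stand-in implementation; replace with a real market-data lookup.
    return "259.75 USD"

messages = [{"role": "user", "content": "What's the S&P 500 at today?"}]
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    tools=tools,
    messages=messages,
)

while response.stop_reason == "tool_use":
    tool_use = next(b for b in response.content if b.type == "tool_use")
    # Echo the assistant turn back, then return the result in a user turn.
    messages.append({"role": "assistant", "content": response.content})
    messages.append({"role": "user", "content": [{
        "type": "tool_result",
        "tool_use_id": tool_use.id,
        "content": get_stock_price(**tool_use.input),
    }]})
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )

print(response.content[0].text)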
tools.description (string)
Description of what this tool does.
Tool descriptions should be as detailed as possible. The more information that the model has about what the tool is and how to use it, the better it will perform. You can use natural language descriptions to reinforce important aspects of the tool input JSON schema.
tools.name (string, required)
tools.input_schema (object, required)
JSON schema for this tool's input.
This defines the shape of the input that your tool accepts and that the model will produce.
tools.input_schema.type (enum<string>, required)
Available options: object
tools.input_schema.properties (object | null)
Response
id (string, required)
Unique object identifier.
The format and length of IDs may change over time.
type (enum<string>, default: message, required)
Object type.
For Messages, this is always "message"
.
Available options: message
role (enum<string>, default: assistant, required)
Conversational role of the generated message.
This will always be "assistant"
.
Available options: assistant
content (object[], required)
Content generated by the model.
This is an array of content blocks, each of which has a type that determines its shape.
Example:
[{"type": "text", "text": "Hi, I'm Claude."}]
If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.
For example, if the input messages were:
[
  {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
  {"role": "assistant", "content": "The best answer is ("}
]
Then the response content might be:
[{"type": "text", "text": "B)"}]
content.type (enum<string>, default: text, required)
Available options: text
content.text (string, required)
model (string, required)
The model that handled the request.
stop_reason (enum<string> | null, required)
The reason that we stopped.
This may be one of the following values:
- "end_turn": the model reached a natural stopping point
- "max_tokens": we exceeded the requested max_tokens or the model's maximum
- "stop_sequence": one of your provided custom stop_sequences was generated
- "tool_use": the model invoked one or more tools
In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
Available options: end_turn, max_tokens, stop_sequence, tool_use
stop_sequence (string | null, required)
Which custom stop sequence was generated, if any.
This value will be a non-null string if one of your custom stop sequences was generated.
usage (object, required)
Billing and rate-limit usage.
Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.
Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.
For example, output_tokens will be non-zero, even for an empty string response from Claude.
usage.input_tokens (integer, required)
The number of input tokens which were used.
usage.output_tokens (integer, required)
The number of output tokens which were used.
Streaming Messages
When creating a Message, you can set "stream": true
to incrementally stream the response using server-sent events (SSE).
Streaming with SDKs
Our Python and TypeScript SDKs offer multiple ways of streaming. The Python SDK allows both sync and async streams. See the documentation in each SDK for details.
import os
import anthropic

client = anthropic.Anthropic(
    api_key=os.environ["ROCKAPI_API_KEY"],
    base_url="https://api.rockapi.ru/anthropic",
)
with client.messages.stream(
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
    model="claude-3-5-sonnet-20240620",
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
Event types
Each server-sent event includes a named event type and associated JSON data. Each event will use an SSE event name (e.g. event: message_stop), and include the matching event type in its data.
Each stream uses the following event flow (a minimal handling sketch follows the list):
- message_start: contains a Message object with empty content.
- A series of content blocks, each of which has a content_block_start event, one or more content_block_delta events, and a content_block_stop event. Each content block will have an index that corresponds to its index in the final Message content array.
- One or more message_delta events, indicating top-level changes to the final Message object.
- A final message_stop event.
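For example, a sketch of consuming this flow with the Python SDK's raw event stream:
import os
import anthropic

client = anthropic.Anthropic(
    api_key=os.environ["ROCKAPI_API_KEY"],
    base_url="https://api.rockapi.ru/anthropic",
)

stream = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,  # yields raw server-sent events
)

for event in stream:
    if event.type == "content_block_delta" and event.delta.type == "text_delta":
        print(event.delta.text, end="", flush=True)
    elif event.type == "message_stop":
        print()  # stream finished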
Ping events
Event streams may also include any number of ping events.
Error events
We may occasionally send errors in the event stream. For example, during periods of high usage, you may receive an overloaded_error, which would normally correspond to an HTTP 529 in a non-streaming context:
event: error
data: {"type": "error", "error": {"type": "overloaded_error", "message": "Overloaded"}}
Other events
In accordance with our versioning policy, we may add new event types, and your code should handle unknown event types gracefully.
Delta types
Each content_block_delta event contains a delta of a type that updates the content block at a given index.
Text delta
A text content block delta looks like:
event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "ello frien"}}
Input JSON delta
The deltas for tool_use content blocks correspond to updates for the input field of the block. To support maximum granularity, the deltas are partial JSON strings, whereas the final tool_use.input is always an object.
You can accumulate the string deltas and parse the JSON once you receive a content_block_stop event, by using a library like Pydantic to do partial JSON parsing, or by using our SDKs, which provide helpers to access parsed incremental values.
A tool_use content block delta looks like:
event: content_block_delta
data: {"type": "content_block_delta", "index": 1, "delta": {"type": "input_json_delta", "partial_json": "{\"location\": \"San Fra"}}
Note: Our current models only support emitting one complete key and value property from input at a time. As such, when using tools, there may be delays between streaming events while the model is working. Once an input key and value are accumulated, we emit them as multiple content_block_delta events with chunked partial JSON so that the format can automatically support finer granularity in future models.
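A sketch of accumulating these deltas by block index and parsing with the standard json module once each block stops (the stream variable is a raw event stream, as in the sketch above):
import json
from collections import defaultdict

partial_json = defaultdict(str)
tool_inputs = {}

for event in stream:
    if event.type == "content_block_delta" and event.delta.type == "input_json_delta":
        partial_json[event.index] += event.delta.partial_json
    elif event.type == "content_block_stop" and event.index in partial_json:
        # The accumulated string is complete, valid JSON at this point.
        tool_inputs[event.index] = json.loads(partial_json.pop(event.index))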
Raw HTTP Stream response
We strongly recommend that you use our client SDKs when using streaming mode. However, if you are building a direct API integration, you will need to handle these events yourself.
A stream response consists of:
1. A message_start event
2. Potentially multiple content blocks, each of which contains:
   a. A content_block_start event
   b. Potentially multiple content_block_delta events
   c. A content_block_stop event
3. A message_delta event
4. A message_stop event
There may be ping events dispersed throughout the response as well. See Event types for more details on the format.
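If you are handling the raw stream yourself, here is a minimal sketch with the requests library (it parses only data: lines; a production integration should also track SSE event names and handle error events):
import json
import os
import requests

response = requests.post(
    "https://api.rockapi.ru/anthropic/v1/messages",
    headers={
        "x-api-key": os.environ["ROCKAPI_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,  # keep the connection open and iterate over lines as they arrive
)

for line in response.iter_lines():
    if not line.startswith(b"data: "):
        continue  # skip event-name lines and blank keep-alives
    data = json.loads(line[len(b"data: "):])
    if data["type"] == "content_block_delta" and data["delta"]["type"] == "text_delta":
        print(data["delta"]["text"], end="", flush=True)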
Basic streaming request
curl https://api.rockapi.ru/anthropic/v1/messages \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--header "x-api-key: $ROCKAPI_API_KEY" \
--data '{
  "model": "claude-3-5-sonnet-20240620",
  "messages": [{"role": "user", "content": "Hello"}],
  "max_tokens": 256,
  "stream": true
}'
Response
event: message_start
data: {"type": "message_start", "message": {"id": "msg_1nZdL29xx5MUA1yADyHTEsnR8uuvGzszyY", "type": "message", "role": "assistant", "content": [], "model": "claude-3-5-sonnet-20240620", "stop_reason": null, "stop_sequence": null, "usage": {"input_tokens": 25, "output_tokens": 1}}}
event: content_block_start
data: {"type": "content_block_start", "index": 0, "content_block": {"type": "text", "text": ""}}
event: ping
data: {"type": "ping"}
event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "Hello"}}
event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "!"}}
event: content_block_stop
data: {"type": "content_block_stop", "index": 0}
event: message_delta
data: {"type": "message_delta", "delta": {"stop_reason": "end_turn", "stop_sequence": null}, "usage": {"output_tokens": 15}}
event: message_stop
data: {"type": "message_stop"}
Streaming request with tool use
In this request, we ask Claude to use a tool to tell us the weather.
curl https://api.rockapi.ru/anthropic/v1/messages \
-H "content-type: application/json" \
-H "x-api-key: $ROCKAPI_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-d '{
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "tools": [
    {
      "name": "get_weather",
      "description": "Get the current weather in a given location",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          }
        },
        "required": ["location"]
      }
    }
  ],
  "tool_choice": {"type": "any"},
  "messages": [
    {
      "role": "user",
      "content": "What is the weather like in San Francisco?"
    }
  ],
  "stream": true
}'
Response
event: message_start
data: {"type":"message_start","message":{"id":"msg_014p7gG3wDgGV9EUtLvnow3U","type":"message","role":"assistant","model":"claude-3-haiku-20240307","stop_sequence":null,"usage":{"input_tokens":472,"output_tokens":2},"content":[],"stop_reason":null}}
event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}
event: ping
data: {"type":"ping"}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Okay"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":","}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" let"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"'s"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" check"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" the"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" weather"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" for"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" San"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" Francisco"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":","}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" CA"}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":":"}}
event: content_block_stop
data: {"type":"content_block_stop","index":0}
event: content_block_start
data: {"type":"content_block_start","index":1,"content_block":{"type":"tool_use","id":"toolu_01T1x1fJ34qAmk2tNTrN7Up6","name":"get_weather","input":{}}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":""}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":"{\"location\":"}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":" \"San"}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":" Francisc"}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":"o,"}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":" CA\""}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":", "}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":"\"unit\": \"fah"}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"input_json_delta","partial_json":"renheit\"}"}}
event: content_block_stop
data: {"type":"content_block_stop","index":1}
event: message_delta
data: {"type":"message_delta","delta":{"stop_reason":"tool_use","stop_sequence":null},"usage":{"output_tokens":89}}
event: message_stop
data: {"type":"message_stop"}
Migrating from Text Completions to Messages
When migrating from Text Completions to Messages, consider the following changes.
Inputs and outputs
The largest change between Text Completions and Messages is the way in which you specify model inputs and receive outputs from the model.
With Text Completions, inputs are raw strings:
prompt = "\n\nHuman: Hello there\n\nAssistant: Hi, I'm Claude. How can I help?\n\nHuman: Can you explain Glycolysis to me?\n\nAssistant:"
With Messages, you specify a list of input messages instead of a raw prompt:
messages = [
    {"role": "user", "content": "Hello there."},
    {"role": "assistant", "content": "Hi, I'm Claude. How can I help?"},
    {"role": "user", "content": "Can you explain Glycolysis to me?"},
]
Each input message has a role and content.
The Text Completions API expects alternating \n\nHuman: and \n\nAssistant: turns, but the Messages API expects user and assistant roles. You may see documentation referring to either “human” or “user” turns. These refer to the same role, and will be “user” going forward.
With Text Completions, the model’s generated text is returned in the completion values of the response:
>>> response = anthropic.completions.create(...)
>>> response.completion
" Hi, I'm Claude"
With Messages, the response is the content value, which is a list of content blocks:
>>> response = anthropic.messages.create(...)
>>> response.content
[{"type": "text", "text": "Hi, I'm Claude"}]
Putting words in Claude’s mouth
With Text Completions, you can pre-fill part of Claude’s response:
prompt = "\n\nHuman: Hello\n\nAssistant: Hello, my name is"
With Messages, you can achieve the same result by making the last input message have the assistant role:
messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hello, my name is"},
]
When doing so, response content will continue from the last input message content:
{
  "role": "assistant",
  "content": [{"type": "text", "text": " Claude. How can I assist you today?"}],
  ...
}
System prompt
With Text Completions, the system prompt is specified by adding text before the first \n\nHuman: turn:
prompt = "Today is January 1, 2024.\n\nHuman: Hello, Claude\n\nAssistant:"
With Messages, you specify the system prompt with the system parameter:
anthropic.Anthropic().messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    system="Today is January 1, 2024.",  # <-- system prompt
    messages=[
        {"role": "user", "content": "Hello, Claude"}
    ]
)
Model names
The Messages API requires that you specify the full model version (e.g. claude-3-opus-20240229).
We previously supported specifying only the major version number (e.g. claude-2), which resulted in automatic upgrades to minor versions. However, we no longer recommend this integration pattern, and Messages do not support it.
Stop reason
Text Completions always have a stop_reason of either:
- "stop_sequence": The model either ended its turn naturally, or one of your custom stop sequences was generated.
- "max_tokens": Either the model generated your specified max_tokens of content, or it reached its absolute maximum.
Messages have a stop_reason of one of the following values:
- "end_turn": The conversational turn ended naturally.
- "stop_sequence": One of your specified custom stop sequences was generated.
- "max_tokens": (unchanged)
Specifying max tokens
- Text Completions: max_tokens_to_sample parameter. No validation, but capped values per-model.
- Messages: max_tokens parameter. If passing a value higher than the model supports, returns a validation error.
Streaming format
When using "stream": true
in with Text Completions, the response included any of completion
, ping
, and error
server-sent-events. See Text Completions streaming for details.
Messages can contain multiple content blocks of varying types, and so its streaming format is somewhat more complex. See Messages streaming for details.
Messages examples
Request and response examples for the Messages API
See the API reference for full documentation on available parameters.
Basic request and response
#!/bin/sh
curl https://api.rockapi.ru/anthropic/v1/messages \
--header "x-api-key: $ROCKAPI_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data '{
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "messages": [
    {"role": "user", "content": "Hello, Claude"}
  ]
}'
{
  "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello!"
    }
  ],
  "model": "claude-3-5-sonnet-20240620",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 12,
    "output_tokens": 6
  }
}
Multiple conversational turns
The Messages API is stateless, which means that you always send the full conversational history to the API. You can use this pattern to build up a conversation over time. Earlier conversational turns don’t necessarily need to actually originate from Claude — you can use synthetic assistant messages.
#!/bin/sh
curl https://api.rockapi.ru/anthropic/v1/messages \
--header "x-api-key: $ROCKAPI_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data '{
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "messages": [
    {"role": "user", "content": "Hello, Claude"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "Can you describe LLMs to me?"}
  ]
}'
{
  "id": "msg_018gCsTGsXkYJVqYPxTgDHBU",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Sure, I'd be happy to provide..."
    }
  ],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 30,
    "output_tokens": 309
  }
}
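A sketch of this pattern with the Python SDK, accumulating the history across turns:
import os
import anthropic

client = anthropic.Anthropic(
    api_key=os.environ["ROCKAPI_API_KEY"],
    base_url="https://api.rockapi.ru/anthropic",
)

messages = []

def ask(user_text: str) -> str:
    # The API is stateless, so every call sends the full history.
    messages.append({"role": "user", "content": user_text})
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        messages=messages,
    )
    reply = response.content[0].text
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("Hello, Claude"))
print(ask("Can you describe LLMs to me?"))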
Putting words in Claude’s mouth
You can pre-fill part of Claude’s response in the last position of the input messages list. This can be used to shape Claude’s response. The example below uses "max_tokens": 1 to get a single multiple choice answer from Claude.
#!/bin/sh
curl https://api.rockapi.ru/anthropic/v1/messages \
--header "x-api-key: $ROCKAPI_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data '{
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1,
  "messages": [
    {"role": "user", "content": "What is Latin for Ant? (A) Apoidea, (B) Rhopalocera, (C) Formicidae"},
    {"role": "assistant", "content": "The answer is ("}
  ]
}'
{
  "id": "msg_01Q8Faay6S7QPTvEUUQARt7h",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "C"
    }
  ],
  "model": "claude-3-5-sonnet-20240620",
  "stop_reason": "max_tokens",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 42,
    "output_tokens": 1
  }
}
Vision
Claude can read both text and images in requests. Currently, we support the base64 source type for images, and the image/jpeg, image/png, image/gif, and image/webp media types. See our vision guide for more details.
#!/bin/sh
IMAGE_URL="https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
IMAGE_MEDIA_TYPE="image/jpeg"
IMAGE_BASE64=$(curl "$IMAGE_URL" | base64)
curl https://api.rockapi.ru/anthropic/v1/messages \
--header "x-api-key: $ROCKAPI_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data '{
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "messages": [
    {"role": "user", "content": [
      {"type": "image", "source": {
        "type": "base64",
        "media_type": "'$IMAGE_MEDIA_TYPE'",
        "data": "'$IMAGE_BASE64'"
      }},
      {"type": "text", "text": "What is in the above image?"}
    ]}
  ]
}'
{
  "id": "msg_01EcyWo6m4hyW8KHs2y2pei5",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "This image shows an ant, specifically a close-up view of an ant. The ant is shown in detail, with its distinct head, antennae, and legs clearly visible. The image is focused on capturing the intricate details and features of the ant, likely taken with a macro lens to get an extreme close-up perspective."
    }
  ],
  "model": "claude-3-5-sonnet-20240620",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 1551,
    "output_tokens": 71
  }
}
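The same kind of request in Python, assuming a local JPEG (the file path is illustrative):
import base64
import os
import anthropic

client = anthropic.Anthropic(
    api_key=os.environ["ROCKAPI_API_KEY"],
    base_url="https://api.rockapi.ru/anthropic",
)

with open("ant.jpg", "rb") as f:  # hypothetical local image file
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {
                "type": "base64",
                "media_type": "image/jpeg",
                "data": image_data,
            }},
            {"type": "text", "text": "What is in the above image?"},
        ],
    }],
)
print(response.content[0].text)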
Tool use and JSON mode
See our guide for examples of how to use tools with the Messages API.