OpenAI API Reference
Chat
Create chat completion
POST https://api.rockapi.ru/openai/v1/chat/completions
Creates a model response for the given chat conversation.
Request body
Parameter | Type | Required | Default | Description | Sub-parameters |
---|---|---|---|---|---|
messages | array | Required | - | A list of messages comprising the conversation so far. | - role (string): Required. The role of the message author. - content (string or array): Required for most roles. The content of the message. - name (string): Optional. An optional name for the participant. - tool_calls (array): Optional. The tool calls generated by the model (for the "assistant" role). - function_call (object): Deprecated. The function call information (for the "assistant" role). - tool_call_id (string): Required for the "tool" role. The ID of the tool call being responded to. |
model | string | Required | - | ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API. | - |
frequency_penalty | number or null | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. | - |
logit_bias | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. | - |
logprobs | boolean or null | Optional | false | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. | - |
top_logprobs | integer or null | Optional | - | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used. | - |
max_tokens | integer or null | Optional | - | The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. | - |
n | integer or null | Optional | 1 | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs. | - |
presence_penalty | number or null | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. See more information about frequency and presence penalties. | - |
response_format | object | Optional | - | An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {"type": "json_object"} enables JSON mode, which guarantees the message the model generates is valid JSON. | - |
seed | integer or null | Optional | - | This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. | - |
service_tier | string or null | Optional | null | Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service. If set to 'auto', the system will utilize scale tier credits until they are exhausted. If set to 'default', the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee. When this parameter is set, the response body will include the service_tier utilized. | - |
stop | string / array / null | Optional | null | Up to 4 sequences where the API will stop generating further tokens. | - |
stream | boolean or null | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. | - |
stream_options | object or null | Optional | null | Options for the streaming response. Only set this when you set stream: true. | - |
temperature | number or null | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. | - |
top_p | number or null | Optional | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | - |
tools | array | Optional | - | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported. | - type (string): Required. The type of the tool. Currently, only function is supported. - function (object): Required. Contains name (string, required), description (string, optional), and parameters (object, optional). |
tool_choice | string or object | Optional | - | Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present; auto is the default if tools are present. | If object: - type (string): Required. The type of the tool. Currently, only function is supported. - function (object): Required. Contains name (string) specifying the function to be called. |
parallel_tool_calls | boolean | Optional | true | Whether to enable parallel function calling during tool use. | - |
user | string | Optional | - | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | - |
function_call | string or object | Deprecated | - | Deprecated in favor of tool_choice. Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. none is the default when no functions are present; auto is the default if functions are present. | If object: - name (string): The name of the function to call. |
functions | array | Deprecated | - | Deprecated in favor of tools. A list of functions the model may generate JSON inputs for. | - name (string): Required. The name of the function. - description (string): Optional. A description of what the function does. - parameters (object): Optional. The parameters the function accepts, described as a JSON Schema object. |
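The messages, tools, and tool_choice parameters above can be sketched as a request body in Python. The get_weather function, its JSON Schema, and the helper name are illustrative only, not part of the API:

```python
import json

# Minimal sketch of a chat completion request body that declares one
# tool. "get_weather" is a hypothetical function used for illustration;
# the top-level keys match the parameter table above.
def build_chat_request(user_message: str) -> str:
    body = {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }
    return json.dumps(body)
```

The resulting JSON string is what you would POST to the endpoint; with tools present and tool_choice "auto", the model may answer directly or return tool_calls in the assistant message.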
Returns
Returns a chat completion object, or a streamed sequence of chat completion chunk objects if the request is streamed.
Example request
curl https://api.rockapi.ru/openai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $ROCKAPI_API_KEY" \
-d '{
"model": "gpt-4o",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
Example response
{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1677652288,
"model": "gpt-4o-mini",
"system_fingerprint": "fp_44709d6fcb",
"choices": [{
"index": 0,
"message": {
"role": "assistant",
"content": "\n\nHello there, how may I assist you today?"
},
"logprobs": null,
"finish_reason": "stop"
}],
"usage": {
"prompt_tokens": 9,
"completion_tokens": 12,
"total_tokens": 21
}
}
The chat completion object
Represents a chat completion response returned by the model, based on the provided input.
Field | Type | Description |
---|---|---|
id | string | A unique identifier for the chat completion. |
choices | array | A list of chat completion choices. Can be more than one if n is greater than 1. |
created | integer | The Unix timestamp (in seconds) of when the chat completion was created. |
model | string | The model used for the chat completion. |
service_tier | string or null | The service tier used for processing the request. This field is only included if the service_tier parameter is specified in the request. |
system_fingerprint | string | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism. |
object | string | The object type, which is always chat.completion . |
usage | object | Usage statistics for the completion request. |
Example
{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1677652288,
"model": "gpt-4o-mini",
"system_fingerprint": "fp_44709d6fcb",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "\n\nHello there, how may I assist you today?"
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 9,
"completion_tokens": 12,
"total_tokens": 21
}
}
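A sketch of consuming this object in Python, using a dict with the same shape as the example above (the field names are as documented; the helper name is ours):

```python
# Parsed chat completion object, shaped like the example above.
completion = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [{
        "index": 0,
        "message": {"role": "assistant",
                    "content": "\n\nHello there, how may I assist you today?"},
        "logprobs": None,
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21},
}

def first_reply(completion: dict) -> tuple[str, int]:
    # Take the first choice (there is exactly one unless n > 1).
    choice = completion["choices"][0]
    if choice["finish_reason"] != "stop":
        # e.g. "length" (hit max_tokens) or "tool_calls"
        raise ValueError(f"unexpected finish_reason: {choice['finish_reason']}")
    return choice["message"]["content"].strip(), completion["usage"]["total_tokens"]

text, total = first_reply(completion)
```

Checking finish_reason before using the content guards against truncated ("length") or tool-calling replies.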
The chat completion chunk object
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
Field | Type | Description |
---|---|---|
id | string | A unique identifier for the chat completion. Each chunk has the same ID. |
choices | array | A list of chat completion choices. Can contain more than one element if n is greater than 1. Can also be empty for the last chunk if you set stream_options: {"include_usage": true}. |
created | integer | The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp. |
model | string | The model used to generate the completion. |
service_tier | string or null | The service tier used for processing the request. This field is only included if the service_tier parameter is specified in the request. |
system_fingerprint | string | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism. |
object | string | The object type, which is always chat.completion.chunk . |
usage | object | An optional field that will only be present when you set stream_options: {"include_usage": true} in your request. When present, it contains a null value except for the last chunk which contains the token usage statistics for the entire request. |
Example
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":"Hello"},"logprobs":null,"finish_reason":null}]}
....
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
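The chunks above can be reassembled client-side. A minimal Python sketch, assuming the "data:" lines of the SSE stream have already been parsed into dicts:

```python
# Reassemble the full assistant message from a sequence of
# chat.completion.chunk objects (already parsed into Python dicts).
def assemble(chunks) -> str:
    parts = []
    for chunk in chunks:
        if not chunk["choices"]:
            continue  # the usage-only final chunk has an empty choices list
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

# Simplified stand-ins for the chunk objects shown above.
chunks = [
    {"choices": [{"index": 0, "delta": {"role": "assistant", "content": ""},
                  "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {"content": "Hello"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {"content": " there"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]},
]
```

The first chunk carries the role, intermediate chunks carry content fragments, and the final chunk has an empty delta with finish_reason set.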
Images
Given a prompt and/or an input image, the model will generate a new image.
Related guide: Image generation
Create image
POST https://api.rockapi.ru/openai/v1/images/generations
Creates an image given a prompt.
Request body
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
prompt | string | Required | - | A text description of the desired image(s). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3. |
model | string | Optional | dall-e-2 | The model to use for image generation. Defaults to dall-e-2. |
n | integer | Optional | 1 | The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported. Defaults to 1. |
quality | string | Optional | - | The quality of the image that will be generated. hd creates images with finer details and greater consistency across the image. This param is only supported for dall-e-3. |
response_format | string | Optional | url | The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. Defaults to url. |
size | string | Optional | 1024x1024 | The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3. Defaults to 1024x1024. |
style | string | Optional | - | The style of the generated images. Must be one of vivid or natural. vivid causes the model to lean towards generating hyper-real and dramatic images. natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for dall-e-3. |
user | string | Optional | - | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. |
Returns
Returns a list of image objects.
Example request
curl https://api.rockapi.ru/openai/v1/images/generations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $ROCKAPI_API_KEY" \
-d '{
"model": "dall-e-3",
"prompt": "A cute baby sea otter",
"n": 1,
"size": "1024x1024"
}'
Response
{
"created": 1589478378,
"data": [
{
"url": "https://..."
},
{
"url": "https://..."
}
]
}
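A Python sketch of saving one entry from the data array, covering both response formats; the file path and stand-in bytes are illustrative:

```python
import base64
import os
import tempfile

# Save one item from the response's "data" array. With response_format
# "b64_json" the image bytes are inline; with the default "url" the
# link must be downloaded within 60 minutes.
def save_image(item: dict, path: str) -> None:
    if "b64_json" in item:
        with open(path, "wb") as f:
            f.write(base64.b64decode(item["b64_json"]))
    elif "url" in item:
        raise NotImplementedError(
            "download item['url'] instead, e.g. with urllib.request.urlretrieve")

# Demonstrate with stand-in bytes, not a real generated image.
fake_png = b"\x89PNG fake bytes"
path = os.path.join(tempfile.gettempdir(), "otter_example.png")
save_image({"b64_json": base64.b64encode(fake_png).decode()}, path)
```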
Create image edit
- Coming Soon
POST https://api.rockapi.ru/openai/v1/images/edits
Creates an edited or extended image given an original image and a prompt.
Request body
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
image | file | Required | - | The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask. |
prompt | string | Required | - | A text description of the desired image(s). The maximum length is 1000 characters. |
mask | file | Optional | - | An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image. |
model | string | Optional | dall-e-2 | The model to use for image generation. Only dall-e-2 is supported at this time. |
n | integer | Optional | 1 | The number of images to generate. Must be between 1 and 10. Defaults to 1. |
size | string | Optional | 1024x1024 | The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. Defaults to 1024x1024. |
response_format | string | Optional | url | The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. Defaults to url. |
user | string | Optional | - | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. |
Returns
Returns a list of image objects.
Example request
curl https://api.rockapi.ru/openai/v1/images/edits \
-H "Authorization: Bearer $ROCKAPI_API_KEY" \
-F image="@otter.png" \
-F mask="@mask.png" \
-F prompt="A cute baby sea otter wearing a beret" \
-F n=2 \
-F size="1024x1024"
Response
{
"created": 1589478378,
"data": [
{
"url": "https://..."
},
{
"url": "https://..."
}
]
}
Create image variation
- Coming Soon
POST https://api.rockapi.ru/openai/v1/images/variations
Creates a variation of a given image.
Request body
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
image | file | Required | - | The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square. |
model | string | Optional | dall-e-2 | The model to use for image generation. Only dall-e-2 is supported at this time. |
n | integer | Optional | 1 | The number of images to generate. Must be between 1 and 10. Defaults to 1. |
response_format | string | Optional | url | The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. Defaults to url. |
size | string | Optional | 1024x1024 | The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. Defaults to 1024x1024. |
user | string | Optional | - | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. |
Returns
Returns a list of image objects.
Example request
curl https://api.rockapi.ru/openai/v1/images/variations \
-H "Authorization: Bearer $ROCKAPI_API_KEY" \
-F image="@otter.png" \
-F n=2 \
-F size="1024x1024"
Response
{
"created": 1589478378,
"data": [
{
"url": "https://..."
},
{
"url": "https://..."
}
]
}
The image object
Represents the URL or the content of an image generated by the OpenAI API.
Name | Type | Description |
---|---|---|
b64_json | string | The base64-encoded JSON of the generated image, if response_format is b64_json. |
url | string | The URL of the generated image, if response_format is url (default). |
revised_prompt | string | The prompt that was used to generate the image, if there was any revision to the prompt. |
Example
{
"url": "...",
"revised_prompt": "..."
}
Audio
Learn how to turn audio into text or text into audio.
Related guide: Speech to text
Create speech
POST https://api.rockapi.ru/openai/v1/audio/speech
Generates audio from the input text.
Request body
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
model | string | Required | - | One of the available TTS models: tts-1 or tts-1-hd. |
input | string | Required | - | The text to generate audio for. The maximum length is 4096 characters. |
voice | string | Required | - | The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer. Previews of the voices are available in the Text to speech guide. |
response_format | string | Optional | mp3 | The format of the generated audio. Supported formats are mp3, opus, aac, flac, wav, and pcm. Defaults to mp3. |
speed | number | Optional | 1.0 | The speed of the generated audio. Select a value from 0.25 to 4.0. 1.0 is the default. |
Returns
The audio file content.
Example request
curl https://api.rockapi.ru/openai/v1/audio/speech \
-H "Authorization: Bearer $ROCKAPI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "tts-1",
"input": "The quick brown fox jumped over the lazy dog.",
"voice": "alloy"
}' \
--output speech.mp3
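The same request can be built with the Python standard library. A sketch using the endpoint and headers shown above; sending is left as a comment because the response body is raw audio bytes:

```python
import json
import urllib.request

API_URL = "https://api.rockapi.ru/openai/v1/audio/speech"

def build_speech_request(text: str, api_key: str,
                         speed: float = 1.0) -> urllib.request.Request:
    # The endpoint caps input at 4096 characters.
    if len(text) > 4096:
        raise ValueError("input exceeds 4096 characters")
    body = {"model": "tts-1", "input": text, "voice": "alloy", "speed": speed}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Sending returns raw audio bytes, written straight to disk:
#   req = build_speech_request("Hello!", "<your API key>")
#   with urllib.request.urlopen(req) as resp, open("speech.mp3", "wb") as f:
#       f.write(resp.read())
```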
Create transcription
POST https://api.rockapi.ru/openai/v1/audio/transcriptions
Transcribes audio into the input language.
Request body
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
file | file | Required | - | The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm. |
model | string | Required | - | ID of the model to use. Only whisper-1 (which is powered by our open source Whisper V2 model) is currently available. |
language | string | Optional | - | The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency. |
prompt | string | Optional | - | An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language. |
response_format | string | Optional | json | The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt. Defaults to json. |
temperature | number | Optional | - | The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit. |
timestamp_granularities | array | Optional | - | The timestamp granularities to populate for this transcription. response_format must be set to verbose_json to use timestamp granularities. Either or both of these options are supported: word or segment. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency. |
Returns
The transcription object or a verbose transcription object.
Example request
curl https://api.rockapi.ru/openai/v1/audio/transcriptions \
-H "Authorization: Bearer $ROCKAPI_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@/path/to/file/audio.mp3" \
-F model="whisper-1"
Response
{
"text": "Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100, a 1,000 times bigger. This is a place where you can get to do that."
}
Create translation
POST https://api.rockapi.ru/openai/v1/audio/translations
Translates audio into English.
Request body
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
file | file | Required | - | The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm. |
model | string | Required | - | ID of the model to use. Only whisper-1 (which is powered by our open source Whisper V2 model) is currently available. |
prompt | string | Optional | - | An optional text to guide the model's style or continue a previous audio segment. The prompt should be in English. |
response_format | string | Optional | json | The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt. Defaults to json. |
temperature | number | Optional | - | The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit. |
Returns
The translated text.
Example request
curl https://api.rockapi.ru/openai/v1/audio/translations \
-H "Authorization: Bearer $ROCKAPI_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@/path/to/file/german.m4a" \
-F model="whisper-1"
Response
{
"text": "Hello, my name is Wolfgang and I come from Germany. Where are you heading today?"
}
The transcription object (JSON)
Represents a transcription response returned by the model, based on the provided input.
Field | Type | Description |
---|---|---|
text | string | The transcribed text. |
Example
{
"text": "Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100, a 1,000 times bigger. This is a place where you can get to do that."
}
The transcription object (Verbose JSON)
Represents a verbose JSON transcription response returned by the model, based on the provided input.
Field | Type | Description |
---|---|---|
task | string | The task type. |
language | string | The language of the input audio. |
duration | number | The duration of the input audio, in seconds. |
text | string | The transcribed text. |
segments | array | Segments of the transcribed text and their corresponding details. |
words | array | Extracted words and their corresponding timestamps. |
Example
{
"task": "transcribe",
"language": "english",
"duration": 8.470000267028809,
"text": "The beach was a popular spot on a hot summer day. People were swimming in the ocean, building sandcastles, and playing beach volleyball.",
"segments": [
{
"id": 0,
"seek": 0,
"start": 0.0,
"end": 3.319999933242798,
"text": " The beach was a popular spot on a hot summer day.",
"tokens": [
50364, 440, 7534, 390, 257, 3743, 4008, 322, 257, 2368, 4266, 786, 13, 50530
],
"temperature": 0.0,
"avg_logprob": -0.2860786020755768,
"compression_ratio": 1.2363636493682861,
"no_speech_prob": 0.00985979475080967
},
...
]
}
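Segment-level fields such as avg_logprob and no_speech_prob can be used to screen a transcript. A sketch with illustrative thresholds (the cutoffs are ours, not part of the API):

```python
# Flag segments that are likely non-speech or low confidence, based on
# the per-segment fields of a verbose_json transcription. The cutoff
# values below are illustrative defaults, not API-defined.
def suspect_segments(segments, no_speech_cutoff=0.5, logprob_cutoff=-1.0):
    flagged = []
    for seg in segments:
        if (seg["no_speech_prob"] > no_speech_cutoff
                or seg["avg_logprob"] < logprob_cutoff):
            flagged.append((seg["start"], seg["end"], seg["text"].strip()))
    return flagged

# Simplified stand-ins for the segment objects shown above.
segments = [
    {"start": 0.0, "end": 3.32, "text": " The beach was a popular spot.",
     "avg_logprob": -0.286, "no_speech_prob": 0.0099},
    {"start": 3.32, "end": 5.0, "text": " [noise]",
     "avg_logprob": -1.7, "no_speech_prob": 0.82},
]
```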
Embeddings
Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
Related guide: Embeddings
Create embeddings
POST https://api.rockapi.ru/openai/v1/embeddings
Creates an embedding vector representing the input text.
Request body
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
input | string or array | Required | - | Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for text-embedding-ada-002), cannot be an empty string, and any array must be 2048 dimensions or less. |
model | string | Required | - | ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them. |
encoding_format | string | Optional | float | The format to return the embeddings in. Can be either float or base64. |
dimensions | integer | Optional | - | The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models. |
user | string | Optional | - | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. |
Returns
A list of embedding objects.
Example request
curl https://api.rockapi.ru/openai/v1/embeddings \
-H "Authorization: Bearer $ROCKAPI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"input": "The food was delicious and the waiter...",
"model": "text-embedding-ada-002",
"encoding_format": "float"
}'
Response
{
"object": "list",
"data": [
{
"object": "embedding",
"embedding": [
0.0023064255,
-0.009327292,
... (1536 floats total for ada-002)
-0.0028842222
],
"index": 0
}
],
"model": "text-embedding-ada-002",
"usage": {
"prompt_tokens": 8,
"total_tokens": 8
}
}
The embedding object
Represents an embedding vector returned by the embeddings endpoint.
Field | Type | Description |
---|---|---|
index | integer | The index of the embedding in the list of embeddings. |
embedding | array | The embedding vector, which is a list of floats. The length of the vector depends on the model, as listed in the embedding guide. |
object | string | The object type, which is always "embedding". |
Example
{
"object": "embedding",
"embedding": [
0.0023064255,
-0.009327292,
... (1536 floats total for ada-002)
-0.0028842222
],
"index": 0
}
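Embedding vectors from this endpoint are typically compared with cosine similarity. A minimal sketch (the example vectors are toy values, not real embeddings):

```python
import math

# Cosine similarity between two embedding vectors: the dot product
# divided by the product of the vector norms. Ranges from -1 to 1;
# higher means more semantically similar inputs.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

v1 = [0.1, 0.2, 0.3]   # toy vectors; real ones have e.g. 1536 dimensions
v2 = [0.1, 0.2, 0.3]
v3 = [-0.3, 0.1, -0.2]
print(cosine_similarity(v1, v2))  # ≈ 1.0 (identical direction)
```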
Moderations
Given some input text, outputs whether the model classifies it as potentially harmful across several categories.
Related guide: Moderations
Create moderation
POST https://api.rockapi.ru/openai/v1/moderations
Classifies if text is potentially harmful.
Request body
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
input | string or array | Required | - | The input text to classify. |
model | string | Optional | text-moderation-latest | Two content moderation models are available: text-moderation-stable and text-moderation-latest. The default is text-moderation-latest, which will be automatically upgraded over time. This ensures you are always using our most accurate model. If you use text-moderation-stable, we will provide advance notice before updating the model. Accuracy of text-moderation-stable may be slightly lower than for text-moderation-latest. |
Returns
A moderation object.
Example request
curl https://api.rockapi.ru/openai/v1/moderations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $ROCKAPI_API_KEY" \
-d '{
"input": "I want to kill them."
}'
Response
{
"id": "modr-XXXXX",
"model": "text-moderation-005",
"results": [
{
"flagged": true,
"categories": {
"sexual": false,
"hate": false,
"harassment": false,
"self-harm": false,
"sexual/minors": false,
"hate/threatening": false,
"violence/graphic": false,
"self-harm/intent": false,
"self-harm/instructions": false,
"harassment/threatening": true,
"violence": true
},
"category_scores": {
"sexual": 1.2282071e-06,
"hate": 0.010696256,
"harassment": 0.29842457,
"self-harm": 1.5236925e-08,
"sexual/minors": 5.7246268e-08,
"hate/threatening": 0.0060676364,
"violence/graphic": 4.435014e-06,
"self-harm/intent": 8.098441e-10,
"self-harm/instructions": 2.8498655e-11,
"harassment/threatening": 0.63055265,
"violence": 0.99011886
}
}
]
}
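A sketch of extracting the flagged category names from a moderation result with this shape (the helper name is ours):

```python
# Collect the names of the categories the model flagged, from the
# first (and typically only) entry in "results".
def flagged_categories(moderation: dict) -> list[str]:
    result = moderation["results"][0]
    if not result["flagged"]:
        return []
    return sorted(name for name, hit in result["categories"].items() if hit)

# Simplified stand-in for the response shown above.
moderation = {
    "results": [{
        "flagged": True,
        "categories": {
            "violence": True,
            "harassment/threatening": True,
            "hate": False,
        },
    }]
}
print(flagged_categories(moderation))  # ['harassment/threatening', 'violence']
```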
The moderation object
Represents whether a given text input is potentially harmful.
Field | Type | Description |
---|---|---|
id | string | The unique identifier for the moderation request. |
model | string | The model used to generate the moderation results. |
results | array | A list of moderation objects. |
Example
{
"id": "modr-XXXXX",
"model": "text-moderation-005",
"results": [
{
"flagged": true,
"categories": {
"sexual": false,
"hate": false,
"harassment": false,
"self-harm": false,
"sexual/minors": false,
"hate/threatening": false,
"violence/graphic": false,
"self-harm/intent": false,
"self-harm/instructions": false,
"harassment/threatening": true,
"violence": true
},
"category_scores": {
"sexual": 1.2282071e-06,
"hate": 0.010696256,
"harassment": 0.29842457,
"self-harm": 1.5236925e-08,
"sexual/minors": 5.7246268e-08,
"hate/threatening": 0.0060676364,
"violence/graphic": 4.435014e-06,
"self-harm/intent": 8.098441e-10,
"self-harm/instructions": 2.8498655e-11,
"harassment/threatening": 0.63055265,
"violence": 0.99011886
}
}
]
}