https://api.zatomic.ai
The API is designed around a RESTful approach. Our API has predictable, resource-oriented URLs, uses standard HTTP status codes and verbs, accepts only JSON for request bodies, and returns only JSON for all responses.
All API requests must be made over HTTPS. API calls made over plain HTTP are not supported.
While we will support multiple versions and backwards-compatibility in the future, currently there is only one version of the API, v1. All API requests require the version in the URL.
Future versions of the API will follow the vN naming convention, where N is the next whole number after the previous version: after v1 comes v2, then v3, v4, and so on.
New versions of the API will be created only if there are breaking changes to previous versions. Breaking changes are defined as changes to request and response objects, property name changes, and removal of endpoints.
Please note: adding properties to requests and responses and adding endpoints do not result in new API versions, as these do not affect existing API clients.
All API changes, regardless of their type of change, can be found in our API changelog.
https://api.zatomic.ai/v1
An API key is required to authenticate all requests to the API. You can view and manage your API keys in your Zatomic account.
Authentication to the API is handled by either setting the X-Api-Key request header or by using the api-key querystring parameter. If both are given, the api-key querystring parameter will be used.
API calls will fail authentication if the API key is missing or invalid.
// With request header
curl -X GET https://api.zatomic.ai/v1/prompts \
-H "X-Api-Key: {API key}"
// With querystring parameter
GET https://api.zatomic.ai/v1/prompts?api-key={API key}
All API requests must be made in the context of a workspace; therefore, in addition to an API key, a workspace ID is also required to call the API. You can find your workspace ID in your Zatomic account.
Similar to the API key, sending the workspace ID is done by either setting the X-Workspace-Id request header or by using the workspace-id querystring parameter. If both are given, the workspace-id querystring parameter will be used.
API requests made without a workspace ID will fail.
// With request header
curl -X GET https://api.zatomic.ai/v1/prompts \
-H "X-Workspace-Id: {workspace ID}"
// With querystring parameter
GET https://api.zatomic.ai/v1/prompts?workspace-id={workspace ID}
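A complete request therefore carries both credentials. A minimal sketch using the header approach (the placeholder values are hypothetical and must be replaced with your own):

```shell
# Both credentials sent as request headers (recommended over querystring parameters).
curl -X GET https://api.zatomic.ai/v1/prompts \
  -H "X-Api-Key: {API key}" \
  -H "X-Workspace-Id: {workspace ID}"
```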
Zatomic uses standard HTTP status codes to indicate success or failure of API requests. In general, 2xx codes represent success, 4xx codes indicate a bad request (such as an invalid API key), and 5xx codes mean something went wrong on our end (which should be rare).
HTTP Status Codes | | |
---|---|---|
200 | OK | The request was successful. |
201 | Created | The resource was created. |
204 | No Content | The resource was deleted. |
400 | Bad Request | The request was unacceptable. |
401 | Unauthorized | Invalid API key. |
403 | Forbidden | The API key doesn't have permissions, an account limit was reached, or the account is not on a paid plan. |
404 | Not Found | The requested resource doesn't exist. |
429 | Too Many Requests | Too many requests hit the API too quickly. |
500 | Internal Server Error | Something went wrong on Zatomic's end. |
{
"status_code": 401,
"title": "Unauthorized",
"message": "Invalid API key.",
"trace_id": "",
"event_id": null
}
To ensure fair usage of our API, every account is assigned an API rate limit that caps the maximum number of requests per second, based on its subscription plan. You can find your API rate limit in your Zatomic account.
If you exceed your rate limit, a 429 error code will be returned.
{
"status_code": 429,
"title": "Too Many Requests",
"message": "Rate limit exceeded.",
"trace_id": "",
"event_id": null
}
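A common client-side pattern for handling 429 responses is to retry with exponential backoff. A minimal sketch of the idea (the function name and retry counts are our own, not part of the API):

```shell
# Retry a command with exponential backoff, one way to handle 429 responses.
# "$@" is the command to run; it should exit non-zero on failure
# (for curl, pass -f so HTTP errors such as 429 produce a non-zero exit).
with_backoff() {
  attempt=0
  max_attempts=5
  delay=1
  while [ "$attempt" -lt "$max_attempts" ]; do
    if "$@"; then
      return 0
    fi
    attempt=$((attempt + 1))
    sleep "$delay"
    delay=$((delay * 2))   # 1s, 2s, 4s, ...
  done
  return 1
}
```

Usage: `with_backoff curl -f -s -X GET https://api.zatomic.ai/v1/prompts -H "X-Api-Key: {API key}" -H "X-Workspace-Id: {workspace ID}"`.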
Zatomic accounts have a set token limit to use every month. Token usage is affected by certain features, such as generating prompts, calculating prompt scores, analyzing a prompt balance, and generating heatmaps. You can find your token limit in your Zatomic account.
Token limits reset at the start of each billing period. If you exceed your token limit in a given billing period, a 403 error code will be returned. You can inspect the following response headers for additional details about your token limit and usage.
Token limits do not apply when using your own AI provider.
Tokens Response Headers | |
---|---|
X-Tokens-Limit | The monthly token limit for your account. |
X-Tokens-Remaining | Number of tokens remaining for the current month. |
X-Tokens-Used | Number of tokens used for a given request, if applicable. |
X-Tokens-Duration | The time it took (in seconds) to generate or use the tokens for a given request, if applicable. |
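These headers can be read straight from a header dump. A small sketch that extracts `X-Tokens-Remaining` from captured response headers (the sample header values here are hypothetical):

```shell
# Extract a token header value from a raw HTTP header dump.
# In practice: headers=$(curl -sD - -o /dev/null https://api.zatomic.ai/v1/... )
headers='HTTP/1.1 200 OK
X-Tokens-Limit: 100000
X-Tokens-Remaining: 98250
X-Tokens-Used: 1750'

tokens_remaining=$(printf '%s\n' "$headers" \
  | awk -F': ' 'tolower($1) == "x-tokens-remaining" { print $2 }' \
  | tr -d '\r')
echo "$tokens_remaining"
```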
{
"status_code": 403,
"title": "Forbidden",
"message": "Token limit reached for billing period.",
"trace_id": "",
"event_id": null
}
Our API supports the concept of expands, which lets you retrieve related data for a given object in the same request via the expand querystring parameter. You can request multiple expands at once by chaining them together, separated by commas.
If an endpoint supports expands, it will be noted as such in its section below.
// URL with single expand
https://api.zatomic.ai/v1/prompts/{promptId}?expand=versions
// URL with multiple expands
https://api.zatomic.ai/v1/prompts/{promptId}?expand=versions,scoring,balance
Our API supports the OpenAPI specification to provide you with a standardized, machine-readable way to understand and interact with our API. This allows seamless integration with tools like Swagger, Postman, and AI agents, making it easier for you to explore, test, and implement our API efficiently.
https://api.zatomic.ai/v1/openapi.json
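You can download the spec and import it into a tool like Swagger UI or Postman, or inspect it from the command line. A hypothetical sketch using python3 to list the documented endpoint paths:

```shell
# Fetch the OpenAPI spec and print its endpoint paths (requires python3).
curl -s https://api.zatomic.ai/v1/openapi.json \
  | python3 -c 'import json, sys; print("\n".join(json.load(sys.stdin)["paths"]))'
```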
Prompts are the main object in the Zatomic platform, and are workspace-specific. Prompts act as a container for their prompt versions and can be auto-generated from a use case.
Prompts can be expanded to include all of their versions, as well as the scoring, balance, and heatmap for each version.
Expands | |
---|---|
versions | Retrieves all versions of the prompt. |
scoring | Retrieves the scoring object for each version of the prompt. |
balance | Retrieves the balance object for each version of the prompt. |
heatmap | Retrieves the heatmap object for each version of the prompt. |
GET https://api.zatomic.ai/v1/prompts
POST https://api.zatomic.ai/v1/prompts
POST https://api.zatomic.ai/v1/prompts/generate
GET https://api.zatomic.ai/v1/prompts/{promptId}
PATCH https://api.zatomic.ai/v1/prompts/{promptId}
DELETE https://api.zatomic.ai/v1/prompts/{promptId}
Properties | |
---|---|
prompt_id (string) | Unique ID of the prompt. |
workspace_id (string) | The ID of the workspace that contains the prompt. |
created (datetime) | UTC timestamp for when the prompt was created. |
created_by (string) | The name of the user who created the prompt or the name of the API key that created the prompt. |
updated (datetime) | UTC timestamp for when the prompt was updated. |
updated_by (string) | The name of the user who updated the prompt or the name of the API key that updated the prompt. |
name (string) | Name of the prompt. |
use_case (string, nullable) | Use case description for the prompt. |
versions (list of version objects) | List of versions for the prompt; can be empty. If the prompt has versions, by default the list will include only the prompt's primary version. If the versions expand is used, the list will include all versions for the prompt. |
{
"prompt_id": "prm_2qRzu8geIvfudcJTwP0pur4TbMJ",
"workspace_id": "wrk_2nKxr84BuEQIpUl3evP3XYyTxdo",
"created": "2024-12-19T20:33:15.7387Z",
"created_by": "Han Solo",
"updated": "2024-12-19T20:33:15.971617Z",
"updated_by": "Han Solo",
"name": "Prompt name",
"use_case": "Use case description.",
"versions": [
{
"version_id": "ver_2qRzu8qzlNOMhTrini2EKCDh5r6",
"prompt_id": "prm_2qRzu8geIvfudcJTwP0pur4TbMJ",
"created": "2024-12-19T20:33:16.076891Z",
"created_by": "Han Solo",
"updated": "2024-12-19T20:39:13.026997Z",
"updated_by": "Han Solo",
"name": null,
"is_primary": true,
"content": "You are a knowledgeable and friendly assistant...",
"variables": [
"{{variable1}}",
"[[variable2]]"
],
"token_info": {
"model": "gpt-4o",
"token_count": 0,
"token_cost": 0.0
},
"scoring": {
"version_timestamp": "2024-12-19T20:33:15.971617Z",
"scoring_timestamp": "2024-12-19T20:33:15.971617Z",
"overall_score": 0,
"rating": "Excellent",
"summary": {
"strengths": "The strengths of the prompt.",
"areas_for_improvement": "Areas where the prompt could improve.",
"overall_feedback": "Overall feedback for the prompt."
},
"criteria": {
"criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
"name": "Default",
"criterion_results": [
{
"slug": "criterion_slug",
"score": 0,
"weight": 0,
"weighted_score": 0,
"feedback": "Specific feedback for the criterion."
}
]
}
},
"balance": {
"version_timestamp": "2024-12-19T20:33:15.971617Z",
"balance_timestamp": "2024-12-19T20:33:15.971617Z",
"summary": {
"overall_feedback": "Overall feedback based on the prompt balance.",
"recommendations": "Recommendations to improve the prompt balance."
},
"categories": [
{
"category": "The category name,",
"feedback": "Feedback about the balance of the category in the prompt.",
"phrase_count": 0,
"phrase_percent": 0.0,
"distribution": "The category distribution."
}
],
"phrases": [
{
"phrase": "The prompt phrase.",
"category": "The prompt category.",
"reason": "Reason the phrase was assigned to its category."
}
]
},
"heatmap": {
"version_timestamp": "2024-12-19T20:33:15.971617Z",
"heatmap_timestamp": "2024-12-19T20:33:15.971617Z",
"summary": "Summary of the heatmap.",
"phrases": [
{
"phrase": "The prompt phrase.",
"attention_score": 1,
"color": "The color level assigned to the phrase.",
"reason": "The reason why the AI model assigned the phrase its attention score."
}
]
}
}
]
}
Creating a new prompt requires only a name; everything else is optional. If only the name is given, the prompt is created without any versions. If content is given, the prompt is created along with its first version, which becomes the prompt's primary version by default.
If a set of variables is given, the keys will be replaced by their values in the content before creating the prompt version.
A successful call returns a 201 status code with a response that contains the prompt object.
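The variable substitution described above amounts to a search-and-replace over the content before the version is stored. A local sketch of the same idea (illustrative only, not the server implementation; the variable names and values are made up):

```shell
# Illustrative only: how {{ }} / [[ ]] keys are replaced by their values.
content='Hello {{name}}, welcome to [[place]].'

substituted=$(printf '%s' "$content" \
  | sed -e 's/{{name}}/Han/g' \
        -e 's/\[\[place\]\]/Tatooine/g')
echo "$substituted"
```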
POST https://api.zatomic.ai/v1/prompts
Request Properties | |
---|---|
name (string) | Name of the prompt. |
use_case (string, optional) | Use case description for the prompt. |
content (string, optional) | Content for the prompt. |
version_name (string, optional) | Name for the prompt version. |
variables (set of key-value pairs, optional) | Set of template variables for the prompt, in key-value pair format. Variables can use either double curly braces {{ }} or double square brackets [[ ]]. |
{
"name": "Prompt name",
"use_case": "Use case description.",
"content": "You are a knowledgeable and friendly assistant...",
"version_name": "Version name",
"variables": {
"{{variable1}}": "variable 1",
"[[variable2]]": "variable 2"
}
}
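Putting it together, a create-prompt request might look like the following sketch (placeholder credentials; the body is the example above):

```shell
curl -X POST https://api.zatomic.ai/v1/prompts \
  -H "X-Api-Key: {API key}" \
  -H "X-Workspace-Id: {workspace ID}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "Prompt name",
        "content": "You are a knowledgeable and friendly assistant...",
        "variables": { "{{variable1}}": "variable 1" }
      }'
```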
This endpoint allows you to update either the name of the prompt, its use case, or both. If you need to update the contents of a prompt, that can be done by updating a prompt version.
A successful call returns a response that contains the updated prompt object.
PATCH https://api.zatomic.ai/v1/prompts/{promptId}
// Example
PATCH https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ
Request Properties | |
---|---|
name (string, optional) | Name of the prompt. |
use_case (string, optional) | Use case description for the prompt. |
{
"name": "Prompt name",
"use_case": "Use case description."
}
Permanently deletes a prompt and all of its versions. This action cannot be undone.
A successful call returns a 204 status code.
DELETE https://api.zatomic.ai/v1/prompts/{promptId}
// Example
DELETE https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ
Retrieves a prompt from the given workspace.
A successful call returns a response that contains the prompt object.
Expands | |
---|---|
versions | Retrieves all versions of the prompt. |
scoring | Retrieves the scoring object for each version of the prompt. |
balance | Retrieves the balance object for each version of the prompt. |
heatmap | Retrieves the heatmap object for each version of the prompt. |
GET https://api.zatomic.ai/v1/prompts/{promptId}
// Examples
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ?expand=versions
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ?expand=scoring
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ?expand=balance
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ?expand=heatmap
Returns the list of all prompts in a given workspace, sorted alphabetically by prompt name.
A successful call returns a response that contains a list of prompt objects.
Expands | |
---|---|
versions | Retrieves all versions of each prompt in the list. |
scoring | Retrieves the scoring object for each version of each prompt in the list. |
balance | Retrieves the balance object for each version of each prompt in the list. |
heatmap | Retrieves the heatmap object for each version of each prompt in the list. |
GET https://api.zatomic.ai/v1/prompts
// Examples
GET https://api.zatomic.ai/v1/prompts?expand=versions
GET https://api.zatomic.ai/v1/prompts?expand=scoring
GET https://api.zatomic.ai/v1/prompts?expand=balance
GET https://api.zatomic.ai/v1/prompts?expand=heatmap
[
{
"prompt_id": "prm_2qJhUNbuEg3J8dvw39jgL3UEJKS",
"workspace_id": "wrk_2nKxr84BuEQIpUl3evP3XYyTxdo",
"created": "2024-12-16T22:03:19.30308Z",
"created_by": "Han Solo",
"updated": "2024-12-16T22:03:19.568618Z",
"updated_by": "Han Solo",
"name": "Prompt name",
"use_case": "Use case description.",
"versions": []
}
]
You can generate a prompt by sending in a use case description to this endpoint. You can then use the generated prompt as content to create a new prompt or a specific prompt version.
For paid accounts, you can also add a settings object to the request that specifies which AI provider and model you want to use for the prompt generation. If settings is given in the request, the provider_id and model are required (note: the model must be supported by the provider).
If the given provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.
You can find your provider IDs and models in your Zatomic account.
A successful call returns a response with an auto-generated content property in Markdown format.
POST https://api.zatomic.ai/v1/prompts/generate
Request Properties | |
---|---|
use_case (string) | Use case description for the prompt. |
settings (object, optional) | Settings for the AI provider to use. Properties for the object: provider_id (string, required), model (string, required), aws_region (string, required for Amazon Bedrock providers). |
{
"use_case": "Use case description.",
"settings": {
"provider_id": "prv_2skJ18bJRN5otmfTMyWjG3CCC7t",
"model": "amazon.nova-lite-v1:0",
"aws_region": "us-east-1"
}
}
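As a sketch, the same request over curl (placeholder credentials; the settings values are the hypothetical ones from the example body above):

```shell
curl -X POST https://api.zatomic.ai/v1/prompts/generate \
  -H "X-Api-Key: {API key}" \
  -H "X-Workspace-Id: {workspace ID}" \
  -H "Content-Type: application/json" \
  -d '{
        "use_case": "Use case description.",
        "settings": {
          "provider_id": "prv_2skJ18bJRN5otmfTMyWjG3CCC7t",
          "model": "amazon.nova-lite-v1:0",
          "aws_region": "us-east-1"
        }
      }'
```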
Response Properties | |
---|---|
content (string) | The content of the generated prompt. Will be in Markdown format. |
{
"content": "You are a knowledgeable and friendly assistant..."
}
Prompt versions, or just versions, are where the actual prompt content is stored, maintained, and analyzed. All versions belong to a parent prompt in a specific workspace. New prompt versions can be auto-generated from a use case.
Versions can be expanded to include all of their scoring, balance, and heatmap data. You can also retrieve and create scoring, balance, and heatmap objects for a version with their specific endpoints.
Expands | |
---|---|
scoring | Retrieves the scoring object for the version. |
balance | Retrieves the balance object for the version. |
heatmap | Retrieves the heatmap object for the version. |
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}
PATCH https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}
DELETE https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/scoring
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/scoring
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/balance
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/balance
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/heatmap
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/heatmap
Properties | |
---|---|
version_id (string) | Unique ID of the version. |
prompt_id (string) | The ID of the parent prompt. |
created (datetime) | UTC timestamp for when the version was created. |
created_by (string) | The name of the user who created the version or the name of the API key that created the version. |
updated (datetime) | UTC timestamp for when the version was updated. |
updated_by (string) | The name of the user who updated the version or the name of the API key that updated the version. |
name (string, nullable) | Name of the version. |
is_primary (boolean) | Flag that indicates whether this version is the prompt's primary version. |
content (string) | The content of the prompt version. |
variables (list of key-value pairs) | List of variables for the version; can be empty. Variables are "template tags", designated by either double curly braces {{ }} or double square brackets [[ ]], and can be used to create prompt templates. |
token_info (object) | Contains token data about the prompt version: the model used for counting, the token_count, and the token_cost. |
scoring (scoring object, nullable) | The scoring object for the prompt version, if scoring has been performed. |
balance (balance object, nullable) | The balance object for the prompt version, if the balance has been analyzed. |
heatmap (heatmap object, nullable) | The heatmap object for the prompt version, if the heatmap has been generated. |
{
"version_id": "ver_2qRzu8qzlNOMhTrini2EKCDh5r6",
"prompt_id": "prm_2qRzu8geIvfudcJTwP0pur4TbMJ",
"created": "2024-12-19T20:33:16.076891Z",
"created_by": "Han Solo",
"updated": "2024-12-19T20:39:13.026997Z",
"updated_by": "Han Solo",
"name": null,
"is_primary": true,
"content": "You are a knowledgeable and friendly assistant...",
"variables": [
"{{variable1}}",
"[[variable2]]"
],
"token_info": {
"model": "gpt-4o",
"token_count": 0,
"token_cost": 0.0
},
"scoring": {
"version_timestamp": "2024-12-19T20:33:15.971617Z",
"scoring_timestamp": "2024-12-19T20:33:15.971617Z",
"overall_score": 0,
"rating": "Excellent",
"summary": {
"strengths": "The strengths of the prompt.",
"areas_for_improvement": "Areas where the prompt could improve.",
"overall_feedback": "Overall feedback for the prompt."
},
"criteria": {
"criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
"name": "Default",
"criterion_results": [
{
"slug": "criterion_slug",
"score": 0,
"weight": 0,
"weighted_score": 0,
"feedback": "Specific feedback for the criterion."
}
]
}
},
"balance": {
"version_timestamp": "2024-12-19T20:33:15.971617Z",
"balance_timestamp": "2024-12-19T20:33:15.971617Z",
"summary": {
"overall_feedback": "Overall feedback based on the prompt balance.",
"recommendations": "Recommendations to improve the prompt balance."
},
"categories": [
{
"category": "The category name.",
"feedback": "Feedback about the balance of category in the prompt.",
"phrase_count": 0,
"phrase_percent": 0.0,
"distribution": "The category distribution."
}
],
"phrases": [
{
"phrase": "The prompt phrase.",
"category": "The prompt category.",
"reason": "Reason the phrase was assigned to its category."
}
]
},
"heatmap": {
"version_timestamp": "2024-12-19T20:33:15.971617Z",
"heatmap_timestamp": "2024-12-19T20:33:15.971617Z",
"summary": "Summary of the heatmap.",
"phrases": [
{
"phrase": "The prompt phrase.",
"attention_score": 1,
"color": "The color level assigned to the phrase.",
"reason": "The reason why the AI model assigned the phrase its attention score."
}
]
}
}
Creating a new prompt version requires only the prompt content; the name and any variables are optional. If a set of variables is given, the keys will be replaced by their values in the content before creating the prompt version.
A successful call returns a 201 status code with a response that contains the version object.
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions
// Example
POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions
Request Properties | |
---|---|
name (string, optional) | Name of the version. |
content (string) | Content for the version. |
variables (set of key-value pairs, optional) | Set of template variables for the version, in key-value pair format. Variables can use either double curly braces {{ }} or double square brackets [[ ]]. |
{
"name": "Version name",
"content": "You are a knowledgeable and friendly assistant...",
"variables": {
"{{variable1}}": "variable 1",
"[[variable2]]": "variable 2"
}
}
This endpoint allows you to update any combination of the prompt version's name, content, or primary flag.
A successful call returns a response that contains the updated version object.
PATCH https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}
// Example
PATCH https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6
Request Properties | |
---|---|
name (string, optional) | Name of the version. |
content (string, optional) | Content for the version. |
is_primary (boolean, optional) | Flag that determines if the version is the primary version for the prompt. |
{
"name": "Version name",
"content": "You are a knowledgeable and friendly assistant...",
"is_primary": true
}
Permanently deletes a prompt version. This action cannot be undone.
A successful call returns a 204 status code.
DELETE https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}
// Example
DELETE https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6
Retrieves a specific version of a prompt.
A successful call returns a response that contains the version object.
Expands | |
---|---|
scoring | Retrieves the scoring object for the version. |
balance | Retrieves the balance object for the version. |
heatmap | Retrieves the heatmap object for the version. |
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}
// Examples
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6?expand=scoring
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6?expand=balance
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6?expand=heatmap
Returns the list of all versions for a prompt, sorted by version updated date in descending order.
A successful call returns a response that contains a list of version objects.
Expands | |
---|---|
scoring | Retrieves the scoring object for each version in the list. |
balance | Retrieves the balance object for each version in the list. |
heatmap | Retrieves the heatmap object for each version in the list. |
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions
// Examples
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions?expand=scoring
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions?expand=balance
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions?expand=heatmap
[
{
"version_id": "ver_2qRzu8qzlNOMhTrini2EKCDh5r6",
"prompt_id": "prm_2qRzu8geIvfudcJTwP0pur4TbMJ",
"created": "2024-12-19T20:33:16.076891Z",
"created_by": "Han Solo",
"updated": "2024-12-19T20:39:13.026997Z",
"updated_by": "Han Solo",
"name": null,
"is_primary": true,
"content": "You are a knowledgeable and friendly assistant...",
"variables": [
"{{variable1}}",
"[[variable2]]"
],
"token_info": {
"model": "gpt-4o",
"token_count": 0,
"token_cost": 0.0
},
"scoring": null,
"balance": null,
"heatmap": null
}
]
Retrieves the scoring object for a specific prompt version.
A successful call returns a response that contains the scoring object.
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/scoring
// Example
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/scoring
Calculates the score for a specific version of a prompt. A successful call returns a response that contains the scoring object.
The request requires the ID of the criteria that you want to use for scoring. To get the list of criteria with their IDs and criterion slugs, use the scoring criteria list endpoint.
For paid accounts, you can also add a settings object to the request that specifies which AI provider and model you want to use for the scoring. If settings is given in the request, the provider_id and model are required (note: the model must be supported by the provider).
If the given provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.
You can find your provider IDs and models in your Zatomic account.
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/scoring
// Example
POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/scoring
Request Properties | |
---|---|
criteria_id (string) | The ID of the criteria to use for scoring. |
criterion_slugs (list of strings, optional) | The list of criterion slugs from the criteria. If none are given, all criteria in the criterion set will be used. |
settings (object, optional) | Settings for the AI provider to use. Properties for the object: provider_id (string, required), model (string, required), aws_region (string, required for Amazon Bedrock providers). |
{
"criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
"criterion_slugs": ["slug_1", "slug_2", "slug_3"],
"settings": {
"provider_id": "prv_2skJ18bJRN5otmfTMyWjG3CCC7t",
"model": "amazon.nova-lite-v1:0",
"aws_region": "us-east-1"
}
}
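A scoring request over curl might look like the following sketch (placeholder credentials; the IDs are the example values used throughout this page):

```shell
curl -X POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/scoring \
  -H "X-Api-Key: {API key}" \
  -H "X-Workspace-Id: {workspace ID}" \
  -H "Content-Type: application/json" \
  -d '{ "criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC" }'
```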
Retrieves the balance object for a specific prompt version.
A successful call returns a response that contains the balance object.
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/balance
// Example
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/balance
Analyzes the balance for a specific version of a prompt. A successful call returns a response that contains the balance object.
The request requires an include_examples flag, which determines if the prompt version's examples should be included in the balance analysis. Including examples can add significant time when analyzing the balance of a version.
For paid accounts, you can also add a settings object to the request that specifies which AI provider and model you want to use for the balance analysis. If settings is given in the request, the provider_id and model are required (note: the model must be supported by the provider).
If the given provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.
You can find your provider IDs and models in your Zatomic account.
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/balance
// Example
POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/balance
Request Properties | |
---|---|
include_examples (boolean) | Flag to include the prompt version's examples in the balance analysis. |
settings (object, optional) | Settings for the AI provider to use. Properties for the object: provider_id (string, required), model (string, required), aws_region (string, required for Amazon Bedrock providers). |
{
"include_examples": false,
"settings": {
"provider_id": "prv_2skJ18bJRN5otmfTMyWjG3CCC7t",
"model": "amazon.nova-lite-v1:0",
"aws_region": "us-east-1"
}
}
Retrieves the heatmap object for a specific prompt version.
A successful call returns a response that contains the heatmap object.
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/heatmap
// Example
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/heatmap
Generates the heatmap data for a specific version of a prompt. A successful call returns a response that contains the heatmap object.
The request requires an include_examples flag, which determines if the prompt version's examples should be included in the heatmap. Including examples can add significant time when generating the heatmap data of a version.
For paid accounts, you can also add a settings object to the request that specifies which AI provider and model you want to use for the heatmap. If settings is given in the request, the provider_id and model are required (note: the model must be supported by the provider).
If the given provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.
You can find your provider IDs and models in your Zatomic account.
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/heatmap
// Example
POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/heatmap
Request Properties | |
---|---|
include_examples (boolean) | Flag to include the prompt version's examples in the heatmap. |
settings (object, optional) | Settings for the AI provider to use. Properties for the object: provider_id (string, required), model (string, required), aws_region (string, required for Amazon Bedrock providers). |
{
"include_examples": false,
"settings": {
"provider_id": "prv_2skJ18bJRN5otmfTMyWjG3CCC7t",
"model": "amazon.nova-lite-v1:0",
"aws_region": "us-east-1"
}
}
Scoring criteria is used to evaluate prompts to generate their score and rating.
GET https://api.zatomic.ai/v1/prompts/scoring/criteria
POST https://api.zatomic.ai/v1/prompts/scoring/criteria
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/generate
GET https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}
PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/generate
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset
PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset/{criterionId}
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset/{criterionId}
NOTE: While similar and sharing the same criteria_id, the scoring criteria object is different from the scoring criteria results object.
Properties | |
---|---|
criteria_id (string) | Unique ID of the scoring criteria. |
name (string) | The criteria name. |
use_case (string) | The use case for the criteria. |
criterion_set (list of criterion objects) | The set of criterion for the scoring criteria. |
{
"criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
"name": "Default",
"use_case": "The criteria use case.",
"criterion_set": [
{
"criterion_id": "scn_2tVwnAnheHa6NKuYvKcXrqDB21z",
"slug": "criterion_slug",
"label": "The label of the criterion.",
"description": "The criterion description.",
"questions": "The question or questions the criterion is trying to answer.",
"weight": 0
}
]
}
Properties | |
---|---|
criterion_id (string) | Unique ID for the criterion. |
slug (string) | The slug for the criterion. Will be unique within the criterion set. |
label (string) | The criterion label. |
description (string) | The criterion description. |
questions (string) | The question or questions the criterion is trying to answer. |
weight (integer) | The weight assigned to the criterion. |
{
"criterion_id": "scn_2tVwnAnheHa6NKuYvKcXrqDB21z",
"slug": "criterion_slug",
"label": "The label of the criterion.",
"description": "The criterion description.",
"questions": "The question or questions the criterion is trying to answer.",
"weight": 0
}
NOTE: Managing custom scoring criteria is for paid accounts only.
Creating new scoring criteria requires a name and at least one criterion in the criterion_set. For each criterion given, all fields are required.
A successful call returns a 201 status code with a response that contains the scoring criteria object.
POST https://api.zatomic.ai/v1/prompts/scoring/criteria
Request Properties

Property | Type | Description |
---|---|---|
name | string | The criteria name. |
use_case | string, optional | The use case for the criteria. |
criterion_set | list of criterion objects | The criterion objects for the criteria. Each criterion requires a slug, label, description, questions, and weight. |
{
"name": "Criteria name",
"use_case": "The criteria use case.",
"criterion_set": [
{
"slug": "criterion_slug",
"label": "The label of the criterion.",
"description": "The criterion description.",
"questions": "The question or questions the criterion is trying to answer.",
"weight": 0
}
]
}
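Before calling the create endpoint, the request body can be assembled and checked against the rules above (a name, at least one criterion, and all criterion fields present). This is an illustrative client-side sketch; the helper and constant names are our own, not part of the API.

```python
REQUIRED_CRITERION_FIELDS = {"slug", "label", "description", "questions", "weight"}

def build_create_criteria_payload(name, criterion_set, use_case=None):
    """Assemble and sanity-check a create-scoring-criteria request body."""
    if not name:
        raise ValueError("name is required")
    if not criterion_set:
        raise ValueError("criterion_set needs at least one criterion")
    for criterion in criterion_set:
        missing = REQUIRED_CRITERION_FIELDS - criterion.keys()
        if missing:
            raise ValueError(f"criterion missing fields: {sorted(missing)}")
    payload = {"name": name, "criterion_set": list(criterion_set)}
    if use_case is not None:
        payload["use_case"] = use_case
    return payload
```

The returned dictionary can then be serialized with json.dumps and sent as the POST body.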
NOTE: Managing custom scoring criteria is for paid accounts only.
This endpoint allows you to update a scoring criteria.
A successful call returns a response that contains the updated scoring criteria object.
PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}
// Example
PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC
Request Properties

Property | Type | Description |
---|---|---|
name | string | Name of the scoring criteria. |
use_case | string, optional | Use case of the scoring criteria. |
{
"name": "Criteria name",
"use_case": "The criteria use case."
}
NOTE: Managing custom scoring criteria is for paid accounts only.
Permanently deletes a scoring criteria and all of its criterion. This action cannot be undone.
A successful call returns a 204 status code.
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}
// Example
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC
Retrieves a scoring criteria from the given workspace.
A successful call returns a response that contains the scoring criteria object.
GET https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}
// Example
GET https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC
Returns the list of all scoring criteria in a given workspace, sorted alphabetically by criteria name. A successful call returns a response that contains a list of scoring criteria objects.
For paid accounts, this endpoint also returns the default system criteria named Default, which will be the last criteria in the list.
For non-paid accounts, this endpoint returns a list that contains just the default system criteria named Default.
GET https://api.zatomic.ai/v1/prompts/scoring/criteria
[
{
"criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
"name": "Default",
"use_case": "This is the use case for the default system criteria."
"criterion_set": [
{
"criterion_id": "scn_2tVwnAnheHa6NKuYvKcXrqDB21z",
"slug": "criterion_slug",
"label": "The label of the criterion.",
"description": "The criterion description.",
"questions": "The question or questions the criterion is trying to answer.",
"weight": 0
}
]
}
]
These endpoints generate scoring criteria based on a use case, which can then be used for prompt scoring. The first endpoint requires a use_case as part of the request, whereas the second endpoint uses the use case already associated with the scoring criteria.
For the second endpoint, the generated criterion will be different from any criterion that already exists in the scoring criteria.
The responses for both endpoints are the same. The list of criterion returned from the first endpoint can be used as input to create scoring criteria, while the list of criterion returned from the second endpoint can be used as input to add criterion to the existing scoring criteria.
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/generate
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/generate
// Example
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC/generate
Request Properties

Property | Type | Description |
---|---|---|
use_case | string | Use case to generate scoring criteria. Only applies to the first endpoint. |
settings | object, optional | AI provider settings for the generation: provider_id, model, and aws_region (required for Amazon Bedrock providers). |
{
"use_case": "Use case for the criteria.",
"settings": {
"provider_id": "prv_2skJ18bJRN5otmfTMyWjG3CCC7t",
"model": "amazon.nova-lite-v1:0",
"aws_region": "us-east-1"
}
}
Response Properties

Property | Type | Description |
---|---|---|
criterion_set | list of criterion objects | The generated criterion objects, each with a slug, label, description, questions, and weight. |
{
"criterion_set": [
{
"slug": "criterion_slug",
"label": "The label of the criterion.",
"description": "The criterion description.",
"questions": "The question or questions the criterion is trying to answer.",
"weight": 0
}
]
}
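The generate-then-create flow described above can be sketched as a small transform: the criterion_set from the generate response becomes the criterion_set of a create request, with a name (and optionally a use_case) added. The function name is our own, not part of the API.

```python
def generate_response_to_create_request(generate_response, name, use_case=None):
    """Wrap the criterion_set from a /criteria/generate response into a
    create-criteria request body (name is required; use_case is optional)."""
    body = {"name": name, "criterion_set": generate_response["criterion_set"]}
    if use_case is not None:
        body["use_case"] = use_case
    return body
```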
NOTE: Managing custom scoring criteria is for paid accounts only.
This endpoint is for adding a new criterion to an existing scoring criteria. All fields in the request are required.
A successful call returns a 201 status code with a response that contains the scoring criterion object.
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset
// Example
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC/criterionset
Request Properties

Property | Type | Description |
---|---|---|
slug | string | The slug for the criterion. Can only contain lowercase letters and underscores. |
label | string | The criterion label. |
description | string | The criterion description. |
questions | string | The question or questions the criterion is trying to answer. |
weight | integer | The weight assigned to the criterion. Must be a whole number between 1 and 999. |
{
"slug": "criterion_slug",
"label": "The label of the criterion.",
"description": "The criterion description.",
"questions": "The question or questions the criterion is trying to answer.",
"weight": 0
}
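The slug and weight rules above can be checked client-side before calling the endpoint. A minimal sketch; validate_criterion is a hypothetical helper, not an API call.

```python
import re

# Per the docs: slugs may only contain lowercase letters and underscores.
SLUG_RE = re.compile(r"[a-z_]+")

def validate_criterion(criterion):
    """Check the documented criterion rules before sending the request."""
    if not SLUG_RE.fullmatch(criterion["slug"]):
        raise ValueError("slug may only contain lowercase letters and underscores")
    weight = criterion["weight"]
    # Per the docs: weight must be a whole number between 1 and 999.
    if not (isinstance(weight, int) and 1 <= weight <= 999):
        raise ValueError("weight must be a whole number between 1 and 999")
    return criterion
```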
This endpoint allows you to update an individual scoring criterion. All fields are required.
A successful call returns a response that contains the updated scoring criterion object.
PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset/{criterionId}
// Example
PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC/criterionset/scn_2tVwnAnheHa6NKuYvKcXrqDB21z
Request Properties

Property | Type | Description |
---|---|---|
slug | string | The slug for the criterion. Can only contain lowercase letters and underscores. |
label | string | The criterion label. |
description | string | The criterion description. |
questions | string | The question or questions the criterion is trying to answer. |
weight | integer | The weight assigned to the criterion. Must be a whole number between 1 and 999. |
{
"slug": "criterion_slug",
"label": "The label of the criterion.",
"description": "The criterion description.",
"questions": "The question or questions the criterion is trying to answer.",
"weight": 0
}
NOTE: Managing custom scoring criteria is for paid accounts only.
This endpoint removes an individual criterion from a scoring criteria. This action cannot be undone.
A successful call returns a 204 status code.
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset/{criterionId}
// Example
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC/criterionset/scn_2tVwnAnheHa6NKuYvKcXrqDB21z
Retrieves an individual scoring criterion from the given scoring criteria and workspace.
A successful call returns a response that contains the scoring criterion object.
GET https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset/{criterionId}
// Example
GET https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC/criterionset/scn_2tVwnAnheHa6NKuYvKcXrqDB21z
When a score is generated for a prompt version, the response includes the results for the criteria that was used to analyze the prompt. This criteria contains the results for each criterion used as part of the analysis.
NOTE: While similar and sharing the same criteria_id, the scoring criteria results object is different from the scoring criteria object.
Properties

Property | Type | Description |
---|---|---|
criteria_id | string | Unique ID of the scoring criteria. |
name | string | The criteria name. |
criterion_results | list of objects | The results for each criterion: slug, score, weight, weighted_score, and feedback. |
{
"criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
"name": "Default",
"criterion_results": [
{
"slug": "criterion_slug",
"score": 0,
"weight": 0,
"weighted_score": 0,
"feedback": "Specific feedback for the criterion."
}
]
}
Prompt scoring uses various criteria to analyze prompts and assign them a score and rating, with higher scores indicating better prompt performance.
Scoring can be performed and retrieved on individual prompt versions using their specific scoring endpoints. You can also score prompts without a version stored in the system by using the non-version specific endpoint.
Prompts are scored in the following ranges:
Scoring Range | Prompt Rating |
---|---|
0 - 49% | Poor |
50 - 74% | Fair |
75 - 89% | Good |
90 - 100% | Excellent |
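The score-to-rating mapping in the table above can be expressed as a small helper. This is a sketch of the documented ranges, not Zatomic's implementation.

```python
def rating_for_score(score):
    """Map an overall score (0-100) to its prompt rating per the scoring ranges."""
    if score >= 90:
        return "Excellent"
    if score >= 75:
        return "Good"
    if score >= 50:
        return "Fair"
    return "Poor"
```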
POST https://api.zatomic.ai/v1/prompts/scoring
NOTE: When scoring prompts stored outside of Zatomic, the version_timestamp and scoring_timestamp properties will both be null.
Properties

Property | Type | Description |
---|---|---|
version_timestamp | datetime, nullable | The timestamp of the prompt version used to calculate the score. |
scoring_timestamp | datetime, nullable | The timestamp for when the scoring occurred. |
overall_score | integer | The overall score for the prompt version, from 0 to 100. |
rating | string | The rating for the prompt version. Will be one of Excellent, Good, Fair, or Poor. |
summary | object | Summary info about the prompt version: strengths, areas_for_improvement, and overall_feedback. |
criteria | scoring criteria results object | The criteria that was used to score the prompt version, with results for each criterion. |
{
"version_timestamp": "2024-12-19T20:33:15.971617Z",
"scoring_timestamp": "2024-12-19T20:33:15.971617Z",
"overall_score": 0,
"rating": "Excellent",
"summary": {
"strengths": "The strengths of the prompt.",
"areas_for_improvement": "Areas where the prompt could improve.",
"overall_feedback": "Overall feedback for the prompt."
},
"criteria": {
"criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
"name": "Default",
"criterion_results": [
{
"slug": "criterion_slug",
"score": 0,
"weight": 0,
"weighted_score": 0,
"feedback": "Specific feedback for the criterion."
}
]
}
}
NOTE: This is the endpoint for scoring a prompt stored outside of Zatomic. For the endpoint to score a prompt stored within Zatomic, see this endpoint.
Calculates the score for a prompt. A successful call returns a response that contains the scoring object.
The request requires the content for the prompt and the ID of the criteria that you want to use for scoring. To get the list of criteria with their IDs and criterion slugs, use the scoring criteria list endpoint.
For paid accounts, you can also add a settings object to the request that specifies which AI provider and model you want to use for the scoring. If settings is given in the request, the provider_id and model are required (note: the model must be supported by the provider).
If the given provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.
You can find your provider IDs and models in your Zatomic account.
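The settings rules above (provider_id and model always required; aws_region only for Amazon Bedrock providers) can be enforced before sending a request. This is a hypothetical client-side helper; whether a provider is Bedrock is assumed to be known by the caller.

```python
def validate_settings(settings, is_bedrock_provider):
    """Enforce the documented settings rules: provider_id and model are always
    required; aws_region is required only for Amazon Bedrock providers."""
    for field in ("provider_id", "model"):
        if not settings.get(field):
            raise ValueError(f"{field} is required in settings")
    if is_bedrock_provider and not settings.get("aws_region"):
        raise ValueError("aws_region is required for Amazon Bedrock providers")
    return settings
```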
POST https://api.zatomic.ai/v1/prompts/scoring
Request Properties

Property | Type | Description |
---|---|---|
use_case | string, optional | The use case for the prompt. Optional but recommended to improve analysis. |
content | string | The prompt content. |
criteria_id | string | The ID of the criteria to use for scoring. |
criterion_slugs | list of strings, optional | The list of criterion slugs from the criteria. If none are given, then all criterion from the criteria will be used. |
settings | object, optional | AI provider settings for the scoring: provider_id, model, and aws_region (required for Amazon Bedrock providers). |
{
"use_case": "Use case for the prompt.",
"content": "The prompt content.",
"criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
"criterion_slugs": ["slug_1", "slug_2", "slug_3"],
"settings": {
"provider_id": "prv_2skJ18bJRN5otmfTMyWjG3CCC7t",
"model": "amazon.nova-lite-v1:0",
"aws_region": "us-east-1"
}
}
Balance refers to the overall effectiveness of the structure and makeup of a prompt version. When the balance of a prompt version is analyzed, the prompt's content is broken down into meaningful phrases, which are then categorized to determine the balance of the prompt.
Balance analysis can be performed and retrieved on individual prompt versions using their specific scoring endpoints. You can also analyze the balance for prompts without a version stored in the system by using the non-version specific endpoint.
Prompt phrases are put into one of the following categories:
Phrase Category | Description |
---|---|
Instruction | Tells AI models what needs to be done. Ideal distribution is 20% - 35%. |
Entity | Gives AI models context and specificity. Ideal distribution is 20% - 35%. |
Concept | Defines themes and abstract ideas for AI models to consider. Ideal distribution is 15% - 30%. |
Detail | Supporting context to help refine AI responses. Ideal distribution is 15% - 30%. |
The categories are then analyzed to determine their distribution, as one of the following:
Category Distribution | Description |
---|---|
Balanced | The prompt has the right amount of phrases in that category to ensure high-quality AI responses. |
Overused | There are too many phrases in that category that could lead to overly complex, unfocused output. |
Underused | There aren't enough phrases in that category for the AI model to produce meaningful results. |
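Combining the two tables, a category's distribution can be derived from its phrase_percent and the category's ideal range. This sketch assumes a percentage below the ideal range is Underused and above it is Overused; the actual analysis may be more nuanced.

```python
# Ideal distribution ranges (in percent) from the phrase category table.
IDEAL_RANGES = {
    "Instruction": (20, 35),
    "Entity": (20, 35),
    "Concept": (15, 30),
    "Detail": (15, 30),
}

def classify_distribution(category, phrase_percent):
    """Label a category Balanced, Overused, or Underused against its ideal range."""
    low, high = IDEAL_RANGES[category]
    if phrase_percent < low:
        return "Underused"
    if phrase_percent > high:
        return "Overused"
    return "Balanced"
```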
POST https://api.zatomic.ai/v1/prompts/balance
NOTE: When analyzing the balance of prompts stored outside of Zatomic, the version_timestamp and balance_timestamp properties will both be null.
Properties

Property | Type | Description |
---|---|---|
version_timestamp | datetime, nullable | The timestamp of the prompt version used to analyze the balance. |
balance_timestamp | datetime, nullable | The timestamp for when the balance analysis occurred. |
summary | object | Summary of the analysis: overall_feedback and recommendations. |
categories | list of objects | Results per phrase category: category, feedback, phrase_count, phrase_percent, and distribution. |
phrases | list of objects | The categorized phrases: phrase, category, and reason. |
{
"version_timestamp": "2024-12-19T20:33:15.971617Z",
"balance_timestamp": "2024-12-19T20:33:15.971617Z",
"summary": {
"overall_feedback": "Overall feedback based on the prompt balance.",
"recommendations": "Recommendations to improve the prompt balance."
},
"categories": [
{
"category": "The category name.",
"feedback": "Feedback about the balance of the category in the prompt.",
"phrase_count": 0,
"phrase_percent": 0.0,
"distribution": "The category distribution."
}
],
"phrases": [
{
"phrase": "The prompt phrase.",
"category": "The prompt category.",
"reason": "Reason the phrase was assigned to its category."
}
]
}
NOTE: This is the endpoint for analyzing the balance of a prompt stored outside of Zatomic. For the endpoint to analyze the balance of a prompt stored within Zatomic, see this endpoint.
Analyzes the balance of a prompt. A successful call returns a response that contains the balance object.
The request requires the prompt content and an include_examples flag, which determines if the prompt's examples should be included in the balance analysis. Including examples can add significant time when analyzing the balance of a prompt.
For paid accounts, you can also add a settings object to the request that specifies which AI provider and model you want to use for the balance analysis. If settings is given in the request, the provider_id and model are required (note: the model must be supported by the provider).
If the given provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.
You can find your provider IDs and models in your Zatomic account.
POST https://api.zatomic.ai/v1/prompts/balance
Request Properties

Property | Type | Description |
---|---|---|
content | string | The prompt content. |
include_examples | boolean | Flag to include the prompt's examples in the balance analysis. |
settings | object, optional | AI provider settings for the analysis: provider_id, model, and aws_region (required for Amazon Bedrock providers). |
{
"content": "The prompt content.",
"include_examples": false,
"settings": {
"provider_id": "prv_2skJ18bJRN5otmfTMyWjG3CCC7t",
"model": "amazon.nova-lite-v1:0",
"aws_region": "us-east-1"
}
}
Heatmaps allow you to visualize the prompt phrases that the AI model gave the most (or least) attention. Using prompt heatmaps through the API gives you the raw data to render heatmap visualizations in other ways.
Heatmap data can be generated and retrieved for individual prompt versions using their specific heatmap endpoints. You can also generate heatmap data for prompts without a version stored in the system by using the non-version specific endpoint.
When a prompt heatmap is generated, the prompt's content is broken down into meaningful phrases, and each phrase is assigned a score based on how much attention the AI model gave it. Those attention scores are then assigned a corresponding color level.
Attention Score | Color Level |
---|---|
1 | very-light |
2 | light |
3 | medium |
4 | dark |
5 | very-dark |
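The attention-score-to-color mapping is a direct lookup; a sketch of the table above, not Zatomic's implementation.

```python
# Color levels for attention scores 1-5, per the table above.
COLOR_LEVELS = {1: "very-light", 2: "light", 3: "medium", 4: "dark", 5: "very-dark"}

def color_for_attention_score(score):
    """Return the color level for an attention score from 1 to 5."""
    return COLOR_LEVELS[score]
```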
POST https://api.zatomic.ai/v1/prompts/heatmap
NOTE: When generating heatmaps of prompts stored outside of Zatomic, the version_timestamp and heatmap_timestamp properties will both be null.
Properties

Property | Type | Description |
---|---|---|
version_timestamp | datetime, nullable | The timestamp of the prompt version used to generate the heatmap. |
heatmap_timestamp | datetime, nullable | The timestamp for when the heatmap was generated. |
summary | object | Summary of the analysis: overall_feedback. |
phrases | list of objects | The scored phrases: phrase, attention_score, color, and reason. |
{
"version_timestamp": "2024-12-19T20:33:15.971617Z",
"heatmap_timestamp": "2024-12-19T20:33:15.971617Z",
"summary": {
"overall_feedback": "Overall feedback for the heatmap analysis."
},
"phrases": [
{
"phrase": "The prompt phrase.",
"attention_score": 1,
"color": "The color level assigned to the phrase.",
"reason": "The reason why the AI model assigned the phrase its attention score."
}
]
}
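Since the API returns raw heatmap data, rendering is up to you. A minimal sketch that turns the phrases list into HTML spans, using the color level as a CSS class name (the heat- class prefix is our own convention, styled elsewhere):

```python
import html

def heatmap_to_html(heatmap):
    """Render heatmap phrases as HTML spans whose class encodes the color level
    and whose title carries the model's reasoning."""
    spans = []
    for item in heatmap["phrases"]:
        spans.append(
            f'<span class="heat-{item["color"]}" title="{html.escape(item["reason"])}">'
            f'{html.escape(item["phrase"])}</span>'
        )
    return " ".join(spans)
```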
NOTE: This is the endpoint for generating heatmap data for a prompt stored outside of Zatomic. For the endpoint to generate heatmap data for a prompt stored within Zatomic, see this endpoint.
Generates the heatmap data for a prompt. A successful call returns a response that contains the heatmap object.
The request requires the prompt content and an include_examples flag, which determines if the prompt's examples should be included in the heatmap. Including examples can add significant time when generating the heatmap data of a prompt.
For paid accounts, you can also add a settings object to the request that specifies which AI provider and model you want to use for the heatmap. If settings is given in the request, the provider_id and model are required (note: the model must be supported by the provider).
If the given provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.
You can find your provider IDs and models in your Zatomic account.
POST https://api.zatomic.ai/v1/prompts/heatmap
Request Properties

Property | Type | Description |
---|---|---|
content | string | The prompt content. |
include_examples | boolean | Flag to include the prompt's examples in the heatmap. |
settings | object, optional | AI provider settings for the heatmap: provider_id, model, and aws_region (required for Amazon Bedrock providers). |
{
"content": "The prompt content.",
"include_examples": false,
"settings": {
"provider_id": "prv_2skJ18bJRN5otmfTMyWjG3CCC7t",
"model": "amazon.nova-lite-v1:0",
"aws_region": "us-east-1"
}
}
The tokens endpoints allow you to get token counts for prompts before calling AI models.
GET https://api.zatomic.ai/v1/tokens/models
POST https://api.zatomic.ai/v1/tokens/count
Properties

Property | Type | Description |
---|---|---|
author | string | The name of the author for the AI model. |
name | string | The name of the AI model. |
display_name | string | The display name for the AI model. |
price_per_input_token | decimal, nullable | The price per input token for the AI model, in USD. |
{
"author": "The model author.",
"name": "The model name.",
"display_name": "Display name for the model.",
"price_per_input_token": 0.0
}
Returns the list of AI models that Zatomic supports for token counts.
A successful call returns a response that contains a list of token model objects.
GET https://api.zatomic.ai/v1/tokens/models
Gets the token count and cost for the submitted input.
The request takes in the content and the list of model names for which to get the token counts and cost. You can get the list of model names from the token models endpoint.
The response is a list of objects that contain the token count and cost for each model sent in the request.
POST https://api.zatomic.ai/v1/tokens/count
Request Properties

Property | Type | Description |
---|---|---|
content | string | The content for which to calculate the number of tokens and cost. |
type | string | The type of content. Currently the only accepted value is text. |
models | list of strings | The list of AI model names for which to calculate the token count and cost for the content. |
{
"content": "Felis nunc phasellus arcu ad bibendum elementum taciti.",
"type": "text",
"models": ["Model name", "Model name"]
}
Response Properties

Property | Type | Description |
---|---|---|
model | string | The name of the AI model. |
token_count | integer, nullable | The number of tokens for the content for the AI model. |
token_cost | decimal, nullable | The total cost of the tokens for the content for the AI model, in USD. |
[
{
"model": "Model name",
"token_count": 0,
"token_cost": 0.0
}
]
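Although the API computes token_cost for you, the relationship between the fields can be illustrated as token_count multiplied by the model's price_per_input_token (from the token models endpoint). This derivation is our assumption, not documented behavior; null values propagate.

```python
def estimate_token_cost(token_count, price_per_input_token):
    """Estimate a token cost in USD, assuming cost = count x price per input
    token. Returns None if either input is unavailable (null in the API)."""
    if token_count is None or price_per_input_token is None:
        return None
    return token_count * price_per_input_token
```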