Introduction

The API is designed around a RESTful approach: predictable, resource-oriented URLs, standard HTTP verbs and status codes, and JSON-only request and response bodies.

All API requests must be made over HTTPS. API calls made over plain HTTP are not supported.

Base URL
https://api.zatomic.ai

Versioning

While we will support multiple versions and backward compatibility in the future, there is currently only one version of the API: v1. All API requests require the version in the URL.

Future versions of the API will follow the vN naming convention, where N increments by one with each new version: after v1 comes v2, then v3, and so on.

New versions of the API will be created only when there are breaking changes. Breaking changes include modifications to existing request and response objects, property renames, and the removal of endpoints.

Please note: adding properties to requests and responses and adding endpoints do not result in new API versions, as these do not affect existing API clients.

All API changes, breaking or not, can be found in our API changelog.

Versioning URL
https://api.zatomic.ai/v1

Authentication

An API key is required to authenticate all requests to the API. You can view and manage your API keys in your Zatomic account.

Authentication to the API is handled by setting the X-Api-Key request header or by using the api-key querystring parameter. If both are given, the api-key querystring parameter will be used.

For authentication, API calls will fail for the following reasons:

  • Your API key was not sent in the request.
  • Your API key is invalid.

You may also require users of the API to make requests with an API client ID. Similar to an API key, an API client ID is a unique identifier that grants its user access to the API, allowing account admins to track who or what is using the API on their behalf.

If API client IDs are required for your account, in addition to your API key, you must make API calls by setting the X-Api-Client request header or by using the api-client querystring parameter. If both are given, the api-client querystring parameter will be used.

If API client IDs are required for your account, API calls will fail for the following reasons:

  • The API client ID was not sent in the request.
  • The API client ID is invalid.

Whether API client IDs are required for the API is managed in your Zatomic account.

Authenticated Request
// With request headers
curl -X GET https://api.zatomic.ai/v1/prompts \
  -H "X-Api-Key: {API key}"

curl -X GET https://api.zatomic.ai/v1/prompts \
  -H "X-Api-Key: {API key}" \
  -H "X-Api-Client: {API client ID}"

// With querystring parameters
GET https://api.zatomic.ai/v1/prompts?api-key={API key}
GET https://api.zatomic.ai/v1/prompts?api-key={API key}&api-client={API client ID}
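The two authentication styles above can be sketched as small helpers. This is illustrative only, not an official Zatomic SDK; only the header and parameter names come from the docs above.

```python
def auth_headers(api_key, api_client_id=None):
    """Build authentication request headers (X-Api-Key, optional X-Api-Client)."""
    headers = {"X-Api-Key": api_key}
    if api_client_id:
        headers["X-Api-Client"] = api_client_id
    return headers


def auth_params(api_key, api_client_id=None):
    """Build the equivalent querystring parameters (api-key, optional api-client)."""
    params = {"api-key": api_key}
    if api_client_id:
        params["api-client"] = api_client_id
    return params
```

Remember that if both a header and its querystring equivalent are sent, the querystring value wins, so avoid sending both.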


Workspaces

All API requests must be made in the context of a workspace; therefore, in addition to an API key, a workspace ID is also required to call the API. You can find your workspace ID in your Zatomic account.

Similar to API keys and client IDs, sending the workspace ID is done by either setting the X-Workspace-Id request header or by using the workspace-id querystring parameter. If both are given, the workspace-id querystring parameter will be used.

API keys and client IDs must have access to the given workspace. API requests made without a workspace ID will fail.

Request with Workspace ID
// With request header
curl -X GET https://api.zatomic.ai/v1/prompts \
  -H "X-Workspace-Id: {workspace ID}"

// With querystring parameter
GET https://api.zatomic.ai/v1/prompts?workspace-id={workspace ID}
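Putting the pieces together, a fully authenticated request carries the API key, the workspace ID, and (if required) the client ID. A minimal header-building sketch, with illustrative helper names; only the header names come from the docs above:

```python
def zatomic_headers(api_key, workspace_id, api_client_id=None):
    """Build the full header set for an authenticated, workspace-scoped request."""
    headers = {
        "X-Api-Key": api_key,
        "X-Workspace-Id": workspace_id,
    }
    if api_client_id:
        headers["X-Api-Client"] = api_client_id
    return headers


# Usage with the standard library (no third-party client assumed):
# import urllib.request
# req = urllib.request.Request(
#     "https://api.zatomic.ai/v1/prompts",
#     headers=zatomic_headers("{API key}", "{workspace ID}"),
# )
```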


Status codes and errors

Zatomic uses standard HTTP status codes to indicate success or failure of API requests. In general, 2xx codes represent success, 4xx codes indicate a bad request (such as an invalid API key), and 5xx codes mean something went wrong on our end (which should be rare).

HTTP Status Codes
200 OK The request was successful.
201 Created The resource was created.
204 No Content The resource was deleted.
400 Bad Request The request was unacceptable.
401 Unauthorized Invalid API key or invalid API client ID.
403 Forbidden The API key doesn't have permissions.
404 Not Found The requested resource doesn't exist.
429 Too Many Requests Too many requests hit the API too quickly.
500 Internal Server Error Something went wrong on Zatomic's end.
Error Response
{
   "status_code": 401,
   "title": "Unauthorized",
   "message": "Invalid API key.",
   "trace_id": "",
   "event_id": null
}
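A sketch of handling the error shape shown above. The function name is illustrative, and the back-off hint for 429 is a common convention rather than a documented Zatomic guarantee:

```python
import json


def describe_error(body: str) -> str:
    """Turn a Zatomic error response body into a human-readable message."""
    err = json.loads(body)
    code = err["status_code"]
    if code == 401:
        return f"auth failure: {err['message']}"
    if code == 429:
        return "rate limited; back off and retry"
    if code >= 500:
        return f"server error (trace_id={err['trace_id'] or 'n/a'})"
    return f"{err['title']}: {err['message']}"


sample = (
    '{"status_code": 401, "title": "Unauthorized", '
    '"message": "Invalid API key.", "trace_id": "", "event_id": null}'
)
```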

Token usage

Token usage and duration are returned as part of the response headers. These headers only apply to API calls where tokens are used, such as for prompt scoring or generating a heatmap.

Tokens Response Headers
X-Tokens-Used Number of tokens used for a given request, if applicable.
X-Tokens-Duration The time it took (in seconds) to generate or use the tokens for a given request, if applicable.
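Reading these headers from a response can be sketched as follows. The plain dict stands in for whatever your HTTP client exposes (e.g. a `response.headers` mapping); only the header names come from the table above.

```python
def token_usage(headers: dict):
    """Extract token usage from response headers, or None if not applicable."""
    used = headers.get("X-Tokens-Used")
    duration = headers.get("X-Tokens-Duration")
    if used is None:
        return None  # this endpoint did not consume tokens
    return {"tokens": int(used), "seconds": float(duration)}
```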

Expanding objects

Our API supports expands, which let you retrieve related data for a given object in the same request via the expand querystring parameter. To request multiple expands at once, chain them together separated by commas.

If an endpoint supports expands, it will be noted as such in its section below.

URLs with Expands
// URL with single expand
https://api.zatomic.ai/v1/prompts/{promptId}?expand=versions

// URL with multiple expands
https://api.zatomic.ai/v1/prompts/{promptId}?expand=versions,scoring,risk,balance,heatmap
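The comma-chaining rule above can be sketched as a tiny URL helper (the function name is illustrative):

```python
def with_expands(url: str, expands: list) -> str:
    """Append an expand querystring, chaining multiple expands with commas."""
    if not expands:
        return url
    return f"{url}?expand={','.join(expands)}"
```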


OpenAPI spec

Our API supports the OpenAPI specification to provide you with a standardized, machine-readable way to understand and interact with our API. This allows seamless integration with tools like Swagger, Postman, and AI agents, making it easier for you to explore, test, and implement our API efficiently.

Spec File Location
https://api.zatomic.ai/v1/openapi.json


Prompts

Prompts are the main object in the Zatomic platform, and are workspace-specific. Prompts act as a container for their prompt versions and can be auto-generated from a use case.

Prompts can be expanded to include all of their versions, as well as the scoring, risk, balance, and heatmap for each version.

Expands
versions Retrieves all versions of the prompt.
scoring Retrieves the scoring object for each version of the prompt.
risk Retrieves the risk object for each version of the prompt.
balance Retrieves the balance object for each version of the prompt.
heatmap Retrieves the heatmap object for each version of the prompt.
Endpoints
   GET https://api.zatomic.ai/v1/prompts
  POST https://api.zatomic.ai/v1/prompts
  POST https://api.zatomic.ai/v1/prompts/generate
   GET https://api.zatomic.ai/v1/prompts/{promptId}
 PATCH https://api.zatomic.ai/v1/prompts/{promptId}
DELETE https://api.zatomic.ai/v1/prompts/{promptId}


The Prompt object

Properties
prompt_id
string
Unique ID of the prompt.
workspace_id
string
The ID of the workspace that contains the prompt.
created
datetime
UTC timestamp for when the prompt was created.
created_by
string
The name of the user who created the prompt or the name of the API key that created the prompt.
updated
datetime
UTC timestamp for when the prompt was updated.
updated_by
string
The name of the user who updated the prompt or the name of the API key that updated the prompt.
name
string
Name of the prompt.
use_case
string, nullable
Use case description for the prompt.
versions
list of version objects
List of versions for the prompt; can be empty. If the prompt has versions, by default the list will include only the prompt's primary version. If the versions expand is used, the list will include all versions for the prompt.
The Prompt Object - Fully Expanded
{
   "prompt_id": "prm_2qRzu8geIvfudcJTwP0pur4TbMJ",
   "workspace_id": "wrk_2nKxr84BuEQIpUl3evP3XYyTxdo",
   "created": "2024-12-19T20:33:15.7387Z",
   "created_by": "Han Solo",
   "updated": "2024-12-19T20:33:15.971617Z",
   "updated_by": "Han Solo",
   "name": "Prompt name",
   "use_case": "Use case description.",
   "versions": [
      {
         "version_id": "ver_2qRzu8qzlNOMhTrini2EKCDh5r6",
         "prompt_id": "prm_2qRzu8geIvfudcJTwP0pur4TbMJ",
         "created": "2024-12-19T20:33:16.076891Z",
         "created_by": "Han Solo",
         "updated": "2024-12-19T20:39:13.026997Z",
         "updated_by": "Han Solo",
         "name": null,
         "is_primary": true,
         "content": "You are a knowledgeable and friendly assistant...",
         "variables": [
            "{{variable1}}",
            "[[variable2]]"
         ],
         "token_info": {
            "model": "gpt-4.1",
            "token_count": 0,
            "cost_per_use": 0.0,
            "cost_per_1k": 0.0,
            "cost_per_1m": 0.0
         },
         "scoring": {
            "version_timestamp": "2024-12-19T20:33:15.971617Z",
            "scoring_timestamp": "2024-12-19T20:33:15.971617Z",
            "overall_score": 0,
            "rating": "Excellent",
            "summary": {
               "strengths": "The strengths of the prompt.",
               "areas_for_improvement": "Areas where the prompt could improve.",
               "overall_feedback": "Overall feedback for the prompt."
            },
            "criteria": {
               "criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
               "name": "Default",
               "criterion_results": [
                  {
                     "slug": "criterion_slug",
                     "score": 0,
                     "weight": 0,
                     "weighted_score": 0,
                     "feedback": "Specific feedback for the criterion."
                  }
               ]
            }
         },
         "risk": {
            "version_timestamp": "2025-06-22T14:45:19.144393Z",
            "risk_timestamp": "2025-06-22T18:52:20.1727176Z",
            "summary": {
               "overall_feedback": "Overall feedback for the risk analysis.",
               "overall_risk_level": "low|medium|high"
            },
            "bias_analysis": {
               "risk_level": "low|medium|high",
               "feedback": "Feedback for the bias analysis.",
               "issues": [
                  {
                     "issue": "Potential bias.",
                     "parts": [
                        {
                           "part": "Affected part of the prompt.",
                           "revision": "Suggested revision for the prompt part."
                        }
                     ]
                  }
               ]
            },
            "ethical_analysis": {
               "risk_level": "low|medium|high",
               "feedback": "Feedback for the ethical analysis.",
               "issues": [
                  {
                     "issue": "Ethical concern.",
                     "parts": [
                        {
                           "part": "Affected part of the prompt.",
                           "revision": "Suggested revision for the prompt part."
                        }
                     ]
                  }
               ]
            },
            "safety_analysis": {
               "risk_level": "low|medium|high",
               "feedback": "Feedback for the safety analysis.",
               "issues": [
                  {
                     "issue": "Safety issue.",
                     "parts": [
                        {
                           "part": "Affected part of the prompt.",
                           "revision": "Suggested revision for the prompt part."
                        }
                     ]
                  }
               ]
            }
         },
         "balance": {
            "version_timestamp": "2024-12-19T20:33:15.971617Z",
            "balance_timestamp": "2024-12-19T20:33:15.971617Z",
            "summary": {
               "overall_feedback": "Overall feedback based on the prompt balance.",
               "recommendations": "Recommendations to improve the prompt balance."
            },
            "categories": [
               {
                   "category": "The category name.",
                  "feedback": "Feedback about the balance of the category in the prompt.",
                  "phrase_count": 0,
                  "phrase_percent": 0.0,
                  "distribution": "The category distribution."
               }
            ],
            "phrases": [
               {
                  "phrase": "The prompt phrase.",
                  "category": "The prompt category.",
                  "reason": "Reason the phrase was assigned to its category."
               }
            ]
         },
         "heatmap": {
            "version_timestamp": "2024-12-19T20:33:15.971617Z",
            "heatmap_timestamp": "2024-12-19T20:33:15.971617Z",
            "summary": "Summary of the heatmap.",
            "phrases": [
               {
                  "phrase": "The prompt phrase.",
                  "attention_score": 1,
                  "color": "The color level assigned to the phrase.",
                  "reason": "The reason why the AI model assigned the phrase its attention score."
               }
            ]
         }
      }
   ]
}


Creating a prompt

Creating a new prompt requires only a name; everything else is optional. If only the name is given, the prompt will be created without any versions. If the content is given, the prompt will be created along with its first version, which becomes the prompt's primary version by default.

If a set of variables is given, the keys will be replaced by their values in the content before creating the prompt version.

A successful call returns a 201 status code with a response that contains the prompt object.

Endpoint
POST https://api.zatomic.ai/v1/prompts

Request Properties
name
string
Name of the prompt.
use_case
string, optional
Use case description for the prompt.
content
string, optional
Content for the prompt.
version_name
string, optional
Name for the prompt version.
variables
set of key-value pairs, optional
Set of template variables for the prompt, in key-value pair format. Variables can use either double curly braces {{ }} or double square brackets [[ ]].
Request Body
{
   "name": "Prompt name",
   "use_case": "Use case description.",
   "content": "You are a knowledgeable and friendly assistant...",
   "version_name": "Version name",
   "variables": {
      "{{variable1}}": "variable 1",
      "[[variable2]]": "variable 2"
   }
}
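The variable substitution described above happens server-side; a client-side sketch of the same replacement logic (each variable key, in either {{ }} or [[ ]] form, is replaced by its value in the content):

```python
def apply_variables(content: str, variables: dict) -> str:
    """Replace each variable key with its value in the prompt content."""
    for key, value in variables.items():
        content = content.replace(key, value)
    return content
```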

Updating a prompt

This endpoint allows you to update the prompt's name, its use case, or both. To update the contents of a prompt, update one of its prompt versions instead.

A successful call returns a response that contains the updated prompt object.

Endpoint
PATCH https://api.zatomic.ai/v1/prompts/{promptId}

// Example
PATCH https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ

Request Properties
name
string, optional
Name of the prompt.
use_case
string, optional
Use case description for the prompt.
Request Body
{
   "name": "Prompt name",
   "use_case": "Use case description."
}

Deleting a prompt

Permanently deletes a prompt and all of its versions. This action cannot be undone.

A successful call returns a 204 status code.

Endpoint
DELETE https://api.zatomic.ai/v1/prompts/{promptId}

// Example
DELETE https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ


Retrieving a prompt

Retrieves a prompt from the given workspace.

A successful call returns a response that contains the prompt object.

Expands
versions Retrieves all versions of the prompt.
scoring Retrieves the scoring object for each version of the prompt.
risk Retrieves the risk object for each version of the prompt.
balance Retrieves the balance object for each version of the prompt.
heatmap Retrieves the heatmap object for each version of the prompt.
Endpoint
GET https://api.zatomic.ai/v1/prompts/{promptId}

// Examples
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ?expand=versions
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ?expand=scoring
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ?expand=risk
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ?expand=balance
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ?expand=heatmap


Retrieving all prompts

Returns the list of all prompts in a given workspace, sorted alphabetically by prompt name.

A successful call returns a response that contains a list of prompt objects.

Expands
versions Retrieves all versions of each prompt in the list.
scoring Retrieves the scoring object for each version of each prompt in the list.
risk Retrieves the risk object for each version of each prompt in the list.
balance Retrieves the balance object for each version of each prompt in the list.
heatmap Retrieves the heatmap object for each version of each prompt in the list.
Endpoint
GET https://api.zatomic.ai/v1/prompts

// Examples
GET https://api.zatomic.ai/v1/prompts?expand=versions
GET https://api.zatomic.ai/v1/prompts?expand=scoring
GET https://api.zatomic.ai/v1/prompts?expand=risk
GET https://api.zatomic.ai/v1/prompts?expand=balance
GET https://api.zatomic.ai/v1/prompts?expand=heatmap

Response Body
[
   {
      "prompt_id": "prm_2qJhUNbuEg3J8dvw39jgL3UEJKS",
      "workspace_id": "wrk_2nKxr84BuEQIpUl3evP3XYyTxdo",
      "created": "2024-12-16T22:03:19.30308Z",
      "created_by": "Han Solo",
      "updated": "2024-12-16T22:03:19.568618Z",
      "updated_by": "Han Solo",
      "name": "Prompt name",
      "use_case": "Use case description.",
      "versions": []
   }
]

Generating a prompt

You can generate a prompt by sending a use case description to this endpoint. You can then use the generated prompt as content to create a new prompt or a specific prompt version.

A successful call returns a response with an auto-generated content property in Markdown format.

You can also add a settings object to the request that specifies which AI model and provider you want to use to generate the prompt. If settings is given in the request, the model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoint
POST https://api.zatomic.ai/v1/prompts/generate

Request Properties
use_case
string
Use case description for the prompt.
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for the scoring.
provider_id
string, optional
The ID of the AI provider that contains the model to use for scoring.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
temperature
number, optional
The temperature for the model. Must be between 0 and 1.
Request Body
{
   "use_case": "Use case description.",
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv",
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1",
      "temperature": 0.75
   }
}
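The settings rules above can be sketched as a small validation helper. This is illustrative only: the is_bedrock flag is a stand-in, since the API determines the provider type from the provider itself.

```python
def validate_settings(settings: dict, is_bedrock: bool = False):
    """Check the settings object against the rules in the docs above."""
    errors = []
    if not settings.get("model_source"):
        errors.append("model_source is required")
    if not settings.get("model_id"):
        errors.append("model_id is required")
    # aws_region is required when the given provider is Amazon Bedrock
    if settings.get("provider_id") and is_bedrock and not settings.get("aws_region"):
        errors.append("aws_region is required for Amazon Bedrock providers")
    temp = settings.get("temperature")
    if temp is not None and not (0 <= temp <= 1):
        errors.append("temperature must be between 0 and 1")
    return errors
```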
Response Properties
content
string
The content of the generated prompt. Will be in Markdown format.
Response Body
{
   "content": "You are a knowledgeable and friendly assistant..."
}

Versions

Prompt versions, or just versions, are where the actual prompt content is stored, maintained, and analyzed. All versions belong to a parent prompt in a specific workspace. New prompt versions can be auto-generated from a use case.

Versions can be expanded to include all of their scoring, risk, balance, and heatmap data. You can also retrieve and create scoring, risk, balance, and heatmap objects for a version with their specific endpoints.

Expands
scoring Retrieves the scoring object for the version.
risk Retrieves the risk object for the version.
balance Retrieves the balance object for the version.
heatmap Retrieves the heatmap object for the version.
Endpoints
   GET https://api.zatomic.ai/v1/prompts/{promptId}/versions
  POST https://api.zatomic.ai/v1/prompts/{promptId}/versions
   GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}
 PATCH https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}
DELETE https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}
   GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/scoring
  POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/scoring
   GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/risk
  POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/risk
   GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/balance
  POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/balance
   GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/heatmap
  POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/heatmap
  POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/improve


The Version object

Properties
version_id
string
Unique ID of the version.
prompt_id
string
The ID of the parent prompt.
created
datetime
UTC timestamp for when the version was created.
created_by
string
The name of the user who created the version or the name of the API key that created the version.
updated
datetime
UTC timestamp for when the version was updated.
updated_by
string
The name of the user who updated the version or the name of the API key that updated the version.
name
string, nullable
Name of the version.
is_primary
boolean
Flag indicating whether the version is the primary version for the prompt.
content
string
The content of the prompt version.
variables
list of key-value pairs
List of variables for the version; can be empty. Variables are "template tags", designated by either double curly braces {{ }} or double square brackets [[ ]], and can be used to create prompt templates.
token_info
object

Contains token data about the prompt version.

model
string
Name of the AI model used to calculate the token count and cost.
token_count
integer
Number of tokens for the prompt version.
cost_per_use
decimal
Cost of the tokens per use, in USD.
cost_per_1k
decimal
Cost of the tokens per thousand uses, in USD.
cost_per_1m
decimal
Cost of the tokens per million uses, in USD.
scoring
scoring object, nullable
The scoring object for the prompt version, if scoring has been performed.
risk
risk object, nullable
The risk object for the prompt version, if the risk has been analyzed.
balance
balance object, nullable
The balance object for the prompt version, if the balance has been analyzed.
heatmap
heatmap object, nullable
The heatmap object for the prompt version, if the heatmap has been generated.
The Version Object - Fully Expanded
{
   "version_id": "ver_2qRzu8qzlNOMhTrini2EKCDh5r6",
   "prompt_id": "prm_2qRzu8geIvfudcJTwP0pur4TbMJ",
   "created": "2024-12-19T20:33:16.076891Z",
   "created_by": "Han Solo",
   "updated": "2024-12-19T20:39:13.026997Z",
   "updated_by": "Han Solo",
   "name": null,
   "is_primary": true,
   "content": "You are a knowledgeable and friendly assistant...",
   "variables": [
      "{{variable1}}",
      "[[variable2]]"
   ],
   "token_info": {
      "model": "gpt-4.1",
      "token_count": 0,
      "cost_per_use": 0.0,
      "cost_per_1k": 0.0,
      "cost_per_1m": 0.0
   },
   "scoring": {
      "version_timestamp": "2024-12-19T20:33:15.971617Z",
      "scoring_timestamp": "2024-12-19T20:33:15.971617Z",
      "overall_score": 0,
      "rating": "Excellent",
      "summary": {
         "strengths": "The strengths of the prompt.",
         "areas_for_improvement": "Areas where the prompt could improve.",
         "overall_feedback": "Overall feedback for the prompt."
      },
      "criteria": {
         "criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
         "name": "Default",
         "criterion_results": [
            {
               "slug": "criterion_slug",
               "score": 0,
               "weight": 0,
               "weighted_score": 0,
               "feedback": "Specific feedback for the criterion."
            }
         ]
      }
   },
   "risk": {
      "version_timestamp": "2025-06-22T14:45:19.144393Z",
      "risk_timestamp": "2025-06-22T18:52:20.1727176Z",
      "summary": {
         "overall_feedback": "Overall feedback for the risk analysis.",
         "overall_risk_level": "low|medium|high"
      },
      "bias_analysis": {
         "risk_level": "low|medium|high",
         "feedback": "Feedback for the bias analysis.",
         "issues": [
            {
               "issue": "Potential bias.",
               "parts": [
                  {
                     "part": "Affected part of the prompt.",
                     "revision": "Suggested revision for the prompt part."
                  }
               ]
            }
         ]
      },
      "ethical_analysis": {
         "risk_level": "low|medium|high",
         "feedback": "Feedback for the ethical analysis.",
         "issues": [
            {
               "issue": "Ethical concern.",
               "parts": [
                  {
                     "part": "Affected part of the prompt.",
                     "revision": "Suggested revision for the prompt part."
                  }
               ]
            }
         ]
      },
      "safety_analysis": {
         "risk_level": "low|medium|high",
         "feedback": "Feedback for the safety analysis.",
         "issues": [
            {
               "issue": "Safety issue.",
               "parts": [
                  {
                     "part": "Affected part of the prompt.",
                     "revision": "Suggested revision for the prompt part."
                  }
               ]
            }
         ]
      }
   },
   "balance": {
      "version_timestamp": "2024-12-19T20:33:15.971617Z",
      "balance_timestamp": "2024-12-19T20:33:15.971617Z",
      "summary": {
         "overall_feedback": "Overall feedback based on the prompt balance.",
         "recommendations": "Recommendations to improve the prompt balance."
      },
      "categories": [
         {
            "category": "The category name.",
             "feedback": "Feedback about the balance of the category in the prompt.",
            "phrase_count": 0,
            "phrase_percent": 0.0,
            "distribution": "The category distribution."
         }
      ],
      "phrases": [
         {
            "phrase": "The prompt phrase.",
            "category": "The prompt category.",
            "reason": "Reason the phrase was assigned to its category."
         }
      ]
   },
   "heatmap": {
      "version_timestamp": "2024-12-19T20:33:15.971617Z",
      "heatmap_timestamp": "2024-12-19T20:33:15.971617Z",
      "summary": "Summary of the heatmap.",
      "phrases": [
         {
            "phrase": "The prompt phrase.",
            "attention_score": 1,
            "color": "The color level assigned to the phrase.",
            "reason": "The reason why the AI model assigned the phrase its attention score."
         }
      ]
   }
}


Creating a version

Creating a new prompt version requires only the prompt content; the name and any variables are optional. If a set of variables is given, the keys will be replaced by their values in the content before creating the prompt version.

A successful call returns a 201 status code with a response that contains the version object.

Endpoint
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions

// Example
POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions

Request Properties
name
string, optional
Name of the version.
content
string
Content for the version.
variables
set of key-value pairs, optional
Set of template variables for the version, in key-value pair format. Variables can use either double curly braces {{ }} or double square brackets [[ ]].
Request Body
{
   "name": "Version name",
   "content": "You are a knowledgeable and friendly assistant...",
   "variables": {
      "{{variable1}}": "variable 1",
      "[[variable2]]": "variable 2"
   }
}

Updating a version

This endpoint allows you to update any combination of the prompt version's name, content, or primary flag.

A successful call returns a response that contains the updated version object.

Endpoint
PATCH https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}

// Example
PATCH https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6

Request Properties
name
string, optional
Name of the version.
content
string, optional
Content for the version.
is_primary
boolean, optional
Flag that determines if the version is the primary version for the prompt.
Request Body
{
   "name": "Version name",
   "content": "You are a knowledgeable and friendly assistant...",
   "is_primary": true
}

Deleting a version

Permanently deletes a prompt version. This action cannot be undone.

A successful call returns a 204 status code.

Endpoint
DELETE https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}

// Example
DELETE https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6


Retrieving a version

Retrieves a specific version of a prompt.

A successful call returns a response that contains the version object.

Expands
scoring Retrieves the scoring object for the version.
risk Retrieves the risk object for the version.
balance Retrieves the balance object for the version.
heatmap Retrieves the heatmap object for the version.
Endpoint
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}

// Examples
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6?expand=scoring
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6?expand=risk
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6?expand=balance
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6?expand=heatmap


Retrieving all versions

Returns the list of all versions for a prompt, sorted by version updated date in descending order.

A successful call returns a response that contains a list of version objects.

Expands
scoring Retrieves the scoring object for each version in the list.
risk Retrieves the risk object for each version in the list.
balance Retrieves the balance object for each version in the list.
heatmap Retrieves the heatmap object for each version in the list.
Endpoint
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions

// Examples
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions?expand=scoring
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions?expand=risk
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions?expand=balance
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions?expand=heatmap

Response Body
[
   {
      "version_id": "ver_2qRzu8qzlNOMhTrini2EKCDh5r6",
      "prompt_id": "prm_2qRzu8geIvfudcJTwP0pur4TbMJ",
      "created": "2024-12-19T20:33:16.076891Z",
      "created_by": "Han Solo",
      "updated": "2024-12-19T20:39:13.026997Z",
      "updated_by": "Han Solo",
      "name": null,
      "is_primary": true,
      "content": "You are a knowledgeable and friendly assistant...",
      "variables": [
         "{{variable1}}",
         "[[variable2]]"
      ],
      "token_info": {
         "model": "gpt-4.1",
         "token_count": 0,
         "cost_per_use": 0.0,
         "cost_per_1k": 0.0,
         "cost_per_1m": 0.0
      },
      "scoring": null,
      "risk": null,
      "balance": null,
      "heatmap": null
   }
]
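
A client can parse the list response and pick out the primary version. A minimal sketch against an abridged copy of the sample response above (it assumes the API marks one version per prompt as primary):

```python
import json

# Abridged list response, shaped like the sample above.
response_text = """
[
  {"version_id": "ver_2qRzu8qzlNOMhTrini2EKCDh5r6",
   "prompt_id": "prm_2qRzu8geIvfudcJTwP0pur4TbMJ",
   "is_primary": true,
   "name": null},
  {"version_id": "ver_other",
   "prompt_id": "prm_2qRzu8geIvfudcJTwP0pur4TbMJ",
   "is_primary": false,
   "name": "Draft"}
]
"""

versions = json.loads(response_text)

# Select the version flagged as primary.
primary = next(v for v in versions if v["is_primary"])
print(primary["version_id"])  # ver_2qRzu8qzlNOMhTrini2EKCDh5r6
```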

Retrieving a version score

Retrieves the scoring object for a specific prompt version.

A successful call returns a response that contains the scoring object.

Endpoint
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/scoring

// Example
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/scoring


Calculating a version score

Calculates the score for a specific version of a prompt. A successful call returns a response that contains the scoring object.

The request requires the ID of the criteria that you want to use for scoring. To get the list of criteria with their IDs and criterion slugs, use the scoring criteria list endpoint.

You can also add a settings object to the request that specifies which AI model and provider you want to use for the scoring. If settings is given in the request, the model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoint
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/scoring

// Example
POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/scoring

Request Properties
criteria_id
string
The ID of the criteria to use for scoring.
criterion_slugs
list of strings, optional
The list of criterion slugs from the criteria. If none are given, then all criterion from the criteria will be used.
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for the scoring.
provider_id
string, optional
The ID of the AI provider that contains the model to use for scoring.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
Request Body
{
   "criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
   "criterion_slugs": ["slug_1", "slug_2", "slug_3"],
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv"
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1"
   }
}
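
The settings rules above repeat across the analysis endpoints, so a client can check them once before sending a request. A minimal validation sketch; whether a given provider is Amazon Bedrock is assumed to be known by the caller, since it cannot be inferred from the ID alone:

```python
def validate_settings(settings: dict, provider_is_bedrock: bool = False) -> list:
    """Return a list of problems with a settings object, per the rules above."""
    errors = []
    if settings.get("model_source") not in ("zatomic", "provider"):
        errors.append("model_source must be 'zatomic' or 'provider'")
    if not settings.get("model_id"):
        errors.append("model_id is required when settings is given")
    # aws_region is only required when the provider is Amazon Bedrock;
    # the caller supplies that fact (an assumption in this sketch).
    if settings.get("provider_id") and provider_is_bedrock and not settings.get("aws_region"):
        errors.append("aws_region is required for Amazon Bedrock providers")
    return errors

ok = {
    "model_source": "provider",
    "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv",
    "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
    "aws_region": "us-east-1",
}
assert validate_settings(ok, provider_is_bedrock=True) == []
```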

Retrieving a version risk

Retrieves the risk object for a specific prompt version.

A successful call returns a response that contains the risk object.

Endpoint
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/risk

// Example
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/risk


Analyzing a version risk

Analyzes the risk for a specific version of a prompt. A successful call returns a response that contains the risk object.

You can add a settings object to the request that specifies which AI model and provider you want to use for the risk analysis. If settings is given in the request, the model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoint
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/risk

// Example
POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/risk

Request Properties
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for the risk analysis.
provider_id
string, optional
The ID of the AI provider that contains the model to use for the risk analysis.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
Request Body
{
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv"
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1"
   }
}

Retrieving a version balance

Retrieves the balance object for a specific prompt version.

A successful call returns a response that contains the balance object.

Endpoint
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/balance

// Example
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/balance


Analyzing a version balance

Analyzes the balance for a specific version of a prompt. A successful call returns a response that contains the balance object.

The request requires an include_examples flag, which determines if the prompt version's examples should be included in the balance analysis. Including examples can add significant time when analyzing the balance of a version.

You can also add a settings object to the request that specifies which AI model and provider you want to use for the balance analysis. If settings is given in the request, the model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoint
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/balance

// Example
POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/balance

Request Properties
include_examples
boolean
Flag to include the prompt version's examples in the balance analysis.
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for the balance analysis.
provider_id
string, optional
The ID of the AI provider that contains the model to use for the balance analysis.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
Request Body
{
   "include_examples": false,
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv"
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1"
   }
}

Retrieving a version heatmap

Retrieves the heatmap object for a specific prompt version.

A successful call returns a response that contains the heatmap object.

Endpoint
GET https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/heatmap

// Example
GET https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/heatmap


Generating a version heatmap

Generates the heatmap data for a specific version of a prompt. A successful call returns a response that contains the heatmap object.

The request requires an include_examples flag, which determines if the prompt version's examples should be included in the heatmap. Including examples can add significant time when generating the heatmap data of a version.

You can also add a settings object to the request that specifies which AI model and provider you want to use for the heatmap. If settings is given in the request, the model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoint
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/heatmap

// Example
POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/heatmap

Request Properties
include_examples
boolean
Flag to include the prompt version's examples in the heatmap.
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for the heatmap.
provider_id
string, optional
The ID of the AI provider that contains the model to use for the heatmap.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
Request Body
{
   "include_examples": false,
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv"
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1"
   }
}

Improving a version

Creates a new version of the prompt based on either its scoring analysis or risk analysis. A successful call returns a response that contains the newly improved version object.

The request supports a source value, which determines if the improvement will be based on the prompt's scoring analysis or its risk analysis.

You can also add a settings object to the request that specifies which AI model and provider you want to use to improve the prompt. If settings is given in the request, the model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoint
POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/improve

// Example
POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/improve

Request Properties
source
string, optional
The analysis source for the improvement. If given, must be scoring or risk. If not given, defaults to scoring.
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for the improvement.
provider_id
string, optional
The ID of the AI provider that contains the model to use for the improvement.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
Request Body
{
   "source": "scoring|risk",
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv"
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1"
   }
}
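
Building the improve request body can be sketched as a small helper that enforces the source rule above (function name and structure are illustrative, not part of the API):

```python
def build_improve_request(source: str = "scoring", settings: dict = None) -> dict:
    """Build the request body for the improve endpoint, per the rules above."""
    if source not in ("scoring", "risk"):
        raise ValueError("source must be 'scoring' or 'risk'")
    body = {"source": source}
    if settings is not None:
        body["settings"] = settings
    return body

# source defaults to scoring when omitted.
assert build_improve_request() == {"source": "scoring"}
```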

Scoring Criteria

Scoring criteria is used to evaluate prompts to generate their score and rating.

Endpoints
   GET https://api.zatomic.ai/v1/prompts/scoring/criteria
  POST https://api.zatomic.ai/v1/prompts/scoring/criteria
  POST https://api.zatomic.ai/v1/prompts/scoring/criteria/generate
   GET https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}
   PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}
  POST https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/generate
  POST https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset
   PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset/{criterionId}
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset/{criterionId}


The Scoring Criteria object

NOTE: While similar and sharing the same criteria_id, the scoring criteria object is different from the scoring criteria results object.

Properties
criteria_id
string
Unique ID of the scoring criteria.
name
string
The criteria name.
use_case
string
The use case for the criteria.
criterion_set
The set of criterion for the scoring criteria.
The Scoring Criteria Object
{
   "criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
   "name": "Default",
   "use_case": "The criteria use case.",
   "criterion_set": [
      {
         "criterion_id": "scn_2tVwnAnheHa6NKuYvKcXrqDB21z",
         "slug": "criterion_slug",
         "label": "The label of the criterion.",
         "description": "The criterion description.",
         "questions": "The question or questions the criterion is trying to answer.",
         "weight": 0
      }
   ]
}


The Scoring Criterion object

criterion_id
string
Unique ID for the criterion.
slug
string
The slug for the criterion. Will be unique within the criterion set.
label
string
The criterion label.
description
string
The criterion description.
questions
string
The question or questions the criterion is trying to answer.
weight
integer
The weight assigned to the criterion.
The Scoring Criterion Object
{
   "criterion_id": "scn_2tVwnAnheHa6NKuYvKcXrqDB21z",
   "slug": "criterion_slug",
   "label": "The label of the criterion.",
   "description": "The criterion description.",
   "questions": "The question or questions the criterion is trying to answer.",
   "weight": 0
}


Creating scoring criteria

Creating new scoring criteria requires a name and at least 1 criterion in the criterion_set. For each criterion given, all fields are required.

A successful call returns a 201 status code with a response that contains the scoring criteria object.

Endpoint
POST https://api.zatomic.ai/v1/prompts/scoring/criteria

Request Properties
name
string
The criteria name.
use_case
string, optional
The use case for the criteria.
criterion_set
list of criterion objects

Properties for the criterion object:

slug
string
The slug for the criterion. Can only contain lowercase letters and underscores.
label
string
The criterion label.
description
string
The criterion description.
questions
string
The question or questions the criterion is trying to answer.
weight
integer
The weight assigned to the criterion. Must be a whole number between 1 and 999.
Request Body
{
   "name": "Criteria name",
   "use_case": "The criteria use case.",
   "criterion_set": [
      {
         "slug": "criterion_slug",
         "label": "The label of the criterion.",
         "description": "The criterion description.",
         "questions": "The question or questions the criterion is trying to answer.",
         "weight": 0
      }
   ]
}

Updating a scoring criteria

This endpoint allows you to update a scoring criteria.

A successful call returns a response that contains the updated scoring criteria object.

Endpoint
PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}

// Example
PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC

Request Properties
name
string
Name of the scoring criteria.
use_case
string, optional
Use case of the scoring criteria.
Request Body
{
   "name": "Criteria name",
   "use_case": "The criteria use case."
}

Deleting a scoring criteria

Permanently deletes a scoring criteria and all of its criterion. This action cannot be undone.

A successful call returns a 204 status code.

Endpoint
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}

// Example
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC


Retrieving a scoring criteria

Retrieves a scoring criteria from the given workspace.

A successful call returns a response that contains the scoring criteria object.

Endpoint
GET https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}

// Example
GET https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC


Retrieving all scoring criteria

Returns the list of all scoring criteria in a given workspace, sorted alphabetically by criteria name. A successful call returns a response that contains a list of scoring criteria objects.

This endpoint also returns the default system criteria named Default, which will be the last criteria in the list.

Endpoint
GET https://api.zatomic.ai/v1/prompts/scoring/criteria

Response Body
[
   {
      "criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
      "name": "Default",
      "use_case": "This is the use case for the default system criteria."
      "criterion_set": [
         {
            "criterion_id": "scn_2tVwnAnheHa6NKuYvKcXrqDB21z",
            "slug": "criterion_slug",
            "label": "The label of the criterion.",
            "description": "The criterion description.",
            "questions": "The question or questions the criterion is trying to answer.",
            "weight": 0
         }
      ]
   }
]


Generating scoring criteria

These endpoints generate scoring criteria based on a use case, which can then be used for prompt scoring. The first endpoint requires a use_case as part of the request, whereas the second endpoint will utilize the use case already associated with the scoring criteria.

For the second endpoint, the generated criterion will be different from any criterion that already exists in the scoring criteria.

The responses for both endpoints are the same. The list of criterion returned for the first endpoint can be used as input to create scoring criteria, while the list of criterion returned from the second endpoint can be used as input to add criterion to the existing scoring criteria.

You can also add a settings object to the request that specifies which AI model and provider you want to use to generate the scoring criteria. If settings is given in the request, the model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoints
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/generate
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/generate

// Example
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC/generate

Request Properties
use_case
string
Use case to generate scoring criteria. Only applies to the first endpoint.
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for the scoring.
provider_id
string, optional
The ID of the AI provider that contains the model to use for scoring.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
Request Body
{
   "use_case": "Use case for the criteria.",
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv"
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1"
   }
}
Response Properties
criterion_set
list of criterion objects

Properties for the criterion object:

slug
string
The slug for the criterion.
label
string
The criterion label.
description
string
The criterion description.
questions
string
The question or questions the criterion is trying to answer.
weight
integer
The weight assigned to the criterion.
Response Body
{
   "criterion_set": [
      {
         "slug": "criterion_slug",
         "label": "The label of the criterion.",
         "description": "The criterion description.",
         "questions": "The question or questions the criterion is trying to answer.",
         "weight": 0
      }
   ]
}


Creating scoring criterion

This endpoint is for adding a new criterion to an existing scoring criteria. All fields in the request are required.

A successful call returns a 201 status code with a response that contains the scoring criterion object.

Endpoint
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset

// Example
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC/criterionset

Request Properties
slug
string
The slug for the criterion. Can only contain lowercase letters and underscores.
label
string
The criterion label.
description
string
The criterion description.
questions
string
The question or questions the criterion is trying to answer.
weight
integer
The weight assigned to the criterion. Must be a whole number between 1 and 999.
Request Body
{
   "slug": "criterion_slug",
   "label": "The label of the criterion.",
   "description": "The criterion description.",
   "questions": "The question or questions the criterion is trying to answer.",
   "weight": 0
}

Updating a scoring criterion

This endpoint allows you to update an individual scoring criterion. All fields are required.

A successful call returns a response that contains the updated scoring criterion object.

Endpoint
PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset/{criterionId}

// Example
PUT https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC/criterionset/scn_2tVwnAnheHa6NKuYvKcXrqDB21z

Request Properties
slug
string
The slug for the criterion. Can only contain lowercase letters and underscores.
label
string
The criterion label.
description
string
The criterion description.
questions
string
The question or questions the criterion is trying to answer.
weight
integer
The weight assigned to the criterion. Must be a whole number between 1 and 999.
Request Body
{
   "slug": "criterion_slug",
   "label": "The label of the criterion.",
   "description": "The criterion description.",
   "questions": "The question or questions the criterion is trying to answer.",
   "weight": 0
}

Deleting a scoring criterion

This endpoint removes an individual criterion from a scoring criteria. This action cannot be undone.

A successful call returns a 204 status code.

Endpoint
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset/{criterionId}

// Example
DELETE https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC/criterionset/scn_2tVwnAnheHa6NKuYvKcXrqDB21z


Retrieving a scoring criterion

Retrieves an individual scoring criterion from the given scoring criteria and workspace.

A successful call returns a response that contains the scoring criterion object.

Endpoint
GET https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/criterionset/{criterionId}

// Example
GET https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC/criterionset/scn_2tVwnAnheHa6NKuYvKcXrqDB21z


Scoring Criteria Results

When a score is generated for a prompt version, the response includes the results for the criteria that was used to analyze the prompt. This criteria contains the results for each criterion used as part of the analysis.


The Scoring Criteria Results object

NOTE: While similar and sharing the same criteria_id, the scoring criteria results object is different from the scoring criteria object.

Properties
criteria_id
string
Unique ID of the scoring criteria.
name
string
The criteria name.
criterion_results
list of criterion objects

Properties for the criterion object:

slug
string
The slug for the criterion. Will be unique within the criterion set.
score
integer
The score given to the criterion.
weight
integer
The weight assigned to the criterion.
weighted_score
integer
The weighted score given to the criterion.
feedback
string
Feedback for the criterion based on its score.
The Scoring Criteria Results Object
{
   "criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
   "name": "Default",
   "criterion_results": [
      {
         "slug": "criterion_slug",
         "score": 0,
         "weight": 0,
         "weighted_score": 0,
         "feedback": "Specific feedback for the criterion."
      }
   ]
}

Scoring

Prompt scoring uses various criteria to analyze prompts and assign them a score and rating, with higher scores leading to better prompt performance.

Scoring can be performed and retrieved on individual prompt versions using their specific scoring endpoints. You can also score prompts without a version stored in the system by using the non-version specific endpoint.

Prompts are scored in the following ranges:

Scoring Range Prompt Rating
0 - 49% Poor
50 - 74% Fair
75 - 89% Good
90 - 100% Excellent
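
The table above maps directly to a small lookup, sketched here (the function is illustrative; the API returns the rating itself):

```python
def rating_for_score(score: float) -> str:
    """Map an overall score (0-100) to its prompt rating, per the table above."""
    if score >= 90:
        return "Excellent"
    if score >= 75:
        return "Good"
    if score >= 50:
        return "Fair"
    return "Poor"

assert rating_for_score(92) == "Excellent"
assert rating_for_score(60) == "Fair"
```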
Endpoints
POST https://api.zatomic.ai/v1/prompts/scoring


The Scoring object

NOTE: When scoring prompts stored outside of Zatomic, the version_timestamp and scoring_timestamp properties will both be null.

Properties
version_timestamp
datetime, nullable
The timestamp of the prompt version used to calculate the score.
scoring_timestamp
datetime, nullable
The timestamp for when the scoring occurred.
overall_score
integer
The overall score for the prompt version, from 0 to 100.
rating
string
The rating for the prompt version. Will be one of Excellent, Good, Fair, or Poor.
summary
object

Contains summary info about the prompt version.

strengths
string
The overall strengths of the prompt.
areas_for_improvement
string
Areas where the prompt could improve.
overall_feedback
string
Overall feedback for the prompt.
criteria
The criteria that was used to score the prompt version, with results for each criterion.
The Scoring Object
{
   "version_timestamp": "2024-12-19T20:33:15.971617Z",
   "scoring_timestamp": "2024-12-19T20:33:15.971617Z",
   "overall_score": 0,
   "rating": "Excellent",
   "summary": {
      "strengths": "The strengths of the prompt.",
      "areas_for_improvement": "Areas where the prompt could improve.",
      "overall_feedback": "Overall feedback for the prompt."
   },
   "criteria": {
      "criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
      "name": "Default",
      "criterion_results": [
         {
            "slug": "criterion_slug",
            "score": 0,
            "weight": 0,
            "weighted_score": 0,
            "feedback": "Specific feedback for the criterion."
         }
      ]
   }
}

Calculating a prompt score

NOTE: This is the endpoint for scoring a prompt stored outside of Zatomic. For the endpoint to score a prompt stored within Zatomic, see this endpoint.

Calculates the score for a prompt. A successful call returns a response that contains the scoring object.

The request requires the content for the prompt and an optional use_case. This endpoint uses the default system criteria to perform the scoring analysis.

You can also add a settings object to the request that specifies which AI model and provider you want to use for the scoring. If settings is given in the request, the model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoint
POST https://api.zatomic.ai/v1/prompts/scoring

Request Properties
content
string
The prompt content.
use_case
string, optional
The use case for the prompt. Recommended to improve analysis.
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for the scoring.
provider_id
string, optional
The ID of the AI provider that contains the model to use for scoring.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
Request Body
{
   "content": "The prompt content.",
   "use_case": "Use case for the prompt.",
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv"
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1"
   }
}

Risk

Prompts and prompt versions can be analyzed for potential risk related to possible bias, ethical concerns, and safety issues. This helps ensure your prompts produce results consistent with your responsible AI initiatives.

Risk analysis can be performed and retrieved on individual prompt versions using their specific risk endpoints. You can also analyze the risk for prompts without a version stored in the system by using the non-version specific endpoint.

Bias, ethical concerns, and safety issues are assigned one of the following risk levels: low, medium, or high.

Endpoints
POST https://api.zatomic.ai/v1/prompts/risk


The Risk object

NOTE: When analyzing the risk of prompts stored outside of Zatomic, the version_timestamp and risk_timestamp properties will both be null.

Properties
version_timestamp
datetime, nullable
The timestamp of the prompt version used to analyze the risk.
risk_timestamp
datetime, nullable
The timestamp for when the risk analysis occurred.
summary
object

Properties:

overall_feedback
string
Overall feedback for the risk analysis.
overall_risk_level
string
Overall risk level for the prompt. Will be low, medium, or high.
bias_analysis
object

Properties:

risk_level
string
Risk level for potential bias. Will be low, medium, or high.
feedback
string
Feedback about possible bias in the prompt.
issues
list of objects
Issues related to possible bias in the prompt. Includes affected prompt parts and suggested revisions.
ethical_analysis
object

Properties:

risk_level
string
Risk level for ethical concerns. Will be low, medium, or high.
feedback
string
Feedback about ethical concerns in the prompt.
issues
list of objects
Issues related to ethical concerns in the prompt. Includes affected prompt parts and suggested revisions.
safety_analysis
object

Properties:

risk_level
string
Risk level for safety issues. Will be low, medium, or high.
feedback
string
Feedback about safety issues in the prompt.
issues
list of objects
Issues related to safety issues in the prompt. Includes affected prompt parts and suggested revisions.
The Risk Object
{
   "version_timestamp": "2025-06-22T14:45:19.144393Z",
   "risk_timestamp": "2025-06-22T18:52:20.1727176Z",
   "summary": {
      "overall_feedback": "Overall feedback for the risk analysis.",
      "overall_risk_level": "low|medium|high"
   },
   "bias_analysis": {
      "risk_level": "low|medium|high",
      "feedback": "Feedback for the bias analysis.",
      "issues": [
         {
            "issue": "Potential bias.",
            "parts": [
               {
                  "part": "Affected part of the prompt.",
                  "revision": "Suggested revision for the prompt part."
               }
            ]
         }
      ]
   },
   "ethical_analysis": {
      "risk_level": "low|medium|high",
      "feedback": "Feedback for the ethical analysis.",
      "issues": [
         {
            "issue": "Potential bias.",
            "parts": [
               {
                  "part": "Affected part of the prompt.",
                  "revision": "Suggested revision for the prompt part."
               }
            ]
         }
      ]
   },
   "safety_analysis": {
      "risk_level": "low|medium|high",
      "feedback": "Feedback for the safety analysis.",
      "issues": [
         {
            "issue": "Safety issue.",
            "parts": [
               {
                  "part": "Affected part of the prompt.",
                  "revision": "Suggested revision for the prompt part."
               }
            ]
         }
      ]
   }
}
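The three analysis sections share the same shape, so client code can walk them uniformly. Below is a minimal sketch that flattens every issue in a risk object into one list; the field names follow the risk object schema above, and the sample payload is abridged from the example response.

```python
# Sketch: flatten the risk object's per-analysis issues into one list.
# Field names follow the risk object schema; the sample dict is abridged.

def collect_risk_issues(risk):
    """Return (analysis_name, risk_level, issue) tuples for every issue found."""
    flattened = []
    for name in ("bias_analysis", "ethical_analysis", "safety_analysis"):
        analysis = risk.get(name, {})
        for issue in analysis.get("issues", []):
            flattened.append((name, analysis.get("risk_level"), issue["issue"]))
    return flattened

sample = {
    "summary": {"overall_risk_level": "medium"},
    "bias_analysis": {"risk_level": "low", "feedback": "...", "issues": []},
    "ethical_analysis": {"risk_level": "medium", "feedback": "...", "issues": [
        {"issue": "Ethical concern.", "parts": []}
    ]},
    "safety_analysis": {"risk_level": "high", "feedback": "...", "issues": [
        {"issue": "Safety issue.", "parts": []}
    ]},
}

print(collect_risk_issues(sample))
```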

Analyzing prompt risk

NOTE: This is the endpoint for analyzing the risk of a prompt stored outside of Zatomic. For the endpoint to analyze the risk of a prompt stored within Zatomic, see this endpoint.

Analyzes the risk of a prompt. A successful call returns a response that contains the risk object.

You can add a settings object to the request that specifies which AI model and provider you want to use for the risk analysis. If settings is given in the request, the model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoint
POST https://api.zatomic.ai/v1/prompts/risk

Request Properties
content
string
The prompt content.
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for the risk analysis.
provider_id
string, optional
The ID of the AI provider that contains the model to use for the risk analysis.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
Request Body
{
   "content": "The prompt content.",
   "include_examples": false,
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv"
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1"
   }
}
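Putting the pieces together, here is a minimal sketch of building this request with Python's standard library. It assumes the X-Api-Key header from the Authentication section; the prompt content and `YOUR_API_KEY` placeholder are illustrative.

```python
# Sketch: building a POST request for /v1/prompts/risk with the stdlib.
# Replace YOUR_API_KEY with a real key before sending.
import json
import urllib.request

BASE_URL = "https://api.zatomic.ai/v1"

def build_risk_request(api_key, content, settings=None):
    """Build (but do not send) the POST request for the risk endpoint."""
    body = {"content": content}
    if settings is not None:
        body["settings"] = settings  # must include model_source and model_id
    return urllib.request.Request(
        f"{BASE_URL}/prompts/risk",
        data=json.dumps(body).encode("utf-8"),
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_risk_request("YOUR_API_KEY", "Summarize the attached report.")
# To send: risk = json.load(urllib.request.urlopen(req))
print(req.full_url)
```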

Balance

Balance refers to the overall effectiveness of the structure and makeup of a prompt version. When the balance of a prompt version is analyzed, the prompt's content is broken down into meaningful phrases, which are then categorized to determine the balance of the prompt.

Balance analysis can be performed and retrieved on individual prompt versions using their specific balance endpoints. You can also analyze the balance for prompts without a version stored in the system by using the non-version specific endpoint.

Prompt phrases are put into one of the following categories:

Phrase Category Description
Instruction Tells AI models what needs to be done. Ideal distribution is 20% - 35%.
Entity Gives AI models context and specificity. Ideal distribution is 20% - 35%.
Concept Defines themes and abstract ideas for AI models to consider. Ideal distribution is 15% - 30%.
Detail Provides supporting context to help refine AI responses. Ideal distribution is 15% - 30%.

The categories are then analyzed to determine their distribution, as one of the following:

Category Distribution Description
Balanced The prompt has the right amount of phrases in that category to ensure high-quality AI responses.
Overused There are too many phrases in that category that could lead to overly complex, unfocused output.
Underused There aren't enough phrases in that category for the AI model to produce meaningful results.
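The distribution labels follow directly from the ideal ranges in the first table. As a rough illustration, here is how a percentage maps to a label; the function name is illustrative, and the API computes the distribution for you.

```python
# Sketch: classify a category's phrase percentage against the ideal
# distribution ranges from the tables above. Illustrative only; the
# balance endpoint returns the distribution for you.

IDEAL_RANGES = {
    "Instruction": (20, 35),
    "Entity": (20, 35),
    "Concept": (15, 30),
    "Detail": (15, 30),
}

def classify_distribution(category, phrase_percent):
    low, high = IDEAL_RANGES[category]
    if phrase_percent < low:
        return "Underused"
    if phrase_percent > high:
        return "Overused"
    return "Balanced"

print(classify_distribution("Instruction", 28))  # within 20 - 35
```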
Endpoints
POST https://api.zatomic.ai/v1/prompts/balance


The Balance object

NOTE: When analyzing the balance of prompts stored outside of Zatomic, the version_timestamp and balance_timestamp properties will both be null.

Properties
version_timestamp
datetime, nullable
The timestamp of the prompt version used to analyze the balance.
balance_timestamp
datetime, nullable
The timestamp for when the balance analysis occurred.
summary
object

Properties of the summary object:

overall_feedback
string
Overall feedback based on the prompt balance.
recommendations
string
Recommendations to improve the prompt balance.
categories
list of objects

Properties of the categories object:

category
string
Name of the category. Will be Instruction, Entity, Concept, or Detail.
feedback
string
Feedback about the balance of the category in the prompt.
phrase_count
integer
Number of prompt phrases in the category.
phrase_percent
decimal
Percent of prompt phrases for the category.
distribution
string
Distribution of the category. Will be Balanced, Overused, or Underused.
phrases
list of objects

Properties of the phrases object:

phrase
string
The prompt phrase.
category
string
The category for the phrase. Will be Instruction, Entity, Concept, or Detail.
reason
string
The reason the phrase was assigned to its category.
The Balance Object
{
   "version_timestamp": "2024-12-19T20:33:15.971617Z",
   "balance_timestamp": "2024-12-19T20:33:15.971617Z",
   "summary": {
      "overall_feedback": "Overall feedback based on the prompt balance.",
      "recommendations": "Recommendations to improve the prompt balance."
   },
   "categories": [
      {
         "category": "The category name.",
         "feedback": "Feedback about the balance of the category in the prompt.",
         "phrase_count": 0,
         "phrase_percent": 0.0,
         "distribution": "The category distribution."
      }
   ],
   "phrases": [
      {
         "phrase": "The prompt phrase.",
         "category": "The prompt category.",
         "reason": "Reason the phrase was assigned to its category."
      }
   ]
}

Analyzing prompt balance

NOTE: This is the endpoint for analyzing the balance of a prompt stored outside of Zatomic. For the endpoint to analyze the balance of a prompt stored within Zatomic, see this endpoint.

Analyzes the balance of a prompt. A successful call returns a response that contains the balance object.

The request requires the prompt content and an include_examples flag, which determines if the prompt's examples should be included in the balance analysis. Including examples can add significant time when analyzing the balance of a prompt.

You can also add a settings object to the request that specifies which AI model and provider you want to use for the balance analysis. If settings is given in the request, the model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoint
POST https://api.zatomic.ai/v1/prompts/balance

Request Properties
content
string
The prompt content.
include_examples
boolean
Flag to include the prompt's examples in the balance analysis.
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for the balance analysis.
provider_id
string, optional
The ID of the AI provider that contains the model to use for the balance analysis.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
Request Body
{
   "content": "The prompt content.",
   "include_examples": false,
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv"
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1"
   }
}
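The settings rules above (model_source and model_id are required whenever settings is present, and model_source is either zatomic or provider) can be checked client-side before sending. This is a purely illustrative sketch; the API enforces these rules server-side.

```python
# Sketch: client-side check of the settings rules described above.
# Illustrative only; the API validates the request server-side.

def validate_settings(settings):
    missing = [k for k in ("model_source", "model_id") if not settings.get(k)]
    if missing:
        raise ValueError(f"settings requires: {', '.join(missing)}")
    if settings["model_source"] not in ("zatomic", "provider"):
        raise ValueError("model_source must be 'zatomic' or 'provider'")

validate_settings({"model_source": "zatomic", "model_id": "aim_..."})  # passes
```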

Heatmaps

Heatmaps allow you to visualize the prompt phrases that the AI model gave the most (or least) attention to. Using prompt heatmaps through the API gives you the raw data to render heatmap visualizations however you like.

Generating heatmap data can be performed and retrieved on individual prompt versions using their specific heatmap endpoints. You can also generate heatmap data for prompts without a version stored in the system by using the non-version specific endpoint.

When a prompt heatmap is generated, the prompt's content is broken down into meaningful phrases, and each phrase is assigned a score based on how much attention the AI model gave it. Those attention scores are then assigned a corresponding color level.

Attention Score Color Level
1 very-light
2 light
3 medium
4 dark
5 very-dark
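For client-side rendering, the table above reduces to a simple lookup. The helper name is illustrative; the API already returns the color alongside each score.

```python
# The attention-score-to-color-level mapping from the table above,
# expressed as a lookup for client-side rendering. Illustrative only;
# the heatmap response includes the color for each phrase.

ATTENTION_COLORS = {1: "very-light", 2: "light", 3: "medium", 4: "dark", 5: "very-dark"}

def color_for_score(score):
    if score not in ATTENTION_COLORS:
        raise ValueError("attention scores are whole numbers from 1 to 5")
    return ATTENTION_COLORS[score]

print(color_for_score(3))
```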
Endpoints
POST https://api.zatomic.ai/v1/prompts/heatmap


The Heatmap object

NOTE: When generating heatmaps of prompts stored outside of Zatomic, the version_timestamp and heatmap_timestamp properties will both be null.

Properties
version_timestamp
datetime, nullable
The timestamp of the prompt version used to generate the heatmap.
heatmap_timestamp
datetime, nullable
The timestamp for when the heatmap was generated.
summary
object

Properties of the summary object:

overall_feedback
string
The overall feedback for the heatmap analysis.
phrases
list of objects

Properties of the phrases object:

phrase
string
The prompt phrase.
attention_score
integer
The attention score assigned to the phrase. Will be a whole number from 1 - 5.
color
string
The color level for the phrase, based on its attention score. Will be very-light, light, medium, dark, or very-dark.
reason
string
The reason why the AI model assigned the phrase its attention score.
The Heatmap Object
{
   "version_timestamp": "2024-12-19T20:33:15.971617Z",
   "heatmap_timestamp": "2024-12-19T20:33:15.971617Z",
   "summary": {
      "overall_feedback": "Overall feedback for the heatmap analysis."
   },
   "phrases": [
      {
         "phrase": "The prompt phrase.",
         "attention_score": 1,
         "color": "The color level assigned to the phrase.",
         "reason": "The reason why the AI model assigned the phrase its attention score."
      }
   ]
}
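Since the API hands you raw heatmap data, rendering is up to you. One possible approach is emitting HTML spans with one CSS class per color level; the class names and sample phrases below are illustrative.

```python
# Sketch: render heatmap phrases as HTML spans, one CSS class per color
# level. Class names are illustrative; visualization is left to you.
import html

def render_heatmap_html(heatmap):
    spans = []
    for p in heatmap["phrases"]:
        spans.append(
            f'<span class="heat-{p["color"]}" title="score {p["attention_score"]}">'
            f'{html.escape(p["phrase"])}</span>'
        )
    return " ".join(spans)

sample = {
    "phrases": [
        {"phrase": "Summarize the report", "attention_score": 5, "color": "very-dark"},
        {"phrase": "in a friendly tone", "attention_score": 2, "color": "light"},
    ]
}
print(render_heatmap_html(sample))
```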

Generating a prompt heatmap

NOTE: This is the endpoint for generating heatmap data for a prompt stored outside of Zatomic. For the endpoint to generate heatmap data for a prompt stored within Zatomic, see this endpoint.

Generates the heatmap data for a prompt. A successful call returns a response that contains the heatmap object.

The request requires the prompt content and an include_examples flag, which determines if the prompt's examples should be included in the heatmap. Including examples can add significant time when generating the heatmap data of a prompt.

You can also add a settings object to the request that specifies which AI model and provider you want to use for the heatmap. If settings is given in the request, the model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoint
POST https://api.zatomic.ai/v1/prompts/heatmap

Request Properties
content
string
The prompt content.
include_examples
boolean
Flag to include the prompt's examples in the heatmap.
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for the heatmap.
provider_id
string, optional
The ID of the AI provider that contains the model to use for the heatmap.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
Request Body
{
   "content": "The prompt content.",
   "include_examples": false,
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv"
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1"
   }
}