The Portkey Prompts API completely follows the OpenAI schema for both requests and responses, making it a drop-in replacement for your existing Chat or Completions calls.

Features

Create your Prompt Template on the Portkey UI, define variables, and pass them with this API:
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "joke_topic": "elections",
      "humor_level": "10"
    }
  }'
You can override any model hyperparameter saved in the prompt template by sending its new value at the time of making a request:
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    },
    "temperature": 0.7,
    "max_tokens": 250,
    "presence_penalty": 0.2
  }'
Passing the {promptId} always calls the Published version of your prompt. But you can also call a specific template version by appending its version number, like {promptId@12}.
Version Tags:
  • @latest: Calls the latest version of your prompt (see the example after the snippet below)
  • @{NUMBER} (like @12): Calls the specified version number
  • No suffix: Portkey defaults to the Published version
curl -X POST "https://api.portkey.ai/v1/prompts/PROMPT_ID@12/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    }
  }'
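For instance, appending @latest targets the most recently saved version of the same prompt. A minimal sketch of such a call, using the same PROMPT_ID placeholder as above:
curl -X POST "https://api.portkey.ai/v1/prompts/PROMPT_ID@latest/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    }
  }'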
The Prompts API also supports streaming responses and completely follows the OpenAI schema.
  • Set stream: true explicitly in your request to enable streaming
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    },
    "stream": true
    "max_tokens": 250,
    "presence_penalty": 0.2
  }'
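Because the stream follows the OpenAI schema, the response arrives as a sequence of streamed chunks (OpenAI-style server-sent events) rather than a single JSON body. When testing from a terminal, curl's -N (--no-buffer) flag lets you watch the chunks print as they arrive; this is a minimal sketch, with the prompt ID and variables as placeholders:
curl -N -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    },
    "stream": true
  }'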