
BYOM Setup Guide

BYOM (Bring Your Own Model) lets you connect your own LLM provider credentials to Alation AI so agents can use models from your account.


| Provider | Credential Type | Auth Method |
|---|---|---|
| OpenAI | API_KEY | API key |
| Anthropic | API_KEY | API key |
| OpenAI-compatible (DeepSeek, Groq, Together AI, Fireworks, SambaNova, Cerebras) | API_KEY | API key + custom base URL |
| AWS Bedrock | AWS | IAM access key + secret key |
| Azure OpenAI | AZURE | API key or service principal |
| Google (Gemini / Vertex AI) | GCP | API key or service account |

Prerequisites:

  • Appropriate credentials for your LLM provider (API key, IAM credentials, service account, etc.)

Go to Agent Studio > Explore models, or navigate directly to /app/studio/models.

Click + Add model provider. The form fields change based on the provider you select.

| Field | Required | Description |
|---|---|---|
| Name | Yes | Display name (e.g., “My OpenAI Account”) |
| API Key | Yes | Your OpenAI API key (sk-proj-...) |

Tip: If you encounter errors with the OpenAI provider, try selecting Custom (OpenAI-Compatible) from the provider dropdown instead and set the base URL to https://api.openai.com/v1.

| Field | Required | Description |
|---|---|---|
| Name | Yes | Display name (e.g., “My Anthropic Account”) |
| API Key | Yes | Your Anthropic API key (sk-ant-...) |

Use this for providers that expose an OpenAI-compatible API (DeepSeek, Groq, Together AI, Fireworks, SambaNova, Cerebras, or self-hosted vLLM/Ollama).

| Field | Required | Description |
|---|---|---|
| Name | Yes | Display name (e.g., “Groq”) |
| API Key | Yes | Provider API key |
| Base URL | Yes | Provider’s API base URL (see table below) |

Common base URLs:

| Provider | Base URL |
|---|---|
| DeepSeek | https://api.deepseek.com/v1 |
| Together AI | https://api.together.xyz/v1 |
| Groq | https://api.groq.com/openai/v1 |
| SambaNova | https://api.sambanova.ai/v1 |
| Cerebras | https://api.cerebras.ai/v1 |
| Fireworks | https://api.fireworks.ai/inference/v1 |
| Self-hosted (vLLM/Ollama) | Your endpoint, e.g., http://llm-service:8000/v1 |

| Field | Required | Description |
|---|---|---|
| Name | Yes | Display name (e.g., “AWS Bedrock US West”) |
| Access Key ID | Yes | AWS IAM access key (AKIA...) |
| Secret Access Key | Yes | AWS IAM secret key |
| Region | Yes | AWS region (e.g., us-west-2) |
| Session Token | No | For temporary STS credentials only |

Note: The IAM user must have bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream permissions. For temporary credentials (STS or Vault-managed), you must update the credentials before they expire.
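The permissions named in the note above correspond to an IAM policy along these lines (a sketch; in practice, scope Resource to the specific model ARNs you use rather than *):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```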

Option A: API Key

| Field | Required | Description |
|---|---|---|
| Name | Yes | Display name |
| Endpoint | Yes | Azure OpenAI endpoint (e.g., https://myresource.openai.azure.com/) |
| API Version | Yes | API version (e.g., 2024-10-21) |
| API Key | Yes | Azure API key |

Option B: Service Principal

| Field | Required | Description |
|---|---|---|
| Name | Yes | Display name |
| Endpoint | Yes | Azure OpenAI endpoint |
| API Version | Yes | API version |
| Client ID | Yes | Azure AD application (client) ID |
| Client Secret | Yes | Azure AD client secret |
| Token URL | Yes | https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token |

Option A: API Key

| Field | Required | Description |
|---|---|---|
| Name | Yes | Display name |
| API Key | Yes | Google AI Studio API key (AIza...) |

Option B: Vertex AI (Service Account)

| Field | Required | Description |
|---|---|---|
| Name | Yes | Display name |
| Project ID | Yes | Google Cloud project ID |
| Location | Yes | Region (e.g., us-central1) |
| Service Account JSON | Yes | Full service account key JSON |

After entering credentials, click Validate & Continue. If validation succeeds, a Select models dialog appears with available models from the provider.

  • Use the search bar to filter models
  • Check the models you want to add
  • Click Continue

Note: If the model list appears empty (some OpenAI-compatible providers don’t support listing), manual model entry in the UI is not yet supported. Use the API instead (see Setup via API below).

For each selected model, set a friendly display name. Leave blank to use the model ID.

Click Complete to finish.

To verify, create a custom agent in Agent Studio. In the LLM dropdown, you should see your newly added model(s) by the display name you provided.


For automation or programmatic setup, BYOM can be configured via the API. The flow is: create credentials, validate, then create LLM config(s).

Go to https://<your-alation-domain>/ai/docs to open the Swagger UI.

```
POST /ai/api/v1/llm_credentials
Content-Type: application/json
```

OpenAI:

```json
{
  "name": "My OpenAI Account",
  "creds_type": "API_KEY",
  "api_key": "sk-proj-..."
}
```

Anthropic:

```json
{
  "name": "Anthropic",
  "creds_type": "API_KEY",
  "api_key": "sk-ant-..."
}
```

OpenAI-compatible (e.g., Groq):

```json
{
  "name": "Groq",
  "creds_type": "API_KEY",
  "api_key": "gsk_...",
  "base_url": "https://api.groq.com/openai/v1"
}
```

AWS Bedrock:

```json
{
  "name": "AWS Bedrock US West",
  "creds_type": "AWS",
  "aws_access_key_id": "AKIA...",
  "aws_secret_access_key": "...",
  "aws_region": "us-west-2"
}
```

Azure OpenAI (API Key):

```json
{
  "name": "Azure OpenAI",
  "creds_type": "AZURE",
  "azure_endpoint": "https://myresource.openai.azure.com/",
  "api_version": "2024-10-21",
  "api_key": "..."
}
```

Azure OpenAI (Service Principal):

```json
{
  "name": "Azure OpenAI SP",
  "creds_type": "AZURE",
  "azure_endpoint": "https://myresource.openai.azure.com/",
  "api_version": "2024-10-21",
  "azure_client_id": "...",
  "azure_client_secret": "...",
  "azure_token_url": "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token"
}
```

Google (API Key):

```json
{
  "name": "Gemini",
  "creds_type": "GCP",
  "api_key": "AIza..."
}
```

Google (Vertex AI Service Account):

```json
{
  "name": "Vertex AI",
  "creds_type": "GCP",
  "gcp_project": "my-project-id",
  "gcp_location": "us-central1",
  "gcp_service_account": { "type": "service_account", "project_id": "...", "...": "..." }
}
```
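Across providers, the request body shape follows the quick-reference table at the end of this guide. A small hypothetical helper (illustrative only, not part of the Alation API) can assemble a payload and fail fast on missing fields before you send it:

```python
# Illustrative helper: builds a request body for POST /ai/api/v1/llm_credentials
# and raises early if a required field for the creds_type is missing.
REQUIRED = {
    "API_KEY": ["api_key"],
    "AWS": ["aws_access_key_id", "aws_secret_access_key", "aws_region"],
    "AZURE": ["azure_endpoint", "api_version"],
    "GCP": [],  # GCP needs either an API key or a service account (checked below)
}

def build_credentials_payload(name: str, creds_type: str, **fields) -> dict:
    missing = [f for f in REQUIRED[creds_type] if f not in fields]
    if missing:
        raise ValueError(f"{creds_type} credentials are missing: {missing}")
    # AZURE accepts either an API key or a full service principal.
    if creds_type == "AZURE" and not (
        "api_key" in fields
        or {"azure_client_id", "azure_client_secret", "azure_token_url"} <= fields.keys()
    ):
        raise ValueError("AZURE needs api_key or a full service principal")
    # GCP accepts either an API key or a Vertex AI service account.
    if creds_type == "GCP" and not (
        "api_key" in fields
        or {"gcp_project", "gcp_location", "gcp_service_account"} <= fields.keys()
    ):
        raise ValueError("GCP needs api_key or a service account")
    return {"name": name, "creds_type": creds_type, **fields}
```

For example, build_credentials_payload("Groq", "API_KEY", api_key="gsk_...", base_url="https://api.groq.com/openai/v1") yields the Groq body shown above.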

```
POST /ai/api/v1/llm_credentials/{credentials_id}/validate
Content-Type: application/json
```

List available models:

```json
{
  "provider": "openai"
}
```

Validate a specific model:

```json
{
  "provider": "openai",
  "model_name": "gpt-4o"
}
```

Validate an embedding model:

```json
{
  "provider": "openai",
  "model_name": "text-embedding-3-small",
  "is_embedding": true
}
```

Response:

```json
{
  "valid": true,
  "models": ["gpt-4o", "gpt-4o-mini", "gpt-5.4", "..."],
  "error": null
}
```

For Bedrock, use "provider": "bedrock". The region is resolved automatically from the credentials.
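The valid field can come back as true, false, or null; as noted in the troubleshooting table below, null means the provider does not support model listing, so you should validate a specific model_name instead. A hypothetical helper for branching on the response:

```python
# Illustrative only: decide the next step from a /validate response body.
def next_step(resp: dict) -> str:
    if resp.get("valid") is None:
        # Provider does not support model listing; validate a specific model.
        return "retry with an explicit model_name (listing unsupported)"
    if resp["valid"]:
        return "ok"
    return f"fix credentials: {resp.get('error')}"
```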

Use the id returned from the credentials creation response as the llm_credentials_id.

```
POST /ai/api/v1/config/llm
Content-Type: application/json
```

Completion model:

```json
{
  "name": "My GPT-4o",
  "provider": "openai",
  "llm_credentials_id": "<id from credential creation response>",
  "model_name": "gpt-4o"
}
```

Embedding model:

```json
{
  "name": "My Embeddings",
  "provider": "openai",
  "llm_credentials_id": "<id from credential creation response>",
  "model_name": "text-embedding-3-large",
  "is_embedding": true,
  "embedding_dimensions": 3072
}
```

List LLM configs:

```
GET /ai/api/v1/config/llm
GET /ai/api/v1/config/llm?llm_credentials_id=<uuid>
GET /ai/api/v1/config/llm?is_embedding=true
```

List credentials:

```
GET /ai/api/v1/llm_credentials
```

Delete:

```
DELETE /ai/api/v1/config/llm/{id}
DELETE /ai/api/v1/llm_credentials/{id}
```

Note: Delete LLM configs before deleting credentials, as configs reference credentials.
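The ordering constraint in the note above can be sketched as a small hypothetical helper that, given the existing configs and the credential to remove, emits DELETE paths in a safe order (the endpoints are the ones listed above; the helper itself is illustrative):

```python
# Illustrative: order DELETE calls so configs referencing a credential
# are removed before the credential itself.
def safe_delete_paths(configs: list, credentials_id: str) -> list:
    paths = [
        f"/ai/api/v1/config/llm/{c['id']}"
        for c in configs
        if c.get("llm_credentials_id") == credentials_id
    ]
    # The credential goes last, once nothing references it.
    paths.append(f"/ai/api/v1/llm_credentials/{credentials_id}")
    return paths
```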


| Provider | creds_type | provider | Required Fields |
|---|---|---|---|
| OpenAI | API_KEY | openai | api_key |
| Anthropic | API_KEY | anthropic | api_key |
| OpenAI-compatible | API_KEY | openai | api_key, base_url |
| AWS Bedrock | AWS | bedrock | aws_access_key_id, aws_secret_access_key, aws_region |
| Azure OpenAI (Key) | AZURE | azure_openai | azure_endpoint, api_version, api_key |
| Azure OpenAI (SP) | AZURE | azure_openai | azure_endpoint, api_version, azure_client_id, azure_client_secret, azure_token_url |
| Google Gemini | GCP | google | api_key |
| Google Vertex AI | GCP | google | gcp_project, gcp_location, gcp_service_account |

| Error | Cause | Solution |
|---|---|---|
| Validation fails with 401 | Invalid API key or credentials | Verify the key is correct and active |
| Validation fails with 403 | Insufficient permissions | Check the IAM policy (Bedrock) or role assignments (Azure) |
| “Model not found” during validation | Model name doesn’t match provider | Check the exact model ID (e.g., anthropic.claude-sonnet-4-5-20250929-v1:0 for Bedrock) |
| “aws_region is required” | Region missing for Bedrock | Provide aws_region in the credentials |
| “api_key is required for API_KEY creds type” | Missing API key | Include api_key in the request |
| “azure_endpoint and api_version are required” | Missing Azure fields | Provide both the endpoint and the API version |
| Cannot delete credentials | LLM configs still reference them | Delete the associated LLM configs first |
| Validation returns valid: null | Provider doesn’t support model listing | Validate a specific model by providing a model_name |