Key Concepts

The key concepts you’ll see throughout Agent Studio are as follows:

  • LLMs (Foundation Models / FMs): the reasoning engine that interprets instructions and generates outputs
  • Tools: the capabilities an agent can use to take actions and retrieve or apply context
  • MCP: a standard way to expose tools so external agent frameworks can discover and use them
  • Agents: the orchestrator that combines an LLM with instructions and tools to complete tasks reliably
  • The Alation Knowledge Layer: the trusted, governed context that grounds agents in enterprise reality

Understanding how these concepts fit together will help you design agents that are reliable, grounded in trusted Alation context, and ready for enterprise workflows.

An LLM (Large Language Model) is the model an agent uses to understand natural language, reason about a task, and generate outputs. You may also hear LLMs referred to as Foundation Models (FMs)—these terms are often used interchangeably.

In Agent Studio, the LLM is responsible for:

  • Interpreting user intent (what someone is trying to do)
  • Following instructions defined in prompts
  • Deciding when to use an action (tool) to retrieve or validate information
  • Producing the final response or output

An LLM can be very capable, but without the right context it may produce incomplete, inconsistent, or outdated answers. That’s why tools and trusted knowledge matter.

Agent Studio supports both OpenAI and Anthropic models. Support for bringing your own enterprise LLM is coming soon.

A Tool (also referred to as an Action) is a callable capability that an agent can use to take actions.

Tools are what let an agent move beyond “just generating text” and instead:

  • Retrieve trusted context from Alation
  • Interact with enterprise metadata and governed entities
  • Connect to other systems (for example, your database or other enterprise platforms) to carry out tasks
  • Support workflows that rely on accurate, structured information

In practice, tools are the mechanism that lets an agent do something—fetch data, validate information, trigger a workflow, or interact with systems—rather than only respond conversationally.
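As a sketch, a tool can be modeled as a named, described callable that an agent invokes to get work done. The names and the catalog lookup below are illustrative, not the Agent Studio API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """A callable capability: a name, a description the LLM can read
    when deciding whether to use it, and a function that does the work."""
    name: str
    description: str
    run: Callable[..., str]

# Hypothetical tool: look up a table's description in a (stubbed) catalog.
def lookup_table(table_name: str) -> str:
    catalog = {"orders": "Customer orders, refreshed nightly."}
    return catalog.get(table_name, "No description found.")

search_catalog = Tool(
    name="search_catalog",
    description="Retrieve the trusted description of a catalog table.",
    run=lookup_table,
)

# The agent would invoke this when it needs grounded context.
print(search_catalog.run("orders"))
```

The description field matters as much as the function itself: it is what lets the LLM decide when a tool is the right next step.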

A tool call represents an action taken by an agent.

Customers are metered on actions, and an action is counted each time an Alation tool is called. In other words, every underlying call to an Alation-provided tool counts toward the customer’s usage.

Metering applies only to Alation’s built-in (base) tools. If a customer imports an external tool, or exposes a custom-built agent as a tool, usage is still measured only by the Alation-provided tool calls that happen underneath. A complete list of these Alation-provided tools that are metered is available here.

The number of tool calls an agent makes per request is not fixed and depends on the user's request. In practice, most agents make 2 to 3 tool calls per request, and the highest we have seen is 10 tool calls for a single request.
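Because every underlying Alation tool call is one metered action, a rough usage estimate is just requests multiplied by average tool calls per request. A minimal sketch, where the 2 to 3 average comes from the text above and the request volume is a made-up input:

```python
def estimate_metered_actions(requests_per_month: int,
                             avg_tool_calls_per_request: float = 2.5) -> int:
    """Rough estimate: each call to an Alation-provided tool is one metered action."""
    return round(requests_per_month * avg_tool_calls_per_request)

# Hypothetical workload: 10,000 agent requests per month.
print(estimate_metered_actions(10_000))  # 25000
```

Actual consumption varies by agent design, since tool calls per request are not fixed.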

MCP (Model Context Protocol) is a way to make tools available to agents in a standardized, interoperable way.

In Agent Studio, MCP is how tools (and out-of-the-box agents) can be exposed so external AI development environments can:

  • Discover what tools are available
  • Call those tools in a consistent way
  • Use Alation-powered actions inside workflows running outside of Alation

Practically, MCP servers are one of the main entry points for using Agent Studio outside the Alation UI. They let you connect your existing agent framework to Agent Studio’s tools and agents without needing to rebuild those capabilities yourself.
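At the protocol level, MCP clients discover and invoke tools through standardized JSON-RPC messages, primarily `tools/list` and `tools/call`. A simplified sketch of that exchange, with an in-memory handler standing in for a real MCP server (tool names here are hypothetical):

```python
import json

# Stand-in for a real MCP server's tool registry.
TOOLS = {
    "search_catalog": lambda args: f"Results for {args['query']!r}",
}

def handle(request: dict) -> dict:
    """Handle the two core MCP tool methods (heavily simplified)."""
    if request["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif request["method"] == "tools/call":
        params = request["params"]
        result = {"content": TOOLS[params["name"]](params["arguments"])}
    else:
        result = {"error": "unknown method"}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# 1. Discover what tools are available.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
# 2. Call a tool in a consistent way.
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "search_catalog",
                          "arguments": {"query": "orders"}}})
print(json.dumps(call["result"]))
```

Because the discovery and call shapes are standardized, any MCP-aware client can use the same tools without custom integration code.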

Agent Studio supports two ways to use MCP servers: hosted MCP servers and local MCP servers. Both expose Agent Studio capabilities over MCP, but they differ in where the server runs and what it can access.

Most customers use hosted MCP servers by default. In this mode, the MCP servers are hosted by Alation in our cloud.

Hosted MCP servers can expose:

  • Base tools
  • Base (out-of-the-box) agents
  • Custom agents you build in Agent Studio (including those created in the UI)

This is the best option when you want the full Agent Studio experience and the ability to build, publish, and use custom agents outside of Alation.

If you are using the Python SDK, you can also run a local MCP server, meaning you host the MCP server on your own machine using the SDK.

By default, local MCP servers provide access to:

  • Base tools
  • Base (out-of-the-box) agents

Local MCP does not support hosting custom agents created in the Agent Studio UI. In other words, you cannot build an agent in the Agent Studio UI and then host that agent on a local MCP server.

Setup instructions for running a local MCP server are available here.

An agent is a system that can take a goal and work toward completing it—often by combining reasoning with tool usage.

In Agent Studio, you can think of an agent as:

Agent = LLM + Prompt + Tools

  • LLM (FM): interprets the request and generates outputs
  • Prompt: the instructions, behavior guidelines, and task framing that shape how the agent operates
  • Tools: the actions the agent can call to retrieve trusted context and execute tasks
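The formula above can be sketched as a toy loop: the model decides whether a tool is needed, the tool returns trusted context, and the model produces the grounded answer. The "LLM" here is a hard-coded stub, not a real model, and all names are illustrative:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    prompt: str                                   # instructions / task framing
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def llm(self, text: str) -> str:
        """Stub for a real LLM: asks for a tool when it lacks context."""
        if "orders" in text and "context:" not in text:
            return "CALL lookup_table orders"
        return f"Answer ({self.prompt}): {text}"

    def run(self, request: str) -> str:
        decision = self.llm(request)
        if decision.startswith("CALL"):
            _, tool_name, arg = decision.split(" ", 2)
            context = self.tools[tool_name](arg)   # the tool grounds the answer
            return self.llm(f"{request} context: {context}")
        return decision

agent = Agent(
    prompt="Be concise",
    tools={"lookup_table": lambda t: f"'{t}' holds customer orders"},
)
print(agent.run("What is the orders table?"))
```

Real agents repeat this decide-call-observe loop as many times as the task requires, which is why tool calls per request vary.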

Agents are most effective when they are grounded in trusted knowledge and have clear, governed ways to access that knowledge.

The foundation for Agent Studio is the Alation Knowledge Layer.

Agent Studio is powered by an underlying knowledge layer built from the trusted metadata you already use and deploy inside Alation. It turns the context you already maintain into something AI agents can reliably consume.

This trusted metadata is composed of:

  • Catalog data you already have in Alation
  • Data products that extend and operationalize that context

Because agents are only as good as the knowledge you give them, enterprise AI initiatives need a source of information that is not only comprehensive but also trusted, governed, and up to date.

The Alation Knowledge Layer unites the Data Catalog and Data Products to provide structured business context in one place (including definitions, policies, lineage, and governance). It is continuously refreshed and reconciled across systems so the context stays accurate, trusted, and ready for Agent Studio to consume.

Data products are published, trusted assets built from the data catalog for a specific business purpose. They package the underlying data with the business meaning required to use it correctly, including metrics, definitions, and other semantic context, so teams can consistently discover, understand, and operationalize the data.

Agent Studio ties these concepts together into a single system for building enterprise-ready agents:

  • The Alation Knowledge Layer provides the trusted, governed context.
  • Tools sit on top of the knowledge layer (and can also connect to other systems like your database) to take actions and retrieve information.
  • MCP makes those tools accessible outside of Alation so they can plug into the AI workflows you already run.
  • Agents sit on top of tools, using them to complete tasks and workflows.
  • LLMs (FMs) power the agents—interpreting intent, reasoning through steps, and deciding when and how to use tools.

This layered approach is what enables agents to move beyond prototypes: they stay grounded in trusted enterprise context, can take meaningful actions through tools, and remain adaptable through LLM-powered reasoning.