Tyk AI Studio’s Tool System allows Large Language Models (LLMs) to interact with external APIs and services, dramatically extending their capabilities beyond simple text generation. This enables LLMs to perform actions, retrieve real-time data, and integrate with other systems.
Tools bridge the gap between conversational AI and external functionalities. By defining tools, you allow LLMs interacting via the Chat Interface or API to:
Access real-time information (e.g., weather, stock prices, database records).
Interact with other software (e.g., search JIRA tickets, update CRM records, trigger webhooks).
Perform complex calculations or data manipulations using specialized services.
Tool Definition: A Tool in Tyk AI Studio is essentially a wrapper around an external API. Its structure and available operations are defined using an OpenAPI Specification (OAS) (v3.x, JSON or YAML).
Allowed Operations: From the provided OAS, administrators select the specific operationIds that the LLM is permitted to invoke. This provides granular control over which parts of an API are exposed.
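To make the operationId selection concrete, here is a minimal sketch in Python. The spec below is a hypothetical example (not a real Tyk AI Studio API), and the helper function simply collects the operationIds an administrator could choose from:

```python
# Hypothetical OpenAPI v3 spec, reduced to the fields relevant here.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Weather API", "version": "1.0.0"},
    "paths": {
        "/forecast": {
            "get": {"operationId": "getForecast", "summary": "Current forecast"},
        },
        "/admin/purge": {
            "post": {"operationId": "purgeCache", "summary": "Purge cached data"},
        },
    },
}

def list_operation_ids(spec: dict) -> list[str]:
    """Collect every operationId declared under the spec's paths."""
    ops = []
    for path_item in spec["paths"].values():
        for method_spec in path_item.values():
            if isinstance(method_spec, dict) and "operationId" in method_spec:
                ops.append(method_spec["operationId"])
    return ops

# An administrator might expose only the read-only operation to the LLM:
allowed = {"getForecast"}
exposed = [op for op in list_operation_ids(spec) if op in allowed]
print(exposed)  # ['getForecast']
```

The point of the allow-list is that destructive operations such as purgeCache stay invisible to the LLM even though they exist in the same spec.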
Authentication: Tools often require authentication to access the target API. Tyk AI Studio handles this securely by integrating with Secrets Management. You configure the authentication method (e.g., Bearer Token, Basic Auth) defined in the OAS and reference a stored Secret containing the actual credentials.
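The pattern of referencing a stored Secret rather than embedding credentials can be sketched as follows. The secret store and reference names here are illustrative stand-ins, not Tyk AI Studio's actual secrets API:

```python
import base64

# Stand-in for a secrets backend; in practice credentials live in
# Secrets Management, never in the tool definition itself.
SECRET_STORE = {"weather-api-token": "s3cr3t-value"}

def build_auth_header(scheme: str, secret_ref: str) -> dict[str, str]:
    """Resolve a secret reference and build the header for an OAS security scheme."""
    credential = SECRET_STORE[secret_ref]
    if scheme == "bearer":
        return {"Authorization": f"Bearer {credential}"}
    if scheme == "basic":
        # For Basic auth the stored secret would hold "user:password".
        encoded = base64.b64encode(credential.encode()).decode()
        return {"Authorization": f"Basic {encoded}"}
    raise ValueError(f"unsupported scheme: {scheme}")

print(build_auth_header("bearer", "weather-api-token"))
```

The tool definition only carries the scheme and the secret reference; the credential itself is resolved at call time.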
Privacy Levels: Each Tool is assigned a privacy level, which is compared against the privacy level of the LLM Configuration in use. A Tool can only be invoked if its privacy level is less than or equal to the LLM’s level, preventing sensitive tools from being paired with less secure or external LLMs. Privacy levels define how data is protected by controlling LLM access based on its sensitivity:
Public – Safe to share (e.g., blogs, press releases).
Internal – Company-only info (e.g., reports, policies).
Confidential – Sensitive business data (e.g., financials, strategies).
Restricted (PII) – Personal data (e.g., names, emails, customer info).
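The comparison rule above reduces to a single ordered check. The level names follow the list above; the numeric ordering is an assumption for illustration:

```python
from enum import IntEnum

class PrivacyLevel(IntEnum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED_PII = 4

def tool_allowed(tool_level: PrivacyLevel, llm_level: PrivacyLevel) -> bool:
    """A tool is usable only if its level is <= the LLM configuration's level."""
    return tool_level <= llm_level

# A Confidential tool cannot be used with an Internal-level LLM:
print(tool_allowed(PrivacyLevel.CONFIDENTIAL, PrivacyLevel.INTERNAL))  # False
# A Public tool can be used with a Confidential-level LLM:
print(tool_allowed(PrivacyLevel.PUBLIC, PrivacyLevel.CONFIDENTIAL))    # True
```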
Tool Catalogues: Tools are grouped into logical collections called Catalogues. This simplifies management and access control.
Filters: Optional Filters can be applied to tool interactions to pre-process requests sent to the tool or post-process responses received from it (e.g., for data sanitization).
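A request/response filter pair for data sanitization might look like the following sketch. The hook shape is hypothetical, not Tyk AI Studio's actual filter interface:

```python
import re

def request_filter(payload: dict) -> dict:
    """Drop fields the downstream API should never receive (illustrative key name)."""
    return {k: v for k, v in payload.items() if k != "internal_user_id"}

def response_filter(text: str) -> str:
    """Mask email addresses before the response reaches the LLM."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[redacted email]", text)

print(request_filter({"city": "Berlin", "internal_user_id": "u-42"}))
print(response_filter("Contact alice@example.com for details."))
```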
Documentation: Administrators can provide additional natural language documentation or instructions specifically for the LLM, guiding it on how and when to use the tool effectively.
Dependencies: Tools can declare dependencies on other tools, although the exact usage pattern might vary.
When a user interacts with an LLM via the Chat Interface:
The LLM receives the user prompt and the definitions of available tools (based on user group permissions and Chat Experience configuration).
If the LLM determines that using one or more tools is necessary to answer the prompt, it generates a request to invoke the specific tool operation(s) with the required parameters.
Tyk AI Studio intercepts this request.
It validates the request, checks permissions, and retrieves necessary secrets for authentication.
Tyk AI Studio applies any configured request Filters.
It calls the external API defined by the Tool.
It receives the response from the external API.
Tyk AI Studio applies any configured response Filters.
It sends the tool’s response back to the LLM.
The LLM uses the tool’s response to formulate its final answer to the user.
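The invocation flow above can be condensed into a runnable sketch. Every helper here is a trivial stand-in for Tyk AI Studio internals; the names and data shapes are illustrative only:

```python
ALLOWED_OPS = {"getForecast"}
SECRETS = {"getForecast": {"Authorization": "Bearer s3cr3t"}}  # illustrative

def check_permissions(op: str) -> None:
    if op not in ALLOWED_OPS:
        raise PermissionError(f"operation not allowed: {op}")

def apply_request_filters(args: dict) -> dict:
    # e.g. strip fields the downstream API should not see
    return {k: v for k, v in args.items() if not k.startswith("internal_")}

def call_external_api(op: str, args: dict, headers: dict) -> dict:
    # A real gateway would issue an authenticated HTTP request here.
    return {"operation": op, "echo": args}

def apply_response_filters(resp: dict) -> dict:
    return resp  # sanitization of the upstream response would happen here

def invoke_tool(op: str, args: dict) -> dict:
    check_permissions(op)                   # validate and authorize the request
    headers = SECRETS[op]                   # resolve stored credentials
    filtered = apply_request_filters(args)  # request filters
    raw = call_external_api(op, filtered, headers)
    return apply_response_filters(raw)      # response filters, then back to the LLM

print(invoke_tool("getForecast", {"city": "Oslo", "internal_trace": "x"}))
```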
Tools can also be accessed through the Model Context Protocol (MCP), providing:
Standardized Interface: Use MCP-compatible clients to interact with tools in a vendor-neutral way.
Enhanced Integration: Connect tools to MCP-enabled applications and AI frameworks.
Protocol Compliance: Leverage the growing ecosystem of MCP-compatible tools and clients.
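For a sense of what MCP access looks like on the wire: MCP is built on JSON-RPC 2.0, and tools are invoked with the tools/call method. The tool name and arguments below are illustrative:

```python
import json

# A JSON-RPC 2.0 request as an MCP client would send it to invoke a tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "getForecast",          # hypothetical tool name
        "arguments": {"city": "Oslo"},  # arguments per the tool's input schema
    },
}
print(json.dumps(request, indent=2))
```

Because this envelope is standardized, any MCP-compatible client can invoke the same tools without Tyk-specific integration code.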
This multi-access approach ensures that tools can be utilized across different interfaces and integration patterns, from simple chat interactions to complex programmatic integrations.