# Skills as Tools: LLM Function Calling and MCP Integration
This document explains how AgentUp's "skills" work as tools for LLMs and MCP servers, enabling AI systems to discover and call agent capabilities.
## What Are Skills in AgentUp?
In AgentUp, skills are the fundamental units of functionality that agents can perform. They can be:
- **Built-in handlers** - Provided by the framework (e.g., `analyze_image`, `process_document`)
- **Plugin skills** - Contributed by installed plugins
## Skills as LLM Tools (Function Calling)

### The `@ai_function` Decorator

Skills can be exposed as LLM-callable functions using the `@ai_function` decorator:
```python
@ai_function(
    description="Echo back user messages with optional modifications",
    parameters={
        "message": {"type": "string", "description": "Message to echo back"},
        "format": {"type": "string", "description": "Format style (uppercase, lowercase, title)"},
    },
)
@register_handler("echo")
async def handle_echo(task: Task) -> str:
    # Handler implementation
    return "Echoed message"
```
### How It Works

1. **Decoration Phase**: The `@ai_function` decorator creates a JSON schema and marks the function:

   ```python
   func._ai_function_schema = {
       "name": "echo",
       "description": "Echo back user messages...",
       "parameters": {
           "type": "object",
           "properties": {
               "message": {"type": "string", "description": "Message to echo back"},
               "format": {"type": "string", "description": "Format style..."},
           },
           "required": ["message", "format"],
       },
   }
   ```

2. **Registration Phase**: During startup, the framework discovers all `@ai_function`-decorated handlers and registers them in the `FunctionRegistry`.

3. **LLM Integration**: When an LLM needs to call functions, the registered schemas are supplied to it as the set of available tools.

4. **Execution**: When the LLM calls a function, the `FunctionExecutor` routes it to the appropriate handler.
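The phases above can be sketched in a few dozen lines. This is a simplified stand-in, not AgentUp's actual implementation; in particular, real handlers receive an A2A `Task` rather than raw keyword arguments, and the real `FunctionRegistry`/`FunctionExecutor` are separate components:

```python
# Simplified sketch of the decoration/registration/execution phases.
# Not AgentUp's real internals: real handlers take an A2A Task, and
# registration/execution live in separate framework components.
from typing import Any, Callable


def ai_function(description: str, parameters: dict[str, Any]) -> Callable:
    """Decoration phase: attach an LLM function schema to a handler."""
    def decorator(func: Callable) -> Callable:
        func._ai_function_schema = {
            "name": func.__name__,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": parameters,
                "required": list(parameters),
            },
        }
        return func
    return decorator


class FunctionRegistry:
    """Registration phase: collect decorated handlers at startup."""

    def __init__(self) -> None:
        self._functions: dict[str, Callable] = {}

    def register(self, func: Callable) -> None:
        schema = getattr(func, "_ai_function_schema", None)
        if schema is not None:
            self._functions[schema["name"]] = func

    def schemas(self) -> list[dict[str, Any]]:
        # LLM integration: these schemas are what the model sees as tools.
        return [f._ai_function_schema for f in self._functions.values()]

    async def execute(self, name: str, arguments: dict[str, Any]) -> Any:
        # Execution phase: route an LLM tool call to the matching handler.
        return await self._functions[name](**arguments)
```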
## Skills as MCP Tools

### MCP Server Mode
AgentUp agents can expose their skills as MCP (Model Context Protocol) tools, making them discoverable by other MCP clients:
```yaml
# agentup.yml
mcp:
  enabled: true
  server:
    enabled: true
    name: my-agent-mcp-server
    expose_handlers: true  # Expose skills as MCP tools
```
When `expose_handlers: true` is set, AgentUp automatically creates MCP tool definitions for all registered skills.
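A rough sketch of what that exposure amounts to: each skill's function schema maps onto an MCP tool definition, where MCP uses an `inputSchema` field (JSON Schema) in place of function calling's `parameters`. The helper name here is hypothetical:

```python
# Hypothetical helper illustrating skill-to-MCP-tool conversion; the
# actual AgentUp server code may structure this differently.
from typing import Any


def skills_to_mcp_tools(schemas: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Map LLM function schemas onto MCP tool definitions."""
    return [
        {
            "name": schema["name"],
            "description": schema["description"],
            "inputSchema": schema["parameters"],  # MCP's JSON Schema field
        }
        for schema in schemas
    ]
```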
### MCP Client Mode
AgentUp can also consume tools from external MCP servers:
```yaml
# agentup.yml
mcp:
  enabled: true
  client:
    enabled: true
    servers:
      - name: filesystem
        command: npx
        args: ['-y', '@modelcontextprotocol/server-filesystem', '/']
      - name: github
        command: npx
        args: ['-y', '@modelcontextprotocol/server-github']
        env:
          GITHUB_PERSONAL_ACCESS_TOKEN: '${GITHUB_TOKEN}'
```
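The `'${GITHUB_TOKEN}'` value shows environment-variable substitution in the config. A minimal sketch of how such `${VAR}` placeholders can be expanded (AgentUp's actual config loader may handle this differently):

```python
# Minimal ${VAR} expansion sketch; a real config loader may also handle
# defaults, escaping, or recursion into nested structures.
import os
import re

_VAR_PATTERN = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")


def expand_env(value: str) -> str:
    """Replace each ${NAME} with os.environ[NAME], leaving unset names as-is."""
    return _VAR_PATTERN.sub(
        lambda m: os.environ.get(m.group(1), m.group(0)), value
    )
```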
### MCP Tool Discovery and Integration
1. **Discovery**: On startup, AgentUp connects to the configured MCP servers and discovers their available tools.

2. **Registration**: MCP tools are registered alongside local skills in the `FunctionRegistry`.

3. **Unified Access**: LLMs see both local skills and MCP tools as available functions.

4. **Execution Routing**: The framework automatically routes calls to the appropriate backend.
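The routing step can be sketched as a dispatch on the tool's origin. This version keys on an `mcp_` name prefix, which is an assumption for illustration; the real framework may consult registry metadata instead, and the `mcp_client` interface here is likewise hypothetical:

```python
# Illustrative dispatch between local handlers and MCP tools. The mcp_
# prefix convention and the mcp_client interface are assumptions.
from typing import Any, Callable


async def route_call(
    name: str,
    arguments: dict[str, Any],
    local_handlers: dict[str, Callable],
    mcp_client: Any,
) -> Any:
    """Route an LLM tool call to a local skill or an external MCP server."""
    if name.startswith("mcp_"):
        # Strip the prefix and forward to the MCP client.
        return await mcp_client.call_tool(name[len("mcp_"):], arguments)
    return await local_handlers[name](**arguments)
```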
## Practical Example: Complete Flow
Here's how it all works together:
### 1. Agent Configuration
```yaml
# agentup.yml
mcp:
  enabled: true
  client:
    enabled: true
    servers:
      - name: filesystem
        command: npx
        args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp']
  server:
    enabled: true
    expose_handlers: true

skills:
  - plugin_id: analyze_image  # Built-in skill
  - plugin_id: ai_agent       # AI-powered skill that can call tools
```
### 2. Available Tools to LLM
When the AI agent processes a request, it sees these functions:
- `analyze_image` - From AgentUp's built-in multi-modal handler
- `mcp_read_file` - From the filesystem MCP server
- `mcp_write_file` - From the filesystem MCP server
- Any plugin skills that are registered
### 3. LLM Function Calling
```
User: "Please analyze the image in /tmp/chart.png and save a summary to /tmp/analysis.txt"

LLM thinks: I need to read the file, analyze it, and write the results.

1. Calls: mcp_read_file(path="/tmp/chart.png")
2. Calls: analyze_image(image_data=<binary data>)
3. Calls: mcp_write_file(path="/tmp/analysis.txt", content="Chart shows...")
```
### 4. Routing and Execution
```python
# Step 1: MCP tool call
await mcp_client.call_tool("read_file", {"path": "/tmp/chart.png"})

# Step 2: Local skill call
analyze_handler = get_handler("analyze_image")
result = await analyze_handler(task_with_image_data)

# Step 3: MCP tool call
await mcp_client.call_tool("write_file", {"path": "/tmp/analysis.txt", "content": result})
```
## Configuration-Driven Tool Exposure

### Skill Definitions in Agent Config
```yaml
skills:
  - plugin_id: weather_lookup
    name: Weather Lookup
    description: Get current weather for a location
    tags: [weather, external_api]
    # This skill can be called by LLMs when AI routing is enabled

  - plugin_id: file_processor
    name: File Processor
    description: Process uploaded files
    tags: [file, processing, multimodal]
    # This skill handles file uploads and can be called via function calling
```
### AI Routing Mode
```yaml
# agentup.yml
routing:
  default_mode: ai  # Skills available as LLM tools

# OR per-skill
skills:
  - plugin_id: data_analysis
    routing_mode: ai  # Available as LLM tool
  - plugin_id: simple_greeting
    routing_mode: direct  # Direct keyword matching only
    keywords: [hello, hi]
```
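A sketch of how the two modes could interact, assuming the config fields above (`routing_mode`, `keywords`): direct-mode skills are tried by keyword first, and a miss falls back to offering ai-mode skills to the LLM as tools. AgentUp's real matcher may be considerably richer:

```python
# Hedged sketch of direct-vs-ai routing; the real matcher may use
# scoring, regexes, or other strategies beyond exact word matching.
import re
from typing import Any, Optional


def route_message(message: str, skills: list[dict[str, Any]]) -> Optional[str]:
    """Return the plugin_id of a direct-mode keyword match, else None.

    None means no direct match, so ai-mode skills are exposed to the
    LLM as callable tools instead.
    """
    words = set(re.findall(r"[a-z]+", message.lower()))
    for skill in skills:
        if skill.get("routing_mode") == "direct" and any(
            keyword in words for keyword in skill.get("keywords", [])
        ):
            return skill["plugin_id"]
    return None
```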
## Benefits of Skills as Tools
- **Unified Interface**: LLMs see all capabilities (local skills + MCP tools) as a single set of functions
- **Automatic Discovery**: No manual tool registration - skills are automatically exposed
- **A2A Compliance**: All tool calls go through the standard A2A task execution pipeline
- **Middleware Inheritance**: Tool calls get all configured middleware (caching, rate limiting, etc.)
- **Multi-modal Support**: Tools can process images, documents, and mixed content
- **Cross-Agent Composition**: Agents can call each other's skills via MCP
## Real-World Scenarios

### 1. Content Processing Pipeline
```
User uploads image + asks for analysis
→ LLM calls analyze_image(image_data)
→ LLM calls mcp_search_web(query="similar charts")
→ LLM calls summarize_findings(data=[...])
→ Returns comprehensive analysis
```
### 2. Development Agent
```
User: "Check the status of my GitHub repo and update the README"
→ LLM calls mcp_github_get_repo_status()
→ LLM calls mcp_github_get_file(path="README.md")
→ LLM calls generate_documentation(repo_data=...)
→ LLM calls mcp_github_update_file(path="README.md", content=...)
```
### 3. Multi-Agent Workflow
```
Agent A exposes: code_analysis, security_scan
Agent B exposes: documentation_generation
Agent C (orchestrator) can call tools from both A and B via MCP
```
## Summary
AgentUp's skills system creates a seamless bridge between:

- **A2A protocol messages** - how agents communicate
- **LLM function calling** - how AI systems invoke capabilities
- **MCP tools** - how agents share capabilities

This architecture enables:

- AI systems to discover and use agent capabilities automatically
- Agents to expose their skills to other agents and AI systems
- Complex workflows that span multiple agents and external tools
- Consistent middleware application regardless of how skills are invoked
The key insight is that skills are not just handlers - they're discoverable, callable, composable units of functionality that can be orchestrated by AI systems and shared across agent networks.