Core Concepts

Agent-CoreX introduces a new paradigm for AI agent tool selection. Instead of hardcoding tools upfront, agents dynamically discover and request tools based on their current task. This section explains the key concepts.

The Problem with Static Tools

Traditional AI agents follow this pattern:
// ❌ Old approach: Tools hardcoded at startup
const agent = new Agent({
  tools: [
    githubTool,
    slackTool,
    jiraTool,
    // Need a new tool? Restart the agent.
  ]
});
Issues:
  • Inflexible - adding tools requires code changes
  • Token-wasteful - every tool is in context (even unused ones)
  • Not scalable - 100 tools? Context explodes.
  • Not adaptive - can’t use the best tool for the task, only the ones configured upfront

The Agent-CoreX Solution: Dynamic Retrieval

// ✅ New approach: Tools retrieved on-demand
const agent = new Agent();

// When the agent needs tools:
const tools = await agent.retrieveTools({
  query: "Deploy to production using Terraform"
});

// Only relevant tools are returned, ranked by relevance
// Context window stays lean, performance stays fast
Benefits:
  • ✅ Flexible - any MCP server, anytime
  • ✅ Token-efficient - only relevant tools in context
  • ✅ Scalable - handles 100+ tools smoothly
  • ✅ Adaptive - always gets the best tools for the task

Key Concepts

1. Tool Retrieval

Definition: The process of finding the right tools for a given task using natural language.

How it works:
User/Agent Request:
  "I need to merge a PR and deploy to AWS"

Query Processing:
  - Tokenize the request
  - Extract intent (merge PR, deploy)
  - Identify requirements (AWS, deployment)

Semantic Search:
  - Convert to embeddings
  - Search tool database
  - Find semantically similar tools

Ranking:
  - Score by relevance
  - Consider user history
  - Weight by popularity

Result:
  [
    {name: "merge-pull-request", score: 0.98},
    {name: "deploy-aws", score: 0.96},
    {name: "github-deploy-action", score: 0.94}
  ]
Key Points:
  • Works with ANY natural language query
  • Doesn’t require exact tool names
  • Returns tools ranked by relevance (0.0 to 1.0)
  • Configurable top-k results (default: 5)
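The pipeline above (embed the query, search the tool database, rank by similarity) can be sketched as follows. This is a minimal illustration, not the Agent-CoreX implementation: the tool names and 3-dimensional vectors are toy stand-ins for real embeddings produced by an embedding model.

```typescript
// Sketch of semantic retrieval: score each indexed tool by cosine
// similarity against the query embedding, then return the top-k.
type IndexedTool = { name: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieve(queryEmbedding: number[], index: IndexedTool[], topK = 5) {
  return index
    .map(t => ({ name: t.name, score: cosine(queryEmbedding, t.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}

// Toy embeddings stand in for a real embedding model's output.
const index: IndexedTool[] = [
  { name: "merge-pull-request", embedding: [0.9, 0.1, 0.0] },
  { name: "deploy-aws",         embedding: [0.1, 0.9, 0.1] },
  { name: "send-slack-message", embedding: [0.0, 0.1, 0.9] },
];

// A query embedding close to "merge-pull-request" ranks it first.
const results = retrieve([0.8, 0.2, 0.0], index, 2);
```

Note that the query never has to name a tool exactly; ranking falls out of vector similarity alone.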

2. Tool Lifecycle

Every tool in Agent-CoreX goes through this lifecycle:
┌─────────────────────────────────────────────────────┐
│ PHASE 1: REGISTRATION                              │
│ MCP server registers with Agent-CoreX              │
│ Tools automatically indexed                         │
└──────────────┬──────────────────────────────────────┘

┌──────────────┴──────────────────────────────────────┐
│ PHASE 2: INDEXING                                  │
│ Tool metadata stored in vector database            │
│ Descriptions, parameters, schemas added            │
└──────────────┬──────────────────────────────────────┘

┌──────────────┴──────────────────────────────────────┐
│ PHASE 3: DISCOVERY                                 │
│ Agents search for tools via natural language       │
│ Tools ranked and returned to agent                 │
└──────────────┬──────────────────────────────────────┘

┌──────────────┴──────────────────────────────────────┐
│ PHASE 4: EXECUTION                                 │
│ Agent selects tool and executes via MCP            │
│ Results returned to agent                          │
└──────────────┬──────────────────────────────────────┘

┌──────────────┴──────────────────────────────────────┐
│ PHASE 5: MONITORING                                │
│ Tool usage tracked and metrics collected           │
│ Ranking weights updated based on performance       │
└─────────────────────────────────────────────────────┘
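To make the first three phases concrete, here is a hypothetical in-memory registry. The `ToolRegistry` class and its substring search are illustrative assumptions: real discovery uses the vector search described above, not string matching.

```typescript
// Sketch of lifecycle phases 1–3: a server registers, its tool
// metadata is indexed, and agents can then discover the tools.
type ToolMeta = { name: string; description: string; server: string };

class ToolRegistry {
  private index = new Map<string, ToolMeta>();

  // Phase 1: registration — an MCP server announces its tools.
  register(server: string, tools: Omit<ToolMeta, "server">[]): void {
    // Phase 2: indexing — store metadata keyed by tool name.
    for (const t of tools) {
      this.index.set(t.name, { ...t, server });
    }
  }

  // Phase 3: discovery — naive substring match stands in for
  // the real semantic search over embeddings.
  search(query: string): ToolMeta[] {
    const q = query.toLowerCase();
    return [...this.index.values()].filter(
      t => t.description.toLowerCase().includes(q)
    );
  }
}

const registry = new ToolRegistry();
registry.register("slack-mcp", [
  { name: "send-slack-message", description: "Send a message to a Slack channel" },
]);
const found = registry.search("slack");
```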

3. Dynamic vs. Static Tools

| Aspect      | Static                    | Dynamic (Agent-CoreX)       |
| ----------- | ------------------------- | --------------------------- |
| Definition  | Tools defined at startup  | Tools discovered at runtime |
| Flexibility | Limited to pre-configured | Any available tool          |
| Token usage | All tools in context      | Only relevant tools         |
| Scalability | Degrades with tool count  | Scales to 100+ tools        |
| Maintenance | Requires code changes     | Automatic marketplace       |
| Speed       | Instant (all known)       | 15-50ms retrieval           |

4. Ranking & Relevance

Agent-CoreX uses a multi-factor ranking algorithm:
// Simplified ranking formula
score = (
  semantic_match * 0.40 +      // How well does it match the query?
  popularity * 0.25 +          // How popular is this tool?
  user_history * 0.20 +        // Has the user used it before?
  performance * 0.15           // How reliable/fast is it?
)
Semantic Matching (40%)
  • How similar is the tool description to the query?
  • Uses vector embeddings for deep semantic understanding
  • Example: “deploy” matches “deployment”, “ship to prod”, “release”
Popularity (25%)
  • How often is the tool used across all users?
  • Tools used more often are generally more reliable
  • Prevents recommending niche/buggy tools
User History (20%)
  • Has this user used this tool successfully before?
  • Preferences are learned and personalized
  • Example: If you always use “deploy-with-terraform”, it gets boosted
Performance (15%)
  • Response time metrics
  • Error rates
  • Uptime/reliability
  • Prevents recommending slow or flaky tools
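The four weighted factors above combine into a single score. A runnable version of the simplified formula, assuming each factor has already been normalized to the range [0, 1]:

```typescript
// Runnable form of the simplified ranking formula.
// All factor values are assumed to be normalized to [0, 1].
type Factors = {
  semanticMatch: number; // similarity between query and tool description
  popularity: number;    // usage across all users
  userHistory: number;   // this user's past success with the tool
  performance: number;   // latency, error rate, uptime
};

function rankScore(f: Factors): number {
  return (
    f.semanticMatch * 0.40 +
    f.popularity    * 0.25 +
    f.userHistory   * 0.20 +
    f.performance   * 0.15
  );
}

// A tool that is perfect on every factor reaches the maximum score.
const perfect = rankScore({ semanticMatch: 1, popularity: 1, userHistory: 1, performance: 1 });

// A brand-new tool with a strong semantic match but no user history
// can still rank reasonably well.
const newTool = rankScore({ semanticMatch: 0.95, popularity: 0.5, userHistory: 0, performance: 0.8 });
```

The weights sum to 1.0, so scores stay comparable across tools regardless of which factors drive them.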

Practical Examples

Example 1: Simple Query

Request:
await agent.retrieveTools({
  query: "Send a message to Slack",
  topK: 3
});
Response:
[
  {
    "name": "send-slack-message",
    "server": "slack-mcp",
    "score": 0.99,
    "description": "Send a message to a Slack channel",
    "params": {
      "channel": "string",
      "message": "string"
    }
  },
  {
    "name": "post-thread-reply",
    "server": "slack-mcp",
    "score": 0.94,
    "description": "Reply to a thread in Slack"
  },
  {
    "name": "create-jira-comment",
    "server": "jira-mcp",
    "score": 0.72,
    "description": "Create a comment on a Jira issue"
  }
]
What happened:
  • Query matched “send message” + “Slack”
  • Most relevant tool scores 0.99
  • Less relevant tools (the Jira comment) score lower
  • User gets ranked list in milliseconds
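An agent does not have to use everything it retrieves. One plausible post-processing step, sketched here as an assumption rather than built-in behavior, is to drop weak matches like the 0.72 Jira result with a simple score threshold:

```typescript
// Hypothetical post-filter: keep only strong matches from a
// retrieval result before handing tools to the agent.
type Retrieved = { name: string; score: number };

function filterByScore(tools: Retrieved[], minScore = 0.9): Retrieved[] {
  return tools.filter(t => t.score >= minScore);
}

// The ranked list from the example response above.
const retrieved: Retrieved[] = [
  { name: "send-slack-message", score: 0.99 },
  { name: "post-thread-reply",  score: 0.94 },
  { name: "create-jira-comment", score: 0.72 },
];

const kept = filterByScore(retrieved); // drops the Jira tool
```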

Example 2: Complex Query

Request:
await agent.retrieveTools({
  query: "Deploy my Node.js app to AWS Lambda with environment variables, trigger CI/CD, and notify the team when it's done",
  topK: 5
});
Response:
[
  {
    "name": "deploy-to-lambda",
    "score": 0.97,
    "description": "Deploy Node.js app to AWS Lambda"
  },
  {
    "name": "set-lambda-environment",
    "score": 0.96,
    "description": "Configure Lambda environment variables"
  },
  {
    "name": "trigger-github-action",
    "score": 0.95,
    "description": "Trigger GitHub Actions workflow"
  },
  {
    "name": "send-slack-notification",
    "score": 0.93,
    "description": "Send message to Slack channel"
  },
  {
    "name": "create-cloudwatch-alarm",
    "score": 0.87,
    "description": "Create CloudWatch monitoring alarm"
  }
]
What happened:
  • Query contains multiple concepts: deploy, Lambda, env vars, CI/CD, notify
  • Agent-CoreX broke down the query semantically
  • Each tool matched different parts of the request
  • All tools are ordered by relevance to the overall task

Example 3: Misspelled or Colloquial Query

Request:
await agent.retrieveTools({
  query: "Get me some devops stuff 4 deploying infra"
});
Response:
[
  {
    "name": "deploy-terraform",
    "score": 0.91
  },
  {
    "name": "run-ansible-playbook",
    "score": 0.88
  },
  {
    "name": "docker-deploy",
    "score": 0.85
  }
]
What happened:
  • Query is informal and misspelled
  • Semantic matching still understands intent (DevOps + deployment + infrastructure)
  • Returns relevant tools despite casual language
  • This is the power of vector embeddings

Tool Retrieval vs. Tool Execution

It’s important to understand the distinction:

Tool Retrieval

  • Purpose: Find available tools
  • Input: Natural language query
  • Output: Ranked list of tool names + metadata
  • Timing: 15-50ms
  • No execution - just discovery

Tool Execution

  • Purpose: Actually run a tool
  • Input: Tool name + parameters
  • Output: Tool result
  • Timing: 100ms-5s (depends on external API)
  • Requires authentication - credentials for MCP server
Workflow:
1. Retrieval: "What tools do I have?"
   → Get list of 5 tools

2. Selection: Agent picks which tools to use
   → Usually top 1-3 tools

3. Execution: "Run these tools"
   → Actually execute via MCP

4. Results: Integrate results into agent response
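The four workflow steps above can be sketched as one async function. Two assumptions here: `retrieveTools` is the call shown throughout this page, while `executeTool` is a hypothetical name for the execution call, and the stub agent exists only to make the sketch self-contained.

```typescript
// Sketch of the retrieve → select → execute → integrate loop.
async function runTask(agent: any, task: string): Promise<unknown[]> {
  // 1. Retrieval: discover candidate tools (no execution yet).
  const tools = await agent.retrieveTools({ query: task, topK: 5 });

  // 2. Selection: keep only the strongest matches.
  const selected = tools.filter((t: { score: number }) => t.score >= 0.9);

  // 3. Execution: run each selected tool via its MCP server.
  //    (executeTool is a hypothetical API name for illustration.)
  const results: unknown[] = [];
  for (const tool of selected) {
    results.push(await agent.executeTool({ name: tool.name, params: {} }));
  }

  // 4. Results: hand back for the agent to integrate into its response.
  return results;
}

// Stub agent so the sketch runs on its own.
const stubAgent = {
  retrieveTools: async (_req: { query: string; topK?: number }) => [
    { name: "deploy-to-lambda", score: 0.97 },
    { name: "niche-tool", score: 0.5 },
  ],
  executeTool: async ({ name }: { name: string; params: object }) => `${name}: ok`,
};
```

In practice step 2 is where agent judgment matters most: retrieval returns candidates, but the agent decides which of them the task actually needs.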

Next Steps

Tool Lifecycle Deep Dive

Understand how tools are registered, indexed, and executed.

Dynamic Retrieval Details

Explore ranking algorithms and optimization strategies.

MCP Fundamentals

Learn about the Model Context Protocol that powers tool integration.

API Examples

See real-world examples of tool retrieval and execution.

Key Takeaway: Agent-CoreX transforms tool selection from static configuration into dynamic, intelligent discovery. Your agent always gets the right tools for the job. 🚀