AI orchestration with LarAgent in multi-agent systems



For example, a user writes:

"Create 10 SEO articles for my blog."



Behind this single request, the system runs a whole pipeline:

  • analyzing user intent;

  • choosing the right agent;

  • generating ideas;

  • building the structure of the article;

  • writing a text;

  • SEO optimization;

  • formatting the response.

The system is built on Laravel with a multi-agent architecture, on top of the maestroerror/laragent package.

This article shows the real architecture of the system and the key patterns that made it possible to build a scalable AI platform.

Why do we need AI orchestration?

One LLM call is simple.

But when the system needs to:

  • use different agents,

  • perform chains of actions,

  • call tools,

  • work with users and chats,

  • support different LLM providers,

a single call is no longer enough. You need orchestration.

AI orchestration is a management layer that:

  • accepts a user request,

  • defines the user's intention,

  • selects the appropriate agent,

  • coordinates the execution of the task.

In other words, the orchestrator routes requests between agents rather than doing the work itself.

Architecture: Orchestrator → Specialist Agents

The basic pattern of the system:

User Message
   ↓
AgentConversationManager
   ↓
OrchestratorAgent
   ↓
DetectUserIntentTool
   ↓
MatchIntentToAgentTool
   ↓
DelegateToAgentTool
   ↓
Specialist Agent
   ↓
FormatResponseForUserTool

In this scheme, the OrchestratorAgent acts as a task router.

The system uses several specialized agents:

  • ArticleGeneratorAgent

  • MarketingAgent

  • SEOAgent

  • ConsultantAgent

  • DetectUserIntentAgent

Each agent is responsible only for its own area.

This corresponds to the architectural principle:

Single Responsibility for AI Agents

Example of an OrchestratorAgent

class OrchestratorAgent extends AbstractAgent
    implements AgentChatAwareInterface, UserAwareInterface, AgentInterface
{
    use InteractsWithUser, InteractsWithAgentChat;

    protected $provider = 'default';
    protected $history = 'cache';

    public function registerTools(): array
    {
        return [
            new GetAvailableAgentsTool(),
            new MatchIntentToAgentTool(),
            (new DetectUserIntentTool())
                ->addProperty('provider', 'string', 'Provider name', [$this->provider])
                ->setRequired('provider')
                ->setMetaData(['history' => $this->chatHistory()]),
            (new DelegateToAgentTool())
                ->addProperty('chat_uuid', 'string', 'Chat UUID', [$this->getChatUuid()])
                ->setRequired('chat_uuid')
                ->addProperty('provider', 'string', 'Provider', [$this->getProviderName()])
                ->setRequired('provider')
                ->addProperty('user_id', 'integer', 'User ID', [$this->user?->id])
                ->setRequired('user_id'),
            (new FormatResponseForUserTool())
                ->addProperty('provider', 'string', 'Provider', [$this->getProviderName()])
                ->setRequired('provider'),
        ];
    }
}

Everything the orchestrator can do is expressed through tools.

Intent Detection — understanding the user's intent

The first step is to understand what the user wants. Instead of parsing free-form text, the system uses structured output: the LLM returns a typed object.

class Intent extends DataModel
{
    #[Desc('User intent')]
    public string $intent = "unknown";
}

The Intent Detection Agent:

class DetectUserIntentAgent extends AbstractAgent
{
    protected $responseSchema = Intent::class;

    public function registerTools(): array
    {
        return [new GetUserIntentsTool()];
    }
}

The LLM now returns, for example:

intent = "article.create_detailed"
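The article does not show how the raw model reply becomes this typed object. Assuming the schema arrives as JSON, hydration can be sketched in plain PHP (`hydrateIntent` is a hypothetical helper; the `Intent` class here mirrors the DataModel above):

```php
// Minimal stand-in for the DataModel-based Intent class above.
class Intent
{
    public string $intent = "unknown";
}

// Hydrate the typed object from the model's JSON reply.
// Falls back to "unknown" when the reply is malformed.
function hydrateIntent(string $raw): Intent
{
    $intent = new Intent();
    $data = json_decode($raw, true);
    if (is_array($data) && is_string($data['intent'] ?? null)) {
        $intent->intent = $data['intent'];
    }
    return $intent;
}

echo hydrateIntent('{"intent": "article.create_detailed"}')->intent;
// article.create_detailed
```

The fallback matters: if the model replies with free text instead of JSON, the pipeline still gets a valid `Intent` object rather than an exception.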

Mapping Intent → Agent

After determining the intent, the system selects the desired agent.

private function searchAgentByTools(array $requiredTools): ?Agent
{
    $agents = $this->agentRepository->getAllWithTools();

    foreach ($agents as $agent) {
        if ($this->agentHasAllTools($agent, $requiredTools)) {
            return $agent;
        }
    }

    return null;
}
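The `agentHasAllTools` check is not shown above. A minimal sketch, assuming each agent exposes a flat list of tool names (the `Agent` shape here is illustrative, not the real model class):

```php
// Hypothetical minimal Agent shape, for illustration only.
class Agent
{
    /** @param string[] $tools names of the tools this agent provides */
    public function __construct(public array $tools) {}
}

// True when the agent provides every required tool.
function agentHasAllTools(Agent $agent, array $requiredTools): bool
{
    // array_diff returns the required tools the agent is missing.
    return array_diff($requiredTools, $agent->tools) === [];
}

$seo = new Agent(['seo.audit', 'seo.keywords']);
var_dump(agentHasAllTools($seo, ['seo.audit']));              // bool(true)
var_dump(agentHasAllTools($seo, ['seo.audit', 'seo.links'])); // bool(false)
```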

Thus:

Intent → Required Tools → Agent

Note that the LLM does not select an agent directly; the intent-to-agent mapping is deterministic application code.

This is an important architectural protection.

Delegating tasks between agents

The orchestrator does not call the agent directly.

It delegates through the DelegateToAgentTool:

private function executeAgent(
    string $agentClass,
    string $chatUuid,
    string $payload,
    int $userId
): string
{
    $agent = $agentClass::setConfiguration($chatUuid, $agentClass)
        ->addSystemMessages();

    if ($agent instanceof UserAwareInterface) {
        $user = $this->userRepository->findById($userId);
        if ($user) {
            $agent->setUser($user);
        }
    }

    if ($agent instanceof AgentChatAwareInterface) {
        $agent->setChatUuid($chatUuid);
    }

    $raw = $agent->respond($payload);

    return is_string($raw) ? $raw : (string) $raw;
}

This allows the tool to:

  • pass along the user's context

  • attach the chat session

  • manage the message history

Contract of orchestrated agents

For an agent to be callable by the orchestrator, it must implement the interface:

interface OrchestratableAgentInterface extends AgentInterface {}

Example:

class ArticleGeneratorAgent extends AbstractAgent
    implements OrchestratableAgentInterface,
               UserAwareInterface,
               AgentChatAwareInterface
{
}

This interface is what controls which agents the orchestrator can access.

Expert Chain: multi-step article generation

Some tasks are too complex for a single LLM request.

For example, generating an expert-level article.

This is handled by the Expert Chain pattern.

A chain of 5 steps:

  1. Generating ideas

  2. Choosing the best idea

  3. Outline of the article

  4. Writing sections

  5. Final review

public function generate(string $topic, ?array $plan = null)
{
    $selectedIdea = $topic;

    $plan = $plan ?? $this->generateArticlePlan($topic);

    $sections = $this->writeSections($plan);

    $content = $this->reviewArticle($sections);

    return new ArticleGeneratedContent(
        $topic,
        $content,
        $selectedIdea,
        $plan
    );
}

Each step calls the LLM separately.

This gives:

  • better text quality

  • a more stable result

  • control over the generation process
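The intermediate steps of the chain are not shown in the article. Here is a sketch of what `writeSections` might do, assuming the plan is an array of section titles and a hypothetical `callLlm()` helper wraps the provider call:

```php
// Hypothetical LLM call; in the real system this goes through the agent/provider layer.
function callLlm(string $prompt): string
{
    return "[generated text for: {$prompt}]";
}

/**
 * One LLM call per section keeps each prompt small and focused,
 * which is what makes the chain more stable than one giant request.
 *
 * @param string[] $plan  section titles produced by the planning step
 * @return array<string, string>  section title => section text
 */
function writeSections(array $plan): array
{
    $sections = [];
    foreach ($plan as $title) {
        $sections[$title] = callLlm("Write the article section: {$title}");
    }
    return $sections;
}

$sections = writeSections(['Introduction', 'Core patterns', 'Conclusion']);
echo count($sections); // 3
```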

Multi-provider LLM support

The system can work with different LLMs.

Supported:

  • OpenAI

  • Claude

  • Gemini

  • YandexGPT

  • Groq

  • Ollama

  • OpenRouter

Configuration:

'providers' => [

    'default' => [
        'driver' => ChatGPTDriver::class,
        'model' => 'gpt-5-mini',
    ],

    'gemini' => [
        'driver' => GeminiDriver::class,
        'model' => 'gemini-2.0-flash-latest',
    ],

    'claude' => [
        'driver' => ClaudeDriver::class,
        'model' => 'claude-3-7-sonnet-latest',
    ],

    'yandexgpt' => [
        'driver' => YandexDriver::class,
        'model' => 'gpt://folder_id/yandexgpt-lite',
    ],
]

Switching happens dynamically:

$instance->changeProvider($provider);
$instance->withModel($model);

This makes it possible to:

  • change the LLM without touching the business logic

  • test different models

  • use different models for different tasks
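One simple way to use different models for different tasks is a plain configuration map. The task keys and provider names below are illustrative, not taken from the real system:

```php
// Illustrative per-task provider mapping; real task keys are app-specific.
const TASK_PROVIDERS = [
    'article.generate' => 'claude',
    'seo.audit'        => 'default',
    'quick.reply'      => 'gemini',
];

// Resolve the provider for a task, falling back to the default.
function providerForTask(string $task): string
{
    return TASK_PROVIDERS[$task] ?? 'default';
}

echo providerForTask('article.generate'); // claude
echo providerForTask('unknown.task');     // default
```

The resolved name would then be passed to `changeProvider()` as shown above.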

Background processing via Laravel Queues

Mass content generation requires background processing.

For example:

"Create 10 articles"

Each article becomes a separate queued job.

class CreateArticleJob implements ShouldQueue
{
    public function handle(): void
    {
        $agent = $this->buildAgent();

        $payload = $this->getCreateArticlePayload();

        $agent->respond($payload);
    }
}

Advantages:

  • the system does not block the user

  • tasks are scaled horizontally

  • thousands of articles can be generated
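The fan-out itself is framework glue: in Laravel it would be `CreateArticleJob::dispatch($topic)` in a loop. A plain-PHP sketch of the idea, using an in-memory stand-in for the queue:

```php
// In-memory stand-in for Laravel's queue, for illustration only.
$queue = [];

// In Laravel this would be CreateArticleJob::dispatch($topic).
function dispatchArticleJob(array &$queue, string $topic): void
{
    $queue[] = ['job' => 'CreateArticleJob', 'topic' => $topic];
}

// "Create 10 articles" fans out into 10 independent background jobs.
foreach (range(1, 10) as $i) {
    dispatchArticleJob($queue, "SEO article #{$i}");
}

echo count($queue); // 10
```

Because each job is independent, workers can process them in parallel, which is what gives the horizontal scaling mentioned above.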

Context Management

A major problem in LLM systems is the growth of context.

Truncation Strategy

The orchestrator keeps the history:

return new SimpleTruncationStrategy([
    'keep_messages' => 25,
    'preserve_system' => true,
]);

Specialist agents keep only 1 message:

return new SimpleTruncationStrategy([
    'keep_messages' => 1,
    'preserve_system' => true,
]);

This dramatically reduces the cost of requests.
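The internals of SimpleTruncationStrategy are not shown. The core idea, keep system messages and only the last N others, can be sketched as a plain function (the message shape here is an assumption):

```php
/**
 * Sketch of the truncation idea: always keep system messages,
 * plus only the last $keep non-system messages.
 *
 * @param array<int, array{role: string, content: string}> $messages
 */
function truncate(array $messages, int $keep, bool $preserveSystem = true): array
{
    $system = $preserveSystem
        ? array_values(array_filter($messages, fn ($m) => $m['role'] === 'system'))
        : [];
    $rest = array_values(array_filter($messages, fn ($m) => $m['role'] !== 'system'));

    return array_merge($system, array_slice($rest, -$keep));
}

$history = [
    ['role' => 'system',    'content' => 'You are the orchestrator.'],
    ['role' => 'user',      'content' => 'old message'],
    ['role' => 'assistant', 'content' => 'old reply'],
    ['role' => 'user',      'content' => 'latest message'],
];

echo count(truncate($history, 1)); // 2: the system prompt + the latest message
```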

Anti-looping protection

In multi-agent systems, an infinite loop can occur:

Agent → Tool → Agent → Tool

A dedicated loop-detection service tracks repeated agent and tool calls and breaks the chain when a cycle is detected.
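The loop-detection service itself is not shown in the article. A minimal counter-based sketch, which trips once the same agent/tool pair repeats more than an allowed number of times within one conversation, might look like:

```php
// Minimal loop guard: counts identical delegation steps within one conversation
// and trips once a step repeats more than the allowed number of times.
class LoopDetector
{
    /** @var array<string, int> */
    private array $counts = [];

    public function __construct(private int $maxRepeats = 3) {}

    /** Returns false while the step may proceed, true once a loop is detected. */
    public function isLooping(string $agent, string $tool): bool
    {
        $key = "{$agent}:{$tool}";
        $this->counts[$key] = ($this->counts[$key] ?? 0) + 1;

        return $this->counts[$key] > $this->maxRepeats;
    }
}

$detector = new LoopDetector(maxRepeats: 2);
var_dump($detector->isLooping('Orchestrator', 'DelegateToAgentTool')); // bool(false)
var_dump($detector->isLooping('Orchestrator', 'DelegateToAgentTool')); // bool(false)
var_dump($detector->isLooping('Orchestrator', 'DelegateToAgentTool')); // bool(true)
```

A production version would also expire counters per conversation, but the counting core is the same.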

System entry point

All requests go through the AgentConversationManager:

private function callOrchestratorAgent(
    AgentChat $chat,
    string $payload
): string
{
    $agent = OrchestratorAgent::setConfiguration(
        $chat->uuid,
        OrchestratorAgent::class
    )
        ->setChatUuid($chat->uuid)
        ->setUser($this->getUser());

    $raw = $agent->respond($payload);

    return is_string($raw) ? $raw : (string) $raw;
}

The flow of work:

HTTP Request
   ↓
Controller
   ↓
AgentConversationManager
   ↓
OrchestratorAgent
   ↓
Specialist Agent
   ↓
Response

Results

Multi-agent systems are becoming the standard for AI applications.

The main architectural conclusions:

1. Use Orchestrator + Specialists

The orchestrator should only route tasks.

2. Use Intent-based routing

The LLM defines the intent, and the system selects the agent.

3. Use interfaces

Contracts like:

OrchestratableAgentInterface
UserAwareInterface
AgentChatAwareInterface

give architectural control.

4. Break down complex tasks into chains

Expert Chain provides much more stable results.

5. Build a multi-LLM architecture

Models change quickly, and architecture must take this into account.

6. Use queues

LLM tasks should be performed in the background.

Conclusion

With Laravel and LarAgent, it is possible to build a full-fledged AI platform.

Using:

  • orchestrator pattern

  • intent detection

  • tool-based delegation

  • expert chains

  • multi-provider LLM

We were able to build a system that scales from a single request to mass content generation.

And this is just the beginning of the development of AI architectures.