Run Multiple AI Agents With Claude Teams
📄 Summary
This video, "Run Multiple AI Agents With Claude Teams", has no description. See the video directly on YouTube for more information.
📝 Transcript
In this video, we're going to go over an OpenClaw use case that I believe is about six months ahead of its time. I think the way most people are thinking about AI agents right now is completely wrong, and I'm going to prove it to you. Most people are still in the single-agent phase: one agent, one task, and one operator behind it. And I get it, that's a great place to start, and one well-configured agent can do an enormous amount of work. We've talked multiple times on this channel about agents that run entire content pipelines, manage sales operations, handle customer service, and build software. Solo agents doing work that used to take a small team. But there's a ceiling on what one agent can do in parallel, and once you're running a more complex operation, you start wanting multiple agents running at the same time on different parts of the problem. That's where things get more interesting, and also very complicated.

Here's what happens when you set up a multi-agent pipeline on the same codebase, the same project, or the same shared environment without any coordination layer: the agents step on each other. That's basically it. One agent might be working on one problem in parallel while another agent works on another; one problem gets fixed while the other one breaks. One agent modifies a file that another agent is reading. Two agents make changes to the same module without knowing the other one is working on it. An agent completes a task and moves on, but the next agent in the chain doesn't know the previous one finished. Agents duplicate work because nobody has a shared picture of what's done and what isn't. This is the multi-agent coordination problem, and it's a real problem that has yet to be solved. The use case we're covering today is a specific approach to solving it, and I find it a really elegant solution.
It's called the observer pattern for Claude Code Teams, and the core idea is simple. Instead of giving every agent orchestrator-level awareness of the whole project, you have a dedicated agent whose only job is to watch what the other agents are doing. Not direct, not orchestrate, just observe.

I've been using a lot more of Claude Code to work on my OpenClaw setup; I've yet to really touch on that on this channel. How I've been able to save on token costs is that anything that has to do with development, code changes, firing off cron jobs, editing cron jobs, or updating my mission control, I run through Claude Code. I have Claude Code running on the same machine behind me that runs my OpenClaw, with the Claude Code instance pointing to my workspace and mission control folder. That's how I've been able to cut down costs, and implementing this agent-team strategy within Claude Code will make working with OpenClaw, when it inevitably breaks, much easier.

Real quick, before we keep going: if you're watching this and you want to actually build with some of these tools, not just watch videos about them, then you're going to want to check out our community down below, Shipping School. We have a full Claude Code course, a full OpenClaw course, and four live bootcamps every single week where we actually help you get set up from scratch. Like actually set this thing up, not just watch a tutorial and figure it out by yourself. We also provide one-on-one coaching, so you can book a call with me, we can share screens, and I can help you get Claude Code or OpenClaw running on your machine. That's it, no fluff. I built this community because watching YouTube only gets you so far. We launched it just three days ago and we have over 55 members. You need people around you who are actually building, people who hold you accountable, and coaches who can help you when you get stuck. I'll put the link in the description down below. Get in now before the price goes up.
So, basically, that's how I've been setting up my workflow. Running everything through the Anthropic API token, it was costing me about $30 to $50 a day. Now that's been cut in half: I was on track to spend $1,500 this month, and now I'm looking at less than $1,000.

What the agents do in this new Claude Code Teams setup is synthesize the information from their observation of your codebase into a running picture of what's already been done. This can be really helpful for when your OpenClaw breaks: it keeps track of what files have changed and helps you stay organized. It can tell what's in progress, what's been blocked, and where things are duplicating or conflicting, and surface that information either to a human operator or to the other agents when they need it.

I'm going to break down why this is actually a better pattern than the obvious alternative: a central orchestrator. You have one boss agent that assigns tasks to worker agents. The workers report back, the boss keeps track of the state, and the boss decides what happens next. That pattern works. It's exactly how our SaaS product, The Magic Hand, works: I have it editing photos, and the workers all report to an orchestrator agent that checks and verifies the work. It works well. Things do break from time to time, but that's how I've built the system. We've covered this on the channel before: the multi-agent architecture with a COO-style agent managing a team of specialists is a real thing people are shipping. But it has its weaknesses, and I've seen them firsthand. The orchestrator is a single point of failure. If the orchestrator gets confused, loses context, or makes a bad decision, the whole system blows up.
And orchestrators have to do two hard things at the same time: strategic planning and operational tracking. Keeping both in one agent is cognitively expensive and fragile. The observer pattern separates these two concerns. The workers just work; they don't have to report in or maintain state beyond their immediate task, they just execute. The observer just watches. It doesn't make decisions and it doesn't issue commands. It reads the logs, reads the file changes, reads the commit history, reads agent outputs, and builds a shared map of reality. And the operator, whether that's a human or a separate decision-making agent, consults the observer when they need to know the current state of the project. It's a separation of concerns that makes the whole system more robust.

So, let me get concrete about how this looks in practice in Claude Code, using Claude Code to manage your OpenClaw system. Say you're building something with multiple Claude Code agents: maybe you've got an agent on the front end, an agent on the back end, and an agent doing testing, all running in parallel in their own worktree sessions. The observer agent is configured with read access to all of their workspaces. It's not intervening. It's watching the file system changes, reading the terminal output, and watching what gets written to the shared task list. And every few minutes, or every few commits, the observer writes an update: "Agent A just finished the auth module. Merged it to main. No conflicts detected. Agent B is midway through the API layer, currently adding the endpoints; last commit was 14 minutes ago. Agent C ran tests on the current main. Four tests are failing; two are related to the auth changes from Agent A." That's the observer's output: a clear, current picture. Any human or agent looking at that summary knows exactly where things stand without having to dig through logs or ask each agent individually.
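To make the "read-only watcher" idea concrete, here's a minimal sketch of the file-watching side of such an observer in Python. This is my own illustration, not Claude Code's actual mechanism: the polling interval, the workspace layout, and the shape of the change report are all assumptions.

```python
import os
import time

def snapshot(root):
    """Map every file under root to its last-modified time (read-only)."""
    mtimes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                pass  # file vanished between walk and stat; skip it
    return mtimes

def diff_snapshots(before, after):
    """Return (added, modified, deleted) paths between two snapshots."""
    added = sorted(p for p in after if p not in before)
    modified = sorted(p for p in after if p in before and after[p] != before[p])
    deleted = sorted(p for p in before if p not in after)
    return added, modified, deleted

def observe(workspaces, interval=60, cycles=None):
    """Poll each agent workspace and yield change reports.

    The observer never writes into the workspaces; its only output is
    the stream of reports, which a human or another agent can consume.
    """
    last = {ws: snapshot(ws) for ws in workspaces}
    done = 0
    while cycles is None or done < cycles:
        time.sleep(interval)
        for ws in workspaces:
            current = snapshot(ws)
            added, modified, deleted = diff_snapshots(last[ws], current)
            if added or modified or deleted:
                yield {"workspace": ws, "added": added,
                       "modified": modified, "deleted": deleted}
            last[ws] = current
        done += 1
```

A real setup would diff file contents rather than just timestamps, but the structure is the same: snapshot, compare, surface the delta.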
And that last part, the test failure detection, is particularly important. In a normal multi-agent setup, Agent A finishes its work, the tests it touched pass, and it marks the task as done. Agent C runs the full test suite later and finds failures. By that point, the connection between Agent A's changes and the broken tests might not even be obvious. The observer catches it in near real time: four tests broke right after Agent A merged. That's a signal worth surfacing immediately.

Now, there are a couple of different ways to implement the actual observation mechanism. One approach is file system watching. The observer has a process that monitors the shared workspace for changes. Every time a file changes, it reads the diff and updates its model of the project. This works well for tight coding loops. Another approach is structured outputs. You configure each working agent to produce a small status file after every task or commit, something like a JSON blob: what I did, what files I touched, what's still open. The observer reads these files and synthesizes them. This is less real-time, but it is more structured. A third approach is log scraping. The observer reads the terminal logs for each working session. Claude Code's logging is pretty detailed, and the observer can tell a lot from log output about what each agent is doing, where it's stuck, and how long things are taking. Most mature setups end up combining all three.

One thing I want to be straight up with you about here, because it matters: this isn't a simple setup. If you're just getting Claude Code installed, you might want to implement this later down the road in your workflow. If you're struggling to keep up with content, well, I'm about to save you about 40 days' worth of work. I built something called Content Machine. It's 10 AI agents that run on the OpenClaw orchestration, and they handle everything: scripts, thumbnails, X posts, blogs, outreach, clips, newsletters, all of it.
I went from 1,000 subscribers to 4,000 subscribers on YouTube in 7 days using this exact system. Every single morning, I wake up and the content's already done. I spend maybe 15 to 20 minutes reviewing and approving it, and I move on with my day. It works for any niche: fitness, finance, real estate, marketing, whatever you're building, and it is 100% customizable to your use case. You get the mission control dashboard, all of the cron jobs, everything I've built over the last 40 days that's helped me gain more subscribers and community members. You plug in your own thing and it molds to you; it learns how you talk and writes so it doesn't sound like AI slop. $97 one time, not a subscription. I'll put the link down below, and you'll thank me later.

You know, getting multiple Claude Code agents running cleanly in parallel on a shared codebase is already non-trivial. You need proper git worktree isolation so agents don't write to the same branch simultaneously. You need clear task boundaries so agents don't naturally converge on the same file. You need a shared understanding of conventions so one agent doesn't undo what another just did. Adding an observer layer on top of that adds more complexity. The observer needs read access to multiple workspaces. It needs to produce useful summaries without becoming the bottleneck. It needs to handle the case where its own updates are stale or incomplete. So, if you're early in your multi-agent journey, I'm not suggesting you start here at all. Start with one agent, get it working, get comfortable with how it behaves, then add a second agent on a separate task. See how they interact, then think about whether you need an observation layer.
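The structured-outputs approach mentioned earlier, where each worker drops a small JSON status blob and the observer merges them into one picture, can be sketched roughly like this. The file naming scheme, the field names, and the staleness threshold are all my own assumptions for illustration, not anything Claude Code prescribes.

```python
import json
import time
from pathlib import Path

# An agent that hasn't updated its blob in this long gets flagged as stale.
STALE_AFTER = 15 * 60  # seconds

def read_statuses(status_dir):
    """Load every per-agent status blob (one '<agent>.status.json' per agent)."""
    statuses = {}
    for path in Path(status_dir).glob("*.status.json"):
        agent = path.name[: -len(".status.json")]
        statuses[agent] = json.loads(path.read_text())
    return statuses

def synthesize(statuses, now=None):
    """Merge per-agent blobs into one project-level summary string."""
    now = time.time() if now is None else now
    lines = []
    touched = {}  # file path -> agents that report touching it
    for agent, s in sorted(statuses.items()):
        age = now - s.get("updated_at", now)
        state = "STALE" if age > STALE_AFTER else s.get("state", "unknown")
        lines.append(f"{agent}: {state} - {s.get('task', '?')}")
        for f in s.get("files_touched", []):
            touched.setdefault(f, []).append(agent)
    # Surface any file that two or more agents both report touching.
    for f, agents in sorted(touched.items()):
        if len(agents) > 1:
            lines.append(f"CONFLICT: {f} touched by {', '.join(agents)}")
    return "\n".join(lines)
```

Each worker would write a blob like `{"task": "api layer", "state": "in_progress", "files_touched": ["api.py"], "updated_at": 1700000000.0}` after every commit, and the observer just reads and synthesizes; it never writes into any worker's space.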
So, for example, in my case: if I'm building on OpenClaw and doing anything that involves changing the codebase or anything within the workspaces as far as development goes, fixing a cron job, building out a feature inside my mission control, I go to my Claude Code. And if I want to spin up subagents to attack the same problem, that's where the observer agent and this observer technique come in. So, if you want to do that same thing without spending so much on token usage in your OpenClaw, you might want to get Claude Code installed on your machine to help with that process. The observer pattern becomes genuinely useful when you're running more than two or three agents and the coordination overhead becomes expensive. Once you find yourself spending mental energy just trying to figure out where things stand, that's the signal to add an observer.

The other thing worth understanding is how this fits into trust levels. In a multi-agent system, not all agents should have the same permissions. A worker agent needs to write to its worktree; it probably shouldn't be able to push to main or run arbitrary deployments. An observer agent needs read access everywhere; it probably shouldn't write to any worktree at all. Its only outputs are summaries and status updates. An operator agent, or simply a human operator, has elevated permissions to merge things, make architectural decisions, and restart workers if something goes wrong. These are different roles with different permission profiles. Thinking about agent roles the same way you think about human team roles, where nobody is just randomly given root access to everything, makes the whole system safer and more auditable. The observer is essentially a non-voting team member with a holistic view. It doesn't have the power to change anything; it just knows everything that's happening. And that's a really healthy dynamic for a technical team.
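To make the role separation concrete, here is one way those permission profiles could be expressed in code. This is a hand-rolled sketch under my own assumptions: Claude Code doesn't define these exact roles or this API, and the workspace names are made up.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    """A permission profile for one kind of agent."""
    name: str
    can_read: frozenset = frozenset()   # workspaces this role may read
    can_write: frozenset = frozenset()  # workspaces this role may write
    can_merge_main: bool = False        # merge/deploy rights

ALL_WORKSPACES = frozenset({"frontend", "backend", "tests"})

ROLES = {
    # Worker: writes only to its own worktree, never merges to main.
    "worker-frontend": Role("worker-frontend",
                            can_read=frozenset({"frontend"}),
                            can_write=frozenset({"frontend"})),
    # Observer: reads everywhere, writes nowhere.
    "observer": Role("observer", can_read=ALL_WORKSPACES),
    # Operator (agent or human): full access plus merge rights.
    "operator": Role("operator", can_read=ALL_WORKSPACES,
                     can_write=ALL_WORKSPACES, can_merge_main=True),
}

def allowed(role_name, action, workspace=None):
    """Check one (role, action, workspace) triple before an agent acts."""
    role = ROLES[role_name]
    if action == "read":
        return workspace in role.can_read
    if action == "write":
        return workspace in role.can_write
    if action == "merge_main":
        return role.can_merge_main
    return False  # unknown actions are denied by default
```

The point of writing it down like this is auditability: every tool call an agent makes can be checked against a small, explicit table instead of implicit trust.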
I think this points to something broader about where multi-agent systems are heading. The first generation of multi-agent setups is mostly about raw capability: can we get three agents working in parallel and ship faster than one agent alone? You can, mostly, but more often than not it's messier. Throughput does improve, but so does your spend; you're going to spend more money running this. The second generation is about coordination quality. Given that we can run multiple agents in parallel, can we do it cleanly? Can the work they produce fit together without human stitching? Can we catch conflicts and failures before they compound? The observer pattern is a second-generation solution. It's not about doing more, it's about doing it more reliably. And reliability at scale is what makes the difference between a cool demo and a real operation.

I think the builders who figure out the coordination patterns for AI agent teams in the next 12 months are going to have a significant advantage, because for consultants, agencies, and product builders serving organizations trying to deploy multiple agents without the whole thing falling apart, that's the problem. Right now, most organizations deploying multiple AI agents are doing it messily. They're manually tracking state, getting surprised by conflicts, and spending human time on coordination that should be automated. Someone who walks in with a clean observer pattern, a solid permissions model, and a track record of running stable multi-agent teams is going to look very valuable to these organizations. That's the observer pattern for Claude Code Teams: a dedicated agent watching the parallel workers, synthesizing project state, surfacing conflicts and blockers, separating observation from execution, with different permission layers for different roles.
To back it all up, though, because I know there are a lot of people here just getting into Claude Code and OpenClaw: start with one agent first, get it working, and only scale when the coordination overhead makes you want to pull your hair out. That's when you scale. Think carefully about how the agents know what each other is doing. That's the part most people skip, and that's the part that makes or breaks the whole thing. Everyone wants to get OpenClaw running. Everyone wants multiple agents working in Claude Code. Start with one agent, get it working on one thing successfully, and then build your team out. If you want to learn how to set up your first AI agent workflow, how to get Claude Code installed, or how to get OpenClaw installed to start managing your life, click the link down below. We have our community; 217 members have joined in the last 25 days since launch, and we have builders from all around the world. We do nine live bootcamps a week covering all of these topics. So, if you want a little accountability or just need help taking the next step in your AI journey, I'll put the link down below and we'll see you in the next video. Have a blessed day.
📺 Related Videos
🔮 MiroFish simuliert die Zukunft mit tausenden Agents | Alles was du wissen musst!
Christoph Magnussen · 2026-05-11🇩🇪 DE
Ich habe das fortschrittlichste KI-Tool der aktuellen Zeit entdeckt… Das ist Hermes Agent 🔥
Der KI-Doktor · 2026-05-09🇩🇪 DE
Diese OpenClaw MasterClass Wird Deine Arbeitsweise Für Immer Verändern
Der KI-Doktor · 2026-05-08🇩🇪 DE
Paperclip Is Insane | Full Tutorial
Ferdy․com | Ferdy Korpershoek · 2026-05-06🇬🇧 EN