China's Free AI Just Embarrassed Claude And ChatGPT (+12 AI Updates)
📄 Summary
This video, "China's Free AI Just Embarrassed Claude And ChatGPT (+12 AI Updates)", contains no description and no transcript. Please open the video directly on YouTube for more information.
📝 Transcript
This week, China just shipped a free AI that beat Claude and ChatGPT on the hardest test in AI. Anthropic just made Claude work with any AI for free. And Elon's new Grok does what no other AI can do. I train people in AI across 150 countries, and honestly, this was the craziest week in AI all of 2026, because OpenAI just completely reinvented itself with three massive announcements. They dropped their smartest model ever. They turned ChatGPT into a teammate that lives inside your work apps. And they shipped an image AI that became number one in the world overnight. At the end, I'm going to show you how to use its insane capabilities yourself, plus 12 more updates from this week, including some I genuinely had to read twice. As a side note, while I make this video once a week, if you want AI updates in real time, I have a free WhatsApp community where I drop AI updates as they happen. Links in the description. Now, on to the video. Kimi, China's response to ChatGPT and Claude, just dropped its latest model, K2.6. It's a free, open-source AI, and it's coming out on top on the benchmarks that actually matter. For example, it leads every model, including Claude and Gemini, on Humanity's Last Exam, which is considered one of the hardest AI knowledge tests in existence. The same goes for coding. It also costs a fraction of what Claude or GPT charge. The new model has a few features worth discussing today. One, it can code for over 12 hours straight across 4,000-plus tool calls. Most AI tools do one task and wait for your next message. Two, it builds full websites from a single sentence. Animations, 3D graphics, databases, you name it. Three, up from its previous limit of 100 sub-agents, 300 AI workers can now run simultaneously. This means that 300 specialists now tackle different parts of your project at the same time. One prompt can produce finished websites, slide decks, spreadsheets, and documents.
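Kimi has not published how its orchestration actually works, so here is only a minimal sketch of the fan-out idea those 300 parallel sub-agents imply: one prompt split into many sub-tasks, each handled by its own worker at the same time. The task names and the `run_subagent` function are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sub-tasks a single "build me a website" prompt might fan out
# into. This only illustrates the general pattern of many workers running
# in parallel, not Kimi's real internals.
TASKS = [f"section-{i}" for i in range(300)]

def run_subagent(task: str) -> str:
    # Stand-in for a real sub-agent call (an LLM request, a tool call, etc.).
    return f"{task}: done"

# 300 workers, one per sub-task, all in flight at once.
with ThreadPoolExecutor(max_workers=300) as pool:
    results = list(pool.map(run_subagent, TASKS))

print(len(results))  # one result per sub-agent
```

The point of the pattern is simply that total wall-clock time approaches the slowest single sub-task, not the sum of all of them.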
It's clear the West no longer has sole ownership over this AI race. OpenAI just officially launched GPT 5.5, along with a stronger Pro version, and they are calling it the smartest and most intuitive model they have ever shipped. And look, I have already done a full deep dive on this on the channel, where I gave Codex one prompt before bed and it built me a working app for the MacBook while I slept. So I am not going to repeat all of that here. What you actually need to know in 30 seconds is this: GPT 5.5 is way better at handling messy real-world tasks, like coding, spreadsheets, research, and computer use. The big shift is that it now plans steps on its own, checks its own work, and finishes multi-part jobs without you having to handhold it through every move. Here are some top use cases. One, you can ask it to make you an app, like the space mission one, and it will do it for you. Two, it can do the same thing with 3D games, and look at the results. It's extraordinary. Finally, if you give it an existing spreadsheet and ask it to modify it after conducting a financial analysis, it will figure it out and do it for you. It is live right now in ChatGPT and Codex for Plus, Pro, Business, and Enterprise users. API access is coming soon. Hackers have spared no one. Even AI can be breached now. A few days ago, one of the biggest platforms on the planet was breached. Meet Vercel. You have probably never heard of them, but you use them every day. They are the invisible engine behind OpenAI's website, Nike, Walmart, and millions of other sites on the internet. It's so big that it's valued at $9.3 billion. Yet they got breached. Here is the crazy part. The hackers did not attack Vercel directly. They attacked a tiny AI productivity tool that one Vercel employee had installed. That employee clicked "allow all permissions" when connecting it to Google Workspace. That one click opened the door.
Hackers bypassed multi-factor authentication, walked into Vercel's internal systems, and claimed to have stolen API keys, source code, and data on 580 employees. That data is now selling online for $2 million. This is not a one-off. Last November, Chinese hackers used Claude to automate attacks on 30 global companies. The AI did 90% of the work on its own. See the pattern? AI is no longer just your assistant. It is your newest employee. It reads your email. It logs into your dashboards. And hackers have figured out that the fastest way into your company is through the AI tool your employee installed last week. Three things to do today. One, open your Google Workspace settings and kick out every AI tool you do not actively use. Two, never click "allow all" on any AI tool, ever. Three, tell your team the new hack is not a sketchy email. It is a pretty AI app asking for access. AI is the new door into your company. Make sure you know who you are letting in. A guy just used AI to do something only hospitals could do. He decoded his own DNA. Here is why this is insane. More than one in three people process common medicines differently because of a single gene. Antidepressants, painkillers, heart medication. For some, the standard dose is too strong. For others, too weak. Most people have no idea which group they are in. What if you could find out, before swallowing that pill, whether your body can actually process it? One DNA sequencing run tells you, which basically means reading your DNA. This can also show if you carry inherited risks for cancer or autoimmune conditions that run in your family. Until now, only hospitals could read this. It used to cost $3 billion and take 13 years. But now, with AI, a guy did it for just $1,100 in 4 hours at home. He used a USB-sized device that plugs into his laptop. An AI model inside reads the raw signal from his DNA and translates it into readable letters. But reading the DNA is the easy part. Understanding it is the hard part.
98% of your genome is not genes. It is instructions that tell genes when to turn on and off. It's complicated, but basically all you need to know is that scientists used to call it junk DNA because nobody could read it. Last year, Google DeepMind dropped AlphaGenome, an AI model that takes your raw DNA and predicts how variants in human DNA sequences impact a wide range of biological processes, like which disease risks go up and more. This unlocks so many solutions for humanity. If you want to do this yourself at home, I'll attach the link in my free WhatsApp community. You can take a look. SpaceX just announced a major collaboration with Cursor AI, the coding editor that most engineers already use every day. And the numbers in this deal are honestly absurd. Cursor is now getting access to Colossus, the SpaceX-tied xAI supercomputer. We are talking about roughly a million H100-equivalent GPUs. That is more compute power than most countries have access to. What Cursor will do with all of that is train much more powerful coding and knowledge-work models, models built specifically for engineers and complex technical tasks. Here is the part that made everyone sit up. In return for that compute, SpaceX gains the right to acquire Cursor later in 2026 for up to $60 billion, or to pay $10 billion just for the joint research output. Think about that for a second. A coding tool that started 2 years ago is now potentially worth $60 billion, because the people building infrastructure realize something. Whoever owns the tool engineers code in owns where the next decade of software actually gets built. For anyone in tech, this is the partnership to watch. Moving on to the next update. It's always surprising when Elon drops a new product with no announcement or even a press release, especially when it is a new AI product that is twice as big as anything xAI has made. Grok 4.3 just dropped, and it does something interesting.
While most AI models can just write for you, this one can actually build. Give it a topic and it builds you a full PowerPoint deck. Ask for analysis and it spits out a populated Excel sheet. Want a report? It gives you a downloadable PDF. You can also feed it a video now, not just images, full video, and it understands what is happening inside it. This is a big deal, because not even Claude can do this right now. OpenAI released a major upgrade to its image generation, called ChatGPT Images 2.0, and compared to other models it became number one in the world. It supports multiple languages and can also create detailed visuals like infographics, maps, or comics. The best part about this model is that it can think before generating, for example searching the web for up-to-date info or creating multiple related images from one prompt. This makes it useful not just for fun art, but for practical work like marketing materials or diagrams. Because this is an incredible tool, I decided to make a full tutorial with use cases on how you can leverage this to make your day-to-day life easier. But before that, let's quickly wrap up the updates. Anthropic has added a game-changing feature to Claude Cowork. It's called Live Artifacts. Unlike static charts that go stale, these are interactive dashboards connected to your live data. They refresh automatically, transforming Claude from a simple chatbot into a dynamic, living workspace. Here is how it works in practice. In the Claude desktop app, head to the Cowork tab and start a new task. You can prompt Claude to build a dashboard. For instance, I asked for a daily command center, a hub to see my Gmail, calendar, and Slack mentions in one place. I enabled "act without asking" for my apps, and Claude immediately probed the connectors to verify the data structure.
Within moments, it generated a sleek, branded dashboard populated with my actual real-time information. Everything is saved in a dedicated Live Artifacts tab with version history, so you can return to it later across devices instead of losing it in your chat history. OpenAI just shipped an experimental tool inside Codex called Chronicle. And once you see what it actually does, you understand why every developer on Twitter is losing their mind over it this week. How it works: it quietly takes periodic screenshots of your computer screen in the background. These images help the AI build a better memory of what you've been working on, like open files, error messages, tools you're using, or project layouts. Instead of you repeatedly explaining your context in every prompt, Codex can now understand references like "why is this failing" more naturally, or even "fix that bug I was seeing earlier." The feature is opt-in, which means users can pause or turn it off anytime, and screenshots are stored only for 6 hours on your device, where they are processed to create memories and then deleted. However, it has still sparked privacy discussions, because it can see sensitive information on screen. Would you use it? Let us know in the comments. Now, on to the next news. Right now, when you ask AI to design a UI for you, it guesses your brand. It picks a generic blue, a generic font, and your designs all end up looking like every other AI design out there. But Google's design tool Stitch just released a new feature where it supports a new open format called design.md. It is a plain-text markdown file that describes an entire design system: colors, fonts, spacing, components, and the reasoning behind every choice. With design.md, AI agents can read your actual brand rules and generate UIs that match them. Until now, this file was locked inside Stitch. Google just made it fully open source. So you can use this with any AI tool that can read it.
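To make the idea concrete, here is what a design.md file could plausibly look like. This is only an illustrative sketch for a made-up brand; the real format's exact section names and layout may differ, so treat everything below (the brand, colors, and rules) as invented for the example.

```markdown
# design.md (illustrative sketch for a fictional "Acme" brand)

## Colors
- primary: #1A73E8 (all primary actions; chosen for AA contrast on white)
- surface: #FFFFFF
- text: #202124

## Typography
- headings: Inter Bold, 1.25 modular scale
- body: Inter Regular, 16px base

## Spacing
- base unit: 8px; all margins and paddings are multiples of it

## Components
- Button: 8px corner radius, primary fill, white label, no drop shadows
- Card: surface fill, 1px #E0E0E0 border, 16px padding

## Reasoning
- Flat surfaces and a single accent color keep the UI calm;
  every exception must be justified here before an agent may use it.
```

Because it is plain markdown, any AI tool that can read text can consume it, which is the whole point of making the format open.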
If you are building digital products, this is the standard to start using right now. Claude Desktop just became a universal remote for AI. The app now lets you swap out Anthropic's built-in brain and plug in any other AI model you want. For example, OpenRouter, which is a cheaper marketplace with hundreds of AI models in one place, or your company's private system like Foundry, which is what big enterprises use to keep their AI internal, or even a local model running on your own laptop, which means your data never leaves the room. And here's why this matters. You get Claude's interface and tools, but you can swap the engine. Cheaper models for simple tasks, specialized models for specific work, local models for privacy. Most AI companies want to lock you inside their model. Anthropic just gave you the keys. OpenAI just dropped workspace agents inside ChatGPT. And this is the one that finally turns ChatGPT into a real teammate, not just a chatbot. Here is how it works. You describe a job in plain English. ChatGPT spins up an agent, and that agent lives inside your tools: your Slack, Linear, your email, all of it. The demo is wild. Someone types, "Build an agent that monitors my product feedback channel in Slack, answers questions, and checks new issues." ChatGPT writes the instructions, hooks it into Linear, connects the Slack channel, and ships it. Then they ask, "Summarize the last 24 hours of feedback and draft a team email." The agent searches Slack, pulls Linear issues, writes the summary, and drafts the email. Done. And this runs 24/7. So when a user reports a bug at 2:00 a.m., the agent files the ticket before anyone opens their laptop. The work you used to delegate to a human, you can now delegate to ChatGPT. Take a look at this image, and in particular, this girl's hands. In her left hand, she's holding the word "few." And in her right hand, she's holding the phrase "a thousand words." Now, here's the thing.
Every AI image model from the last 2 years has been failing at hands and failing at text. And this image somehow nails both at the same time. This is ChatGPT Images 2.0. Let's discuss five use cases of this tool. And stick around for the last one, because that's the one that actually broke my brain. Quick context before we start. There are two modes inside this thing. Instant mode is fast, and thinking mode actually reasons through your prompt before drawing it. To turn it on, just go to chatgpt.com, click create an image, and toggle thinking. Let's get into it. I post one of these deep dives every week. So if you want to stay actually ahead on this stuff, instead of finding out from Twitter 3 weeks later, the link in the bio has the free WhatsApp community where I share all of this first. And if you want to catch up on deep dives of the newest AI tools of the week, we recently made videos on GPT 5.5 and Claude design, out on our channel now. This one is my personal favorite. A six-image brand series with full character and object continuity across all of them. I asked for six images. Fictional fashion brand. I called it Techno. Made up the name. Italian archival graphic design references. '90s Southern California streetwear aesthetic. Same look across all six shots. Real-looking models. Different poses. Different contexts. One image is a mailer, one a record sleeve, one a wine label, one a flyer. Six different touch points, one brand identity. Watch these six. 1, 2, 3, 4, 5, 6. That does not look AI. Like, that does not look like AI at all. I could put these on Instagram tomorrow and start selling t-shirts from them. Seriously, same visual DNA across all six. Same lighting philosophy, same color grade, same model identity across different shots. This is the character and object continuity feature OpenAI just shipped with this model. Brand-new capability, biggest unlock for anyone building a brand, a comic, a storyboard, a multi-part campaign. Real question for you.
If you are a founder right now, or a creator, would you still hire a studio for brand photos? Would you actually still book that shoot? I am genuinely asking. Yes or no in the comments. I want to know where the line is for you. This is where it got pretty crazy for me. A real working QR code baked right into the design. I invented a fake app, called it Idea Square, made it up on the spot, and I told ChatGPT, "Make me an advertisement flyer for this app with a working QR code inside the design. Modern, techy look. Headline: download our app now." Here is the flyer. Look at this: clean text, clean layout, professional. And that QR code bottom right? I actually pulled out my phone, scanned it, and guess what? It actually resolved to a placeholder URL. It works. Like, you could take this flyer right now, swap in your real URL, and send it to a printer today. Done. And if you click on the thought panel, you see how it reasoned. It literally said, quote, "The user did not specify a URL, so I am placing a placeholder." That kind of self-aware decision-making inside an image tool? Brand new. Never had that before. Now, this one I was actually a little skeptical going in. Multilingual text rendering. And we start with Japanese, because here is the thing. Every image tool claims multilingual support, but then you run it and it just hallucinates characters that look foreign but mean nothing in the language. So I asked for a complete manga page. Full Japanese, proper speech bubbles, readable dialogue. And I was specific: not Japanese-looking characters, real Japanese, grammatically correct, something a native reader can actually parse. The output? Look at this. Every speech bubble, actual readable Japanese. And I did not just take the model's word for it. I sent this to a friend of mine who reads Japanese. She confirmed the dialogue actually makes sense, an actual conversation between the characters. And look at the inking. Clean. The composition follows actual manga panel flow.
Left-to-right reading order, correct for the style. And this is the one. This is the image. The image I opened this entire video with, right at the very start. So the prompt: a photograph over a woman's shoulder. She is making magazine word art on a carpet floor. The art reads, quote, "A picture is worth a thousand words, but sometimes generating a few words in the right place can elevate its meaning." And here is the specific part. She's holding the phrase "a thousand words" in her right hand. And she's holding the word "few" in her left hand. And the model nailed all of it. Look at this. Both hands, correct. The specific words in the specific hands, correct. The text composition on the floor, correct. Everything. So this is the superpower, where you realize something really important: instruction following, that is the actual product. Anyone can make pretty pictures now. Everyone can do that. The real question now is, can the model do exactly what you asked, down to the specific word in the specific hand? This one can. And this is the one I did not think was possible. So what I did: I uploaded a grid of 64 small frames from a show. Dense visual content, lots going on in every single frame. First, I asked ChatGPT to label every single frame with a number. It used thinking mode, counted the columns, counted the rows, figured out 64 frames, then drew a number on every single one, 1 to 64, correctly placed on every one. Already, that alone is something no image tool I have ever used can do. But then I did this. I said to ChatGPT, pull out frame number 40, just that one specific frame. Give it to me in 16x9 aspect ratio as a standalone, high-quality image. And it actually did it. Watch this. It used Python inside the thinking process, calculated the exact coordinates of frame 40, cropped it cleanly, upscaled, reframed to 16x9, and the output? Clean, high-quality, standalone. Looks like it was always a 16x9 image in the first place.
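The coordinate math behind "pull out frame 40" can be sketched in a few lines of Python. This is not the model's actual code, just the arithmetic any such crop requires, and it assumes an 8x8 grid inside a square 2048x2048 source image (the video does not state the real dimensions).

```python
# Assumptions (not from the video): 64 frames laid out 8x8, row-major,
# 1-indexed, inside a square 2048x2048 image.
GRID_COLS, GRID_ROWS = 8, 8
IMG_W, IMG_H = 2048, 2048

def frame_box(n: int) -> tuple[int, int, int, int]:
    """Pixel box (left, top, right, bottom) of 1-indexed frame n."""
    i = n - 1
    col, row = i % GRID_COLS, i // GRID_COLS
    cell_w, cell_h = IMG_W // GRID_COLS, IMG_H // GRID_ROWS
    return (col * cell_w, row * cell_h, (col + 1) * cell_w, (row + 1) * cell_h)

def crop_16x9(box: tuple[int, int, int, int]) -> tuple[int, int, int, int]:
    """Center a 16:9 window inside a cell by trimming height equally."""
    left, top, right, bottom = box
    width = right - left
    target_h = width * 9 // 16
    pad = ((bottom - top) - target_h) // 2
    return (left, top + pad, right, bottom - pad)

box = frame_box(40)
print(box)             # (1792, 1024, 2048, 1280): row 5, last column
print(crop_16x9(box))  # 16:9 window centered in that cell
```

From there, an image library's crop call on that box, plus upscaling, gives the standalone 16x9 output the video describes.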
In plain English, what just happened here? The image tool just became an image editor: analyzing, isolating, and recomposing existing images. That is a completely different category of capability. Quick thing for the founders watching: my team and I actually run corporate AI trainings. We have been brought in by Adobe, Razorpay, Uber, and a bunch of others to train their teams on exactly this kind of stuff. So if you are running a company and you want the same kind of session for your people, the email is in the description. Just write to us. Hope this helped. And if you are not subscribed yet, 70% of the people watching this are not. YouTube literally will not show you the next one unless you fix that. That was all for today. I'll see you in the next one.
📺 Related Videos
🔮 MiroFish simulates the future with thousands of agents | Everything you need to know!
Christoph Magnussen · 2026-05-11 · 🇩🇪 DE
I discovered the most advanced AI tool of our time… This is Hermes Agent 🔥
Der KI-Doktor · 2026-05-09 · 🇩🇪 DE
This OpenClaw MasterClass Will Change the Way You Work Forever
Der KI-Doktor · 2026-05-08 · 🇩🇪 DE
Paperclip Is Insane | Full Tutorial
Ferdy.com | Ferdy Korpershoek · 2026-05-06 · 🇬🇧 EN