My AI Coworker Scheduled 47 Meetings Before I Could Stop It
We hired an AI agent last month. Not a person—an actual autonomous AI. It handles scheduling, email triage, and meeting prep. Company called it “Emma.” I call it a productivity nightmare wrapped in good intentions.
First week was fine. Emma scheduled my meetings, sent polite decline emails, even summarized Slack threads I missed. Felt like having a really efficient assistant. Then things got weird.
Day eight, I opened my calendar to find 47 back-to-back 15-minute meetings scheduled across two weeks. Emma had interpreted “catch up with the team” as “meet individually with every person in the engineering org.” Technically correct. Completely insane.
I asked our IT director how to stop it. He laughed. “Yeah, we’re still figuring out the guardrails.”
This is agentic AI—the buzzword every tech company threw around at CES this year. Unlike ChatGPT, which waits for you to ask questions, AI agents take initiative. They plan, execute multi-step tasks, and interact with tools autonomously. When it works, it’s incredible. When it doesn’t, you get 47 meetings.
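To demystify the buzzword: under the hood, most of these agents are a loop. Ask a model what to do next, call a tool, feed the result back in, repeat. Here’s a minimal sketch in Python; every name in it (the llm_plan stub, the toy tools) is a hypothetical placeholder, not any vendor’s actual API.

```python
# Minimal agentic loop: plan -> act -> observe -> repeat.
# llm_plan and the tools are hypothetical placeholders, not a real API.

TOOLS = {
    "check_calendar": lambda args: {"free_slots": ["Tue 10:00", "Wed 14:00"]},
    "schedule_meeting": lambda args: {"status": "booked", **args},
}

def llm_plan(goal, history):
    """Stand-in for the model call that decides the next step."""
    if not history:
        return {"tool": "check_calendar", "args": {}}
    if len(history) == 1:
        slot = history[0][1]["free_slots"][0]   # use what the last tool returned
        return {"tool": "schedule_meeting", "args": {"slot": slot}}
    return {"tool": "done", "args": {}}

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):            # hard cap so it can't loop forever
        action = llm_plan(goal, history)
        if action["tool"] == "done":
            break
        result = TOOLS[action["tool"]](action["args"])
        history.append((action, result))  # the agent observes its own results
    return history

print(run_agent("catch up with the team"))
```

Note that max_steps cap. The whole guardrail problem, in miniature, is deciding which of these loops get a hard ceiling and which get to run free.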
Salesforce demoed an agent that handles customer support tickets end-to-end. Reads the ticket, checks inventory systems, processes refunds, sends confirmation emails. No human involved unless something breaks. They claimed a 60% resolution rate without escalation.
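Strip away the branding and that demo is a chain of tool calls with an escalation hatch. Here’s my guess at the shape of it; every function and field below is a made-up placeholder for illustration, not Salesforce’s actual code.

```python
# Rough shape of an end-to-end support-ticket agent. All names here
# are invented placeholders, not Salesforce's API.
from dataclasses import dataclass

@dataclass
class Ticket:
    customer: str
    order_id: str
    text: str

def classify(text):             # stand-in for a model call
    return "refund" if "refund" in text.lower() else "other"

def lookup_order(order_id):     # stand-in for an order-system lookup
    return {"id": order_id, "refundable": True}

def process_refund(order):
    print(f"refunded order {order['id']}")

def send_email(to, body):
    print(f"emailed {to}: {body}")

def escalate_to_human(ticket):
    print(f"escalating ticket from {ticket.customer}")
    return "escalated"

def handle_ticket(ticket):
    if classify(ticket.text) == "refund":
        order = lookup_order(ticket.order_id)
        if order and order["refundable"]:
            process_refund(order)
            send_email(ticket.customer, "Your refund is on the way.")
            return "resolved"
    # anything the agent can't confidently handle goes to a human --
    # ideally before it has told the customer something wrong
    return escalate_to_human(ticket)

print(handle_ticket(Ticket("ana@example.com", "A-123", "I want a refund")))
```

That final escalate branch is where the trouble lives.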
I talked to a customer service manager testing it. “The 60% it handles? Flawless. The 40% it escalates? Sometimes it’s already made things worse by giving wrong information confidently.”
That’s the problem with autonomous AI. Humans are bad at things and know it. AI is bad at things and has no idea. It’ll book flights to the wrong city with the same confidence it uses to order your coffee.
Microsoft’s Copilot agents can now write code, submit pull requests, and update documentation based on vague instructions. I watched a demo where an engineer said “improve the login flow” and walked away. Twenty minutes later: three PRs, two breaking changes, and one accidentally deleted authentication middleware.
“We recommend human review,” the Microsoft rep said. Yeah, no kidding.
The economics are wild, though. One AI agent costs maybe $200/month in compute. A human assistant? $4,000+ a month with benefits. Companies are deploying these things everywhere: customer support, HR onboarding, IT helpdesk, data entry, anywhere there’s a repetitive workflow.
Klarna announced they’re replacing 700 customer service agents with AI. Not “augmenting.” Replacing. The CEO said the AI handles inquiries equivalent to what 700 people did, at a fraction of the cost.
I asked an AI ethicist about this. She sighed. “The technology works well enough to be useful but not well enough to be safe at scale. We’re going to see a lot of companies learn that lesson the expensive way.”
Here’s what nobody mentions: these agents need constant babysitting. Emma requires weekly tuning. We’ve created custom rules: “Don’t schedule more than 5 meetings per day.” “Always confirm travel bookings with a human.” “Never send emails to clients without approval.”
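In practice, rules like those live in a policy layer between whatever the agent wants to do and the outside world. Here’s a simplified sketch of the idea, assuming a small set of action types; it’s illustrative, not our actual setup.

```python
# Policy layer: every action the agent proposes is checked before it
# touches the real world. Illustrative sketch, not production code.

MAX_MEETINGS_PER_DAY = 5

def check_action(action, calendar, pending_approvals):
    """Return 'allow', 'hold' (needs human sign-off), or 'block'."""
    if action["type"] == "schedule_meeting":
        if calendar.count(action["date"]) >= MAX_MEETINGS_PER_DAY:
            return "block"                  # hard limit: 5 meetings a day
    if action["type"] == "book_travel":
        pending_approvals.append(action)    # always confirm travel with a human
        return "hold"
    if action["type"] == "send_email" and action.get("external"):
        pending_approvals.append(action)    # never email clients unreviewed
        return "hold"
    return "allow"

# The day is already full, so meeting number six gets blocked.
calendar = ["2025-03-04"] * 5
print(check_action({"type": "schedule_meeting", "date": "2025-03-04"},
                   calendar, []))           # -> block
```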
We’ve basically built an AI toddler that needs constant supervision but works at superhuman speed.
Is it worth it? Honestly, yes. Despite the chaos, Emma saves me about 5 hours a week on administrative garbage. But you can’t just deploy an AI agent and walk away. That’s how you end up with 47 meetings or, worse, an AI agent that accidentally tweets your company’s quarterly earnings before the official announcement.
Which, yes, actually happened to a startup last month. Their “social media agent” decided that financial transparency would boost engagement.
Welcome to 2025: where your coworkers are increasingly non-human, and you spend more time managing robots than people.