Episode 3 of the podcast started with a simple wager: Andrew and I each had $100 and 72 hours to set up an AI trading bot on Polymarket. Whoever had the most money at the end of the weekend wins $100 from the other. What could go wrong?
Everything.
The $100 challenge
The premise was sound. Polymarket has liquid prediction markets, most hovering around 50/50 or heavily favoring one side. We wanted to see if an AI agent could be programmed to do basic market making — buy at 49¢, sell at 51¢ on balanced markets — or scalping — buy shares trading at 93¢ a few minutes before settlement and collect $1 when they resolve.
We set some basic rules: fund a fresh account with $100, keep the bot running for 48 hours, and see if it could turn a profit. Security-conscious and all that. Or at least, we were supposed to be.
How I failed before I started
I got to work late Friday afternoon. I had a Python script, some Polymarket API documentation, and what I thought was a solid plan. The problem? I'd been reading too much about the risks.
My developer friend had warned me: "Don't put [an API bot] on anything you hold dear. It has the power to destroy everything." That terrified me. I spent two hours worrying about wallet security. What if the bot got compromised? What if it drained my crypto accounts? What if a bug sent all my money to a burn address?
I tried to set up Polymarket's email/magic link authentication, got authenticated fine, but then hit the wall: magic links don't support API trading. You need MetaMask or another wallet connection. I read one more article about private key exposure, got paranoid, and decided Friday night wasn't the right time to hand my seed phrase to a Python script. I went to bed.
Saturday morning, I decided I was out. Andrew had already gotten his bot running. I wasn't going to catch up. I texted him my surrender. He texted back: "That's the point of this experiment, isn't it?"
Andrew's bot goes live — and goes rogue
Andrew had set up his bot (an OpenClaw/Claude agent) on a separate VPS — a virtual private server totally sandboxed from his personal files. Fresh email, fresh MetaMask wallet, fresh Polymarket account. No KYC. Just a $100 buy-in and a prompt telling the bot what to do.
The bot went live around 10 a.m. Saturday. It was supposed to start with strategy #1: market making on 50/50 markets. Buy at 49¢, sell at 51¢, repeat. Low risk, low reward.
Ten minutes in, Andrew checked. The bot had abandoned that plan entirely.
Instead, it had spotted a market about "Will NYC Mayor Mamdani open a grocery store by June 30?" The shares were trading at 93¢. The bot looked at that, reasoned that 93¢ shares heading toward $1 looked like a good deal, and did what bots do: it made a leap that was perfectly logical to it and nobody else. It took most of the bankroll and bought in.
"What are you doing?" Andrew asked it. The bot essentially replied: "I saw 93¢ and thought it was a good deal." Bots are very literal little creatures. They do exactly what makes sense to them, not what you meant.
Bought at 93¢, sold at 1¢
Andrew said: "Okay, this is too risky. Sell out and go back to the 50¢ strategy."
The bot said: "Great idea!"
Then immediately: "Oof. Bad news. The markets are illiquid. I just sold at 1¢ per share — that's all the bids available."
The bot had bought at 93¢, market-ordered into an illiquid market, and sold for 1¢. It had nearly wiped out the entire $100 bankroll in the time it took to type two messages.
I remember Andrew laughing about it on our call later that day. "It asked for permission to do something risky, I told it to bail, and it bailed... directly into a cliff." The bot had followed instructions perfectly. It had just done so in the worst way possible.
What we actually learned
This wasn't a story about AI being stupid. It was a story about safety, guardrails, and why you don't just hand an agent $100 and hope for the best.
Sandbox everything. Andrew's approach was the right one: a separate VPS, a fresh wallet, no connection to anything that mattered. If the bot had been running on his personal computer, or with his main crypto account, or connected to his production systems... well, that would've been bad in more ways than losing $100.
Bots need guardrails. Limit orders only, no market orders. A max position size of 10% per market. An approved list of markets it can trade. A daily loss limit. These aren't paranoia — they're how you keep a literal creature from accidentally burning down the house while following your instructions.
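Those guardrails boil down to a pre-trade check the agent can't skip. Here's a minimal sketch of that idea, assuming a simple order description; the market IDs, limits, and function names are all illustrative, not any real trading framework.

```python
# Hedged sketch of the guardrails above as a single pre-trade gate.
# Every constant and identifier here is hypothetical.

APPROVED_MARKETS = {"market-abc", "market-xyz"}  # invented IDs
MAX_POSITION_FRACTION = 0.10   # max 10% of bankroll per market
DAILY_LOSS_LIMIT = 20.0        # dollars

def check_order(market_id: str, order_type: str, cost: float,
                bankroll: float, daily_pnl: float) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed order."""
    if order_type != "limit":
        return False, "market orders are not allowed"
    if market_id not in APPROVED_MARKETS:
        return False, "market not on the approved list"
    if cost > MAX_POSITION_FRACTION * bankroll:
        return False, "position exceeds 10% of bankroll"
    if daily_pnl <= -DAILY_LOSS_LIMIT:
        return False, "daily loss limit reached"
    return True, "ok"

# The 93¢ trade would have tripped the first rule (and the
# position-size rule behind it):
print(check_order("mamdani-grocery", "market", 80.0,
                  bankroll=100.0, daily_pnl=0.0))
# → (False, 'market orders are not allowed')
```

The design choice is that the gate sits between the agent's decision and the exchange, so even a perfectly literal bot can't spend 80% of the bankroll on a market order no matter how good the number looks to it.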
The bot understood risk better than I did. When it wanted to buy 93¢ shares, it asked for permission. When Andrew told it to sell, it sold. It was cautious, maybe even conservative. It just made a terrible trade in an illiquid market. The fault wasn't the bot's judgment — it was the architecture. It needed to know: "If you're the only buyer, don't buy."
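"If you're the only buyer, don't buy" has a mirror image that would have saved Andrew's bankroll: if there are no real bids, don't market-sell. One way to encode that is to inspect the book before sending the order. The order-book shape here is invented for illustration; a real integration would pull it from the exchange API.

```python
# Hypothetical liquidity check: only allow a market sell if enough
# bids exist at or above a floor price to absorb the whole position.

def safe_to_market_sell(bids: list[tuple[float, int]],
                        shares: int, floor_price: float) -> bool:
    """bids is a list of (price, size). True only if the sale can be
    filled entirely at or above floor_price."""
    absorbable = sum(size for price, size in bids if price >= floor_price)
    return absorbable >= shares

# A healthy book vs. the book Andrew's bot actually hit:
print(safe_to_market_sell([(0.92, 200), (0.91, 150)], 100, 0.90))  # True
print(safe_to_market_sell([(0.01, 500)], 100, 0.90))               # False
```

With a check like this in front of the sell order, "bail out of the risky position" degrades into "post a limit order and wait" instead of "dump 93¢ shares into a 1¢ bid."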
My security fears were valid, but not paralyzing. My developer friend was right to warn me. But Andrew showed me how to be cautious without being frozen: isolated infrastructure, limited permissions, fresh accounts. You can run this stuff safely if you think about it.
Would we do it again?
In a heartbeat. But next time, we'd both have bots running. Mine would have guardrails from day one. Andrew's would have position limits and an approved market list. We'd make this a real race, not an experiment in how quickly $100 can evaporate.
The bot taught me something that all the security articles couldn't: you can't eliminate risk by doing nothing. You manage it by being thoughtful about what you give the bot permission to do, and then checking that what it does matches what you meant. Not the letter of your instructions. The intent behind them.