The Dark Twin of AI Adoption
Remote Workers in Virtual Robot Wars
Geoffrey Moore famously described the “Chasm” in technology adoption—that treacherous gap between the Early Adopters, who are willing to suffer for a vision, and the Early Majority, who just want things to work. When it comes to creating software with AI, we haven’t quite crossed the chasm. We are currently camping inside it.
I’m not taking back my words: “the camera phone moment for software” is here, but it’s a moment of looong exposure. Vibe coding isn’t quite there yet. People wanted an “easy button”, but they got a position in middle management instead. You can get something started that way, but to be truly successful, the vibe coder needs to become an architect of synthetic labor. You have to manage a team who, just like real people, occasionally lie to your face.
How does that work in practice, especially in large organizations pushing employees to adopt technology that is changing, well, disruptively fast? The freelancer, the indie hacker, and the solo creator are different creatures from the 9-to-5 worker. Driven by some irrational intrinsic motivation, these self-employed people are compelled to tinker. And something can come out of that.
But to expect that the steady and reliable 9-to-5 professional, whose psychological and financial security AI actively threatens, will make some extraordinary effort to adopt half-baked technology seems a little unsustainable in this moment of exceedingly long exposure.
Nothing is that black and white.
It is not the 9-to-5 employee vs. the digital nomad. Rather, it’s the official, HR-driven capability development program vs. some effective stack of office tools glued together to bypass enterprise restrictions. It’s the Dark Twin, as James C. Scott called it: the unsanctioned improvisation in the shadows that keeps the wheels turning, not management decree.
I’ll give just one example. When a doctor sees a patient, they do not simply send a bill for some amount of money. Instead, the visit must be translated into specific alphanumeric codes (e.g., ICD-10 for diagnoses, CPT for procedures).
Insurance companies (the payers) started to use AI to automatically deny claims for whatever tiny error they could find. In response, hospitals employ armies of medical coders—often remote workers paid by the chart—to fight these denials. Their quotas are rising to match the speed of the insurance bots, but their software is archaic.
To survive, they are secretly building their own AI tools: “Appeal Bots” to fight the “Denial Bots.” It’s an arms race. It can be as simple as copy-pasting the denial letter into ChatGPT alongside a template prompt: “Refute this denial using the ‘Prudent Layperson Standard’ and cite [Insert State] Insurance Code Section 21. Create a table comparing the patient’s symptoms to the Milliman Care Guidelines.”
The Result: A 4-page legal argument generated in 30 seconds.
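The workflow above is, mechanically, just prompt templating. A minimal sketch of what such an “Appeal Bot” might look like as a script, assuming a hypothetical denial letter and a placeholder state; the template wording mirrors the prompt quoted above, and the function name is invented for illustration:

```python
# A hypothetical "Appeal Bot" prompt builder: it fills a reusable template
# with the payer's denial letter and the relevant jurisdiction, producing
# the text a coder would paste into ChatGPT. Template wording, state, and
# section number are illustrative placeholders, not legal advice.

APPEAL_TEMPLATE = """\
Refute this denial using the 'Prudent Layperson Standard' and cite \
{state} Insurance Code Section 21. Create a table comparing the \
patient's symptoms to the Milliman Care Guidelines.

--- DENIAL LETTER ---
{denial_letter}
"""


def build_appeal_prompt(denial_letter: str, state: str) -> str:
    """Return a ready-to-paste chatbot prompt for fighting a claim denial."""
    return APPEAL_TEMPLATE.format(state=state, denial_letter=denial_letter.strip())


if __name__ == "__main__":
    # Hypothetical denial letter and state, purely for demonstration.
    letter = "Claim #8471 denied: ER visit deemed not medically necessary."
    print(build_appeal_prompt(letter, "Texas"))
```

The point is not the code, which is trivial, but the asymmetry: a reusable template plus copy-paste turns a 30-second effort into a multi-page rebuttal.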
It’s virtual robot wars: remote workers operating AI as artillery, with your case files as ammunition. 😬


