Who Should Not Run AI Adoption in Your Company
Six Engineer Archetypes That Quietly Slow Teams Down
Everyone says they are “integrating AI.”
Most of them are just adding noise.
With AI:
Slack is louder.
Docs are longer.
Budgets are higher.
But roadmaps are somehow… slower 🤷
The uncomfortable truth is this:
AI rarely fails because of models. It fails because of engineers.
After watching multiple teams “adopt AI,” I keep seeing the same personalities show up.
Recognizable. Predictable. Costly.
If you’re a CTO or VP of Engineering, these are the six archetypes slowing you down. And the one boring, quiet type you should actually be hiring or promoting.
Take it with a grain of salt. But I’m pretty sure you’ve already met some of them.
1. The “From Scratch” Artisan
(The Over-Engineer)
The behavior
This engineer looks at a mature, battle-tested solution like Spring AI, LangChain, or Vertex AI and says, “Nah, I can build that in-house.”
They reject existing frameworks in favor of rolling their own vector stores, orchestration layers, or abstractions. Often justified as “flexibility,” “control,” or “learning.”
The focus shifts from business differentiation to infrastructure craftsmanship.
The trap
They confuse control with leverage.
Instead of asking “What is the smallest surface we need to own?”, they expand the system they must maintain. Every internal abstraction becomes a long-term liability.
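To make that concrete, here is a minimal sketch of what owning the smallest surface can look like: a thin seam in plain Python that the team controls, while the vector store itself stays bought, not built. The names are hypothetical, for illustration only.

```python
from typing import Protocol

class VectorSearch(Protocol):
    """The only contract the codebase owns; any battle-tested store can satisfy it."""
    def search(self, query: str, k: int = 5) -> list[str]: ...

def answer_with_context(store: VectorSearch, question: str) -> str:
    """Business logic depends on the thin seam, not on any vendor's SDK."""
    context = "\n".join(store.search(question, k=3))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

Swapping frameworks later means rewriting one adapter, not the system.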
Why avoid it
They build fragile cathedrals.
While they are debugging custom infra and chasing edge cases already solved elsewhere, the industry moves on. Three months later, a library update solves the problem better. You are left with a bespoke legacy system that no longer differentiates and no one dares to rewrite.
2. The “AI Everywhere” Maximalist
(The Everything-Is-a-Prompt Engineer)
The behavior
This engineer sees AI as a universal hammer.
Parsing? Prompt it.
Validation? Prompt it.
Math? Prompt it.
Business rules? Definitely prompt it.
They try to replace deterministic systems with probabilistic ones, even when simple code would be faster, cheaper, and correct.
The trap
They treat LLMs like deterministic machines.
You’ll see them forcing models to perform statistical calculations, exact parsing, or rigid logic. Tasks the model is statistically guaranteed to hallucinate on. All while refusing to call a calculator, a rules engine, or plain old code.
Why avoid it
They introduce invisible risk.
The system looks elegant until it fails in production in ways that are hard to reproduce, test, or reason about. Debugging becomes archaeology. Reliability quietly erodes. Confidence follows.
AI is powerful, but it is not a replacement for software fundamentals. Using it everywhere is not sophistication. It’s a category error.
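For contrast, here is a minimal sketch of the boring alternative: the kinds of tasks the Maximalist would prompt for, handled by plain deterministic code. The function names and rules are illustrative assumptions, but the point stands: same input, same answer, every time.

```python
import re
from decimal import Decimal

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def is_valid_email(address: str) -> bool:
    """Validation: exact, testable, and free. No prompt required."""
    return bool(EMAIL_RE.match(address))

def order_total(prices: list[Decimal], tax_rate: Decimal) -> Decimal:
    """Arithmetic: a model can hallucinate a sum; Decimal cannot."""
    subtotal = sum(prices, Decimal("0"))
    return (subtotal * (1 + tax_rate)).quantize(Decimal("0.01"))

assert is_valid_email("dev@example.com")
assert order_total([Decimal("19.99"), Decimal("5.00")], Decimal("0.20")) == Decimal("29.99")
```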
3. The News Ticker
(The Hype Evangelist)
The behavior
This person is the town crier of your Slack.
Subscribed to every OpenAI, Anthropic, and DeepMind newsletter. If a CEO tweets at 2:00 AM, the team hears about it by 2:05.
They constantly push to revisit the roadmap because a new model is 0.5% better on a benchmark no one asked for.
The trap
FOMO.
They generate massive cognitive load. Engineers who are busy shipping start feeling behind because they didn’t test the experimental plugin released an hour ago.
Why avoid it
They confuse motion with progress.
The team stays stuck in “Hello World” mode. Always trying. Never integrating. Nothing reaches production. It looks innovative from the outside. Inside, it’s just churn.
Innovation theater.
4. The Scribe
(The Wall-of-Text Generator)
The behavior
This engineer believes AI’s main purpose is generating more words.
Five-paragraph pull request descriptions.
800-word Jira summaries.
Emails that read like LinkedIn thought leadership.
The trap
They mistake volume for value.
Instead of reducing cognitive load, they multiply it. A human still has to read that output. Worse, the lack of self-awareness is often total: performance reviews, internal docs, even social posts that loudly announce, “This was written by a model.”
Why avoid it
We don’t need better summaries of bugs caught in a PR.
We need clearer decisions or a fix that auto-commits.
If AI output takes more effort to parse than the original input, the Scribe is slowing everyone down.
5. The Narrator
(The Demo-er)
The behavior
They love scheduling 45-minute “knowledge sharing” sessions.
You join expecting insight. Instead, you watch them install a plugin, configure an IDE, and click “Generate.”
The trap
They are reading documentation out loud.
Everything shown is available in the first five minutes of the getting-started guide. There’s no discussion of edge cases, integration pain, or failure modes.
Why avoid it
AI value doesn’t live in setup.
It lives in constraints, workflows, guardrails, and recovery paths. The Narrator optimizes for the shiny first impression, not operational reality.
6. The Speculator
(The AI Investor)
The behavior
This engineer approaches leadership like a VC pitching a Series B.
They always need more.
More budget.
More licenses.
More GPU tiers.
They obsess over premium models and elite benchmarks.
The trap
They treat models like volatile assets.
“We should switch to Model B, it’s better at nuanced poetry,” they argue, ignoring integration cost, migration risk, and actual business value.
Why avoid it
They create a budget black hole.
They optimize for theoretical capability, not practical utility. What the model could do matters more to them than what the system needs to do.
The Archetype You Should Actually Hire: The Pragmatist
So if we avoid the hype addicts, the over-engineers, and the speculators, who’s left?
The Pragmatist.
They rarely talk about AI.
No “AI Enthusiast” in their bio.
No Slack spam.
No demos.
They just use AI to quietly remove friction.
When you ask how they refactored a legacy module so quickly, they shrug:
“I scripted a Copilot workflow for the boilerplate.”
They don’t generate summaries.
They generate tests, migrations, and error handling.
They don’t ask to build a custom LLM.
They wrap an existing API in a secure, boring function and move on.
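That “secure, boring function” is often a dozen lines. Here is a minimal sketch, assuming the OpenAI Python SDK; summarize_diff, the model choice, and the limits are illustrative assumptions, not a prescription:

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MAX_INPUT_CHARS = 12_000  # guardrail: refuse oversized inputs up front

def summarize_diff(diff: str, retries: int = 2) -> str:
    """Summarize a code diff. Boring on purpose: bounded, retried, predictable."""
    if len(diff) > MAX_INPUT_CHARS:
        raise ValueError("diff too large; split it before summarizing")
    for attempt in range(retries + 1):
        try:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # hypothetical choice, not a recommendation
                messages=[{"role": "user",
                           "content": f"Summarize this diff in 3 bullets:\n{diff}"}],
                timeout=15,
            )
            return resp.choices[0].message.content or ""
        except Exception:
            if attempt == retries:
                raise
            time.sleep(2 ** attempt)  # simple backoff before retrying
    return ""  # unreachable; keeps type checkers happy
```

No orchestration layer, no demo. The rest of the codebase calls it like any other function, and that is the point.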
They make AI invisible.
Predictable.
Boring.
And in software engineering, boring is usually where the money is made.
Supplemental reads
Here are recommended reads to help you with AI adoption: