For most of my career, moving fast was a liability.
I would build things too quickly, according to most of the rooms I was in. A colleague would need three weeks of research before touching a feature: stakeholder interviews, competitor analysis, a Notion doc with seventeen sections. I'd have already shipped something. By the time the research concluded, we were usually looking at the same solution - except now I'd created friction by skipping the process everyone else considered mandatory.
I learned to adapt. Slow down publicly. Produce the artifacts. Earn the alignment before moving.
Only now, many years into my career, does that instinct feel like an advantage rather than something to manage. And I think AI is why.
What research is actually doing
Research is not bad. Research is a tool for building mental models when you don't already have them.
If you've never shipped a booking interface before, studying how others have done it is genuinely useful. You're accumulating information your intuition can't yet access. The research earns its keep because the alternative - guessing blind - would be worse.
For a long time I underestimated this. I assumed colleagues doing weeks of research were being slow or overly cautious. Some were. But many were doing exactly what they needed to do - building the internal model that I'd already built through years of shipping similar things. The research wasn't the problem. The assumption that everyone was at the same starting point was.
AI accelerates implementation so dramatically that this distinction now matters more than ever. If you can ship in a day what used to take a week, burning two weeks on research before you start is not a minor inefficiency. It's the bottleneck.
Intuition is not a shortcut
People treat intuition as the lazy option. Skip the process, trust your gut, ship fast and hope for the best.
That framing is wrong.
Strong product intuition is compressed experience. Every past decision, every failed feature, every user session where something broke in an unexpected way - all of that encodes as pattern recognition over time. When you look at a problem and immediately sense the right approach, it's not magic. It's your brain retrieving a decision from an internal dataset built through years of actually shipping things.
For most of my career I had to work around this rather than lean into it. Moving too fast made people uncomfortable. It read as arrogance, or as disrespecting a process that the rest of the team had invested in. So I learned to produce the artifacts, sit in the research sessions, keep my half-finished prototype on a branch until the process caught up. That adaptation was right. Teams need shared context, not just correct answers delivered at the wrong speed.
I wrote about something adjacent to this in my piece on the cosplaying PM - the idea that for technical products, the build is the discovery. You can't spec your way to a great developer experience. You have to feel where it's wrong. The instinct is useful. The trick is making it legible to the people around you.
Where AI shifts the stakes
When implementation was expensive, being wrong was expensive. A bad product decision meant weeks of engineering time lost. That created a rational case for more research upfront - the cost of getting it wrong justified the delay of getting it right.
AI has inverted that math.
A wrong turn now costs a day. Sometimes hours. You build the thing, watch it in action, realize the approach is off, and rebuild. The feedback loop that used to span a sprint now spans an afternoon. Which means the penalty for moving on intuition has dropped dramatically, and the cost of prolonged analysis has stayed the same.
This is what the taste argument is really about. When output is cheap, the bottleneck shifts from how fast you can build to how quickly you can tell what's worth building. Intuition is the mechanism that makes that call fast. Research is often the mechanism that slows it down.
The counterargument
Strong intuition is personal context compressed into a judgment call. The problem is that other people don't have that context - and in agency or client work, that gap is more than a communication problem. It's a trust problem.
The person who spent three weeks on research can answer questions. Why this approach and not that one? What did you look at? What did you rule out and why? They have a paper trail. They can walk a client through the reasoning, handle the pushback in a room, and demonstrate that the decision came from a process - not a feeling.
I couldn't always do that. I'd arrive at the right answer and struggle to explain the route. "I've built enough of these to know" is not a satisfying answer when a client is about to sign off on six months of work. That's not a technical problem - it's a cultural one. Clients and agencies have a trust relationship that runs on evidence, not instinct. The research process, even when it arrives at the same conclusion, is doing social and political work that pure intuition skips entirely.
This is where the build-first approach does double duty. A working prototype is not just a faster path to the right answer. It's an artifact that creates something to discuss, push back on, and align around - a partial substitute for the documentation trail that research naturally produces. It moves the conversation from "trust me" to "look at this." But it doesn't fully replace the ability to answer every question in the room, and some rooms need that.
The failure mode to avoid is treating your intuition as infallible. It's a starting point, not a conclusion. And in contexts where the relationship matters as much as the result, knowing the right answer isn't enough - you have to be able to defend it too.
Concluding
The analysis-heavy approach has always been a hedge against inexperience - and a reasonable one. For people still building their internal models, research is doing real work. The problem is when it becomes the default for everyone, regardless of experience, regardless of what AI has done to the cost of iteration.
I spent years adapting my instincts to fit processes that weren't built for them. Now those same instincts are finally in the right environment.
Intuition built from experience is not the alternative to good process. For the right person on the right problem, it was always the process.
Frequently asked questions
Isn't skipping research just reckless guessing?
No, and conflating the two is the most common mistake people make on this topic. Intuition is compressed experience - pattern recognition built from years of shipping real things and watching what fails. Reckless guessing has no information behind it. The difference is whether the person making the call has actually done the work before, not whether they produced a research document first.
Does this approach work for everyone, or just experienced practitioners?
It scales with depth of experience, so no, it does not work the same way for everyone. Someone early in their career genuinely needs the scaffolding of research to build the mental models that intuition later automates. The argument is not that research is always wrong - it is that for people who have accumulated real domain knowledge, excessive analysis often delays what they already sense is correct.
How does AI change this calculation specifically?
AI collapses the cost of iteration. When building something used to take weeks, being wrong was genuinely expensive. Now, a wrong turn costs you a day, sometimes hours. That changes how much validation you need before committing. You still need judgment about what to build, but you need far less certainty before you start, because correcting course is cheap.
What do you do when intuition conflicts with what a stakeholder or client wants?
Two different problems. If a teammate pushes back, build the thing and bring the artifact to the conversation - people converge on concrete things much faster than abstractions. But in agency or client work, the dynamic is different. Clients need to trust the process, not just the outcome. If you can't explain how you got to the answer, the answer itself becomes harder to sell. In those contexts, the research process is doing relationship and trust work that a prototype alone can't fully replace.
Can you train product intuition, or do you just have it?
You train it by shipping, failing, and shipping again - not by reading more frameworks or attending more workshops. Every time you build something and watch real users interact with it, you update your internal model of what works. The people with strong intuition are not smarter. They just have more reps.