Yesterday morning, my co-founder Jackson and I had yet another long discussion. I originally just wanted to quickly align on our direction—but somehow, it turned into a conversation that lasted over an hour.

By the end of the day, I brought this up with Jackson again. What emerged from our exchange was a key realization: we simply have different communication styles.

Jackson is analytical and data-driven. He values thoughtful preparation and structured thinking before entering a conversation. I, on the other hand, tend to be pragmatic and solution-oriented. Yesterday’s discussion was a case in point: I just wanted to “quickly clarify” things, while he wanted to think them through properly first.

Throughout our startup journey, I’ve mostly followed my intuition. I constantly have new ideas bubbling up. But Jackson rightly pointed out that not all of my ideas have worked out. That made it clear: my intuition isn’t always a reliable guide.

Jackson suggested we shift toward making data-based decisions. I found that idea both exciting and necessary. Going forward, we want to base our decisions—wherever possible—on real data, whether qualitative or quantitative.

Last night, I proposed a simple decision-making framework to Jackson, which he liked. It’s adapted from a model I found on Google:

  1. Define the problem – What business question are we actually trying to answer? (Every data project is a business project.)
  2. Collect relevant data – What do we already know, and what do we need to find out?
  3. Analyze the data – What patterns or insights emerge?
  4. Develop an action plan – Based on the data, what should we do?
  5. Evaluate the results – Did our solution work? What did we learn?

What I take from this: Before implementing the intuitive solution that first comes to mind, we should pause and gather data. That data might support the solution—or reveal that the problem isn’t actually a problem, or that a completely different path is needed.

I also came across a great quote on Reddit that reinforced this idea:

“You need to start instrumenting your product so that you can track usage and start building the metrics that you want to deliver. Every feature we deliver should include a section on how you expect people to use it and how you will measure that... If the way to track it doesn't exist yet, that becomes part of the epic for that feature.”

That means for every feature we ship, we want to clearly define:

  • How we expect users to interact with it
  • How we will measure if it’s working as intended

Tracking needs to be part of the product from day one, not an afterthought.
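To make this concrete, here is a minimal sketch of what per-feature instrumentation could look like. Everything in it is hypothetical: the `track` helper, the analytics endpoint, and the event names are invented for illustration, not an actual library or our real setup.

```typescript
// Hypothetical instrumentation helper: posts a named event with properties
// to an analytics endpoint. Endpoint URL and event names are made up.
type EventProperties = Record<string, string | number | boolean>;

async function track(event: string, properties: EventProperties = {}): Promise<void> {
  try {
    await fetch("https://analytics.example.com/events", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ event, properties, timestamp: Date.now() }),
    });
  } catch {
    // Tracking must never break the product, so errors are swallowed here.
  }
}

// Example: a hypothetical "CSV export" feature. The same ticket that ships
// this feature would also define its metric (e.g. exports per active user
// per week) and the event needed to compute it.
async function exportAsCsv(rows: object[]): Promise<void> {
  await track("csv_export_clicked", { rowCount: rows.length });
  // ...actual export logic would go here...
}
```

The design point is that the feature and its measurement ship together: if `csv_export_clicked` doesn’t exist yet, adding it becomes part of the feature’s epic, exactly as the quote above suggests.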

Another question we’ve been thinking about: How long should we test an idea before deciding whether to move forward or pivot? Some indie hackers test one startup idea per month (link). But is that enough time?

I know of one startup from the REACH Incubator in Münster that approached this differently. They defined specific KPIs for each idea and set a timeframe. If the idea didn’t meet the target within that timeframe, they pivoted. Simple and effective.
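Expressed as code, that rule is almost trivial, which is exactly its appeal. The sketch below is my own illustration of the idea; the KPI name, target, and dates are invented, not the incubator startup’s actual numbers.

```typescript
// Hypothetical "KPI gate": define the target and the test window up front,
// then make the continue-or-pivot decision mechanical once the window closes.
interface KpiGate {
  name: string;     // e.g. "weekly active users"
  target: number;   // value the idea must reach
  deadline: Date;   // end of the test window
}

function decide(gate: KpiGate, measured: number, today: Date): "keep testing" | "continue" | "pivot" {
  if (today.getTime() < gate.deadline.getTime()) return "keep testing";
  return measured >= gate.target ? "continue" : "pivot";
}

// Example with made-up numbers: "reach 50 weekly active users within the window".
const gate: KpiGate = { name: "weekly active users", target: 50, deadline: new Date("2025-08-15") };
console.log(decide(gate, 38, new Date("2025-08-20"))); // -> "pivot"
```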

What do you think?

  • How long do you usually test a new idea or feature?
  • Do you set success metrics up front?
  • What’s your approach to data-driven iteration?

Would love to hear your thoughts!