The Roadmap Noose

Note: this article was originally published on 25th July 2016

If there’s one thing product managers love, it’s a roadmap. It can become a framework for a team to deliver amazing products, offering clarity and structure to the product vision. But a roadmap can also become a metaphorical noose around a product team’s neck when it’s fundamentally created to set dates. Quasi roadmaps-cum-project plans don’t work for a number of reasons, but principally because we’re terrible at estimating the time taken to perform a task.

As a product manager, it’s easy enough to verify. Simply open your roadmap from this time last year and cross-reference what you thought you’d launch with what you actually launched. This time last year I was fresh-faced in a new role and predicted we’d build a fully transactional app on never-used-before APIs within four months, and that by the end of the year we’d have taken the app to three new markets. In short, I was shooting for global dominance. In reality, we launched the app within six months and spent the rest of the year building out features and functionality without taking it to a new market. Despite the chasm between the initial roadmap and reality, I firmly believe the whole team can look back with pride on what we collectively achieved.

If our plans and realities so often differ, why do we subject ourselves to this planning ritual?

Modern businesses want to harmonise strategic planning with agility, but falsely see roadmaps as a set of promises and commitments to delivery, as opposed to a guide to the strategic direction of the product. For businesses to be truly agile, they need to accept the unknowns in the world. This is at odds with the traditional view of roadmaps, though, and therein lies the problem.

Roadmaps are often a mere reassurance for the business that a product team is focused on the right challenges. It’s not unreasonable for a business to want to understand what its potential product pipeline looks like, especially considering the financial implications. Businesses need financial predictions, which often hinge on product launches and improvements. Given this financial pressure, it’s easy to see how the traditional view of roadmaps has developed, and why product managers are continually asked to produce them.

So, given that roadmaps offer value, it’s important to understand why they’re so often wrong.

History is littered with examples of humans underestimating the time needed to complete a task; arguably, it’s an inherent flaw.

One of the most popular theories on the matter comes from Daniel Kahneman and Amos Tversky (1979) and their ‘planning fallacy’. Kahneman and Tversky theorised that predictions about how much time will be needed to complete a future task display an optimism bias, which subsequent research has borne out. The phenomenon holds for both group and individual tasks, and across all industries.

For example, take the construction of the Sydney Opera House. According to original estimates in 1957, the opera house would be completed early in 1963 for $7 million… a scaled-down version of the opera house finally opened in 1973 at a cost of $102 million (Hall, 1980).

The Channel Tunnel faced similar issues. The builders predicted that the first trains would run between London and Paris in June 1993. The train ran in May 1994. The tendency to hold a confident belief that our own projects will proceed as planned, even while knowing that the vast majority of similar projects have run late, is the planning fallacy.

Rooted within the fallacy is the concept of wishful thinking:

Wishful thinking is the formation of beliefs and making decisions according to what might be pleasing to imagine instead of by appealing to evidence, rationality, or reality.

Sounds like a roadmap you’ve seen, right?

This is grounded in a self-serving bias, where we reconstruct our previous experiences to reflect best on ourselves. We take credit for tasks that went well and blame delays on outside influences.

This is most evident when creating roadmaps and estimating the time to completion. When assessing how long a task will likely take, we try to relate it to similar tasks previously completed. Research found that people usually focus on previous occasions that justify an optimistic outlook; we almost never focus on episodes when we encountered problems or failed to finish tasks as expected (Buehler et al., 1994).

The fable of deadlines is now popular in modern culture, with quotes and memes aplenty:

“I love deadlines. I love the whooshing noise they make as they go by.”

Douglas Adams, The Salmon of Doubt

So, we know roadmaps are required and can be useful, but we’re not great at estimating the time taken. What’s the solution then? We think we’ve found it.

After much pondering, procrastination and pizza (and a healthy dose of reading from our peers), we’ve changed our roadmaps to better serve all outcomes. We now prioritise themes into Now / Next / Later buckets. The Now bucket consists of the things we’re working on right now. The Next bucket… you got it. It’s the stuff we’re working on next. And the Later bucket usually contains big, meaty ideas that we’re not ready to work on anytime soon, but which are important.
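For the programmatically minded, the bucket structure above can be sketched in a few lines. This is only an illustration: the `Roadmap` class, its `promote` method, and the theme names are my own hypothetical examples, not a real tool or a real Travelex roadmap.

```python
from dataclasses import dataclass, field

@dataclass
class Roadmap:
    """A Now / Next / Later roadmap: themes, not dates."""
    now: list[str] = field(default_factory=list)    # in active development
    next: list[str] = field(default_factory=list)   # queued up after "now"
    later: list[str] = field(default_factory=list)  # big ideas, not yet scoped

    def promote(self, theme: str) -> None:
        """Move a theme one bucket closer to delivery."""
        if theme in self.later:
            self.later.remove(theme)
            self.next.append(theme)
        elif theme in self.next:
            self.next.remove(theme)
            self.now.append(theme)

roadmap = Roadmap(
    now=["In-app onboarding"],
    next=["Push notifications"],
    later=["New market launch"],
)
roadmap.promote("Push notifications")
print(roadmap.now)  # ['In-app onboarding', 'Push notifications']
```

The point of the structure is what it leaves out: there are no dates anywhere, only an ordering of intent.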

This allows the team to have a clear focus and ends debates about products and features that are months away. It provides the business with reassurance that the product is moving in the right direction, and gives product teams enough freedom to solve problems rather than build out-of-date solutions proposed in a roadmap 12 months earlier.

We can still talk about dates when required, but these are separate from the roadmap. The roadmap serves a clearer purpose this way; it provides tangibility to the product vision and a framework for success.

The first mention I can find online explaining this way of working comes from Noah Weiss here. It’s worth a read.

Here was our first stab at this, using Artefact Cards:

I feel liberated just looking at it.

How this will pan out in the long term at Travelex is yet to be seen; early feedback from Product, Design, Engineering, Marketing and our commercial teams has been positive, as has the reaction from the rest of the business. Time will tell if the idea sticks, but for now the metaphorical noose has been laid to rest and our teams are revelling in the freedom.


The original Now/Next/Later post from Noah Weiss is here >

If you’re looking for a tool to help with this model, ProdPad is worth checking out >

This presentation from Janna Bastow offers a different view on this format and other facets of roadmaps >

Will chat bots kill mobile apps?

Note: this article was originally published on 13th May 2016

Unless you’ve been living under a rock for the last month, you’ll know that Facebook recently announced that they’re opening up Messenger to allow developers to build bots. Big news.

Bots can work in a number of ways: principally, they’ll either perform natural language processing, or they’ll hold a guided conversation through the use of controlled interactions (for example, buttons). “Hey, Amazon here. Want to buy this TV set?” “Yes/No”. Conversational commerce.
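To make the “controlled interaction” style concrete, here’s a rough sketch of the kind of message a button-driven bot sends, loosely modelled on the quick-reply format of Facebook’s Messenger Send API. The recipient ID, payload strings, and the `guided_question` helper are illustrative placeholders, not a verbatim integration.

```python
def guided_question(recipient_id: str, text: str, options: list[str]) -> dict:
    """Build a message that asks a question and constrains the
    user's reply to a fixed set of buttons (quick replies)."""
    return {
        "recipient": {"id": recipient_id},
        "message": {
            "text": text,
            "quick_replies": [
                # Each option becomes a tappable button; the payload is
                # what the bot receives back when the user taps it.
                {"content_type": "text", "title": opt, "payload": opt.upper()}
                for opt in options
            ],
        },
    }

msg = guided_question("<PSID>", "Want to buy this TV set?", ["Yes", "No"])
print(msg["message"]["quick_replies"][0]["title"])  # Yes
```

The design choice is the interesting bit: by enumerating the replies up front, the bot sidesteps natural language processing entirely for that turn of the conversation.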

Naturally, this leads to people questioning their place within the ecosystem — are chatbots replacements for websites? Are they going to kill off mobile apps?



History tells us that new channels tend to complement rather than displace existing channels. Websites didn’t kill physical stores, and mobile apps haven’t killed websites. Instead, the makeup of the ecosystem changes and each channel performs a different job for the customer whilst coexisting with the others. It’s likely that over time bots will become a fourth channel: physical, web, app and bots will likely make up a business’s ‘omnichannel’ presence.

Whilst I’m excited about bots, I do wonder if their potential for driving commerce is a little misunderstood. They’ll be great for service discovery and acquisition, but ultimately they’ll hand off to apps and websites. Finding out about a new product or service via chat might be contextually relevant, but the experience will never be rich enough, or allow enough customisation, for it to be entirely contained within the bot. Instead, the bot serves the initial parts of the customer journey, awareness and acquisition, and the other channels provide the richness needed for conversion and retention.

If bots are to be a vehicle for acquisition and service discovery, are they more a challenger to the App Store than to apps themselves? The App Store has long-standing problems with discoverability (rumoured to be somewhat addressed in iOS 10), and perhaps chatbots will provide consumers with another way to discover great apps. To take this a stage further, you could argue that chatbots could begin to challenge search engines… Google and Apple vs Facebook. A fight everyone would love to see.

One way or another, chatbots will change the way we interact with products and services over the coming years. Expecting them to replace apps, however, is a false dawn.


Goodbye, Apple Watch

Note: this article was originally published on 21st February 2016

Baa. Baaaa. Baa.

This, I’m reliably informed by Travelex’s Android engineers, is how I sound when talking about Apple products. An iSheep.

Now whilst I’d like to think I have a little more objectivity than that, they’re probably not too far wrong. I fell in love with Apple products whilst at school with the advent of the iPod. From there I progressed to the iPhone, obeying the annual marginal-improvement release cycle, and then unequivocally to the iPad, MacBook, iMac, and Apple TV. To say my front room looks like an Apple showroom wouldn’t be an exaggeration.

So last September when Apple announced the Apple Watch, you can imagine my pant-wetting reaction. I sat furiously refreshing at 8am on 10th April to preorder, and at the end of the month I began rocking an Apple Watch on my wrist (42mm Space Grey Sport, if you’re interested).

At first, it was easy to overlook its faults and endlessly rationalise its misgivings;

“I charge my phone every night, it’s no problem to charge my watch too”

“I move my wrist when I want to see the time anyway, so it’s no problem that the watch only shows the time when I move my wrist”

“I can’t afford to miss important notifications, so it’s no problem that I get frequently interrupted by spam messages too”

“It’s great that it tells me to stand up every hour, as I’m always forgetting to stand. I’m sure this won’t become irritating”

I maintained these views for a while, but slowly I began to get frustrated with the watch. I moved from reasoning why I should wear the watch (because the watch was useful, or “enhanced my life” in Apple-speak) to reasoning why I couldn’t take it off (I work in mobile, I have to understand how this works / my wife would kill me – I spent £500 on this thing!).

The battery slowly became an issue. Sure, I charge my iPhone every night but if for some reason I can’t, the next day I’m surrounded by people with iPhone cables. With the Apple Watch, a charge-less night results in wearing a £500 paperweight the next day and endless people gleefully asking you what the time is.

The biggest frustration grew to be the need to rotate my wrist for the watch to wake and the time to be visible. This sounds like a minor point, and when I’ve discussed it with other Apple Watch owners they rationalise it as I did; “don’t you need to do that with any normal watch anyway?”. Well, the answer to this is unequivocally no.

Picture this; I’m interviewing a candidate and need to discreetly check the time so we don’t run over. I find an opportune moment and glance down at my wrist. Damn. I can’t see the time. Okay, no problem, I’ll subtly move my wrist and I’m sure it’ll come on. No joy. I proceed to move my wrist anti-clockwise and lift it slightly. Joy! I can see the time. Horror! The candidate then says “Sorry, am I boring you?”. Looking at your watch is a cue that you’re either bored or have somewhere else better to be. And with the Apple Watch, you convey this every time you check the time. Not good.

After living with the Apple Watch for a few months I also began suffering phantom wrist vibrations. Any slight tinge on my wrist and I’d be checking the watch. Sound ridiculous? Ever been driving your car and felt your phone go off, only to check it and find no notifications? That’s a phantom vibration. It’s the mental mis-association of a sensation, leading you to falsely assume your phone has vibrated. Now I don’t know about you, but my jeans pockets don’t vibrate too often, so phantom vibrations relating to my phone are few and far between. The wrist, however – that’s a whole new ball game. Throughout the day your wrist gets multiple minor vibrations and many of these then present themselves as phantom vibrations. You’re left in a state where you either check your wrist falsely multiple times per hour, or you don’t check and you miss notifications (one of the selling points of the watch).

All of these frustrations boiled over recently and I did the unthinkable – I ditched the Apple Watch. I have to admit I did feel somewhat liberated, and its replacement (the Pebble Time Round) has been a dream. It’s lighter, lasts longer and is always on.

On disclosing this to other iSheep, their reactions have been twofold:

Reaction: What about the apps?!

Answer: I never used any apps apart from the stock apps (usually greeted with “oh yeah, me too”).

Reaction: I’ve been thinking about swapping too… let me know how you get on.

Despite all of the above, I wouldn’t say the Apple Watch is a terrible device. It’s clearly a version 1 product with many kinks that need to be ironed out. Its biggest problem? It’s primarily a watch. And it’s… well… a terrible watch.

You shouldn’t have priorities

Note: this article was originally published on 13th January 2016

Let’s get things straight: you cannot have priorities. How do I know this? Priority should be used as a singular term.

A priority is defined by the Oxford Dictionary as “A thing that is regarded as more important than others”. It’s therefore contradictory to have priorities, as one will ultimately be more important than the others, thus rendering the others not priorities by definition.

The word priority originated from the Latin prior, meaning first. Until the 1940s, the word priorities did not even appear in the dictionary. It’s most likely a reflection of modern society’s intemperance, where multiple things are perceived to be of equally great importance.

Disagree? Okay — grab your list of priorities. Now, imagine a world where only one of them can possibly take place, no matter what. Pick one. Just one… Got it? That’s your priority.

In any team, having priorities can cause conflict and confusion. McChesney et al. articulate this well in the book The Four Disciplines of Execution. They argue that the day-to-day whirlwind in any organisation causes confusion and consequently inefficiency. Having a priority enables teams to focus on the task that’s important, and lessens the impact of the daily whirlwind.

Imagine being clear on your priority, coming into work and not worrying about the hundreds of distracting emails and Slack messages — you know exactly what your priority is and how you’re going to execute it that day. Feels good.

In order to achieve this, McChesney et al. believe teams should have a wildly important goal, WIG for short. WIGs should take the form “do X to Y by Z”, for example “increase conversion by 5% by 31st January”. This is clear, lacks ambiguity and enables the team to focus entirely on a single thing: their priority.
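The “do X to Y by Z” template is rigid enough that you could encode it as a tiny data structure. This is purely a toy illustration of the format: the `WIG` class and its field names are my own, not from the book.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class WIG:
    """A wildly important goal in 'do X to Y by Z' form."""
    action: str     # "do X": e.g. "increase conversion"
    target: str     # "to Y": e.g. "by 5%"
    deadline: date  # "by Z": a hard date

    def __str__(self) -> str:
        return f"{self.action} {self.target} by {self.deadline:%d %B}"

wig = WIG("increase conversion", "by 5%", date(2016, 1, 31))
print(wig)  # increase conversion by 5% by 31 January
```

Notice there is exactly one of these per team; the discipline comes from the singular, not the structure.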

Organisations often struggle with this concept. As we’re now in January, many organisations are going through the process of goal setting. Your division may have anything from 5 to 20 goals for the year, and you may be able to positively influence many of them. Faced with 20 goals, though, how do you know where to start? Were there a single goal, a WIG, a priority, it’d be clear where to start, and all teams could rally around a singular focus on achieving that goal. And surely that amplification of effort leads to improved results.

Next time you’re tempted to say “priorities”, I implore you to prefix it with “my one”. Saying “my one priorities” will a) make you look like a fool, but more importantly b) cause you to swap out priorities for priority.

Is MVP on Death Row?

Note: this article was originally published on January 5th, 2016.

The true spirit of minimum viable product (MVP) has been eroded. It’s become a term dangerously bandied about by those who don’t know its true meaning. It’s time it died.

The term MVP was popularised by Eric Ries in The Lean Startup, where he defined it as a:

“… version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.”

Eric Ries

The spirit of MVP is clear: to learn the most from the least effort. With such a simple ethos, how is it that it’s been so misunderstood? Arguably, the problems with MVP arise from its phraseology and the misinterpretation of the individual words. If we deconstruct the term, its weaknesses are obvious. The definitions below are lifted straight out of the Oxford Dictionary:

Minimum: “The least or smallest amount or quantity possible”

Viable: “Capable of working successfully”

Product: “Article or substance that is manufactured”

Quite simply, people think of a product as something which is shipped to customers, ready to be used. Couple this with “minimum” and “viable”, and it’s easy to see why people now misuse the phrase and take it to mean the smallest thing that works that you can ship to customers. Key to MVP is validated learning, yet none of the three words indicates learning in any way, shape or form.

With this misunderstanding, there’s an unhealthy trend occurring that many Product Managers will attest to:


With all the derivatives the term brings, there’s a trend for an MVP to be treated as the final product. This is scary and frustrating in equal measure.

Think I’m overstating MVP’s bastardisation? A quick search on Twitter throws up these explanations of the term:

  • “the product which has just those features and no more, that allows you to ship a product…”
  • “is the most basic version of your product that still delivers your core offering”
  • “the minimum a development team can get away with shipping”

It’s clear the misuse is widespread and terminal, and it’s time for us to kill the term MVP.

What next?

Now, I am being a little facetious in saying MVP should die. I don’t necessarily think the spirit of MVP should be killed off, just the term. What next? Well, that really depends on the change you want to convey.

Minimum Viable Experiment

Had Eric Ries coined the phrase MVE, things might be very different. It’s the use of the word “Product” in MVP that leads people to think the thing is shipped, when actually it may never see the light of day. Swap “Product” for “Experiment”, however, and that barrier is removed. The Oxford Dictionary defines an experiment as a:

“procedure undertaken to make a discovery, test a hypothesis”

Combine this with the definitions of minimum and viable, and you’re left with:

“The smallest thing capable of working that tests a hypothesis”

In short, MVE = the original MVP, just without the horrendous confusion. There’s a clear reference to learning, and you’re left with the sense that this isn’t a final product.

Minimum Lovable Product

You may, however, not want to talk about experiments. In your organisation, you may be under pressure to ship products, and MVP is therefore used to convey that the product needn’t be fully featured for launch. Say hello to MLP.

Minimum Lovable Product indicates that it’s:

  1. a product that’s
  2. not fully featured that’s
  3. something customers will love

Again, this conveys much of the MVP message, but the use of the emotive term ‘love’ indicates it’s over and above the bare minimum.

This graphic by @jopas was originally designed to highlight the difference between misunderstood-MVP and actual-MVP. I actually think it highlights the difference between misunderstood-MVP and MLP:


MLP recognises that launching an MVP can produce many false positives. For your MVP, it was perfectly acceptable to have sucky error messages, right? Well, say they turn your customers off and your retention tanks. All of a sudden you’re questioning product-market fit, when in reality it’s a small part of your product that’s causing the dissatisfaction. MLP reframes what’s acceptable within a product.

If we map time vs. quality during product development, I think most people would agree with the graph below. With MVP, you’d normally prioritise the product backlog and then draw the line at “Good Enough”, with aspirations to tackle some of the “Nice to Have” tasks, although these aren’t a priority:


MLP changes this, and says “Good Enough” is not good enough, and “Nice to Have” is a must-have:


To build products that resonate with customers, these lovable moments need to be built in from day one (as an aside, it’s really difficult to go back and make a product lovable retrospectively). Otherwise, how do you know if customers will actually use your product? A product launch that is just good enough only tells you how your customers will use a product that is just good enough, not how they would actually use your product.

This isn’t to say the first version of a product must be perfect. Excellence shouldn’t be mistaken for perfectionism. I’m also not promoting building fully-fledged products for launch. MLP is in the same spirit as MVP… just lovable.


Love MLP? Check out Andrew Chen’s excellent blog post on Minimum Desirable Products here >
