AI and ChatGPT have been the buzzwords lately. Whether you love it or hate it, you can’t escape it (kind of like Miley’s new song, but that’s another topic). Two questions keep coming up: “Is AI going to take my job?” and “Is AI going to replace me?”
While there are probably already numerous views on this, there is no certain answer, and I’m not here to give one. I’m here to argue why, in my opinion, strategy should be left in the hands of (product) strategists, consultants, and managers – in other words, humans.
I explore the elements of strategy that rely on insights rather than just facts, and the reasoning behind the frameworks we use. By showcasing some of Q’s processes and ways of working, I explain why strategy cannot be boxed into a set of instructions or logic formulas, and how we can still use AI tools to help us.
Strategy ≠ framework
“A new strategy for a client is about creating a new reality for a sector”
– Strategy (2021), K. Safarova
As such, there is no right or wrong way to do it. And there is definitely no one way to do it. Strategy is not straightforward or linear. As new insights arise, it requires multiple back-and-forth efforts, revisions, and updates. And, unlike the logic behind AI, there is no textbook or formula on how to do it.
Therefore, it cannot be boxed into a single framework and reused across clients. It is tailor-made for each client, business, or industry. At Q, we carefully design each strategy project around the client’s needs and the problem at hand, so the output and deliverables differ from project to project.
Frameworks can give an illusion that all you have to do is fill in some boxes, and the answer will magically appear. The reality is far from it. It’s about adapting existing frameworks in a meaningful way – for you – or even creating new ones.
One widespread and well-known framework is user personas. At Q, we steer away from it and focus on (unmet) user needs instead. Here’s why and how:
That being said, I’m not suggesting that frameworks should not be used or that they cannot be helpful. They are only effective when they encourage critical thinking. The same can be said for ChatGPT.
Insights are messy
Strategy comes from insights. And insights come from users, stakeholders, the market, third-party documents, reports, the business, and so on. In other words, there is no single source of truth.
The image above illustrates our way of coming up with insights that lead us to main opportunities and concepts. It’s a dynamic and ongoing engagement that includes some straightforward activities, such as research preparation, conducting research, generating first insights, or analyzing the data.
The image itself doesn’t show what happens in the background and what drives it. Essentially, it’s the subtleties that cannot be put on paper or written as instructions. It’s what comes from experience, learning from mistakes, and building rapport.
These include, but are not limited to:
- Iteration: the first version is never the final version. New insights and feedback arise throughout the project, so the findings are constantly updated.
- Adaptation: even though we can follow similar or the same steps, the insights will never be the same for different clients. There is no one-solution-fits-all.
- Gut feeling: the things you hear in an interview that grab your attention and stay with you afterwards are usually the most important.
- Repetition: the most important insights tend to come up again and again with different participants and across different reports – usually subtly, and phrased differently each time.
- Importance of feedback: we don’t work in isolation; the client’s feedback is valued even with work in progress. It can come in any form and at any point during the project.
- Knowing where to look: often, we encounter large volumes of documents, existing ideas, and materials dating years back. While a quick and easy way to analyze everything would be useful, not all data is equally important.
- Different points of view: with usually two or more product strategists working on a project, along with other team members (e.g. UX/UI designers), everyone has a different approach. Creating unique propositions is the beauty of having different minds working on the same problem.
- (Gradually) inferring: as the image above shows, identifying opportunities or drawing conclusions often happens gradually. It is a journey shaped by all of the points above – conclusions are not reached at a single point in time but continuously.
“ChatGPT beautifully demonstrates how knowing and thinking are not the same thing. Knowing is committing facts to memory; thinking is applying reason to those facts.” – LA Times
Tools like ChatGPT and their knowledge bases can help minimize the streetlight effect, a bias common in research. It refers to looking for answers where they are easiest to find rather than where they are most likely to be. In practice, it could mean not performing complete and exhaustive research due to a lack of time or resources. Given the right instructions, AI could be used to counter this bias and surface the facts needed for insight creation. Of course, since tools like ChatGPT are still in their early days, it is important to be careful about which facts are right and which information might be incorrect or biased.
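To make “the right instructions” a little more concrete, here is a minimal sketch of one way this could look in research preparation: prompting a model to suggest sources and stakeholder groups beyond the obvious ones. It assumes the openai Python SDK; the model name, research question, and prompt wording are placeholders for illustration, not a recipe we follow.

```python
# A rough sketch (not our actual process): asking an LLM for research sources
# beyond the obvious ones, as one way to counter the streetlight effect.
# Assumes the openai Python SDK (v1+) and an OPENAI_API_KEY in the environment;
# the model name and research question below are placeholders.
from openai import OpenAI

client = OpenAI()

research_question = "How do small-business owners choose accounting software?"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder - use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You help researchers avoid the streetlight effect. "
                "Given a research question, list data sources and stakeholder "
                "groups that are easy to overlook, explain what each one adds, "
                "and flag anything you are unsure about."
            ),
        },
        {"role": "user", "content": research_question},
    ],
)

print(response.choices[0].message.content)
```

Even then, the output is only a starting point for the strategist to verify, not a substitute for doing the research.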
Predictably irrational
In his book of the same title, through a series of real-life examples and extensive research, Dan Ariely shows how we humans often behave far less rationally than we think. But the good thing is, once we understand where these decisions come from, we can revise our thinking and use technology to design products that overcome these shortcomings.
One thing common to all our strategy projects is the people. The users, the client (the stakeholders), and us, the product strategists leading and working on the project. And if we’re all irrational, how do we tackle it?
We take a human-centered approach in everything we do, from user research to ideation, user validation, and testing. The way users behave, what they think, and how they use your product can only be observed in direct interaction with them. Think back to user needs vs. user personas presented earlier.
We, as product strategists, are people working on clients’ projects and, as such, don’t rely purely on data and facts.
“Get comfortable using business judgment and logic to narrow the scope of the problem” – Strategy (2021), K. Safarova
Business judgment comes from experience: from working on (similar) projects, from picking up subtle, non-verbal signals from users during interviews, and from conversations with a client over a business lunch. It’s a case of learning by doing.
Research by Yalcin et al. (2022) examines how customers react to their bank loan being approved by an algorithm versus a human. When the application is approved by a human, customers evaluate the company more positively, because they associate the favorable outcome more with themselves and their personal traits. The same research suggests that despite algorithms’ cost efficiency and predictive accuracy, there is a significant upside to having humans continue to make customer-facing decisions. Therefore, as a client-facing company working on client projects, we will continue to rely on our people to deliver projects, but embrace new ways to improve.
Allies, not enemies
Lately, it’s been difficult to avoid articles and news starting with “unlock the power of AI for your business” or “AI tools you need in *insert profession*.” But that’s the thing: AI should be just that – a tool. And, like any tool, it’s only as powerful as the person using it and how they use it. It will be interesting to follow these tools as they develop and become more sophisticated and widespread.
Or, as Miley would put it: I can buy myself flowers, write my research questions, talk to users for hours, and say things stakeholders will understand. But that doesn’t mean I couldn’t use someone else’s hand. Sometimes, that hand can be virtual.
So let’s try to be allies, not enemies. But be careful not to undervalue yourself by assuming you are (easily) replaceable.
“Like any tool, GPT is an enemy of thinking only if we fail to find ways to make it our ally.” – LA Times