You need to stop doing this on your AI projects

It’s easy to get excited about AI projects, especially when you hear about all the amazing things people are doing with AI: conversational and natural language processing (NLP) systems, image recognition, autonomous systems, predictive analytics, and pattern and anomaly detection. But when people get excited about AI projects, they tend to overlook some significant red flags, and those red flags are what cause over 80% of AI projects to fail.

One of the biggest reasons why AI projects fail is that companies do not justify the use of AI from a return on investment (ROI) perspective. Simply put, the projects are not worth the time and expense given the cost, complexity and difficulty of implementing AI systems.

Organizations rush past the exploration phase of AI adoption and jump from simple proof-of-concept “demos” straight to production without first assessing whether the solution will yield any positive returns. A big reason for this is that measuring an AI project’s ROI can prove more difficult than first expected. Too often, teams are pressured by senior management, colleagues or external teams to just get started with AI efforts, and projects move forward without a clear answer to the problem they’re actually trying to solve or the ROI they expect to see. When companies struggle to develop a clear understanding of what to expect from the ROI of AI, misalignment of expectations is the inevitable result.

Missing and misaligned ROI expectations

So, what happens when the ROI of an AI project doesn’t line up with management expectations? One of the most common reasons why AI projects fail is that the return is not justified by the investment of money, resources and time. If you’re going to spend time, effort, human resources, and money implementing an AI system, you want to get a well-identified positive return.

Even worse than a misaligned ROI is the fact that many organizations don’t measure or quantify ROI to begin with. ROI can be measured in a number of ways: as a financial return, such as generating revenue or reducing expenses, but also as a return on time, shifting or reallocating critical resources, improving reliability and safety, reducing errors and improving quality control, or improving security and compliance. It’s easy to see how an AI project can provide a positive ROI: if you spend a hundred thousand dollars on an AI project to eliminate two million dollars of potential costs or liabilities, then every dollar spent returns twenty dollars in avoided liability. But you’ll only see that return if you actually plan ahead and manage it.
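The liability example above boils down to a trivial calculation. The sketch below is illustrative only: the function name is made up, and the figures ($100,000 spent, $2,000,000 of avoided liability) are taken from the scenario in the text.

```python
# Hedged sketch: the ROI multiple for the scenario described above.
# The function name is illustrative, not from any real framework.

def roi_multiple(cost: float, value_returned: float) -> float:
    """Return the value generated (or liability avoided) per dollar spent."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return value_returned / cost

# A $100,000 AI project that eliminates $2,000,000 of potential liability:
multiple = roi_multiple(100_000, 2_000_000)
print(f"Each dollar spent returns ${multiple:.0f} in avoided liability")  # 20
```

The same function works for any of the other returns mentioned (time saved, errors reduced), as long as you can express the return in dollar terms, which is exactly the measurement step so many teams skip.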

Management guru Peter Drucker once famously said, “you can’t manage what you don’t measure.” The act of measuring and managing AI ROI is what separates those who see positive value from AI from those who end up canceling their projects after years of effort and millions of dollars spent.

Boiling the ocean and biting off more than you can chew

Another big reason companies don’t see the ROI they expect is that projects try to bite off too much at once. Iterative, agile best practices, especially those embodied in AI-focused methodologies like CPMAI, clearly advise project owners to “think big. Start small. Iterate often.” Unfortunately, many failed AI implementations took the opposite approach: thinking big, starting big and iterating rarely. One example is Walmart’s investment in AI-powered robots for inventory management. In 2017, Walmart invested in robots to scan store shelves, and by 2022 it had pulled them out of stores.

Walmart obviously had enough resources and smart people, so you can’t blame bad people or bad technology. Rather, the main problem was a poor fit between the solution and the problem. Walmart realized that it was simply cheaper and easier to have the human employees already working in its stores complete the same tasks the robots were supposed to do. Another example of a project that did not produce the expected results can be found in the various uses of the Pepper robot in supermarkets, museums and tourist areas. Better people or better technology would not have solved this problem; what was needed was a better approach to managing and evaluating AI projects. Methodology, folks.

Adopt a step-by-step approach to running AI and machine learning projects

Were these companies caught up in the hype of the technology? Were they just looking to have a robot roaming the halls for the “cool” factor? Being cool doesn’t solve any real business problem or address a pain point. Don’t do AI for AI’s sake. If you do AI just for AI’s sake, don’t be surprised when you don’t see a positive ROI.

So, what can companies do to ensure positive returns for their projects? First, stop implementing AI projects for AI’s sake. Successful companies adopt a step-by-step approach to running AI and machine learning projects. As mentioned earlier, methodology is often the missing secret sauce to successful AI projects. Organizations are now seeing benefits in using approaches like the Cognitive Project Management for AI (CPMAI) methodology, building on decades-old data-centric project approaches like CRISP-DM and incorporating established best-practice agile approaches to ensure short, iterative project sprints.

These approaches all start with the business user and their requirements in mind. The very first step of CRISP-DM, CPMAI and even agile is figuring out whether you should go ahead with an AI project at all. These methods acknowledge that alternative approaches, such as automation, straightforward programming, or even simply more people, may be more appropriate for solving the problem.

The “AI Go No Go” Analysis

If AI is the right solution, make sure you can answer “yes” to a number of questions that assess whether you’re ready to embark on your AI project. This set of questions is called the “AI Go No Go” analysis, and it forms part of the very first phase of the CPMAI methodology. The analysis asks nine questions in three general categories. For an AI project to actually move forward, you need three things in alignment: business feasibility, data feasibility, and technology/implementation capability. The first category addresses business feasibility: is there a clear problem definition, is the organization actually willing to adopt the change once it is built, and is there sufficient ROI or impact?

These may seem like very basic questions, but all too often they are skipped over. The second set of questions deals with data, including data quality, data quantity and data access. The third set deals with implementation: whether you have the right team and skill sets, whether you can execute the model as needed, and whether the model can be used where it is intended.
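As a rough illustration, the nine questions across the three categories described above can be captured in a simple checklist where a single “no” anywhere blocks the project. The question wording is paraphrased from this article, and the data structure is purely illustrative, not part of the CPMAI methodology itself.

```python
# Hedged sketch of the "AI Go No Go" analysis described above: nine yes/no
# questions in three categories (business, data, implementation).
# Wording is paraphrased; the structure is illustrative only.

GO_NO_GO_QUESTIONS = {
    "business": [
        "Is there a clear problem definition?",
        "Is the organization willing to adopt the change once it is built?",
        "Is there sufficient ROI or impact?",
    ],
    "data": [
        "Is the data quality sufficient?",
        "Is there enough data?",
        "Do we have access to the data?",
    ],
    "implementation": [
        "Do we have the right team and skill sets?",
        "Can we execute the model as needed?",
        "Can the model be used where it is intended?",
    ],
}

def go_no_go(answers: dict[str, list[bool]]) -> bool:
    """A single 'no' in any category means the project should not proceed yet."""
    return all(all(category) for category in answers.values())
```

Note that the decision rule is deliberately strict: it is an `and` across all nine answers, mirroring the article’s point that one honest “no” is enough to pause the project.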

The hardest part of asking these questions is being honest with the answers. If you answer “no” to one or more of them, it means that you are either not ready to proceed yet, or that you should not proceed at all. Don’t just plow on and do it anyway; if you do, don’t be surprised when you’ve wasted a lot of time, energy and resources without getting the returns you were hoping for.
