Stop asking AI researchers to build products, please.

Mehdi Merai, Ph.D (c)
5 min read · Aug 14, 2019

Artificial intelligence, like the internet itself, was originally incubated inside university and government labs. After the AI winter of the 1990s, an AI tsunami disrupted nearly every aspect of organizations and everyday life at light speed. From the accuracy of deep neural networks to GANs, modern AI achievements are astonishing. Every couple of weeks, the media report new machine learning demonstrations even more impressive than the last (GANs, deep RL, etc.). This effervescence has created a massive number of hungry organizations hoping to lead this multipurpose technology and become the “AI leader”, or at least an AI-ready institution. Meanwhile, almost all of the impressive AI models are still incubating inside a university lab, hosted in a researcher’s Git repos. These AI capabilities are demos, most often limited to a specific development sandbox where every component is controlled and structured.

In the real world, however, the environment is full of exceptions and very demanding users. When it comes to applying artificial intelligence there, the AI that “does everything” and can “learn by itself” becomes tricky to demonstrate outside of research posters. This is normal, since most AI models are still in the incubation phase, under the radar of motivated researchers. What is problematic is the massive chasm between investors’ expectations and the narratives of AI researchers. Pressured investors and hungry R&D entities may fall into the following traps:

Timing research progress is fundamentally wrong


Research entities have a very different perspective on timing and milestones. It is particularly misguided to schedule scientific discovery, or the progress of AI’s predictive capabilities, linearly or simply by extrapolating historical progress: “By 2030, AI will do this and that.” These hazardous plans are unfortunately frequent, and they can mislead the investment market with unfounded predictions. This error is not new in the research ecosystem. Breakthroughs can arrive later, or even earlier, than we expect. For example, in the early 2000s, the neural network capabilities available today were attributed to a very distant future; in reality, we reached them much faster than expected. The same could happen in reverse! For that reason, classic investment frameworks, where scaling is fast and the timeline is sharp and aggressive, could deceive more than one venture capitalist.

AI is real. When it comes to R&D, however, it is hard to time its progress using VC and CFO frameworks. Because of the size of the hype and the opportunity, several research companies have adopted a startup narrative to better fit VC and investor boxes. They promote startup roadmaps (bold and aggressive) as if future scientific discoveries could be easily and accurately forecast. In reality, investing in research can increase the chance of tackling scientific challenges, but it cannot be timed like an execution process.

AI researchers are not a premium version of engineers


A very common misunderstanding in the AI market is the confusion between research and engineering. We can get a sense of this error from job listings and LinkedIn titles. Research is commonly associated with Ph.D.s, and Ph.D.s are commonly associated with top talent. Because of this HR spaghetti, startups and organizations seeking AI readiness hire researchers, believing they are basically an advanced version of engineers with some linear algebra knowledge. This is fundamentally wrong, because AI researchers are not necessarily trained to solve many of the complex problems organizations face in the market. When you need an operational solution, ready for daily use, you need engineers, IT people, and infrastructure. AI researchers are trained to get you the best result ever, yet most organizations don’t need state-of-the-art accuracy. They would prefer stable, available applications over the best algorithm, which will die in a Git repo.

The truth should be said: most organizations are investing in AI proofs of concept, yet few of those projects ever reach the deployment stage. The reason is simple: organizations expect researchers to build operational systems with minimal consideration of the Herculean engineering effort involved in turning an AI demo into a real, market-ready application. To build a product, you need an intensive relationship with the user and a deep understanding of the usage context. Researchers spend 10x more time discussing research with other researchers than validating a market with end users. It’s simply not their role to do that!

Underestimating the transition friction and cost


Researchers have their own way of measuring success. What impresses a NeurIPS audience can be received very coldly inside organizations. A common statement I have heard more than once in the AI research ecosystem is that business people can’t understand the value of this or that achievement. In reality, business people can understand research work very well, but only if it is clearly explained, and particularly if they see a reason to care. AI presents amazing opportunities for organizations; however, it can’t skip the attraction rule of business IT, where a solution needs analyst advocacy and user adoption.

Mehdi Merai, CEO @dataperformers


Partner @Deloitte (AI / Disruptive Tech Leader), Ex-founder (Exit 2021), PhD. (c) @Artificial Intelligence