
When ML and AI Fail, Look at the People Before the Technology

It’s not always the technology’s fault when ML and AI fail. Here are three human-driven factors that can hamper their success.

These are exciting times for machine learning and artificial intelligence, with advances in deep learning, autonomous driving, speech recognition and other areas arriving nearly every week. At the same time, hurdles remain: enterprises that don’t know how to bring these technologies onboard, and government bodies that struggle to rein in a field that is moving faster than they can keep up.

There are big problems to tackle as ML and AI advance, including the ethical implications and the energy costs. These problems are already keeping promising endeavors from accomplishing their goals, or even from getting off the ground.

But sometimes, the determining factor between the success and failure of an ML or AI project is not the tech; it’s the people. Here are three ways that the human touch can be a bad thing.


Unrealistic Expectations

AI is a tech buzzword, and the inevitable result of the excitement surrounding AI models is that some people – even those in the tech industry – will overestimate what is currently possible or feasible.

Artificial intelligence is often impressive and sometimes groundbreaking, but it is not magic. Machine learning models need vast amounts of data, properly prepared, to learn, which itself takes time and resources. Even when all of that is present, the outcome is not always as hoped.

The potential in AI is seemingly limitless, said AJ Abdallat, CEO of Beyond Limits, but if AI can’t explain itself, it’s essentially useless. AI that can both arrive at and explain its answers is driving a lot of excitement, Abdallat said, but that’s not where a lot of models are yet – and enterprises are struggling to get there.

“Conventional AI systems are what we refer to as black box implementations, where systems are trained based on data but cannot explain how they got the answer,” he said. The potential is huge, but organizations expecting explainable AI but getting something else are likely to be disappointed – and maybe, unfortunately, will not stick around for the valuable business use cases that may be down the road.
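Abdallat’s black-box-versus-explainable distinction can be sketched with a toy example: a simple linear scorer is “explainable” in the narrow sense that its output decomposes into per-feature contributions a person can inspect, which a black-box system does not offer. The feature names and weights below are entirely hypothetical, not anything from Beyond Limits.

```python
# Toy sketch of "explainable" scoring: every prediction can be broken
# down into per-feature contributions. All weights and feature names
# are hypothetical illustrations.
WEIGHTS = {"income": 0.5, "debt": -0.8, "tenure_years": 0.3}

def score(applicant):
    """Return a score plus the contribution each feature made to it."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature]
        for feature in WEIGHTS
    }
    return sum(contributions.values()), contributions

total, why = score({"income": 4.0, "debt": 1.0, "tenure_years": 2.0})
print(total)  # roughly 1.8 = 0.5*4.0 - 0.8*1.0 + 0.3*2.0
print(why)    # shows which feature pushed the score up or down
```

A black-box model would return only `total`; the `why` dictionary is what lets the system “explain how it got the answer.”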


Problems With the Data

In so many respects, AI and ML are about data – not just the huge amounts of data needed to train the models, but also the data required to drive management and business decisions in tech and other fields.

“I think that as other companies and other competitors get more and more data-driven, companies that aren't increasing their data skills and making decisions with data may get left behind, because the quality of their decisions just becomes worse and worse relative to other companies,” said founder and CEO Vik Paruchuri. “That's a management problem.”

The quality, not just the quantity, of the data behind ML and AI models matters. “AI algorithms require consistency in order for them to pattern recognize,” said Abhinav Somani, CEO of Leverton. “If the same answer is given in too many different ways, then the AI might not adequately learn a proper response. The more structured the data, the better the AI outcomes.” A proper input and data strategy, post-data cleansing and pristine data selection can all improve those outcomes, Somani said.
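Somani’s consistency point can be illustrated with a minimal sketch: before training, the “same answer given in too many different ways” is collapsed to one canonical form. The label variants and mapping below are hypothetical, not from Leverton.

```python
# Toy data-cleansing step: collapse many spellings of the same answer
# into a canonical label before training. Mapping is hypothetical.
CANONICAL = {
    "y": "yes", "yes": "yes", "yeah": "yes",
    "n": "no", "no": "no", "nope": "no",
}

def normalize(label):
    """Map a raw label variant to its canonical form."""
    return CANONICAL.get(label.strip().lower(), "unknown")

raw = ["Yes", " y ", "YEAH", "No", "nope", "maybe"]
clean = [normalize(label) for label in raw]
print(clean)  # ['yes', 'yes', 'yes', 'no', 'no', 'unknown']
```

Six surface forms become two consistent classes (plus an explicit “unknown” bucket), which is the kind of structure that lets a model pattern-recognize.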

Even AI models built by the best in the field, with ample resources at their disposal, don’t always work out. That’s often because of the bias baked into the model, a bias that makes plain that, at the end of the day, humans, with all their fallibilities, build these frameworks.

“The AI algorithms are only as good as the quality of data that gets put into them,” said Somani. “In fact, deep learning AI methods mimic human behavior, and thus, if the human behavior is flawed, then so will be the AI output.”
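One toy way to see that flawed data yields flawed output: a trivial “model” that just learns the majority label will faithfully reproduce whatever skew its training data contains. The labels below are invented for illustration.

```python
from collections import Counter

# Hypothetical skewed training data: 9 approvals for every denial.
biased_training = ["approve"] * 9 + ["deny"] * 1

def fit_majority(labels):
    """A trivial 'model' that learns only the most common label."""
    return Counter(labels).most_common(1)[0][0]

model = fit_majority(biased_training)
print(model)  # 'approve' -- the skew in the data becomes the model's answer
```

Nothing in the algorithm is broken; the bias arrives entirely through the data humans chose to feed it.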


Too Few Experts

The interest in ML and AI is there, but there is a shortage of experts who can do the work required to really push the technology forward.

“When it comes to hiring people who know how to build artificial intelligence systems, the competition is intense, especially given that ML and AI are facing a talent pool shortage,” said Mateusz Opala, machine learning tech lead at Netguru.

This is an issue that is affecting the whole world, not just North America. In 2017, Tencent, a major Chinese tech company, estimated there were only 300,000 researchers and practitioners in AI around the world, and Element AI estimated that only 10,000 people on the planet have the necessary skills for serious AI research. Opala pointed out that Yoshua Bengio, one of the fathers of modern deep learning, says the ML talent pool ranges from 22,000 to 90,000 people, depending on the constraints.

“Many of these people don’t have computer science degrees and lack software development skills,” Opala said. “At the same time, software developers tend to lack the scientific rigor required for running machine learning experiments. As a result, we are facing a reproducibility crisis in machine learning.”
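The reproducibility problem Opala describes often starts with details as small as random seeds. A minimal sketch, assuming nothing beyond the Python standard library: a seeded run of a toy “experiment” always returns the same result, whereas an unseeded one generally would not.

```python
import random

def run_experiment(seed):
    """Toy 'experiment': average of random draws, made repeatable
    by using an isolated, explicitly seeded RNG."""
    rng = random.Random(seed)
    sample = [rng.random() for _ in range(5)]
    return sum(sample) / len(sample)  # stand-in for a reported metric

a = run_experiment(seed=42)
b = run_experiment(seed=42)
print(a == b)  # True: same seed, same result, so the run can be reproduced
```

Real ML experiments add many more sources of nondeterminism (data order, hardware, library versions), but recording and fixing seeds is the first habit of the scientific rigor Opala is pointing at.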

That’s good news financially for those who do have that expertise, but bad news for the technology overall. With bigger players able to offer the high salaries true AI experts can get, the lack of talent could hamper the efforts of smaller companies.
