
Don’t Make These AI Blunders

Entrepreneur Tom Siebel has a knack for anticipating megatrends in technology. In the 1980s, he joined Oracle during its hyper-growth phase because he saw the power of relational databases. He then went on to found Siebel Systems, which pioneered the CRM space.

But of course, Tom is far from done. So what is he focused on now? Well, his latest venture, C3, is targeting the AI (Artificial Intelligence) market. The company’s software helps with the development, deployment and operation of this technology at scale, such as across IoT environments.

“AI has huge social, economic and environmental benefits,” said Tom. “Look at the energy industry. AI will make things more reliable and safer. There’ll be little downside. Yet AI is not all goodness and light either. There are many unintended negative consequences.”

He points out some major risk factors like privacy and cybersecurity. “What we saw with Facebook and Cambridge Analytica was just a dress rehearsal,” he said.

But when it comes to AI, some of the problems may be subtle, even though the consequences can still be severe.

Here’s an example: Suppose you develop an AI system that has a 99% accuracy rate for detecting cancer. This would be impressive, right?

Not necessarily. The model could actually be way off because of low-quality data, the wrong algorithm, bias or a faulty sample. If only 1% of patients in the sample actually have cancer, for instance, a model that predicts “healthy” every time hits 99% accuracy while catching zero cases. In other words, our cancer test could potentially lead to terrible results.
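Here’s a quick sketch of that trap in Python; the patient counts are made up, and the metrics are standard scikit-learn functions:

```python
from sklearn.metrics import accuracy_score, recall_score

# Assume 1,000 patients, of whom only 10 (1%) actually have cancer.
y_true = [1] * 10 + [0] * 990   # 1 = cancer, 0 = healthy
y_pred = [0] * 1000             # a "model" that always predicts healthy

print(accuracy_score(y_true, y_pred))  # 0.99 -- looks impressive
print(recall_score(y_true, y_pred))    # 0.0  -- misses every real case
```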

“Basically, all AI to date has been trained wrong,” said Arijit Sengupta, who is the CEO of Aible.com. “This is because all AI focuses on accuracy in some form instead of optimizing the impact on various stakeholders. The benefit of predicting something correctly is never the same as the cost of making a wrong prediction.”
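In code terms, Sengupta’s point amounts to scoring a model by the net impact of its mistakes rather than by accuracy alone. Here’s a rough sketch; the dollar figures are invented for illustration:

```python
import numpy as np

# Invented costs: a missed cancer case is far more expensive than a false alarm.
COST_MISSED_CASE = 100_000   # false negative
COST_FALSE_ALARM = 1_000     # false positive (unnecessary follow-up testing)

def net_impact(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    false_negatives = np.sum((y_true == 1) & (y_pred == 0))
    false_positives = np.sum((y_true == 0) & (y_pred == 1))
    return -(false_negatives * COST_MISSED_CASE + false_positives * COST_FALSE_ALARM)

# The "always healthy" model from above: 99% accurate, yet a huge net loss.
print(net_impact([1] * 10 + [0] * 990, [0] * 1000))  # -1000000
```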

But of course, there are other nagging issues to keep in mind. Let’s take a look:

Transparency: One of the main drivers of AI innovation has been deep learning. But this process is highly complex and involves many hidden layers. This can mean that an AI model is essentially a black box.

“As algorithms become more advanced and complex, it’s becoming increasingly difficult to understand how decisions are made and correlations are found,” said Ivan Novikov, who is the CEO of Wallarm. “And because companies tend to keep proprietary AI algorithms private, there’s been a lack of scrutiny that further complicates matters. In order to address this issue of transparency, AI developers will need to strike a balance between allowing algorithms to be openly reviewed, while keeping company secrets under wraps.”
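One general technique for peeking inside a black-box model (not specific to Wallarm) is permutation importance, which measures how much performance drops when each input feature is scrambled. A sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic data stands in for a real (and possibly proprietary) dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle each feature and see how much the model's score suffers.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```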

Rigid Models: Data often evolves. This could be due to changes in preferences or even cultures. In light of this, AI models need ongoing monitoring.

“Systems should be developed to handle changes in the norm,” said Triveni Gandhi, who is a Data Scientist at Dataiku. “Whether it’s a recommendation engine, predictive maintenance system, or a fraud detection system, things change over time, so the idea of ‘normal’ behavior will continue to shift. Any AI system you build, no matter what the use case, needs to be agile enough to shift with changing norms. If it’s not, the system and its results will quickly be rendered useless.”
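A bare-bones version of that monitoring might simply compare live data against the training baseline and raise a flag when the distribution shifts. Here’s a sketch; the statistical test and threshold are illustrative choices, not Dataiku’s recipe:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_values = rng.normal(loc=0.0, scale=1.0, size=5_000)  # baseline feature
live_values = rng.normal(loc=0.4, scale=1.0, size=1_000)      # this week's feed

# Kolmogorov-Smirnov test: has the live distribution drifted from the baseline?
statistic, p_value = ks_2samp(training_values, live_values)
if p_value < 0.01:
    print(f"Drift detected (KS statistic = {statistic:.3f}); consider retraining.")
else:
    print("No significant drift detected.")
```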

Simplicity: Some problems may not need AI!

Note the following from Chris Hausler, who is a Data Science Manager at Zendesk: “One big mistake I see is using AI as the default approach for solving any problem that comes your way. AI can bring a lot of value in a number of settings, but there are also many problems where a simple heuristic will perform almost as well with a much smaller research, deployment and maintenance overhead.”
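In practice, that means benchmarking a one-line heuristic before reaching for a model. Here’s a toy sketch; the ticket-priority scenario, labels and keyword rule are all invented:

```python
from sklearn.metrics import f1_score

# Hypothetical support tickets and "urgent" labels.
tickets = ["refund not received", "love the product", "site is down again",
           "how do I reset my password", "charged twice, very angry"]
urgent = [1, 0, 1, 0, 1]

# One-line heuristic: flag a ticket as urgent if it contains a trigger keyword.
KEYWORDS = ("refund", "down", "charged", "angry")
heuristic_pred = [int(any(k in t for k in KEYWORDS)) for t in tickets]

# If the heuristic already scores well, a model may not be worth the overhead.
print(f1_score(urgent, heuristic_pred))
```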

Data: No doubt, you need quality data. But this is just the minimum. The fact is that data practices can easily go off the rails.

“Data quality is important but so is data purity,” said Dan Olley, who is the Global EVP and CTO of Elsevier, a division of RELX Group. “We all know dirty data can lead to poor or inaccurate models. Data acquisition, ingestion and cleansing is the hidden key to any AI system. However, be careful that in putting the data into the structures you need for one purpose, you aren’t destroying patterns in the raw data you didn’t know existed. These patterns could be useful later on for different insights. Always keep a copy of all raw data, errors and all.”
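A minimal sketch of that advice in code: archive the raw feed untouched before any cleansing runs. The paths and record fields below are illustrative, not Elsevier’s pipeline:

```python
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

RAW_ARCHIVE = Path("data/raw")   # immutable copies, errors and all
CLEAN_DIR = Path("data/clean")   # cleansed structures for today's purpose

def ingest(source_file: Path) -> None:
    RAW_ARCHIVE.mkdir(parents=True, exist_ok=True)
    CLEAN_DIR.mkdir(parents=True, exist_ok=True)

    # 1. Archive the raw file exactly as it arrived, before touching it.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    shutil.copy2(source_file, RAW_ARCHIVE / f"{stamp}_{source_file.name}")

    # 2. Cleanse into the structure needed now; bad records are skipped here,
    #    but they still exist in the archive for future, different analyses.
    cleaned = []
    for line in source_file.read_text().splitlines():
        try:
            record = json.loads(line)
            cleaned.append({"id": record["id"], "value": float(record["value"])})
        except (json.JSONDecodeError, KeyError, ValueError):
            continue
    (CLEAN_DIR / f"{stamp}_{source_file.stem}.json").write_text(json.dumps(cleaned))
```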
