AI (Artificial Intelligence) has experienced several periods of severe funding cuts and waning interest, such as during the 1970s and 1980s. These were called “AI winters,” a reference to the concept of nuclear winter, in which the sun is blocked by a layer of smoke and dust!
But of course, things are much different nowadays. AI is one of the hottest categories of tech and is a strategic priority for companies like Facebook, Google, Microsoft and many others.
Yet could we be facing another winter? Have things gone too far? Well, it’s really tough to tell. Keep in mind that prior AI winters were a reaction to grandiose promises that did not come to fruition.
But as of now, we are seeing many innovations and breakthroughs that are impacting diverse industries. VCs are also writing large checks to fund startups, while mega tech companies have been ramping up their M&A.
Simply put, there are few signs of a slowdown.
A New Kind of AI Winter?
History doesn’t repeat itself, but it often rhymes, as the saying attributed to Mark Twain goes. And this may be the case with AI. That is, we could be seeing a new type of winter: one where society is negatively affected in subtle ways over a prolonged period of time.
According to Alex Wong, chief scientist and co-founder of DarwinAI, the problem of bias in models is a major part of this. For example, AI is being leveraged in hiring, where systems screen candidates by learning from large numbers of past resumes.
“While this approach might seem data driven and thus objective, there are significant gender and cultural biases in these past hiring practices, which are then learned by the AI in the same way a child can pick up historical biases from what they are taught,” said Alex. “Without deeper investigation, the system will start making biased and discriminatory decisions that can have a negative societal impact and create greater inequality when released into companies.”
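To make the mechanism concrete, here is a minimal sketch on synthetic data (an illustration, not DarwinAI’s work): a classifier trained on historical hiring decisions that favored one group will reproduce that preference even for equally skilled candidates.

```python
# A synthetic illustration of a model absorbing bias from its
# training labels. All data here is made up for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two inputs: a skill score (what *should* drive hiring) and a
# protected group attribute (0 or 1).
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical labels: past recruiters favored group 0, so the
# "ground truth" we train on is already biased.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Compare predictions for equally skilled candidates from each group.
test_skill = np.zeros(100)
for g in (0, 1):
    X_test = np.column_stack([test_skill, np.full(100, g)])
    prob = model.predict_proba(X_test)[:, 1].mean()
    print(f"group {g}: mean predicted hire probability {prob:.2f}")
```

On this toy data, the model assigns a markedly higher hire probability to group 0, despite the candidates’ skill scores being identical.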
But this is not a one-off. Bias is a pervasive, corrosive problem that often goes undetected. A big reason is the culture of the AI community, which is more focused on striving for higher accuracy rates than on the broader societal impact.
Another factor is that AI tools are becoming more widespread and are often free to use. So as inexperienced people build models, there is a higher likelihood that we’ll see even more bias in the outcomes.
What To Do?
Explainability is about understanding how AI models arrive at their outputs. True, this is difficult because deep learning systems can be black boxes. But there are creative ways to deal with the problem.
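There is no single recipe here, but one widely used, model-agnostic tactic is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. A brief sketch with scikit-learn, where the dataset and model are illustrative stand-ins:

```python
# A sketch of permutation importance, a model-agnostic explainability
# check. The data and model here are stand-ins for demonstration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy;
# large drops mark the features the model actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Because the technique only needs the model’s predictions, it works even when the model itself is a black box.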
Consider the following from Christian Beedgen, who is the co-founder and CTO of Sumo Logic:
“Early in our development of Sumo Logic, we built a generic and unsupervised anomaly detection system to track how often different classifications of logs appeared, with the idea that this would help us spot interesting trends and outliers. Once implemented, however, we found that — despite a variety of approaches — the results generated by our advanced anomaly detection algorithms simply were not meaningfully explainable to our users. We realized that the results of sophisticated algorithms don’t matter if humans can’t figure out what they mean. Since then, we’ve focused on narrower problem states to create fundamentally simpler — and therefore more useful — predictive machinery.”
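In that spirit, here is a minimal sketch of what a narrow, easily explained detector can look like (an assumption for illustration, not Sumo Logic’s actual implementation): flag a log category whenever its count jumps far outside its recent history.

```python
# An assumed sketch of a narrow, explainable anomaly detector: flag
# a log category when its per-minute count deviates sharply from a
# rolling average. Not Sumo Logic's actual system.
from collections import deque

def make_spike_detector(window=5, threshold=3.0):
    """Return a checker that flags counts far from the rolling mean."""
    history = deque(maxlen=window)

    def check(count):
        anomalous, score = False, 0.0
        if len(history) == window:  # need a full window of history
            mean = sum(history) / window
            var = sum((c - mean) ** 2 for c in history) / window
            std = var ** 0.5 or 1.0  # guard against zero variance
            score = (count - mean) / std
            anomalous = abs(score) > threshold
        history.append(count)
        return anomalous, score

    return check

# Per-minute counts for one log category; the last value is a spike.
detector = make_spike_detector()
for minute, count in enumerate([12, 9, 11, 10, 13, 11, 10, 12, 95]):
    flag, score = detector(count)
    if flag:
        print(f"minute {minute}: count {count} is {score:.1f} "
              "standard deviations from its recent average")
```

The appeal of something this simple is that the result is trivially explainable to a user: this category spiked to many standard deviations above its recent average.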
It was a tough lesson, but it wound up being critical for the company, as the products became much stronger and more useful. Sumo Logic has gone on to raise $230 million and is one of the top players in its space.
Going forward, the AI industry needs to be much more proactive, treating fairness, accountability and transparency with urgency. One way to help this along would be to build features into AI platforms that provide insights on explainability as well as bias. Even having old-school ethics boards is a good option. After all, they are common in university research.
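As a concrete example of what such a built-in bias check might look like, here is a hypothetical sketch of demographic parity, one of the simplest fairness signals: compare a model’s positive-outcome rates across groups. The data and function names are illustrative.

```python
# A hypothetical sketch of a platform bias check: demographic parity,
# i.e. the gap in positive-outcome rates between groups.
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate across groups."""
    rates = {g: predictions[groups == g].mean()
             for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Made-up model decisions (1 = hire) and group memberships.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a",
                   "b", "b", "b", "b", "b"])

gap, rates = demographic_parity_gap(preds, groups)
print(f"per-group positive rates: {rates}, parity gap: {gap:.2f}")
```

A large gap does not prove discrimination on its own, but surfacing the number by default would at least force builders to look at it.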
“AI-influenced decisions that result in discrimination and inequality, when left unchecked by humans, can lead to a loss of trust in AI, which can in turn hinder its widespread adoption, especially in the sectors that could truly benefit,” said Alex. “We should not only strive to improve the transparency and interpretability of deployed AI systems, but educate those who build these systems and make them aware of fairness, data leakage, creator accountability, and inherent bias issues. The goal is not only highly effective AI systems, but also ones that society can trust.”