CES: The Coolest AI (Artificial Intelligence) Announcements

As seen at this week’s CES 2020 mega conference, the buzz for AI continues to be intense. Here are just a few comments from the attendees:

  • Nichole Jordan, who is Grant Thornton’s Central region managing partner: “From AI-powered agriculture equipment to emotion-sensing technology, walking the exhibit floors at CES drives home the fact that artificial intelligence is no longer a vision of the future. It is here today and is clearly going to be more integrated into our world going forward.”
  • Derek Kennedy, the Senior Partner and Global Technology Leader at Boston Consulting Group: “AI is increasingly playing a role in every intelligent product, such as upscaling video signals for an 8K TV as well as every business process, like predicting consumer demand for a new product.”
  • Houman Haghighi, the Business Development Partner at Menlo Ventures: “Voice, natural language and predictive actions are continuing to become the new—and sometimes the only—user interface within the home, automobile, and workplace.”

So what were some of the standout announcements at CES? Well, given that there were over 4,500 exhibitors, this is a tough question to answer. But here are some innovations that certainly show the power of AI:

Prosthetics: Using AI along with EMG technology, BrainCo has built a prosthetic arm that learns. In fact, it allows people to play the piano or even do calligraphy. 

“This is an electronic device that allows you to control the movements of an artificial arm with the power of thought alone,” said Max Babych, who is the CEO of SpdLoad.

The prosthetic is quite affordable at about $10,000, compared to about $100,000 for alternatives. 

SelfieType: One of the nagging frictions of smartphones is the keyboard. But Samsung has a solution: SelfieType. It leverages cameras and AI to create a virtual keyboard on a surface (such as a table) that learns from hand movements. 

“This was my favorite and simplest AI use case at CES,” said R. Mordecai, who is the Head of Innovation and Partnerships at INNOCEAN USA. “I wish I had it for the flight home so I could type this on the plane tray.”

Lululab’s Lumine: This is a smart mirror that is for skin care. Lumine uses deep learning to analyze six categories–wrinkles, pigment, redness, pores, sebum and trouble–and then recommends products to help.

Whisk: This is powered by AI to scan the contents of your fridge and think up creative dishes to cook (it is based on research from over 100 nutritionists, food scientists, engineers and retailers). Not only does this technology allow for a much better experience, but it should also help reduce food waste. Keep in mind that the average person throws away 238 pounds of food every year. 

Wiser: Developed by Schneider Electric, this is a small device that you install in your home’s circuit breaker box. With the use of machine learning, you can get real-time monitoring of usage by appliance, which can lead to money savings and optimization for a solar system.

Vital Signs Monitoring: The Binah.ai app analyzes a person’s face to get medical-grade insights, such as oxygen saturation, respiration rate, heart rate variability and mental stress. The company also plans to add monitoring for hemoglobin levels and blood pressure.

Neon: This is a virtual assistant that looks like a real person, engages in intelligent conversation and shows emotion. While still in the early stages, the technology is actually kind of scary. The creator of Neon, the Samsung-backed Star Labs, thinks that it will replace doctors, lawyers and other white-collar professionals. No doubt, this appears to be a recipe for wide-scale unemployment, not to mention a way to unleash a torrent of deepfakes!

AI (Artificial Intelligence): What’s The Next Frontier For Healthcare?

Perhaps one of the biggest opportunities for AI (Artificial Intelligence) is the healthcare industry. According to ReportLinker, spending on this category is forecast to jump from $2.1 billion to $36.1 billion by 2025. This is a hefty 50.2% compound annual growth rate (CAGR).

So then what are some of the trends that look most interesting within healthcare AI? Well, to answer this question, I reached out to a variety of experts in the space.

Here’s a look: 

Ori Geva, who is the CEO of Medial EarlySign:

One of the key trends is the use of health AI to spur the transition of medicine from reactive to proactive care. Machine learning-based applications will preempt and prevent disease on a more personal level, rather than merely reacting to symptoms. Providers and payers will be better positioned to care for their patients’ needs with the tools to delay or prevent the onset of life-threatening conditions. Ultimately, patients will benefit from timely and personalized treatment to improve outcomes and potentially increase survival rates.

Dr. Gidi Stein, who is the CEO of MedAware:

In the next five years, consumers will gain more access to their health information than ever before via mobile electronic medical records (EMR) and health wearables. AI will facilitate turning this mountain of data into actionable health-related insights, promoting personalized health and optimizing care. This will empower patients to take the driving wheel of their own health, promote better patient-provider communication and facilitate high-end healthcare to under-privileged geographies.

Tim O’Malley, who is the President and Chief Growth Officer at EarlySense:

Today, there are millions of physiologic parameters which are extracted from a patient. I believe the next mega trend will be harnessing this AI-driven “Smart Data” to accurately predict and avoid adverse events for patients. The aggregate of this data will be used to formulate predictive analytics to be used across diverse patient populations across the continuum of care, which will provide truly personalized medicine.

Andrea Fiumicelli, who is the vice president and general manager of Healthcare and Life Sciences at DXC Technology:

Ultimately, AI and data analytics could prove to be the catalyst in addressing some of today’s most difficult-to-treat health conditions. By combining genomics with individual patient data from electronic health records and real-world evidence on patient behavior culled from wearables, social media and elsewhere, health care providers can harness the power of precision medicine to determine the most effective approaches for specific patients.

This brings tremendous potential to treating complex conditions such as depression. AI can offer insights into a wealth of data to determine the likelihood of depression—based on the patient’s age, gender, comorbidities, genomics, life style, environment, etc.—and can provide information about potential reactions before they occur, thus enabling clinicians to provide more effective treatment sooner.

Ruthie Davi, who is the vice president of Data Science at Acorn AI, a Medidata company:

One key advance to consider is the use of carefully curated datasets to form Synthetic Control Arms as a replacement for placebo in clinical trials. Recruiting patients for randomized control trials can be challenging, particularly in small patient populations. From the patient perspective, while an investigational drug can offer hope via a new treatment option, the possibility of being in a control arm can be a disincentive. Additionally, if patients discover they are in a control arm, they may drop out or elect to receive therapies outside of the trial protocol, threatening the validity and completion of the entire trial.

However, thanks to advances in advanced analytics and the vast amount of data available in life sciences today, we believe there is a real opportunity to transform the clinical trial process. By leveraging patient-level data from historical clinical trials from Medidata’s expansive clinical trial dataset, we can create a synthetic control arm (SCA) that precisely mimics the results of a traditional randomized control. In fact, in a recent non-small cell lung cancer case study, Medidata together with Friends of Cancer Research was successful in replicating the overall survival of the target randomized control with SCA. This is a game-changing effort that will enhance the clinical trial experience for patients and propel next generation therapies through clinical development.

Tesla’s AI Acquisition: A New Way For Autonomous Driving?

This week Tesla acquired DeepScale, which is a startup that focuses on developing computer vision technologies (the price of the deal was not disclosed). This appears to be a part of the company’s focus on building an Uber-like service as well as building fully autonomous vehicles.

Founded in 2015, DeepScale has raised $15 million from investors like Point72, next47, Andy Bechtolsheim, Ali Partovi, and Jerry Yang. The founders include Forrest Iandola and Kurt Keutzer, who both hold PhDs. In fact, about a quarter of the engineering team holds a PhD, and the team has more than 30,000 academic citations.

“DeepScale is a great fit for Tesla because the company specializes in compressing neural nets to work in vehicles, and hooking them into perception systems with multiple data types,” said Chris Nicholson, who is the CEO and founder of Skymind. “That’s what Tesla needs to make progress in autonomous driving.”

Tesla has the advantage of an enormous database of vehicle information. So with software expertise, the acquisition should help accelerate innovation. “If ‘data is the new oil’ then ‘AI models are the new Intellectual Property and barrier to entry,’” said Joel Vincent, who is the CMO of Zededa. “This is the dawn of a new age of competitive differentiation. AI models are useless without data and Tesla has an astounding amount of edge data.”

Now when it comes to autonomous driving, there are other major requirements–some of which may get little attention.

Just look at the use of energy. “Large models require more powerful processors and larger memory to run them in production,” said Dr. Sumit Gupta, who is the IBM Cognitive Systems VP of AI and HPC. “But vehicles have a limited energy budget, so the market is always trying to minimize the energy that the electronics in the car consume. This is what DeepScale is good at. The company invented an AI model called ‘SqueezeNet’ that requires a smaller memory footprint and also less CPU horsepower.”

Keep in mind that the lower energy consumption will mean there will be more capacity for sensors for vision. “This should help make autonomous vehicles safer,” said Arjan Wijnveen, who is the CEO of CVEDIA. “Tesla seems certain that they don’t need LiDAR for effective computer vision, but there are lots of other types of sensors you could see on their vehicles in the future, and sometimes just placing a second camera facing another angle can improve the AI model.”

Not using LiDAR would be a big deal, which would mean a much lower cost per vehicle. “There are concerns about the deployment of LIDAR lasers in the public sphere,” said Gavin D. J. Harper, who is a Faraday Institution Research Fellow at the University of Birmingham. “Safety measures include limiting the power and exposure of lasers. There is also the concern about the potential for causing inadvertent harm to those nearby.”

So all in all, the DeepScale deal could move the needle for Tesla and represent a shift in the industry. Still, it is important to keep in mind that autonomous driving remains in its nascent stages (regardless of what Elon Musk boasts!). There remain many tough issues to work out, which could easily drag on because of regulatory processes.

“To get to full autonomy, you’re still going to need some major algorithmic improvements,” said Nicholson. “Some of the smartest people in the world are working on this, and it seems clear that we’ll get there, even if we don’t know when. In any case, companies like Tesla and Waymo have the right mix of talent, data, and cars on the road.”

What AI (Artificial Intelligence) Will Mean For The Cannabis Space

Just about every estimate shows that the cannabis industry will see strong long-term growth. Yet there are some major challenges–and they are more than just about changing existing laws and regulations.

But AI (Artificial Intelligence) is likely to be a big help. True, the industry has not been a big adopter of new technologies. However, this should change soon as investors pour billions of dollars into the space.

So how might AI impact things? Well, look at what the CEO and Director of CROP Corp, Michael Yorke, has to say: “The use of AI in sensors and high-definition cameras can be used to keep track of and adjust multiple inputs in the growing environment such as water level, pH level, temperature, humidity, nutrient feed, light spectrum and CO2 levels. Tracking and adjusting these inputs can make a major difference in the quantity and quality of cannabis that growers are able to produce. AI also helps automate trimming technology so that it is able to de-leaf buds, saving countless hours of manual labor. Similarly, it can be applied to automated planting equipment to increase the effectiveness and efficiency of planting. And AI can identify the sex of the plants, detect sick plants, heal or remove sick plants from the environment, and track the plant growth rate to be able to predict size and yield.”

No doubt, such things could certainly move the needle in a big way. 

There are also opportunities to help with such things as more accurate predictions, which would allow for maximizing efficiency. And yes, AI is likely to be key in discovering new strains or customizing strains for specific effects (examples include relaxation, excitement, or increasing/decreasing hunger). The result could be even more growth in the cannabis market. 

But there is something else to keep in mind: With no legalization on a federal level in the US, there is a need for sophisticated tracking systems. 

“The existing regulations are complex, requiring businesses to follow detailed rules that govern every area of the industry from growing to packaging and selling to consumers,” said Mark Krytiuk, who is the president of Nabis Holdings. “Even the smallest error can cost a cannabis business thousands, and incur harsh punishments such as losing their cannabis license.”

The situation is even more complex with retail operations. “Artificial intelligence is one key technological advancement that could make a significant impact,” said Krytiuk. “By implementing this technology, cannabis retailers would be able to more easily track state-by-state regulations, and the constant changes that are being made. With this information, they would be able to properly package, ship, and sell products in a more compliant way that is less likely to be intercepted by government regulations.”

Keep in mind that the problems with compliance are a leading cause of failure for cannabis operators. “Running a cannabis business can be costly, especially when it comes to getting and keeping a license, paying high taxes, and dealing with the added pressure of ever-changing government regulations,” said Krytiuk. “If more cannabis businesses had access to automated, AI-powered technology that could help them be more compliant, there would be more successful companies helping the industry to grow.”

Again, the AI part of the cannabis industry is very much in the nascent stages. It will likely take some time to get meaningful traction. But for entrepreneurs, the opportunity does look promising. “The industry is only going to continue to grow, so it’s only a matter of time before it reaches its own technological revolution,” said Krytiuk.

AI (Artificial Intelligence) Words You Need To Know

In 1956, John McCarthy set up a ten-week research project at Dartmouth College that was focused on a new concept he called “artificial intelligence.” The event included many of the researchers who would become giants in the emerging field, like Marvin Minsky, Nathaniel Rochester, Allen Newell, O.G. Selfridge, Raymond Solomonoff, and Claude Shannon.

Yet the reaction to the phrase artificial intelligence was mixed. Did it really explain the technology? Was there a better way to word it?

Well, no one could come up with something better–and so AI stuck.

Since then, we’ve seen the coining of plenty of words in the category, which often define complex technologies and systems. The result is that it can be tough to understand what is being talked about.

So to help clarify things, let’s take a look at the AI words you need to know:

Algorithm

From Kurt Muehmel, who is a VP Sales Engineer at Dataiku:

A series of computations, from the most simple (long division using pencil and paper), to the most complex. For example, machine learning uses an algorithm to process data, discover rules that are hidden in the data, and that are then encoded in a “model” that can be used to make predictions on new data.
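Muehmel's definition can be made concrete with a toy sketch (the data and the linear rule here are purely illustrative): an ordinary least-squares fit "discovers" the rule hidden in a handful of (x, y) pairs and encodes it as a model that can predict on new data.

```python
def fit_line(xs, ys):
    """Discover a hidden linear rule y = a*x + b via least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Data secretly generated by the rule y = 2x + 1
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]
a, b = fit_line(xs, ys)          # the algorithm recovers a=2, b=1
predict = lambda x: a * x + b    # the learned "model" for new data
```

The same shape, scaled up enormously, is what a machine learning pipeline does: run computations over data, encode the discovered rules in a model, and reuse the model for prediction.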

Machine Learning

From Dr. Hossein Rahnama, who is the co-founder and CEO of Flybits:

Traditional programming involves specifying a sequence of instructions that dictate to the computer exactly what to do. Machine learning, on the other hand, is a different programming paradigm wherein the engineer provides examples comprising what the expected output of the program should be for a given input. The machine learning system then explores the set of all possible computer programs in order to find the program that most closely generates the expected output for the corresponding input data. Thus, with this programming paradigm, the engineer does not need to figure out how to instruct the computer to accomplish a task, provided they have a sufficient number of examples for the system to identify the correct program in the search space.
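Dr. Rahnama's framing can be sketched almost literally: given input-output examples, search a space of candidate programs and keep the one that best reproduces the expected outputs. Real systems search vastly larger spaces; the three candidates here are a hypothetical miniature stand-in.

```python
# A miniature "space of all possible programs" (hypothetical candidates)
candidates = {
    "double":    lambda x: 2 * x,
    "square":    lambda x: x * x,
    "add_three": lambda x: x + 3,
}

def learn(examples):
    """Pick the candidate program with the fewest mistakes on the examples."""
    def errors(name):
        return sum(candidates[name](x) != y for x, y in examples)
    best = min(candidates, key=errors)
    return best, candidates[best]

# The engineer supplies only examples, not instructions
examples = [(1, 1), (2, 4), (3, 9)]
name, program = learn(examples)    # the search settles on "square"
```

Note that nowhere did anyone tell the computer *how* to square a number; the examples alone identified the correct program in the search space.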

Neural Networks

From Dan Grimm, who is the VP and General Manager of Computer Vision at RealNetworks:

Neural networks are mathematical constructs that mimic the structure of the human brain to summarize complex information into simple, tangible results. Much like we train the human brain to, for example, learn to control our bodies in order to walk, these networks also need to be trained with significant amounts of data. Over the last five years, there have been tremendous advancements in the layering of these networks and the compute power available to train them.

Deep Learning

From Sheldon Fernandez, who is the CEO of DarwinAI:

Deep Learning is a specialized form of Machine Learning, based on neural networks that emulate the cognitive capabilities of the human mind. Deep Learning is to Machine Learning what Machine Learning is to AI–not the only manifestation of its parent, but generally the most powerful and eye-catching version. In practice, deep learning networks capable of performing sophisticated tasks are 1.) many layers deep with millions, sometimes billions, of inputs (hence the ‘deep’); 2.) trained using real world examples until they become proficient at the prevailing task (hence the ‘learning’).

Explainability

From Michael Beckley, who is the CTO and founder of Appian:

Explainability is knowing why AI rejects your credit card charge as fraud, denies your insurance claim, or confuses the side of a truck with a cloudy sky. Explainability is necessary to build trust and transparency into AI-powered software. The power and complexity of AI deep learning can make predictions and decisions difficult to explain to both customers and regulators. As our understanding of potential bias in data sets used to train AI algorithms grows, so does our need for greater explainability in our AI systems. To meet this challenge, enterprises can use tools like Low Code Platforms to put a human in the loop and govern how AI is used in important decisions.
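One simple route to the explainability Beckley describes is an additive scoring model, where each feature contributes independently to the decision and so every verdict can be itemized. The fraud features, weights, and threshold below are invented for illustration, not taken from any real system.

```python
# Hypothetical additive fraud-score model: because each feature contributes
# independently, every decision can be explained feature by feature.
WEIGHTS = {"foreign_country": 2.0, "night_time": 1.0, "large_amount": 3.0}
THRESHOLD = 3.5

def score_and_explain(charge):
    """Return a verdict plus the per-feature contributions behind it."""
    contributions = {f: w for f, w in WEIGHTS.items() if charge.get(f)}
    total = sum(contributions.values())
    verdict = "fraud" if total > THRESHOLD else "ok"
    return verdict, contributions

verdict, why = score_and_explain(
    {"foreign_country": True, "large_amount": True})
# `why` names exactly which features drove the decision
```

Deep networks do not decompose this cleanly, which is precisely why their predictions are harder to explain to customers and regulators than a transparent scorecard like this one.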

Supervised, Unsupervised and Reinforcement Learning

From Justin Silver, who is the manager of science & research at PROS:

There are three broad categories of machine learning: supervised, unsupervised, and reinforcement learning. In supervised learning, the machine observes a set of cases (think of “cases” as scenarios like “The weather is cold and rainy”) and their outcomes (for example, “John will go to the beach”) and learns rules with the goal of being able to predict the outcomes of unobserved cases (if, in the past, John usually has gone to the beach when it was cold and rainy, in the future the machine will predict that John will very likely go to the beach whenever the weather is cold and rainy). In unsupervised learning, the machine observes a set of cases, without observing any outcomes for these cases, and learns patterns that enable it to classify the cases into groups with similar characteristics (without any knowledge of whether John has gone to the beach, the machine learns that “The weather is cold and rainy” is similar to “It’s snowing” but not to “It’s hot outside”). In reinforcement learning, the machine takes actions towards achieving an objective, receives feedback on those actions, and learns through trial and error to take actions that lead to better fulfillment of that objective (if the machine is trying to help John avoid those cold and rainy beach days, it could give John suggestions over a period of time on whether to go to the beach, learn from John’s positive and negative feedback, and continue to update its suggestions).
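Silver's three categories can be sketched with toy versions of his beach example (all of the data and the update rule below are hypothetical, chosen only to make each paradigm visible in a few lines):

```python
# Supervised: learn the outcome associated with each observed case
history = [("cold_rainy", "beach"), ("cold_rainy", "beach"), ("hot", "home")]
supervised = {}
for case, outcome in history:
    supervised[case] = outcome          # remembered outcome per case

# Unsupervised: group cases by similarity, with no outcomes observed
cases = ["cold and rainy", "snowing and cold", "hot outside"]
clusters = {
    "cold-like": [c for c in cases if "cold" in c or "snow" in c],
    "hot-like":  [c for c in cases if "hot" in c],
}

# Reinforcement: adjust an action's value from +1/-1 feedback over time
value = 0.0                             # value of "suggest the beach"
for reward in [+1, +1, -1, +1]:         # John's feedback, trial by trial
    value += 0.5 * (reward - value)     # simple incremental update
```

The boundary between the three is the signal available: labeled outcomes, no outcomes at all, or delayed reward feedback.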

Bias

From Mehul Patel, who is the CEO of Hired:

While you may think of machines as objective, fair and consistent, they often adopt the same unconscious biases as the humans who built them. That’s why it’s vital that companies recognize the importance of normalizing data—meaning adjusting values measured on different scales to a common scale—to ensure that human biases aren’t unintentionally introduced into the algorithm. Take hiring as an example: If you give a computer a data set with 100 female candidates and 300 male candidates and ask it to predict the best person for the job, it is going to surface more male candidates because the volume of men is three times the size of women in the data set. Building technology that is fair and equitable may be challenging but will ensure that the algorithms informing our decisions and insights are not perpetuating the very biases we are trying to undo as a society.
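Patel's hiring example can be reproduced in miniature. With the hypothetical 300-to-100 dataset below, a purely frequency-driven predictor surfaces the majority group on volume alone; normalizing each group onto a common scale removes that artifact.

```python
from collections import Counter

# Hypothetical hiring data: 300 male candidates, 100 female candidates
dataset = ["male"] * 300 + ["female"] * 100

raw = Counter(dataset)
top_raw = raw.most_common(1)[0][0]      # "male", purely from volume

# Normalize: weight each group so both sit on a common scale
weights = {g: 1 / n for g, n in raw.items()}
balanced = {g: n * weights[g] for g, n in raw.items()}
# After normalization, neither group dominates (both score 1.0)
```

The bias here was never intended by anyone; it fell straight out of an unbalanced dataset, which is exactly the unconscious mechanism Patel describes.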

Backpropagation

From Victoria Jones, who is the Zoho AI Evangelist:

Backpropagation algorithms allow a neural network to learn from its mistakes. The technology tracks an event backwards from the outcome to the prediction and analyzes the margin of error at different stages to adjust how it will make its next prediction. Around 70% of the features of our AI assistant (called Zia) use backpropagation, including Zoho Writer’s grammar-check engine and Zoho Notebook’s OCR technology, which lets Zia identify objects in images and make those images searchable. This technology also allows Zia’s chatbot to respond more accurately and naturally. The more a business uses Zia, the more Zia understands how that business is run. This means that Zia’s anomaly detection and forecasting capabilities become more accurate and personalized to any specific business.
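The "track the error backwards and adjust" loop Jones describes can be shown in its smallest possible form: one weight, squared error, gradient descent. The data and learning rate are illustrative; real networks apply the same idea across millions of weights and many layers.

```python
# Minimal backpropagation sketch: one weight, squared error.
# Each step traces the error back from the outcome to the weight,
# then adjusts the weight to improve the next prediction.
def train(pairs, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        for x, target in pairs:
            pred = w * x                     # forward pass
            grad = 2 * (pred - target) * x   # error propagated back to w
            w -= lr * grad                   # adjust for the next prediction
    return w

pairs = [(1.0, 3.0), (2.0, 6.0)]             # hidden rule: y = 3x
w = train(pairs)                             # converges near 3.0
```

In a multi-layer network, the chain rule extends this same gradient calculation backwards through every layer, which is where the name backpropagation comes from.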

NLP (Natural Language Processing)

From Courtney Napoles, who is the Language Data Manager at Grammarly:

The field of NLP brings together artificial intelligence, computer science, and linguistics with the goal of teaching machines to understand and process human language. NLP researchers and engineers build models for computers to perform a variety of language tasks, including machine translation, sentiment analysis, and writing enhancement. Researchers often begin with analysis of a text corpus—a huge collection of sentences organized and annotated in a way that AI algorithms can understand.

The problem of teaching machines to understand human language—which is extraordinarily creative and complex—dates back to the advent of artificial intelligence itself. Language has evolved over the course of millennia, and devising methods to apprehend this intimate facet of human culture is NLP’s particularly challenging task, requiring astonishing levels of dexterity, precision, and discernment. As AI approaches—particularly machine learning and the subset of ML known as deep learning—have developed over the last several years, NLP has entered a thrilling period of new possibilities for analyzing language at an unprecedented scale and building tools that can engage with a level of expressive intricacy unimaginable even as recently as a decade ago.

How Screenwriting Can Boost AI (Artificial Intelligence)

Scott Ganz, who is the Principal Content Designer at Intuit, did not get into the tech game the typical way. Keep in mind that he was a screenwriter, such as for WordGirl (winning an Emmy for his work) and The Muppets. “I’ve always considered myself a comedy writer at heart, which generally involves a lot of pointing out what’s wrong with the world to no one in particular and being unemployed,” he said.

Yet his skills have proven quite effective and valuable, as he has helped create AI chatbots for brands like Mattel and Call of Duty. And yes, as of now, he works on the QuickBooks Assistant, a text-based conversational AI platform. “I partner with our engineering and design teams to create an artificial personality that can dynamically interact with our customers, and anticipate their questions,” he said. “It also involves a lot of empathy. It’s one thing to anticipate what someone might ask about their finances. You also have to anticipate how they feel about it.”

So what are his takeaways? Well, he certainly has plenty. Let’s take a look:

Where is chatbot technology right now? Trends you see in the coming years?

It’s a really exciting time for chatbots and conversational user interfaces in general. They’re becoming much more sophisticated than the rudimentary “press 1 for yes” types of systems that we used to deal with… usually by screaming “TALK TO A HUMAN!” The technology is evolving quickly, and so are the rules and best practices. With those two things still in flux, the tech side and the design side can constantly disrupt each other.

Using natural language processing, we’re finding opportunities to make our technology more humanized and relatable, so people can interact naturally and make better informed decisions about their money. For example, we analyze our customers’ utterances to not only process their requests, but also to understand the intent behind their requests, so we can get them information faster and with greater ease. We really take “natural language processing” seriously. Intuit works really hard to speak its customers’ language. That’s extra important to us. You can’t expect people to say “accounts receivable.” It’s “who owes me money?”

As chatbots become more and more prevalent in our daily lives, it’s important that we’re aware of the types of artificial beings we’re inviting into our living rooms, cars, and offices. I’m excited to see the industry experiment with gender-neutral chatbots, where we can both break out of the female assistant trope and also make the world a little more aware of non-binary individuals.

Explain your ideas about screenwriting/improv/chatbots?

When I joined the WordGirl writing team, my first task was to read the show “bible” that the creators had written. Yes, “bible” is the term they use. WordGirl didn’t have a central office or writers’ room. We met up once a year to pitch ideas and, after that, we worked remotely. Therefore, the show bible was essential when it came to getting us all on the same page creatively.

Once I got to Intuit, I used those learnings to create a relationship bible to make sure everyone was on the same page about our bot’s personality and its relationship with our customers. Right now my friend Nicholas Pelczar and I are the only two conversation designers at QuickBooks, but we don’t want it to stay that way. As the team expands, documents like this become even more important.

As a side-note, I originally called the document, “the character bible,” but after my team had a coaching session with Scott Cook, who is the co-founder of Intuit, I felt that the document wasn’t customer-focused enough. I then renamed it the “relationship bible” and redid the sections on the bot’s wants, needs, and fears so that they started with the customer’s wants, needs, and fears. Having established those, I then spelled out exactly why the bot wants and fears those same things.

I also got to apply my improv experience to make the bot more personable. Establishing the relationship is the best first thing you can do in an improv scene. The Upright Citizens Brigade is all about this. It increases your odds of having your scene make sense. Also, comedy is hard, and I wanted to give other content designers some easy recipes to write jokes. Therefore, the last section of the relationship bible has some guidance on the kinds of jokes the bot would and wouldn’t/shouldn’t make, as well as some comedy tropes designers can use.

What are some of the best practices? And the gotchas?

On a tactical level, conversation design hinges on communicating clearly with people so that the conversation itself doesn’t slip off the rails. When this happens in human conversation, people are incredibly adept at getting things back on track. Bots are much less nuanced and flexible and they really struggle with this. Therefore, designers need to be incredibly precise both in how you ask and answer questions. It’s really easy to mislead people about what the bot is asking or what it’s capable of answering.

Beyond that, it’s really important to track the emotional undercurrents of the conversation. It may be an AI, but it’s dealing with people (and all of their emotions). AI has its strengths, but design requires empathy, and this is absolutely the case when you’re engaging with people in conversation. Our emotions are a huge part of what we’re communicating, and true understanding requires that we understand that side of it as well. Even though a chatbot is not a person, it is made by real people, expressing real care, in a way that makes it available to everyone, all the time. It’s important to keep this in mind when working on a chatbot, so that the voice remains authentic.

One of the biggest things we need to take into account is ensuring the chatbot is a champion for the company, but still remains a little bit separated from the company. Since chatbots still make mistakes, it’s important to take steps to protect the company’s reputation. This plays out in creating strict guidelines around the use of “I” vs. “We.” When the chatbot messes up, it uses “I,” and only uses “we” in moments when it is working in conjunction with humans at the company. By using these two separate phrases, customers are able to better understand what is the work of the chatbot vs. the company. Essentially, we think of QuickBooks Assistant as a kind of digital employee.

Bias: The Silent Killer Of AI (Artificial Intelligence)

When it comes to AI (Artificial Intelligence), there’s usually a major focus on using large datasets, which allow for the training of models. But there’s a nagging issue: bias. What may seem like a robust dataset could instead be highly skewed, such as in terms of race, wealth and gender.

Then what can be done? Well, to help answer this question, I reached out to Dr. Rebecca Parsons, who is the Chief Technology Officer of ThoughtWorks, a global technology company with over 6,000 employees in 14 countries. She has a strong background in both the business and academic worlds of AI.

So here’s a look at what she has to say about bias:

Can you share some real-life examples of bias in AI systems and explain how it gets there?

It’s a common misconception that the developers who are responsible for infusing bias in AI systems are either prejudiced or acting out of malice—but in reality, bias is more often unintentional and unconscious in nature. AI systems and algorithms are created by people with their own experiences, backgrounds and blind spots, which can unfortunately lead to the development of fundamentally biased systems. The problem is compounded by the fact that the teams responsible for the development, training, and deployment of AI systems are largely not representative of society at large. According to a recent research report from NYU, women comprise only 10% of AI research staff at Google and only 2.5% of Google’s workforce is black. This lack of representation is what leads to biased datasets and ultimately algorithms that are much more likely to perpetuate systemic biases.

One example that demonstrates this point well is voice assistants like Siri or Alexa, which are trained on huge databases of recorded speech that are unfortunately dominated by white, upper-middle-class American speakers. That makes it challenging for the technology to understand commands from people outside that category. Additionally, studies have shown that algorithms trained on historically biased data have significant error rates for communities of color, especially in over-predicting the likelihood that convicted criminals will reoffend, which can have serious implications for the justice system.

How do you detect bias in AI and guard against it?

The best way to detect bias in AI is by cross-checking the algorithm you are using to see if there are patterns that you did not necessarily intend. Correlation does not always mean causation, and it is important to identify patterns that are not relevant so you can amend your dataset. One way you can test for this is by checking if there is any under- or overrepresentation in your data. If you detect a bias in your testing, then you must overcome it by adding more information to supplement that underrepresentation.
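The under- or overrepresentation check Parsons describes can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the group names, reference shares, and toy data below are invented, not drawn from any real dataset.

```python
from collections import Counter

def representation_report(samples, attribute, reference_shares):
    """Compare each group's share of the dataset against its share of a
    reference population, and report the ratio so imbalances stand out."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    report = {}
    for group, ref_share in reference_shares.items():
        share = counts.get(group, 0) / total
        report[group] = {
            "dataset_share": round(share, 3),
            "reference_share": ref_share,
            # Ratio near 1.0 means the group is fairly represented;
            # well above or below 1.0 flags over-/underrepresentation.
            "ratio": round(share / ref_share, 2) if ref_share else None,
        }
    return report

# Toy dataset: 90% of the samples come from group "A", even though
# "A" is only 60% of the reference population.
data = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
print(representation_report(data, "group", {"A": 0.6, "B": 0.4}))
```

A ratio of 0.25 for group “B” would be the cue to collect or supplement data for that group, as the interview suggests.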

The Algorithmic Justice League has also done interesting work on how to correct biased algorithms. They ran tests on facial recognition programs to see if they could accurately determine race and gender. Interestingly, lighter-skinned males were almost always correctly identified, while 35% of darker-skinned females were misidentified. The reason? One of the most widely used facial-recognition data training sets was estimated to be more than 75% male and more than 80% white. Because the categorization had far more definitive and distinguishing categories for white males than it did for any other race, it was biased in correctly identifying these individuals over others. In this instance, the fix was quite easy. Programmers added more faces to the training data and the results quickly improved.
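The kind of audit described above amounts to computing error rates disaggregated by demographic group rather than one aggregate accuracy number. A minimal sketch, using made-up counts that echo the disparity in the example:

```python
def error_rate_by_group(records):
    """Misclassification rate per group.
    Each record is a tuple: (group, predicted_label, true_label)."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / n for g, n in totals.items()}

# Hypothetical audit records: near-perfect accuracy for one group,
# a 35% error rate for another (mirroring the figures cited above).
records = (
    [("lighter_male", "m", "m")] * 99 + [("lighter_male", "f", "m")] * 1 +
    [("darker_female", "f", "f")] * 65 + [("darker_female", "m", "f")] * 35
)
rates = error_rate_by_group(records)
```

An aggregate accuracy over these 200 records would look respectable (82%), which is exactly why the per-group breakdown matters.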

While AI systems can get quite a lot right, humans are the only ones who can look back at a set of decisions and determine whether there are any gaps in the datasets or oversights that led to a mistake. This exact issue was documented in a study where a hospital was using machine learning to predict the risk of death from pneumonia. The algorithm came to the conclusion that patients with asthma were less likely to die from pneumonia than patients without asthma. Based on this data, hospitals could decide that it was less critical to hospitalize patients with both pneumonia and asthma, given the patients appeared to have a higher likelihood of recovery. However, the algorithm overlooked another important insight, which is that those patients with asthma typically receive faster and more intensive care than other patients, which is why they have a lower mortality rate connected to pneumonia. Had the hospital blindly trusted the algorithm, it may have incorrectly assumed that it’s less critical to hospitalize asthmatics, when in reality they actually require even more intensive care.
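The asthma effect described above is a classic omitted-confounder problem, and it can be reproduced with a few hypothetical numbers: if care intensity, not asthma, drives survival, the raw mortality rates still make asthma look protective.

```python
# Hypothetical mortality rates by level of care, identical for
# asthmatics and non-asthmatics: care intensity is what matters.
mortality = {"intensive": 0.05, "standard": 0.20}

# The hidden confounder: asthmatics are far more likely to be
# routed to intensive care in the first place.
care_mix = {
    "asthma":    {"intensive": 0.9, "standard": 0.1},
    "no_asthma": {"intensive": 0.1, "standard": 0.9},
}

def overall_mortality(group):
    """Raw mortality rate for a group, marginalized over care level."""
    return sum(care_mix[group][care] * mortality[care] for care in mortality)

print(overall_mortality("asthma"))     # ≈ 0.065: looks "protective"
print(overall_mortality("no_asthma"))  # ≈ 0.185
```

The raw rates say asthmatics fare far better, yet within each care level their risk is identical, which is exactly the trap a model trained only on outcomes would fall into.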

What could be the consequences to the AI industry if bias is not dealt with properly?

As detailed in the asthma example, if biases in AI are not properly identified, the difference can quite literally be life and death. The use of AI in areas like criminal justice can also have devastating consequences if left unchecked. Another less-talked-about consequence is the potential of more regulation and lawsuits surrounding the AI industry. Real conversations must be had around who is liable if something goes terribly wrong. For instance, is it the doctor who relies on the AI system that made the decision resulting in a patient’s death, or the hospital that employs the doctor? Is it the AI programmer who created the algorithm, or the company that employs the programmer?

Additionally, the “witness” in many of these incidents cannot even be cross-examined since it’s often the algorithm itself. And to make things even more complicated, many in the industry are taking the position that algorithms are intellectual property, therefore limiting the court’s ability to question programmers or attempt to reverse-engineer the program to find out what went wrong in the first place. These are all important discussions that must be had as AI continues to transform the world we live in.

If we allow this incredible technology to continue to advance but fail to address questions around biases, our society will undoubtedly face a variety of serious moral, legal, practical and social consequences. It’s important we act now to mitigate the spread of biased or inaccurate technologies.

What Do VCs Look For In An AI (Artificial Intelligence) Deal?

Recently, SoftBank Group launched its latest fund, Vision Fund 2, with $108 billion in capital. Its focus is primarily on AI (Artificial Intelligence) investments. No doubt, the fund will have a huge impact on the industry.

But of course, VCs are not the only ones ramping up their investments. So are mega tech companies. For example, Microsoft has announced a $1 billion equity stake in OpenAI (here’s a post I wrote for Forbes.com on the deal).

The irony is that, until a decade ago, AI was mostly a backwater, a field that had suffered through several “AI winters.” But with the surge in Big Data, new innovations in academic research and improvements in GPUs, the technology has become a real force.

“I’m convinced that the current AI revolution will be the largest technology trend driving innovation in enterprise software over the next decade,” said Jeremy Kaufmann, who is a Principal at ScaleVP. “The magnitude of this trend will be at least as large as what we observed with cloud software overtaking on-prem software over the last two decades, which created over 200 billion dollars in value. While certain subfields like autonomous driving may be overhyped with irrational expectations on timing, I would argue that progress in the discipline has actually exceeded expectations since the deep learning revolution in 2012.”

Given all this, many AI startups are springing up to capitalize on the megatrend. So what factors improve a startup’s odds of getting funded?

Well, in the AI wave, things may be different from what we saw with the cloud revolution.

“Unlike the shift from on-prem software to SaaS, though, progress in AI will not rewrite every business application,” said Kaufmann. “For example, SaaS companies like Salesforce and Workday were able to get big by fundamentally eating old-school on-prem vendors like Oracle and ADP. In this AI revolution, however, do not expect a new startup to displace an incumbent by offering an ‘AI-first’ version of Salesforce or Workday, as AI does not typically replace a core business system of record. Rather, the most logical area for AI to have an impact in the world of enterprise software will be to sit on top of multiple systems of record where it can act as a system of prediction. I am excited about conversations around the likelihood of sales conversion, the most effective marketing channels, the probability that a given credit card transaction is fraudulent, and whether a particular website visitor might be a malicious bot.”
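The fraud example in the quote above is the kind of “system of prediction” that can sit on top of a payments system of record. A toy logistic scorer makes the idea concrete; the features and weights here are invented for illustration and are not from any real fraud model.

```python
import math

def fraud_probability(amount_usd, foreign_country, new_device):
    """Toy logistic model: probability a card transaction is fraudulent.
    Weights are hand-picked for illustration, not learned from data."""
    score = (
        -4.0                      # intercept: baseline fraud is rare
        + 0.002 * amount_usd      # larger amounts raise suspicion
        + 1.5 * foreign_country   # 1 if the transaction is cross-border
        + 2.0 * new_device        # 1 if the device has never been seen
    )
    # Squash the linear score into a probability with the sigmoid.
    return 1 / (1 + math.exp(-score))

# A large cross-border purchase from an unknown device scores high;
# a small domestic purchase from a known device scores low.
high = fraud_probability(amount_usd=900, foreign_country=1, new_device=1)
low = fraud_probability(amount_usd=20, foreign_country=0, new_device=0)
```

In production such a score would be learned from labeled transactions, but the architecture is the same: the predictor reads from the system of record and annotates it, rather than replacing it.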

Data, The Team And Business Focus

An AI startup will also need a rock-solid data strategy, which allows for training of the models. Ideally this would mean having a proprietary source.

“At the highest level, a lot of our diligence comes down to who has proprietary data,” said Kaufmann. “In every deal, we ask, ‘Does this startup have domain-specific understanding and data or could a Google or Amazon sweep in and replicate what they do?’ It’s one of the biggest challenges for startups—in order to succeed, you need a data set that can’t easily be replicated. Without the resources of a larger, more established company, that’s a very big challenge—but it can be achieved with a variety of hacks, including ‘selling workflow first, AI second,’ scraping publicly available data for a minimum viable product, incentivizing your customers to share their data with you in return for a price discount, or partnering with the relevant institutions in the field who possess the key data in question.”

And even when you have robust data, there remain other challenges, such as with tagging and labeling.  There are also inherent problems with bias, which can lead to unintended consequences.

All this means that—as with any venture opportunity—the team is paramount. “We look at the academic backgrounds,” said Rama Sekhar, who is a partner at Norwest Venture Partners. “We also like a team that has worked with models at scale, say from companies like Google, Amazon or Apple.”

But for AI startups, there can often be too much of a focus on the technology, which could stymie the progress of the startup. “Some of the red flags for me are when a pitch does not define a market target clearly or there is not a differentiation,” said David Blumberg, who is the Founder and Managing Partner of Blumberg Capital. “Failed startups are usually not because of the technology but instead from a lack of product-market fit.”

Microsoft Bets $1 Billion On The Holy Grail Of AI (Artificial Intelligence)

Microsoft is back to its winning ways. And with its surging profits, the company is looking for ways to marshal its enormous resources to keep up the momentum. Perhaps one of the most important recent deals is a $1 billion investment in OpenAI. The goal is to build a next-generation AI platform that is not only powerful but ethical and trustworthy.

Founded in 2015, OpenAI is one of the few companies, along with Google’s DeepMind, focused on AGI (Artificial General Intelligence), which is really the Holy Grail of the AI world. According to a blog post from the company: “AGI will be a system capable of mastering a field of study to the world-expert level, and mastering more fields than any one human — like a tool which combines the skills of Curie, Turing, and Bach. An AGI working on a problem would be able to see connections across disciplines that no human could. We want AGI to work with people to solve currently intractable multi-disciplinary problems, including global challenges such as climate change, affordable and high-quality healthcare, and personalized education. We think its impact should be to give everyone economic freedom to pursue what they find most fulfilling, creating new opportunities for all of our lives that are unimaginable today.”

It’s a bold vision. But OpenAI has already made major advances in AI.  “It has pushed the limits of what AI can achieve in many fields, but there are two in particular that stand out,” said Stephane Rion, who is the Senior Deep Learning Specialist of Emerging Practices at Teradata. “The first one is reinforcement learning, where OpenAI has driven some major research breakthroughs, including designing an AI system capable of defeating most of its human challengers in the video game Dota 2. This project doesn’t just show the promise of AI in the video game industry, but how Reinforcement Learning can be used for numerous other applications such as robotics, retail and manufacturing. OpenAI has also made some major advances in the area of Natural Language Processing (NLP), specifically in unsupervised learning and attention mechanism. This can be used to build systems that achieve many language-based tasks such as translation, summarization and generation of coherent paragraphs of text.”

But keep in mind that AI is still fairly weak, with applications for narrow use cases. The fact is that AGI is likely years away. “In fact, we’re far from machines learning basic things about the world in the same way animals can,” said Krishna Gade, who is the CEO and Co-founder at Fiddler Labs. “It is true, machines can beat humans on some tasks by processing tons of data at scale. However, as humans, we don’t understand or approach the world this way — by processing massive amounts of labeled data. Instead, we use reasoning and predictions to infer the future from available information. We’re able to fill in gaps, extrapolate things with common sense and work with incomplete premises — which machines simply can’t do yet. So while it is interesting, I do believe that we’re quite far from AGI.”

Guy Caspi, who is the CEO of Deep Instinct, agrees on this. “While deep learning has drastically improved the state-of-the-art in many fields, we have seen little progress in AGI,” he said. “A deep learning model trained for computer vision cannot suddenly learn to understand Japanese, and vice versa.”

Regardless, with OpenAI and Microsoft willing to take on tough challenges, there will likely be an acceleration of innovation and breakthroughs — providing benefits in the near-term. “I love to see a company like Microsoft, which the market has rewarded with enormous returns on capital, making investments that aim to benefit society,” said Dave Costenaro, who is the head of artificial intelligence R&D at Jane.ai. “Funding OpenAI, creating more open source content, and sponsoring their ‘AI for Earth’ grants are all examples that Microsoft has recently spun up in earnest. Microsoft is not a charity, however, so the smart money will take this as signal of how strategically important these issues are.”

What’s more, the emphasis on ethics will also be impactful. As AI becomes pervasive, there needs to be more attention to the social implications.  We are already seeing problems, such as with bias and deepfakes.

“With the fast pace of AI innovation, we’re continually encountering new, largely unforeseen ethical implications,” said Alexandr Wang, who is the CEO of Scale. “In the absence of traditional governing bodies, creating ethical guidelines for AI is both a shared and a perpetual responsibility.”

AI (Artificial Intelligence): An Extinction Event For The Corporate World?

Back in 1973, Daniel Bell published a pioneering book called The Coming of Post-Industrial Society. He described how society was undergoing relentless change, driven by rapid advances in technology. Services would become more important than goods, a “knowledge class” would emerge, and gender roles would evolve. According to Bell: “The concept of the post-industrial society deals primarily with changes in the social structure, the way in which the economy is being transformed and the occupational system reworked, and with the new relations between theory and empiricism, particularly science and technology.”

Among the many readers of the book was a young Tom Siebel. And yes, it would rock his world. It’s what inspired him to enroll at the graduate school of engineering at the University of Illinois and get a degree in Computer Science. He would then join Oracle during the 1980s, where he helped lead the wide-scale adoption of relational databases. Then in 1993, he started his own business, called Siebel Systems, which capitalized on the Internet wave, and after this, he launched C3.ai, a top enterprise AI software provider.

No doubt, he has a great view of tech history (during his career, the IT business has grown from $50 billion to $4 trillion). But he also has an uncanny knack for anticipating major trends and opportunities. Bell’s book was certainly a big help. Keep in mind that Bell said it was essentially about “social forecasting.”

OK then, so what now for Siebel? Well, he has recently published his own book: Digital Transformation: Survive and Thrive in an Era of Mass Extinction.

In it, he contends that technology is at an inflection point. While the PC and Internet eras were mostly about streamlining procedures and operations, the new era is much different. It’s about the convergence of four megatrends: cloud computing, big data, artificial intelligence (AI) and IoT (Internet-of-Things). Together, these are making systems and platforms increasingly smarter.

Siebel provides various case studies to show what’s already being done, such as:

  • Royal Dutch Shell: The oil giant has created an AI app that analyzes some 500,000 pieces of equipment across its refineries worldwide so as to allow for predictive maintenance.
  • Caterpillar: With an iPhone, a customer can easily get the vitals on a tractor. Caterpillar has also leveraged telemetry to monitor assets in order to predict failures.
  • 3M: The company is leveraging AI to anticipate complaints from invoices.
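The telemetry-driven predictive maintenance in the examples above boils down to flagging readings that break from their recent history. A minimal sketch of such a check, using a hypothetical sensor trace and thresholds (real systems would use learned models over many signals):

```python
import statistics

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag indices where a reading deviates from the mean of the
    preceding `window` readings by more than `threshold` standard
    deviations — a crude early-warning signal for maintenance."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent)
        if stdev and abs(readings[i] - mean) > threshold * stdev:
            alerts.append(i)
    return alerts

# Hypothetical engine temperatures: steady, then a sudden spike.
temps = [70, 71, 69, 70, 72, 71, 70, 95, 70, 71]
print(flag_anomalies(temps))
```

The spike at index 7 is the only reading flagged; in a fleet-scale deployment the same idea runs continuously over thousands of sensors per asset.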

All these initiatives have had a major impact, resulting in hundreds of millions in savings. Of course, this does not include the positive impact from improved customer service, safety and environmental protection.

In fact, Siebel believes that if companies do not engage in digital transformation, the prospects for success will be bleak. This means that CEOs must lead the process and become much more knowledgeable about technology. In other words, all CEOs will need to deeply rethink their businesses.

Now there will definitely be challenges and resistance. Many companies simply lack the talent to transition towards next-generation technologies like AI. Even worse, the current technologies are often scattered, disconnected and complex.

Yet Siebel does point out that large companies have inherent advantages. One is that there is often a large amount of quality data, which can be a powerful moat against the competition. What’s more, large companies have the resources, distribution and infrastructure to scale their efforts.

But for the most part, CEOs can no longer wait as the pace of change is accelerating.

According to Siebel:  “New business models will emerge.  Products and services unimaginable today will be ubiquitous.  New opportunities will abound.  But the great majority of corporations and institutions that fail to seize this moment will become footnotes in history.”