AI (Artificial Intelligence) Companies That Are Combating The COVID-19 Pandemic

AI (Artificial Intelligence) has a long history, going back to the 1950s when the computer industry started.  It’s interesting to note that much of the innovation came from government programs, not private industry. This was all about how to leverage technologies to fight the Cold War and put a man on the moon. 

The impact of these programs would certainly be far-reaching. They would lead to the creation of the Internet and the PC revolution.

So fast forward to today: Could the COVID-19 pandemic have a similar impact? Might it be our generation’s Space Race?

I think so. And of course, it’s not just the US this time. This is a worldwide effort.

The Catalysts

Wide-scale availability of data will be key. The White House Office of Science and Technology Policy has formed the COVID-19 Open Research Dataset, which has over 24,000 papers and is constantly being updated. The effort has the support of the National Library of Medicine (NLM), the National Institutes of Health (NIH), Microsoft and the Allen Institute for Artificial Intelligence.

“This database helps scientists and doctors create personalized, curated lists of articles that might help them, and allows data scientists to apply text mining to sift through this prohibitive volume of information efficiently with state-of-the-art AI methods,” said Noah Giansiracusa, who is an Assistant Professor at Bentley University.

Yet there needs to be an organized effort to galvanize AI experts to action. The good news is that there are already groups emerging. For example, there is the C3.ai Digital Transformation Institute, which is a new consortium of research universities, C3.ai (a top AI company) and Microsoft. The organization will be focused on using AI to fight pandemics. 

There are even competitions being set up to stir innovation. One is Kaggle’s COVID-19 Open Research Dataset Challenge, which is a collaboration with the NIH and the White House. This will be about leveraging Kaggle’s community of more than 4 million data scientists. The first contest was to help provide better forecasts of the spread of COVID-19 across the world. 

Next, the Decentralized Artificial Intelligence Alliance, which is led by SingularityNET, is putting together an AI hackathon to fight the pandemic. The organization has more than 50 companies, labs and nonprofits.

And then there is MIT Solve, which is a marketplace for social impact innovation. It has established the Global Health Security & Pandemics Challenge. In fact, a member of this organization, Ada Health, has developed an AI-powered COVID-19 personalized screening test. 

Free AI Tools

AI tools and infrastructure services can be costly. This is especially the case for models that target complex areas like medical research.

But AI companies have stepped up—that is, by eliminating their fees:

  • NVIDIA is providing a free 90-day license for Parabricks, which allows for using AI for genomics purposes. Consider that the technology can significantly cut down the time for processing. The program also involves free support from Oracle Cloud Infrastructure and Core Scientific (a provider of NVIDIA DGX systems and NetApp cloud-connected storage).
  • DataRobot is offering its platform for no charge. This allows for the deployment, monitoring and management of AI models at scale. The technology is also provided to the Kaggle competition. 
  • Run:AI is offering its software for free to help with building virtualization layers for deep learning models. 
  • DarwinAI has collaborated with the University of Waterloo’s VIP Lab to develop COVID-Net. This is a convolutional neural network that detects COVID-19 using chest radiography. DarwinAI is also making this technology open source.

Patient Care

Patient care is an area where AI could be essential. An example of this is Biofourmis. In a two-week period, this startup created a remote monitoring system that pairs a biosensor worn on the patient’s arm with an AI application to help with diagnosis. By reducing the need for close contact, it can help lower infection rates for doctors and medical support personnel. Keep in mind that, in China, about 29% of COVID-19 deaths were healthcare workers. 

Another promising innovation to help patients is from Vital.  The founders are Aaron Patzer, who is the creator of Mint.com, and Justin Schrager, an ER doc. Their company uses AI and NLP (Natural Language Processing) to manage overloaded hospitals. 

Vital is now devoting all its resources to creating C19check.com. The app, which was built in partnership with the Emory Department of Emergency Medicine’s Health DesignED Center and the Emory Office of Critical Event Preparedness and Response, provides guidance to the public for self-triage before going to the hospital. So far, it’s been used by 400,000 people. 

And here are some other interesting patient care innovations:

  • AliveCor: The company has launched KardiaMobile 6L, which measures QTc (the heart-rate-corrected QT interval) in COVID-19 patients. The AI-based measurement helps flag the risk of sudden cardiac arrest (a simple version of the underlying correction formula is sketched after this list). It’s based on the FDA’s recent guidance allowing wider availability of non-invasive remote monitoring devices during the pandemic. 
  • CLEW: It has launched its TeleICU solution, which uses AI to identify respiratory deterioration in advance. 
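
As a side note on the AliveCor item, the correction it refers to can be written down in a few lines. The snippet below is just the textbook Bazett formula for QTc, offered as a purely illustrative sketch; it is not AliveCor’s AI.

```python
# Illustrative only: the standard Bazett formula for the heart-rate-corrected
# QT interval (QTc). This is not AliveCor's algorithm, just the textbook correction.
import math

def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
    """QTc = QT / sqrt(RR), with the RR interval in seconds (60 / heart rate)."""
    rr_seconds = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_seconds)

# A QT of 400 ms at 75 bpm corrects to roughly 447 ms.
print(round(qtc_bazett(400, 75)))
```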

Drug Discovery

While drug discovery has made many advances over the years, the process can still be slow and onerous. But AI can help out.

For example, a startup that is using AI to accelerate drug development is Gero Pte.  It has used the technology to better isolate compounds for COVID-19 by testing treatments that are already used in humans. 

“Mapping the virus genome has seemed to happen very quickly since the outbreak,” said Vadim Tabakman, who is the Director of technical evangelism at Nintex. “Leveraging that information with Machine Learning to explore different scenarios and learn from those results could be a game changer in finding a set of drugs to fight this type of outbreak. Since the world is more connected than ever, having different researchers, hospitals and countries, providing data into the datasets that get processed, could also speed up the results tremendously.”

AIOps: What You Need To Know

AIOps, which is a term that was coined by Gartner in 2017, is increasingly becoming a critical part of next-generation IT. “In a nutshell, AIOps is applying cognitive computing like AI and Machine learning techniques to improve IT operations,” said Adnan Masood, who is the Chief Architect of AI & Machine Learning at UST Global. “This is not to be confused with the entirely different discipline of MLOps, which focuses on the Machine learning operationalization pipeline. AIOps refers to the spectrum of AI capabilities used to address IT operations challenges–for example, detecting outliers and anomalies in the operations data, identifying recurring issues, and applying self-identified solutions to proactively resolve the problem, such as by restarting the application pool, increasing storage or compute, or resetting the password for a locked-out user.”
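
To make the anomaly-detection piece of that definition concrete, here is a minimal sketch that flags unusual spikes in synthetic server metrics with scikit-learn’s IsolationForest. The metric names, values and the “open a ticket” step are invented for the example; this is not any particular AIOps product.

```python
# A toy sketch of AIOps-style anomaly detection on operations metrics.
# All data here is synthetic; a real pipeline would stream metrics from
# monitoring systems and trigger automated remediation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate an hour of per-minute metrics: CPU %, memory %, request latency (ms).
metrics = np.column_stack([
    rng.normal(35, 5, 60),    # CPU hovers around 35%
    rng.normal(60, 4, 60),    # memory around 60%
    rng.normal(120, 15, 60),  # latency around 120 ms
])
# Inject a couple of incidents where CPU and latency spike together.
metrics[[20, 45]] = [[95, 72, 900], [88, 70, 750]]

model = IsolationForest(contamination=0.05, random_state=0)
labels = model.fit_predict(metrics)  # -1 marks an outlier

for minute, label in enumerate(labels):
    if label == -1:
        cpu, mem, latency = metrics[minute]
        # In a real AIOps pipeline this would open a ticket or trigger a runbook.
        print(f"minute {minute}: anomaly (cpu={cpu:.0f}%, mem={mem:.0f}%, latency={latency:.0f} ms)")
```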

The fact is that IT departments are often stretched and starved for resources. Traditional tools have usually been rule-based and inflexible, which has made it difficult to deal with the flood of new technologies.

“IT teams have adopted microservices, cloud providers, NoSQL databases, and various other engineering and architectural approaches to help support the demands their businesses are putting on them,” said Shekhar Vemuri, who is the CTO of Clairvoyant. “But in this rich, heterogeneous, distributed, complex world, it can be a challenge to stay on top of vast amounts of machine-generated data from all these monitoring, alerting and runtime systems.  It can get extremely difficult to understand the interactions between various systems and the impact they are having on cost, SLAs, outages etc.”

So with AIOps, there is the potential for achieving scale and efficiencies.  Such benefits can certainly move the needle for a company, especially as IT has become much more strategic.

“From our perspective, AIOps equips IT organizations with the tools to innovate and remain competitive in their industries, effectively managing infrastructure and empowering insights across increasingly complex hybrid and multi-cloud environments,” said Ross Ackerman, who is the NetApp Director of Analytics and Transformation. “This is accomplished through continuous risk assessments, predictive alerts, and automated case opening to help prevent problems before they occur. At NetApp, we’re benefiting from a continuously growing data lake that was established over a decade ago. It was initially used for reactive actions, but with the introduction of more advanced AI and ML, it has evolved to offer predictive and prescriptive insights and guidance. Ultimately, our capabilities have allowed us to save customers over two million hours of lost productivity due to avoided downtime.”

As with any new approach, though, AIOps does require much preparation, commitment and monitoring. Let’s face it, technologies like AI can be complex and finicky. 

“The algorithms can take time to learn the environment, so organizations should seek out those AIOps solutions that also include auto-discovery and automated dependency mapping as these capabilities provide out-of-the-box benefits in terms of root-cause diagnosis, infrastructure visualization, and ensuring CMDBs are accurate and up-to-date,” said Vijay Kurkal, who is the CEO of Resolve. “These capabilities offer immediate value and instantaneous visibility into what’s happening under the hood, with machine learning and AI providing increasing richness and insights over time.”

As a result, there should be a clear-cut framework when it comes to AIOps. Here’s what Appen’s Chief AI Evangelist Alyssa Simpson Rochwerger recommends: 

  • Clear ability to measure product success (business value outcomes)
  • Ability to measure and report on associated performance metrics such as accuracy, throughput, confidence and outcomes
  • Technical infrastructure to support—including but not limited to—model training, hosting, management, versioning and logging
  • Data set management, including traceability, data provenance and transparency
  • Low-confidence/fallback data handling (this could be a data annotation or other human-in-the-loop process, or a default when the AI system can’t handle a task or has a low-confidence output; a minimal sketch of this routing follows below)
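
As promised in the last item, here is one way the low-confidence fallback could look in code. The 0.80 threshold, the labels and the review-queue behavior are hypothetical; it is a sketch of the pattern, not anyone’s production system.

```python
# Toy sketch of low-confidence fallback handling (human-in-the-loop routing).
# The 0.80 threshold and the document labels are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

CONFIDENCE_THRESHOLD = 0.80

def route(prediction: Prediction, item_id: str) -> str:
    """Accept confident predictions automatically; send the rest to a person."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-accepted {item_id} as '{prediction.label}'"
    # Fallback path: queue the item for human annotation and log it for retraining.
    return f"sent {item_id} to human review (confidence={prediction.confidence:.2f})"

print(route(Prediction("invoice", 0.93), "doc-001"))
print(route(Prediction("invoice", 0.41), "doc-002"))
```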

All this requires a different mindset. It’s really about looking at things in terms of software application development. 

“Most enterprise businesses are struggling with a wall to production, and need to start realizing a return on their machine learning and AI investments,” said Santiago Giraldo, who is a Senior Product Marketing Manager at Cloudera. “The problem here is two-fold. One issue is related to technology: Businesses must have a complete platform that unifies everything from data management to data science to production. This includes robust functionalities for deploying, serving, monitoring, and governing models. The second issue is mindset: Organizations need to adopt a production mindset and approach machine learning and AI holistically in everything from data practices to how the business consumes and uses the resulting predictions.”

So yes, AIOps is still early days and there will be lots of trial-and-error. But this approach is likely to be essential.

“While the transformative promise of AI has yet to materialize in many parts of the business, AIOps offers a proven, pragmatic path to improved service quality,” said Dave Wright, who is the Chief Innovation Officer at ServiceNow. “And since it requires little overhead, it’s a great pilot for other AI initiatives that have the potential to transform a business.”

Coronavirus: Can AI (Artificial Intelligence) Make A Difference?

The mysterious coronavirus is spreading at an alarming rate. There have been at least 305 deaths, and more than 14,300 people have been infected.

On Thursday, the World Health Organization (WHO) declared the coronavirus a global emergency. To put things into perspective, it has already infected more people than the 2002-2003 outbreak of SARS (Severe Acute Respiratory Syndrome) in China. 

Many countries are working hard to quell the virus. There have been quarantines, lock-downs on major cities, limits on travel and accelerated research on vaccine development. 

However, could technologies like AI (Artificial Intelligence) help out? Well, interestingly enough, it already has.

Just look at BlueDot, which is a venture-backed startup. The company has built a sophisticated AI platform that processes billions of pieces of data, such as from the world’s air travel network, to identify outbreaks.

In the case of the coronavirus, BlueDot made its first alert on December 31st. This was ahead of the US Centers for Disease Control and Prevention, which made its own determination on January 6th.

BlueDot is the brainchild of Kamran Khan, who is an infectious disease physician and professor of Medicine and Public Health at the University of Toronto. Keep in mind that he was a frontline healthcare worker during the SARS outbreak. 

“We are currently using natural language processing (NLP) and machine learning (ML) to process vast amounts of unstructured text data, currently in 65 languages, to track outbreaks of over 100 different diseases, every 15 minutes around the clock,” said Khan. “If we did this work manually, we would probably need over a hundred people to do it well. These data analytics enable health experts to focus their time and energy on how to respond to infectious disease risks, rather than spending their time and energy gathering and organizing information.”
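
BlueDot’s platform is proprietary, so the sketch below is only a toy illustration of the general idea Khan describes: scanning unstructured news text for disease-related signals and flagging unusual clusters. The headlines, keyword list and alert threshold are all invented.

```python
# A toy illustration of mining news text for outbreak signals.
# The headlines, keyword list and alert threshold are made up for this sketch;
# a real system works across dozens of languages and far richer data sources.
from collections import Counter
import re

HEADLINES = [
    "Hospital in City A reports cluster of unexplained pneumonia cases",
    "City A officials investigate spike in severe respiratory illness",
    "Flu season winding down in City B, clinics report",
    "Market closure in City A after pneumonia outbreak rumours",
]
DISEASE_TERMS = {"pneumonia", "respiratory", "outbreak", "cluster"}
ALERT_THRESHOLD = 3

signals = Counter()
for headline in HEADLINES:
    words = set(re.findall(r"[a-z]+", headline.lower()))
    if words & DISEASE_TERMS:
        # Crude location extraction, purely for the toy example.
        match = re.search(r"City \w", headline)
        if match:
            signals[match.group()] += 1

for city, count in signals.items():
    if count >= ALERT_THRESHOLD:
        print(f"ALERT: {count} disease-related reports mentioning {city}")
```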

But of course, BlueDot will probably not be the only organization to successfully leverage AI to help curb the coronavirus. In fact, here’s a look at what we might see:

Colleen Greene, the GM of Healthcare at DataRobot:

“AI could predict the number of potential new cases by area and which types of populations will be at risk the most. This type of technology could be used to warn travelers so that vulnerable populations can wear proper medical masks while traveling.”

Vahid Behzadan, the Assistant Professor of Computer Science at the University of New Haven:

“AI can help with the enhancement of optimization strategies. For instance, Dr. Marzieh Soltanolkottabi’s  research is on the use of machine learning to evaluate and optimize strategies for social distancing (quarantine) between communities, cities, and countries to control the spread of epidemics. Also, my research group is collaborating with Dr. Soltanolkottabi in developing methods for enhancement of vaccination strategies leveraging recent advances in AI, particularly in reinforcement learning techniques.”
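
To illustrate the kind of strategy evaluation Behzadan mentions, here is a minimal SIR epidemic simulation with a single “contact reduction” knob standing in for a social-distancing policy. The parameter values are arbitrary illustrative numbers, not fitted to COVID-19, and this is not the research he refers to.

```python
# A minimal SIR epidemic simulation showing how a social-distancing "knob"
# changes the outbreak curve. Parameters are arbitrary illustrative values.
def simulate_sir(contact_reduction: float, days: int = 160,
                 population: int = 1_000_000, beta: float = 0.3, gamma: float = 0.1):
    s, i, r = population - 1.0, 1.0, 0.0
    effective_beta = beta * (1.0 - contact_reduction)
    peak_infected = 0.0
    for _ in range(days):
        new_infections = effective_beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak_infected = max(peak_infected, i)
    return peak_infected

for reduction in (0.0, 0.3, 0.5):
    peak = simulate_sir(reduction)
    print(f"contact reduction {reduction:.0%}: peak infections ~ {peak:,.0f}")
```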

Dr. Vincent Grasso, who is the IPsoft Global Practice Lead for Healthcare and Life Sciences:

“For example, when disease outbreaks occur, it is crucial to obtain clinical related information from patients and others involved such as physiological states before and after, logistical information concerning exposure sites, and other critical information. Deploying humans into these situations is costly and difficult, especially if there are multiple outbreaks or the outbreaks are located in countries lacking sufficient resources. Conversational computing working as an extension of humans attempting to get relevant information would be a welcome addition. Conversational computing is bidirectional—it can engage with a patient and gather information, or the reverse, provide information based upon plans that are either standardized or modified based on situational variations. In addition, engaging in a multilingual and multimodal manner further extends the conversational computing deliverable. In addition to this ‘front end’ benefit, the data that is being collected from multiple sources such as voice, text, medical devices, GPS, and many others, are beneficial as datapoints and can help us learn to combat a future outbreak more effectively.”

Steve Bennett, the Director of Global Government Practice at SAS and former Director of National Biosurveillance at the U.S. Department of Homeland Security:

“AI can help deal with the coronavirus in several ways. AI can predict hotspots around the world where the virus could make the jump from animals to humans (also called a zoonotic virus). This typically happens at exotic food markets without established health codes.  Once a known outbreak has been identified, health officials can use AI to predict how the virus is going to spread based on environmental conditions, access to healthcare, and the way it is transmitted. AI can also identify and find commonalities within localized outbreaks of the virus, or with micro-scale adverse health events that are out of the ordinary. The insights from these events can help answer many of the unknowns about the nature of the virus.

“Now, when it comes to finding a cure for coronavirus, creating antivirals and vaccines is a trial and error process. However, the medical community has successfully cultivated a number of vaccines for similar viruses in the past, so using AI to look at patterns from similar viruses and detect the attributes to look for in building a new vaccine gives doctors a higher probability of getting lucky than if they were to start building one from scratch.”

Don Woodlock, the VP of HealthShare at InterSystems:

“With ML approaches, we can read the tens of billions of data points and clinical documents in medical records and establish the connections to patients that do or do not have the virus. The ‘features’ of the patients that contract the disease pop out of the modeling process, which can then help us target patients that are higher risk.

“Similarly, ML approaches can automatically build a model or relationship between treatments documented in medical records and the eventual patient outcomes. These models can quickly identify treatment choices that are correlated to better outcomes and help guide the process of developing clinical guidelines.”
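
As a rough illustration of the approach Woodlock describes (not InterSystems’ actual models), the sketch below fits a simple classifier to synthetic patient records and reports which “features” carry the most weight. The fields and data are fabricated; real work would use de-identified clinical records and far more rigorous validation.

```python
# Toy sketch: relate patient "features" to an outcome and inspect the model.
# All records here are synthetic; the feature names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 2_000

age = rng.integers(20, 90, n)
chronic_conditions = rng.poisson(1.2, n)
days_since_exposure = rng.integers(0, 14, n)

# Synthetic ground truth: risk rises with age and chronic conditions.
logit = 0.04 * (age - 55) + 0.8 * chronic_conditions - 2.0
outcome = rng.random(n) < 1 / (1 + np.exp(-logit))

X = StandardScaler().fit_transform(np.column_stack([age, chronic_conditions, days_since_exposure]))
model = LogisticRegression().fit(X, outcome)

for name, coef in zip(["age", "chronic_conditions", "days_since_exposure"], model.coef_[0]):
    print(f"{name:>22}: weight {coef:+.2f}")
```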

Prasad Kothari, who is the VP Data Science and AI for The Smart Cube:

“The coronavirus can cause severe symptoms such as pneumonia, severe acute respiratory syndrome, kidney failure, etc. AI-empowered algorithms, such as genome-based neural networks already built for personalized treatment, can prove very helpful in managing these adverse events or symptoms caused by the coronavirus, especially since the effect of the virus depends on the immunity and genome structure of the individual, and no standard treatment can treat all symptoms and effects in the same way.

“In recent times, immunotherapy and gene therapy empowered through AI algorithms such as Boltzmann machines (entropy-based combinatorial neural networks) have shown stronger evidence of treating such diseases by stimulating the body’s immune system. For this reason, AbbVie’s Aluvia HIV drug is one possible treatment. If you look at the data of affected patients and profile the virus mechanics and cellular mechanisms affected by the coronavirus, there are some similarities in the biological pathways and treatment efficacy. But this is yet to be tested.”

CES: The Coolest AI (Artificial Intelligence) Announcements

As seen at this week’s CES 2020 mega conference, the buzz for AI continues to be intense. Here are just a few comments from the attendees:

  • Nichole Jordan, who is Grant Thornton’s Central region managing partner: “From AI-powered agriculture equipment to emotion-sensing technology, walking the exhibit floors at CES drives home the fact that artificial intelligence is no longer a vision of the future. It is here today and is clearly going to be more integrated into our world going forward.”
  • Derek Kennedy, the Senior Partner and Global Technology Leader at Boston Consulting Group: “AI is increasingly playing a role in every intelligent product, such as upscaling video signals for an 8K TV as well as every business process, like predicting consumer demand for a new product.”
  • Houman Haghighi, the Business Development Partner at Menlo Ventures: “Voice, natural language and predictive actions are continuing to become the new—and sometimes the only—user interface within the home, automobile, and workplace.”

So what were some of the standout announcements at CES? Well, given that there were over 4,500 exhibitors, this is a tough question to answer. But here are some innovations that certainly do show the power of AI:

Prosthetics: Using AI along with EMG technology, BrainCo has built a prosthetic arm that learns. In fact, it can allow people to play the piano or even do calligraphy. 

“This is an electronic device that allows you to control the movements of an artificial arm with the power of thought alone,” said Max Babych, who is the CEO of SpdLoad.

The cost for the prosthetic is quite affordable at about $10,000 (this is compared to about $100,000 for alternatives). 

SelfieType: One of the nagging frictions of smartphones is the keyboard. But Samsung has a solution: SelfieType. It leverages cameras and AI to create a virtual keyboard on a surface (such as a table) that learns from hand movements. 

“This was my favorite and simplest AI use case at CES,” said R. Mordecai, who is the Head of Innovation and Partnerships at INNOCEAN USA. “I wish I had it for the flight home so I could type this on the plane tray.”

Lululab’s Lumine: This is a smart mirror that is for skin care. Lumine uses deep learning to analyze six categories–wrinkles, pigment, redness, pores, sebum and trouble–and then recommends products to help.

Whisk: This is powered by AI to scan the contents of your fridge so as to think up creative dishes to cook (it is based on research from over 100 nutritionists, food scientists, engineers and retailers). Not only does this technology allow for a much better experience, but it should also help reduce food waste. Keep in mind that the average person throws away 238 pounds of food every year. 

Wiser: Developed by Schneider Electric, this is a small device that you install in your home’s circuit breaker box. With the use of machine learning, you can get real-time monitoring of usage by appliance, which can lead to money savings and optimization for a solar system.

Vital Signs Monitoring: The Binah.ai app analyzes a person’s face to get medical-grade insights, such as oxygen saturation, respiration rate, heart rate variability and mental stress. The company also plans to add monitoring for hemoglobin levels and blood pressure.

Neon: This is a virtual assistant that looks like a real person and can engage in intelligent conversation and show emotion. While still in the early stages, the technology is actually kind of scary. The creator of Neon, Samsung-backed Star Labs, thinks that it will replace doctors, lawyers and other white collar professionals. No doubt, this appears to be a recipe for wide-scale unemployment, not to mention a way to unleash a torrent of deepfakes!

AI (Artificial Intelligence): What’s The Next Frontier For Healthcare?

Perhaps one of the biggest opportunities for AI (Artificial Intelligence) is the healthcare industry. According to ReportLinker, spending on this category is forecasted to jump from $2.1 billion to $36.1 billion by 2025. This is a hefty 50.2% compound annual growth rate (CAGR).

So then what are some of the trends that look most interesting within healthcare AI? Well, to answer this question, I reached out to a variety of experts in the space.

Here’s a look: 

Ori Geva, who is the CEO of Medial EarlySign:

One of the key trends is the use of health AI to spur the transition of medicine from reactive to proactive care. Machine learning-based applications will preempt and prevent disease on a more personal level, rather than merely reacting to symptoms. Providers and payers will be better positioned to care for their patients’ needs with the tools to delay or prevent the onset of life-threatening conditions. Ultimately, patients will benefit from timely and personalized treatment to improve outcomes and potentially increase survival rates.

Dr. Gidi Stein, who is the CEO of MedAware:

In the next five years, consumers will gain more access to their health information than ever before via mobile electronic medical records (EMR) and health wearables. AI will facilitate turning this mountain of data into actionable health-related insights, promoting personalized health and optimizing care. This will empower patients to take the driving wheel of their own health, promote better patient-provider communication and facilitate high-end healthcare to under-privileged geographies.

Tim O’Malley, who is the President and Chief Growth Officer at EarlySense:

Today, there are millions of physiologic parameters which are extracted from a patient. I believe the next mega trend will be harnessing this AI-driven “Smart Data” to accurately predict and avoid adverse events for patients. The aggregate of this data will be used to formulate predictive analytics to be used across diverse patient populations across the continuum of care, which will provide truly personalized medicine.

Andrea Fiumicelli, who is the vice president and general manager of Healthcare and Life Sciences at DXC Technology:

Ultimately, AI and data analytics could prove to be the catalyst in addressing some of today’s most difficult-to-treat health conditions. By combining genomics with individual patient data from electronic health records and real-world evidence on patient behavior culled from wearables, social media and elsewhere, health care providers can harness the power of precision medicine to determine the most effective approaches for specific patients.

This brings tremendous potential to treating complex conditions such as depression. AI can offer insights into a wealth of data to determine the likelihood of depression—based on the patient’s age, gender, comorbidities, genomics, life style, environment, etc.—and can provide information about potential reactions before they occur, thus enabling clinicians to provide more effective treatment sooner.

Ruthie Davi, who is the vice president of Data Science at Acorn AI, a Medidata company:

One key advance to consider is the use of carefully curated datasets to form Synthetic Control Arms as a replacement for placebo in clinical trials. Recruiting patients for randomized control trials can be challenging, particularly in small patient populations. From the patient perspective, while an investigational drug can offer hope via a new treatment option, the possibility of being in a control arm can be a disincentive. Additionally, if patients discover they are in a control arm, they may drop out or elect to receive therapies outside of the trial protocol, threatening the validity and completion of the entire trial.

However, thanks to advances in advanced analytics and the vast amount of data available in life sciences today, we believe there is a real opportunity to transform the clinical trial process. By leveraging patient-level data from historical clinical trials from Medidata’s expansive clinical trial dataset, we can create a synthetic control arm (SCA) that precisely mimics the results of a traditional randomized control. In fact, in a recent non-small cell lung cancer case study, Medidata together with Friends of Cancer Research was successful in replicating the overall survival of the target randomized control with SCA. This is a game-changing effort that will enhance the clinical trial experience for patients and propel next generation therapies through clinical development.
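
Medidata’s SCA methodology is its own, but the underlying idea, matching historical patients to the trial population on baseline characteristics, can be sketched generically. The nearest-neighbor matching and made-up covariates below are a simple illustration of that idea, not Medidata’s approach.

```python
# Generic sketch of building a matched historical control group.
# This is NOT Medidata's SCA methodology; it is a simple nearest-neighbor
# match on two made-up baseline covariates (age, baseline severity score).
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Trial arm: 50 patients. Historical pool: 500 patients from past studies.
trial = np.column_stack([rng.normal(62, 8, 50), rng.normal(5.0, 1.2, 50)])
historical = np.column_stack([rng.normal(58, 12, 500), rng.normal(4.5, 1.5, 500)])

scaler = StandardScaler().fit(historical)
matcher = NearestNeighbors(n_neighbors=1).fit(scaler.transform(historical))
_, matched_idx = matcher.kneighbors(scaler.transform(trial))

synthetic_control = historical[matched_idx.ravel()]
print("trial arm mean (age, severity):      ", trial.mean(axis=0).round(1))
print("matched control mean (age, severity):", synthetic_control.mean(axis=0).round(1))
```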

Tesla’s AI Acquisition: A New Way For Autonomous Driving?

This week Tesla acquired DeepScale, which is a startup that focuses on developing computer vision technologies (the price of the deal was not disclosed). This appears to be part of the company’s focus on building an Uber-like service as well as building fully autonomous vehicles.

Founded in 2015, DeepScale has raised $15 million from investors like Point72, next47, Andy Bechtolsheim, Ali Partovi, and Jerry Yang. The founders include Forrest Iandola and Kurt Keutzer, both of whom hold PhDs. In fact, about a quarter of the engineering team holds a PhD, and the group has more than 30,000 academic citations.

“DeepScale is a great fit for Tesla because the company specializes in compressing neural nets to work in vehicles, and hooking them into perception systems with multiple data types,” said Chris Nicholson, who is the CEO and founder of Skymind. “That’s what Tesla needs to make progress in autonomous driving.”

Tesla has the advantage of an enormous database of vehicle information. So with added software expertise, the company should be able to accelerate its innovation. “If ‘data is the new oil’ then ‘AI models are the new Intellectual Property and barrier to entry,’” said Joel Vincent, who is the CMO of Zededa. “This is the dawn of a new age of competitive differentiation. AI models are useless without data and Tesla has an astounding amount of edge data.”

Now when it comes to autonomous driving, there are other major requirements–some of which may get little attention.

Just look at the use of energy. “Large models require more powerful processors and larger memory to run them in production,” said Dr. Sumit Gupta, who is the IBM Cognitive Systems VP of AI and HPC. “But vehicles have a limited energy budget, so the market is always trying to minimize the energy that the electronics in the car consume. This is what DeepScale is good at. The company invented an AI model called ‘SqueezeNet’ that requires a smaller memory footprint and also less CPU horsepower.”
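
SqueezeNet itself is openly published, and its key building block, the “Fire” module, shows where the savings Gupta mentions come from: a narrow 1x1 “squeeze” layer feeds a mix of 1x1 and 3x3 “expand” filters, sharply cutting parameters compared with plain 3x3 convolutions. Below is a rough PyTorch sketch of that module, simplified from the paper; it is not DeepScale’s production code.

```python
# A rough sketch of SqueezeNet's "Fire" module (Iandola et al.), simplified.
# The squeeze layer shrinks the channel count before the more expensive
# 3x3 filters, which is where most of the parameter savings come from.
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_channels, squeeze_channels, expand_channels):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, squeeze_channels, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_channels, expand_channels, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_channels, expand_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenate the cheap 1x1 path and the 3x3 path along the channel axis.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

module = Fire(in_channels=96, squeeze_channels=16, expand_channels=64)
params = sum(p.numel() for p in module.parameters())
out = module(torch.randn(1, 96, 55, 55))
print(out.shape, f"{params:,} parameters")  # far fewer than a plain 3x3 conv layer
```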

Keep in mind that the lower energy consumption will mean there will be more capacity for sensors for vision. “This should help make autonomous vehicles safer,” said Arjan Wijnveen, who is the CEO of CVEDIA. “Tesla seems certain that they don’t need LiDAR for effective computer vision, but there are lots of other types of sensors you could see on their vehicles in the future, and sometimes just placing a second camera facing another angle can improve the AI model.”

Not using LiDAR would be a big deal, which would mean a much lower cost per vehicle. “There are concerns about the deployment of LIDAR lasers in the public sphere,” said Gavin D. J. Harper, who is a Faraday Institution Research Fellow at the University of Birmingham. “Safety measures include limiting the power and exposure of lasers. There is also the concern about the potential for causing inadvertent harm to those nearby.”

So all in all, the DeepScale deal could move the needle for Tesla and represent a shift in the industry. That said, it is important to keep in mind that autonomous driving is still in its nascent stages (regardless of what Elon Musk boasts!). There remain many tough issues to work out, which could easily drag on because of regulatory processes.

“To get to full autonomy, you’re still going to need some major algorithmic improvements,” said Nicholson. “Some of the smartest people in the world are working on this, and it seems clear that we’ll get there, even if we don’t know when. In any case, companies like Tesla and Waymo have the right mix of talent, data, and cars on the road.”

What AI (Artificial Intelligence) Will Mean For The Cannabis Space

Just about every estimate shows that the cannabis industry will see strong long-term growth. Yet there are some major challenges–and they are more than just about changing existing laws and regulations.

But AI (Artificial Intelligence) is likely to be a big help. True, the industry has not been a big adopter of new technologies. However, this should change soon as investors pour billions of dollars into the space.

So how might AI impact things? Well, look at what the CEO and Director of CROP Corp, Michael Yorke, has to say: “AI in sensors and high-definition cameras can be used to keep track of and adjust multiple inputs in the growing environment, such as water level, pH level, temperature, humidity, nutrient feed, light spectrum and CO2 levels. Tracking and adjusting these inputs can make a major difference in the quantity and quality of cannabis that growers are able to produce. AI also helps automate trimming technology so that it is able to de-leaf buds, saving countless hours of manual labor. Similarly, it can be applied to automated planting equipment to increase the effectiveness and efficiency of planting. And AI can identify the sex of the plants, detect sick plants, heal or remove sick plants from the environment, and track the plant growth rate to be able to predict size and yield.”
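
To make Yorke’s description a bit more concrete, here is a toy, rule-based sketch of monitoring the kinds of inputs he lists. The setpoints, sensor names and readings are invented, and a real system would layer the learning and vision components he describes on top of something like this.

```python
# Toy sketch of monitoring grow-room inputs against target ranges.
# The setpoints and sensor readings below are invented for illustration;
# a real system would learn and adjust these targets over time.
TARGET_RANGES = {
    "temperature_c": (22.0, 27.0),
    "humidity_pct": (45.0, 60.0),
    "ph": (5.8, 6.3),
    "co2_ppm": (800.0, 1200.0),
}

def check_environment(readings: dict) -> list[str]:
    """Return adjustment suggestions for any reading outside its target range."""
    actions = []
    for sensor, value in readings.items():
        low, high = TARGET_RANGES[sensor]
        if value < low:
            actions.append(f"raise {sensor}: {value} is below {low}")
        elif value > high:
            actions.append(f"lower {sensor}: {value} is above {high}")
    return actions

sample = {"temperature_c": 29.5, "humidity_pct": 52.0, "ph": 5.5, "co2_ppm": 950.0}
for action in check_environment(sample):
    print(action)
```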

No doubt, such things could certainly move the needle in a big way. 

There are also opportunities to help with such things as more accurate predictions, which would allow for maximizing efficiency. And yes, AI is likely to be key in discovering new strains or customizing strains for specific effects (examples would include relaxation, excitement or increasing/decreasing hunger). The result could be even more growth in the cannabis market. 

But there is something else to keep in mind: With no legalization on a federal level in the US, there is a need for sophisticated tracking systems. 

“The existing regulations are complex, requiring businesses to follow detailed rules that govern every area of the industry from growing to packaging and selling to consumers,” said Mark Krytiuk, who is the president of Nabis Holdings. “Even the smallest error can cost a cannabis business thousands, and incur harsh punishments such as losing their cannabis license.”

The situation is even more complex with retail operations. “Artificial intelligence is one key technological advancement that could make a significant impact,” said Krytiuk. “By implementing this technology, cannabis retailers would be able to more easily track state-by-state regulations, and the constant changes that are being made. With this information, they would be able to properly package, ship, and sell products in a more compliant way that is less likely to be intercepted by government regulations.”

Keep in mind that the problems with compliance are a leading cause of failure for cannabis operators. “Running a cannabis business can be costly, especially when it comes to getting and keeping a license, paying high taxes, and dealing with the added pressure of ever-changing government regulations,” said Krytiuk. “If more cannabis businesses had access to automated, AI-powered technology that could help them be more compliant, there would be more successful companies helping the industry to grow.”

Again, the AI part of the cannabis industry is very much in the nascent stages. It will likely take some time to get meaningful traction. But for entrepreneurs, the opportunity does look promising. “The industry is only going to continue to grow, so it’s only a matter of time before it reaches its own technological revolution,” said Krytiuk.

AI (Artificial Intelligence) Words You Need To Know

In 1956, John McCarthy set up a ten-week research project at Dartmouth College that was focused on a new concept he called “artificial intelligence.” The event included many of the researchers who would become giants in the emerging field, like Marvin Minsky, Nathaniel Rochester, Allen Newell, O.G. Selfridge, Raymond Solomonoff, and Claude Shannon.

Yet the reaction to the phrase artificial intelligence was mixed. Did it really explain the technology? Was there a better way to word it?

Well, no one could come up with something better–and so AI stuck.

Since then, we’ve seen the coining of plenty of words in the category, which often define complex technologies and systems. The result is that it can be tough to understand what is being talked about.

So to help clarify things, let’s take a look at the AI words you need to know:

Algorithm

From Kurt Muehmel, who is a VP Sales Engineer at Dataiku:

A series of computations, from the most simple (long division using pencil and paper) to the most complex. For example, machine learning uses an algorithm to process data, discover rules that are hidden in the data, and encode them in a “model” that can then be used to make predictions on new data.
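
To stay with Muehmel’s pencil-and-paper example, here is long division written out as an explicit, step-by-step procedure, which is all an algorithm really is. A purely illustrative snippet.

```python
# Long division expressed as an explicit, step-by-step procedure (an algorithm).
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):          # bring down one digit at a time
        remainder = remainder * 10 + int(digit)
        quotient_digits.append(str(remainder // divisor))
        remainder = remainder % divisor  # carry the remainder to the next step
    return int("".join(quotient_digits)), remainder

print(long_division(1234, 7))  # (176, 2) -> 1234 = 7 * 176 + 2
```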

Machine Learning

From Dr. Hossein Rahnama, who is the co-founder and CEO of Flybits:

Traditional programming involves specifying a sequence of instructions that dictate to the computer exactly what to do. Machine learning, on the other hand, is a different programming paradigm wherein the engineer provides examples comprising what the expected output of the program should be for a given input. The machine learning system then explores the set of all possible computer programs in order to find the program that most closely generates the expected output for the corresponding input data. Thus, with this programming paradigm, the engineer does not need to figure out how to instruct the computer to accomplish a task, provided they have a sufficient number of examples for the system to identify the correct program in the search space.
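
A minimal sketch of Rahnama’s point: instead of writing the rule ourselves, we hand the system labeled examples and let it recover a small program (here, a tiny decision tree). The vehicle data and the “rule” it finds are made up for the example.

```python
# Instead of coding the rule "long and heavy means truck", we give the system
# labeled examples and let it recover that rule itself.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each example: (length in meters, weight in tonnes) -> vehicle type
X = [[3.8, 1.2], [4.2, 1.5], [4.5, 1.8], [9.5, 12.0], [12.0, 18.0], [16.0, 25.0]]
y = ["car", "car", "car", "truck", "truck", "truck"]

model = DecisionTreeClassifier(max_depth=1).fit(X, y)
print(export_text(model, feature_names=["length_m", "weight_t"]))
print(model.predict([[11.0, 15.0]]))  # -> ['truck']
```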

Neural Networks

From Dan Grimm, who is the VP and General Manager of Computer Vision at RealNetworks:

Neural networks are mathematical constructs that mimic the structure of the human brain to summarize complex information into simple, tangible results. Much like we train the human brain to, for example, learn to control our bodies in order to walk, these networks also need to be trained with significant amounts of data. Over the last five years, there have been tremendous advancements in the layering of these networks and the compute power available to train them.
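
As a minimal, trainable illustration of the layers and training Grimm mentions, the sketch below builds a small network with one hidden layer and trains it on a toy problem (XOR). The layer size and settings are arbitrary choices for the example.

```python
# A tiny neural network (one hidden layer of 16 units) learning XOR.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]] * 50   # repeat the four cases as "training data"
y = [0, 1, 1, 0] * 50

net = MLPClassifier(hidden_layer_sizes=(16,), activation="tanh",
                    solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict([[0, 0], [0, 1], [1, 0], [1, 1]]))  # should learn XOR: [0 1 1 0]
```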

Deep Learning

From Sheldon Fernandez, who is the CEO of DarwinAI:

Deep Learning is a specialized form of Machine Learning, based on neural networks that emulate the cognitive capabilities of the human mind. Deep Learning is to Machine Learning what Machine Learning is to AI–not the only manifestation of its parent, but generally the most powerful and eye-catching version. In practice, deep learning networks capable of performing sophisticated tasks are 1) many layers deep, with millions, sometimes billions, of inputs (hence the ‘deep’); and 2) trained using real-world examples until they become proficient at the prevailing task (hence the ‘learning’).
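
To show what “many layers deep” looks like mechanically, here is a small stack of layers in PyTorch. The layer widths are arbitrary, and a real deep network for a task like vision would be far larger and typically convolutional.

```python
# "Deep" simply means many layers stacked: each layer transforms the
# previous layer's output. Layer widths here are arbitrary.
import torch
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),          # e.g., scores for 10 possible classes
)

x = torch.randn(32, 128)        # a batch of 32 input examples
print(deep_net(x).shape)        # torch.Size([32, 10])
print(sum(p.numel() for p in deep_net.parameters()), "trainable parameters")
```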

Explainability

From Michael Beckley, who is the CTO and founder of Appian:

Explainability is knowing why AI rejects your credit card charge as fraud, denies your insurance claim, or confuses the side of a truck with a cloudy sky. Explainability is necessary to build trust and transparency into AI-powered software. The power and complexity of AI deep learning can make predictions and decisions difficult to explain to both customers and regulators. As our understanding of potential bias in data sets used to train AI algorithms grows, so does our need for greater explainability in our AI systems. To meet this challenge, enterprises can use tools like Low Code Platforms to put a human in the loop and govern how AI is used in important decisions.
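
One simple, widely used route to explainability is permutation importance: scramble one input at a time and see how much the model’s accuracy suffers. The sketch below applies it to a synthetic fraud-style dataset; the feature names and the “fraud rule” are invented.

```python
# Permutation importance: shuffle each feature and measure the accuracy drop.
# Features the model truly relies on cause a big drop when scrambled.
# The dataset and feature names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 3_000
amount = rng.exponential(100, n)
hour = rng.integers(0, 24, n)
foreign = rng.integers(0, 2, n)

# Synthetic "fraud" rule: large foreign transactions at night are suspicious.
fraud = (amount > 150) & (foreign == 1) & ((hour < 6) | (hour > 21))

X = np.column_stack([amount, hour, foreign])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, fraud)

result = permutation_importance(model, X, fraud, n_repeats=5, random_state=0)
for name, importance in zip(["amount", "hour", "foreign"], result.importances_mean):
    print(f"{name:>8}: {importance:.3f}")
```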

Supervised, Unsupervised and Reinforcement Learning

From Justin Silver, who is the manager of science & research at PROS:

There are three broad categories of machine learning: supervised, unsupervised, and reinforcement learning. In supervised learning, the machine observes a set of cases (think of “cases” as scenarios like “The weather is cold and rainy”) and their outcomes (for example, “John will go to the beach”) and learns rules with the goal of being able to predict the outcomes of unobserved cases (if, in the past, John usually has gone to the beach when it was cold and rainy, in the future the machine will predict that John will very likely go to the beach whenever the weather is cold and rainy). In unsupervised learning, the machine observes a set of cases, without observing any outcomes for these cases, and learns patterns that enable it to classify the cases into groups with similar characteristics (without any knowledge of whether John has gone to the beach, the machine learns that “The weather is cold and rainy” is similar to “It’s snowing” but not to “It’s hot outside”). In reinforcement learning, the machine takes actions towards achieving an objective, receives feedback on those actions, and learns through trial and error to take actions that lead to better fulfillment of that objective (if the machine is trying to help John avoid those cold and rainy beach days, it could give John suggestions over a period of time on whether to go to the beach, learn from John’s positive and negative feedback, and continue to update its suggestions).
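
Here is a compact toy version of all three categories, loosely following Silver’s beach example. The data and the suggestion loop are made-up stand-ins, not a real recommendation system.

```python
# Toy versions of the three categories, loosely following the beach example.
import random
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# 1) Supervised: observed weather -> observed outcome, then predict new cases.
#    John, oddly, goes to the beach when it is cold and rainy (as in the example above).
weather = [[5, 1], [7, 1], [28, 0], [31, 0], [6, 1], [30, 0]]   # [temp C, rainy?]
went_to_beach = [1, 1, 0, 0, 1, 0]
clf = DecisionTreeClassifier().fit(weather, went_to_beach)
print("supervised:", clf.predict([[8, 1], [29, 0]]))            # cold+rainy vs hot+dry

# 2) Unsupervised: no outcomes, just group similar weather together.
days = np.array([[4, 1], [6, 1], [5, 1], [29, 0], [31, 0], [30, 0]])
print("unsupervised clusters:", KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(days))

# 3) Reinforcement: try suggestions, get feedback, prefer what worked (epsilon-greedy).
random.seed(0)
reward_estimate = {"suggest_beach": 0.0, "suggest_stay_home": 0.0}
counts = {a: 0 for a in reward_estimate}
true_reward = {"suggest_beach": 0.2, "suggest_stay_home": 0.8}   # hidden from the agent

for _ in range(500):
    if random.random() < 0.1:                                    # explore occasionally
        action = random.choice(list(reward_estimate))
    else:                                                        # otherwise exploit
        action = max(reward_estimate, key=reward_estimate.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    reward_estimate[action] += (reward - reward_estimate[action]) / counts[action]

print("reinforcement estimates:", {a: round(v, 2) for a, v in reward_estimate.items()})
```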

Bias

From Mehul Patel, who is the CEO of Hired:

While you may think of machines as objective, fair and consistent, they often adopt the same unconscious biases as the humans who built them. That’s why it’s vital that companies recognize the importance of normalizing data—meaning adjusting values measured on different scales to a common scale—to ensure that human biases aren’t unintentionally introduced into the algorithm. Take hiring as an example: If you give a computer a data set with 100 female candidates and 300 male candidates and ask it to predict the best person for the job, it is going to surface more male candidates because the volume of men is three times the size of women in the data set. Building technology that is fair and equitable may be challenging but will ensure that the algorithms informing our decisions and insights are not perpetuating the very biases we are trying to undo as a society.
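
Patel’s hiring example can be made concrete with a few lines of code. The scores below are synthetic, and the “balanced shortlist” at the end is just one simple mitigation for this toy case, not a complete answer to fairness in hiring.

```python
# Toy version of the hiring example: every candidate is scored the same way,
# but the pool holds 300 men and 100 women, so a raw "top 20" skews male
# purely because of volume. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(3)
men = rng.normal(70, 10, 300)      # identical score distribution...
women = rng.normal(70, 10, 100)    # ...for both groups

scores = np.concatenate([men, women])
gender = np.array(["M"] * 300 + ["F"] * 100)

# Naive shortlist: take the 20 highest raw scores across the whole pool.
top = np.argsort(scores)[::-1][:20]
values, counts = np.unique(gender[top], return_counts=True)
print("naive shortlist:", {str(v): int(c) for v, c in zip(values, counts)})

# One simple mitigation for this toy case: compare candidates within each group,
# so sheer volume no longer drives who gets surfaced.
top_men = np.sort(men)[::-1][:10]
top_women = np.sort(women)[::-1][:10]
print("balanced shortlist: 10 men, 10 women "
      f"(score cutoffs {top_men.min():.1f} and {top_women.min():.1f})")
```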

Backpropagation

From Victoria Jones, who is the Zoho AI Evangelist:

Backpropagation algorithms allow a neural network to learn from its mistakes. The technology tracks an event backwards from the outcome to the prediction and analyzes the margin of error at different stages to adjust how it will make its next prediction. Around 70% of the features of our AI assistant (called Zia) use backpropagation, including Zoho Writer’s grammar-check engine and Zoho Notebook’s OCR technology, which lets Zia identify objects in images and make those images searchable. This technology also allows Zia’s chatbot to respond more accurately and naturally. The more a business uses Zia, the more Zia understands how that business is run. This means that Zia’s anomaly detection and forecasting capabilities become more accurate and personalized to any specific business.
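
Here is a bare-bones version of what Jones describes: a tiny two-layer network, written in plain numpy, that learns by propagating its output error backwards to every weight. The sizes, learning rate and task (XOR) are arbitrary, and this is obviously not Zoho’s implementation.

```python
# A tiny two-layer network trained with hand-written backpropagation on XOR.
# The error at the output is pushed backwards, layer by layer, to tell
# every weight how to change.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

for step in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: start from the output error and apply the chain rule.
    d_out = (out - y) * out * (1 - out)          # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)           # propagated back to the hidden layer

    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # close to [0, 1, 1, 0] after training
```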

NLP (Natural Language Processing)

From Courtney Napoles, who is the Language Data Manager at Grammarly:

The field of NLP brings together artificial intelligence, computer science, and linguistics with the goal of teaching machines to understand and process human language. NLP researchers and engineers build models for computers to perform a variety of language tasks, including machine translation, sentiment analysis, and writing enhancement. Researchers often begin with analysis of a text corpus—a huge collection of sentences organized and annotated in a way that AI algorithms can understand.

The problem of teaching machines to understand human language—which is extraordinarily creative and complex—dates back to the advent of artificial intelligence itself. Language has evolved over the course of millennia, and devising methods to apprehend this intimate facet of human culture is NLP’s particularly challenging task, requiring astonishing levels of dexterity, precision, and discernment. As AI approaches—particularly machine learning and the subset of ML known as deep learning—have developed over the last several years, NLP has entered a thrilling period of new possibilities for analyzing language at an unprecedented scale and building tools that can engage with a level of expressive intricacy unimaginable even as recently as a decade ago.
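
As a tiny taste of one NLP task Napoles mentions, sentiment analysis, the sketch below turns sentences into word counts and fits a classifier. The handful of training sentences are made up; real systems train on large corpora.

```python
# A miniature sentiment classifier: bag-of-words counts + Naive Bayes.
# The training sentences are made up; real systems train on large corpora.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "I love this product, it works wonderfully",
    "Fantastic support and a great experience",
    "Absolutely delightful to use",
    "This is terrible and a waste of money",
    "Awful experience, it broke after one day",
    "I hate how confusing this is",
]
train_labels = ["positive", "positive", "positive", "negative", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["what a great little device, I love it",
                     "this was an awful, terrible purchase"]))
```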

How Screenwriting Can Boost AI (Artificial Intelligence)

Scott Ganz, who is the Principal Content Designer at Intuit, did not get into the tech game the typical way. Keep in mind that he was a screenwriter, such as for WordGirl (winning an Emmy for his work) and The Muppets. “I’ve always considered myself a comedy writer at heart, which generally involves a lot of pointing out what’s wrong with the world to no one in particular and being unemployed,” he said.

Yet his skills have proven quite effective and valuable, as he has helped create AI chatbots for brands like Mattel and Call of Duty. And yes, as of now, he works on the QuickBooks Assistant, a text-based conversational AI platform. “I partner with our engineering and design teams to create an artificial personality that can dynamically interact with our customers, and anticipate their questions,” he said. “It also involves a lot of empathy. It’s one thing to anticipate what someone might ask about their finances. You also have to anticipate how they feel about it.”

So what are his takeaways? Well, he certainly has plenty. Let’s take a look:

Where is chatbot technology right now? Trends you see in the coming years?

It’s a really exciting time for chatbots and conversational user interfaces in general. They’re becoming much more sophisticated than the rudimentary “press 1 for yes” types of systems that we used to deal with… usually by screaming “TALK TO A HUMAN!” The technology is evolving quickly, and so are the rules and best practices. With those two things still in flux, the tech side and the design side can constantly disrupt each other.

Using natural language processing, we’re finding opportunities to make our technology more humanized and relatable, so people can interact naturally and make better informed decisions about their money. For example, we analyze our customers’ utterances to not only process their requests, but also to understand the intent behind their requests, so we can get them information faster and with greater ease. We really take “natural language processing” seriously. Intuit works really hard to speak its customers’ language. That’s extra important to us. You can’t expect people to say “accounts receivable.” It’s “who owes me money?” As chatbots become more and more prevalent in our daily lives, it’s important that we’re aware of the types of artificial beings we’re inviting into our living rooms, cars, and offices. I’m excited to see the industry experiment with gender-neutral chatbots, where we can both break out of the female assistant trope and also make the world a little more aware of non-binary individuals.

Explain your ideas about screenwriting/improv/chatbots?

When I joined the WordGirl writing team, my first task was to read the show “bible” that the creators had written. Yes, “bible” is the term they use. WordGirl didn’t have a central office or writers’ room. We met up once a year to pitch ideas and, after that, we worked remotely. Therefore, the show bible was essential when it came to getting us all on the same page creatively.

Once I got to Intuit, I used those learnings to create a relationship bible to make sure everyone was on the same page about our bot’s personality and its relationship with our customers. Right now my friend Nicholas Pelczar and I are the only two conversation designers at QuickBooks, but we don’t want it to stay that way. As the team expands, documents like this become even more important.

As a side-note, I originally called the document, “the character bible,” but after my team had a coaching session with Scott Cook, who is the co-founder of Intuit, I felt that the document wasn’t customer-focused enough. I then renamed it the “relationship bible” and redid the sections on the bot’s wants, needs, and fears so that they started with the customer’s wants, needs, and fears. Having established those, I then spelled out exactly why the bot wants and fears those same things.

I also got to apply my improv experience to make the bot more personable. Establishing the relationship is the best first thing you can do in an improv scene. The Upright Citizens Brigade is all about this. It increases your odds of having your scene make sense. Also, comedy is hard, and I wanted to give other content designers some easy recipes to write jokes. Therefore, the last section of the relationship bible has some guidance on the kinds of jokes the bot would and wouldn’t/shouldn’t make, as well as some comedy tropes designers can use.

What are some of the best practices? And the gotchas?

On a tactical level, conversation design hinges on communicating clearly with people so that the conversation itself doesn’t slip off the rails. When this happens in human conversation, people are incredibly adept at getting things back on track. Bots are much less nuanced and flexible, and they really struggle with this. Therefore, designers need to be incredibly precise both in how they ask and answer questions. It’s really easy to mislead people about what the bot is asking or what it’s capable of answering.

Beyond that, it’s really important to track the emotional undercurrents of the conversation. It may be an AI, but it’s dealing with people (and all of their emotions). AI has its strengths, but design requires empathy, and this is absolutely the case when you’re engaging with people in conversation. Our emotions are a huge part of what we’re communicating, and true understanding requires that we understand that side of it as well. Even though a chatbot is not a person, it is made by real people, expressing real care, in a way that makes it available to everyone, all the time. It’s important to keep this in mind when working on a chatbot, so that the voice remains authentic.

One of the biggest things we need to take into account is ensuring the chatbot is a champion for the company, but still remains a little bit separated from the company. Since chatbots still make mistakes, it’s important to take steps to protect the company’s reputation. This plays out in creating strict guidelines around the use of “I” vs. “We.” When the chatbot messes up, it uses “I,” and only uses “we” in moments when it is working in conjunction with humans at the company. By using these two separate phrases, customers are able to better understand what is the work of the chatbot vs. the company. Essentially, we think of QuickBooks Assistant as a kind of digital employee.

Bias: The Silent Killer Of AI (Artificial Intelligence)

When it comes to AI (Artificial Intelligence), there’s usually a major focus on using large datasets, which allow for the training of models. But there’s a nagging issue: bias. What may seem like a robust dataset could instead be highly skewed, such as in terms of race, wealth and gender.

Then what can be done? Well, to help answer this question, I reached out to Dr. Rebecca Parsons, who is the Chief Technology Officer of ThoughtWorks, a global technology company with over 6,000 employees in 14 countries. She has a strong background in both the business and academic worlds of AI.

So here’s a look at what she has to say about bias:

Can you share some real-life examples of bias in AI systems and explain how it gets there?

It’s a common misconception that the developers who are responsible for infusing bias in AI systems are either prejudiced or acting out of malice—but in reality, bias is more often unintentional and unconscious in nature. AI systems and algorithms are created by people with their own experiences, backgrounds and blind spots which can unfortunately lead to the development of fundamentally biased systems. The problem is compounded by the fact that the teams responsible for the development, training, and deployment of AI systems are largely not representative of the society at large. According to a recent research report from NYU, women comprise only 10% of AI research staff at Google and only 2.5% of Google’s workforce is black. This lack of representation is what leads to biased datasets and ultimately algorithms that are much more likely to perpetuate systemic biases.

One example that demonstrates this point well is the use of voice assistants like Siri or Alexa that are trained on huge databases of recorded speech, databases that are unfortunately dominated by speech from white, upper-middle-class Americans, making it challenging for the technology to understand commands from people outside that category. Additionally, studies have shown that algorithms trained on historically biased data have significant error rates for communities of color, especially in over-predicting the likelihood that convicted criminals will reoffend, which can have serious implications for the justice system.

How do you detect bias in AI and guard against it?

The best way to detect bias in AI is by cross-checking the algorithm you are using to see if there are patterns that you did not necessarily intend. Correlation does not always mean causation, and it is important to identify patterns that are not relevant so you can amend your dataset. One way you can test for this is by checking if there is any under- or overrepresentation in your data. If you detect a bias in your testing, then you must overcome it by adding more information to supplement that underrepresentation.
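
Parsons’ first check, looking for under- or overrepresentation and for uneven error rates, is straightforward to put into code. Here is a minimal sketch on a fabricated dataset; the group labels and predictions are invented purely to show the two checks.

```python
# Minimal representation and error-rate check on a fabricated dataset.
import pandas as pd

data = pd.DataFrame({
    "group":     ["A"] * 800 + ["B"] * 200,
    "actual":    [1, 0] * 400 + [1, 0] * 100,
    "predicted": [1, 0] * 380 + [1, 1] * 20 + [1, 0] * 70 + [0, 0] * 30,
})

# 1) Is any group under- or overrepresented relative to what we expect?
print(data["group"].value_counts(normalize=True))

# 2) Does the model make more mistakes for one group than another?
data["error"] = data["actual"] != data["predicted"]
print(data.groupby("group")["error"].mean())
```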

The Algorithmic Justice League has also done interesting work on how to correct biased algorithms. They ran tests on facial recognition programs to see if they could accurately determine race and gender. Interestingly, lighter-skinned males were almost always correctly identified, while 35% of darker-skinned females were misidentified. The reason? One of the most widely used facial-recognition data training sets was estimated to be more than 75% male and more than 80% white. Because the categorization had far more definitive and distinguishing categories for white males than it did for any other race, it was biased in correctly identifying these individuals over others. In this instance, the fix was quite easy. Programmers added more faces to the training data and the results quickly improved.

While AI systems can get quite a lot right, humans are the only ones who can look back at a set of decisions and determine whether there are any gaps in the datasets or oversight that led to a mistake.  This exact issue was documented in a study where a hospital was using machine learning to predict the risk of death from pneumonia. The algorithm came to the conclusion that patients with asthma were less likely to die from pneumonia than patients without asthma. Based off this data, hospitals could decide that it was less critical to hospitalize patients with both pneumonia and asthma, given the patients appeared to have a higher likelihood of recovery. However, the algorithm overlooked another important insight, which is that those patients with asthma typically receive faster and more intensive care than other patients, which is why they have a lower mortality rate connected to pneumonia. Had the hospital blindly trusted the algorithm, they may have incorrectly assumed that it’s less critical to hospitalize asthmatics, when in reality they actually require even more intensive care.

What could be the consequences to the AI industry if bias is not dealt with properly?

As detailed in the asthma example, if biases in AI are not properly identified, the difference can quite literally be life and death. The use of AI in areas like criminal justice can also have devastating consequences if left unchecked. Another less-talked about consequence is the potential of more regulation and lawsuits surrounding the AI industry. Real conversations must be had around who is liable if something goes terribly wrong. For instance, is it the doctor who relies on the AI system that made the decision resulting in a patient’s death, or the hospital that employs the doctor? Is it the AI programmer who created the algorithm, or the company that employs the programmer?

Additionally, the “witness” in many of these incidents cannot even be cross-examined since it’s often the algorithm itself. And to make things even more complicated, many in the industry are taking the position that algorithms are intellectual property, therefore limiting the court’s ability to question programmers or attempt to reverse-engineer the program to find out what went wrong in the first place. These are all important discussions that must be had as AI continues to transform the world we live in.

If we allow this incredible technology to continue to advance but fail to address questions around biases, our society will undoubtedly face a variety of serious moral, legal, practical and social consequences. It’s important we act now to mitigate the spread of biased or inaccurate technologies.