Tecton.ai Snags $20 Million To Solve AI’s Data Problem

While the COVID-19 pandemic has put a damper on many venture deals, Tecton.ai has been able to buck the trend. This week the company announced a $20 million investment from Andreessen Horowitz and Sequoia (last year there was a $5 million angel round).

Tecton is a platform that morphs raw data into AI models that can be successfully deployed. And yes, this is far from a trivial process.

“The foundational success of an AI-based technology revolution or even the build of a very simple algorithm ultimately lies in the health of the data,” said Kim Kaluba, who is the Senior Manager for Data Management Solutions at SAS. “However, in survey after survey organizations continue to report problems with accessing, preparing, cleansing and managing data, ultimately stalling the development of trustworthy and transparent analytical models.”

Consider that data wrangling is often the most time-consuming and expensive part of the AI process. “Some data scientists report spending 80% of their time collecting and cleaning data,” said Jen Snell, who is the Vice President of Product Marketing and Intelligent Self Service at Verint. “This problem has become so ubiquitous that it’s now called the ‘80/20 rule’ of data science.”
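To make the 80/20 point concrete, here is a minimal, hypothetical pandas sketch of the kind of cleanup that consumes those hours (the file name, columns and rules are all invented for illustration):

```python
import pandas as pd

# Hypothetical raw export; the file and column names are invented for illustration.
df = pd.read_csv("transactions.csv")

# Typical wrangling steps that eat most of a project's hours:
df = df.drop_duplicates()                                    # remove duplicate rows
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # fix mis-typed numbers
df["merchant"] = df["merchant"].str.strip().str.lower()      # normalize messy strings
df["date"] = pd.to_datetime(df["date"], errors="coerce")     # parse inconsistent dates
df = df.dropna(subset=["amount", "date"])                    # drop unusable records

print(f"{len(df)} clean rows ready for modeling")
```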

Regarding Tecton, the technology is the result of deep experience of its three founders, who helped build the AI platform for Uber (called Michelangelo). “When we got to Uber, everything was breaking because of the extreme growth,” said Mike Del Balso, who is the CEO and co-founder of Tecton. “Data was spread across silos and there were challenges with the deployment of models. With Michelangelo, we made an end-to-end platform that was targeted for the average data science person. We didn’t want to create huge engineering teams. We also built Michelangelo with the focus on production, collaboration, visibility and reusability.”

Within a couple of years, the platform would lead to the development of thousands of AI models, powering capabilities such as ETA predictions and safety and fraud scores. The result was more sustainable growth and stronger competitive advantages for Uber.

Why Is Data So Complicated?

Data is actually fairly simple. It’s just a string of numbers, right?

This is true. But data does present many tough challenges for enterprises, even for some of the most advanced technology companies.

“Oftentimes the data that we receive is ‘dirty,’” said Melissa McSherry, who is the SVP Global Head of Credit and Data Products at Visa. “Think about your credit card statement. The merchant names are sometimes unrecognizable—that has to do with the way merchants are set up in the system. When we clean up the data, we can often generate amazing insight. But that is significant work. Oftentimes organizations don’t understand how much work is required and are disappointed in what it takes to actually get results.”

Another issue with data is organizational. “Enterprises enforce data security and governance policies that weren’t designed to feed data science teams with a steady stream of up-to-date, granular business data,” said Bethann Noble, who is the Senior Director of Product Marketing and Machine Learning at Cloudera. “As data science teams start new projects with different stakeholders, they have to solve for data access once again, which could mean a different journey through a different bureaucratic maze every time. And the necessary data can be anywhere, in any form—residing across different data centers, cloud platforms, or edge devices. It needs to be moved and pre-processed to be ready for machine learning, which can involve complex analytical pipelines across physical and organizational silos.”

Keep in mind that the data problem is only getting more complicated. Based on research from IDC, the total amount of global data will reach 175 zettabytes by 2025, up from 33 zettabytes in 2018 (a zettabyte is 10 to the 21st power, or 1 sextillion, bytes).

“In this digital age, we are suffering from ‘InfoObesity’—gorging ourselves on an inconsumable amount of data that is not just unwieldy but can become dysfunctional, especially as we increase the amount of data we collect without scaling our ability to support, filter and manage it,” said Michael Ringman, who is the CIO of TELUS International. “While investing in Big Data is easy, efficient and effective use of it has become difficult.”

Oh, and then there are the privacy and security issues. “Given the mass amounts of data used for complex algorithms, data science platforms can be hot targets for data breaches,” said Ross Ackerman, who is the Director of Analytics and Transformation at NetApp. “Often, the most important data for algorithms contain or can be mapped to CII (Customer Identifiable Information) or PII (Personal Identifiable Information).”

Tecton’s Capabilities

For enterprise AI applications, there are really two main approaches. First, there are analytical models, which provide insights like forecasted churn rates. These types of applications do not need real-time data.

Next, there are operational models. These are embedded in a company’s product, such as a mobile app. They need highly sophisticated data systems and scale. “This is where you can create magical experiences,” said Del Balso.

For the most part, Tecton is about operational models, which are essentially the most demanding–but can provide the most benefits. “It’s high stakes,” said Del Balso.

Tecton is built to streamline the data pipeline, which means that data scientists can spend more time on building effective models. An essential part of this is a feature store that allows for a seamless handoff between data scientists and data engineers. Tecton, of course, has other cutting-edge features–and the funding should accelerate the innovation (the platform is currently in private beta).
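To give a flavor of the feature-store pattern–define a feature once, then reuse the same definition for training and serving–here is a toy sketch. To be clear, this is not Tecton’s actual API (the platform is still in private beta); the class and feature names are invented:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

import pandas as pd

@dataclass
class Feature:
    name: str
    transform: Callable[[pd.DataFrame], pd.Series]  # raw data -> feature values

class FeatureStore:
    """Toy feature store: a shared registry of feature definitions."""
    def __init__(self) -> None:
        self._features: Dict[str, Feature] = {}

    def register(self, feature: Feature) -> None:
        # A data scientist defines the feature once; engineers reuse it as-is.
        self._features[feature.name] = feature

    def build_frame(self, raw: pd.DataFrame, names: List[str]) -> pd.DataFrame:
        # The same transforms generate training data offline and serving data
        # online, which avoids training/serving skew.
        return pd.DataFrame({n: self._features[n].transform(raw) for n in names})

store = FeatureStore()
store.register(Feature("trip_distance_km", lambda df: df["distance_m"] / 1000))
store.register(Feature("is_rush_hour", lambda df: df["hour"].between(7, 9)))
```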

“For decades, companies have worked to develop technology, knowledge, skills and infrastructure to handle and harvest unstructured data in pursuit of unlocking answers to the most difficult questions,” said Michal Siwinski, who is a Corporate VP at Cadence Design Systems. “However, there’s more work to be done. Because the technology is still continuing to evolve, data is a virtually untapped resource with only as high as 4% of today’s data being analyzed.”

AI (Artificial Intelligence) Projects: Where To Start?

Artificial Intelligence (AI) is clearly a must-have when it comes to being competitive in today’s markets. But implementing this technology has been challenging, even for some of the world’s top companies. There are issues with data, finding the right talent and creating models that generate sufficient ROI.

As a result, many AI projects fail. According to IDC, only about 35% of organizations succeed in getting models into production.

“While we see AI technologies performing a swath of incredible feats such as Google Translate, AlphaGo, and solving a Rubik’s Cube, it can be hard to tell which business problems AI is apt to solve,” said Ankur Goyal, who is the CEO at Impira. “This has led to a lot of confusion—and a vendor community that has taken advantage of it by labeling things as AI when they aren’t. It’s very reminiscent of early last decade when cloud technologies took off and we had a lot of cloud washing going on. We had vendors marketing themselves as cloud players when their offerings were vaporware. Similarly, we are going through a period of AI washing now.”

So then, if your company is thinking of implementing AI, what is the best way to start? How can you help boost the odds of success and avoid the pitfalls?

Here’s a look at some strategies:

Beware Of The Hype

AI is not magic. It will not solve all your company’s problems.  Rather, you need to take a realistic approach to the technology.

“Unlike traditional data analytics, machine learning (ML) models that power AI are not always going to offer clear-cut answers,” said Santiago Giraldo, who is the Senior Product Marketing Manager of Data Engineering at Cloudera. “Implementing AI into the business requires experimentation and an understanding that not every experiment is going to drive ROI. When an AI project is successful, it is often built on top of many failed data science experiments. Taking a portfolio approach to ML and AI enables greater longevity in projects and the ability to build on the successes more effectively in the future.”

Interestingly enough, there are often situations when the technology is really just overkill!

“Oftentimes businesses take on AI projects without realizing that it might have been cheaper to continue a process manually instead of investing large amounts of time and money into building a system that doesn’t save the company time or money,” said Gus Walker, who is the Senior Director of Product Management at Veritone.

Governance First

You don’t want to spend time and money on a project and then realize there are legal or compliance restrictions. This could easily mean having to abandon the effort.

“First, customer data should not be used without permission,” said Debu Chatterjee, who is the senior director of platform AI engineering at ServiceNow. “Secondly, bias from data should be mitigated. Any model which is a black box and cannot be tested through APIs for bias should be avoided. The risk of bias is present in nearly any AI model, even in an algorithmic decision, regardless of whether the algorithm was learned from data or written by humans.”
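As a rough illustration of the kind of bias testing Chatterjee describes, here is a hypothetical probe: score held-out cases through the model and compare outcome rates across groups. The data and the 0.8 threshold (the common “four-fifths” rule of thumb) are assumptions for the sketch:

```python
import pandas as pd

def disparate_impact(preds: pd.DataFrame) -> float:
    # Ratio of the lowest group's positive-outcome rate to the highest group's.
    rates = preds.groupby("group")["approved"].mean()
    return rates.min() / rates.max()

# Hypothetical outputs gathered by calling the model's prediction API.
preds = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact(preds)
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Potential bias: impact ratio {ratio:.2f} is below 0.8")
```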

Identify the Problem To Be Solved

In the early phases of an AI project, there should be lots of brainstorming. This should also involve a cross-section of people in the organization, which will help with buy-in. The goal is to identify a business problem to be solved. 

“For many companies, the problem is that they start with a need for technology, and not with an actual business need,” said Colin Priest, who is the VP of AI Strategy at DataRobot. “It reminds me of this famous quote from Steve Jobs, ‘You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to sell it.’”

The problem to be solved should also be specific–that is, something that can be measured–and narrow. Don’t boil the ocean.

“It is the small steps that count,” said Mike Brooks, who is the Senior Director of APM Consulting at Aspen Technology. “Do not make the mistake of trying to make AI work for everything, all at once. After analyzing value for each AI initiative, the real benefits come when it solves a very specific goal.”

Costs

While it is important to estimate the ROI of a project, too little attention is often paid to the cost side of the equation. This can lead to disappointment. After all, it is never fun to be over budget on a corporate initiative.

“Companies looking to implement an AI project should start by looking at the cost of the operation and doing an analysis on how that cost structure compares to best practices,” said Jerry Kurtz, who is the EVP and Head of I&D at Capgemini North America. “The cost of storing and transforming data is typically 70% of the budget, and only brings 10% of the value. Being able to leverage AI to solve business problems is only 30% of the cost, and brings 90% of the value. If an organization can reduce data costs and improve data quality, they’ll have more budget to put toward leveraging AI to solve those business problems, like improving productivity and efficiency.”

Buy-In

Implementing AI can be wrenching for an organization.  Employees may be skeptical of the technology and could fear for their jobs. 

This is why there needs to be a focus on getting buy-in, which means clear communication of the benefits. It should also involve a commitment from the C-Suite. Consider that—according to a recent survey from O’Reilly—the biggest bottleneck for AI is an unsupportive culture.

“For AI to succeed you must have the buy-in of your workforce and the right employee upskilling programs,” said Anand Rao, who is the Global AI Lead at PwC. “You can’t simply offer AI training courses to employees; you need to go further and offer both immediate opportunities and incentives to apply what they’ve learned. Furthermore, business stakeholders and end-users—not just the tech staff—need to be included from the beginning of any project. If they’re not brought in at the start, your organization risks building a solution that does not work for the people who will be using it.”

AI (Artificial Intelligence) Companies That Are Combating The COVID-19 Pandemic

AI (Artificial Intelligence) has a long history, going back to the 1950s when the computer industry started.  It’s interesting to note that much of the innovation came from government programs, not private industry. This was all about how to leverage technologies to fight the Cold War and put a man on the moon. 

The impact of these programs would certainly be far-reaching. They would lead to the creation of the Internet and the PC revolution.

So fast forward to today: Could the COVID-19 pandemic have a similar impact? Might it be our generation’s Space Race?

I think so. And of course, it’s not just the US this time. This is a worldwide effort.

The Catalysts

Wide-scale availability of data will be key. The White House Office of Science and Technology Policy has helped form the COVID-19 Open Research Dataset (CORD-19), which has over 24,000 papers and is constantly being updated. The effort has the support of the National Library of Medicine (NLM), the National Institutes of Health (NIH), Microsoft and the Allen Institute for Artificial Intelligence.

“This database helps scientists and doctors create personalized, curated lists of articles that might help them, and allows data scientists to apply text mining to sift through this prohibitive volume of information efficiently with state-of-the-art AI methods,” said Noah Giansiracusa, who is an Assistant Professor at Bentley University.
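As a tiny sketch of the kind of text mining Giansiracusa describes, here is a TF-IDF search that ranks papers against a researcher’s query. The three “abstracts” are invented stand-ins for the actual CORD-19 corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-ins for paper abstracts in the CORD-19 dataset.
abstracts = [
    "Transmission dynamics of a novel coronavirus in Wuhan",
    "Efficacy of antiviral compounds against SARS coronavirus",
    "Deep learning for pneumonia detection in chest radiographs",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(abstracts)

query = vectorizer.transform(["antiviral treatment for coronavirus"])
scores = cosine_similarity(query, doc_matrix).ravel()

# Print papers from most to least relevant.
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {abstracts[idx]}")
```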

Yet there needs to be an organized effort to galvanize AI experts to action. The good news is that there are already groups emerging. For example, there is the C3.ai Digital Transformation Institute, which is a new consortium of research universities, C3.ai (a top AI company) and Microsoft. The organization will be focused on using AI to fight pandemics. 

There are even competitions being set up to stir innovation. One is Kaggle’s COVID-19 Open Research Dataset Challenge, which is a collaboration with the NIH and White House. This will be about leveraging Kaggle’s community of more than 4 million data scientists. The first contest was to help provide better forecasts of the spread of COVID-19 across the world. 

Next, the Decentralized Artificial Intelligence Alliance, which is led by SingularityNET, is putting together an AI hackathon to fight the pandemic. The organization has more than 50 companies, labs and nonprofits.

And then there is MIT Solve, which is a marketplace for social impact innovation. It has established the Global Health Security & Pandemics Challenge. In fact, a member of this organization, Ada Health, has developed an AI-powered COVID-19 personalized screening test. 

Free AI Tools

AI tools and infrastructure services can be costly. This is especially the case for models that target complex areas like medical research.

But AI companies have stepped up—that is, by eliminating their fees:

  • NVIDIA is providing a free 90-day license for Parabricks, which allows for using AI for genomics purposes. Consider that the technology can significantly cut down processing time. The program also involves free support from Oracle Cloud Infrastructure and Core Scientific (a provider of NVIDIA DGX systems and NetApp cloud-connected storage).
  • DataRobot is offering its platform for no charge. This allows for the deployment, monitoring and management of AI models at scale. The technology is also provided to the Kaggle competition. 
  • Run:AI is offering its software for free to help with building virtualization layers for deep learning models. 
  • DarwinAI has collaborated with the University of Waterloo’s VIP Lab to develop COVID-Net. This is a convolutional neural network that detects COVID-19 using chest radiography. DarwinAI is also making this technology open source (a simplified sketch of the general approach follows this list).
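To show what a convolutional classifier for chest radiographs looks like in code, here is a minimal Keras sketch. This is not the published COVID-Net architecture (DarwinAI has open-sourced that separately); it is a toy model of the general approach, with the input size and class labels assumed for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A toy convolutional network for chest X-rays; NOT the COVID-Net architecture.
model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),        # one grayscale radiograph
    layers.Conv2D(16, 3, activation="relu"),  # learn local image patterns
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),    # normal / pneumonia / COVID-19
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # requires a labeled dataset
```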

Patient Care

Patient care is an area where AI could be essential. An example of this is Biofourmis. In a two-week period, this startup created a remote monitoring system that pairs a biosensor worn on a patient’s arm with an AI application to help with diagnosis. In this way, it can help reduce infection risks for doctors and medical support personnel. Keep in mind that–in China–about 29% of COVID-19 deaths were healthcare workers. 

Another promising innovation to help patients is from Vital. The founders are Aaron Patzer, who is the creator of Mint.com, and Justin Schrager, an ER doc. Their company uses AI and NLP (Natural Language Processing) to manage overloaded hospitals. 

Vital is now devoting all its resources to creating C19check.com. The app, which was built in partnership with the Emory Department of Emergency Medicine’s Health DesignED Center and the Emory Office of Critical Event Preparedness and Response, provides guidance to the public for self-triage before going to the hospital. So far, it’s been used by 400,000 people. 

And here are some other interesting patient care innovations:

  • AliveCor: The company has launched KardiaMobile 6L, which uses AI to measure QTc (the heart rate-corrected QT interval) in COVID-19 patients, helping to detect the risk of sudden cardiac arrest. It’s based on the FDA’s recent guidance to allow more availability of non-invasive remote monitoring devices during the pandemic. 
  • CLEW: It has launched the TeleICU. It uses AI to identify respiratory deterioration in advance. 

Drug Discovery

While drug discovery has made many advances over the years, the process can still be slow and onerous. But AI can help out.

For example, one startup using AI to accelerate drug development is Gero Pte. It has used the technology to better isolate compounds for COVID-19 by testing treatments that are already used in humans. 

“Mapping the virus genome has seemed to happen very quickly since the outbreak,” said Vadim Tabakman, who is the Director of Technical Evangelism at Nintex. “Leveraging that information with machine learning to explore different scenarios and learn from those results could be a game changer in finding a set of drugs to fight this type of outbreak. Since the world is more connected than ever, having different researchers, hospitals and countries providing data into the datasets that get processed could also speed up the results tremendously.”

AIOps: What You Need To Know

AIOps, which is a term that was coined by Gartner in 2017, is increasingly becoming a critical part of next-generation IT. “In a nutshell, AIOps is applying cognitive computing like AI and Machine learning techniques to improve IT operations,” said Adnan Masood, who is the Chief Architect of AI & Machine Learning at UST Global. “This is not to be confused with the entirely different discipline of MLOps, which focuses on the Machine learning operationalization pipeline. AIOps refers to the spectrum of AI capabilities used to address IT operations challenges–for example, detecting outliers and anomalies in the operations data, identifying recurring issues, and applying self-identified solutions to proactively resolve the problem, such as by restarting the application pool, increasing storage or compute, or resetting the password for a locked-out user.”
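As a minimal illustration of the outlier detection Masood mentions, here is a sketch that flags unusual points in an operations metric. The synthetic latency series, 60-sample window and 3-sigma threshold are all assumptions for the example; a real AIOps platform would run this kind of analysis continuously across thousands of telemetry streams:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
latency = pd.Series(rng.normal(100, 5, 500))  # synthetic request latency (ms)
latency.iloc[450] = 180                       # inject an incident

# Rolling z-score: how far each point sits from recent behavior.
rolling = latency.rolling(window=60)
z_scores = (latency - rolling.mean()) / rolling.std()

anomalies = latency[z_scores.abs() > 3]
print(anomalies)  # a real pipeline would open an incident or trigger remediation
```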

The fact is that IT departments are often stretched and starved for resources. Traditional tools have usually been rule-based and inflexible, which has made it difficult to deal with the flood of new technologies.

“IT teams have adopted microservices, cloud providers, NoSQL databases, and various other engineering and architectural approaches to help support the demands their businesses are putting on them,” said Shekhar Vemuri, who is the CTO of Clairvoyant. “But in this rich, heterogeneous, distributed, complex world, it can be a challenge to stay on top of vast amounts of machine-generated data from all these monitoring, alerting and runtime systems. It can get extremely difficult to understand the interactions between various systems and the impact they are having on cost, SLAs, outages, etc.”

So with AIOps, there is the potential for achieving scale and efficiencies.  Such benefits can certainly move the needle for a company, especially as IT has become much more strategic.

“From our perspective, AIOps equips IT organizations with the tools to innovate and remain competitive in their industries, effectively managing infrastructure and empowering insights across increasingly complex hybrid and multi-cloud environments,” said Ross Ackerman, who is the NetApp Director of Analytics and Transformation. “This is accomplished through continuous risk assessments, predictive alerts, and automated case opening to help prevent problems before they occur. At NetApp, we’re benefiting from a continuously growing data lake that was established over a decade ago. It was initially used for reactive actions, but with the introduction of more advanced AI and ML, it has evolved to offer predictive and prescriptive insights and guidance. Ultimately, our capabilities have allowed us to save customers over two million hours of lost productivity due to avoided downtime.”

As with any new approach, though, AIOps does require much preparation, commitment and monitoring. Let’s face it, technologies like AI can be complex and finicky. 

“The algorithms can take time to learn the environment, so organizations should seek out those AIOps solutions that also include auto-discovery and automated dependency mapping as these capabilities provide out-of-the-box benefits in terms of root-cause diagnosis, infrastructure visualization, and ensuring CMDBs are accurate and up-to-date,” said Vijay Kurkal, who is the CEO of Resolve. “These capabilities offer immediate value and instantaneous visibility into what’s happening under the hood, with machine learning and AI providing increasing richness and insights over time.”

As a result, there should be a clear-cut framework when it comes to AIOps. Here’s what Appen’s Chief AI Evangelist Alyssa Simpson Rochwerger recommends: 

  • A clear ability to measure product success (business value outcomes)
  • The ability to measure and report on associated performance metrics such as accuracy, throughput, confidence and outcomes
  • Technical infrastructure to support—including but not limited to—model training, hosting, management, versioning and logging
  • Dataset management, including traceability, data provenance and transparency
  • Low-confidence/fallback data handling (this could be a data annotation or other human-in-the-loop process, or a default when the AI system can’t handle a task or has a low-confidence output)

All this requires a different mindset. It’s really about looking at things in terms of software application development. 

“Most enterprise businesses are struggling with a wall to production, and need to start realizing a return on their machine learning and AI investments,” said Santiago Giraldo, who is a Senior Product Marketing Manager at Cloudera. “The problem here is two-fold. One issue is related to technology: Businesses must have a complete platform that unifies everything from data management to data science to production. This includes robust functionalities for deploying, serving, monitoring, and governing models. The second issue is mindset: Organizations need to adopt a production mindset and approach machine learning and AI holistically in everything from data practices to how the business consumes and uses the resulting predictions.”

So yes, these are still early days for AIOps, and there will be lots of trial and error. But this approach is likely to be essential.

“While the transformative promise of AI has yet to materialize in many parts of the business, AIOps offers a proven, pragmatic path to improved service quality,” said Dave Wright, who is the Chief Innovation Officer at ServiceNow. “And since it requires little overhead, it’s a great pilot for other AI initiatives that have the potential to transform a business.”

Coronavirus: Can AI (Artificial Intelligence) Make A Difference?

The mysterious coronavirus is spreading at an alarming rate. There have been at least 305 deaths, and more than 14,300 people have been infected.

On Thursday, the World Health Organization (WHO) declared the coronavirus a global emergency. To put things into perspective, it has already exceeded the numbers infected during the 2002-2003 outbreak of SARS (Severe Acute Respiratory Syndrome) in China. 

Many countries are working hard to quell the virus. There have been quarantines, lock-downs on major cities, limits on travel and accelerated research on vaccine development. 

However, could technologies like AI (Artificial Intelligence) help out? Well, interestingly enough, it already has.

Just look at BlueDot, which is a venture-backed startup. The company has built a sophisticated AI platform that processes billions of pieces of data, such as from the world’s air travel network, to identify outbreaks.

In the case of the coronavirus, BlueDot made its first alert on December 31st. This was ahead of the US Centers for Disease Control and Prevention, which made its own determination on January 6th.

BlueDot is the brainchild of Kamran Khan, who is an infectious disease physician and professor of Medicine and Public Health at the University of Toronto. Keep in mind that he was a frontline healthcare worker during the SARS outbreak. 

“We are currently using natural language processing (NLP) and machine learning (ML) to process vast amounts of unstructured text data, currently in 65 languages, to track outbreaks of over 100 different diseases, every 15 minutes around the clock,” said Khan. “If we did this work manually, we would probably need over a hundred people to do it well. These data analytics enable health experts to focus their time and energy on how to respond to infectious disease risks, rather than spending their time and energy gathering and organizing information.”

But of course, BlueDot will probably not be the only organization to successfully leverage AI to help curb the coronavirus. In fact, here’s a look at what we might see:

Colleen Greene, the GM of Healthcare at DataRobot:

“AI could predict the number of potential new cases by area and which types of populations will be at risk the most. This type of technology could be used to warn travelers so that vulnerable populations can wear proper medical masks while traveling.”

Vahid Behzadan, the Assistant Professor of Computer Science at the University of New Haven:

“AI can help with the enhancement of optimization strategies. For instance, Dr. Marzieh Soltanolkottabi’s research is on the use of machine learning to evaluate and optimize strategies for social distancing (quarantine) between communities, cities, and countries to control the spread of epidemics. Also, my research group is collaborating with Dr. Soltanolkottabi in developing methods for enhancement of vaccination strategies leveraging recent advances in AI, particularly in reinforcement learning techniques.”

Dr. Vincent Grasso, who is the IPsoft Global Practice Lead for Healthcare and Life Sciences:

“For example, when disease outbreaks occur, it is crucial to obtain clinical related information from patients and others involved such as physiological states before and after, logistical information concerning exposure sites, and other critical information. Deploying humans into these situations is costly and difficult, especially if there are multiple outbreaks or the outbreaks are located in countries lacking sufficient resources. Conversational computing working as an extension of humans attempting to get relevant information would be a welcome addition. Conversational computing is bidirectional—it can engage with a patient and gather information, or the reverse, provide information based upon plans that are either standardized or modified based on situational variations. In addition, engaging in a multilingual and multimodal manner further extends the conversational computing deliverable. In addition to this ‘front end’ benefit, the data that is being collected from multiple sources such as voice, text, medical devices, GPS, and many others, are beneficial as datapoints and can help us learn to combat a future outbreak more effectively.”

Steve Bennett, the Director of Global Government Practice at SAS and former Director of National Biosurveillance at the U.S. Department of Homeland Security:

“AI can help deal with the coronavirus in several ways. AI can predict hotspots around the world where the virus could make the jump from animals to humans (also called a zoonotic virus). This typically happens at exotic food markets without established health codes.  Once a known outbreak has been identified, health officials can use AI to predict how the virus is going to spread based on environmental conditions, access to healthcare, and the way it is transmitted. AI can also identify and find commonalities within localized outbreaks of the virus, or with micro-scale adverse health events that are out of the ordinary. The insights from these events can help answer many of the unknowns about the nature of the virus.

“Now, when it comes to finding a cure for coronavirus, creating antivirals and vaccines is a trial and error process. However, the medical community has successfully cultivated a number of vaccines for similar viruses in the past, so using AI to look at patterns from similar viruses and detect the attributes to look for in building a new vaccine gives doctors a higher probability of getting lucky than if they were to start building one from scratch.”

Don Woodlock, the VP of HealthShare at InterSystems:

“With ML approaches, we can read the tens of billions of data points and clinical documents in medical records and establish the connections to patients that do or do not have the virus. The ‘features’ of the patients that contract the disease pop out of the modeling process, which can then help us target patients that are higher risk.

“Similarly, ML approaches can automatically build a model or relationship between treatments documented in medical records and the eventual patient outcomes. These models can quickly identify treatment choices that are correlated to better outcomes and help guide the process of developing clinical guidelines.”

Prasad Kothari, who is the VP Data Science and AI for The Smart Cube:

“The coronavirus can cause severe symptoms such as pneumonia, severe acute respiratory syndrome, kidney failure, etc. AI-empowered algorithms, such as genome-based neural networks already built for personalized treatment, can prove very helpful in managing these adverse events or symptoms caused by the coronavirus, especially when the effect of the virus depends on the immunity and genome structure of the individual and no standard treatment can address all symptoms and effects in the same way.

“In recent times, immunotherapy and gene therapy empowered through AI algorithms such as Boltzmann machines (entropy-based combinatorial neural networks) have shown stronger evidence of treating such diseases by stimulating the body’s immune system. For this reason, AbbVie’s Aluvia HIV drug is one possible treatment. If you look at the data of affected patients and profile the virus mechanics and cellular mechanisms affected by the coronavirus, there are some similarities in the biological pathways and treatment efficacy. But this is yet to be tested.”

CES: The Coolest AI (Artificial Intelligence) Announcements

As seen at this week’s CES 2020 mega conference, the buzz for AI continues to be intense. Here are just a few comments from the attendees:

  • Nichole Jordan, who is Grant Thornton’s Central region managing partner: “From AI-powered agriculture equipment to emotion-sensing technology, walking the exhibit floors at CES drives home the fact that artificial intelligence is no longer a vision of the future. It is here today and is clearly going to be more integrated into our world going forward.”
  • Derek Kennedy, the Senior Partner and Global Technology Leader at Boston Consulting Group: “AI is increasingly playing a role in every intelligent product, such as upscaling video signals for an 8K TV as well as every business process, like predicting consumer demand for a new product.”
  • Houman Haghighi, the Business Development Partner at Menlo Ventures: “Voice, natural language and predictive actions are continuing to become the new—and sometimes the only—user interface within the home, automobile, and workplace.”

So what were some of the standout announcements at CES? Well, given that there were over 4,500 exhibitors, this is a tough question to answer. But here are some innovations that certainly do show the power of AI:

Prosthetics: Using AI along with EMG technology, BrainCo has built a prosthetic arm that learns. In fact, it can allow people to play the piano or even do calligraphy. 

“This is an electronic device that allows you to control the movements of an artificial arm with the power of thought alone,” said Max Babych, who is the CEO of SpdLoad.

The cost for the prosthetic is quite affordable at about $10,000 (this is compared to about $100,000 for alternatives). 

SelfieType: One of the nagging frictions of smartphones is the keyboard. But Samsung has a solution: SelfieType. It leverages cameras and AI to create a virtual keyboard on a surface (such as a table) that learns from hand movements. 

“This was my favorite and simplest AI use case at CES,” said R. Mordecai, who is the Head of Innovation and Partnerships at INNOCEAN USA. “I wish I had it for the flight home so I could type this on the plane tray.”

Lululab’s Lumine: This is a smart mirror that is for skin care. Lumine uses deep learning to analyze six categories–wrinkles, pigment, redness, pores, sebum and trouble–and then recommends products to help.

Whisk: This device is powered by AI to scan the contents of your fridge so as to think up creative dishes to cook (it is based on research from over 100 nutritionists, food scientists, engineers and retailers). Not only does this technology allow for a much better experience, but it should also help reduce food waste. Keep in mind that the average person throws away 238 pounds of food every year. 

Wiser: Developed by Schneider Electric, this is a small device that you install in your home’s circuit breaker box. With the use of machine learning, you can get real-time monitoring of usage by appliance, which can lead to cost savings and better optimization of a solar power system.

Vital Signs Monitoring: The Binah.ai app analyzes a person’s face to get medical-grade insights, such as oxygen saturation, respiration rate, heart rate variability and mental stress. The company also plans to add monitoring for hemoglobin levels and blood pressure.

Neon: This is a virtual assistant that looks like a real person, who can engage in intelligent conversation and show emotion. While still in the early stages, the technology is actually kind of scary. The creator of Neon–which is Samsung-backed Star Labs–thinks that it will replace doctors, lawyers and other white collar professionals. No doubt, this appears to be a recipe for wide-scale unemployment, not to mention a way to unleash a torrent of deepfakes!

AI (Artificial Intelligence): What’s The Next Frontier For Healthcare?

Perhaps one of the biggest opportunities for AI (Artificial Intelligence) is the healthcare industry. According to ReportLinker, spending on this category is forecast to jump from $2.1 billion in 2018 to $36.1 billion by 2025, a hefty 50.2% compound annual growth rate (CAGR).

So then what are some of the trends that look most interesting within healthcare AI? Well, to answer this question, I reached out to a variety of experts in the space.

Here’s a look: 

Ori Geva, who is the CEO of Medial EarlySign:

One of the key trends is the use of health AI to spur the transition of medicine from reactive to proactive care. Machine learning-based applications will preempt and prevent disease on a more personal level, rather than merely reacting to symptoms. Providers and payers will be better positioned to care for their patients’ needs with the tools to delay or prevent the onset of life-threatening conditions. Ultimately, patients will benefit from timely and personalized treatment to improve outcomes and potentially increase survival rates.

Dr. Gidi Stein, who is the CEO of MedAware:

In the next five years, consumers will gain more access to their health information than ever before via mobile electronic medical records (EMR) and health wearables. AI will facilitate turning this mountain of data into actionable health-related insights, promoting personalized health and optimizing care. This will empower patients to take the driving wheel of their own health, promote better patient-provider communication and facilitate high-end healthcare to under-privileged geographies.

Tim O’Malley, who is the President and Chief Growth Officer at EarlySense:

Today, there are millions of physiologic parameters which are extracted from a patient. I believe the next mega trend will be harnessing this AI-driven “Smart Data” to accurately predict and avoid adverse events for patients. The aggregate of this data will be used to formulate predictive analytics to be used across diverse patient populations across the continuum of care, which will provide truly personalized medicine.

Andrea Fiumicelli, who is the vice president and general manager of Healthcare and Life Sciences at DXC Technology:

Ultimately, AI and data analytics could prove to be the catalyst in addressing some of today’s most difficult-to-treat health conditions. By combining genomics with individual patient data from electronic health records and real-world evidence on patient behavior culled from wearables, social media and elsewhere, health care providers can harness the power of precision medicine to determine the most effective approaches for specific patients.

This brings tremendous potential to treating complex conditions such as depression. AI can offer insights into a wealth of data to determine the likelihood of depression—based on the patient’s age, gender, comorbidities, genomics, life style, environment, etc.—and can provide information about potential reactions before they occur, thus enabling clinicians to provide more effective treatment sooner.

Ruthie Davi, who is the vice president of Data Science at Acorn AI, a Medidata company:

One key advance to consider is the use of carefully curated datasets to form Synthetic Control Arms as a replacement for placebo in clinical trials. Recruiting patients for randomized control trials can be challenging, particularly in small patient populations. From the patient perspective, while an investigational drug can offer hope via a new treatment option, the possibility of being in a control arm can be a disincentive. Additionally, if patients discover they are in a control arm, they may drop out or elect to receive therapies outside of the trial protocol, threatening the validity and completion of the entire trial.

However, thanks to advances in advanced analytics and the vast amount of data available in life sciences today, we believe there is a real opportunity to transform the clinical trial process. By leveraging patient-level data from historical clinical trials from Medidata’s expansive clinical trial dataset, we can create a synthetic control arm (SCA) that precisely mimics the results of a traditional randomized control. In fact, in a recent non-small cell lung cancer case study, Medidata together with Friends of Cancer Research was successful in replicating the overall survival of the target randomized control with SCA. This is a game-changing effort that will enhance the clinical trial experience for patients and propel next generation therapies through clinical development.

Tesla’s AI Acquisition: A New Way For Autonomous Driving?

This week Tesla acquired DeepScale, which is a startup that focuses on developing computer vision technologies (the price of the deal was not disclosed). This appears to be a part of the company’s focus on building an Uber-like service as well as building fully autonomous vehicles.

Founded in 2015, DeepScale has raised $15 million from investors like Point72, next47, Andy Bechtolsheim, Ali Partovi, and Jerry Yang. The founders include Forrest Iandola and Kurt Keutzer, who both hold PhDs. In fact, about a quarter of the engineering team holds a PhD, and the team has more than 30,000 academic citations.

“DeepScale is a great fit for Tesla because the company specializes in compressing neural nets to work in vehicles, and hooking them into perception systems with multiple data types,” said Chris Nicholson, who is the CEO and founder of Skymind. “That’s what Tesla needs to make progress in autonomous driving.”

Tesla has the advantage of an enormous database of vehicle information. So with DeepScale’s software expertise, the company should be able to accelerate the innovation. “If ‘data is the new oil’ then ‘AI models are the new Intellectual Property and barrier to entry,’” said Joel Vincent, who is the CMO of Zededa. “This is the dawn of a new age of competitive differentiation. AI models are useless without data and Tesla has an astounding amount of edge data.”

Now when it comes to autonomous driving, there are other major requirements–some of which may get little attention.

Just look at the use of energy. “Large models require more powerful processors and larger memory to run them in production,” said Dr. Sumit Gupta, who is the IBM Cognitive Systems VP of AI and HPC. “But vehicles have a limited energy budget, so the market is always trying to minimize the energy that the electronics in the car consume. This is what DeepScale is good at. The company invented an AI model called ‘SqueezeNet’ that requires a smaller memory footprint and also less CPU horsepower.”

Keep in mind that the lower energy consumption will mean there will be more capacity for sensors for vision. “This should help make autonomous vehicles safer,” said Arjan Wijnveen, who is the CEO of CVEDIA. “Tesla seems certain that they don’t need LiDAR for effective computer vision, but there are lots of other types of sensors you could see on their vehicles in the future, and sometimes just placing a second camera facing another angle can improve the AI model.”

Not using LiDAR would be a big deal, which would mean a much lower cost per vehicle. “There are concerns about the deployment of LIDAR lasers in the public sphere,” said Gavin D. J. Harper, who is a Faraday Institution Research Fellow at the University of Birmingham. “Safety measures include limiting the power and exposure of lasers. There is also the concern about the potential for causing inadvertent harm to those nearby.”

So all in all, the DeepScale deal could move the needle for Tesla and represent a shift in the industry. Still, it is important to keep in mind that autonomous driving is in its nascent stages (regardless of what Elon Musk boasts!). There remain many tough issues to work out, which could easily drag on because of regulatory processes.

“To get to full autonomy, you’re still going to need some major algorithmic improvements,” said Nicholson. “Some of the smartest people in the world are working on this, and it seems clear that we’ll get there, even if we don’t know when. In any case, companies like Tesla and Waymo have the right mix of talent, data, and cars on the road.”

What AI (Artificial Intelligence) Will Mean For The Cannabis Space

Just about every estimate shows that the cannabis industry will see strong long-term growth. Yet there are some major challenges–and they are more than just about changing existing laws and regulations.

But AI (Artificial Intelligence) is likely to be a big help. True, the industry has not been a big adopter of new technologies. However, this should change soon as investors pour billions of dollars into the space.

So how might AI impact things? Well, look at what the CEO and Director of CROP Corp, Michael Yorke, has to say: “The use of AI in sensors and high-definition cameras can be used to keep track of and adjust multiple inputs in the growing environment such as water level, pH level, temperature, humidity, nutrient feed, light spectrum and CO2 levels. Tracking and adjusting these inputs can make a major difference in the quantity and quality of cannabis that growers are able to produce. AI also helps automate trimming technology so that it is able to de-leaf buds, saving countless hours of manual labor. Similarly, it can be applied to automated planting equipment to increase the effectiveness and efficiency of planting. And AI can identify the sex of the plants, detect sick plants, heal or remove sick plants from the environment, and track the plant growth rate to be able to predict size and yield.”

No doubt, such things could certainly move the needle in a big way. 

There are also opportunities to help with such things as more accurate predictions, which would allow for maximizing efficiency. And yes, AI is likely to be key in discovering new strains or customizing strains for specific effects (examples would include relaxation, excitement or increasing/decreasing hunger). The result could be even more growth in the cannabis market. 

But there is something else to keep in mind: With no legalization on a federal level in the US, there is a need for sophisticated tracking systems. 

“The existing regulations are complex, requiring businesses to follow detailed rules that govern every area of the industry from growing to packaging and selling to consumers,” said Mark Krytiuk, who is the president of Nabis Holdings. “Even the smallest error can cost a cannabis business thousands, and incur harsh punishments such as losing their cannabis license.”

The situation is even more complex with retail operations. “Artificial intelligence is one key technological advancement that could make a significant impact,” said Krytiuk. “By implementing this technology, cannabis retailers would be able to more easily track state-by-state regulations, and the constant changes that are being made. With this information, they would be able to properly package, ship, and sell products in a more compliant way that is less likely to be intercepted by government regulations.”

Keep in mind that the problems with compliance are a leading cause of failure for cannabis operators. “Running a cannabis business can be costly, especially when it comes to getting and keeping a license, paying high taxes, and dealing with the added pressure of ever-changing government regulations,” said Krytiuk. “If more cannabis businesses had access to automated, AI-powered technology that could help them be more compliant, there would be more successful companies helping the industry to grow.”

Again, the AI part of the cannabis industry is very much in the nascent stages. It will likely take some time to get meaningful traction. But for entrepreneurs, the opportunity does look promising. “The industry is only going to continue to grow, so it’s only a matter of time before it reaches its own technological revolution,” said Krytiuk.

AI (Artificial Intelligence) Words You Need To Know

In 1956, John McCarthy set up a ten-week research project at Dartmouth College that was focused on a new concept he called “artificial intelligence.” The event included many of the researchers who would become giants in the emerging field, like Marvin Minsky, Nathaniel Rochester, Allen Newell, O.G. Selfridge, Raymond Solomonoff, and Claude Shannon.

Yet the reaction to the phrase artificial intelligence was mixed. Did it really explain the technology? Was there a better way to word it?

Well, no one could come up with something better–and so AI stuck.

Since then, we’ve seen the coining of plenty of words in the category, which often define complex technologies and systems. The result is that it can be tough to understand what is being talked about.

So to help clarify things, let’s take a look at the AI words you need to know:

Algorithm

From Kurt Muehmel, who is a VP Sales Engineer at Dataiku:

A series of computations, from the simplest (long division using pencil and paper) to the most complex. For example, machine learning uses an algorithm to process data and discover rules hidden in the data, which are then encoded in a “model” that can be used to make predictions on new data.
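To ground the simple end of that spectrum, here is a classic algorithm, Euclid’s method for the greatest common divisor, as a short sketch:

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: a fixed series of computations that always
    # terminates with the greatest common divisor of a and b.
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```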

Machine Learning

From Dr. Hossein Rahnama, who is the co-founder and CEO of Flybits:

Traditional programming involves specifying a sequence of instructions that dictate to the computer exactly what to do. Machine learning, on the other hand, is a different programming paradigm wherein the engineer provides examples comprising what the expected output of the program should be for a given input. The machine learning system then explores the set of all possible computer programs in order to find the program that most closely generates the expected output for the corresponding input data. Thus, with this programming paradigm, the engineer does not need to figure out how to instruct the computer to accomplish a task, provided they have a sufficient number of examples for the system to identify the correct program in the search space.
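A minimal sketch of that paradigm: instead of hand-coding the Celsius-to-Fahrenheit formula, we hand the system input/output examples and let it recover the underlying program (here with scikit-learn, using invented example data):

```python
from sklearn.linear_model import LinearRegression

# Input/output examples stand in for explicit instructions.
celsius = [[0], [10], [20], [30], [40]]
fahrenheit = [32, 50, 68, 86, 104]

model = LinearRegression().fit(celsius, fahrenheit)

print(model.predict([[25]]))          # ~77: the learned "program" generalizes
print(model.coef_, model.intercept_)  # recovers the 1.8 slope and 32 offset
```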

Neural Networks

From Dan Grimm, who is the VP and General Manager of Computer Vision at RealNetworks:

Neural networks are mathematical constructs that mimic the structure of the human brain to summarize complex information into simple, tangible results. Much like we train the human brain to, for example, learn to control our bodies in order to walk, these networks also need to be trained with significant amounts of data. Over the last five years, there have been tremendous advancements in the layering of these networks and the compute power available to train them.
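To make the layered structure concrete, here is a toy two-layer network as a numpy sketch. The weights are random, so the output is meaningless; training (see Backpropagation below) is what tunes them:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)  # a simple "firing" rule for each neuron

x = rng.normal(size=4)                         # input features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # hidden layer of 8 neurons
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)  # output layer

hidden = relu(W1 @ x + b1)  # each neuron mixes the inputs, then fires (or not)
output = W2 @ hidden + b2   # complex information summarized into one result
print(output)
```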

Deep Learning

From Sheldon Fernandez, who is the CEO of DarwinAI:

Deep Learning is a specialized form of Machine Learning, based on neural networks that emulate the cognitive capabilities of the human mind. Deep Learning is to Machine Learning what Machine Learning is to AI–not the only manifestation of its parent, but generally the most powerful and eye-catching version. In practice, deep learning networks capable of performing sophisticated tasks are 1.) many layers deep with millions, sometimes billions, of inputs (hence the ‘deep’); 2.) trained using real world examples until they become proficient at the prevailing task (hence the ‘learning’).

Explainability

From Michael Beckley, who is the CTO and founder of Appian:

Explainability is knowing why AI rejects your credit card charge as fraud, denies your insurance claim, or confuses the side of a truck with a cloudy sky. Explainability is necessary to build trust and transparency into AI-powered software. The power and complexity of AI deep learning can make predictions and decisions difficult to explain to both customers and regulators. As our understanding of potential bias in data sets used to train AI algorithms grows, so does our need for greater explainability in our AI systems. To meet this challenge, enterprises can use tools like Low Code Platforms to put a human in the loop and govern how AI is used in important decisions.

Supervised, Unsupervised and Reinforcement Learning

From Justin Silver, who is the manager of science & research at PROS:

There are three broad categories of machine learning: supervised, unsupervised, and reinforcement learning. In supervised learning, the machine observes a set of cases (think of “cases” as scenarios like “The weather is cold and rainy”) and their outcomes (for example, “John will go to the beach”) and learns rules with the goal of being able to predict the outcomes of unobserved cases (if, in the past, John usually has gone to the beach when it was cold and rainy, in the future the machine will predict that John will very likely go to the beach whenever the weather is cold and rainy). In unsupervised learning, the machine observes a set of cases, without observing any outcomes for these cases, and learns patterns that enable it to classify the cases into groups with similar characteristics (without any knowledge of whether John has gone to the beach, the machine learns that “The weather is cold and rainy” is similar to “It’s snowing” but not to “It’s hot outside”). In reinforcement learning, the machine takes actions towards achieving an objective, receives feedback on those actions, and learns through trial and error to take actions that lead to better fulfillment of that objective (if the machine is trying to help John avoid those cold and rainy beach days, it could give John suggestions over a period of time on whether to go to the beach, learn from John’s positive and negative feedback, and continue to update its suggestions).
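Here is a compact sketch of the first two categories using scikit-learn, with Silver’s beach example recast as toy data ([temperature °C, rainfall mm]); reinforcement learning is noted in a comment since it needs an interactive environment:

```python
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

weather = [[5, 10], [7, 12], [30, 0], [33, 1], [28, 0], [4, 8]]

# Supervised: cases plus outcomes (1 = John went to the beach).
beach = [0, 0, 1, 1, 1, 0]
clf = DecisionTreeClassifier().fit(weather, beach)
print(clf.predict([[29, 0]]))  # warm and dry -> predicts 1 (beach day)

# Unsupervised: the same cases with no outcomes; the machine finds the
# "cold and rainy" vs "hot and dry" groups on its own (labels are arbitrary).
print(KMeans(n_clusters=2, n_init=10).fit_predict(weather))

# Reinforcement learning would instead act (suggest beach trips), observe
# John's feedback, and improve its suggestions by trial and error.
```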

Bias

From Mehul Patel, who is the CEO of Hired:

While you may think of machines as objective, fair and consistent, they often adopt the same unconscious biases as the humans who built them. That’s why it’s vital that companies recognize the importance of normalizing data—meaning adjusting values measured on different scales to a common scale—to ensure that human biases aren’t unintentionally introduced into the algorithm. Take hiring as an example: If you give a computer a data set with 100 female candidates and 300 male candidates and ask it to predict the best person for the job, it is going to surface more male candidates because the volume of men is three times the size of women in the data set. Building technology that is fair and equitable may be challenging but will ensure that the algorithms informing our decisions and insights are not perpetuating the very biases we are trying to undo as a society.
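One simple way to act on that advice, sketched with scikit-learn: weight each training example inversely to its group’s size, so 300 rows from one group and 100 from another contribute equally. The features and labels are random placeholders for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_male, n_female = 300, 100

X = rng.normal(size=(n_male + n_female, 2))      # e.g., test score, experience
y = rng.integers(0, 2, size=n_male + n_female)   # 1 = strong fit for the job
group = np.array([1] * n_male + [0] * n_female)  # 1 = male, 0 = female

# Each group contributes equal total weight, so sheer volume cannot dominate.
weights = np.where(group == 1, 1.0 / n_male, 1.0 / n_female)
model = LogisticRegression().fit(X, y, sample_weight=weights)
```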

Backpropagation

From Victoria Jones, who is the Zoho AI Evangelist:

Backpropagation algorithms allow a neural network to learn from its mistakes. The technology tracks an event backwards from the outcome to the prediction and analyzes the margin of error at different stages to adjust how it will make its next prediction. Around 70% of the features of our AI assistant (called Zia) use backpropagation, including Zoho Writer’s grammar-check engine and Zoho Notebook’s OCR technology, which lets Zia identify objects in images and make those images searchable. This technology also allows Zia’s chatbot to respond more accurately and naturally. The more a business uses Zia, the more Zia understands how that business is run. This means that Zia’s anomaly detection and forecasting capabilities become more accurate and personalized to any specific business.
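On the smallest possible “network”, one weight and one bias, the idea reduces to a few lines: trace the error back to each parameter and nudge both to shrink it. The numbers here are invented for the sketch:

```python
x, target = 2.0, 10.0  # one training example: input and desired output
w, b, lr = 0.5, 0.0, 0.05

for step in range(100):
    pred = w * x + b       # forward pass
    error = pred - target  # margin of error at the outcome
    dw = error * x         # gradient of squared error with respect to w
    db = error             # gradient with respect to b
    w -= lr * dw           # adjust parameters to improve the next prediction
    b -= lr * db

print(w * x + b)  # ~10.0: the "network" has learned from its mistakes
```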

NLP (Natural Language Processing)

From Courtney Napoles, who is the Language Data Manager at Grammarly:

The field of NLP brings together artificial intelligence, computer science, and linguistics with the goal of teaching machines to understand and process human language. NLP researchers and engineers build models for computers to perform a variety of language tasks, including machine translation, sentiment analysis, and writing enhancement. Researchers often begin with analysis of a text corpus—a huge collection of sentences organized and annotated in a way that AI algorithms can understand.

The problem of teaching machines to understand human language—which is extraordinarily creative and complex—dates back to the advent of artificial intelligence itself. Language has evolved over the course of millennia, and devising methods to apprehend this intimate facet of human culture is NLP’s particularly challenging task, requiring astonishing levels of dexterity, precision, and discernment. As AI approaches—particularly machine learning and the subset of ML known as deep learning—have developed over the last several years, NLP has entered a thrilling period of new possibilities for analyzing language at an unprecedented scale and building tools that can engage with a level of expressive intricacy unimaginable even as recently as a decade ago.
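As a small taste of one task Napoles mentions, here is a toy sentiment analyzer: it learns from a tiny annotated corpus, then scores new sentences. A real system would use far more data and far richer models; the corpus here is invented:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny annotated "corpus" (1 = positive, 0 = negative).
corpus = ["I love this product", "What a great experience",
          "This is terrible", "I hate waiting on hold"]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(corpus, labels)

print(model.predict(["great product, love it", "terrible, I hate it"]))  # [1 0]
```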