Is AI Headed For Another Winter?

AI (Artificial Intelligence) has experienced several periods of severe funding cuts and waning interest, such as during the 1970s and 1980s. These were called “AI winters,” a reference to the concept of nuclear winter, in which the sun is blocked by a layer of smoke and dust.

But of course, things are much different nowadays. AI is one of the hottest categories of tech and is a strategic priority for companies like Facebook, Google, Microsoft and many others.

Yet could we be facing another winter? Might things have gone too far? Well, it’s really tough to tell. Keep in mind that prior AI winters were a reaction to the fact that many grandiose promises did not come to fruition.

But as of now, we are seeing many innovations and breakthroughs that are impacting diverse industries. VCs are also writing large checks to fund startups while mega tech companies have been ramping up their M&A.

Simply put, there are few signs of a slowdown.

A New Kind of AI Winter?

“History doesn’t repeat itself, but it often rhymes,” goes a saying often attributed to Mark Twain. And this may be the case with AI: we could be seeing a new type of winter, one where society is negatively affected in subtle ways over a prolonged period of time.

According to Alex Wong, who is the chief scientist and co-founder of DarwinAI, the problem of bias in models is a major part of this. For example, AI is being leveraged in hiring, screening candidates based on large numbers of resumes.

“While this approach might seem data driven and thus objective, there are significant gender and cultural biases in these past hiring practices, which are then learned by the AI in the same way a child can pick up historical biases from what they are taught,” said Alex. “Without deeper investigation, the system will start making biased and discriminatory decisions that can have a negative societal impact and create greater inequality when released into companies.”

But this is not a one-off. Bias is a pervasive problem, corrosive and often undetected. A big reason is the culture of the AI community, which is more focused on striving for accuracy rates than on the broader impact.

Another factor is that AI tools are becoming more pervasive and are often free to use. So as inexperienced people build models, there is a higher likelihood that we’ll see even more bias in the outcomes.
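To make the bias problem concrete, here is a minimal sketch of one common audit: comparing selection rates across groups against the “four-fifths rule” used in US hiring-discrimination analysis. The screening outcomes below are invented for illustration, not drawn from any real system.

```python
# Hypothetical audit of a resume-screening model's decisions using the
# four-fifths rule: each group's selection rate should be at least 80%
# of the most-favored group's rate. All data here is invented.

def selection_rates(decisions):
    """decisions: list of (group, hired) pairs -> hiring rate per group."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + (1 if hired else 0)
    return {g: hires[g] / totals[g] for g in totals}

def passes_four_fifths(rates):
    """True if every group's rate is >= 80% of the best group's rate."""
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

# Invented outcomes: the model selects 50% of group A but only 20% of
# group B -- the kind of red flag that warrants deeper investigation.
outcomes = ([("A", True)] * 5 + [("A", False)] * 5
            + [("B", True)] * 2 + [("B", False)] * 8)

rates = selection_rates(outcomes)
print(rates)                      # {'A': 0.5, 'B': 0.2}
print(passes_four_fifths(rates))  # False: 0.2 < 0.8 * 0.5
```

A check like this only surfaces disparate outcomes; deciding why they occur, and what to do about them, still requires the human investigation Wong calls for.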

What To Do?

Explainability is about understanding AI models.  True, this is difficult because deep learning systems can be black boxes.  But there are creative ways to deal with this.

Consider the following from Christian Beedgen, who is the co-founder and CTO of Sumo Logic:

“Early in our development of Sumo Logic, we built a generic and unsupervised anomaly detection system to track how often different classifications of logs appeared, with the idea that this would help us spot interesting trends and outliers. Once implemented, however, we found that — despite a variety of approaches — the results generated by our advanced anomaly detection algorithms simply were not meaningfully explainable to our users. We realized that the results of sophisticated algorithms don’t matter if humans can’t figure out what they mean. Since then, we’ve focused on narrower problem states to create fundamentally simpler — and therefore more useful — predictive machinery.”
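A simplified sketch of the kind of system Beedgen describes might track per-classification log counts and flag statistical outliers. This is an illustration with invented data and thresholds, not Sumo Logic’s actual algorithm:

```python
# Toy log anomaly detector: track how often each log classification
# appears per time window, and flag counts that deviate sharply from
# that classification's history. Data and thresholds are invented.
from statistics import mean, stdev

def find_anomalies(history, current, z_threshold=3.0):
    """history: {classification: [counts in past windows]}
       current: {classification: count in the latest window}
       Returns classifications whose current count sits more than
       z_threshold standard deviations from the historical mean."""
    anomalies = []
    for cls, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma > 0 and abs(current.get(cls, 0) - mu) > z_threshold * sigma:
            anomalies.append(cls)
    return anomalies

history = {"error": [10, 12, 11, 9, 10, 11],
           "info": [100, 98, 102, 99, 101, 100]}
current = {"error": 90, "info": 101}   # error logs spike roughly 9x

print(find_anomalies(history, current))  # ['error']
```

Note that even this tiny detector illustrates the explainability problem: it can say *that* error logs spiked, but not *why*, which is exactly the gap that pushed Sumo Logic toward narrower, simpler predictive machinery.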

It was a tough lesson, but it wound up being critical for the company, as the products became much stronger and more useful. Sumo Logic has gone on to raise $230 million and is one of the top players in its space.

What Next?

Going forward, the AI industry needs to be much more proactive, with an urgency for fairness, accountability and transparency. One way to help this along would be to build features into platforms that provide insights on explainability as well as bias. Even having old-school ethics boards is a good option. After all, they are common for university research.

“AI-influenced decisions that result in discrimination and inequality, when left unchecked by humans, can lead to a loss of trust in AI, which can in turn hinder its widespread adoption, especially in the sectors that could truly benefit,” said Alex. “We should not only strive to improve the transparency and interpretability of deployed AI systems, but educate those who build these systems and make them aware of fairness, data leakage, creator accountability, and inherent bias issues. The goal is not only highly effective AI systems, but also ones that society can trust.”

Deep Learning: When Should You Use It?

Deep learning, which is a subset of machine learning (itself a branch of AI, or Artificial Intelligence), has been around since the 1950s. It’s focused on developing systems that mimic the brain’s neural network structure.

Yet it was not until the 1980s that deep learning started to show promise, spurred by the pioneering theories of researchers like Geoffrey Hinton, Yoshua Bengio and Yann LeCun. There was also the benefit of accelerating improvements in computing power.

Despite all this, there remained lots of skepticism. Deep learning approaches still looked more like interesting academic exercises that were not ready for prime time.

But this all changed in a big way in 2012, when Hinton, Ilya Sutskever and Alex Krizhevsky used a deep neural network to recognize images in the enormous ImageNet dataset. The results were stunning, blowing away previous records. So began the deep learning revolution.

Nowadays if you do a cursory search of the news for the phrase “deep learning” you’ll see hundreds of mentions. Many of them will be from mainstream publications.

Yes, it’s a case of a 60-plus-year-old overnight success story. And it is certainly well deserved.

But of course, the enthusiasm can still stretch beyond reality. Keep in mind that deep learning is far from a miracle technology and does not represent the final stages of true AI nirvana. If anything, the use cases are still fairly narrow and there are considerable challenges.

“Deep learning is most effective when there isn’t an obvious structure to the data that you can exploit and build features around,” said Dr. Scott Clark, who is the co-founder and CEO of SigOpt. “Common examples of this are text, video, image, or time series datasets. The great thing about deep learning is that it will automatically build and exploit patterns in the data in order to make better decisions. The downside is that this can sometimes take a lot of data and a lot of compute resources to converge to a good solution. It tends to be the most effective in places where there is a lot of data, a lot of compute power, and there is a need for the best possible solution.”

True, it is getting easier to use deep learning. Part of this is due to the ubiquity of open source platforms like TensorFlow and PyTorch. Then there is the emergence of cloud-based AI systems, such as Google’s AutoML.

But such things only go so far. “Each neural network model has tens or hundreds of hyperparameters, so tuning and optimizing these parameters requires deep knowledge and experience from human experts,” said Jisheng Wang, who is the head of data science at Mist. “Interpretability is also a big challenge when using deep learning models, especially for enterprise software, which prefers to keep humans in the loop. While deep learning reduces the human effort of feature engineering, it also increases the difficulty for humans to understand and interpret the model. So in certain applications where we require human interaction and feedback for continuous improvement, deep learning may not be the appropriate choice.”

However, there are alternatives that may not be as complex, such as traditional machine learning. “In cases with smaller datasets and simpler correlations, techniques like KNN or random forest may be more appropriate and effective,” said Sheldon Fernandez, who is the CEO of DarwinAI.
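Fernandez’s point can be seen in miniature: for a small dataset with a simple correlation, a k-nearest-neighbors classifier is a few lines of code with no training phase at all. The toy data below is invented for illustration:

```python
# Minimal k-nearest-neighbors classifier: label a query point by
# majority vote among its k closest training points. No training
# phase, no hyperparameter search beyond k. Toy data is invented.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (features, label) pairs.
    Classify query by majority vote among k nearest neighbors
    (Euclidean distance)."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Two invented clusters: "small" points near (1, 1), "large" near (5, 5).
train = [((1, 1), "small"), ((1, 2), "small"), ((2, 1), "small"),
         ((5, 5), "large"), ((5, 6), "large"), ((6, 5), "large")]

print(knn_predict(train, (1.5, 1.5)))  # small
print(knn_predict(train, (5.5, 5.5)))  # large
```

For a problem this simple, a deep network would add compute cost and opacity without improving the answer, which is exactly the trade-off Fernandez is pointing at.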

Now this is not to imply that you should shun deep learning. The technology is definitely powerful and continues to show great progress (just look at the recent innovation of Generative Adversarial Networks, or GANs). Many companies, from mega operators like Google to early-stage startups, are also focused on developing systems to make the process easier and more robust.

But as with any advanced technology, it needs to be treated with care. Even experts can get things wrong. “A deep learning model might easily get a problematic or nonsensical correlation,” said Sheldon. “That is, the network might draw conclusions based on quirks in the dataset that are catastrophic from a practical point of view.”

What You Need To Know About Machine Learning

Machine learning is one of those buzzwords that gets thrown around as a synonym for AI (Artificial Intelligence). But this really is not accurate: machine learning is a subset of AI.

This field has also been around for quite some time, with roots going back to the late 1950s. It was during this period that IBM’s Arthur L. Samuel created the first machine learning application, which played checkers.

So how was this different from any other program? Well, according to Venkat Venkataramani, who is the co-founder and CEO of Rockset, machine learning is “the craft of having computers make decisions without providing explicit instructions, thereby allowing the computers to pattern match complex situations and predict what will happen.”

To pull this off, there needs to be large amounts of quality data as well as sophisticated algorithms and high-powered computers. Consider that when Samuel built his program such factors were severely limited. So it was not until the 1990s that machine learning became commercially viable.

“Current trends in machine learning are mainly driven by the structured data collected by enterprises over decades of transactions in various ERP systems,” said Kalyan Kumar B, who is the Corporate Vice President and Global CTO of HCL Technologies. “In addition, the plethora of unstructured data generated by social media is also a contributing factor to new trends. Major machine learning algorithms classify the data, predict variability and, if required, sequence the subsequent action. For example, an online retail app that can classify a user based on their profile data and purchase history allows the retailer to predict the probability of a purchase based on the user’s search history and enables them to target discounts and product recommendations.”
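The retail example Kumar mentions can be reduced to a toy sketch: estimate a purchase probability from historical conversion rates for users with a similar search history. All names and data here are invented; a production system would use far richer features and a real statistical model:

```python
# Toy purchase-probability estimate: the observed conversion rate
# among past users who searched the same category. Data is invented.

def purchase_probability(history, searched_category):
    """history: list of (searched_category, purchased) pairs.
    Returns the observed conversion rate for that category,
    or 0.0 if the category has never been seen."""
    relevant = [bought for cat, bought in history if cat == searched_category]
    return sum(relevant) / len(relevant) if relevant else 0.0

history = [("shoes", True), ("shoes", True), ("shoes", False),
           ("books", False), ("books", False), ("books", True),
           ("books", False)]

print(purchase_probability(history, "shoes"))  # 0.666...
print(purchase_probability(history, "books"))  # 0.25
```

A retailer could use such scores to decide who gets a discount or a recommendation; real machine learning earns its keep when the relevant patterns span many interacting features rather than a single lookup like this.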

Now you’ll also hear another buzzword, which often gets confused with machine learning: deep learning. Keep in mind that this is a subset of machine learning and involves sophisticated systems called neural networks that mimic the operation of the brain. Like machine learning, deep learning has been around since the 1950s. Yet it was during the 1980s and 1990s that the field gained traction, primarily from the innovative theories of academics like Geoffrey Hinton, Yoshua Bengio and Yann LeCun. Eventually, mega tech operators like Google, Microsoft and Facebook would invest heavily in this technology. The result has been a revolution in AI. For example, if you have used something like Google Translate, then you have seen the power of this technology.

But machine learning – supercharged by deep learning neural networks — is also making strides in the enterprise. Here are just a few examples:

  • Mist has built a virtual assistant, called Marvis, that is based on machine learning algorithms that mine insights from wireless LANs. A network administrator can just ask it questions like “How are the Wi-Fi access points in the Baker-Berry Library performing?” and Marvis will provide answers based on the data. More importantly, the system gets smarter over time.
  • Barracuda Networks is a top player in the cybersecurity market and machine learning is a critical part of the company’s technology. “We’ve found that this technology is exponentially better at stopping personalized social engineering attacks,” said Asaf Cidon, who is the VP of Email Security for Barracuda Networks. “The biggest advantage of this technology is that it effectively allows us to create a ‘custom’ rule set that is unique to each customer’s environment. In other words, we can use the historical communication patterns of each organization to create a statistical model of what a normal email looks like in that organization. For example, if the CFO of the company always sends emails from certain email addresses, at certain times of the day, and logs in using certain IPs and communicates with certain people, the machine learning will absorb this data. We can also learn and identify all of the links that would be ‘typical’ to appear in an organization’s email system. We then use that knowledge and apply different machine learning classifiers that compare employee behavior with what a normal email would be like in the organization.”

Of course, machine learning has drawbacks, and the technology is far from achieving true AI. It cannot understand causation or engage in conceptual thinking. There are also potential risks of bias and of overfitting the models (where the algorithms mistake mere noise for real patterns).

Even something like handling time-series data at scale can be extremely difficult. “An example is the customer journey,” said Anjul Bhambhri, who is the Vice President of Platform Engineering at Adobe. “This kind of dataset involves behavioral data that may have trillions of customer interactions. How important is each of the touch points in the purchase decision? To answer this, you need to find a way to determine a customer’s intent, which is complex and ambiguous. But it is certainly something we are working on.”

Despite all this, machine learning remains an effective way to turn data into valuable insights. And progress is likely to continue at a rapid clip.

“Machine learning is important because its predictive power will disrupt numerous industries,” said Sheldon Fernandez, who is the CEO of DarwinAI. “We are already seeing this in the realm of computer vision, autonomous vehicles and natural language processing. Moreover, the implications of these disruptions may have far-reaching impacts on our quality of life, such as with advancements in medicine, health care and pharmaceuticals.”

Don’t Make These AI Blunders

Entrepreneur Tom Siebel has a knack for anticipating megatrends in technology. In the 1980s, he joined Oracle in its hyper-growth phase because he saw the power of relational databases. Then he would go on to start Siebel Systems, which pioneered the CRM space.

But of course, Tom is far from done. So what is he focused on now? Well, his latest venture, C3, is targeting the AI (Artificial Intelligence) market. The company’s software helps with the development, deployment and operation of this technology at scale, such as across IoT environments.

“AI has huge social, economic and environmental benefits,” said Tom. “Look at the energy industry. AI will make things more reliable and safer. There’ll be little downside. Yet AI is not all goodness and light either. There are many unintended negative consequences.”

He points out some major risk factors like privacy and cybersecurity. “What we saw with Facebook and Cambridge Analytica was just a dress rehearsal,” he said.

But when it comes to AI, some of the problems may be subtle. Still, the consequences can be severe.

Here’s an example: Suppose you develop an AI system and it has a 99% accuracy rate for detecting cancer. This would be impressive, right?

Not necessarily. The model could actually be way off because of low-quality data, the wrong algorithms, bias or a faulty sample. In other words, our cancer test could potentially lead to terrible results.
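A toy calculation makes the cancer-test trap concrete: with 1% prevalence, a “model” that never predicts cancer still reaches 99% accuracy while catching zero cases. The numbers below are synthetic:

```python
# The 99%-accuracy trap in miniature: on a heavily imbalanced dataset,
# accuracy looks great even for a useless model. Recall exposes the
# failure. All data here is synthetic.

def evaluate(labels, predictions):
    """Returns (accuracy, recall) for boolean labels/predictions."""
    correct = sum(l == p for l, p in zip(labels, predictions))
    true_pos = sum(l and p for l, p in zip(labels, predictions))
    actual_pos = sum(labels)
    accuracy = correct / len(labels)
    recall = true_pos / actual_pos if actual_pos else 0.0
    return accuracy, recall

# 1,000 patients, 10 of whom (1%) actually have cancer.
labels = [True] * 10 + [False] * 990
always_healthy = [False] * 1000   # the "model" never predicts cancer

accuracy, recall = evaluate(labels, always_healthy)
print(accuracy)  # 0.99 -- looks impressive
print(recall)    # 0.0  -- catches no actual cases
```

This is why accuracy alone, Sengupta’s point below, is the wrong thing to optimize: the cost of a missed cancer is nothing like the benefit of a correct all-clear.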

“Basically, all AI to date has been trained wrong,” said Arijit Sengupta. “This is because all AI focuses on accuracy in some form instead of optimizing the impact on various stakeholders. The benefit of predicting something correctly is never the same as the cost of making a wrong prediction.”

But of course, there are other nagging issues to keep in mind. Let’s take a look:

Transparency: One of the main drivers of AI innovation has been deep learning. But this process is highly complex and involves many hidden layers. This can mean that an AI model is essentially a black box.

“As algorithms become more advanced and complex, it’s becoming increasingly difficult to understand how decisions are made and correlations are found,” said Ivan Novikov, who is the CEO of Wallarm. “And because companies tend to keep proprietary AI algorithms private, there’s been a lack of scrutiny that further complicates matters. In order to address this issue of transparency, AI developers will need to strike a balance between allowing algorithms to be openly reviewed, while keeping company secrets under wraps.”
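Novikov’s transparency concern can be probed even without opening the model. One standard technique, not mentioned above, is permutation importance: shuffle one input feature and measure how much the model’s accuracy drops. The “model” and data below are invented for illustration:

```python
# Permutation importance on a black-box model: shuffle one feature
# column and see how far accuracy falls. A large drop means the model
# leans on that feature; no drop means it ignores it. Data is invented.
import random

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    """Accuracy drop after shuffling one feature column."""
    rng = random.Random(seed)
    shuffled_col = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled_col)
    shuffled = [list(r) for r in rows]
    for r, v in zip(shuffled, shuffled_col):
        r[feature_idx] = v
    return accuracy(model, rows, labels) - accuracy(model, shuffled, labels)

# Invented black box: predicts 1 whenever feature 0 exceeds 0.5;
# it ignores feature 1 entirely.
model = lambda row: 1 if row[0] > 0.5 else 0
rows = [(0.9, 5), (0.8, 1), (0.2, 9), (0.1, 2), (0.7, 3), (0.3, 7)]
labels = [1, 1, 0, 0, 1, 0]

print(permutation_importance(model, rows, labels, 0))  # usually positive
print(permutation_importance(model, rows, labels, 1))  # 0.0 -- unused feature
```

Techniques like this let a vendor explain what a model relies on without publishing the model itself, one way to strike the balance Novikov describes.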

Rigid Models: Data often evolves. This could be due to changes in preferences or even cultures. In light of this, AI needs to have ongoing monitoring.

“Systems should be developed to handle changes in the norm,” said Triveni Gandhi, who is a Data Scientist at Dataiku. “Whether it’s a recommendation engine, predictive maintenance system, or a fraud detection system, things change over time, so the idea of ‘normal’ behavior will continue to shift. Any AI system you build, no matter what the use case, needs to be agile enough to shift with changing norms. If it’s not, the system and its results will quickly be rendered useless.”
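Gandhi’s point about shifting norms implies concrete monitoring. A minimal sketch, assuming a simple mean-shift test (real systems use richer distribution comparisons), might look like:

```python
# Toy drift monitor: compare incoming data against the distribution the
# model was trained on, and flag when "normal" has moved. The threshold
# and transaction amounts below are invented for illustration.
from statistics import mean, stdev

def drifted(baseline, recent, threshold=2.0):
    """Flag drift when the recent mean sits more than `threshold`
    baseline standard deviations away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(recent) - mu) > threshold * sigma

# Transaction amounts a fraud model was trained on ...
baseline = [20, 25, 22, 21, 24, 23, 22, 25]
# ... versus two later batches: one similar, one sharply shifted.
print(drifted(baseline, [23, 22, 24]))   # False
print(drifted(baseline, [60, 75, 68]))   # True
```

A drift flag is only the trigger; the follow-up (retraining, relabeling, or redefining “normal”) is the agility Gandhi says any AI system needs.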

Simplicity: Some problems may not need AI!

Note the following from Chris Hausler, who is a Data Science Manager at Zendesk: “One big mistake I see is using AI as the default approach for solving any problem that comes your way. AI can bring a lot of value in a number of settings, but there are also many problems where a simple heuristic will perform almost as well with a much smaller research, deployment and maintenance overhead.”

Data: No doubt, you need quality data. But this is just the minimum. The fact is that data practices can easily go off the rails.

“Data quality is important but so is data purity,” said Dan Olley, who is the Global EVP and CTO of Elsevier, a division of RELX Group. “We all know dirty data can lead to poor or inaccurate models. Data acquisition, ingestion and cleansing is the hidden key to any AI system. However, be careful that in putting the data into the structures you need for one purpose, you aren’t destroying patterns in the raw data you didn’t know existed. These patterns could be useful later on for different insights. Always keep a copy of all raw data, errors and all.”

AI & Data: Avoiding The Gotchas

When it comes to an AI (Artificial Intelligence) project, there is usually lots of excitement. The focus is often on using new-fangled algorithms – such as deep learning neural networks – to unlock insights that will transform the business.

But in this process, something often gets lost: the importance of establishing the right plan for the data. Keep in mind that 80% of the time on an AI project can be spent identifying, storing, processing and cleansing data.

“The big gotcha is having bad data fed into your AI systems,” said David Linthicum, who is the Chief Cloud Strategy Officer at Deloitte Consulting LLP. “It’s only as smart as the data that it’s allowed to cull through. The quality of data is of utmost importance. The use of cloud computing allows for massive amounts of data to be stored for very low costs, which means that you can afford to provide all the data that your AI systems need.”

The data process can certainly be dicey. Even subtle changes can have a major impact on the outcomes.

So what to do to avoid the problems? Well, here are some strategies to consider:

Clear-Cut Focus: A majority of AI projects at traditional companies are about reducing costs, increasing revenues or keeping up with the competition. But all too often, the goals get muddled.

According to Stuart Dobbie, who is the Product Owner at Callsign: “Fundamentally, the core recurring problem remains simple: many businesses fail to clearly articulate their business problem prior to choosing the technologies and skill-sets required to solve it.”

The temptation is to overcomplicate things. But of course, this can mean an AI project goes off the rails and becomes a major waste of resources.

Overfitting: It seems like the more variables an AI model has, the better, right? Not really. A model with too many variables can end up fitting the quirks of its training data rather than what’s happening in the real world. This is known as overfitting. And it’s a common issue with data.

“Overfitting, for example, is not solely a data problem,” said Dan Olley, who is the Global EVP and CTO of Elsevier, “but also a model training problem. This all comes back to designing the training and testing of models carefully and incorporating a varied group of inputs to validate the training and testing.”
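Olley’s point can be demonstrated with a deliberately extreme case: a “model” that memorizes its training data is perfect on data it has seen and poor on data it has not, while a simpler rule generalizes. The dataset (noisy exam scores mapped to pass/fail) is invented:

```python
# Overfitting in miniature: a lookup-table "model" versus a simple
# threshold rule, scored on training data and on held-out test data.
# All scores and labels below are invented for illustration.

def memorizer(train):
    """'Overfit' model: a lookup table of the training set."""
    table = dict(train)
    return lambda x: table.get(x, 0)  # unseen inputs get a default guess

def threshold_rule(x):
    """'Simple' model: pass if score >= 50."""
    return 1 if x >= 50 else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

train = [(95, 1), (80, 1), (62, 1), (30, 0), (45, 0), (20, 0)]
test = [(88, 1), (70, 1), (55, 1), (40, 0), (25, 0), (10, 0)]

print(accuracy(memorizer(train), train))  # 1.0 -- perfect on training data
print(accuracy(memorizer(train), test))   # 0.5 -- fails to generalize
print(accuracy(threshold_rule, test))     # 1.0 -- the simple rule holds up
```

The gap between training and test performance is the signal to watch; careful train/test design, as Olley says, is how you see it before your users do.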

Noise: This is the result of mislabeled examples (class noise) or errors in the values of attributes (attribute noise). The good news is that class noise can often be identified and excluded. But attribute noise is another matter, as it usually does not show up as an outlier.

“In machine learning, most good algorithms have outlier identification/elimination embedded in the algorithm logic,” said Prasad Vuyyuru, who is a partner for the Enterprise Insights Practice at Infosys Consulting. “The data scientist or SME will still need to apply additional filters or decision trees during the learning stage to exclude certain data that may skew from the sample.”

One way to address this is cross-validation: say, by dividing the data into ten similar-sized folds. You then train the algorithm on nine folds and evaluate on the remaining one, repeating the process ten times so that each fold serves once as the test set.
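The ten-fold procedure just described can be sketched in plain Python. The `train_fn` and `score_fn` below are invented stand-ins for whatever model and metric are actually being validated:

```python
# k-fold cross-validation: split the data into k folds, train on k-1,
# score on the held-out fold, and repeat so every fold is scored once.

def k_fold_scores(data, train_fn, score_fn, k=10):
    folds = [data[i::k] for i in range(k)]   # k similar-sized folds
    scores = []
    for i in range(k):
        held_out = folds[i]
        training = [row for j, fold in enumerate(folds)
                    if j != i for row in fold]
        model = train_fn(training)
        scores.append(score_fn(model, held_out))
    return scores

# Invented stand-ins: "training" computes the mean label; "scoring" is
# the mean absolute error of that constant prediction on the held-out fold.
train_fn = lambda rows: sum(y for _, y in rows) / len(rows)
score_fn = lambda model, rows: sum(abs(model - y) for _, y in rows) / len(rows)

data = [(x, x % 2) for x in range(100)]   # invented toy dataset
scores = k_fold_scores(data, train_fn, score_fn)
print(len(scores))   # 10 scores, one per fold
```

Averaging the ten scores gives a far more honest estimate of real-world performance than a single lucky (or unlucky) train/test split.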

“We should always follow Ockham’s Razor which states that the best Machine Learning models are simple models that fit the data well,” said Vuyyuru.

Maintenance: AI models are not static. They can get better over time. Or, then again, they can decay because the data is not adequately updated. In other words, the data needs ongoing maintenance.

“AI systems are not like other pieces of software,” said Kurt Muehmel, who is the VP of Sales Engineering at Dataiku. “They can’t be released once and then forgotten. They take a lot of maintenance because people change, data changes, and models can drift over time. As more and more businesses develop AI systems, the issue of maintenance as a gotcha will quickly come to the forefront.”

Veeva CEO: Winning The AI Gold Rush

For the last 30 years, Peter Gassner has been at the center of various shifts in enterprise software. While at IBM Silicon Valley Lab, he worked in the mainframe market. Then he moved over to PeopleSoft, which was a pioneer of client-server platforms.

Yet he knew these technologies had major flaws. They were often rigid, clunky and complex. There was also the challenge of working with large amounts of data, which was often spread across silos.

But when Peter saw the emergence of cloud computing, he knew this technology was the answer. So in 2003, he joined Salesforce as the VP of Technology. He got a front-row seat on how the cloud could scale and transform organizations.

As with any new technology megatrend, the first applications were broad. And yes, this presented some complications. This is why, in 2007, Peter launched Veeva Systems, which focused on providing cloud services for the healthcare industry. This became part of a new wave called the “industry cloud.” The timing was spot on as Veeva saw strong adoption.

Fast forward to today: The company has a market cap of $16.5 billion and more than 600 life sciences customers. During the latest quarter, revenues jumped by 27% to $224.7 million and net income came to $64.1 million, up from $34.9 million in the same period a year ago.

AI and Healthcare?

When I first met Peter six years ago, I mentioned his company’s latest quarterly results. But he remarked: “I’m more concerned about where Veeva will be five years from now.”

This long-term thinking has certainly been critical for the success of Veeva, allowing for breakthrough innovations in the product line.

OK then, what about AI (Artificial Intelligence)? How will this be a part of Veeva’s product roadmap?

Well, of course, this is something that Peter has been thinking a lot about. “The industry is a gold rush,” he said. “And there will be more losers than winners.”

This should be no surprise. Whenever technology undergoes seismic change, there is overinvestment. Eventually this leads to a shakeout, with consolidation and shutdowns.

Now as for AI, Peter believes that success requires a key ingredient: data. Without it, a startup will have an extremely difficult challenge in standing out from the competition.

But this is not an issue for Veeva. Consider that its platform includes 70% of healthcare sales reps across the globe.

However, Peter did not rush to build an app to leverage the data. Instead, he first built a solid data layer, called Nitro, with the goal of making it easy to organize and classify data (based on industry-specific standards). To develop Nitro, Peter used Amazon’s Redshift, a petabyte-scale data warehouse service in the cloud, as the core database. The bottom line: insights can be accessed much more quickly (a traditional system could have lags of several weeks).

“It would have taken us much longer to build Nitro without Redshift,” said Peter.

But Nitro is just the first step. Veeva is currently working on an AI engine called Andi, which is a 24/7 assistant. It will crunch a wide variety of data about a life science company’s customers and suggest the best actions to take. “Some days Veeva Andi will recommend a field rep go see a particular doctor, send an email, share a piece of content, or invite them to an event,” said Peter. “It might also send things directly to the customer on behalf of a pharmaceutical company. Keep in mind that the amount of data needed to drive intelligent engagement is overwhelming. Humans can’t consume all that data, find patterns, and make sense of it. Veeva Andi will solve this, learn, and get smarter over time.”

For the most part, AI is still in its early days. But Peter does not want to take any shortcuts. He understands that any technology takes a while to gain adoption and make a real impact. The enterprise market also requires that things be done right, with a clear-cut return on investment. And no doubt, such things can easily get lost when there is a gold rush among technology vendors.

What You Need To Know About RPA (Robotic Process Automation)

RPA (Robotic Process Automation) is one of the hottest sectors of the tech market. Some of the world’s top venture capitalists – such as Sequoia Capital, CapitalG and Accel Partners – have invested enormous sums in startups like UiPath and Automation Anywhere. According to Grand View Research, spending on RPA is expected to hit $8.75 billion by 2024.

The irony is that this category has been around for a while and was kind of a backwater. But with the breakthroughs in AI (Artificial Intelligence), the industry has come to life.

So what is RPA? Well, first of all, the “robotic” part of the term is somewhat misleading. RPA is not about physical robots. Rather, the technology is based on software robots. They essentially are focused on automating tedious activities, such as in back offices. These include processing insurance claims, payroll, procurement, bank reconciliation, quote-to-cash and invoices.

And the impact on an organization can be transformative. For example, a company can leverage RPA to reduce its reliance on outsourcing, which can be a big cost saver. There may also be less need for hiring internally.

Yet even if there is not much of a drop in headcount, there should still be material improvements in productivity. Based on research from Automation Anywhere, RPA can automate processes 70% faster. In other words, employees will have more time to devote to value-added activities.

Here are some other benefits:

  • Accuracy: RPA greatly reduces human error.
  • Compliance: Legal and regulatory requirements can be embedded in the systems.
  • Tracking: There can be diagnosis of technical issues and monitoring of risks, such as with customer service.

“RPA is a way for enterprises to create a true virtual workforce that drives business agility and efficiency,” said Richard French, who is the CRO at Kryon. “It is managed just as any other team in the organization and can interact with people just as other employees would interact with one another.”

The process of using RPA is also straightforward. “Show your bots what you do, then let them do the work,” said Mukund Srigopal, who is the Director of Product Marketing at Automation Anywhere. “They can interact with any system or application the same way you do. Bots can learn and they can also be cloned. It’s code-free, non-disruptive, non-invasive, and easy. Leading RPA platforms can add a layer of cognitive intelligence to the automation of business processes.”

Now RPA is not without its challenges (hey, no technology is a panacea!). There does need to be a rethinking of the processes currently in place at an organization. After all, it’s not a good idea to automate a sub-par system!

Next, if there are major changes to the existing transaction platforms, it will take some time to retool the RPA. This can actually be a prolonged effort if there are many bots.

Bottom Line On RPA

When it comes to RPA, the costs for implementation are modest, especially when compared to the return. “Once implemented, the capabilities are easily scalable,” said French.

RPA is also a good way for a company to transition to AI. “There has been a proliferation of AI-enabled services in recent years, but businesses often struggle to operationalize them,” said Srigopal. “But RPA is a great way to infuse AI capabilities into business processes. A platform like ours can offer deep learning models built on neural networks to intelligently automate activities like document processing. For the most part, traditional RPA works well in very structured and predictable scenarios.”

Hiring For The AI (Artificial Intelligence) Revolution — Part II

For a recent post, I wrote about the required skillsets for hiring AI talent. No doubt, they are quite extensive — and in high demand.

Consider the following from Udacity:

“We’ve seen a tremendous rise in interest and enrollment in AI and machine learning, not just year over year but month over month as well. From 2017 to 2018, we saw over 30% growth in demand for courses on AI and machine learning. In 2018, we saw an even more significant rise with a 70% increase in demand for AI and machine learning courses. We anticipate interest to continue to grow month over month in 2019.”

Despite all this, when hiring AI people, you will still need to do your own training. And it must be ongoing. If not, there is a big risk of failure with a new AI hire.

So let’s see how various top companies are handling training:

Ohad Barnoy, VP of Customer Success, Kryon Systems:

Our AI developers start with an in-depth training itinerary in order to gain a deep understanding of our platforms. They do this via our home-grown on-line Kryon Academy, a program that helps further AI training in parallel with on-the-job training. The developer is assigned a three-week course in each one of our development pods and with QA.

Chris Hausler, Data Science Manager, Zendesk:

Research and technology in AI is moving so quickly that constant learning and upskilling is required to keep up with the state-of-the-art and do your job well. At Zendesk, we run a weekly paper club where we discuss emerging research related to our work and have frequent “lab days” where the team has time to experiment with new ideas.

From Atif Kureishy, Global VP, Emerging Practices at Teradata:

Though more and more people are retooling their skillsets by acquiring deep learning knowledge through avenues like massive open online courses (MOOCs) or Kaggle, it is rare to find people who can do it in practice – and this difference is important. The classroom or competitions are certainly a step in the right direction, but it does not replace real-life experience.

Organizations should deploy and rotate their AI teams across various business units to gain exposure and understand the challenges each line of business faces in building AI capabilities. This builds experiential knowledge that can be brought together in a Center of Excellence while carrying forward experiences from across the enterprise.

Guy Caspi, CEO and co-founder at Deep Instinct:

At Deep Instinct, we focus our training on two areas: a comprehensive understanding of deep learning, machine learning and big data, plus the domain our product is in. For instance, our cybersecurity experts consistently share their knowledge with our deep learning experts during the training process. The reason is that a deep (or machine) learning expert who is saturated with domain-specific knowledge (in our case, cybersecurity) during training will operate more effectively and be better adapted to real-world use cases.

Yogesh Patel, CTO & Head of AI Research, Callsign:

The line between Data Engineers, Software Engineers and Data Scientists is blurring when it comes to big data. There is a clear pull towards the latter, with more Data Engineers and Software Engineers seeking to become Data Scientists. With the introduction of deep learning, there is less and less need to spend huge amounts of time dealing with data exploration, data cleansing and feature engineering — at least in theory. Correspondingly, we are seeing more people claiming to be Data Scientists, but who are really just applying a brute force approach to machine learning.

Furthermore, we have training companies claiming that no prior knowledge in data curation is required and that no background in statistics is required. While that may be true in some domains, in the domain of cybersecurity we need more people with a solid understanding of the domain, as well as data science concepts. This means understanding the meaning and statistical properties and relationships between data attributes across a variety of data sources. It also means understanding how those data attributes and data sources might impact a given algorithm, especially when dealing with issues such as the imbalanced classification problem. For example, for the task of credit fraud detection, it means having an intuitive grasp about how, when and where a given transaction type occurs — a prerequisite for formulating and testing experimental hypotheses. In the same example, it also means understanding exactly how a given classification algorithm might be impacted when few to no examples of a given transaction type are available, and tuning or adapting the classification algorithm as necessary.
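To make the imbalanced-classification point concrete, here is a minimal sketch of how class weighting keeps a rare class (fraud) from being ignored. The data is synthetic and the use of scikit-learn is an illustrative assumption, not a description of Callsign's systems:

```python
# Illustrative sketch: class imbalance in fraud detection.
# The data is synthetic; real systems would use actual transaction features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 2))   # two toy features (e.g., amount, hour)
y = np.zeros(n, dtype=int)
y[:10] = 1                    # only 1% of transactions are "fraud"
X[y == 1] += 3.0              # shift fraud examples so a signal exists

# Without weighting, a model can reach 99% accuracy by always predicting
# "not fraud". class_weight="balanced" reweights classes inversely to
# their frequency so the rare fraud class is not ignored.
clf = LogisticRegression(class_weight="balanced")
clf.fit(X, y)

recall = clf.predict(X)[y == 1].mean()  # fraction of fraud caught
```

The point of the sketch is the one Patel makes: without understanding how an algorithm reacts to few or no examples of a transaction type, a naive model quietly optimizes for the majority class.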

Alex Spinelli, CTO, LivePerson:

Managers and leaders must learn the concepts. They must learn what is and is not an applicable use of AI.

For example, AI is powered by data and examples. Problems with limited history often cannot be solved easily by AI tools. This is referred to as the cold start problem.
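The cold start problem can be sketched as a toy learning curve: with very little history, held-out accuracy is typically much worse. The data here is synthetic and scikit-learn is an illustrative assumption:

```python
# Illustrative sketch: the "cold start" problem as a toy learning curve.
# Synthetic data stands in for a product's historical examples.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1000, stratify=y, random_state=0
)

scores = {}
for n in (20, 200, 800):  # how much "history" is available
    # Stratified subsample of size n, so both classes are always present.
    X_n, _, y_n, _ = train_test_split(
        X_train, y_train, train_size=n, stratify=y_train, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_n, y_n)
    scores[n] = clf.score(X_test, y_test)

# Accuracy generally climbs as more historical examples accumulate,
# which is exactly why problems with little history start "cold".
```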

Outputs from AI are not always predictable. This means that the linear nature of product design and workflows will change. It is not easy to reverse engineer why an AI system provided a specific answer. Another critical component to the training process is to develop new skills on product design that leverages AI. Product designers and leaders must understand statistics and probability in new ways.

Corey Berkey, Director of Human Resources, JazzHR:

Many companies are investing in training their workers to ensure they stay current with technology and advancements in the industry. While math and computer technology serve as the backbone of AI-focused roles, continuing education in the field is a must. Many online learning solutions today offer a variety of AI-related certifications from top-tier universities to help workers expand their knowledge in areas such as programming, machine learning, graphical modeling, and advanced mathematics. It's critical that companies focus on providing development opportunities to these transformative hires so they are able to fine-tune their skills and learn best practices from peers.

Hiring For The AI (Artificial Intelligence) Revolution – Part I

In the coming years, Artificial Intelligence (AI) is likely to be strategic for a myriad of industries. But there is a major challenge: recruiting. Simply put, it can be extremely tough to identify the right people who can leverage the technology (even worse, there is a fierce war for AI talent in Silicon Valley).

To be successful, it's essential for companies to understand the key skillsets required (and yes, they are evolving). So let's take a look:

Dan O’Connell, Chief Strategy Officer & Board Member, Dialpad:

I think it’s critical for “AI” teams (natural language processing, machine learning, etc.) to have a mix of backgrounds — hiring Ph.D’s and academics who are thinking about and building the latest innovations, but combining that with individuals who have worked in a business environment and know how to code, ship product and are used to the cadence of a start-up or technology company. You can’t go all academic, and you can’t go all first-hand experience. We found the mix to be important in both building models, designing features, and bringing things to market.

Sofus Macskassy, VP of Data Science, HackerRank:

Many don’t realize that you do not need a large team of deep learning experts to integrate AI in your business. A few experts, with a supporting staff of research engineers, product engineers and product managers can get the job done. There is much more to AI than deep learning, and businesses need to find a candidate with strong machine learning fundamentals. Many candidates with a theoretical background in machine learning have the tools they need to learn the job. Training AI talent on the specific needs for your business is cheaper and faster than training someone to be an AI expert. Hire strong research engineers that can take academic papers and equations and turn them into fast code. These are often engineers with a technical foundation in computer science, physics or electrical engineering. Together with your AI expert(s), they will make a powerful AI team. Add a product manager to tell them what product to build and you have a powerhouse.

Chris Hausler, Data Science Manager at Zendesk:

Any person working in the field of AI needs to be able to code and have solid mathematical and statistical skills. It's a misconception that you need a PhD to work in AI, but genuine curiosity and an eagerness to learn will help you keep up with this fast-moving field. Having the skills to implement and validate your own experimental ideas is a huge advantage.

We have found success hiring people from disciplines that focus on experimentation and problem solving. The Data Science team at Zendesk has a diverse background with people coming from Genetics, Economics, Pharmacy, Neuroscience, Computer Science and Machine Learning to name a few.

Atif Kureishy, Global VP, Emerging Practices at Teradata:

One could argue that the skills for AI are similar to those for data science (math, computer science and domain expertise), but the truth is that AI models are predicated on two things: automation and data, and lots of it.

Increasing sophistication in automating key aspects of building, training and deploying AI models (such as model selection, feature representation and hyperparameter tuning) means the skillset needed must focus on model lifecycle and model risk management principles to ensure model trust, transparency, safety and stability. Typically, these responsibilities are spread across roles that touch on policy, regulation, ethics, technology and data science. But they will need to converge to build AI at scale.
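As a small illustration of the automation Kureishy describes, here is a hedged sketch of hyperparameter tuning via cross-validated grid search. The data and parameter grid are illustrative assumptions using scikit-learn, not a description of Teradata's tooling:

```python
# Illustrative sketch: automating hyperparameter tuning with grid search.
# Synthetic data and a tiny grid stand in for a real model pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# GridSearchCV evaluates every candidate setting with 5-fold
# cross-validation and keeps the best-scoring configuration.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)

best_c = search.best_params_["C"]   # the winning regularization strength
best_score = search.best_score_     # its mean cross-validated accuracy
```

Automation of this kind shifts the human work away from hand-tuning and toward the lifecycle and risk-management questions the quote raises.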

Guy Caspi, CEO and co-founder at Deep Instinct:

People who have strong academic backgrounds sometimes lean towards one of two directions: either they cannot leave a project until it’s perfect, often missing important deadlines – or the opposite: they’re satisfied with basic academic-level standards that may not meet an organization’s production requirements. We search out people who have both a strong academic background, but also have a strong product/operational inclination.

Cool AI Highlights At CES

AI was definitely the dominant theme at CES this week. According to a keynote from LG’s president and CTO, Dr. I.P. Park, this technology is “an opportunity of our lifetime to open the next chapter in … human progress.”

Wow! Yes, it's heady stuff. But then again, AI really is becoming pervasive. Consider that Amazon recently announced that more than 100 million Alexa devices have been sold.

OK then, for CES – which, by the way, had about 180,000 attendees and more than 2.9 million net square feet of exhibit space in Las Vegas — what were some of the standout innovations? Let’s take a look:

3D Tracking: Intel and Alibaba announced a partnership to allow for real-time tracking of athletes. The technology, which is based on AI-capable Intel Xeon Scalable processors, creates a 3D mesh of a person that captures real-time biomechanical data. Note there is no need for the athlete to wear any sensors. Essentially, the AI and computer vision systems will process the digital data.  Oh, and Intel and Alibaba will showcase the technology at next year’s Tokyo Olympic Games.

Intel executive vice president and general manager of the Data Center Group, Navin Shenoy, notes: "This technology has incredible potential as an athlete training tool and is expected to be a game-changer for the way fans experience the Games, creating an entirely new way for broadcasters to analyze, dissect and reexamine highlights during instant replays."

AI For Your Mouth: Yes, Oral-B showcased its latest electric toothbrush, called Genius X. As the name implies, it has whiz-bang AI systems built in. They focus on tracking a person's brushing style so as to provide personalized feedback. The device will hit the market in September.

Connected Bathroom: Baracoda Group Company thinks there is lots of opportunity here. This is why it has leveraged its CareOS platform – which uses AI, Augmented Reality (AR) and 4D facial/object recognition – to create a smart mirror. Called Artemis, it has quite a few interesting features. Just a few of them include the following:

  • Visual Acuity Test: This tracks the changes in your vision.
  • AR Virtual Try-on: You can digitally apply beauty products like lipstick and eyeliners.
  • AR Tutorials: You can get coaching on hairstyles, makeup and so on.
  • Voice Commands: You can talk to the mirror to change the lights, control the mirror and adjust the shower settings.

Artemis will hit the market sometime in the second half of this year. However, the device will not be cheap – retailing at $20,000.

Cuddly Robot: AI is key for many robots.  Yet there are problems.  After all, robots are usually far from lifelike because of their stiff movements and metallic exteriors.

But Groove X takes a different tack. The company has developed Lovot, which looks like a teddy bear. Think of it as, well, a replacement for your pet.

There is quite a bit of engineering inside the Lovot, which has more than 50 sensors and uses deep learning (Groove X calls it Emotional Robotics).  Basically, the focus is to bring the power of love to machines.

As for when the Lovot will launch, it will be sometime in 2020, with a price tag of about $3,000.

Voice Identity: There has continued to be lots of innovation in this category. For example, at CES Pindrop launched a voice identity platform for IoT, voice assistants, smart homes/offices and connected cars. This technology means that you no longer have to use PIN codes to gain access to your accounts or devices. Instead, Pindrop can authenticate you instantly when you start to talk.