Don’t Make These AI Blunders

Entrepreneur Tom Siebel has a knack for anticipating megatrends in technology. In the 1980s, he joined Oracle during its hyper-growth phase because he saw the power of relational databases. He then went on to start Siebel Systems, which pioneered the CRM space.

But of course, Tom is far from done. So what is he focused on now? Well, his latest venture, C3, is targeting the AI (Artificial Intelligence) market. The company’s software helps with the development, deployment and operation of this technology at scale, such as across IoT environments.

“AI has huge social, economic and environmental benefits,” said Tom. “Look at the energy industry. AI will make things more reliable and safer. There’ll be little downside. Yet AI is not all goodness and light either. There are many unintended negative consequences.”

He points out some major risk factors like privacy and cybersecurity. “What we saw with Facebook and Cambridge Analytica was just a dress rehearsal,” he said.

But when it comes to AI, some of the problems may be subtle, even though the consequences can still be severe.

Here’s an example:  Suppose you develop an AI system and it has a 99% accuracy rate for detecting cancer. This would be impressive, right?

Not necessarily. The model could actually be way off because of low-quality data, the wrong algorithms, bias or a faulty sample. In other words, our cancer test could potentially lead to terrible results.

“Basically, all AI to date has been trained wrong,” said Arijit Sengupta. “This is because all AI focuses on accuracy in some form instead of optimizing the impact on various stakeholders. The benefit of predicting something correctly is never the same as the cost of making a wrong prediction.”
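Both points, the accuracy trap and the asymmetric cost of wrong predictions, can be made concrete with a small sketch; the prevalence, costs and "models" below are hypothetical, purely for illustration:

```python
# With 1% prevalence, a "model" that always predicts "healthy" scores
# 99% accuracy while detecting zero cases. And once a miss costs more
# than a false alarm, the most accurate model is not the best one.
COST_FN = 100.0  # assumed cost of missing a real case (false negative)
COST_FP = 1.0    # assumed cost of a needless follow-up (false positive)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def total_cost(y_true, y_pred):
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return fn * COST_FN + fp * COST_FP

# 1,000 patients, 10 of whom actually have cancer (1% prevalence)
y_true = [1] * 10 + [0] * 990

pred_a = [0] * 1000            # always "healthy": 99% accurate
pred_b = [1] * 60 + [0] * 940  # catches all 10 cases, with 50 false alarms

print(accuracy(y_true, pred_a), total_cost(y_true, pred_a))  # 0.99 1000.0
print(accuracy(y_true, pred_b), total_cost(y_true, pred_b))  # 0.95 50.0
```

The "worse" model by accuracy is twenty times cheaper once the stakes of each error type are priced in, which is exactly Sengupta's point.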

But of course, there are other nagging issues to keep in mind. Let’s take a look:

Transparency: One of the main drivers of AI innovation has been deep learning. But this process is highly complex and involves many hidden layers. This can mean that an AI model is essentially a black box.

“As algorithms become more advanced and complex, it’s becoming increasingly difficult to understand how decisions are made and correlations are found,” said Ivan Novikov, who is the CEO of Wallarm. “And because companies tend to keep proprietary AI algorithms private, there’s been a lack of scrutiny that further complicates matters. In order to address this issue of transparency, AI developers will need to strike a balance between allowing algorithms to be openly reviewed, while keeping company secrets under wraps.”

Rigid Models: Data often evolves. This could be due to changes in preferences or even cultures. In light of this, AI models need ongoing monitoring.

“Systems should be developed to handle changes in the norm,” said Triveni Gandhi, who is a Data Scientist at Dataiku. “Whether it’s a recommendation engine, predictive maintenance system, or a fraud detection system, things change over time, so the idea of ‘normal’ behavior will continue to shift. Any AI system you build, no matter what the use case, needs to be agile enough to shift with changing norms. If it’s not, the system and its results will quickly be rendered useless.”
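As a rough illustration of what such ongoing monitoring might look like (the data and threshold below are made up), a system can compare a recent window of a model input against a reference window and flag when the distribution shifts:

```python
from statistics import mean, stdev

def drifted(reference, recent, z_threshold=3.0):
    """Flag drift when the recent mean sits more than z_threshold
    standard errors away from the reference mean."""
    se = stdev(reference) / (len(recent) ** 0.5)
    return abs(mean(recent) - mean(reference)) > z_threshold * se

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8, 10.1, 10.4, 9.9, 10.6]
stable   = [10.3, 9.7, 10.0, 10.8, 9.6, 10.2]
shifted  = [14.1, 15.0, 13.8, 14.6, 14.9, 14.3]

print(drifted(baseline, stable))   # False: "normal" behavior
print(drifted(baseline, shifted))  # True: the norm has moved
```

In practice the check would run on a schedule against live feature and prediction distributions, triggering retraining or review rather than a print statement.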

Simplicity: Some problems may not need AI!

Note the following from Chris Hausler, who is a Data Science Manager at Zendesk: “One big mistake I see is using AI as the default approach for solving any problem that comes your way. AI can bring a lot of value in a number of settings, but there are also many problems where a simple heuristic will perform almost as well with a much smaller research, deployment and maintenance overhead.”

Data: No doubt, you need quality data. But this is just the minimum. The fact is that data practices can easily go off the rails.

“Data quality is important but so is data purity,” said Dan Olley, who is the Global EVP and CTO of Elsevier, a division of RELX Group. “We all know dirty data can lead to poor or inaccurate models. Data acquisition, ingestion and cleansing is the hidden key to any AI system. However, be careful that in putting the data into the structures you need for one purpose, you aren’t destroying patterns in the raw data you didn’t know existed. These patterns could be useful later on for different insights. Always keep a copy of all raw data, errors and all.”
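Olley's closing advice, keeping the raw data alongside the cleaned version, can be sketched as a toy in-memory pipeline (the record fields and cleaning rules here are purely illustrative):

```python
import copy

RAW_STORE, CLEAN_STORE = [], []

def ingest(record):
    """Persist the raw record untouched, errors and all, before deriving
    the cleaned view, so later projects can still mine patterns the
    cleaning step threw away."""
    RAW_STORE.append(copy.deepcopy(record))
    cleaned = {k: v.strip().lower() if isinstance(v, str) else v
               for k, v in record.items() if v not in (None, "")}
    CLEAN_STORE.append(cleaned)

ingest({"name": "  Ada ", "age": 36, "notes": ""})
print(RAW_STORE[0])    # {'name': '  Ada ', 'age': 36, 'notes': ''}
print(CLEAN_STORE[0])  # {'name': 'ada', 'age': 36}
```

The cleaned view is convenient for one model, but only the raw store preserves the whitespace, casing and empty fields that a future analysis might find meaningful.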

What You Need To Know About The Low-Code Market

The low-code market tends to get drowned out by other buzzy tech trends. But it shouldn’t. According to Forrester Research, total spending on the category is forecast to hit $21.2 billion by 2022, a compound annual growth rate of roughly 40%.

Low-code is focused on making it easier and quicker to develop applications, such as with a drag-and-drop interface and integrations. By comparison, traditional code development can require thousands or even millions of lines of code.

“With low-code, innovative apps can be delivered 10x faster and organizations can turn on a dime, adapting their systems at the speed of business,” said Paulo Rosado, who is the CEO at OutSystems. His firm is one of the top operators in the sector, with more than 1,000 customers.

The main use cases for low-code include the following:

  • Internal Apps: These help improve a company’s core functions like HR, sales/marketing and financial reporting.
  • Customer Facing Apps: The solutions can be quite sophisticated, such as with rich user experiences, security and mobile capabilities.
  • Legacy App Replacement: There are huge swaths of old code across many organizations. With low-code, a company can upgrade these systems for modern approaches.

But there are certainly challenges and issues with low-code development. Interestingly enough, there can be resistance from within your company, as this technology may appear to be a threat. The irony is that this can lead to “shadow IT,” in which people use low-code tools without approval from IT, raising risks for compliance and security.

Then there is the problem of finding the right low-code tool. Consider that there are more than 100 vendors.

So here are some things to keep in mind:

  • Vendor Trust: It may be difficult, if not impossible, to switch vendors because of proprietary data models. This is why it’s important to focus on firms with a history of execution and commitment to the market.
  • Understanding the Feature Sets: Low-code platforms usually cover specific areas, say small business or enterprise environments. Given this, you need to clearly identify your goals.
  • Ease of Use: This is not just about the UI. You also want a vendor that has helpful resources – like videos and training – as well as strong customer support.
  • Collaboration: Look for a low-code platform that allows for granular sharing of permissions. Consider that these types of apps usually involve teams.
  • Integrations: Does the low-code platform connect to the apps you use in your organization?

The Future Of Low-Code

Low-code is likely to become a standard across many organizations. “Every business user at a company is generally given the same set of productivity tools: Email, Chat, Word Processor, Spreadsheet, and a Presentation tool,” said Tejas Gahdia, who is the Head Evangelist for Zoho Creator and the Zoho Developer Platform. “Low-code tools will sit alongside these applications for every employee of an organization as they will be considered a core productivity tool.” Zoho Creator, which was launched in 2006, is one of the top low-code tools on the market. It has over 1 million users and 3 million applications built.

If anything, low-code will probably be essential for being competitive in the years ahead. “Every day in the news, it seems, we hear about another company that is either being disrupted or going out of business completely as they weren’t able to adapt quickly enough,” said Paulo. “Organizations simply cannot spend years trying to get their IT shops in order so that they can compete. Low-code provides a real and proven way for organizations to develop new digital solutions much faster and leapfrog their competitors.”

Facebook @ 15: Takeaways For Entrepreneurs

In February 2004, Mark Zuckerberg launched a fairly basic site that allowed Harvard students to connect. At first, he considered it an ordinary project. But of course, the site would quickly become a growth machine.

Yet there were still many challenges for Zuckerberg. In the early days, he had to fend off fierce rivals, such as Friendster and MySpace. There were also challenges like scaling the infrastructure and dealing with privacy snafus.

Despite all this, Zuckerberg was able to win the war – and build one of the world’s most valuable companies.

So then, what are some of the lessons for entrepreneurs from this amazing journey? Well, let’s take a look:

Ronn Torossian, CEO and Founder of 5WPR:

“Right from the start, Facebook did an outstanding job of hiring the right talent. As an entrepreneur, you have to trust other people to work on other components of your business—as much as you want to, you can’t do everything yourself. Take Sheryl Sandberg for example. When Zuckerberg hired Sandberg, he gave himself the room to spend more time on his true strengths – improving the Facebook platform – while she ran more of the business operations like PR, expansions, communications, etc. A company is only as good as the people who work there, and Zuckerberg has always invested in hiring the right people to continuously grow and keep things fresh. Zuckerberg’s focus has always been on two things: having a crystal clear trajectory for the company and a great team driving it in this direction.”

Magnus Larsson, CEO of Rebtel:

“It’s impressive to see how Facebook has continued to bet on its original product, platform and vision, while adding value for the company and users. Mark wrote in a letter to his investors before becoming listed, ‘Facebook is created to make the world more open and connected. Facebook aspires to build the services that give people the power to share.’ Staying true to that vision led to two of the best acquisitions ever: Instagram and WhatsApp. Consumer tech companies should always keep an eye on the new kids on the block to avoid becoming antiquated. If someone else beats you to it, be open to partnerships or other ways of working together. Not every newcomer has to be a threat.”

From Ryan Kelly, the VP of Marketing at Nanigans:

“Facebook has truly turned into an advertising giant over the last fifteen years. What has enabled this is threefold: 1) the platform’s massive reach 2) the identity data within it and 3) the enormous internal investment around Facebook’s ad tech stack. While the first two points get most of the attention, what really enabled them to stand out from the rest of the players in the space was, and still is #3: the innovations around ad technology.

“2012/2013 was really the inflection point. ‘The Social Graph’ was unleashed in 2012 giving Facebook’s advertisers access to user behavioral data like never before. A year later Facebook introduced their conversion pixel, website custom audience pixel, lookalike audience tool, and video ads. This is where many advertisers went from trying to get ‘likes’ or ‘clicks’ to actually acquiring revenue generating customers.

“While it’s true that Facebook invented the news feed as we know it and has had many innovations on the consumer facing side of its business, the ad platform is the main reason the company is where it is today. And while the ad blueprint and tech behind Facebook’s success is oftentimes imitated by its social competitors, it has never been truly duplicated.”

Gil Sommer, Head of Product at Connatix:

“Facebook has shown us that listening to users, understanding their needs and adjusting products to their liking is a winning formula. Just look at Stories— this format was not invented by Facebook. Actually, they initially thought Stories wasn’t a great product and tried to provide alternatives. But their users were not happy, and Facebook listened. They understood that what makes the Stories format great is its immersive nature, and the freshness of the content. Facebook decided to react in a smart way. First, they adapted the format. Nonetheless, they still kept focus on their core competency (massive scale of visual UGC). This combination is one of the most successful stories — pun intended — in Facebook’s history and clearly they were able to outperform the original format. Facebook has shown us that adjusting course to your user’s needs is always a solid strategy. Keeping your core competencies in the process makes it a winning one.”

Robert Levenhagen, CEO and Co-Founder of InfluencerDB:

“From the relatively early stages, Facebook benefited a great deal from the environment of business and end-user applications around their platform. Their open API policies and growth attracted thousands and thousands of partners making their platform and products ever more user friendly. The downside of this open approach and hypergrowth showed when bad players took advantage of the opportunities and turned the good intention into bad results for the users and the general public. But Facebook has taken a lot of steps to right this wrong ever since.”

AI & Data: Avoiding The Gotchas

When it comes to an AI (Artificial Intelligence) project, there is usually lots of excitement. The focus is often on using new-fangled algorithms – such as deep learning neural networks – to unlock insights that will transform the business.

But in this process, something often gets lost: the importance of establishing the right plan for the data. Keep in mind that 80% of an AI project’s time can be spent on identifying, storing, processing and cleansing data.

“The big gotcha is having bad data fed into your AI systems,” said David Linthicum, who is the Chief Cloud Strategy Officer at Deloitte Consulting LLP. “It’s only as smart as the data that it’s allowed to cull through. The quality of data is of utmost importance. The use of cloud computing allows for massive amounts of data to be stored for very low costs, which means that you can afford to provide all the data that your AI systems need.”

The data process can certainly be dicey. Even subtle changes can have a major impact on the outcomes.

So what to do to avoid the problems? Well, here are some strategies to consider:

Clear-Cut Focus: A majority of AI projects for traditional companies are about reducing costs, increasing revenues or keeping up with the competition. But for the most part, the goals can get easily muddled.

According to Stuart Dobbie, who is the Product Owner at Callsign: “Fundamentally, the core recurring problem remains simple: many businesses fail to clearly articulate their business problem prior to choosing the technologies and skill-sets required to solve it.”

The temptation is to overcomplicate things. But of course, this can mean an AI project goes off the rails and becomes a major waste of resources.

Overfitting: It seems like the more variables an AI model has, the better, right? Not really. With too many variables, a model can end up fitting the noise in its training data rather than the underlying pattern, so it fails to reflect what’s happening in the real world. This is known as overfitting. And it’s a common issue with data.

“Overfitting, for example, is not solely a data problem,” said Dan Olley, who is the Global EVP and CTO of Elsevier, “but also a model training problem. This all comes back to designing the training and testing of models carefully and incorporating a varied group of inputs to validate the training and testing.”

Noise: This is the result of mislabeled examples (class noise) or errors in the values of attributes (attribute noise). The good news is that class noise can be easily identified and excluded. But attribute noise is another matter. This usually does not show up as an outlier.

“In machine learning algorithms, most good ones have the outlier identification/ elimination embedded in the algorithm logic,” said Prasad Vuyyuru, who is a partner for the Enterprise Insights Practice at Infosys Consulting. “The data scientist or SME will still need to apply additional filters or decision trees during the learning stage to exclude certain data that may skew from the sample.”

One way to address this is cross-validation, say by dividing the data into ten similar-sized folds. You then train the algorithm on nine folds and evaluate it on the held-out fold, rotating so that each fold serves as the test set once, for ten rounds in total.
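That ten-fold procedure can be written out in a few lines of Python (the majority-vote "model" below is just a hypothetical stand-in so the sketch runs on its own):

```python
def k_fold_score(data, labels, train_fn, score_fn, k=10):
    """Split into k similar-sized folds; train on the other k-1 folds,
    score on the held-out fold, and rotate so every fold is the test
    set exactly once. Returns the mean score across the k rounds."""
    n = len(data)
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    scores, start = [], 0
    for size in sizes:
        stop = start + size
        train = [i for i in range(n) if i < start or i >= stop]
        test = list(range(start, stop))
        model = train_fn([data[i] for i in train], [labels[i] for i in train])
        scores.append(score_fn(model,
                               [data[i] for i in test],
                               [labels[i] for i in test]))
        start = stop
    return sum(scores) / k

# Toy stand-ins to keep the sketch self-contained: the "model" simply
# predicts the most common label seen during training.
def train_majority(xs, ys):
    return max(set(ys), key=ys.count)

def score_accuracy(model, xs, ys):
    return sum(y == model for y in ys) / len(ys)

data = list(range(100))
labels = [1] * 70 + [0] * 30  # 70% of the examples carry label 1

print(k_fold_score(data, labels, train_majority, score_accuracy))  # 0.7
```

Because the data is unshuffled, the all-0 folds at the end score zero, dragging the mean down; in practice the data would be shuffled (or stratified) before folding.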

“We should always follow Ockham’s Razor which states that the best Machine Learning models are simple models that fit the data well,” said Vuyyuru.

Maintenance: AI models are not static. They get better over time. Or, then again, they could actually decay over time because the data is not adequately updated. In other words, the data needs ongoing maintenance.

“AI systems are not like other pieces of software,” said Kurt Muehmel, who is the VP of Sales Engineering at Dataiku. “They can’t be released once and then forgotten. They take a lot of maintenance because people change, data changes, and models can drift over time. As more and more businesses develop AI systems, the issue of maintenance as a gotcha will quickly come to the forefront.”

Veeva CEO: Winning The AI Gold Rush

For the last 30 years, Peter Gassner has been at the center of various shifts in enterprise software. While at IBM Silicon Valley Lab, he worked in the mainframe market. Then he moved over to PeopleSoft, which was a pioneer of client-server platforms.

Yet he knew these technologies had major flaws. They were often rigid, clunky and complex. There was also the challenge of working with large amounts of data, which was often spread across silos.

But when Peter saw the emergence of cloud computing, he knew this technology was the answer. So in 2003, he joined one of the cloud’s pioneers as VP of Technology, getting a first-row seat on how the cloud could scale and transform organizations.

As with any new technology megatrend, the first applications were broad. And yes, this presented some complications. This is why, in 2007, Peter launched Veeva Systems, which focused on providing cloud services for the healthcare industry. This became part of a new wave called the “industry cloud.” The timing was spot on as Veeva saw strong adoption.

Fast forward to today: The company has a market cap of $16.5 billion and more than 600 life sciences customers. During the latest quarter, revenues jumped by 27% to $224.7 million and net income came to $64.1 million, up from $34.9 million in the same period a year ago.

AI and Healthcare?

When I first met Peter six years ago, I mentioned his company’s latest quarterly results. But he remarked: “I’m more concerned about where Veeva will be five years from now.”

This long-term thinking has certainly been critical for the success of Veeva, allowing for breakthrough innovations in the product line.

OK then, what about AI (Artificial Intelligence)? How will this be a part of Veeva’s product roadmap?

Well, of course, this is something that Peter has been thinking a lot about. “The industry is a gold rush,” he said. “And there will be more losers than winners.”

This should be no surprise. Whenever technology undergoes seismic change, there is overinvestment. Eventually this leads to a shakeout, with consolidation and shutdowns.

Now as for AI, Peter believes that success requires a key ingredient: data. Without it, a startup will have an extremely difficult challenge in standing out from the competition.

But this is not an issue for Veeva. Consider that its platform includes 70% of healthcare sales reps across the globe.

However, Peter did not rush to build an app to leverage the data. He instead first built a solid data layer, called Nitro, with the goal of making it easy to organize and classify data based on industry-specific standards. To develop Nitro, Peter used Amazon Redshift, a petabyte-scale cloud data warehouse service, as the core database. The bottom line: insights can be accessed much more quickly (a traditional system could have lags of several weeks).

“It would have taken us much longer to build Nitro without Redshift,” said Peter.

But Nitro is just the first step. Veeva is currently working on an AI engine called Andi, which is a 24/7 assistant. It will crunch a wide variety of data about a life science company’s customers and suggest the best actions to take. “Some days Veeva Andi will recommend a field rep go see a particular doctor, send an email, share a piece of content, or invite them to an event,” said Peter. “It might also send things directly to the customer on behalf of a pharmaceutical company. Keep in mind that the amount of data needed to drive intelligent engagement is overwhelming. Humans can’t consume all that data, find patterns, and make sense of it. Veeva Andi will solve this, learn, and get smarter over time.”

For the most part, AI is still in the early days. But Peter does not want to take any shortcuts. He understands that any technology takes a while to gain adoption and make a real impact. The enterprise market also requires that things be done right, with a clear-cut return on investment. And no doubt, such things can easily get lost when there is a gold rush among technology vendors.

What You Need To Know About RPA (Robotic Process Automation)

RPA (Robotic Process Automation) is one of the hottest sectors of the tech market. Some of the world’s top venture capitalists – such as Sequoia Capital, CapitalG and Accel Partners – have invested enormous sums in startups like UiPath and Automation Anywhere. According to Grand View Research, Inc., the spending on RPA is expected to hit $8.75 billion by 2024.

The irony is that this category has been around for a while and was kind of a backwater. But with breakthroughs in AI (Artificial Intelligence), the industry has come to life.

So what is RPA? Well, first of all, the “robotic” part of the term is somewhat misleading. RPA is not about physical robots. Rather, the technology is based on software robots. They essentially are focused on automating tedious activities, such as in back offices. These include processing insurance claims, payroll, procurement, bank reconciliation, quote-to-cash and invoices.

And the impact can be transformative on an organization. For example, a company can leverage RPA to reduce its reliance on outsourcing, which can be a big cost saver. There may also be less of a need for hiring internally.

Yet even if there is not much of a drop in headcount, there should still be material improvements in productivity. Based on research from Automation Anywhere, RPA can automate processes 70% faster. In other words, employees will have more time to devote to value-added activities.

Here are some other benefits:

  • Accuracy: RPA eliminates human error.
  • Compliance: Legal and regulatory requirements can be embedded in the systems.
  • Tracking: There can be diagnosis of technical issues and monitoring of risks, such as with customer service.

“RPA is a way for enterprises to create a true virtual workforce that drives business agility and efficiency,” said Richard French, who is the CRO at Kryon. “It is managed just as any other team in the organization and can interact with people just as other employees would interact with one another.”

The process of using RPA is also straightforward. “Show your bots what you do, then let them do the work,” said Mukund Srigopal, who is the Director of Product Marketing at Automation Anywhere. “They can interact with any system or application the same way you do. Bots can learn and they can also be cloned. It’s code-free, non-disruptive, non-invasive, and easy. Leading RPA platforms can add a layer of cognitive intelligence to the automation of business processes.”

Now RPA is not without its challenges (hey, no technology is a panacea!). There does need to be a rethinking of an organization’s current processes. After all, it’s not a good idea to automate a sub-par system!

Next, if there are major changes to the existing transaction platforms, it will take some time to retool the RPA. This can actually be a prolonged effort if there are many bots.

Bottom Line On RPA

When it comes to RPA, the costs for implementation are modest, especially when compared to the return. “Once implemented, the capabilities are easily scalable,” said French.

RPA is also a good way for a company to transition to AI. “There has been a proliferation of AI-enabled services in recent years, but businesses often struggle to operationalize them,” said Srigopal. “But RPA is a great way to infuse AI capabilities into business processes. A platform like ours can offer deep learning models built on neural networks to intelligently automate activities like document processing. For the most part, traditional RPA works well in very structured and predictable scenarios.”

Hiring For The AI (Artificial Intelligence) Revolution — Part II

In a recent post, I wrote about the required skillsets for hiring AI talent. No doubt, they are quite extensive and in high demand.

Consider the following from Udacity:

“We’ve seen a tremendous rise in interest and enrollment in AI and machine learning, not just year over year but month over month as well. From 2017 to 2018, we saw over 30% growth in demand for courses on AI and machine learning. In 2018, we saw an even more significant rise with a 70% increase in demand for AI and machine learning courses. We anticipate interest to continue to grow month over month in 2019.”

Despite all this, when hiring AI people, you will still need to do your own training. And it must be ongoing. If not, there is a big risk of failure with a new AI hire.

So let’s see how various top companies are handling training:

Ohad Barnoy, VP of Customer Success, Kryon Systems:

Our AI developers start with an in-depth training itinerary in order to gain a deep understanding of our platforms. They do this via our home-grown on-line Kryon Academy, a program that helps further AI training in parallel with on-the-job training. The developer is assigned a three-week course in each one of our development pods and with QA.

Chris Hausler, Data Science Manager, Zendesk:

Research and technology in AI is moving so quickly that constant learning and upskilling is required to keep up with the state-of-the-art and do your job well. At Zendesk, we run a weekly paper club where we discuss emerging research related to our work and have frequent “lab days” where the team has time to experiment with new ideas.

From Atif Kureishy, Global VP, Emerging Practices at Teradata:

Though more and more people are retooling their skillsets by acquiring deep learning knowledge through avenues like massive open online courses (MOOCs) or Kaggle, it is rare to find people who can do it in practice – and this difference is important. The classroom or competitions are certainly a step in the right direction, but it does not replace real-life experience.

Organizations should deploy and rotate their AI teams across various business units to gain exposure and understand the challenges that lines of business face in building AI capabilities. This enables experiential knowledge to be brought together in a Center of Excellence while carrying forward experiences from across the enterprise.

Guy Caspi, CEO and co-founder at Deep Instinct:

At Deep Instinct, we focus our training primarily on two areas: Comprehensive understanding of deep learning, machine learning and big data, plus one additional area: the domain our product is in. For instance, our cybersecurity experts are consistently sharing their knowledge with our deep learning experts during the training process. The reason is that a deep (or machine) learning expert who is saturated with knowledge specific to the domain (in our case cybersecurity) during training will operate more effectively and be better adapted to real-world use cases.

Yogesh Patel, CTO & Head of AI Research, Callsign:

The line between Data Engineers, Software Engineers and Data Scientists is blurring when it comes to big data. There is a clear pull towards the latter, with more Data Engineers and Software Engineers seeking to become Data Scientists. With the introduction of deep learning, there is less and less need to spend huge amounts of time dealing with data exploration, data cleansing and feature engineering — at least in theory. Correspondingly, we are seeing more people claiming to be Data Scientists, but who are really just applying a brute force approach to machine learning.

Furthermore, we have training companies claiming that no prior knowledge in data curation is required and that no background in statistics is required. While that may be true in some domains, in the domain of cybersecurity we need more people with a solid understanding of the domain, as well as data science concepts. This means understanding the meaning and statistical properties and relationships between data attributes across a variety of data sources. It also means understanding how those data attributes and data sources might impact a given algorithm, especially when dealing with issues such as the imbalanced classification problem. For example, for the task of credit fraud detection, it means having an intuitive grasp about how, when and where a given transaction type occurs — a prerequisite for formulating and testing experimental hypotheses. In the same example, it also means understanding exactly how a given classification algorithm might be impacted when few to no examples of a given transaction type are available, and tuning or adapting the classification algorithm as necessary.

Alex Spinelli, CTO, LivePerson:

Managers and leaders must learn the concepts. They must learn what is and is not an applicable use of AI.

For example, AI is powered by data and examples. Problems that have limited history are often not good examples of ones easily solved by AI tools. This is referred to as the cold start problem.

Outputs from AI are not always predictable. This means that the linear nature of product design and workflows will change. It is not easy to reverse engineer why an AI system provided a specific answer. Another critical component to the training process is to develop new skills on product design that leverages AI. Product designers and leaders must understand statistics and probability in new ways.

Corey Berkey, Director of Human Resources, JazzHR:

Many companies are investing in training their workers to ensure they are staying current with technology and advancements in the industry. While math and computer technology serve as the backbone of AI-focused roles, continuing education in the field is a must. Many online learning solutions today offer a variety of AI-related certifications from top-tier universities to help workers expand their knowledge in areas such as programming, machine learning, graphical modeling, and advanced mathematics. It’s critical that companies focus on providing development opportunities to these transformative hires so they are able to fine-tune their skills and learn best practices from peers.

Hiring For The AI (Artificial Intelligence) Revolution – Part I

In the coming years, Artificial Intelligence (AI) is likely to be strategic for a myriad of industries. But there is a major challenge: recruiting. Simply put, it can be extremely tough to identify the right people who can leverage the technology (even worse, there is a fierce war for AI talent in Silicon Valley).

To be successful, it’s essential for companies to understand the key skillsets required (and yes, they are evolving). So let’s take a look:

Dan O’Connell, Chief Strategy Officer & Board Member, Dialpad:

I think it’s critical for “AI” teams (natural language processing, machine learning, etc.) to have a mix of backgrounds — hiring Ph.D’s and academics who are thinking about and building the latest innovations, but combining that with individuals who have worked in a business environment and know how to code, ship product and are used to the cadence of a start-up or technology company. You can’t go all academic, and you can’t go all first-hand experience. We found the mix to be important in both building models, designing features, and bringing things to market.

Sofus Macskassy, VP of Data Science, HackerRank:

Many don’t realize that you do not need a large team of deep learning experts to integrate AI in your business. A few experts, with a supporting staff of research engineers, product engineers and product managers can get the job done. There is much more to AI than deep learning, and businesses need to find a candidate with strong machine learning fundamentals. Many candidates with a theoretical background in machine learning have the tools they need to learn the job. Training AI talent on the specific needs for your business is cheaper and faster than training someone to be an AI expert. Hire strong research engineers that can take academic papers and equations and turn them into fast code. These are often engineers with a technical foundation in computer science, physics or electrical engineering. Together with your AI expert(s), they will make a powerful AI team. Add a product manager to tell them what product to build and you have a powerhouse.

Chris Hausler, Data Science Manager at Zendesk:

Any person working in the field of AI needs to be able to code and have solid mathematical and statistical skills. It’s a misconception that you need a PhD to work in AI, but genuine curiosity and an eagerness to learn will help you keep up with this fast-moving field. Having the skills to implement and validate your own experimental ideas is a huge advantage.

We have found success hiring people from disciplines that focus on experimentation and problem solving. The Data Science team at Zendesk has a diverse background with people coming from Genetics, Economics, Pharmacy, Neuroscience, Computer Science and Machine Learning to name a few.

Atif Kureishy, Global VP, Emerging Practices at Teradata:

One could argue that the skills for AI are similar to those for data science: math, computer science and domain expertise. But the truth is that AI models are predicated on two things, automation and data – and lots of it.

Increasing sophistication in automating key aspects of building, training and deploying AI models (such as model selection, feature representation, hyperparameter tuning, etc.) means the skillset needed must be focused on model lifecycle and model risk management principles to ensure model trust, transparency, safety and stability. Typically, these are spread across roles in organizations that touch on policy, regulation, ethics, technology and data science. But these will need to converge to build AI at scale.

Guy Caspi, CEO and co-founder at Deep Instinct:

People who have strong academic backgrounds sometimes lean towards one of two directions: either they cannot leave a project until it’s perfect, often missing important deadlines – or the opposite: they’re satisfied with basic academic-level standards that may not meet an organization’s production requirements. We search out people who have both a strong academic background, but also have a strong product/operational inclination.

How To Get Customers To Renew…And Expand Their Accounts

Tien Tzuo is one of the pioneers of the SaaS (software-as-a-service) industry. He was employee No. 11 at Salesforce, where he became the Chief Marketing Officer. He would then co-found Zuora, which operates a leading subscription platform.

Yet while the SaaS model is powerful – and has transformed companies like Microsoft and Adobe – it does demand much planning and organization. This is why Tien created the PADRE operating model, a workflow to help companies better manage the process. And an important part of this is renewals:

“Acquiring new subscribers is critical, but in the Subscription Economy the vast majority of customer transactions consist of changes to existing subscriptions: renewals, suspensions, add-ons, upgrades, terminations, etc.”

The bottom line is that implementing a strong system for renewals not only helps reduce churn but also allows for the opportunity for higher growth, in terms of expansion of existing accounts and upsells.

So what can you do to improve the process for your own company? Well, I reached out to another top player in the SaaS field, Zendesk. The company, which has a market cap of about $7 billion, provides customer service and engagement cloud services. During the latest quarter, revenues jumped by 38% to $154.8 million.

“The renewals process starts with one key question: who is responsible for the renewal?” said Jaimie Buss, who is the VP of Sales for Zendesk. “The answer depends on what type of product you sell. With top-down large enterprise sales, a more senior account manager will most likely need to handle the renewal, as the objective of those renewals will likely be to extend the contract to multiple years, products, and include contract restructuring and heavy negotiation. For the mid-market, renewal objectives are a bit more straightforward; that is, to extend the contract term, minimize contraction by selling other products, and reducing or removing the discount offered at the initial sale. In this case, renewals could be supported by the Success organization or a renewals specialist.”

Regardless, the key is that you need to be proactive.  Waiting until a few days before the contract expires can mean losing business.

Here’s what Jaimie recommends:

  • 90-61 days before the renewal: You should begin the initial engagement. First of all, you want to confirm that the primary contact is still with the organization. Next, have a discussion about pricing, discount reductions and term length options. Then once you gather all the feedback, you should evaluate the potential growth of the account and the churn risk. “Churn risks should immediately be flagged and you should have all hands on deck: sales, success and sales engineers,” says Jaimie.
  • 60-31 days before the renewal: This is when you get down to brass tacks. In other words, you want to confirm the contract term, pricing, billing frequency, and payment type. You will also want to confirm the paper process and timing of signatures with the customer.
  • 30-0 days before the renewal: The order should be processed and signed. “Once closed, a hand-off back to success or sales should occur if the renewal is driven by a renewals specialist,” said Jaimie.
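The windows above amount to a simple staging rule, which a renewals team could encode in its CRM tooling. Here's a minimal sketch (the function name and stage labels are illustrative, not Zendesk's actual process):

```python
def renewal_stage(days_until_renewal: int) -> str:
    """Map days remaining on a contract to a renewal engagement stage."""
    if days_until_renewal > 90:
        return "not yet in renewal window"
    if days_until_renewal >= 61:
        # Confirm the primary contact; discuss pricing and term options.
        return "initial engagement"
    if days_until_renewal >= 31:
        # Confirm term, pricing, billing frequency and paper process.
        return "confirm terms"
    if days_until_renewal >= 0:
        # Process and sign the order, then hand off.
        return "process and sign"
    return "expired"

print(renewal_stage(75))   # initial engagement
print(renewal_stage(45))   # confirm terms
print(renewal_stage(10))   # process and sign
```

Having the stages codified makes it easy to flag accounts that slipped past a window without the corresponding activity, which is exactly the proactivity the process is meant to enforce.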

During this process, there are definitely some potential issues to keep in mind. For example, if you have an “auto-renew” option in the terms and conditions, the renewal specialist or sales person needs to coordinate with the collections team. If not, there’s the risk that a customer may receive an invoice during the contract negotiation! No doubt, this could be a deal killer.

Finally, there needs to be a clear-cut incentive structure for those people who are responsible for renewals. According to Jaimie, it must be focused on expansion of bookings. And even if a customer does not want to add new subscribers, there should still be incentives to increase the term length, improve the billing frequency and reduce the discounts.

Cool AI Highlights At CES

AI was definitely the dominant theme at CES this week. According to a keynote from LG’s president and CTO, Dr. I.P. Park, this technology is “an opportunity of our lifetime to open the next chapter in … human progress.”

Wow! Yes, it’s heady stuff. But then again, AI really is becoming pervasive. Consider that Amazon recently announced that more than 100 million Alexa devices have been sold.

OK then, for CES – which, by the way, had about 180,000 attendees and more than 2.9 million net square feet of exhibit space in Las Vegas — what were some of the standout innovations? Let’s take a look:

3D Tracking: Intel and Alibaba announced a partnership to allow for real-time tracking of athletes. The technology, which is based on AI-capable Intel Xeon Scalable processors, creates a 3D mesh of a person that captures real-time biomechanical data. Note there is no need for the athlete to wear any sensors. Essentially, the AI and computer vision systems will process the digital data.  Oh, and Intel and Alibaba will showcase the technology at next year’s Tokyo Olympic Games.

Intel executive vice president and general manager of the Data Center Group, Navin Shenoy, notes: “This technology has incredible potential as an athlete training tool and is expected to be a game-changer for the way fans experience the Games, creating an entirely new way for broadcasters to analyze, dissect and reexamine highlights during instant replays.”

AI For Your Mouth: Yes, Oral-B showcased its latest electric toothbrush, called Genius X. As the name implies, it does have whiz-bang AI systems built in. They are focused on tracking a person’s brushing styles so as to provide personalized feedback. The device will hit the markets in September.

Connected Bathroom: Baracoda Group Company thinks there is lots of opportunity here. This is why it has leveraged its CareOS platform – which uses AI, Augmented Reality (AR) and 4D facial/object recognition – to create a smart mirror. Called Artemis, it has quite a few interesting features. A few of them include the following:

  • Visual Acuity Test: This tracks the changes in your vision.
  • AR Virtual Try-on: You can digitally apply beauty products like lipstick and eyeliners.
  • AR Tutorials: You can get coaching on hairstyles, makeup and so on.
  • Voice Commands: You can talk to the mirror to change the lights, control the mirror and adjust the shower settings.

Artemis will hit the market sometime in the second half of this year. However, the device will not be cheap – retailing at $20,000.

Cuddly Robot: AI is key for many robots.  Yet there are problems.  After all, robots are usually far from lifelike because of their stiff movements and metallic exteriors.

But Groove X takes a different tack. The company has developed Lovot, which looks like a teddy bear. Think of it as, well, a replacement for your pet.

There is quite a bit of engineering inside the Lovot, which has more than 50 sensors and uses deep learning (Groove X calls it Emotional Robotics).  Basically, the focus is to bring the power of love to machines.

As for when the Lovot will launch, it will be some time in 2020. The price tag will be about $3,000.

Voice Identity: There has continued to be lots of innovation in this category. For example, at CES Pindrop launched a voice identity platform for IoT, voice assistants, smart homes/offices and connected cars. This technology means that you no longer have to use pin codes to gain access to your accounts or devices. Instead, Pindrop will be able to instantly provide authentication when you start to talk.