How To Get The Max From RPA (Robotic Process Automation)

Robotic Process Automation (RPA) is not sexy (the name alone is evidence of this). Yet it is one of the hottest sectors in the tech market.

Why all the interest? RPA allows companies to automate routine processes, which can quickly lower costs and allow employees to focus on more important tasks. The technology also reduces errors and helps to improve compliance.

Oh, and there is something else: RPA can be a gateway to AI. The reason is that automation can help find patterns and insights in the data, as well as streamline input with NLP (Natural Language Processing) and OCR (Optical Character Recognition).
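
To make the input-streamlining idea concrete, here is a minimal sketch that OCRs a scanned invoice and pulls out a couple of fields. It assumes the open-source pytesseract library and a hypothetical file name; the field patterns are illustrative, not any RPA vendor's actual pipeline.

    import re
    import pytesseract  # open-source OCR wrapper around Tesseract (assumed installed)
    from PIL import Image

    def extract_invoice_fields(image_path):
        # OCR the scanned document into plain text
        text = pytesseract.image_to_string(Image.open(image_path))
        # Pull out structured fields with simple patterns (illustrative only)
        invoice_no = re.search(r"Invoice\s*#?\s*(\w+)", text)
        total = re.search(r"Total\s*[:$]?\s*([\d,]+\.\d{2})", text)
        return {
            "invoice_no": invoice_no.group(1) if invoice_no else None,
            "total": total.group(1) if total else None,
        }

    # Hypothetical file name, purely for illustration
    print(extract_invoice_fields("scanned_invoice.png"))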

Yet despite all this, care should definitely be taken with an implementation. Keep in mind that there are still plenty of failures.

So let’s take a look at some things to consider to improve the odds of success:

Deep Dive On Your Current Processes: Rushing to implement RPA will probably mean getting subpar results. There first must be a thorough analysis and understanding of your current processes. Otherwise you’ll likely be just automating inefficiencies.

“Best practices for automation projects always begin with process mapping and re-engineering of all business scenarios,” said Sudhir Singh, who is the CEO of NIIT Technologies. “This allows all automation design to be completed upfront and can avoid multiple re-iterations during delivery.”

But truly understanding your processes can be time-consuming and difficult. This is why it could be a good idea to bring in an expert.

There are also several software systems that can essentially do an MRI of your processes. An example is Celonis, which has partnerships with top RPA players like UiPath, Automation Anywhere, and Blue Prism. “Our system creates a business process map,” said Alexander Rinke, who is the CEO of Celonis. “With it, you can see what needs improvement.”

Start With The Mundane: RPA is best for those processes that are routine and repetitive. These are basically the kinds of things that … bore your employees. And yes, this means that RPA can span many parts of a business, like finance, HR, legal, the supply chain and so on.

It also helps if the processes do not change much. After all, this means fewer upgrades to the bots, which lowers the complexity.

Determine Whether to Replace or Supplement People: This is important as it will guide you in the type of RPA to use.

“By supplementing people, a business can implement attended bots that are assistants and helpers to team members that serve the purpose of speeding up processes and eliminating human error,” said Richard French, who is the CRO of Kryon. “This setup will empower staff to focus on advanced and complex tasks, while bot assistants handle their administrative assignments.”

But if you want to find ways to reduce headcount, then you should look at those vendors that focus on unattended bots.

Create a Center of Excellence (CoE): There needs to be a well-thought-out plan for funding, training, governance and maintenance of the RPA. And to carry this out, it’s recommended to set up a CoE that can manage the process. Often this includes a mix of business people, IT personnel and developers.

Scaling: It’s often easy to get early wins. But the major challenge is making RPA more pervasive.

“Many companies that do implement the technology never scale past the first 50 automated processes,” said French. “The reason is that it is difficult for executives to think beyond and understand what processes will further improve the ROI or efficiency once there is already something in place.”

This is why having a CoE is so critical. What’s more, the team will likely need to change over time, as the needs and requirements of the RPA implementation evolve.

Implementing AI The Right Way

For many companies, when it comes to implementing AI, the typical approach is to use certain features from existing software platforms (say, Salesforce.com’s Einstein). But then there are those companies that are building their own models.

Yes, this can move the needle, leading to major benefits. At the same time, there are clear risks and expenses. Let’s face it, you need to form a team, prepare the data, develop and test models, and then deploy the system.

In light of this, it should be no surprise that AI projects can easily fail.

So what to do? How can you boost the odds for success?

Well, let’s take a look at some best practices:

IT Assessment: The fact is that most companies are weighed down with legacy systems, which can make it difficult to implement an AI project. So there must be a realistic look at what needs to be built to have the right technology foundation — which can be costly and take considerable time.

Funnily enough, as you go through the process, you may realize there are already AI projects in progress!

“Confusion like this must be resolved across the leadership team before a coherent AI strategy can be formulated,” said Ben MacKenzie, who is the Director of AI Engineering at Teradata Consulting.

The Business Case: Vijay Raghavan, who is the executive vice president and CTO of Risk and Business Analytics at RELX, recommends asking questions like:

  • Do I want to use AI to build better products?
  • Do I want to use AI to get products to market faster?
  • Do I want to use AI to become more efficient or profitable in ways beyond product development?
  • Do I want to use AI to mitigate some form of risk (Information security risk, compliance risk…)?

“In a sense, this is not that different from a company that asked itself say 30 or more years ago, ‘Do I need a software development strategy, and what are the best practices for such?’” said Vijay. “What that company needed was a software development discipline — more than a strategy — in order to execute the business strategy. Similarly, the answers to the above questions can help drive an AI discipline or AI implementation.”

Measure, Measure, Measure: While it’s important to experiment with AI, there should still be a strong discipline when it comes to tracking the project.

“This should be done at every step and must be done with a critical sense,” said Erik Schluntz, who is the cofounder & CTO at Cobalt Robotics. “Despite the fantastic hype around AI today, it is still in no way a panacea, just a tool to help accomplish existing tasks more efficiently, or create new solutions that address a gap in today’s market. Not only that, but you need to be open about auditing the strategy on an on-going basis.”

Education and Collaboration: Even though AI tools are getting much better, they still require data science skills. The problem, of course, is that it is difficult to recruit people with this kind of talent. As a result, there should be ongoing education. The good news is that there are many affordable courses from providers like Udemy and Udacity to help out.

Next, fostering a culture of collaboration is essential. “So, in addition to education, one of the key components to an AI strategy should be overall change management,” said Kurt Muehmel, who is the VP of Sales Engineering at Dataiku. “It is important to create both short- and long-term roadmaps of what will be accomplished with first maybe predictive analytics, then perhaps machine learning, and ultimately – as a longer-term goal – AI, and how each roadmap impacts various pieces of the business as well as people who are a part of those business lines and their day-to-day work.”

Recognition: When there is a win, celebrate it. And make sure senior leaders recognize the achievement.

“Ideally this first win should be completed within 8-12 weeks so that stakeholders stay engaged and supportive,” said Prasad Vuyyuru, who is a Partner of the Enterprise Insights Practice at Infosys Consulting. “Then next you can scale it gradually with limited additional functions for more business units and geographies.”

How AI Will Change B2B Marketing Forever

Back in 2006, Phil Fernandez, Jon Miller, and David Morandi founded Marketo. At the time, they only had a PowerPoint. But then again, they also had a compelling vision to create a new category known as marketing automation.

Within a few years, Marketo would become one of the fastest-growing software companies in the world, as the product-market fit was near perfect. In 2013, the company went public; a few years later, it went private. Then, in 2018, Marketo agreed to sell to Adobe for $4.75 billion.

The deal will certainly be critical for scaling growth even more, and there should be major synergies. But I also think there will be a supercharging of the AI strategy, which should be transformative for the company.

Yet this is not to imply that Marketo is a laggard with this technology. Keep in mind that the company — in 2016 — launched Predictive Content. The system leverages AI to help marketers offer better targeting based on a prospect’s activities, firmographics, and buying stage.

After this, Marketo created other offerings like:

  • Account Profiling, just announced at Adobe Summit, uses a customer’s existing customer data to determine the best prospective accounts to target, based on billions of data points analyzed in real time.
  • Predictive Audiences for Events selects the best audience to invite to an event and then forecasts attendance and recommends adjustments to meet customers’ goals.

But all this is still in the early days. “AI will become pervasive throughout B2B marketing efforts, improving performance and increasing efficiency throughout the entire buyer’s journey,” said Casey Carey, who is the Senior Director of Product Marketing for Marketo Digital Experience at Adobe.

In fact, here are just some of the important capabilities he sees with B2B marketing:

  • Audience Selection: “AI can inform improved audience selection and segmentation. Armed with tools to identify a target audience based on past behaviors, marketers can offer tailored experiences that will resonate with potential customers.”
  • Offers and Content: “AI can help marketers deliver higher value to potential customers by applying machine learning to the content selection and delivery process. This includes creative, formats, and offers. By creating personalized messages based on previous choices and behavior, marketers are able to engage in ways that resonate every time.”
  • Channels: “AI can help marketers determine the best time and place to engage with potential customers based on past channel performance and what you know about the individual.”
  • Analysis: “Using AI, marketers can quickly understand what’s working and what’s not so they can make adjustments to improve performance and drive a better return on their investments.”
  • Forecasting and Anomaly Detection: “It is not enough to know what to do, but you also need to understand what the impact will most likely be – this is where AI can help. By analyzing past results, AI can predict outcomes like campaign performance, conversion rates, revenue, and customer value. This provides a baseline for planning and then making mid-course adjustments as anomalies occur or other changes are needed.”

Yes, this is quite a lot! But Casey has some spot-on recommendations for marketers on how to use AI. “Rather than trying to understand the technology behind AI solutions, savvy marketers should focus instead on finding opportunities to use them,” he said. “If you catch yourself saying, ‘If only I could figure out how to put all this data to use,’ consider an AI application. On the other hand, despite everything that can be achieved by strategically implementing AI, there are still areas where AI solutions are not appropriate, such as situations where there is poor quality or insufficient data. AI is, after all, artificial intelligence. It’s only as good as the data you feed it.”

But AI is not something to ruminate about — rather, it is something that must be acted on. “Two things are happening that are making AI more prevalent in marketing,” said Casey. “First, prospects are expecting more relevant and compelling engagement along their buyer journey, and second, more data is becoming available to inform our marketing strategies. As a result, the historical way of manually analyzing data and using rule-based approaches to marketing is no longer enough.”

Lyft IPO: What About The AI Strategy?

According to the shareholder letter from Lyft’s co-founders: “In those early days, we were told we were crazy to think people would ride in each other’s personal vehicles.”

Yeah, crazy like a fox. Of course, on Friday Lyft pulled off its IPO, raising about $2.34 billion. The stock ended the day up 8.74% at $78.29 – putting the valuation at $26.5 billion.

“Very few companies can claim 100% growth year over year at the scale they are operating at,” said Jamie Sutherland, who is the CEO and co-founder of Sonix. “It’s pretty amazing. True, it’s costing them an arm and a leg, but the nature of the industry — which is still evolving — is that there will be a handful of winners. Lyft is clearly in that camp.”

Lyft, which posted $2.2 billion in revenues in 2018, has the mission of improving “people’s lives with the world’s best transportation.” But this is about more than technology or moving into adjacent categories like bikes and scooters. Lyft sees ride-hailing as a way to upend the negative aspects of autos. Keep in mind that cars are the second-highest household expense and a typical car is used only about 5% of the time. There are also some 37,000 traffic-related deaths in the US each year.

AI And The Lyft Mission

Despite all the success, Lyft is still in the early phases of its market opportunity, as rideshare networks account for roughly 1% of the miles traveled in the US. But to truly achieve the vision of transforming the transportation industry, the company will need to be aggressive with AI. And yes, Lyft certainly understands this.

For some time, the company has been embedding machine learning into its technology stack. Lyft has the advantage of data on over one billion rides and more than ten billion miles – which allows it to train models that improve the experience, such as by reducing arrival times and maximizing the number of available riders. The technology also enables sophisticated pricing models.
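
As a hedged illustration of the kind of model such ride data could feed, here is a minimal sketch of an arrival-time predictor trained on synthetic features. The features, the data, and the choice of scikit-learn's gradient boosting are assumptions for the example, not Lyft's actual stack.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    # Synthetic ride features: trip distance, hour of day, nearby drivers
    X = np.column_stack([
        rng.uniform(0.5, 15, n),   # trip distance (miles)
        rng.integers(0, 24, n),    # hour of day
        rng.integers(1, 30, n),    # available drivers nearby
    ])
    # Synthetic pickup ETA (minutes): worse with few drivers and at rush hour
    eta = 2 + 20 / X[:, 2] + 3 * np.isin(X[:, 1], [8, 9, 17, 18]) + rng.normal(0, 1, n)

    X_train, X_test, y_train, y_test = train_test_split(X, eta, random_state=0)
    model = GradientBoostingRegressor().fit(X_train, y_train)
    print("R^2 on held-out rides:", round(model.score(X_test, y_test), 3))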

But when it comes to AI, the holy grail is autonomous driving. For Lyft, this involves a two-part strategy. First of all, there is the Open Platform that allows third-party developers to create technology for the network. Lyft believes that autonomous vehicles will likely be most effective when managed through ridesharing networks because of the sophisticated routing systems.

Next, Lyft is building its own autonomous vehicles. For example, in October the company purchased Blue Vision Labs, which is a developer of computer vision technology. There have also been a myriad of partnerships with car manufacturers and suppliers.

So what is the timeline for the autonomous vehicle efforts? Well, according to the S-1: “In the next five years, our goal is to deploy an autonomous vehicle network that is capable of delivering a portion of rides on the Lyft platform. Within 10 years, our goal is to have deployed a low-cost, scaled autonomous vehicle network that is capable of delivering a majority of the rides on the Lyft platform. And, within 15 years, we aim to deploy autonomous vehicles that are purpose-built for a broad range of ridesharing and transportation scenarios, including short- and long-haul travel, shared commute and other transportation services.”

This is certainly ambitious as there remain complex technology and infrastructure challenges. Furthermore, Lyft must deal with societal issues.

“According to an AAA study, 71% of Americans do not feel comfortable riding in fully autonomous vehicles,” said David Barzilai, who is the executive chairman and co-founder of Karamba Security. “Similarly, recent cyber security attacks have been shaking that trust as well and significantly decreasing consumer willingness to enter an autonomous vehicle.”

But hey, the founders of Lyft have had to deal with enormous challenges before. And besides, the company has the resources and scale to effectively pursue AI.

“Lyft is doing a tremendous job of pushing self-driving technology ahead,” said Aleksey Medvedovskiy, who is the founder of Lacus and president of NYC Taxi Group. “Self-driving cars will help to eliminate traffic and potential accident problems. In my opinion, self-driving technology is much safer and better than many drivers who are currently on the roads.”

Salesforce.com @ 20: Secrets To Its $125 Billion Success

It was during the heyday of the Internet boom, in March 1999, that tech veterans Marc Benioff, Parker Harris, Frank Dominguez, and Dave Moellenhoff launched salesforce.com. In true startup fashion, it was located in a one-bedroom apartment.

Up until this time, the enterprise software market was stuck in its crusty ways. Vendors would sell on-premise solutions and charge hefty up-front licensing fees, with ongoing maintenance charges. There were usually hard-sell tactics for upgrades. And yes, it was far from clear how many employees really used the software.

salesforce.com had the bold ambition to turn this model upside down. Fees instead would be charged on a subscription basis. The software would be accessed via the Internet, which meant seamless upgrades as well as access to data in real-time.

Yet this vision had to be evangelized. Let’s face it, many companies did not want to place their data in the cloud. Would it be scalable? What about security?

These were legitimate issues but CEO Benioff worked tirelessly to promote his vision – and over time, it started to take hold in a big way.

Now the company generates over $13 billion in annual revenues and is growing at 26%.

Yes, it’s been an incredible journey. So then, what are some of the lessons? Well, there are really too many to count. In fact, Benioff wrote a book called Behind the Cloud: The Untold Story of How Salesforce.com Went from Idea to Billion-Dollar Company-and Revolutionized an Industry, which highlights 111 of them!

But recently, I talked to a variety of former Salesforce.com employees and got their takeaways. Here’s a look:

Tien Tzuo, who is the founder and CEO of Zuora:

I worked for Marc Benioff for nine years. Being at his side as we built Salesforce from the ground up was an amazing experience. After a month of working for Marc, I quickly realized that he has a relentless consistency with his storytelling. Once Salesforce started to scale, as we had new launch events, my team and I would work proudly to come up with a brand new message, an exciting new angle, a different kind of presentation. Then we would take our shiny new deck in to show Marc, and he would toss it. Then he would take us back to our fundamental ideas. He would return to the kinds of questions we were asking ourselves when we were just starting the company, like “How does the Internet change software delivery?” or “What if CRM was as simple and intuitive as buying a book on Amazon?” As it turns out, those messages were still relevant! Marc never lost focus of first principles. Marc taught me the discipline of giving the same message day after day, month after month, year after year. I’m not talking about rote recitation. The trick is delivering the same message in a thousand different ways. That’s how you change the world.

Judy Loehr, who is the founder of Bayla Ventures:

In the early days of salesforce.com we had to figure out how to make every part of this new company and business model work: prove an online CRM product could be widely accepted, prove customers would continue to pay month after month, and prove that a subscription model could work for software. I remember lots of knock-down arguments, but ultimately everyone was fighting for what was best for customers and how to make this new business model successful.

Shawna Wolverton, who is the head of product at Zendesk:

When I joined salesforce we were still inventing cloud software as we went. There were no playbooks or best practices for how to build or scale, our only option was to innovate and figure things out ourselves. Those early pioneering days created a pervasive culture of innovation that has clearly served the company very well over the past 20 years.

Jenny Cheng, who is a VP at PayPal:

There was a sense of urgency in the early days of Salesforce that you don’t see at every company. In both product and sales, we treated every month like it was both our first and last month. There was an urgency to get every new feature and release right and out to customers as soon as possible, an urgency to get new products into the hands of customers quickly, and always a focus on both short- and long-term customer success.

Courtney Broadus, who is an angel investor:

It’s crazy that “move fast and break things” became a symbol of Silicon Valley when no business-oriented tech startup would have survived with that motto. Even in the earliest days of salesforce.com, we had a very strong culture around delivering innovation with excellence (do amazing things and DON’T break anything!). We were building a new model of cloud delivery for business software that no one understood, so our customers’ trust was sacred.

Our maniacal focus on getting each new technical capability done right – elegant, scalable, secure, iterative – created such a solid base architecture that we were able to open up our technology and share the first cloud platform for customers to build their own apps themselves.

Deepa Subramanian, who is the CEO of Wootric:

Salesforce treated people as a long-term investment even though we were in scrappy startup mode. They really invested in me and created an environment where individual contributors could be fearless about contributing to strategic discussions.

PagerDuty IPO: Is AI The Secret Sauce?

Because of the government shutdown earlier in the year, there was a delay with IPOs, as the SEC could not evaluate the filings. But now it looks like the market is getting ready for a flood of deals.

One of the first will be PagerDuty, which was actually founded during the financial crisis of 2009. The core mission of the company is “to connect teams to real-time opportunity and elevate work to the outcomes that matter.”

Interestingly enough, PagerDuty refers to itself as the central nervous system of a digital enterprise. This means continuously analyzing systems to detect risks but also to find opportunities to improve operations, increase revenues and promote more innovation.

Keep in mind that this is far from easy. After all, most data is just useless noise. But then again, in today’s world where people expect quick action and standout customer experiences, it is important to truly understand data.

The PagerDuty S-1 highlights this with some of the following findings:

  • The abandon rate is 53% for mobile website visitors if the site takes longer than three seconds to load.
  • A major online retailer can lose up to $500,000 in revenue for every minute of downtime.
  • A survey from PricewaterhouseCoopers shows that 32% of customers say they would ditch a brand after one bad experience.

As for PagerDuty, it has built a massive data set from over 10,000 customers. Consider that this has allowed the company to leverage cutting-edge AI (Artificial Intelligence) models that supercharge the insights.

Here’s how PagerDuty describes it in the S-1 filing: “We apply machine learning to data collected by our platform to help our customers identify incidents from the billions of digital signals they collect each day. We do this by automatically converting data from virtually any software-enabled system or device into a common format and applying machine-learning algorithms to find patterns and correlations across that data in real time. We provide teams with visibility into similar incidents and human context, based on data related to past actions that we have collected over time, enabling them to accelerate time to resolution.”
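
As a rough illustration of the pattern-finding described above, here is a minimal sketch that surfaces similar incidents by comparing alert messages with TF-IDF vectors and cosine similarity. It assumes scikit-learn; the alert texts and the threshold are invented, and this is not PagerDuty's actual pipeline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical alert messages, already normalized into a common text format
    alerts = [
        "db-primary: connection pool exhausted",
        "db-primary: connection pool exhausted after deploy",
        "checkout-api: p99 latency above 2s",
        "checkout-api: latency spike p99 > 2000ms",
    ]

    vectors = TfidfVectorizer().fit_transform(alerts)
    sims = cosine_similarity(vectors)

    # Flag pairs of incidents that look alike, so responders see past context
    for i in range(len(alerts)):
        for j in range(i + 1, len(alerts)):
            if sims[i, j] > 0.4:  # illustrative threshold
                print(f"similar: {alerts[i]!r} <-> {alerts[j]!r}")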

The result is a myriad of powerful use cases. For example, the AI helps GoodEggs monitor warehouses to make sure food is fresh. Then there is Slack, which uses the technology to remove friction from its incident response process.

For PagerDuty, the result has been durable growth on the top line, with revenues jumping 48% during the past year. The company also has a 139% net retention rate and counts 33% of the Fortune 500 companies as customers.

Yet PagerDuty is still in the nascent stages of the opportunity. Note that the company estimates the total addressable market at over $25 billion, which is based on an estimated 85 million users.

Data + AI

But again, when looking at the IPO, it’s really about the data mixed with AI models. This is a powerful combination that should create strong barriers to entry, which will be difficult to replicate. There is a virtuous cycle as the systems get smarter and smarter.

Granted, there are certainly risk factors. If the AI fails to effectively detect some of the threats or gives off false positives, then PagerDuty’s business would likely be greatly impacted.

But so far, it seems that the company has been able to build a robust infrastructure.

Now the PagerDuty IPO — which will likely hit the market in the next couple of weeks — will be just one of many offerings from AI-related companies. Basically, get ready for a lot more – and fast.

Is AI Headed For Another Winter?

AI (Artificial Intelligence) has experienced several periods of severe funding cuts and lack of interest, such as during the 1970s and 1980s. They were called “AI winters,” a reference to the concept of nuclear winter, where the sun is blocked by a layer of smoke and dust.

But of course, things are much different nowadays. AI is one of the hottest categories of tech and is a strategic priority for companies like Facebook, Google, Microsoft and many others.

Yet could we be facing another winter? Might things have gone too far? Well, it’s really tough to tell. Keep in mind that prior AI winters were a reaction to the fact that many grandiose promises did not come to fruition.

But as of now, we are seeing many innovations and breakthroughs that are impacting diverse industries. VCs are also writing large checks to fund startups, while mega tech companies have been ramping up their M&A.

Simply put, there are few signs of a slowdown.

A New Kind of AI Winter?

“History doesn’t repeat itself, but it often rhymes,” as the saying often attributed to Mark Twain goes. And this may be the case with AI — that is, we could be seeing a new type of winter: one where society is negatively affected in subtle ways over a prolonged period of time.

According to Alex Wong, who is the chief scientist and co-founder of DarwinAI, the problem of bias in models is a major part of this. For example, AI is being leveraged in hiring, where candidates are screened based on large numbers of resumes.

“While this approach might seem data driven and thus objective, there are significant gender and cultural biases in these past hiring practices, which are then learned by the AI in the same way a child can pick up historical biases from what they are taught,” said Alex. “Without deeper investigation, the system will start making biased and discriminatory decisions that can have a negative societal impact and create greater inequality when released into companies.”
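
To make the bias problem concrete, here is a minimal sketch of one common sanity check: comparing a hiring model's selection rates across groups. The synthetic data and the four-fifths rule-of-thumb threshold are illustrative assumptions, not DarwinAI's methodology.

    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical model outputs: 1 = advance candidate, 0 = reject
    group = rng.choice(["A", "B"], size=1000)  # two demographic groups
    predictions = np.where(
        group == "A",
        rng.random(1000) < 0.30,   # group A advanced 30% of the time
        rng.random(1000) < 0.18,   # group B advanced 18% of the time
    ).astype(int)

    rate_a = predictions[group == "A"].mean()
    rate_b = predictions[group == "B"].mean()
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

    print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
    # The "four-fifths rule" heuristic: a ratio under 0.8 warrants investigation
    if ratio < 0.8:
        print("Possible disparate impact: audit the training data and features.")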

But this is not a one-off. Bias is a pervasive, corrosive problem that often goes undetected. A big reason is the culture of the AI community, which is focused more on striving for accuracy rates than on the broader impact.

Another factor is that AI tools are becoming more pervasive and are often free to use. So as inexperienced people build models, there is a higher likelihood that we’ll see even more bias in the outcomes.

What To Do?

Explainability is about understanding how AI models arrive at their outputs. True, this is difficult because deep learning systems can be black boxes. But there are creative ways to deal with this.

Consider the following from Christian Beedgen, who is the co-founder and CTO of Sumo Logic:

“Early in our development of Sumo Logic, we built a generic and unsupervised anomaly detection system to track how often different classifications of logs appeared, with the idea that this would help us spot interesting trends and outliers. Once implemented, however, we found that — despite a variety of approaches — the results generated by our advanced anomaly detection algorithms simply were not meaningfully explainable to our users. We realized that the results of sophisticated algorithms don’t matter if humans can’t figure out what they mean. Since then, we’ve focused on narrower problem states to create fundamentally simpler — and therefore more useful — predictive machinery.”
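
In the spirit of the narrower, simpler predictive machinery Beedgen describes, here is a minimal sketch that flags unusual spikes in the hourly count of one log classification using a z-score. The counts and the threshold are invented for the example; this is not Sumo Logic's system.

    import numpy as np

    # Hypothetical hourly counts of "error" logs over one day
    counts = np.array([12, 9, 11, 10, 13, 8, 12, 11, 10, 9,
                       11, 12, 10, 9, 11, 13, 10, 95, 12, 11,
                       10, 12, 9, 11])
    baseline = counts[:12]  # treat the first 12 hours as normal
    mean, std = baseline.mean(), baseline.std()

    for hour, c in enumerate(counts):
        z = (c - mean) / std
        if abs(z) > 3:  # simple, explainable rule: 3 standard deviations
            print(f"hour {hour}: {c} error logs (z={z:.1f}), investigate")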

It was a tough lesson, but it wound up being critical for the company, as the products got much stronger and more useful. Sumo Logic has gone on to raise $230 million and is one of the top players in its space.

What Next?

Going forward, the AI industry needs to be much more proactive, with an urgency for fairness, accountability and transparency. One way to help this along would be to build features into platforms that provide insights on explainability as well as bias. Even having old-school ethics boards is a good option. After all, this is common for university research.

“AI-influenced decisions that result in discrimination and inequality, when left unchecked by humans, can lead to a loss of trust in AI, which can in turn hinder its widespread adoption, especially in the sectors that could truly benefit,” said Alex. “We should not only strive to improve the transparency and interpretability of deployed AI systems, but educate those who build these systems and make them aware of fairness, data leakage, creator accountability, and inherent bias issues. The goal is not only highly effective AI systems, but also ones that society can trust.”

Deep Learning: When Should You Use It?

Deep learning, which is a subset of machine learning (itself a subset of AI), has been around since the 1950s. It’s focused on developing systems that mimic the brain’s neural network structure.

Yet it was not until the 1980s that deep learning started to show promise, spurred by the pioneering theories of researchers like Geoffrey Hinton, Yoshua Bengio and Yann LeCun. There was also the benefit of accelerating improvements in computing power.

Despite all this, there remained lots of skepticism. Deep learning approaches still looked more like interesting academic exercises that were not ready for prime time.

But this all changed in a big way in 2012, when Hinton and his students Ilya Sutskever and Alex Krizhevsky used a deep neural network to recognize images in the enormous ImageNet dataset. The results were stunning, as they blew away previous records. So began the deep learning revolution.

Nowadays if you do a cursory search of the news for the phrase “deep learning” you’ll see hundreds of mentions. Many of them will be from mainstream publications.

Yes, it’s a case of a 60-plus-year-old overnight success story. And it is certainly well deserved.

But of course, the enthusiasm can still stretch beyond reality. Keep in mind that deep learning is far from a miracle technology and does not represent the final stages of true AI nirvana. If anything, the use cases are still fairly narrow and there are considerable challenges.

“Deep learning is most effective when there isn’t an obvious structure to the data that you can exploit and build features around,” said Dr. Scott Clark, who is the co-founder and CEO of SigOpt. “Common examples of this are text, video, image, or time series datasets. The great thing about deep learning is that it will automatically build and exploit patterns in the data in order to make better decisions. The downside is that this can sometimes take a lot of data and a lot of compute resources to converge to a good solution. It tends to be the most effective in places where there is a lot of data, a lot of compute power, and there is a need for the best possible solution.”
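
Here is a minimal sketch of what Clark describes: a small convolutional network that learns its own features from raw image tensors, with no hand-built features anywhere. It assumes PyTorch and uses random data as a stand-in for a real image dataset.

    import torch
    import torch.nn as nn

    # Stand-in for a real dataset: 64 random 3-channel 32x32 "images", 2 classes
    images = torch.randn(64, 3, 32, 32)
    labels = torch.randint(0, 2, (64,))

    # The conv layers learn feature detectors directly from pixels;
    # no manual feature engineering is specified anywhere.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 2),
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for step in range(5):  # a few steps, just to show the training loop
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        print(f"step {step}: loss={loss.item():.3f}")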

True, it is getting easier to use deep learning. Part of this is due to the ubiquity of open-source platforms like TensorFlow and PyTorch. Then there is the emergence of cloud-based AI systems, such as Google’s AutoML.

But such things only go so far. “Each neural network model has tens or hundreds of hyperparameters, so tuning and optimizing these parameters requires deep knowledge and experience from human experts,” said Jisheng Wang, who is the head of data science at Mist. “Interpretability is also a big challenge when using deep learning models, especially for enterprise software, which prefers to keep humans in the loop. While deep learning reduces the human effort of feature engineering, it also increases the difficulty for humans to understand and interpret the model. So in certain applications where we require human interaction and feedback for continuous improvement, deep learning may not be the appropriate choice.”
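
To give a sense of that tuning work, here is a minimal sketch of a grid search over a few hyperparameters of a small neural network, assuming scikit-learn. The grid itself is an illustrative assumption; real searches cover far more dimensions.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)

    # Even this tiny net exposes several knobs; real models have many more
    grid = {
        "hidden_layer_sizes": [(32,), (64,), (64, 32)],
        "alpha": [1e-4, 1e-2],            # L2 regularization strength
        "learning_rate_init": [1e-3, 1e-2],
    }
    search = GridSearchCV(
        MLPClassifier(max_iter=300, random_state=0),
        grid, cv=3, n_jobs=-1,
    )
    search.fit(X, y)
    print("best params:", search.best_params_)
    print("cv accuracy:", round(search.best_score_, 3))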

However, there are alternatives that may not be as complex, such as traditional machine learning. “In cases with smaller datasets and simpler correlations, techniques like KNN or random forest may be more appropriate and effective,” said Sheldon Fernandez, who is the CEO of DarwinAI.
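
For instance, here is a hedged sketch comparing those two simpler techniques on a small tabular dataset with scikit-learn; the dataset choice is mine, purely for illustration.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    # A small tabular dataset (569 rows), where simpler models often shine
    X, y = load_breast_cancer(return_X_y=True)

    for name, model in [
        ("KNN", KNeighborsClassifier(n_neighbors=5)),
        ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ]:
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")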

Now this is not to imply that you should mostly shun deep learning. The technology is definitely powerful and continues to show great progress (just look at the recent innovation of Generative Adversarial Networks or GANs). Many companies — from mega operators like Google to early-stage startups — are also focused on developing systems to make the process easier and more robust.

But as with any advanced technology, it needs to be treated with care. Even experts can get things wrong. “A deep learning model might easily get a problematic or nonsensical correlation,” said Sheldon. “That is, the network might draw conclusions based on quirks in the dataset that are catastrophic from a practical point of view.”

What You Need To Know About Machine Learning

Machine learning is one of those buzz words that gets thrown around as a synonym for AI (Artificial Intelligence). But this really is not accurate. Note that machine learning is a subset of AI.

This field has also been around for quite some time, with roots going back to the late 1950s. It was during this period that IBM’s Arthur L. Samuel created the first machine learning application, which played checkers.

So how was this different from any other program? Well, according to Venkat Venkataramani, who is the co-founder and CEO of Rockset, machine learning is “the craft of having computers make decisions without providing explicit instructions, thereby allowing the computers to pattern match complex situations and predict what will happen.”

To pull this off, there needs to be large amounts of quality data as well as sophisticated algorithms and high-powered computers. Consider that when Samuel built his program such factors were severely limited. So it was not until the 1990s that machine learning became commercially viable.
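
A minimal sketch of that definition in action, assuming scikit-learn: instead of hand-coding rules for spam, the program infers the pattern from labeled examples. The tiny dataset is invented for illustration.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Labeled examples stand in for explicit if/then rules
    messages = [
        "win a free prize now", "limited offer, click here",
        "meeting moved to 3pm", "lunch tomorrow?",
    ]
    labels = ["spam", "spam", "ham", "ham"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)  # the "rules" are learned, not hand-coded

    print(model.predict(["free offer, click now"]))      # likely ['spam']
    print(model.predict(["are we still on for lunch"]))  # likely ['ham']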

“Current trends in machine learning are mainly driven by the structured data collected by enterprises over decades of transactions in various ERP systems,” said Kalyan Kumar B, who is the Corporate Vice President and Global CTO of HCL Technologies. “In addition, the plethora of unstructured data generated by social media is also a contributing factor to new trends. Major machine learning algorithms classify the data, predict variability and, if required, sequence the subsequent action. For example, an online retail app that can classify a user based on their profile data and purchase history allows the retailer to predict the probability of a purchase based on the user’s search history and enables them to target discounts and product recommendations.”

Now you’ll also hear another buzz word, which often gets confused with machine learning – that is, deep learning. Keep in mind that this is a subset of machine learning and involves sophisticated systems called neural networks that mimic the operation of the brain. Like machine learning, deep learning has been around since the 1950s. Yet it was during the 1980s and 1990s that this field gained traction, primarily from innovative theories of academics like Geoffrey Hinton, Yoshua Bengio and Yann LeCun. Eventually, mega tech operators like Google, Microsoft and Facebook would invest heavily in this technology. The result has been a revolution in AI. For example, if you use something like Google Translate, then you have seen the power of this technology.

But machine learning – supercharged by deep learning neural networks — is also making strides in the enterprise. Here are just a few examples:

  • Mist has built a virtual assistant, called Marvis, that is based on machine learning algorithms that mine insights from Wireless LANs. A network administrator can just ask it questions like “How are the wi-fi access points in the Baker-Berry Library performing?” and Marvis will provide answers based on the data. More importantly, the system gets smarter and smarter over time.
  • Barracuda Networks is a top player in the cybersecurity market and machine learning is a critical part of the company’s technology. “We’ve found that this technology is exponentially better at stopping personalized social engineering attacks,” said Asaf Cidon, who is the VP of Email Security for Barracuda Networks. “The biggest advantage of this technology is that it effectively allows us to create a ‘custom’ rule set that is unique to each customer’s environment. In other words, we can use the historical communication patterns of each organization to create a statistical model of what a normal email looks like in that organization. For example, if the CFO of the company always sends emails from certain email addresses, at certain times of the day, and logs in using certain IPs and communicates with certain people, the machine learning will absorb this data. We can also learn and identify all of the links that would be ‘typical’ to appear in an organization’s email system. We then use that knowledge and apply different machine learning classifiers that compare employee behavior with what a normal email would be like in the organization.”
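
As a rough illustration of the per-organization baseline Cidon describes, here is a minimal sketch that learns what “normal” email metadata looks like and flags outliers with scikit-learn's IsolationForest. The features and data are invented; this is not Barracuda's actual system.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(2)
    # Historical emails as simple numeric features:
    # [hour sent, known sender (1/0), links in body]
    normal = np.column_stack([
        rng.normal(11, 2, 500),   # mostly sent during business hours
        np.ones(500),             # from known addresses
        rng.poisson(1, 500),      # few links per message
    ])
    detector = IsolationForest(random_state=0).fit(normal)

    # New messages: one routine, one 3am email from an unknown sender full of links
    new = np.array([[10, 1, 1], [3, 0, 12]])
    print(detector.predict(new))  # 1 = looks normal, -1 = anomalous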

Of course, machine learning has drawbacks – and the technology is far from achieving true AI. It cannot understand causation or engage in conceptual thinking. There are also potential risks of bias and of overfitting the models (which means that the algorithms treat mere noise as real patterns).
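
Overfitting is easy to demonstrate. In this minimal sketch (assuming scikit-learn), the labels are pure random noise, yet an unconstrained decision tree “learns” them perfectly on the training set while doing no better than chance on held-out data.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(3)
    X = rng.normal(size=(400, 20))
    y = rng.integers(0, 2, 400)  # labels are pure noise: nothing to learn

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

    print("train accuracy:", tree.score(X_tr, y_tr))          # ~1.0: memorized the noise
    print("test accuracy:", round(tree.score(X_te, y_te), 2))  # ~0.5: chance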

Even something like handling time-series data at scale can be extremely difficult. “An example is the customer journey,” said Anjul Bhambhri, who is the Vice President of Platform Engineering at Adobe. “This kind of dataset involves behavioral data that may have trillions of customer interactions. How important is each of the touch points in the purchase decision? To answer this, you need to find a way to determine a customer’s intent, which is complex and ambiguous. But it is certainly something we are working on.”

Despite all this, machine learning remains an effective way to turn data into valuable insights. And progress is likely to continue at a rapid clip.

“Machine learning is important because its predictive power will disrupt numerous industries,” said Sheldon Fernandez, who is the CEO of DarwinAI. “We are already seeing this in the realm of computer vision, autonomous vehicles and natural language processing. Moreover, the implications of these disruptions may have far-reaching impacts on our quality of life, such as with advancements in medicine, health care and pharmaceuticals.”

Are Most Of Your Product’s Features…Useless?

Pendo operates a platform that helps product teams with insights, user communication and user guidance. There are currently 850 customers, including Salesforce, Coupa, Gainsight, BMC, and Sprinklr.

So yes, Pendo has access to an enormous data set. Putting this to work in a recent research project, the company found that about 80% of features in the typical cloud software product are rarely or never used. The conclusion: about $29.5 billion is wasted.

This is certainly an eye-opener. But then again, the results should not be too surprising. Seriously, how many features have you used in, say, Excel or Word? Probably a tiny fraction.

Now this does not mean you should go full-on minimalist with your product either.

“My personal take on this is that while I’m sure many companies are doing poor product planning and wasting R&D cycles on low-impact product features,” said Judy Loehr, who is the founder of Bayla Ventures and one of the early product managers at Salesforce.com, “you can’t assume that a feature with low usage means low value. In many cloud products there are some features that are highly used and other features with lower usage that can be critical complements to the high-usage features. B2B cloud product strategy should never be driven solely by usage data.”

For example, a disaster recovery feature will probably never be used. But hey, you probably still want it, right?

Definitely.

On the other hand, there is a danger in pushing some features too aggressively. This may give a false sense of engagement – even though users could be getting annoyed and distracted.

“The excitement around AI in B2B analytics software is that it increases the signal/noise ratio,” said Deepa Subramanian, who is the CEO and co-founder of Wootric. “Instead, tell me when I need to pay attention; don’t force me to look at your charts every day.”

Best Practices

When it comes to product development, there are often few bright-line rules. It’s really a combination of tracking usage, getting feedback from customers, using common sense and being creative.

Still, I do think it’s a good idea to have a high burden of proof for creating new features.

“We have the advantage of working with many top consumer tech companies like Google and Facebook,” said Sam Boonin, who is the VP of product strategy at Zendesk. “We’ve learned a lot from this, which has helped with our own product. And generally, their apps are not full of features. Top companies understand that users can get overwhelmed.”

In fact, even adding a couple of features can cause a stir among your customers!

But of course, for startups, your product is often the key to success. It’s your way to disrupt incumbents and find opportunities for growth.

This is why having a product-driven culture can be so critical.

“An inherent challenge in B2B is we serve two customers – the buyer vs end user,” said Clara Shih, who is the CEO and founder of Hearsay Systems. “Buyers often think they know what end users want, but sometimes they miss the mark. Looking at product usage analytics is only part of the story. What’s been game-changing for us at Hearsay Systems is having everyone in the company spend time with our end users — insurance agents and financial advisors — to first understand the ‘why’ and ‘jobs to be done.’ Especially as a vertical industry-focused company, we can go deep and the usage issues resolve themselves.”