Lyft IPO: What About The AI Strategy?

According to the shareholder letter from Lyft’s co-founders: “In those early days, we were told we were crazy to think people would ride in each other’s personal vehicles.”

Yeah, crazy like a fox. Of course, on Friday Lyft pulled off its IPO, raising about $2.34 billion. The stock price ended the day up 8.74% to $78.29 – putting the valuation at $26.5 billion.

“Very few companies can claim 100% growth year over year at the scale they are operating at,” said Jamie Sutherland, who is the CEO and co-founder of Sonix. “It’s pretty amazing. True, it’s costing them an arm and a leg, but the nature of the industry — which is still evolving — is that there will be a handful of winners. Lyft is clearly in that camp.”

Lyft, which posted $2.2 billion in revenues in 2018, has the mission of improving “people’s lives with the world’s best transportation.” But this is about more than just technology or moving into adjacent categories like bikes and scooters. Lyft sees ride-hailing as a way to upend the negative aspects of autos. Keep in mind that cars are the second-highest household expense, and a typical car is used only about 5% of the time. There are also roughly 37,000 traffic-related deaths in the US each year.

AI And The Lyft Mission

Despite all the success, Lyft is still in the early phases of its market opportunity, as rideshare networks account for roughly 1% of the miles traveled in the US. But to truly achieve the vision of transforming the transportation industry, the company will need to be aggressive with AI.  And yes, Lyft certainly understands this.

For some time, the company has been embedding machine learning into its technology stack. Lyft has the advantage of data from over one billion rides and more than ten billion miles – which allows it to train models that improve the experience, such as by reducing arrival times and maximizing the number of available riders. The technology also helps with sophisticated pricing models.
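To make the machine learning point a bit more concrete, here is a minimal sketch of how an arrival-time model might be trained on historical trip data. The features, synthetic data and model choice are illustrative assumptions only, not a description of Lyft’s actual systems:

```python
# Minimal sketch: an ETA-style regression model trained on historical ride data.
# Feature names and data here are hypothetical, not Lyft's actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features: trip distance (miles), hour of day, pickup-area demand index
X = np.column_stack([
    rng.uniform(0.5, 15, n),   # distance_miles
    rng.integers(0, 24, n),    # hour_of_day
    rng.uniform(0, 1, n),      # demand_index
])
# Synthetic target: minutes to arrival, loosely tied to distance and demand
y = 3 * X[:, 0] + 5 * X[:, 2] + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("Mean absolute error (minutes):",
      np.abs(model.predict(X_test) - y_test).mean())
```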

But when it comes to AI, the holy grail is autonomous driving. For Lyft, this involves a two-part strategy. First of all, there is the Open Platform that allows third-party developers to create technology for the network. Lyft believes that autonomous vehicles will likely be most effective when managed through ridesharing networks because of the sophisticated routing systems.

Next, Lyft is building its own autonomous vehicles. For example, in October the company purchased Blue Vision Labs, a developer of computer vision technology. There have also been myriad partnerships with car manufacturers and suppliers.

So what is the time line for the autonomous vehicle efforts? Well, according to the S-1: “In the next five years, our goal is to deploy an autonomous vehicle network that is capable of delivering a portion of rides on the Lyft platform. Within 10 years, our goal is to have deployed a low-cost, scaled autonomous vehicle network that is capable of delivering a majority of the rides on the Lyft platform. And, within 15 years, we aim to deploy autonomous vehicles that are purpose-built for a broad range of ridesharing and transportation scenarios, including short- and long-haul travel, shared commute and other transportation services.”

This is certainly ambitious as there remain complex technology and infrastructure challenges. Furthermore, Lyft must deal with societal issues.

“According to an AAA study, 71% of Americans do not feel comfortable riding in fully autonomous vehicles,” said David Barzilai, who is the executive chairman and co-founder of Karamba Security. “Similarly, recent cyber security attacks have been shaking that trust as well and significantly decreasing consumer willingness to enter an autonomous vehicle.”

But hey, the founders of Lyft have had to deal with enormous challenges before. And besides, the company has the resources and scale to effectively pursue AI.

“Lyft is doing a tremendous job of pushing self-driving technology ahead,” said Aleksey Medvedovskiy, who is the founder of Lacus and president of NYC Taxi Group. “Self-driving cars will help to eliminate traffic and potential accident problems.  In my opinion, self-driving technology is much safer and better than many drivers who are currently on the roads.”

Salesforce.com @ 20: Secrets To Its $125 Billion Success

It was during the heyday of the Internet boom, in March 1999, that tech veterans Marc Benioff, Parker Harris, Frank Dominguez, and Dave Moellenhoff launched salesforce.com. In true startup fashion, it was located in a one-bedroom apartment.

Up until this time, the enterprise software market was stuck in its crusty ways. Vendors would sell on-premise solutions and charge hefty up-front licensing fees, with ongoing maintenance charges. There were usually hard-sell tactics for upgrades. And yes, it was far from clear how many employees really used the software.

salesforce.com had the bold ambition to turn this model upside down. Fees instead would be charged on a subscription basis. The software would be accessed via the Internet, which meant seamless upgrades as well as access to data in real-time.

Yet this vision had to be evangelized. Let’s face it, many companies did not want to place their data in the cloud. Would it be scalable? What about security?

These were legitimate issues but CEO Benioff worked tirelessly to promote his vision – and over time, it started to take hold in a big way.

Now the company generates over $13 billion in annual revenues and is growing at 26%.

Yes, it’s been an incredible journey. So then, what are some of the lessons? Well, there are really too many to count. In fact, Benioff wrote a book called Behind the Cloud: The Untold Story of How Salesforce.com Went from Idea to Billion-Dollar Company-and Revolutionized an Industry, which highlights 111 of them!

But recently, I talked to a variety of former Salesforce.com employees and got their takeaways. Here’s a look:

Tien Tzuo, who is the founder and CEO of Zuora:

I worked for Marc Benioff for nine years. Being at his side as we built Salesforce from the ground up was an amazing experience. After a month of working for Marc, I quickly realized that he has a relentless consistency with his storytelling. Once Salesforce started to scale, as we had new launch events, my team and I would work proudly to come up with a brand new message, an exciting new angle, a different kind of presentation. Then we would take our shiny new deck in to show Marc, and he would toss it. Then he would take us back to our fundamental ideas. He would return to the kinds of questions we were asking ourselves when we were just starting the company, like “How does the Internet change software delivery?” or “What if CRM was as simple and intuitive as buying a book on Amazon?” As it turns out, those messages were still relevant! Marc never lost focus of first principles. Marc taught me the discipline of giving the same message day after day, month after month, year after year. I’m not talking about rote recitation. The trick is delivering the same message in a thousand different ways. That’s how you change the world.

Judy Loehr, who is the founder of Bayla Ventures:

In the early days of salesforce.com we had to figure out how to make every part of this new company and business model work: prove an online CRM product could be widely accepted, prove customers would continue to pay month after month, and prove that a subscription model could work for software. I remember lots of knock-down arguments, but ultimately everyone was fighting for what was best for customers and how to make this new business model successful.

Shawna Wolverton, who is the head of product at Zendesk:

When I joined salesforce we were still inventing cloud software as we went. There were no playbooks or best practices for how to build or scale, our only option was to innovate and figure things out ourselves. Those early pioneering days created a pervasive culture of innovation that has clearly served the company very well over the past 20 years.

Jenny Cheng, who is a VP at PayPal:

There was a sense of urgency in the early days of Salesforce that you don’t see at every company. In both product and sales, we treated every month like it was both our first and last month. There was an urgency to get every new feature and release right and out to customers as soon as possible, an urgency to get new products into the hands of customers quickly, and always a focus on both short and long-term customer success.

Courtney Broadus, who is an angel investor:

It’s crazy that “move fast and break things” became a symbol of Silicon Valley when no business-oriented tech startup would have survived with that motto. Even in the earliest days of salesforce.com, we had a very strong culture around delivering innovation with excellence (do amazing things and DON’T break anything!). We were building a new model of cloud delivery for business software that no one understood, so our customers’ trust was sacred.

Our maniacal focus on getting each new technical capability done right – elegant, scalable, secure, iterative – created such a solid base architecture that we were able to open up our technology and share the first cloud platform for customers to build their own apps themselves.

Deepa Subramanian, who is the CEO of Wootric:

Salesforce treated people as a long-term investment even though we were in scrappy startup mode. They really invested in me and created an environment where individual contributors could be fearless about contributing to strategic discussions.

PagerDuty IPO: Is AI The Secret Sauce?

Because of the government shutdown earlier in the year, there was a delay with IPOs, as the SEC could not evaluate the filings. But now it looks like the market is getting ready for a flood of deals.

One of the first will be PagerDuty, which was actually founded during the financial crisis of 2009. The core mission of the company is “to connect teams to real-time opportunity and elevate work to the outcomes that matter.”

Interestingly enough, PagerDuty refers to itself as the central nervous system of a digital enterprise. This means continuously analyzing systems to detect risks but also to find opportunities to improve operations, increase revenues and promote more innovation.

Keep in mind that this is far from easy. After all, most data is just useless noise. But then again, in today’s world where people expect quick action and standout customer experiences, it is important to truly understand data.

The PagerDuty S-1 highlights this with some of the following findings:

  • The abandon rate is 53% for mobile website visitors if the site takes longer than three seconds to load.
  • A major online retailer can lose up to $500,000 in revenue for every minute of downtime.
  • A survey from PricewaterhouseCoopers shows that 32% of customers say they would ditch a brand after one bad experience.

As for PagerDuty, it has built a massive data set from over 10,000 customers. Consider that this has allowed the company to leverage cutting-edge AI (Artificial Intelligence) models that supercharge the insights.

Here’s how PagerDuty describes it in the S-1 filing: “We apply machine learning to data collected by our platform to help our customers identify incidents from the billions of digital signals they collect each day. We do this by automatically converting data from virtually any software-enabled system or device into a common format and applying machine-learning algorithms to find patterns and correlations across that data in real time. We provide teams with visibility into similar incidents and human context, based on data related to past actions that we have collected over time, enabling them to accelerate time to resolution.”
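The filing describes two steps: normalizing signals into a common format, then finding patterns and similar past incidents. As a rough illustration of the second step, the sketch below ranks past incidents by textual similarity to a new alert. The incident text and the similarity approach are assumptions for illustration, not PagerDuty’s actual pipeline:

```python
# Minimal sketch: surfacing "similar past incidents" by comparing alert text.
# The incident data and approach are hypothetical, not PagerDuty's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_incidents = [
    "checkout service latency above 2s p99",
    "payments API returning 500 errors",
    "database replica lag exceeding threshold",
]
new_alert = ["payments API timing out with 5xx responses"]

vectorizer = TfidfVectorizer()
past_vecs = vectorizer.fit_transform(past_incidents)
new_vec = vectorizer.transform(new_alert)

# Rank past incidents by textual similarity to the new alert
scores = cosine_similarity(new_vec, past_vecs)[0]
for incident, score in sorted(zip(past_incidents, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {incident}")
```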

The result is that there are myriad powerful use cases. For example, the AI helps GoodEggs monitor its warehouses to make sure food is fresh. Then there is Slack, which uses the technology to reduce friction in the incident response process.

For PagerDuty, the result has been durable growth on the top line, with revenues jumping 48% during the past year. The company also has a 139% net retention rate and counts 33% of the Fortune 500 companies as customers.

Yet PagerDuty is still in the nascent stages of the opportunity. Note that the company estimates the total addressable market at over $25 billion, which is based on an estimated 85 million users.

Data + AI

But again, when looking at the IPO, it’s really about the data mixed with AI models. This is a powerful combination that should create strong barriers to entry, which will be difficult for rivals to replicate. There is a virtuous cycle as the systems get smarter and smarter.

Granted, there are certainly risk factors. If the AI fails to effectively detect threats or generates false positives, then PagerDuty’s business would likely be greatly impacted.

But so far, it seems that the company has been able to build a robust infrastructure.

Now the PagerDuty IPO — which will likely hit the markets in the next couple of weeks — will be just one of many AI-related companies pulling off offerings. Basically, get ready for a lot more – and fast.

Is AI Headed For Another Winter?

AI (Artificial Intelligence) has experienced several periods of severe funding cuts and lack of interest, such as during the 1970s and 1980s. They were called “AI winters,” a reference to the concept of nuclear winter, where the sun is blocked by a layer of smoke and dust!

But of course, things are much different nowadays. AI is one of the hottest categories of tech and is a strategic priority for companies like Facebook, Google, Microsoft and many others.

Yet could we be facing another winter? Have things gone too far? Well, it’s really tough to tell. Keep in mind that prior AI winters were a reaction to the fact that many of the grandiose promises did not come to fruition.

But as of now, we are seeing many innovations and breakthroughs that are impacting diverse industries. VCs are also writing large checks to fund startups, while mega tech companies have been ramping up their M&A.

Simply put, there are few signs of a slowdown.

A New Kind of AI Winter?

“History doesn’t repeat itself, but it often rhymes,” as the saying often attributed to Mark Twain goes. And this may be the case with AI — that is, we could be seeing a new type of winter, one where society is negatively affected in subtle ways over a prolonged period of time.

According to Alex Wong, who is the chief scientist and co-founder of DarwinAI, the problem of bias in models is a major part of this. For example, AI is being leveraged in hiring, where systems screen candidates based on large numbers of past resumes.

“While this approach might seem data driven and thus objective, there are significant gender and cultural biases in these past hiring practices, which are then learned by the AI in the same way a child can pick up historical biases from what they are taught,” said Alex. “Without deeper investigation, the system will start making biased and discriminatory decisions that can have a negative societal impact and create greater inequality when released into companies.”

But this is not a one-off. Bias is a pervasive, corrosive problem that often goes undetected. A big reason is the culture of the AI community, which is more focused on striving for accuracy rates than on the broader impact.

Another factor is that AI tools are becoming more pervasive and are often free to use. So as inexperienced people build models, there is a higher likelihood that we’ll see even more bias in the outcomes.

What To Do?

Explainability is about understanding how AI models arrive at their decisions. True, this is difficult because deep learning systems can be black boxes. But there are creative ways to deal with this.
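As a hedged example of one such approach, the sketch below uses permutation importance, a model-agnostic technique that reveals which inputs a trained model actually relies on. The dataset and model are placeholders; commercial explainability tooling goes well beyond this:

```python
# Minimal sketch: model-agnostic explainability via permutation importance.
# The dataset and model are placeholders for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the score drops:
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda p: -p[1])[:5]
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```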

Consider the following from Christian Beedgen, who is the co-founder and CTO of Sumo Logic:

“Early in our development of Sumo Logic, we built a generic and unsupervised anomaly detection system to track how often different classifications of logs appeared, with the idea that this would help us spot interesting trends and outliers. Once implemented, however, we found that — despite a variety of approaches — the results generated by our advanced anomaly detection algorithms simply were not meaningfully explainable to our users. We realized that the results of sophisticated algorithms don’t matter if humans can’t figure out what they mean. Since then, we’ve focused on narrower problem states to create fundamentally simpler — and therefore more useful — predictive machinery.”

It was a tough lesson, but it wound up being critical for the company, as the products got much stronger and more useful. Sumo Logic has gone on to raise $230 million and is one of the top players in its space.

What Next?

Going forward, the AI industry needs to be much more proactive, with an urgency for fairness, accountability and transparency. One way to help this along would be to build features into platforms that provide insights on explainability as well as bias. Even having old-school ethics boards is a good option. After all, this is common for university research.

“AI-influenced decisions that result in discrimination and inequality, when left unchecked by humans, can lead to a loss of trust in AI, which can in turn hinder its widespread adoption, especially in the sectors that could truly benefit,” said Alex. “We should not only strive to improve the transparency and interpretability of deployed AI systems, but educate those who build these systems and make them aware of fairness, data leakage, creator accountability, and inherent bias issues. The goal is not only highly effective AI systems, but also ones that society can trust.”

Deep Learning: When Should You Use It?

Deep learning, which is a subset of machine learning (itself a branch of AI), has been around since the 1950s. It’s focused on developing systems that mimic the brain’s neural network structure.

Yet it was not until the 1980s that deep learning started to show promise, spurred by the pioneering theories of researchers like Geoffrey Hinton, Yoshua Bengio and Yann LeCun. There was also the benefit of accelerating improvements in computer power.

Despite all this, there remained lots of skepticism. Deep learning approaches still looked more like interesting academic exercises that were not ready for prime time.

But this all changed in a big way in 2012, when Hinton, Ilya Sutskever, and Alex Krizhevsky used sophisticated deep learning to recognize images in an enormous dataset. The results were stunning, as they blew away previous records. So began the deep learning revolution.

Nowadays if you do a cursory search of the news for the phrase “deep learning” you’ll see hundreds of mentions. Many of them will be from mainstream publications.

Yes, it’s a case of a 60-plus-year-old overnight success story. And it is certainly well deserved.

But of course, the enthusiasm can still stretch beyond reality. Keep in mind that deep learning is far from a miracle technology and does not represent the final stages of true AI nirvana. If anything, the use cases are still fairly narrow and there are considerable challenges.

“Deep learning is most effective when there isn’t an obvious structure to the data that you can exploit and build features around,” said Dr. Scott Clark, who is the co-founder and CEO of SigOpt. “Common examples of this are text, video, image, or time series datasets. The great thing about deep learning is that it will automatically build and exploit patterns in the data in order to make better decisions. The downside is that this can sometimes take a lot of data and a lot of compute resources to converge to a good solution. It tends to be the most effective in places where there is a lot of data, a lot of compute power, and there is a need for the best possible solution.”
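To ground Clark’s point, here is a minimal sketch of the workflow he describes: a small neural network trained end to end on raw image pixels, with no hand-engineered features. The dataset, architecture and training settings are illustrative choices, not a production recipe:

```python
# Minimal sketch: letting a neural network learn features directly from raw pixels,
# rather than hand-engineering them. Uses the small MNIST digit dataset for illustration.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, validation_split=0.1)
print("Test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```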

True, it is getting easier to use deep learning. Part of this is due to the ubiquity of open source platforms like TensorFlow and PyTorch. Then there is the emergence of cloud-based AI systems, such as Google’s AutoML.

But such things only go so far. “Each neural network model has tens or hundreds of hyperparameters, so tuning and optimizing these parameters requires deep knowledge and experience from human experts,” said Jisheng Wang, who is the head of data science at Mist. “Interpretability is also a big challenge when using deep learning models, especially for enterprise software, which prefers to keep humans in the loop. While deep learning reduces the human effort of feature engineering, it also increases the difficulty for humans to understand and interpret the model. So in certain applications where we require human interaction and feedback for continuous improvement, deep learning may not be the appropriate choice.”
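Wang’s point about hyperparameters can be made concrete with a small sketch of automated search. The parameter ranges, model and dataset below are illustrative assumptions; real tuning workflows and toolchains vary widely:

```python
# Minimal sketch: searching over neural network hyperparameters automatically.
# The parameter ranges and dataset are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

param_distributions = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "alpha": [1e-4, 1e-3, 1e-2],          # L2 regularization strength
    "learning_rate_init": [1e-3, 1e-2],
}

search = RandomizedSearchCV(
    MLPClassifier(max_iter=300, random_state=0),
    param_distributions,
    n_iter=8,   # try 8 random combinations
    cv=3,
    random_state=0,
)
search.fit(X, y)
print("Best parameters:", search.best_params_)
print("Best cross-validated accuracy:", round(search.best_score_, 3))
```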

However, there are alternatives that may not be as complex, such as traditional machine learning. “In cases with smaller datasets and simpler correlations, techniques like KNN or random forest may be more appropriate and effective,” said Sheldon Fernandez, who is the CEO of DarwinAI.
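As a hedged illustration of Fernandez’s point, the sketch below runs the two classic techniques he mentions on a small tabular dataset. The dataset is a stand-in; the takeaway is that such baselines need only a few lines of code and very little compute:

```python
# Minimal sketch: on a small tabular dataset, classic models are often a strong,
# cheap baseline. The dataset here is a stand-in for illustration.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

for name, model in [
    ("k-nearest neighbors", KNeighborsClassifier(n_neighbors=5)),
    ("random forest", RandomForestClassifier(n_estimators=100, random_state=0)),
]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```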

Now this is not to somehow imply that you should mostly shun deep learning. The technology is definitely powerful and continues to show great progress (just look at the recent innovation of Generative Adversarial Networks or GANs).  Many companies — from mega operators like Google to early-stage startups — are also focused on developing systems to make the process easier and more robust.

But as with any advanced technology, it needs to be treated with care. Even experts can get things wrong. “A deep learning model might easily get a problematic or nonsensical correlation,” said Sheldon. “That is, the network might draw conclusions based on quirks in the dataset that are catastrophic from a practical point of view.”

What You Need To Know About Machine Learning

Machine learning is one of those buzz words that gets thrown around as a synonym for AI (Artificial Intelligence). But this really is not accurate. Note that machine learning is a subset of AI.

This field has also been around for quite some time, with roots going back to the late 1950s. It was during this period that IBM’s Arthur L. Samuel created one of the first machine learning applications, a program that played checkers.

So how was this different from any other program? Well, according to Venkat Venkataramani, who is the co-founder and CEO of Rockset, machine learning is “the craft of having computers make decisions without providing explicit instructions, thereby allowing the computers to pattern match complex situations and predict what will happen.”
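Venkataramani’s definition is easiest to see in a minimal sketch: no decision rule is hand-coded; instead, the rule is inferred from labeled examples. The toy data below is purely hypothetical and exists only to illustrate the idea:

```python
# Minimal sketch: "learning from examples instead of explicit instructions."
# The toy data (hours studied, hours slept -> pass/fail) is purely hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each row is [hours_studied, hours_slept]; labels are 1 = pass, 0 = fail.
X = [[8, 7], [6, 8], [1, 4], [2, 6], [7, 5], [0, 8]]
y = [1, 1, 0, 0, 1, 0]

# No pass/fail rule is written anywhere; the model infers one from the examples.
model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[5, 6], [1, 7]]))  # predictions for two unseen students
```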

To pull this off, there needs to be large amounts of quality data as well as sophisticated algorithms and high-powered computers. Consider that when Samuel built his program such factors were severely limited. So it was not until the 1990s that machine learning became commercially viable.

“Current trends in machine learning are mainly driven by the structured data collected by enterprises over decades of transactions in various ERP systems,” said Kalyan Kumar B, who is the Corporate Vice President and Global CTO of HCL Technologies. “In addition, the plethora of unstructured data generated by social media is also a contributing factor to new trends. Major machine learning algorithms classify the data, predict variability and, if required, sequence the subsequent action. For example, an online retail app that can classify a user based on their profile data and purchase history allows the retailer to predict the probability of a purchase based on the user’s search history and enables them to target discounts and product recommendations.”
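Kumar’s retail example maps naturally onto a propensity model. The sketch below uses made-up features and data purely for illustration; a real retailer’s feature set and pipeline would be far richer:

```python
# Minimal sketch: predicting purchase probability from profile and history features.
# Feature names and data are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [past_purchases, searches_last_week, days_since_last_visit]
X = np.array([
    [5, 12, 2],
    [0, 1, 60],
    [3, 8, 5],
    [1, 0, 45],
    [7, 20, 1],
    [0, 2, 30],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = purchased, 0 = did not

model = LogisticRegression().fit(X, y)

new_user = np.array([[2, 6, 10]])
print("Purchase probability:", round(model.predict_proba(new_user)[0, 1], 2))
```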

Now you’ll also hear another buzz word, which often gets confused with machine learning – that is, deep learning. Keep in mind that this is a subset of machine learning and involves sophisticated systems called neural networks that mimic the operation of the brain. Like machine learning, deep learning has been around since the 1950s. Yet it was during the 1980s and 1990s that this field gained traction, primarily from the innovative theories of academics like Geoffrey Hinton, Yoshua Bengio and Yann LeCun. Eventually, mega tech operators like Google, Microsoft and Facebook would invest heavily in this technology. The result has been a revolution in AI. For example, if you use something like Google Translate, then you have seen the power of this technology.

But machine learning – supercharged by deep learning neural networks — is also making strides in the enterprise. Here are just a few examples:

  • Mist has built a virtual assistant, called Marvis, that is based on machine learning algorithms that mine insights from Wireless LANs. A network administrator can just ask it questions like “How are the wi-fi access points in the Baker-Berry Library performing?” and Marvis will provide answers based on the data. More importantly, the system gets smarter and smarter over time.
  • Barracuda Networks is a top player in the cybersecurity market and machine learning is a critical part of the company’s technology. “We’ve found that this technology is exponentially better at stopping personalized social engineering attacks,” said Asaf Cidon, who is the VP of Email Security for Barracuda Networks. “The biggest advantage of this technology is that it effectively allows us to create a ‘custom’ rule set that is unique to each customer’s environment. In other words, we can use the historical communication patterns of each organization to create a statistical model of what a normal email looks like in that organization. For example, if the CFO of the company always sends emails from certain email addresses, at certain times of the day, and logs in using certain IPs and communicates with certain people, the machine learning will absorb this data. We can also learn and identify all of the links that would be ‘typical’ to appear in an organization’s email system. We then use that knowledge and apply different machine learning classifiers that compare employee behavior with what a normal email would be like in the organization.”
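Cidon’s description of modeling what a “normal” email looks like for an organization can be sketched very simply: count historical patterns, then flag messages whose patterns are rare. The code below uses made-up metadata and a crude frequency score; it illustrates the idea and is not Barracuda’s technology:

```python
# Minimal sketch: scoring an email against an organization's historical patterns.
# Data and scoring rule are made up; real systems use far richer features and models.
from collections import Counter

# Historical (sender, send_hour) pairs observed for this organization
history = [
    ("cfo@example.com", 9), ("cfo@example.com", 10), ("cfo@example.com", 9),
    ("ops@example.com", 14), ("ops@example.com", 15),
]
pattern_counts = Counter(history)
total = len(history)

def normality_score(sender: str, hour: int) -> float:
    """Fraction of historical traffic matching this (sender, hour) pattern."""
    return pattern_counts[(sender, hour)] / total

# A familiar pattern scores high; an unusual one (possible impersonation) scores ~0.
print(normality_score("cfo@example.com", 9))   # seen often
print(normality_score("cfo@example.com", 3))   # CFO never emails at 3 a.m.
```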

Of course, machine learning has drawbacks – and the technology is far from achieving true AI. It cannot understand causation or engage in conceptual thinking. There are also potential risks of bias and of overfitting the models (when the algorithms mistake mere noise for real patterns).
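Overfitting in particular is easy to demonstrate on synthetic data: a model flexible enough to memorize its training set can score perfectly there while doing no better than chance on new data. This is a generic sketch, not tied to any of the companies mentioned:

```python
# Minimal sketch: overfitting -- near-perfect training accuracy, poor test accuracy,
# because the model has memorized noise instead of learning a real pattern.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))       # random features
y = rng.integers(0, 2, size=200)     # random labels: there is no real pattern

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)  # unlimited depth

print("Train accuracy:", model.score(X_train, y_train))  # ~1.0
print("Test accuracy:", model.score(X_test, y_test))     # ~0.5 (chance level)
```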

Even something like handling time-series data at scale can be extremely difficult. “An example is the customer journey,” said Anjul Bhambhri, who is the Vice President of Platform Engineering at Adobe. “This kind of dataset involves behavioral data that may have trillions of customer interactions. How important is each of the touch points in the purchase decision? To answer this, you need to find a way to determine a customer’s intent, which is complex and ambiguous. But it is certainly something we are working on.”

Despite all this, machine learning remains an effective way to turn data into valuable insights. And progress is likely to continue at a rapid clip.

“Machine learning is important because its predictive power will disrupt numerous industries,” said Sheldon Fernandez, who is the CEO of DarwinAI. “We are already seeing this in the realm of computer vision, autonomous vehicles and natural language processing. Moreover, the implications of these disruptions may have far-reaching impacts to our quality of life, such as with advancements in medicine, health care and pharmaceuticals.”