Low-Code Is Cracking The Code On AI (Artificial Intelligence)

A recent survey from Figure Eight – an Appen company – shows that AI (Artificial Intelligence) is rapidly becoming a strategic imperative. But unfortunately, there are major bottlenecks, such as the divides between line-of-business owners and technical practitioners as well as the complexities of managing data.

But there is something that should help solve these problems: low-code. As the name implies, this involves creating applications with drag-and-drop tools and prebuilt integrations. The result is that development is much quicker and more effective (here’s a post I wrote for Forbes.com about low-code).

One of the leaders in this category is Appian, which is the first low-code operator to go public. The company has a bold guarantee for its customers: “Idea to app in eight weeks.”

Founded 20 years ago, Appian started as an IT consulting shop with a focus on AI-powered personalization and ecommerce. But at the time, the technology was far from ready for prime time. For example, the founders realized that a well-known collaborative filtering system would always recommend the same products – even when the parameters were different! This was certainly an eye-opener.

Despite all this, the founders were convinced that AI would be a big market.  However, it would need a strong platform for building applications with rules for data and models for processes. So the Appian system was born.

In the early days, though, the software was used primarily for typical IT solutions, such as applications for BPM and case management. But during the past few years, AI has become a more common use case.

OK then, how has low-code been able to help?  Well, let’s take a look:

  • Clean data: A low-code system makes it easy to describe the business process, which allows for creating a solid foundation for the data. But a platform like Appian can also make educated guesses about how the data should be organized. True, a data scientist could improve upon this, but such a person is really not necessary for maintaining data integrity.
  • Ease of implementing a model/testing models: Consider that Appian allows for integrations of various third-party AI systems, such as from Amazon, Microsoft and Google. “An Appian customer recently was able to do a bake-off between leading AI providers because of the ease of being able to integrate them into the Appian platform,” said Michael Beckley, who is the CTO of Appian.
  • Guardrails: When developing an AI project, even a few adjustments can wreak havoc on a model. But a strong low-code system can provide warnings and suggestions to avoid the mistakes.
  • Deployment: A low-code system can deliver an app across multiple platforms, whether on the web or mobile. There is also the benefit of having a modern UI. No doubt, all this can go a long way in terms of adoption.
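To make the idea behind these points concrete, here is a rough sketch in plain Python (the process and field names are hypothetical, not Appian’s actual format) of what a low-code platform does under the hood: the business process is described as data rather than hand-written code, which lets the platform enforce data integrity and warn about mistakes automatically.

```python
# Hypothetical sketch: a business process described as data (steps plus
# validation rules), which the platform can check without a data scientist.

process = {
    "name": "loan_review",
    "steps": [
        {"field": "applicant_name", "type": str, "required": True},
        {"field": "loan_amount", "type": float, "required": True, "min": 0},
        {"field": "notes", "type": str, "required": False},
    ],
}

def validate(record, process):
    """Check a record against the declarative rules; return a list of warnings."""
    warnings = []
    for step in process["steps"]:
        value = record.get(step["field"])
        if value is None:
            if step["required"]:
                warnings.append(f"missing required field: {step['field']}")
            continue
        if not isinstance(value, step["type"]):
            warnings.append(f"bad type for {step['field']}")
        elif "min" in step and value < step["min"]:
            warnings.append(f"{step['field']} below minimum")
    return warnings

print(validate({"applicant_name": "Ada", "loan_amount": -5.0}, process))
# → ['loan_amount below minimum']
```

The guardrails described above amount to running checks like this on every change, before it can wreak havoc downstream.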

An illustration of the power of low-code comes from KPMG.  The company has been investing heavily in AI, creating its own platform called Ignite.  And yes, it is integrated with Appian.

One project that KPMG took on was to help companies deal with the sunsetting of LIBOR, which means that huge numbers of contracts need to be amended.
The Ignite system processes and interprets the unstructured data using machine learning and natural language processing. After this, Appian then provides for sophisticated business process management and workflow capabilities – allowing for document sharing, customizing business rules and real-time reporting.

Based on KPMG’s own experience, the error rate for having people review the contracts ranges from 10% to 15% (this even includes trained attorneys). But with AI and low-code, the company has been able to achieve an accuracy rate better than 96%.
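The underlying pattern is worth sketching out. In this hypothetical example (the threshold and contract IDs are made up, not KPMG’s actual figures), the model auto-accepts high-confidence extractions and routes the rest into a human review queue – which is how AI plus workflow software can beat either people or models alone.

```python
# Hypothetical human-in-the-loop triage: high-confidence model outputs are
# accepted automatically; low-confidence contracts go to attorneys.

REVIEW_THRESHOLD = 0.90  # assumed cutoff, for illustration only

def triage(extractions):
    """Split (doc_id, confidence) pairs into auto-accepted and review queues."""
    auto, review = [], []
    for doc_id, confidence in extractions:
        (auto if confidence >= REVIEW_THRESHOLD else review).append(doc_id)
    return auto, review

auto, review = triage([("c1", 0.98), ("c2", 0.72), ("c3", 0.95)])
print(auto, review)  # ['c1', 'c3'] ['c2']
```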

“Greater efficiency and higher accuracy translates to reduced operational risk, reduced economic exposure, lower cost, and better client experience through the LIBOR transition,” said Todd Lohr, who is a Principal at KPMG.  “What takes a few hours for a subject matter expert to do can be accomplished by Ignite in a matter of seconds.”

The Week’s Important AI Announcements From Google And Microsoft

There are always plenty of tech conferences happening. But this week saw two biggies: Google I/O and Microsoft Build.

No doubt, a red-hot topic at these events was AI (Artificial Intelligence).  It seemed as if this was the only thing that mattered — or existed in tech!

OK then, with these conferences, what were the important announcements? Which are likely to be game-changing for the AI space?

Well, let’s take a look:

Microsoft: In a Microsoft blog about the Build conference, this is what Chris Stetkiewicz had to say: “Just a few years ago, artificial intelligence was largely relegated to universities and research labs, a charming computer science concept with little use in mainstream business. Today, AI is being integrated into everything from your refrigerator to your favorite workout app.”

This is certainly spot-on.

Now as for the notable developments at the Build conference, Microsoft released a variety of new AI-infused tools for developers. But perhaps the most interesting were its new offerings for AutoML (Automated Machine Learning). These are systems – which are part of the Azure Machine Learning service – that allow just about anyone to create sophisticated AI models. This is critical, as it is extremely difficult to recruit data scientists (here’s a recent post I did for Forbes.com on the topic).
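To see what AutoML actually automates, here is a toy sketch: each candidate model is fit on training data, scored on validation data, and the best one wins. Real services like Azure Machine Learning search far richer model and hyperparameter spaces; the two candidates below are deliberately trivial and purely illustrative.

```python
# Toy AutoML: automated model selection by validation score.

def mean_model(xs):
    """Candidate 1: always predict the training mean."""
    mean = sum(xs) / len(xs)
    return lambda _: mean

def last_value_model(xs):
    """Candidate 2: always predict the last observed value."""
    last = xs[-1]
    return lambda _: last

def auto_select(train, valid, candidates):
    """Fit each candidate on train, score on valid (MSE), return the best name."""
    def mse(model):
        return sum((model(x) - y) ** 2 for x, y in valid) / len(valid)
    fitted = [(name, fit(train)) for name, fit in candidates]
    return min(fitted, key=lambda item: mse(item[1]))[0]

train = [1.0, 2.0, 3.0, 4.0]
valid = [(None, 4.0), (None, 4.5)]
best = auto_select(train, valid, [("mean", mean_model), ("last", last_value_model)])
print(best)  # prints "last"
```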

Besides the AutoML tools, Microsoft also highlighted its support for ONNX Runtime, which is based on the Open Neural Network Exchange (ONNX), a joint effort to allow models to be deployed across different platforms. The company also announced the launch of Decision, which is an AI system that provides recommendations on making decisions (by using sophisticated approaches like reinforcement learning).

Google: For the keynote, CEO Sundar Pichai noted: “We are moving from a company that helps you find answers to a company that helps you get things done. We want our products to work harder for you in the context of your job, your home and your life.”

This is all part of the company’s ambitious “AI first” mission. In other words, the technology is not just about a few products; rather, it’s about transforming everything Google does.

Just look at Google Assistant, which is getting more and more powerful.

“I am amazed at the advancements we are seeing in AI, specifically Conversational AI,” said Bryan Stokes, who is the VP Product Management for Vonage. “Instead of the previous AI experience, which was more of a halting back and forth interaction, Conversational AI has evolved to become more of a natural ‘discussion.’ It’s like going from the Walkie Talkie to the telephone. Today’s AI capabilities enable a continuous conversation. As someone who lives and breathes communications of all kinds, seeing the next generation of Google Assistant – where you can have continuous conversation and the assistant can take actions – comes so much closer to natural human interaction. Not only can this lead to more individual productivity, but it also provides real-time insights and the ability to take action during a conversation between two people. It is the latter that I think will be most beneficial — taking away the menial tasks, like taking notes or scheduling a follow up meeting, so we can focus solely on what the other person is saying — to make that human connection.”

Google I/O also showcased how the company’s engineers and researchers are pushing the boundaries of AI innovation. For example, the company has been able to make significant strides with on-device artificial intelligence, made possible by compressing 100GB models to less than half a gigabyte. This means that Google can build AI into devices – allowing for near-zero latency as well as improved security and privacy. In fact, at the keynote, Pichai described it as having “a data center in your pocket.”
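One common compression trick is quantization: storing each 32-bit float weight as an 8-bit integer, which alone cuts model size roughly 4x (Google’s actual techniques, which also include things like pruning and distillation, go much further). Here is a minimal sketch of the idea:

```python
# Minimal quantization sketch: map float weights onto 256 integer levels.

def quantize(weights, bits=8):
    """Map floats into 2**bits integer levels over the weights' range."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2 ** bits - 1)
    return [round((w - lo) / scale) for w in weights], lo, scale

def dequantize(q, lo, scale):
    """Recover approximate floats from the integer levels."""
    return [lo + v * scale for v in q]

weights = [-0.51, 0.0, 0.23, 0.97]
q, lo, scale = quantize(weights)
approx = dequantize(q, lo, scale)

# Each 4-byte float becomes a 1-byte integer: ~4x smaller, with the
# reconstruction error bounded by the quantization step.
print(max(abs(a - b) for a, b in zip(weights, approx)) <= scale)  # True
```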

Uber IPO: What About AI (Artificial Intelligence)?

Last year Morgan Stanley and Goldman Sachs indicated that the valuation of Uber was about $120 billion. But of course, the real market value can be much different. Yesterday Uber went public at about $76 billion, with the shares falling 7.6% on their debut.

But hey, the company did raise a cool $8.1 billion, which will be essential because the losses have remained large (about $1 billion in the latest quarter). Uber also must deal with fierce competition – not just from Lyft but also from hard-charging startups in countries like Brazil.

Yet there is something else that the money will be useful for: the AI (artificial intelligence) effort.

Keep in mind that this has been a priority for some time. According to the S-1: “Managing the complexity of our massive network and harnessing the data from over 10 billion trips exceeds human capability, so we use machine learning and artificial intelligence, trained on historical transactions, to help automate marketplace decisions. We have built a machine learning software platform that powers hundreds of models behind our data-driven services across our offerings and in customer service and safety. We have developed natural language and dialog system technologies upon which we can build and scale up conversational interfaces for our users, including Drivers and consumers, to simplify and enhance interactions with our platform. Our computer vision software technology automatically processes and verifies millions of business-critical images and documents such as drivers’ licenses and restaurant menus, among other items, per year. Our proprietary sensor processing algorithms enhance our location accuracy in dense urban areas, and power important applications such as automatic crash detection by analyzing the deceleration and unexpected movement of Driver and passenger mobile devices. Our advanced machine learning algorithms improve our ability to predict Driver supply, rider demand, ETAs, and food preparation time; they power personalization such as predictive destinations and food and restaurant recommendations.”

Yes, when you use your Uber app, there is quite a bit that happens in the background to create a seamless experience.

Now another part of the AI strategy is the autonomous driving unit – known as the Advanced Technologies Group (ATG) – which was founded in 2015 and now has over 1,000 employees. More than 250 vehicles have been built, generating enormous amounts of valuable data. In fact, last month Uber announced $1 billion in funding for the ATG unit (the investors included Toyota, Denso and SoftBank’s Vision Fund) at a valuation of $7.25 billion.

Here’s how Scott Painter, a seasoned autotech serial entrepreneur (TrueCar and CarsDirect) and the CEO and founder of Fair (a company that provides a new model of flexible car ownership and does have a partnership with Uber), looks at the situation: “In particular for ridesharing, there is a massive underlying financial rationale to enabling this transition to autonomy to happen faster. It is a winner-take-all market that includes tens of millions of weekly supply hours. And, one thing that we know about ridesharing is that today, there is a constraint of supply.”

In other words, the holy grail of autonomous driving would be a true cure-all. Maintenance would be minimal and accidents a thing of the past. It would be, well, kind of like a transportation nirvana. But then again, getting to this will not be easy, cheap or quick.

OK, what about Elon Musk’s recent boast that Tesla will have one million robotaxis on the road by the end of next year?

“I would never bet against Elon, but he hasn’t demonstrated the ability to produce a million of anything yet,” said Painter. “He has ambition to do it, and that’s great. He doesn’t have the capacity or the capital to do it. But here’s the thing, that doesn’t mean that he can’t get there. I just don’t think he’s getting there next year. He’s made an announcement based on that hypothetical North Star of the business. And, if you really want to understand how a guy like Elon operates, he talks about the future in the future’s perfect tense. He talks about it as if it’s going to happen, because he wants everybody to understand that’s what he’s going to aim for.”

Uber CEO Dara Khosrowshahi agrees. While he is certainly bullish on autonomous driving, he realizes that it’s a long-term undertaking. In an interview with CNBC, he noted: “I thought: If [Musk] can do it, more power to him. Our approach is a more conservative approach as far as sensor technology and mapping technology. The software’s going to get there. So I don’t think that his vision is by any means wrong. I just think we disagree on timing.”

How To Reskill Your Workforce For AI (Artificial Intelligence)

AI is considered the most disruptive technology, according to Gartner’s 2019 CIO Survey (which includes over 3,000 CIOs from 89 countries). So yes, this is a big reason why there has been a major increase in adoption and implementation.

Yet there is a bottleneck that could easily slow the progress – that is, finding the right talent. The fact is that there are few data scientists and AI experts available.

“In our recent State of Software Engineers report, we found that demand for data engineers has increased by 38% and demand growth for machine learning engineers has increased by 27% in the last year,” said Mehul Patel, who is the CEO of Hired. “Based on data from our career marketplace, we believe that recruiting tech talent with specialized skills in machine learning and AI will continue to become increasingly competitive. Machine learning engineers are commanding an average salary of $153K in the SF Bay Area, which is nearly $20K above the global tech worker’s average salary.”

Actually, this is why one approach is to acquire companies that have strong teams! This appears to be the case with McDonald’s, which recently paid $300 million for Dynamic Yield. It’s an AI company that helps personalize customer experiences.

But of course, this option has its issues as well. Let’s face it, acquisitions can be difficult to integrate, especially when the target has a workforce with highly specialized skillsets.

So what are other approaches to consider? Well, here’s a look at some ideas:

Automation: With the growth in AI, there has also been the emergence of innovative automation tools, whether from startups or even the mega tech operators. For example, this week Microsoft introduced a new set of systems to streamline the process.

“The biggest and most impactful way that organizations can leverage their current team for data science is to implement a data science automation platform,” said Dr. Ryohei Fujimaki, who is the founder and CEO of dotData. “Data science automation significantly simplifies tasks that formerly could only be completed by data scientists, and enables existing resources — such as business analysts, BI engineers and data engineers — to execute data science projects through a simple GUI operation. Automation of the full data science process, from raw business data through data and feature engineering through machine learning, is enabling enterprises to build effective data science teams with minimal costs, using their current talent.”

Now this does not mean that a platform is a panacea, as there still need to be qualified data scientists. But then again, there will be far more efficiency and scale with AI projects.

“If organizations have data scientists already, an automation platform frees up highly-skilled resources from many of the manual and time-consuming efforts involved, and allows them to focus on more complex and strategic analysis,” said Ryohei. “This empowers data scientists to achieve higher productivity and drive greater business impact than ever before.”

Reskilling: If you currently have employees who are business analysts or have experience with data engineering, then they could be good candidates to train for AI tasks. This would include focusing on skills like Python and TensorFlow, which is a deep learning framework.

“From a training and learning perspective, there are an abundance of online resources via Coursera, Udacity, open.ai, and deeplearning.ai that can help companies develop their employees’ AI/ML skills,” said Mehul. “Additionally, it will be valuable for a company to acquire someone with existing experience in AI to be a leader and mentor for developing employees. The interesting thing about AI/data science is that you don’t need to be an experienced software engineer to do it.  The field is so exciting because of the diversity of talent and backgrounds spanning science, engineering, and economics.”

But the training should not just be for a small group of people. It should be company-wide. “Without a data-driven culture and mindset, data science and AI cannot be truly implemented,” said Ryohei. “It is important for enterprise leaders and business teams to understand how to best work with the data science team to meet the organization’s key business objectives. While the business stakeholders do not need to be data experts, they need to know ‘How to use’ AI and ‘How it changes their businesses.’”

Artificial Intelligence (AI): What About The User Experience?

One of the key drivers of the AI (Artificial Intelligence) revolution is open source software. With languages like Python and platforms such as TensorFlow, anybody can create sophisticated models.

Yet this does not mean the applications will be useful. They may wind up doing more harm than good, as we’ve seen with cases involving bias.

But there is something else that often gets overlooked: the user experience. After all, despite the availability of powerful tools and access to cloud-based systems, it is usually data scientists who create the applications – and they may not be adept at developing intuitive interfaces. Yet more and more, it’s non-technical people who are using the technology to achieve tangible business objectives.

In light of this, there has been the emergence of a new category of AI tools called Automated Machine Learning (AutoML). This uses simple workflows and drag-and-drop to create sophisticated models – allowing for the democratization of AI.

But even these systems require a background in data science and this can pose tricky issues with the development of the UI.

“Our mission when we designed Dataiku was to democratize data and AI across all people and to unite all of the various technology pieces out there,” said Florian Douetteau, who is the CEO of Dataiku. “We kept this mission in mind when we embarked on our UI. Enterprise AI is the future, and that means hundreds and thousands of people are using Dataiku every day as the core of their job, spending hours a day in the tool. So we keep the UI of Dataiku simple, clean, modern, and beautiful; no one wants to work in a space — virtual or otherwise — that is cluttered or that looks and feels old, especially when data science and machine learning are such cutting-edge fields. Another important consideration is ease of use, but not at the expense of robustness. That means making sure that Dataiku’s UI is simple for those on the business side — many of whom are used to working in spreadsheets — who don’t have extensive training in advanced data science as well as the most code-driven data scientist – but none of this as a tradeoff for deep functionality.”

Yes, it’s a tough balance to strike – but it is critical.

Actually, to get a sense of how this can work, consider Intuit’s TurboTax. The software deals with an incredibly important but complex topic for consumers. The technology also involves advanced AI systems and algorithms, such as leveraging data to surface industry-specific personalized topics.

“When we went out and asked thousands of consumers about their tax preparation, most responded with emotions of fear, uncertainty and doubt,” said Eunie Kwon, who is the Director of Design at Intuit. “Once we started to unpack their reasons for these feelings, we found opportunities to influence their experience by applying some basic psychological principles and laws of UX heuristics to simplify through mindful design. To reduce cognitive load, we balanced the fundamental elements of design through content, visual expression, animation, and recreated the informational experience to reduce fatigue, friction and confusion. To improve workflow, we dissected the complicated tax forms into adaptable and consumable interview-like experiences. We added ‘breather’ screens where we acknowledge to the customer how much they’ve completed and the accuracy of their input. We also added ‘celebration’ screens to drive confidence that informs them of their progress while educating them on the changes in tax laws along the way.”

Such approaches are simple and make a lot of sense. But when developing software, they may not get much priority.

“The main lesson learned when designing for TurboTax is balancing simplicity while ensuring 100% confidence for a customer’s tax outcome,” said Kwon. “Every year, we are faced with new mindsets that evolve the behavior of how consumers interact with products and apps. The expectations for simplicity and delight change so often that we need to look at our experience and find improvements that meet those expectations, while driving complete confidence through their tax experience.”

How To Get The Max From RPA (Robotic Process Automation)

Robotic Process Automation (RPA) is not sexy (the name alone is evidence of this). Yet it is one of the hottest sectors in the tech market.

Why all the interest? RPA allows companies to automate routine processes, which can quickly lower costs and allow employees to focus on more important tasks. The technology also reduces errors and helps to improve compliance.

Oh, and there is something else: RPA can be a gateway to AI. The reason is that the automation may help with finding patterns and insights from the data as well as to streamline the input with NLP (Natural Language Processing) and OCR.

Yet despite all this, there should definitely be care with an implementation. Keep in mind that there are still plenty of failures.

So let’s take a look at some things to consider to improve the odds of success:

Deep Dive On Your Current Processes: Rushing to implement RPA will probably mean getting subpar results. There first must be a thorough analysis and understanding of your current processes. Otherwise you’ll likely be just automating inefficiencies.

“Best practices for automation projects always begin with process mapping and re-engineering of all business scenarios,” said Sudhir Singh, who is the CEO of NIIT Technologies. “This allows all automation design to be completed upfront and can avoid multiple re-iterations during delivery.”

But truly understanding your processes can be time-consuming and difficult.  This is why it could be a good idea to bring in an expert.

There are also several software systems that can essentially do an MRI of your processes. An example is Celonis, which has partnerships with top RPA players like UiPath, Automation Anywhere, and Blue Prism. “Our system creates a business process map,” said Alexander Rinke, who is the CEO of Celonis. “With it, you can see what needs improvement.”

Start With The Mundane: RPA is best for those processes that are routine and repetitive. These are basically the kinds of things that … bore your employees. And yes, this means that RPA can span many parts of a business, like finance, HR, legal, the supply chain and so on.

It also helps if the processes do not change much. After all, this means fewer upgrades to the bots, which lowers the complexity.

Determine Whether to Replace or Supplement People: This is important as it will guide you in the type of RPA to use.

“By supplementing people, a business can implement attended bots that are assistants and helpers to team members that serve the purpose of speeding up processes and eliminating human error,” said Richard French, who is the CRO of Kryon. “This setup will empower staff to focus on advanced and complex tasks, while bot assistants handle their administrative assignments.”

But if you want to find ways to reduce headcount, then you should look at those vendors that focus on unattended bots.
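The attended-versus-unattended distinction can be sketched in a few lines of Python. In this hypothetical invoice-routing example (the amounts and rules are invented for illustration), the same automation step either pauses for a human decision or runs straight through and queues exceptions:

```python
# Hypothetical sketch: attended bots defer borderline cases to a person;
# unattended bots run end-to-end and queue exceptions for later.

def process_invoice(invoice, attended=True, approve=None):
    """Route an invoice through a simple RPA-style rule."""
    if invoice["amount"] <= 1000:
        return "auto-approved"          # routine case: no human needed
    if attended:
        # Attended mode: hand the decision to the human at the keyboard.
        return "approved" if approve(invoice) else "rejected"
    # Unattended mode: park the exception for batch review.
    return "queued for exception handling"

print(process_invoice({"amount": 500}, attended=False))  # auto-approved
print(process_invoice({"amount": 5000}, attended=True,
                      approve=lambda inv: inv["amount"] < 10000))  # approved
```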

Create a Center of Excellence (CoE): There needs to be a well-thought-out plan for funding, training, governance and maintenance of the RPA. And to carry this out, it’s recommended to set up a CoE that can manage the process. Often this includes a mix of business people, IT personnel and developers.

Scaling: It’s often easy to get early wins. But the major challenge is making RPA more pervasive.

“Many companies that do implement the technology never scale past the first 50 automated processes,” said French. “The reason is that it is difficult for executives to think beyond and understand what processes will further improve the ROI or efficiency once there is already something in place.”

This is why having a CoE is so critical. What’s more, the team will likely need to change over time, as the needs and requirements of the RPA implementation evolve.

Implementing AI The Right Way

For many companies, when it comes to implementing AI, the typical approach is to use certain features from existing software platforms (say from Salesforce.com’s Einstein).  But then there are those companies that are building their own models.

Yes, this can move the needle, leading to major benefits. At the same time, there are clear risks and expenses. Let’s face it, you need to form a team, prepare the data, develop and test models, and then deploy the system.

In light of this, it should be no surprise that AI projects can easily fail.

So what to do? How can you boost the odds for success?

Well, let’s take a look at some best practices:

IT Assessment: The fact is that most companies are weighed down with legacy systems, which can make it difficult to implement an AI project. So there must be a realistic look at what needs to be built to have the right technology foundation — which can be costly and take considerable time.

Funny enough, as you go through the process, you may realize there are already AI projects in progress!

“Confusion like this must be resolved across the leadership team before a coherent AI strategy can be formulated,” said Ben MacKenzie, who is the Director of AI Engineering at Teradata Consulting.

The Business Case: Vijay Raghavan, who is the executive vice president and CTO of Risk and Business Analytics at RELX, recommends asking questions like:

  • Do I want to use AI to build better products?
  • Do I want to use AI to get products to market faster?
  • Do I want to use AI to become more efficient or profitable in ways beyond product development?
  • Do I want to use AI to mitigate some form of risk (Information security risk, compliance risk…)?

“In a sense, this is not that different from a company that asked itself say 30 or more years ago, ‘Do I need a software development strategy, and what are the best practices for such?,'” said Vijay. “What that company needed was a software development discipline — more than a strategy — in order to execute the business strategy. Similarly, the answers to the above questions can help drive an AI discipline or AI implementation.”

Measure, Measure, Measure: While it’s important to experiment with AI, there should still be a strong discipline when it comes to tracking the project.

“This should be done at every step and must be done with a critical sense,” said Erik Schluntz, who is the cofounder & CTO at Cobalt Robotics. “Despite the fantastic hype around AI today, it is still in no way a panacea, just a tool to help accomplish existing tasks more efficiently, or create new solutions that address a gap in today’s market. Not only that, but you need to be open about auditing the strategy on an on-going basis.”

Education and Collaboration: Even though AI tools are getting much better, they still require data science skills. The problem, of course, is that it is difficult to recruit people with this kind of talent. As a result, there should be ongoing education. The good news is that there are many affordable courses from providers like Udemy and Udacity to help out.

Next, fostering a culture of collaboration is essential. “So, in addition to education, one of the key components to an AI strategy should be overall change management,” said Kurt Muehmel, who is the VP of Sales Engineering at Dataiku. “It is important to create both short- and long-term roadmaps of what will be accomplished with first maybe predictive analytics, then perhaps machine learning, and ultimately – as a longer-term goal – AI, and how each roadmap impacts various pieces of the business as well as people who are a part of those business lines and their day-to-day work.”

Recognition: When there is a win, celebrate it. And make sure senior leaders recognize the achievement.

“Ideally this first win should be completed within 8-12 weeks so that stakeholders stay engaged and supportive,” said Prasad Vuyyuru, who is a Partner of the Enterprise Insights Practice at Infosys Consulting. “Then next you can scale it gradually with limited additional functions for more business units and geographies.”

How AI Will Change B2B Marketing Forever

Back in 2006, Phil Fernandez, Jon Miller, and David Morandi founded Marketo. At the time, they only had a PowerPoint. But then again, they also had a compelling vision to create a new category known as marketing automation.

Within a few years, Marketo would become one of the fastest-growing software companies in the world, as the product-market fit was near perfect. By 2013, the company went public, and a few years later it would go private. Then in 2018, Marketo agreed to sell to Adobe for $4.75 billion.

The deal will certainly be critical to scale growth even more, and there should be major synergies. But I also think there will be a supercharging of the AI strategy, which should be transformative for the company.

Yet this is not to imply that Marketo is a laggard with this technology.  Keep in mind that the company — in 2016 — launched Predictive Content. The system leverages AI to help marketers offer better targeting based on a prospect’s activities, firmographics, and buying stage.

After this, Marketo created other offerings like:

  • Account Profiling, just announced at Adobe Summit, uses a customer’s existing customer data to determine the best prospective accounts to target, based on billions of data points in real time.
  • Predictive Audiences for Events selects the best audience to invite to an event and then forecasts attendance and recommends adjustments to meet customers’ goals.

But all this is still in the early days. “AI will become pervasive throughout B2B marketing efforts, improving performance and increasing efficiency throughout the entire buyer’s journey,” said Casey Carey, who is the Senior Director of Product Marketing for Marketo Digital Experience at Adobe.

In fact, here are just some of the important capabilities he sees with B2B marketing:

  • Audience Selection: “AI can inform improved audience selection and segmentation. Armed with tools to identify a target audience based on past behaviors, marketers can offer tailored experiences that will resonate with potential customers.”
  • Offers and Content: “AI can help marketers deliver higher value to potential customers by applying machine learning to the content selection and delivery process. This includes creative, formats, and offers. By creating personalized messages based on previous choices and behavior, marketers are able to engage in ways that resonate every time.”
  • Channels: “AI can help marketers determine the best time and place to engage with potential customers based on past channel performance and what you know about the individual.”
  • Analysis: “Using AI, marketers can quickly understand what’s working and what’s not so they can make adjustments to improve performance and drive a better return on their investments.”
  • Forecasting and Anomaly Detection: “It is not enough to know what to do, but you also need to understand what the impact will most likely be – this is where AI can help. By analyzing past results, AI can predict outcomes like campaign performance, conversion rates, revenue, and customer value. This provides a baseline for planning and then making mid-course adjustments as anomalies occur or other changes are needed.”
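The baseline-and-anomaly idea Casey describes can be sketched in a few lines. This is a toy illustration (the metric name, window size, and threshold are my own assumptions, not anything from Marketo or Adobe): build a rolling baseline from past results, then flag days that deviate sharply from it.

```python
import statistics

def flag_anomalies(daily_conversions: list[float], window: int = 7,
                   z_threshold: float = 2.0) -> list[int]:
    """Flag days whose value deviates sharply from the trailing
    window's mean: a toy version of baseline-plus-anomaly detection."""
    anomalies = []
    for i in range(window, len(daily_conversions)):
        hist = daily_conversions[i - window:i]
        mean = statistics.mean(hist)
        stdev = statistics.stdev(hist)
        # A day is anomalous if it sits more than z_threshold
        # standard deviations away from the recent baseline.
        if stdev and abs(daily_conversions[i] - mean) / stdev > z_threshold:
            anomalies.append(i)
    return anomalies

# Steady ~100 conversions/day, then a sudden collapse on day 9.
series = [100, 102, 98, 101, 99, 100, 103, 100, 101, 40]
print(flag_anomalies(series))  # [9]
```

In practice a marketer would never write this by hand; the point is only that "predict a baseline, then watch for deviations" is what the AI is doing under the hood.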

Yes, this is quite a lot! But Casey has some spot-on recommendations for marketers on how to use AI. “Rather than trying to understand the technology behind AI solutions, savvy marketers should focus instead on finding opportunities to use them,” he said. “If you catch yourself saying, ‘If only I could figure out how to put all this data to use,’ consider an AI application. On the other hand, despite everything that can be achieved by strategically implementing AI, there are still areas where AI solutions are not appropriate, such as situations where there is poor quality or insufficient data. AI is, after all, artificial intelligence. It’s only as good as the data you feed it.”

But AI is not something to ruminate about — rather, it is something that must be acted on. “Two things are happening that are making AI more prevalent in marketing,” said Casey. “First, prospects are expecting more relevant and compelling engagement along their buyer journey, and second, more data is becoming available to inform our marketing strategies. As a result, the historical way of manually analyzing data and using rule-based approaches to marketing are no longer enough.”

Lyft IPO: What About The AI Strategy?

According to the shareholder letter from Lyft’s co-founders: “In those early days, we were told we were crazy to think people would ride in each other’s personal vehicles.”

Yeah, crazy like a fox. Of course, on Friday Lyft pulled off its IPO, raising about $2.34 billion. The stock price ended the day up 8.74% at $78.29 – putting the valuation at $26.5 billion.

“Very few companies can claim 100% growth year over year at the scale they are operating at,” said Jamie Sutherland, who is the CEO and co-founder of Sonix. “It’s pretty amazing. True, it’s costing them an arm and a leg, but the nature of the industry — which is still evolving — is that there will be a handful of winners. Lyft is clearly in that camp.”

Lyft, which posted $2.2 billion in revenues in 2018, has the mission of improving “people’s lives with the world’s best transportation.” But this is about more than just technology or moving into adjacent categories like bikes and scooters. Lyft sees ride-hailing as a way to upend the negative aspects of car ownership. Keep in mind that cars are the second-highest household expense, and a typical car is used only about 5% of the time. There are also some 37,000 traffic-related deaths each year.

AI And The Lyft Mission

Despite all the success, Lyft is still in the early phases of its market opportunity, as rideshare networks account for roughly 1% of the miles traveled in the US. But to truly achieve the vision of transforming the transportation industry, the company will need to be aggressive with AI.  And yes, Lyft certainly understands this.

For some time, the company has been embedding machine learning into its technology stack. Lyft has the advantage of data on over one billion rides and more than ten billion miles – which allows for training models to improve the experience, such as by reducing arrival times and maximizing the number of available riders. The technology also powers sophisticated pricing models.
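Lyft’s actual models are proprietary, but the basic idea behind demand-responsive pricing can be sketched with a toy supply/demand multiplier. Everything here, from the function name to the cap, is an illustrative assumption, not Lyft’s implementation:

```python
def surge_multiplier(open_requests: int, available_drivers: int,
                     base: float = 1.0, cap: float = 3.0) -> float:
    """Toy dynamic-pricing rule: scale the fare with the ratio of
    demand (open ride requests) to supply (available drivers),
    capped so riders never see more than `cap` times the base fare."""
    if available_drivers == 0:
        return cap
    ratio = open_requests / available_drivers
    # Only surge when demand outstrips supply.
    multiplier = base * max(1.0, ratio)
    return min(multiplier, cap)

# Balanced market: no surge.
print(surge_multiplier(10, 10))   # 1.0
# Demand at twice supply: 2x fare.
print(surge_multiplier(20, 10))   # 2.0
# Extreme imbalance is capped.
print(surge_multiplier(100, 5))   # 3.0
```

A production system would learn these curves from the ride data mentioned above rather than hard-coding a ratio, which is exactly where the machine learning comes in.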

But when it comes to AI, the holy grail is autonomous driving. For Lyft, this involves a two-part strategy. First of all, there is the Open Platform that allows third-party developers to create technology for the network. Lyft believes that autonomous vehicles will likely be most effective when managed through ridesharing networks because of the sophisticated routing systems.

Next, Lyft is building its own autonomous vehicles. For example, in October the company purchased Blue Vision Labs, which is a developer of computer vision technology. There have also been a myriad of partnerships with car manufacturers and suppliers.

So what is the time line for the autonomous vehicle efforts? Well, according to the S-1: “In the next five years, our goal is to deploy an autonomous vehicle network that is capable of delivering a portion of rides on the Lyft platform. Within 10 years, our goal is to have deployed a low-cost, scaled autonomous vehicle network that is capable of delivering a majority of the rides on the Lyft platform. And, within 15 years, we aim to deploy autonomous vehicles that are purpose-built for a broad range of ridesharing and transportation scenarios, including short- and long-haul travel, shared commute and other transportation services.”

This is certainly ambitious as there remain complex technology and infrastructure challenges. Furthermore, Lyft must deal with societal issues.

“According to an AAA study, 71% of Americans do not feel comfortable riding in fully autonomous vehicles,” said David Barzilai, who is the executive chairman and co-founder of Karamba Security. “Similarly, recent cyber security attacks have been shaking that trust as well and significantly decreasing consumer willingness to enter an autonomous vehicle.”

But hey, the founders of Lyft have had to deal with enormous challenges before. And besides, the company has the resources and scale to effectively pursue AI.

“Lyft is doing a tremendous job of pushing self-driving technology ahead,” said Aleksey Medvedovskiy, who is the founder of Lacus and president of NYC Taxi Group. “Self-driving cars will help to eliminate traffic and potential accident problems.  In my opinion, self-driving technology is much safer and better than many drivers who are currently on the roads.”

PagerDuty IPO: Is AI The Secret Sauce?

Because of the government shutdown earlier in the year, there was a delay with IPOs, as the SEC could not evaluate the filings. But now it looks like the market is getting ready for a flood of deals.

One of the first will be PagerDuty, which was actually founded during the financial crisis of 2009. The core mission of the company is “to connect teams to real-time opportunity and elevate work to the outcomes that matter.”

Interestingly enough, PagerDuty refers to itself as the central nervous system of a digital enterprise. This means continuously analyzing systems to detect risks but also to find opportunities to improve operations, increase revenues and promote more innovation.

Keep in mind that this is far from easy. After all, most data is just useless noise. But then again, in today’s world where people expect quick action and standout customer experiences, it is important to truly understand data.

The PagerDuty S-1 highlights this with some of the following findings:

  • The abandon rate is 53% for mobile website visitors if the site takes longer than three seconds to load.
  • A major online retailer can lose up to $500,000 in revenue for every minute of downtime.
  • A survey from PricewaterhouseCoopers shows that 32% of customers say they would ditch a brand after one bad experience.

As for PagerDuty, it has built a massive data set from over 10,000 customers, which has allowed the company to leverage cutting-edge AI models that supercharge the insights.

Here’s how PagerDuty describes it in the S-1 filing: “We apply machine learning to data collected by our platform to help our customers identify incidents from the billions of digital signals they collect each day. We do this by automatically converting data from virtually any software-enabled system or device into a common format and applying machine-learning algorithms to find patterns and correlations across that data in real time. We provide teams with visibility into similar incidents and human context, based on data related to past actions that we have collected over time, enabling them to accelerate time to resolution.”
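PagerDuty does not publish its algorithms, but the two steps it describes, normalizing heterogeneous signals into a common format and then correlating them against past incidents, can be illustrated with a minimal sketch. The field names and the token-overlap similarity below are my own assumptions, not PagerDuty’s actual pipeline:

```python
def normalize(raw: dict) -> dict:
    """Map a vendor-specific alert into one common event format.
    The source field names handled here are illustrative."""
    return {
        "summary": (raw.get("message") or raw.get("title") or "").lower(),
        "source": raw.get("host") or raw.get("service") or "unknown",
    }

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two alert summaries."""
    sa, sb = set(a.split()), set(b.split())
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def find_similar(event: dict, past: list[dict],
                 threshold: float = 0.5) -> list[dict]:
    """Return past incidents whose summaries overlap enough with
    the new event, giving responders historical context."""
    return [p for p in past if jaccard(event["summary"], p["summary"]) >= threshold]

# Two different monitoring tools, one common format.
e1 = normalize({"message": "DB connection pool exhausted", "host": "db-1"})
e2 = normalize({"title": "db connection pool exhausted again", "service": "orders"})
print(find_similar(e2, [e1]))  # the past incident surfaces as similar
```

The real platform applies machine learning across billions of signals rather than a hand-written similarity score, but the shape of the pipeline, normalize first, then correlate, is the same.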

The result is a myriad of powerful use cases. For example, the AI helps GoodEggs monitor warehouses to make sure food is fresh. Then there is Slack, which uses the technology to remove friction from its incident response process.

For PagerDuty, the result has been durable growth on the top line, with revenues jumping 48% during the past year. The company also has a 139% net retention rate and counts 33% of the Fortune 500 companies as customers.

Yet PagerDuty is still in the nascent stages of the opportunity. Note that the company estimates the total addressable market at over $25 billion, which is based on an estimated 85 million users.

Data + AI

But again, when looking at the IPO, it’s really about the data mixed with AI models. This is a powerful combination that should create strong barriers to entry, since the underlying data set will be difficult to replicate. There is also a virtuous cycle, as the systems get smarter and smarter.

Granted, there are certainly risk factors. If the AI fails to effectively detect threats or produces false positives, then PagerDuty’s business would likely be greatly impacted.

But so far, it seems that the company has been able to build a robust infrastructure.

Now the PagerDuty IPO, which will likely hit the markets in the next couple of weeks, will be just one of many AI-related companies to pull off an offering. Basically, get ready for a lot more, and fast.