Robots + AI: Boring Is Beautiful

During the past year, there have been major implosions of robot startups, such as Jibo, Anki and Rethink Robotics. All three raised substantial amounts of capital from top-tier investors and had strong teams.

So why the failures? One of the main reasons is the extreme complexity of melding software with movable hardware. As a result, the technology often fails to live up to expectations.

Even with the strides in AI – such as deep learning — there is still much to do. “Deep learning and robotics is difficult for a variety of reasons,” said Carmine Rimi, who is a Product Manager for AI at Canonical. “For instance, simultaneous localization and mapping (SLAM) — building a map of an unknown environment while simultaneously keeping track of an agent’s location within it in tractable time — is a challenge. In real time it is at least a magnitude more difficult. Research into advanced algorithms that deliver better accuracy faster and at lower power consumption, along with quantum-like parallel states and processing, are some of the areas that will help. And this is part of why it is difficult.”
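
To make Rimi's SLAM point concrete, here is a toy one-dimensional sketch of the localization half of the problem: dead reckoning from noisy odometry drifts without bound, while fusing range measurements to a known landmark keeps the error in check. All the numbers and the correction gain below are invented for illustration.

```python
# Toy 1D illustration of localization: dead reckoning from biased
# odometry drifts, while blending in range measurements to a landmark
# at a known position keeps the estimate close to the truth.

def localize(odometry, landmark_pos, ranges, gain=0.5):
    """Blend dead reckoning with landmark range corrections."""
    estimate = 0.0
    for step, odo in enumerate(odometry):
        estimate += odo                      # predict: integrate odometry
        if step in ranges:                   # correct: landmark sighted
            observed = landmark_pos - ranges[step]
            estimate += gain * (observed - estimate)
    return estimate

# The robot truly moves 1.0 per step for 10 steps (true final position
# 10.0), but every odometry reading is biased high by 0.2.
odometry = [1.2] * 10
landmark = 10.0
# Range readings to the landmark at steps 4 and 9 (true ranges 5.0, 0.0).
ranges = {4: 5.0, 9: 0.0}

dead_reckoning = sum(odometry)               # drifts to 12.0 (error 2.0)
fused = localize(odometry, landmark, ranges)
print(round(dead_reckoning, 3), round(fused, 3))
```

Even this toy needs a second information source to bound the drift; a real robot has to do the same thing in three dimensions, at high rates, while also building the map it is localizing against.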

But there is much more than this. Dr. Alex Wong, who is the Chief Scientist and co-founder of DarwinAI, has this to say: “One of the primary difficulties with AI in this context is that learning to manipulate physical objects with a high level of dexterity in dynamic and ‘noisy’ real-world environments is extremely challenging, as it must take into account an incredible number of environmental factors to make complex decisions in real-time. Additional complexities in this area are issues associated with ‘data sparsity’ and training speed.”

And finally, AI is still fairly narrow. The fact is that we are years away from some type of general intelligence. “The challenge of replicating the capabilities of a human being – whether it be on a production line or in a medical facility – is very difficult,” said Ran Poliakine, who is the co-founder of Musashi AI. “For example, the ability to imitate the function of the brain when looking at an image is incredibly complicated. This is why, until now, even with all of the robotics and advanced hardware available, the ability to make a decision or imitate a human reaction was nearly impossible.”

Now all this is not to imply that the situation is hopeless. If anything, there are enormous opportunities with robots and AI. Yet there must be different approaches to the technology, especially when compared to software-only AI.

So what to do? Well, consider Erik Schluntz, who is the CTO of Cobalt Robotics. He has found plenty of success – primarily because his approach is not about moonshots. His company develops robots that provide security services in the workplace.

When Schluntz started Cobalt, he first talked to a range of companies across several industries so as to find real-world problems to focus on. “We did not want to come up with an idea in a vacuum,” he said.

But the Cobalt robot would not be a replacement for people. “Marrying the benefits of robotics with the unique capabilities of humans means creating something that is greater than the sum of its parts,” said Schluntz. “The reliability of robots for tasks that require unwavering attention or precise repetition is unmatched. When you expand the capabilities of a robot by integrating its work with that of a human for flexible decision-making, you’re enabling a greater level of effectiveness for roles that are more than just the dirty, dull or dangerous. In this sense, the sweet spot for robotics applications is greatly expanded. A robot can detect leaks and spills in a building before a human can, and by working with a human it can correct the anomaly. We let the robot do the dull part – tirelessly patrolling in search of water leaks – and save the interesting response for a human.”

True, it’s not necessarily sexy. But hey, Cobalt has turned into a solid company that has customers like Yelp and Slack.

“To enable the proliferation of robots and AI, robots need to be friendly, functional and easy to be around,” said Schluntz.  “Success will rely on key players in the robotics space being intellectually honest and realistic—identifying clear use-cases, demonstrating clear ROI, operating in an inherently safe and secure manner (both physical and cyber context), and creating future roadmaps.”

SF Facial Recognition Ban: What Now For AI (Artificial Intelligence)?

Recently San Francisco passed – in an 8-to-1 vote — a ban on the use of facial recognition technologies by local agencies. The move is not likely to be a one-off, either. Other local governments are exploring similar prohibitions to address the Orwellian risk that the technology could harm people’s privacy. “In the mad dash towards AI and analytics, we often turn a blind eye to their long-range societal implications, which can lead to startling conclusions,” said Kon Leong, who is the CEO of ZL Technologies.

Yet some tech companies are getting proactive. For example, Microsoft has indicated there should be regulation of facial recognition systems (though not an outright ban). The company even declined a request to sell its own technology to a police department in California.

Keep in mind that — even with the strides in AI — there are still problems with the technology. There have been numerous cases where it has produced false positives.
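
A simple base-rate calculation shows why false positives are so hard to avoid: when the faces being searched for are rare, even an accurate system produces mostly bad matches. The rates below are illustrative, not drawn from any real deployment.

```python
# Why an accurate face-recognition system still produces mostly false
# positives when the target is rare (the base-rate effect). All rates
# here are invented for illustration.

true_positive_rate = 0.99    # flags a watchlisted face 99% of the time
false_positive_rate = 0.01   # wrongly flags 1% of everyone else
prevalence = 1 / 10_000      # 1 in 10,000 faces is actually watchlisted

# Bayes' rule: P(actually on the list | system raised a flag)
p_flagged = (true_positive_rate * prevalence
             + false_positive_rate * (1 - prevalence))
precision = true_positive_rate * prevalence / p_flagged

print(f"{precision:.1%} of flags are real matches")
```

With these numbers, roughly 99 out of every 100 flags point at an innocent person, which is exactly the kind of outcome driving the policy debate.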

“Before AI, the saying was: ‘Big brother is watching you, but he can’t see,’” said Stefan Ritter, who is the Chief Product Officer and co-Founder of Ruum. “In essence, it meant we had CCTV everywhere recording us, but in reality, it was not taking away our freedoms or rights — yes we were being taped, but in effect police only turned to the recordings when there was a crime severe enough to merit the many hours of painstakingly going through video recordings. However, with AI-powered facial recognition for social control, we could come dangerously close to a ‘Minority Report’-esque future, where neural networks could, in theory, recognize crimes before they happen. ‘Innocent until proven otherwise’ is one of the founding principles of the democratic state, so it’s crucial that we have a broad discussion about how we want to leverage AI in our social systems.”

Now of course, there are many benefits to facial recognition systems. They can be leveraged to identify diseases in MRIs or to even help predict crashes before they happen.

But then again, there really needs to be a focus on the unintended consequences. In fact, if there are high-profile mishaps, the result could be a stunting of AI’s progress.

“The San Francisco city government is setting a positive example by banning facial recognition technology,” said Asma Zubair, who is the Sr. Manager of IAST Product Management at Synopsys. “While it’s a good start, we must also recognize that the use of facial recognition in the private sector continues to grow. While the technology has improved greatly in recent years, there are known weaknesses when recognizing certain groups of people. As the adoption of facial recognition technology grows, raw video footage will become more easily available as structured data. Data that includes biometric information and personally identifiable information will likely be stored in a searchable format — for example, names, location, time, date and so on. This structured data may be retained for long periods of time, which makes it susceptible to breaches and misuse. With so many data privacy breaches in the headlines already, organizations clearly aren’t ready to use facial recognition in a safe, secure, and responsible manner.”

Regardless, the silver lining is that there is robust debate and some action. This is in contrast to what happened with social media, which quickly got out of control.

“Facial recognition technology is part of the broader ethical and legal debates around algorithmic transparency and integrity, including the role of human intervention in validating the outputs of algorithms where the impacts to individuals may be significant,” said Hilary Wandall, who is the SVP of privacy intelligence at TrustArc.

Low-Code Is Cracking The Code On AI (Artificial Intelligence)

A recent survey from Figure Eight – an Appen company – shows that AI (Artificial Intelligence) is rapidly becoming a strategic imperative. But unfortunately, there are major bottlenecks, such as the divides between line-of-business owners and technical practitioners as well as the complexities of managing data.

But there is something that should help solve these problems: low-code. As the name implies, this involves creating applications with drag-and-drop interfaces and prebuilt integrations. The result is that development is much quicker and more effective (here’s a post I wrote about low-code).

One of the leaders in this category is Appian, which is the first low-code operator to go public. The company has a bold guarantee for its customers: “Idea to app in eight weeks.”

Founded 20 years ago, Appian started as an IT consulting shop with a focus on AI-powered personalization and ecommerce. But at the time, the technology was far from ready for prime time. For example, the founders realized that a well-known collaborative filtering system would always recommend the same products – even when the parameters were different! This was certainly an eye-opener.

Despite all this, the founders were convinced that AI would be a big market.  However, it would need a strong platform for building applications with rules for data and models for processes. So the Appian system was born.

In the early days, though, the software was used primarily for typical IT solutions, such as building applications for BPM and case management. During the past few years, however, AI has become a more common use case.

OK then, how has low-code been able to help?  Well, let’s take a look:

  • Clean data: A low-code system makes it easy to describe the business process, which creates a solid foundation for the data. A platform like Appian can also make educated guesses about how the data should be organized. True, a data scientist could improve upon this, but such a person is not strictly necessary for maintaining data integrity.
  • Ease of implementing and testing models: Consider that Appian allows for integrations of various third-party AI systems, such as those from Amazon, Microsoft and Google. “An Appian customer recently was able to do a bake-off between leading AI providers because of the ease of being able to integrate them into the Appian platform,” said Michael Beckley, who is the CTO of Appian.
  • Guardrails: When developing an AI project, even a few adjustments can wreak havoc on a model. But a strong low-code system can provide warnings and suggestions to avoid the mistakes.
  • Deployment: A low-code system can deliver an app across multiple platforms, whether on the web or mobile. There is also the benefit of having a modern UI. No doubt, all this can go a long way in terms of adoption.
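
The bake-off Beckley describes can be pictured as a thin, provider-agnostic layer: wrap each AI service behind the same interface, then score them all on the same data. The sketch below uses mock classifiers in place of the real Amazon, Microsoft and Google services, since the actual Appian integration APIs are not shown here.

```python
# Sketch of the "bake-off" idea: every provider sits behind the same
# interface, so comparing them is configuration, not new code. The two
# providers are mock stand-ins, not real cloud SDK calls.

labeled_docs = [("invoice", "invoice"), ("contract", "contract"),
                ("invoice", "invoice"), ("memo", "contract")]

def provider_a(text):
    return text            # mock model: echoes the text as its label

def provider_b(text):
    return "contract"      # mock model: always predicts "contract"

def bake_off(providers, data):
    """Score every provider on the same labeled documents."""
    scores = {}
    for name, predict in providers.items():
        hits = sum(predict(text) == label for text, label in data)
        scores[name] = hits / len(data)
    return scores

scores = bake_off({"provider_a": provider_a, "provider_b": provider_b},
                  labeled_docs)
print(scores)
```

Because every provider satisfies the same interface, swapping the winner into production is a one-line change — which is the advantage the low-code platform is selling.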

An illustration of the power of low-code comes from KPMG.  The company has been investing heavily in AI, creating its own platform called Ignite.  And yes, it is integrated with Appian.

One project that KPMG took on was to help companies deal with the sunsetting of LIBOR, which means that huge numbers of contracts need to be amended. The Ignite system processes and interprets the unstructured data using machine learning and natural language processing, and Appian then provides sophisticated business process management and workflow capabilities – allowing for document sharing, customizable business rules and real-time reporting.

Based on KPMG’s own experience, the error rate for having people review the contracts ranges from 10% to 15% (this even includes trained attorneys). But with AI and low-code, the company has been able to achieve an accuracy rate better than 96%.
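
Some back-of-the-envelope arithmetic with those figures, applied to a hypothetical batch of 10,000 contracts:

```python
# Comparing the review-error figures above: a 10-15% human error rate
# versus a >96% machine accuracy rate (i.e. a <4% error rate), applied
# to an invented batch of 10,000 contracts.

contracts = 10_000
human_errors_low = int(contracts * 0.10)    # 1,000 contracts misread
human_errors_high = int(contracts * 0.15)   # 1,500 contracts misread
machine_errors = int(contracts * 0.04)      # at most 400 misread

print(human_errors_low, human_errors_high, machine_errors)
```

On those assumptions the automated pipeline makes roughly a third as many mistakes as the low end of human review — before even counting the speed difference Lohr describes.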

“Greater efficiency and higher accuracy translates to reduced operational risk, reduced economic exposure, lower cost, and better client experience through the LIBOR transition,” said Todd Lohr, who is a Principal at KPMG.  “What takes a few hours for a subject matter expert to do can be accomplished by Ignite in a matter of seconds.”

The Week’s Important AI Announcements From Google And Microsoft

There are always plenty of tech conferences happening. But this week saw two biggies: Google I/O and Microsoft Build.

No doubt, a red-hot topic at these events was AI (Artificial Intelligence).  It seemed as if this was the only thing that mattered — or existed in tech!

OK then, with these conferences, what were the important announcements? Which are likely to be game-changing for the AI space?

Well, let’s take a look:

Microsoft: In a Microsoft blog about the Build conference, this is what Chris Stetkiewicz had to say: “Just a few years ago, artificial intelligence was largely relegated to universities and research labs, a charming computer science concept with little use in mainstream business. Today, AI is being integrated into everything from your refrigerator to your favorite workout app.”

This is certainly spot-on.

Now as for the notable developments at the Build conference, Microsoft released a variety of new AI-infused tools for developers. But perhaps the most interesting were its new offerings for AutoML (Automated Machine Learning). These are systems – part of the Azure Machine Learning service — that allow just about anyone to create sophisticated AI models. This is critical, as it is extremely difficult to recruit data scientists (here’s a recent post I did on the topic).
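
At its core, AutoML is an automated search: try many candidate models and configurations, score each on held-out data, and keep the best. Here is that loop reduced to a pure-Python toy — the candidate models and data are invented, and this is not the Azure Machine Learning API.

```python
# The core AutoML idea, stripped to a toy: automatically evaluate
# several candidate models on held-out data and keep whichever scores
# best. Real systems also search feature pipelines and hyperparameters.

validation = [(4, 8.1), (5, 9.8)]   # (x, y) pairs where y is roughly 2x

candidates = {
    "always_zero": lambda x: 0.0,
    "identity":    lambda x: float(x),
    "double":      lambda x: 2.0 * x,
}

def validation_error(model, data):
    """Mean squared error on held-out data."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

best_name = min(candidates,
                key=lambda name: validation_error(candidates[name], validation))
print(best_name)
```

The value proposition is that a business analyst only supplies the data and the metric; the search over models is what the platform automates.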

Besides the AutoML tools, Microsoft also highlighted its support for ONNX Runtime, an inference engine for the Open Neural Network Exchange (ONNX) format – a cross-industry effort to allow models to be deployed across different platforms. The company also announced the launch of Decision, an AI system that provides recommendations for making decisions (using sophisticated approaches like reinforcement learning).

Google: For the keynote, CEO Sundar Pichai noted: “We are moving from a company that helps you find answers to a company that helps you get things done. We want our products to work harder for you in the context of your job, your home and your life.”

This is all part of the company’s ambitious “AI first” mission. In other words, the technology is not just about a few products; rather, it’s about transforming everything Google does.

Just look at Google Assistant, which is getting more and more powerful.

“I am amazed at the advancements we are seeing in AI, specifically Conversational AI,” said Bryan Stokes, who is the VP Product Management for Vonage. “Instead of the previous AI experience, which was more of a halting back and forth interaction, Conversational AI has evolved to become more of a natural ‘discussion.’ It’s like going from the Walkie Talkie to the telephone. Today’s AI capabilities enable a continuous conversation. As someone who lives and breathes communications of all kinds, seeing the next generation of Google Assistant – where you can have continuous conversation and the assistant can take actions – comes so much closer to natural human interaction. Not only can this lead to more individual productivity, but it also provides real-time insights and the ability to take action during a conversation between two people. It is the latter that I think will be most beneficial — taking away the menial tasks, like taking notes or scheduling a follow up meeting, so we can focus solely on what the other person is saying — to make that human connection.”

Google I/O also showcased how the company’s engineers and researchers are pushing the boundaries of AI innovation. For example, the company has been able to make significant strides with localized artificial intelligence, made possible by compressing 100GB algorithms to less than half a GB. This means that Google can implement AI into devices – allowing for near zero latency as well as improved security and privacy. In fact, at the keynote, Pichai dubbed it as having “a data center in your pocket.”
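
One standard ingredient of this kind of on-device compression is post-training quantization. The sketch below (using invented weights) stores 32-bit floats as 8-bit integers plus a single scale factor — about a 4x saving on its own; a ~200x reduction like the one described above would also lean on techniques such as pruning and distillation.

```python
# Post-training quantization in miniature: replace float32 weights with
# int8 codes plus one scale factor, shrinking storage 4x while keeping
# each weight within half a quantization step of its original value.
from array import array

# 1,000 float32 weights spread evenly across [-1.0, 1.0] (made up).
weights = array('f', [-1.0 + 2.0 * i / 999 for i in range(1000)])

scale = max(abs(w) for w in weights) / 127   # map [-1, 1] onto [-127, 127]
quantized = array('b', [round(w / scale) for w in weights])
restored = [q * scale for q in quantized]

ratio = (len(weights) * weights.itemsize) / (len(quantized) * quantized.itemsize)
max_error = max(abs(w - r) for w, r in zip(weights, restored))
print(ratio)  # 4.0: four bytes per weight down to one
```

Smaller weights mean the model fits in a phone's storage and memory, which is what makes the "data center in your pocket" framing possible.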

Uber IPO: What About AI (Artificial Intelligence)?

Last year Morgan Stanley and Goldman Sachs indicated that the valuation of Uber could reach $120 billion. But of course, the real market value can be much different. Yesterday Uber went public at a valuation of about $76 billion, and the shares fell 7.6% on their debut.

But hey, the company did raise a cool $8.1 billion, which will be essential because the losses have remained large (about $1 billion in the latest quarter). Uber must also deal with fierce competition – not just from Lyft but also from fast-growing startups in other countries, such as Brazil.

Yet there is something else that the money will be useful for: the AI (artificial intelligence) effort.

Keep in mind that this has been a priority for some time. According to the S-1: “Managing the complexity of our massive network and harnessing the data from over 10 billion trips exceeds human capability, so we use machine learning and artificial intelligence, trained on historical transactions, to help automate marketplace decisions. We have built a machine learning software platform that powers hundreds of models behind our data-driven services across our offerings and in customer service and safety. We have developed natural language and dialog system technologies upon which we can build and scale up conversational interfaces for our users, including Drivers and consumers, to simplify and enhance interactions with our platform. Our computer vision software technology automatically processes and verifies millions of business-critical images and documents such as drivers’ licenses and restaurant menus, among other items, per year. Our proprietary sensor processing algorithms enhance our location accuracy in dense urban areas, and power important applications such as automatic crash detection by analyzing the deceleration and unexpected movement of Driver and passenger mobile devices. Our advanced machine learning algorithms improve our ability to predict Driver supply, rider demand, ETAs, and food preparation time; they power personalization such as predictive destinations and food and restaurant recommendations.”
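
The crash-detection idea from the S-1 can be sketched as a simple threshold on how fast a phone's measured speed drops. Uber's production algorithms are proprietary; the threshold and speed samples below are invented for illustration.

```python
# Minimal sketch of crash detection from device motion: flag a ride
# when the phone's speed drops far faster than hard braking can
# explain. Threshold and data are made up for illustration.

CRASH_DECEL = 15.0  # m/s^2; hard braking is roughly 8 m/s^2

def detect_crash(speeds, dt=1.0):
    """Return sample indices where deceleration exceeds the threshold."""
    events = []
    for i in range(1, len(speeds)):
        decel = (speeds[i - 1] - speeds[i]) / dt
        if decel > CRASH_DECEL:
            events.append(i)
    return events

# Speed samples (m/s) at 1-second intervals: steady cruising at about
# 90 km/h, then a sudden stop at sample 4.
speeds = [25.0, 25.2, 25.1, 25.0, 0.0, 0.0]
print(detect_crash(speeds))
```

A production system would obviously fuse accelerometer, GPS and gyroscope signals and filter out phone drops and potholes, but the core trigger is this kind of deceleration test.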

Yes, when you use your Uber app, there is quite a bit that happens in the background to create a seamless experience.

Now another part of the AI strategy is the autonomous driving unit – known as the Advanced Technologies Group (ATG) – which was founded in 2015 and now has over 1,000 employees. More than 250 self-driving vehicles have been built, generating enormous amounts of valuable data. In fact, last month Uber announced $1 billion in funding for the ATG unit (investors included Toyota, Denso and SoftBank’s Vision Fund) at a valuation of $7.25 billion.

Here’s how Scott Painter, a seasoned autotech serial entrepreneur (TrueCar and CarsDirect) and the CEO and founder of Fair (a company that provides a new model of flexible car ownership and does have a partnership with Uber), looks at the situation: “In particular for ridesharing, there is a massive underlying financial rationale to enabling this transition to autonomy to happen faster. It is a winner-take-all market that includes tens of millions of weekly supply hours. And, one thing that we know about ridesharing is that today, there is a constraint of supply.”

In other words, the holy grail of autonomous driving would be a true cure-all. Maintenance would be minimal and accidents a thing of the past. It would be, well, kind of like a transportation nirvana. But then again, getting to this will not be easy, cheap or quick.

OK, what about Elon Musk’s recent boast that Tesla will have one million robotaxis on the road by the end of next year?

“I would never bet against Elon, but he hasn’t demonstrated the ability to produce a million of anything yet,” said Painter. “He has ambition to do it, and that’s great. He doesn’t have the capacity or the capital to do it. But here’s the thing, that doesn’t mean that he can’t get there. I just don’t think he’s getting there next year. He’s made an announcement based on that hypothetical North Star of the business. And, if you really want to understand how a guy like Elon operates, he talks about the future in the future’s perfect tense. He talks about it as if it’s going to happen, because he wants everybody to understand that’s what he’s going to aim for.”

Uber CEO Dara Khosrowshahi agrees. While he is certainly bullish on autonomous driving, he sees it as a long-term undertaking. In an interview with CNBC, he noted: “I thought: If [Musk] can do it, more power to him. Our approach is a more conservative approach as far as sensor technology and mapping technology. The software’s going to get there. So I don’t think that his vision is by any means wrong. I just think we disagree on timing.”

Beyond Meat: The Keys To Disrupting An Enormous Market

This week’s IPO of Beyond Meat (BYND) was a flashback to the dot-com boom as the company’s shares soared 167% on its debut. The valuation: a hefty $3.8 billion.

Even more impressive is that Beyond Meat is, well, a food company (it develops plant-based meat products) and the sales for 2018 were only $87.9 million (and yes, the company has yet to post a profit).

But then again, Beyond Meat is a New Age food company that is disrupting a massive industry. Keep in mind that global spending on meat is roughly $1.4 trillion.

So how did the founder and CEO — Ethan Brown – pull this off? What are the lessons for disrupting an industry?

Let’s take a look:

Mission-Based Focus: In his shareholder letter, Ethan writes: “Beyond Meat’s story begins on farmland. Through my father’s love of farming and the natural world, my urban childhood was interwoven with time spent on our family’s farm in Western Maryland where we were partners in a Holstein dairy operation. As a child, I was fascinated by the animals surrounding us: the companions at our sides, the livestock in the barns and fields, and the wildlife in the woods, streams, and ponds. As a young adult, I enjoyed a career in clean energy but continued to wrestle with a question born of these early days: do we need animals to produce meat? Over the years, the question knocked more loudly and I set out to understand meat.”

Having a story is critical. It’s a way to get investors excited, inspire employees and build loyalty with customers.

But Ethan’s story was about more than just making healthy food that’s tasty. Through his research, he realized that his product has a positive environmental impact: 90% fewer greenhouse gas emissions, 99% less water, 93% less land and 46% less energy. Oh, and there is also the benefit of animal welfare.

As a testament to Ethan’s vision, his company has amassed 1.2 million followers across social media and newsletters as well as 9.9 billion earned media impressions in 2018.

Market: When it comes to mission-based companies, there is always the issue of focusing on niche opportunities. For example, with plant-based meats, the market for vegetarians and vegans is less than 5% of the US population.

This is why one of the key strategies for Ethan has been to market to meat-loving consumers. Interestingly enough, he requests that his product be sold in the meat case at grocery stores.

Investors: Selecting the right ones is absolutely essential. To this end, Ethan received investments from top VCs like Kleiner Perkins Caufield & Byers and Obvious, which have deep experience with scaling startups.

But Ethan also looked to investors – like Bill Gates, Leonardo DiCaprio and Seth Goldman (the founder of Honest Tea) — who are influencers that can evangelize his story.

Product Innovation and Distribution: Getting this balance right is challenging. It’s common for companies to emphasize one over the other.

But so far, Ethan has been able to manage the process. When he founded the company in 2009, he started in a small commercial kitchen and experimented with different recipes. He also sought help from researchers at the University of Missouri’s Bioengineering and Food Science Department at the College of Agriculture and Natural Resources, and from faculty and students at the University of Maryland’s Nutrition & Food Science Department. Ethan wanted to make sure his product was amazing, which would create a flywheel with distribution.

His first major partner was Whole Foods. Then, over the years, he was able to get buy-in from Target, Kroger, Del Taco, Carl’s Jr. and T.G.I. Fridays. Currently, there are about 30,000 distribution points across retail locations and restaurants.

According to the IPO prospectus: “[P]opular restaurants have approached us directly to carry our branded product, despite already carrying our competitors’ products. This type of demand for our products has been a driving force in building strong ties with customers who have been continuously impressed by the impact our brand can make on their business.”

How To Reskill Your Workforce For AI (Artificial Intelligence)

AI is considered the most disruptive technology, according to Gartner’s 2019 CIO Survey (which includes over 3,000 CIOs from 89 countries). So yes, this is a big reason why there has been a major increase in adoption and implementation.

Yet there is a bottleneck that could easily slow the progress – that is, finding the right talent. The fact is that there are few data scientists and AI experts available.

“In our recent State of Software Engineers report, we found that demand for data engineers has increased by 38% and demand for machine learning engineers has grown by 27% in the last year,” said Mehul Patel, who is the CEO of Hired. “Based on data from our career marketplace, we believe recruiting for tech talent with specialized skills in machine learning and AI will continue to become increasingly competitive. Machine learning engineers are commanding an average salary of $153K in the SF Bay Area, which is nearly $20K above the global tech worker’s average salary.”

Actually, this is why one approach is to acquire companies that have strong teams! This appears to be the case with McDonald’s, which recently paid $300 million for Dynamic Yield. It’s an AI company that helps personalize customer experiences.

But of course, this option has its issues as well. Let’s face it, acquisitions can be difficult to integrate, especially when the target has a workforce with highly specialized skillsets.

So what are other approaches to consider? Well, here’s a look at some ideas:

Automation: With the growth in AI, there has also been the emergence of innovative automation tools, whether from startups or even the mega tech operators. For example, this week Microsoft introduced a new set of systems to streamline the process.

“The biggest and most impactful way that organizations can leverage their current team for data science is to implement a data science automation platform,” said Dr. Ryohei Fujimaki, who is the founder and CEO of dotData. “Data science automation significantly simplifies tasks that formerly could only be completed by data scientists, and enables existing resources — such as business analysts, BI engineers and data engineers — to execute data science projects through a simple GUI operation. Automation of the full data science process, from raw business data through data and feature engineering through machine learning, is enabling enterprises to build effective data science teams with minimal costs, using their current talent.”

Now this does not mean that a platform is a panacea – there still need to be qualified data scientists. But it does bring far more efficiency and scale to AI projects.

“If organizations have data scientists already, an automation platform frees up highly-skilled resources from many of the manual and time-consuming efforts involved, and allows them to focus on more complex and strategic analysis,” said Ryohei. “This empowers data scientists to achieve higher productivity and drive greater business impact than ever before.”

Reskilling: If you currently have employees who are business analysts or have experience with data engineering, then they could be good candidates to train for AI tasks. This would include focusing on skills like Python and TensorFlow, which is a deep learning framework.
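
As a flavor of the starter exercises such a retrained analyst might work through, here is a linear model fit by gradient descent in plain Python — the kind of computation frameworks like TensorFlow automate at scale (the data and learning rate here are invented).

```python
# Starter reskilling exercise: fit y = w * x by gradient descent.
# Plain-Python stand-in for what a framework like TensorFlow does
# automatically on much bigger models.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # y = 2x exactly

w = 0.0
learning_rate = 0.05
for _ in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad

print(round(w, 3))  # converges toward 2.0
```

Exercises like this build the intuition — loss, gradient, learning rate — that carries over directly once the analyst graduates to real deep learning tooling.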

“From a training and learning perspective, there are an abundance of online resources – via Coursera, Udacity and others – that can help companies develop their employees’ AI/ML skills,” said Mehul. “Additionally, it will be valuable for a company to acquire someone with existing experience in AI to be a leader and mentor for developing employees. The interesting thing about AI/data science is that you don’t need to be an experienced software engineer to do it. The field is so exciting because of the diversity of talent and backgrounds spanning science, engineering, and economics.”

But the training should not just be for a small group of people. It should be company-wide. “Without a data-driven culture and mindset, data science and AI cannot be truly implemented,” said Ryohei. “It is important for enterprise leaders and business teams to understand how to best work with the data science team to meet the organization’s key business objectives. While the business stakeholders do not need to be data experts, they need to know ‘How to use’ AI and ‘How it changes their businesses.’”