How Screenwriting Can Boost AI (Artificial Intelligence)

Scott Ganz, who is the Principal Content Designer at Intuit, did not get into the tech game the typical way. He started out as a screenwriter, working on shows like WordGirl (for which he won an Emmy) and The Muppets. “I’ve always considered myself a comedy writer at heart, which generally involves a lot of pointing out what’s wrong with the world to no one in particular and being unemployed,” he said.

Yet his skills have proven quite effective and valuable, as he has helped create AI chatbots for brands like Mattel and Call of Duty. Today, he works on QuickBooks Assistant, a text-based conversational AI platform. “I partner with our engineering and design teams to create an artificial personality that can dynamically interact with our customers, and anticipate their questions,” he said. “It also involves a lot of empathy. It’s one thing to anticipate what someone might ask about their finances. You also have to anticipate how they feel about it.”

So what are his takeaways? Well, he certainly has plenty. Let’s take a look:

Where is chatbot technology right now? What trends do you see in the coming years?

It’s a really exciting time for chatbots and conversational user interfaces in general. They’re becoming much more sophisticated than the rudimentary “press 1 for yes” types of systems that we used to deal with… usually by screaming “TALK TO A HUMAN!” The technology is evolving quickly, and so are the rules and best practices. With those two things still in flux, the tech side and the design side can constantly disrupt each other.

Using natural language processing, we’re finding opportunities to make our technology more humanized and relatable, so people can interact naturally and make better-informed decisions about their money. For example, we analyze our customers’ utterances not only to process their requests, but also to understand the intent behind them, so we can get people information faster and with greater ease. We take “natural language processing” seriously: Intuit works hard to speak its customers’ language. You can’t expect people to say “accounts receivable.” It’s “who owes me money?”

As chatbots become more and more prevalent in our daily lives, it’s important that we’re aware of the types of artificial beings we’re inviting into our living rooms, cars, and offices. I’m excited to see the industry experiment with gender-neutral chatbots, where we can both break out of the female-assistant trope and also make the world a little more aware of non-binary individuals.
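To make the “who owes me money?” example concrete, here is a minimal sketch of an utterance-to-intent layer in Python. The intent names, training phrases and model choice are illustrative assumptions, not Intuit’s actual system:

    # A minimal sketch of intent classification: map free-form utterances
    # like "who owes me money?" to a structured intent such as
    # accounts_receivable. Intents and phrases are hypothetical examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    training_phrases = [
        ("who owes me money?", "accounts_receivable"),
        ("which invoices are unpaid?", "accounts_receivable"),
        ("how much did I spend last month?", "expense_summary"),
        ("show my spending for March", "expense_summary"),
    ]
    texts, intents = zip(*training_phrases)

    # TF-IDF features plus a linear classifier: a deliberately simple
    # stand-in for the language-understanding layer of a production bot.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, intents)

    print(model.predict(["does anyone owe me money"])[0])  # -> accounts_receivable

A production system would use far richer natural language understanding, but the mapping from informal phrasing to a structured intent is the core idea.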

Can you explain your ideas about screenwriting, improv, and chatbots?

When I joined the WordGirl writing team, my first task was to read the show “bible” that the creators had written. Yes, “bible” is the term they use. WordGirl didn’t have a central office or writers’ room. We met up once a year to pitch ideas and, after that, we worked remotely. Therefore, the show bible was essential when it came to getting us all on the same page creatively.

Once I got to Intuit, I used those learnings to create a relationship bible to make sure everyone was on the same page about our bot’s personality and its relationship with our customers. Right now my friend Nicholas Pelczar and I are the only two conversation designers at QuickBooks, but we don’t want it to stay that way. As the team expands, documents like this become even more important.

As a side note, I originally called the document “the character bible,” but after my team had a coaching session with Scott Cook, who is the co-founder of Intuit, I felt that the document wasn’t customer-focused enough. I then renamed it the “relationship bible” and redid the sections on the bot’s wants, needs, and fears so that they started with the customer’s wants, needs, and fears. Having established those, I then spelled out exactly why the bot wants and fears those same things.

I also got to apply my improv experience to make the bot more personable. Establishing the relationship is the best first thing you can do in an improv scene. The Upright Citizens Brigade is all about this. It increases your odds of having your scene make sense. Also, comedy is hard, and I wanted to give other content designers some easy recipes to write jokes. Therefore, the last section of the relationship bible has some guidance on the kinds of jokes the bot would and wouldn’t/shouldn’t make, as well as some comedy tropes designers can use.

What are some of the best practices? And the gotchas?

On a tactical level, conversation design hinges on communicating clearly with people so that the conversation itself doesn’t slip off the rails. When this happens in human conversation, people are incredibly adept at getting things back on track. Bots are much less nuanced and flexible, and they really struggle with this. Therefore, designers need to be incredibly precise in how they ask and answer questions. It’s really easy to mislead people about what the bot is asking or what it’s capable of answering.

Beyond that, it’s really important to track the emotional undercurrents of the conversation. It may be an AI, but it’s dealing with people (and all of their emotions). AI has its strengths, but design requires empathy, and this is absolutely the case when you’re engaging with people in conversation. Our emotions are a huge part of what we’re communicating, and true understanding requires that we understand that side of it as well. Even though a chatbot is not a person, it is made by real people, expressing real care, in a way that makes it available to everyone, all the time. It’s important to keep this in mind when working on a chatbot, so that the voice remains authentic.

One of the biggest things we need to take into account is ensuring the chatbot is a champion for the company while still remaining a little bit separated from it. Since chatbots still make mistakes, it’s important to take steps to protect the company’s reputation. This plays out in strict guidelines around the use of “I” vs. “we.” When the chatbot messes up, it uses “I,” and it only uses “we” in moments when it is working in conjunction with humans at the company. By keeping these two pronouns distinct, customers can better distinguish the work of the chatbot from the work of the company. Essentially, we think of QuickBooks Assistant as a kind of digital employee.
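As a toy illustration of how such a voice rule might be encoded in a response layer (the message keys and copy are hypothetical, not QuickBooks Assistant’s actual implementation):

    # A sketch of the "I" vs. "we" rule: "I" when the bot acts (or errs)
    # alone, "we" only when humans at the company are involved.
    RESPONSES = {
        "bot_error": "Sorry, I didn't understand that. Could you rephrase?",
        "bot_answer": "I found 3 unpaid invoices totaling $1,250.",
        "human_handoff": "We're connecting you with a specialist now.",
    }

    def respond(event: str) -> str:
        # Fall back to first person: the bot, not the company, owns the miss.
        return RESPONSES.get(event, "Sorry, I can't help with that yet.")

    print(respond("bot_error"))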

Bias: The Silent Killer Of AI (Artificial Intelligence)

When it comes to AI (Artificial Intelligence), there’s usually a major focus on using large datasets, which allow for the training of models. But there’s a nagging issue: bias. What may seem like a robust dataset could instead be highly skewed, such as in terms of race, wealth and gender.

Then what can be done? Well, to help answer this question, I reached out to Dr. Rebecca Parsons, who is the Chief Technology Officer of ThoughtWorks, a global technology company with over 6,000 employees in 14 countries. She has a strong background in both the business and academic worlds of AI.

So here’s a look at what she has to say about bias:

Can you share some real-life examples of bias in AI systems and explain how it gets there?

It’s a common misconception that the developers responsible for infusing bias into AI systems are either prejudiced or acting out of malice—but in reality, bias is more often unintentional and unconscious in nature. AI systems and algorithms are created by people with their own experiences, backgrounds and blind spots, which can unfortunately lead to the development of fundamentally biased systems. The problem is compounded by the fact that the teams responsible for the development, training, and deployment of AI systems are largely not representative of society at large. According to a recent research report from NYU’s AI Now Institute, women comprise only 10% of AI research staff at Google, and only 2.5% of Google’s workforce is black. This lack of representation leads to biased datasets and, ultimately, algorithms that are much more likely to perpetuate systemic biases.

One example that demonstrates this point well is voice assistants like Siri or Alexa, which are trained on huge databases of recorded speech that are unfortunately dominated by speech from white, upper-middle-class Americans—making it challenging for the technology to understand commands from people outside that category. Additionally, studies have shown that algorithms trained on historically biased data have significant error rates for communities of color, especially in over-predicting the likelihood that convicted criminals will reoffend, which can have serious implications for the justice system.

How do you detect bias in AI and guard against it?

The best way to detect bias in AI is by cross-checking the algorithm you are using to see if there are patterns you did not necessarily intend. Correlation does not always mean causation, and it is important to identify patterns that are not relevant so you can amend your dataset. One way you can test for this is by checking for under- or overrepresentation in your data. If you detect a bias in your testing, then you must overcome it by adding data to correct that underrepresentation.
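Here is a minimal sketch of that representation check; the attribute name, threshold and data are hypothetical:

    # Count how a sensitive attribute is distributed in a dataset and
    # flag groups that fall below an illustrative representation floor.
    from collections import Counter

    samples = [
        {"speaker_group": "group_a"}, {"speaker_group": "group_a"},
        {"speaker_group": "group_a"}, {"speaker_group": "group_b"},
    ]

    counts = Counter(s["speaker_group"] for s in samples)
    total = sum(counts.values())
    for group, n in counts.items():
        share = n / total
        flag = "  <-- underrepresented?" if share < 0.30 else ""
        print(f"{group}: {n} samples ({share:.0%}){flag}")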

The Algorithmic Justice League has also done interesting work on how to correct biased algorithms. They ran tests on facial recognition programs to see if they could accurately determine race and gender. Interestingly, lighter-skinned males were almost always correctly identified, while 35% of darker-skinned females were misidentified. The reason? One of the most widely used facial-recognition training datasets was estimated to be more than 75% male and more than 80% white. Because the model had seen far more definitive and distinguishing examples of white male faces than of any other group, it was better at correctly identifying those individuals than others. In this instance, the fix was quite easy: programmers added more faces to the training data and the results quickly improved.
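A short sketch of the kind of disaggregated evaluation that surfaces such gaps (the records below are toy data, not the study’s):

    # Compute error rates per demographic group rather than one overall
    # accuracy number, which can hide large gaps between groups.
    from collections import defaultdict

    records = [
        # (group, true_label, predicted_label)
        ("lighter_male", "male", "male"),
        ("lighter_male", "male", "male"),
        ("darker_female", "female", "male"),    # a misidentification
        ("darker_female", "female", "female"),
    ]

    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += (truth != pred)

    for group in totals:
        print(f"{group}: error rate {errors[group] / totals[group]:.0%}")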

While AI systems can get quite a lot right, humans are the only ones who can look back at a set of decisions and determine whether there are any gaps in the datasets or oversights that led to a mistake. This exact issue was documented in a study where a hospital was using machine learning to predict the risk of death from pneumonia. The algorithm came to the conclusion that patients with asthma were less likely to die from pneumonia than patients without asthma. Based on this data, hospitals could decide that it was less critical to hospitalize patients with both pneumonia and asthma, given that those patients appeared to have a higher likelihood of recovery. However, the algorithm overlooked another important insight: patients with asthma typically receive faster and more intensive care than other patients, which is why they have a lower mortality rate from pneumonia. Had the hospital blindly trusted the algorithm, it may have incorrectly assumed that it’s less critical to hospitalize asthmatics, when in reality they require even more intensive care.

What could be the consequences to the AI industry if bias is not dealt with properly?

As detailed in the asthma example, if biases in AI are not properly identified, the stakes can quite literally be life and death. The use of AI in areas like criminal justice can also have devastating consequences if left unchecked. Another less-talked-about consequence is the potential for more regulation and lawsuits surrounding the AI industry. Real conversations must be had around who is liable if something goes terribly wrong. For instance, is it the doctor who relies on the AI system that made the decision resulting in a patient’s death, or the hospital that employs the doctor? Is it the AI programmer who created the algorithm, or the company that employs the programmer?

Additionally, the “witness” in many of these incidents cannot even be cross-examined since it’s often the algorithm itself. And to make things even more complicated, many in the industry are taking the position that algorithms are intellectual property, therefore limiting the court’s ability to question programmers or attempt to reverse-engineer the program to find out what went wrong in the first place. These are all important discussions that must be had as AI continues to transform the world we live in.

If we allow this incredible technology to continue to advance but fail to address questions around biases, our society will undoubtedly face a variety of serious moral, legal, practical and social consequences. It’s important we act now to mitigate the spread of biased or inaccurate technologies.

What Do VCs Look For In An AI (Artificial Intelligence) Deal?

Recently SoftBank Group launched its latest fund, Vision Fund 2, which has $108 billion in assets. The focus is primarily on making investments in AI (Artificial Intelligence). No doubt, the fund will have a huge impact on the industry.

But of course, VCs are not the only ones ramping up their investments. So are mega tech companies. For example, Microsoft has announced a $1 billion equity stake in OpenAI (here’s a post I wrote for Forbes.com on the deal).

The irony is that—until a decade ago—AI was mostly a backwater that had suffered several winters. But with the surge in Big Data, innovations in academic research and improvements in GPUs, the technology has become a real force.

“I’m convinced that the current AI revolution will be the largest technology trend driving innovation in enterprise software over the next decade,” said Jeremy Kaufmann, who is a Principal at ScaleVP. “The magnitude of this trend will be at least as large as what we observed with cloud software overtaking on-prem software over the last two decades, which created over 200 billion dollars in value. While certain subfields like autonomous driving may be overhyped with irrational expectations on timing, I would argue that progress in the discipline has actually exceeded expectations since the deep learning revolution in 2012.”

Given all this, many AI startups are springing up to capitalize on the megatrend. So what are some of the factors that improve the odds of getting funding?

Well, in the AI wave, things may be different from what we saw with the cloud revolution.

“Unlike the shift from on-prem software to SaaS, though, progress in AI will not rewrite every business application,” said Kaufmann. “For example, SaaS companies like Salesforce and Workday were able to get big by fundamentally eating old-school on-prem vendors like Oracle and ADP. In this AI revolution, however, do not expect a new startup to displace an incumbent by offering an ‘AI-first’ version of Salesforce or Workday, as AI does not typically replace a core business system of record. Rather, the most logical area for AI to have an impact in the world of enterprise software will be to sit on top of multiple systems of record where it can act as a system of prediction. I am excited about conversations around the likelihood of sales conversion, the most effective marketing channels, the probability that a given credit card transaction is fraudulent, and whether a particular website visitor might be a malicious bot.”
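To illustrate the “system of prediction” idea, here is a minimal sketch of a model scoring records pulled from an existing system of record. The feature names, data and model choice are assumptions for illustration:

    # AI as a "system of prediction" layered on a system of record:
    # score existing CRM records for the likelihood of sales conversion.
    from sklearn.linear_model import LogisticRegression

    # Features pulled from the system of record: (num_meetings, deal_size_usd)
    X = [[1, 5_000], [6, 20_000], [2, 8_000], [8, 45_000]]
    y = [0, 1, 0, 1]  # 1 = the deal closed

    model = LogisticRegression().fit(X, y)

    new_lead = [[5, 30_000]]
    print(f"P(conversion) = {model.predict_proba(new_lead)[0][1]:.2f}")

The prediction layer reads from the system of record rather than replacing it, which is the architectural point Kaufmann is making.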

Data, The Team And Business Focus

An AI startup will also need a rock-solid data strategy, which allows for training of the models. Ideally this would mean having a proprietary source.

“At the highest level, a lot of our diligence comes down to who has proprietary data,” said Kaufmann. “In every deal, we ask, ‘Does this startup have domain-specific understanding and data or could a Google or Amazon sweep in and replicate what they do?’ It’s one of the biggest challenges for startups—in order to succeed, you need a data set that can’t easily be replicated. Without the resources of a larger, more established company, that’s a very big challenge—but it can be achieved with a variety of hacks, including ‘selling workflow first, AI second,’ scraping publicly available data for a minimum viable product, incentivizing your customers to share their data with you in return for a price discount, or partnering with the relevant institutions in the field who possess the key data in question.”

And even when you have robust data, there remain other challenges, such as with tagging and labeling.  There are also inherent problems with bias, which can lead to unintended consequences.

All this means that—as with any venture opportunity—the team is paramount. “We look at the academic backgrounds,” said Rama Sekhar, who is a partner at Norwest Venture Partners. “We also like a team that has worked with models at scale, say from companies like Google, Amazon or Apple.”

But for AI startups, there can often be too much focus on the technology, which can stymie progress. “Some of the red flags for me are when a pitch does not define a market target clearly or there is no differentiation,” said David Blumberg, who is the Founder and Managing Partner of Blumberg Capital. “Startups usually fail not because of the technology but because of a lack of product-market fit.”

Microsoft Bets $1 Billion On The Holy Grail Of AI (Artificial Intelligence)

Microsoft is back to its winning ways. And with its surging profits, the company is looking for ways to marshal its enormous resources to keep up the momentum. Perhaps one of the most important recent deals is a $1 billion investment in OpenAI. The goal is to build a next-generation AI platform that is not only powerful but ethical and trustworthy.

Founded in 2015, OpenAI is one of the few companies (along with Google’s DeepMind) focused on AGI (Artificial General Intelligence), which is really the Holy Grail of the AI world. According to a blog post from the company: “AGI will be a system capable of mastering a field of study to the world-expert level, and mastering more fields than any one human — like a tool which combines the skills of Curie, Turing, and Bach. An AGI working on a problem would be able to see connections across disciplines that no human could. We want AGI to work with people to solve currently intractable multi-disciplinary problems, including global challenges such as climate change, affordable and high-quality healthcare, and personalized education. We think its impact should be to give everyone economic freedom to pursue what they find most fulfilling, creating new opportunities for all of our lives that are unimaginable today.”

It’s a bold vision. But OpenAI has already made major advances in AI. “It has pushed the limits of what AI can achieve in many fields, but there are two in particular that stand out,” said Stephane Rion, who is the Senior Deep Learning Specialist of Emerging Practices at Teradata. “The first one is reinforcement learning, where OpenAI has driven some major research breakthroughs, including designing an AI system capable of defeating most of its human challengers in the video game Dota 2. This project doesn’t just show the promise of AI in the video game industry, but how reinforcement learning can be used for numerous other applications such as robotics, retail and manufacturing. OpenAI has also made some major advances in the area of Natural Language Processing (NLP), specifically in unsupervised learning and attention mechanisms. These can be used to build systems that achieve many language-based tasks such as translation, summarization and generation of coherent paragraphs of text.”

But keep in mind that AI is still fairly weak, with applications for narrow use cases. The fact is that AGI is likely years away. “In fact, we’re far from machines learning basic things about the world in the same way animals can,” said Krishna Gade, who is the CEO and Co-founder at Fiddler Labs. “It is true, machines can beat humans on some tasks by processing tons of data at scale. However, as humans, we don’t understand or approach the world this way — by processing massive amounts of labeled data. Instead, we use reasoning and predictions to infer the future from available information. We’re able to fill in gaps, extrapolate things with common sense and work with incomplete premises — which machines simply can’t do yet. So while it is interesting, I do believe that we’re quite far from AGI.”

Guy Caspi, who is the CEO of Deep Instinct, agrees on this. “While deep learning has drastically improved the state-of-the-art in many fields, we have seen little progress in AGI,” he said. “A deep learning model trained for computer vision cannot suddenly learn to understand Japanese, and vice versa.”

Regardless, with OpenAI and Microsoft willing to take on tough challenges, there will likely be an acceleration of innovation and breakthroughs, providing benefits in the near term. “I love to see a company like Microsoft, which the market has rewarded with enormous returns on capital, making investments that aim to benefit society,” said Dave Costenaro, who is the head of artificial intelligence R&D at Jane.ai. “Funding OpenAI, creating more open source content, and sponsoring their ‘AI for Earth’ grants are all examples that Microsoft has recently spun up in earnest. Microsoft is not a charity, however, so the smart money will take this as a signal of how strategically important these issues are.”

What’s more, the emphasis on ethics will also be impactful. As AI becomes pervasive, there needs to be more attention to the social implications.  We are already seeing problems, such as with bias and deepfakes.

“With the fast pace of AI innovation, we’re continually encountering new, largely unforeseen ethical implications,” said Alexandr Wang, who is the CEO of Scale. “In the absence of traditional governing bodies, creating ethical guidelines for AI is both a shared and a perpetual responsibility.”

AI (Artificial Intelligence): An Extinction Event For The Corporate World?

Back in 1973, Daniel Bell published a pioneering book called The Coming of Post-Industrial Society. He described how society was undergoing relentless change, driven by rapid advances in technology. Services would become more important than goods. There would also be the emergence of a “knowledge class,” and gender roles would evolve. According to Bell: “The concept of the post-industrial society deals primarily with changes in the social structure, the way in which the economy is being transformed and the occupational system reworked, and with the new relations between theory and empiricism, particularly science and technology.”

Among the many readers of the book was a young Tom Siebel. And yes, it would rock his world. It’s what inspired him to enroll at the graduate school of engineering at the University of Illinois and get a degree in Computer Science. He would then join Oracle during the 1980s, where he helped lead the wide-scale adoption of relational databases. Then in 1993, he started his own business, called Siebel Systems, which capitalized on the Internet wave, and after this, he launched C3.ai, a top enterprise AI software provider.

No doubt, he has a great view of tech history (during his career, the IT business has gone from $50 billion to $4 trillion). But he also has an uncanny knack for anticipating major trends and opportunities. Bell’s book was certainly a big help. Keep in mind that Bell said the book was essentially about “social forecasting.”

OK then, so what now for Siebel? Well, he has recently published his own book: Digital Transformation: Survive and Thrive in an Era of Mass Extinction.

In it, he contends that technology is at an inflection point. While the PC and Internet eras were mostly about streamlining procedures and operations, the new era is much different. It’s about the convergence of four megatrends: cloud computing, big data, artificial intelligence (AI) and the IoT (Internet of Things). Together, these are making systems and platforms increasingly smarter.

Siebel provides various case studies to show what’s already being done, such as:

  • Royal Dutch Shell: The oil giant has created an AI app that analyzes some 500,000 refinery assets across the globe to enable predictive maintenance.
  • Caterpillar: With an iPhone, a customer can easily get the vitals on a tractor. Caterpillar has also leveraged telemetry to monitor assets in order to predict failures.
  • 3M: The company is leveraging AI to anticipate complaints from invoices.

All these initiatives have had a major impact, resulting in hundreds of millions in savings. Of course, this does not include the positive impact from improved customer service, safety and environmental protection.

In fact, Siebel believes that if companies do not engage in digital transformation, the prospects for success will be bleak. This means that CEOs must lead the process and become much more knowledgeable about technology. In other words, all CEOs will need to deeply rethink their businesses.

Now there will definitely be challenges and resistance. Many companies simply lack the talent to transition towards next-generation technologies like AI. Even worse, the current technologies are often scattered, disconnected and complex.

Yet Siebel does point out that large companies have inherent advantages. One is that there is often a large amount of quality data, which can be a powerful moat against the competition. What’s more, large companies have the resources, distribution and infrastructure to scale their efforts.

But for the most part, CEOs can no longer wait as the pace of change is accelerating.

According to Siebel:  “New business models will emerge.  Products and services unimaginable today will be ubiquitous.  New opportunities will abound.  But the great majority of corporations and institutions that fail to seize this moment will become footnotes in history.”

Deepfake: What You Need To Know

During the 1970s and 1980s, Memorex ran a string of successful commercials about the high quality of their audio cassettes. The tag line was: “Is it live, or is it Memorex?”

Yes, it seems kind of quaint nowadays. After all, in today’s world of AI (Artificial Intelligence), we may have a new catch phrase: “Is it real, or is it deepfake?”

The word deepfake has been around for only a couple of years. It is a combination of “deep learning” – a subset of AI that uses neural networks – and “fake.” The upshot is that it’s now possible to manipulate videos in ways that still look authentic.

During the past couple of weeks, we have seen high-profile examples of this. There was a deepfake of Facebook’s Mark Zuckerberg, in which he seemed to be talking about world domination. Then there was another of House Speaker Nancy Pelosi, in which she appeared to be slurring her speech (this one actually used a less sophisticated technique known as a “cheapfake”).

Congress is getting concerned, especially in light of the upcoming 2020 election. This week the House Intelligence Committee held a hearing on deepfakes. Still, it looks unlikely that much will be done.

“The rise of deepfakes on social media is a series of cascading issues that will have real consequences around our concept of freedom of speech,” said Joseph Anthony, who is the CEO of Hero Group. “It’s extremely dangerous to manipulate the truth when important decisions hang in the balance, and the stakes are high across the board. Viral deepfake videos don’t just damage the credibility of influential people like politicians, brands and celebrities; they could potentially cause harm to our society by affecting stock prices or global policy efforts. Though some people are creating them for good fun and humor, experimenting with this technology is like awakening a sleeping giant. It goes beyond goofing off, into manipulative and malicious territory.”

Now it’s certainly clear that deepfake technology will get better and better. And over time, this may make it difficult to really know what’s true, which could have a corrosive impact.

It’s also important to keep in mind that it is getting much easier to develop deepfakes. “They take the threat of fake news even higher as seemingly anyone can now have the ability to literally and convincingly put words in someone else’s mouth,” said Gil Becker, who is the CEO of AnyClip.

So what can be done to combat deepfakes? Well, one approach is to build a delay into social networks so videos can be evaluated – say, by leveraging sophisticated AI/ML – before they go viral. To this end, Anthony recommends a form of watermarking.
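As a toy illustration of the authentication idea (not a description of any platform’s actual system; the key and workflow are hypothetical), a platform could sign video bytes at upload and verify the signature before wide distribution:

    # Sign video bytes with a secret key at upload; verify before
    # distribution. Real watermarking is far more involved; this only
    # illustrates the authenticate-then-amplify idea.
    import hashlib
    import hmac

    SECRET_KEY = b"hypothetical-platform-key"

    def sign(video_bytes: bytes) -> str:
        return hmac.new(SECRET_KEY, video_bytes, hashlib.sha256).hexdigest()

    def verify(video_bytes: bytes, signature: str) -> bool:
        return hmac.compare_digest(sign(video_bytes), signature)

    original = b"...raw video bytes..."
    tag = sign(original)

    print(verify(original, tag))             # True: untouched since signing
    print(verify(original + b"edit", tag))   # False: altered after signing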

“Whichever way that authentication is developed technologically, it’s clear this is the kind of investment that will cost a ton of money, but it has to be done,” he said. “Silicon Valley and all the tech companies are all about growing fast and keeping their cash flow in the positive. I expect they’ll continue to fight back on making these investments in security.”

Yet despite all this, the fears about deepfakes may still be overblown. If anything, the recent examples of Zuckerberg and Pelosi may serve as a wake-up call to spur constructive approaches.

“Currently, there is a lot of sensationalism on the use and implications of deepfakes,” said Jason Tan, who is the CEO of Sift. “It is also very much fear-based. Even the word sounds sinister or malicious, when really, it is ‘hyper-realistic.’ Deepfakes can provide innovation in the market and we shouldn’t blatantly dismiss the technology as all bad. We should be looking at the potential benefits of it as well.”

Looker + Google: How The Deal Will Rock The BI World

Even with heavy investments and continued innovation, Google’s cloud business remains a disappointing No. 3 in the market. But the new head of the division, Thomas Kurian, is taking a bold step to change things up – that is, shelling out $2.6 billion for Looker. This is actually the third-largest acquisition in Google’s history.

“It’s a very smart deal for Google,” said Jake Stein, who is an SVP of Stitch at Talend. “It’s a great indicator and testament to the value of technologies embracing a multi-cloud future.”

The founders of Looker, back in 2011, set out to solve a major problem: as companies were accumulating enormous amounts of data – from business apps, social networks, and the IoT – there was a need for much better ways to gain real-time insights. Many of these companies were hamstrung by sprawling IT systems. The result was often the creation of brittle “Frankenstacks,” which usually became unworkable.

Looker’s answer was a next-generation BI (Business Intelligence) platform. At the heart of this was the ability to connect directly to data warehouses – say, Amazon’s Redshift, Snowflake, Google’s BigQuery or Microsoft’s SQL Server – without having to dump the information into the vendor’s own datastore. Looker also created its own modeling language, called LookML, which meant there was no need to write complex SQL statements. Because of these differentiators, the company was able to get lots of traction (there are currently more than 1,700 customers).
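To give a feel for the approach, here is a Python sketch of the semantic-layer idea behind a language like LookML. This is not LookML syntax; the model definition and table names are hypothetical:

    # Declare business logic once; compile questions into SQL on demand.
    MODEL = {
        "table": "orders",
        "dimensions": {"customer": "customer_name", "month": "order_month"},
        "measures": {"total_revenue": "SUM(amount)", "order_count": "COUNT(*)"},
    }

    def compile_query(dimension: str, measure: str) -> str:
        dim_col = MODEL["dimensions"][dimension]
        meas_expr = MODEL["measures"][measure]
        return (f"SELECT {dim_col}, {meas_expr} AS {measure} "
                f"FROM {MODEL['table']} GROUP BY {dim_col}")

    # "Revenue by customer" without hand-writing SQL:
    print(compile_query("customer", "total_revenue"))

The point is that analysts ask questions against the declared model, and the tool generates the SQL against the warehouse.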

The Deal

Keep in mind that there are strong synergies between Looker and Google. Both are cloud-native operators and have roughly 350 customers in common like WPP, Hearst and BuzzFeed. And going forward, there is considerable potential for the evolution of the platform. For example, Google has an impressive set of AI (Artificial Intelligence) and ML (Machine Learning) capabilities that are likely to be integrated.

“The pace and scale at which analytics workloads are moving to the cloud has been unrelenting and the size of Google Cloud’s acquisition of Looker is a prime example of this market shift,” said Adam Wilson, who is the CEO of Trifacta. “The early days of cloud computing were driven by developers building and hosting applications on the cloud but today’s growth opportunities in cloud are focused on analytics, machine learning and AI.”

The Google/Looker deal is also likely to shake up the industry in a big way. “Google’s acquisition of Looker is validation that the age of pure-play visualization tools is over,” said Ajeet Singh, who is the co-founder and executive chairman of ThoughtSpot. “Businesses need agility to compete in a digital world and the older generation of tools like Tableau and Qlik are holding them back.”

The Risks

But of course, M&A can be dicey. Let’s face it: there are many examples of acquisitions that fall apart – and open up new opportunities for rivals. There is also the risk that Google may be tempted to find ways to lock in customers, such as by limiting certain features to BigQuery. Oh, and regulatory approval of the acquisition should not be taken for granted either. It appears that the federal government is exploring an antitrust investigation of Google.

And finally, there may even be an opportunity for open source alternatives, which could be disruptive.

“Overall we believe that anyone should be able to make sense of data without having to write complex business queries,” said Danielle Morrill, who is the General Manager of Meltano at GitLab. “With the acquisition of Looker, there is a lot of conversation about the open source data analytics space. As Looker is proprietary, we believe this is the right time to develop an open source alternative that helps users define re-usable business logic to allow everyday people to consume data for business purposes.”

3 Ways To Transform The Supply Chain With AI (Artificial Intelligence)

JDA Software and KPMG LLP recently published a wide-ranging survey regarding supply-chain technology. The main takeaway: end-to-end visibility is the No. 1 priority. But in order to make this a reality, the survey also notes that AI (Artificial Intelligence), machine learning (ML) and cognitive analytics will be critical.

Yet pulling this off is far from easy and fraught with risks. So what to do? Well, I recently had a chance to talk to Dr. Michael Feindt. A physicist by education, he has used his strong mathematical skills to focus on AI. He developed the NeuroBayes algorithm while at CERN and founded Blue Yonder in 2008 to apply his theories to supply-chain management. And yes, the company got lots of traction, as the platform would eventually deliver 600 million intelligent, automated decisions every day. Then in 2018, JDA Software acquired Blue Yonder.

No doubt, when it comes to applying AI and the supply chain, Michael is definitely someone to listen to.

“The self-learning supply chain marks the next major frontier of supply chain innovation,” he said. “It’s a futuristic vision of a world in which supply chain systems, infused with AI and machine learning (ML), can analyze existing strategies and data to learn what factors lead to failures. Because of recent advancements in technology, the autonomous supply chain is no longer ‘blue-sky thinking.’”

OK then, so let’s take a look at some of his recommendations:

The System Must Read Signals and Manage Billions of Pieces of Information: You need to process as many signals as possible to get a complete picture, such as weather events, temperatures, social trends and so on. For example, by using weather forecasts and port congestion data, it’s possible to predict the impact on freighters en route and determine which shipments will be late — and the captain may not even know what’s happening!

Or take another example: Let’s say an ice storm halts traffic on I-75 in Ohio. By using AI signals, you can answer questions like: What are the possible transit alternatives, and how much added time or cost will each involve? How will expediting some deliveries during the storm affect the rest of the supply chain?
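Here is a minimal sketch of that kind of signal fusion; the routes, weights and thresholds are hypothetical, chosen only to show the shape of the computation:

    # Fuse external signals (weather, port congestion) with shipment data
    # to flag deliveries at risk of delay.
    shipments = [
        {"id": "S1", "route": "I-75", "eta_hours": 12},
        {"id": "S2", "route": "I-80", "eta_hours": 30},
    ]
    signals = {
        "I-75": {"ice_storm": True, "port_congestion": 0.2},
        "I-80": {"ice_storm": False, "port_congestion": 0.7},
    }

    def delay_risk(route: str) -> float:
        s = signals[route]
        # Toy weighting: storms dominate, congestion contributes.
        return 0.8 * s["ice_storm"] + 0.5 * s["port_congestion"]

    for shipment in shipments:
        risk = delay_risk(shipment["route"])
        status = "AT RISK" if risk > 0.5 else "on track"
        print(f"{shipment['id']} ({shipment['route']}): risk={risk:.2f} -> {status}")

A production system would replace the hand-set weights with a learned model over many more signals, but the flow — ingest signals, score shipments, surface exceptions — is the same.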

The System Must Look Into The Future: Rules-based approaches are too brittle to provide solid forecasts. In fact, these systems may do more harm than good.

“To help companies draw the right conclusions from the data they gather,” said Michael, “businesses need to apply ML and AI technology designed to grasp the oncoming impacts of what’s happening everywhere in the moment and predict how demand and supply will look in the future. That means having algorithms that can evolve over time.”

He points to the following: Suppose you are doing assortment planning in a retail business. The traditional approach is to forecast sales based on prior history and trends. “A retailer may always send one style of athletic shoe to the Midwest because they know the sales history and the product does well there,” said Michael. “But with ML and AI, there is now the ability to blend external and internal data to predict demand and areas for growth. If retailers take an index and predict where customers are most concentrated, that data can help them figure out where to ship the athletic shoe to maximize their sales.”

The Technology Must Overcome Human Nature: So long as the data is correct and the algorithms appropriate, an AI system will learn and react to ensure that orders and price points remain in line with a probability that keeps a business both stocked and efficient.

“However, as humans, our instinct is to fix things ourselves, especially if it’s an area we have been tasked with overseeing,” said Michael. “The autonomous supply chain requires us to discard pride, ego and personal bias and trust the technology. As trust in the system’s recommendations increases, a greater and greater portion of decisions can be made automatically by the system, without human intervention. This will allow the professionals to focus their time and effort on problems that only they can solve.”

Robots + AI: Boring Is Beautiful

During the past year, there have been major implosions of robot startups, such as with Jibo, Anki and Rethink Robotics.  They all raised substantial amounts of capital from top-tier investors and had strong teams.

So why the failures? One of the main reasons is the extreme complexity of melding software with movable hardware. As a result, the technology often does not live up to expectations.

Even with the strides in AI — such as deep learning — there is still much to do. “Deep learning and robotics are difficult for a variety of reasons,” said Carmine Rimi, who is a Product Manager for AI at Canonical. “For instance, simultaneous localization and mapping (SLAM) – constructing a map of an unknown environment while simultaneously keeping track of an agent’s location within it – in tractable time is a challenge. In real time it is at least an order of magnitude more difficult. Research into advanced algorithms that deliver better accuracy, faster and at lower power consumption, along with quantum-like parallel states and processing, are some of the areas that will help. And this is part of why it is difficult.”

But there is much more than this. Dr. Alex Wong, who is the Chief Scientist and co-founder of DarwinAI, has this to say: “One of the primary difficulties with AI in this context is that learning to manipulate physical objects with a high level of dexterity in dynamic and ‘noisy’ real-world environments is extremely challenging, as it must take into account an incredible number of environmental factors to make complex decisions in real-time. Additional complexities in this area are issues associated with ‘data sparsity’ and training speed.”

And finally, AI is still fairly narrow. The fact is that we are years away from some type of general intelligence. “The challenge of replicating the capabilities of a human being – whether on a production line or in a medical facility – is very difficult,” said Ran Poliakine, who is the co-founder of Musashi AI. “For example, the ability to imitate the function of the brain when looking at an image is incredibly complicated. This is why, until now, even with all of the robotics and advanced hardware available, the ability to make a decision or imitate a human reaction was nearly impossible.”

Now all this is not to imply that the situation is hopeless. If anything, there are enormous opportunities with robots and AI. Yet there must be different approaches to the technology, especially when compared to software-only AI.

So what to do? Well, Erik Schluntz, who is the CTO of Cobalt Robotics, is someone who has been able to find lots of success – primarily because his approach is not about achieving moonshots. His company develops robots that provide security services in the workplace.

When Schluntz started Cobalt, he first talked to a range of companies across several industries so as to find real-world problems to focus on. “We did not want to come up with an idea in a vacuum,” he said.

But the Cobalt robot would not be a replacement for people. “Marrying the benefits of robotics with the unique capabilities of humans means creating something that is greater than the sum of its parts,” said Schluntz. “The reliability of robots for tasks that require unwavering attention or precise repetition is unmatched. When you expand the capabilities of a robot by integrating its work with that of a human for flexible decision-making, you’re enabling a greater level of effectiveness for roles that are more than just the dirty, dull or dangerous. In this sense, the sweet spot for robotics applications is greatly expanded. A robot can detect leaks and spills in a building before a human does, and then work with a human to correct the anomaly. We let the robot do the dull part – tirelessly patrolling in search of water leaks – and save the interesting response for a human.”

True, it’s not necessarily sexy. But hey, Cobalt has turned into a solid company that has customers like Yelp and Slack.

“To enable the proliferation of robots and AI, robots need to be friendly, functional and easy to be around,” said Schluntz.  “Success will rely on key players in the robotics space being intellectually honest and realistic—identifying clear use-cases, demonstrating clear ROI, operating in an inherently safe and secure manner (both physical and cyber context), and creating future roadmaps.”

SF Facial Recognition Ban: What Now For AI (Artificial Intelligence)?

Recently San Francisco passed – in an 8-to-1 vote – a ban on the use of facial recognition technology by local agencies. The move is unlikely to be a one-off, either. Other local governments are exploring similar prohibitions to deal with the potential Orwellian risk that the technology will harm people’s privacy. “In the mad dash towards AI and analytics, we often turn a blind eye to their long-range societal implications, which can lead to startling conclusions,” said Kon Leong, who is the CEO of ZL Technologies.

Yet some tech companies are getting proactive. For example, Microsoft has indicated there should be regulation of facial recognition systems (although not an outright ban). The company even declined a request to sell its own technology to a police department in California.

Keep in mind that — even with the strides in AI — there are still problems with the technology. There are numerous cases where it has produced false positives.

“Before AI, the saying was: ‘Big brother is watching you, but he can’t see,’” said Stefan Ritter, who is the Chief Product Officer and co-Founder of Ruum. “In essence, it meant we had CCTV everywhere recording us, but in reality, it was not taking away our freedoms or rights — yes we were being taped, but in effect police only turned to the recordings when there was a crime severe enough to merit the many hours of painstakingly going through video recordings. However, with AI-powered facial recognition for social control, we could come dangerously close to a ‘Minority Report’-esque future, where neural networks could, in theory, recognize crimes before they happen. ‘Innocent until proven otherwise’ is one of the founding principles of the democratic state, so it’s crucial that we have a broad discussion about how we want to leverage AI in our social systems.”

Now of course, there are many benefits to facial recognition and related computer vision systems. The same underlying technology can be leveraged to identify diseases in MRIs or even to help predict crashes before they happen.

But then again, there really needs to be a focus on the unintended consequences. In fact, if there are high-profile mishaps, the result could be a stunting of AI’s progress.

“The San Francisco city government is setting a positive example by banning facial recognition technology,” said Asma Zubair, who is the Sr. Manager of IAST Product Management at Synopsys. “While it’s a good start, we must also recognize that the use of facial recognition in the private sector continues to grow. While the technology has improved greatly in recent years, there are known weaknesses when recognizing certain groups of people. As the adoption of facial recognition technology grows, raw video footage will become more easily available as structured data that includes biometric information and personally identifiable information, and it will likely be stored in a searchable format — for example, names, location, time, date and so on. This structured data may be retained for long periods of time, which makes it susceptible to breaches and misuse. With so many data privacy breaches in the headlines already, organizations clearly aren’t ready to use facial recognition in a safe, secure, and responsible manner.”

Regardless, the silver lining is that there is robust debate and some action. This is in contrast to what happened with social media, which quickly got out of control.

“Facial recognition technology is part of the broader ethical and legal debates around algorithmic transparency and integrity, including the role of human intervention in validating the outputs of algorithms where the impacts to individuals may be significant,” said Hilary Wandall, who is the SVP of privacy intelligence at TrustArc.