AI (Artificial Intelligence) Words You Need To Know

In 1956, John McCarthy set up a ten-week research project at Dartmouth College that was focused on a new concept he called “artificial intelligence.” The event included many of the researchers who would become giants in the emerging field, such as Marvin Minsky, Nathaniel Rochester, Allen Newell, O.G. Selfridge, Raymond Solomonoff, and Claude Shannon.

Yet the reaction to the phrase artificial intelligence was mixed. Did it really explain the technology? Was there a better way to word it?

Well, no one could come up with something better–and so AI stuck.

Since then, plenty of words have been coined in the category, often to describe complex technologies and systems. The result is that it can be tough to understand what is being talked about.

So to help clarify things, let’s take a look at the AI words you need to know:

Algorithm

From Kurt Muehmel, who is a VP Sales Engineer at Dataiku:

A series of computations, from the simplest (long division using pencil and paper) to the most complex. For example, machine learning uses an algorithm to process data, discover rules that are hidden in the data, and encode those rules in a “model” that can then be used to make predictions on new data.
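As a minimal sketch of that definition, the toy Python below runs a simple algorithm (least squares for a line through the origin) over example data, discovers the hidden rule, and encodes it in a reusable “model.” The data and the rule are purely illustrative.

```python
# An algorithm processes example data, discovers a hidden rule,
# and encodes it in a "model" that can predict on new data.
# (Illustrative only; real systems use far richer algorithms.)

def fit_linear_model(pairs):
    """Estimate the slope w hidden in (input, output) pairs, assuming y = w * x."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, _ in pairs)
    w = num / den  # least-squares slope for a line through the origin
    return lambda x: w * x  # the "model": the discovered rule, encoded for reuse

# The rule hidden in this toy data is y = 2x.
model = fit_linear_model([(1, 2), (2, 4), (3, 6)])
print(model(10))  # predicts on an input the algorithm never saw: 20.0
```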

Machine Learning

From Dr. Hossein Rahnama, who is the co-founder and CEO of Flybits:

Traditional programming involves specifying a sequence of instructions that dictate to the computer exactly what to do. Machine learning, on the other hand, is a different programming paradigm wherein the engineer provides examples comprising what the expected output of the program should be for a given input. The machine learning system then explores the set of all possible computer programs in order to find the program that most closely generates the expected output for the corresponding input data. Thus, with this programming paradigm, the engineer does not need to figure out how to instruct the computer to accomplish a task, provided they have a sufficient number of examples for the system to identify the correct program in the search space.
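The paradigm Rahnama describes can be sketched as a toy search over a tiny, hand-made space of candidate programs. The candidate names and examples below are illustrative assumptions; real systems search vastly larger spaces, but the principle is the same: examples in, program out.

```python
# Instead of writing the program, the engineer supplies (input, expected output)
# examples, and the system searches a space of candidate programs for the one
# that best reproduces them. This space of three candidates is a toy stand-in.
candidates = {
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

examples = [(1, 1), (2, 4), (3, 9)]  # the engineer's input/output examples

def search(candidates, examples):
    # Pick the candidate whose outputs most closely match the examples.
    def total_error(f):
        return sum(abs(f(x) - y) for x, y in examples)
    return min(candidates, key=lambda name: total_error(candidates[name]))

best = search(candidates, examples)
print(best)  # the system "finds" the squaring program from examples alone
```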

Neural Networks

From Dan Grimm, who is the VP and General Manager of Computer Vision at RealNetworks:

Neural networks are mathematical constructs that mimic the structure of the human brain to summarize complex information into simple, tangible results. Much like the human brain must be trained, for example, to control our bodies so we can walk, these networks also need to be trained with significant amounts of data. Over the last five years, there have been tremendous advancements in the layering of these networks and the compute power available to train them.
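As a hedged illustration of that training process, the sketch below trains a single artificial neuron, the smallest building block of a neural network, to reproduce the logical AND function by nudging its weights whenever it makes a mistake. Real networks stack many such neurons in layers; the data and learning rate here are illustrative.

```python
# A single neuron learns AND: adjust weights on each mistake until
# its predictions match the training data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias term
lr = 0.1        # learning rate: how much to correct per mistake

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):  # several passes over the training data
    for x, target in data:
        error = target - predict(x)  # 0 when correct, +/-1 when wrong
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # reproduces AND: [0, 0, 0, 1]
```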

Deep Learning

From Sheldon Fernandez, who is the CEO of DarwinAI:

Deep Learning is a specialized form of Machine Learning, based on neural networks that emulate the cognitive capabilities of the human mind. Deep Learning is to Machine Learning what Machine Learning is to AI–not the only manifestation of its parent, but generally the most powerful and eye-catching version. In practice, deep learning networks capable of performing sophisticated tasks are 1.) many layers deep with millions, sometimes billions, of inputs (hence the ‘deep’); 2.) trained using real world examples until they become proficient at the prevailing task (hence the ‘learning’).

Explainability

From Michael Beckley, who is the CTO and founder of Appian:

Explainability is knowing why AI rejects your credit card charge as fraud, denies your insurance claim, or confuses the side of a truck with a cloudy sky. Explainability is necessary to build trust and transparency into AI-powered software. The power and complexity of AI deep learning can make predictions and decisions difficult to explain to both customers and regulators. As our understanding of potential bias in data sets used to train AI algorithms grows, so does our need for greater explainability in our AI systems. To meet this challenge, enterprises can use tools like Low Code Platforms to put a human in the loop and govern how AI is used in important decisions.

Supervised, Unsupervised and Reinforcement Learning

From Justin Silver, who is the manager of science & research at PROS:

There are three broad categories of machine learning: supervised, unsupervised, and reinforcement learning. In supervised learning, the machine observes a set of cases (think of “cases” as scenarios like “The weather is cold and rainy”) and their outcomes (for example, “John will go to the beach”) and learns rules with the goal of being able to predict the outcomes of unobserved cases (if, in the past, John usually has gone to the beach when it was cold and rainy, in the future the machine will predict that John will very likely go to the beach whenever the weather is cold and rainy). In unsupervised learning, the machine observes a set of cases, without observing any outcomes for these cases, and learns patterns that enable it to classify the cases into groups with similar characteristics (without any knowledge of whether John has gone to the beach, the machine learns that “The weather is cold and rainy” is similar to “It’s snowing” but not to “It’s hot outside”). In reinforcement learning, the machine takes actions towards achieving an objective, receives feedback on those actions, and learns through trial and error to take actions that lead to better fulfillment of that objective (if the machine is trying to help John avoid those cold and rainy beach days, it could give John suggestions over a period of time on whether to go to the beach, learn from John’s positive and negative feedback, and continue to update its suggestions).
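The three categories above can be sketched in miniature, reusing the beach example. Every name, data point, and threshold below is illustrative; real systems replace these crude rules with learned models.

```python
# Supervised: learn outcomes from labeled cases, then predict unobserved ones.
cases = [("cold and rainy", "beach"), ("cold and rainy", "beach"), ("hot", "home")]

def predict_supervised(weather):
    outcomes = [o for w, o in cases if w == weather]
    return max(set(outcomes), key=outcomes.count)  # most frequent outcome

# Unsupervised: group unlabeled cases by similarity (a crude keyword
# overlap stands in for learned structure).
def group_unsupervised(descriptions):
    cold_words = {"cold", "rainy", "snowing"}
    return {d: ("cold-like" if cold_words & set(d.split()) else "warm-like")
            for d in descriptions}

# Reinforcement: adjust a suggestion from +1/-1 feedback over time.
score = 0
for feedback in [+1, +1, -1, +1]:  # John's reactions to past beach suggestions
    score += feedback
suggestion = "suggest beach" if score > 0 else "suggest staying home"

print(predict_supervised("cold and rainy"), suggestion)
```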

Bias

From Mehul Patel, who is the CEO of Hired:

While you may think of machines as objective, fair and consistent, they often adopt the same unconscious biases as the humans who built them. That’s why it’s vital that companies recognize the importance of normalizing data—meaning adjusting values measured on different scales to a common scale—to ensure that human biases aren’t unintentionally introduced into the algorithm. Take hiring as an example: If you give a computer a data set with 100 female candidates and 300 male candidates and ask it to predict the best person for the job, it is going to surface more male candidates because there are three times as many men as women in the data set. Building technology that is fair and equitable may be challenging but will ensure that the algorithms informing our decisions and insights are not perpetuating the very biases we are trying to undo as a society.
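The normalization step Patel mentions, rescaling values measured on different scales to a common scale, can be sketched with simple min-max normalization. The feature names and numbers below are illustrative.

```python
# Min-max normalization: rescale each feature to the 0-1 range so no
# feature dominates simply because its raw numbers are bigger.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

years_experience = [2, 5, 10]          # measured in single digits
salaries = [40_000, 70_000, 100_000]   # measured in tens of thousands

print(min_max_normalize(years_experience))  # [0.0, 0.375, 1.0]
print(min_max_normalize(salaries))          # [0.0, 0.5, 1.0]
```

Note that normalization addresses scale, not representation: a data set can be perfectly scaled and still contain three times as many men as women.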

Backpropagation

From Victoria Jones, who is the Zoho AI Evangelist:

Backpropagation algorithms allow a neural network to learn from its mistakes. The technology tracks an event backwards from the outcome to the prediction and analyzes the margin of error at different stages to adjust how it will make its next prediction. Around 70% of the features of our AI assistant (called Zia) use backpropagation, including Zoho Writer’s grammar-check engine and Zoho Notebook’s OCR technology, which lets Zia identify objects in images and make those images searchable. This technology also allows Zia’s chatbot to respond more accurately and naturally. The more a business uses Zia, the more Zia understands how that business is run. This means that Zia’s anomaly detection and forecasting capabilities become more accurate and personalized to any specific business.
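A minimal sketch of the mechanics: the toy below tracks the error backwards from the outcome to a single weight and adjusts that weight before the next prediction. This is gradient descent on one weight; full backpropagation applies the same chain-rule bookkeeping layer by layer through a deep network. The example values are illustrative.

```python
# Learn a single weight w so that w * x matches the target output,
# by repeatedly propagating the output error back to the weight.
w = 0.0               # the network's single weight
lr = 0.1              # how aggressively we correct each mistake
x, target = 2.0, 8.0  # one training example: we want w * x == 8

for _ in range(100):
    prediction = w * x
    error = prediction - target  # margin of error at the outcome
    gradient = error * x         # the error, propagated back to w
    w -= lr * gradient           # adjust how the next prediction is made

print(round(w, 3))  # w converges to ~4.0, since 4.0 * 2 == 8
```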

NLP (Natural Language Processing)

From Courtney Napoles, who is the Language Data Manager at Grammarly:

The field of NLP brings together artificial intelligence, computer science, and linguistics with the goal of teaching machines to understand and process human language. NLP researchers and engineers build models for computers to perform a variety of language tasks, including machine translation, sentiment analysis, and writing enhancement. Researchers often begin with analysis of a text corpus—a huge collection of sentences organized and annotated in a way that AI algorithms can understand.
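One of the tasks named above, sentiment analysis, can be sketched with a crude lexicon-based scorer. Real NLP systems learn these associations from large annotated corpora; the hand-written word lists below are purely illustrative.

```python
# Toy sentiment analysis: score a sentence by counting words from
# small positive and negative lexicons. (Illustrative word lists only.)
POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "hate", "awful"}

def sentiment(sentence):
    words = sentence.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("this is terrible"))           # negative
```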

The problem of teaching machines to understand human language—which is extraordinarily creative and complex—dates back to the advent of artificial intelligence itself. Language has evolved over the course of millennia, and devising methods to apprehend this intimate facet of human culture is NLP’s particularly challenging task, requiring astonishing levels of dexterity, precision, and discernment. As AI approaches—particularly machine learning and the subset of ML known as deep learning—have developed over the last several years, NLP has entered a thrilling period of new possibilities for analyzing language at an unprecedented scale and building tools that can engage with a level of expressive intricacy unimaginable even as recently as a decade ago.

How Screenwriting Can Boost AI (Artificial Intelligence)

Scott Ganz, who is the Principal Content Designer at Intuit, did not get into the tech game the typical way. Keep in mind that he was a screenwriter, such as for WordGirl (winning an Emmy for his work) and The Muppets. “I’ve always considered myself a comedy writer at heart, which generally involves a lot of pointing out what’s wrong with the world to no one in particular and being unemployed,” he said.

Yet his skills have proven quite effective and valuable, as he has helped create AI chatbots for brands like Mattel and Call of Duty. And yes, as of now, he works on the QuickBooks Assistant, a text-based conversational AI platform. “I partner with our engineering and design teams to create an artificial personality that can dynamically interact with our customers, and anticipate their questions,” he said. “It also involves a lot of empathy. It’s one thing to anticipate what someone might ask about their finances. You also have to anticipate how they feel about it.”

So what are his takeaways? Well, he certainly has plenty. Let’s take a look:

Where is chatbot technology right now? Trends you see in the coming years?

It’s a really exciting time for chatbots and conversational user interfaces in general. They’re becoming much more sophisticated than the rudimentary “press 1 for yes” types of systems that we used to deal with… usually by screaming “TALK TO A HUMAN!” The technology is evolving quickly, and so are the rules and best practices. With those two things still in flux, the tech side and the design side can constantly disrupt each other.

Using natural language processing, we’re finding opportunities to make our technology more humanized and relatable, so people can interact naturally and make better informed decisions about their money. For example, we analyze our customers’ utterances to not only process their requests, but also to understand the intent behind their requests, so we can get them information faster and with greater ease. We really take “natural language processing” seriously. Intuit works really hard to speak its customers’ language. That’s extra important to us. You can’t expect people to say “accounts receivable.” It’s “who owes me money?” As chatbots become more and more prevalent in our daily lives, it’s important that we’re aware of the types of artificial beings we’re inviting into our living rooms, cars, and offices. I’m excited to see the industry experiment with gender-neutral chatbots, where we can both break out of the female assistant trope and also make the world a little more aware of non-binary individuals.

Explain your ideas about screenwriting/improv/chatbots?

When I joined the WordGirl writing team, my first task was to read the show “bible” that the creators had written. Yes, “bible” is the term they use. WordGirl didn’t have a central office or writers’ room. We met up once a year to pitch ideas and, after that, we worked remotely. Therefore, the show bible was essential when it came to getting us all on the same page creatively.

Once I got to Intuit, I used those learnings to create a relationship bible to make sure everyone was on the same page about our bot’s personality and its relationship with our customers. Right now my friend Nicholas Pelczar and I are the only two conversation designers at QuickBooks, but we don’t want it to stay that way. As the team expands, documents like this become even more important.

As a side-note, I originally called the document, “the character bible,” but after my team had a coaching session with Scott Cook, who is the co-founder of Intuit, I felt that the document wasn’t customer-focused enough. I then renamed it the “relationship bible” and redid the sections on the bot’s wants, needs, and fears so that they started with the customer’s wants, needs, and fears. Having established those, I then spelled out exactly why the bot wants and fears those same things.

I also got to apply my improv experience to make the bot more personable. Establishing the relationship is the best first thing you can do in an improv scene. The Upright Citizens Brigade is all about this. It increases your odds of having your scene make sense. Also, comedy is hard, and I wanted to give other content designers some easy recipes to write jokes. Therefore, the last section of the relationship bible has some guidance on the kinds of jokes the bot would and wouldn’t/shouldn’t make, as well as some comedy tropes designers can use.

What are some of the best practices? And the gotchas?

On a tactical level, conversation design hinges on communicating clearly with people so that the conversation itself doesn’t slip off the rails. When this happens in human conversation, people are incredibly adept at getting things back on track. Bots are much less nuanced and flexible, and they really struggle with this. Therefore, designers need to be incredibly precise both in how they ask and answer questions. It’s really easy to mislead people about what the bot is asking or what it’s capable of answering.

Beyond that, it’s really important to track the emotional undercurrents of the conversation. It may be an AI, but it’s dealing with people (and all of their emotions). AI has its strengths, but design requires empathy, and this is absolutely the case when you’re engaging with people in conversation. Our emotions are a huge part of what we’re communicating, and true understanding requires that we understand that side of it as well. Even though a chatbot is not a person, it is made by real people, expressing real care, in a way that makes it available to everyone, all the time. It’s important to keep this in mind when working on a chatbot, so that the voice remains authentic.

One of the biggest things we need to take into account is ensuring the chatbot is a champion for the company, but still remains a little bit separated from the company. Since chatbots still make mistakes, it’s important to take steps to protect the company’s reputation. This plays out in creating strict guidelines around the use of “I” vs. “we.” When the chatbot messes up, it uses “I,” and it only uses “we” in moments when it is working in conjunction with humans at the company. By using these two pronouns distinctly, customers are better able to understand what is the work of the chatbot vs. what is the work of the company. Essentially, we think of QuickBooks Assistant as a kind of digital employee.

How To Raise Your First Series A Round

The Series A is the first round of startup funding from institutional investors, such as venture capitalists. This is certainly a huge milestone. But getting this funding is not easy. “Series A investors will be looking at your prototype, traction, and management team,” said Sid Sijbrandij, who is the CEO of GitLab.

And even if you get funding, there are lots of risks. “When negotiating, remember that you’re not just setting terms for the Series A but you’re signaling what is acceptable for all future rounds,” said Adam Wilson, who is the CEO of Trifacta.

So then, what are some strategies and approaches to consider? Well, I reached out to entrepreneurs who have been through the process. Here’s what they had to say:

Kraig Swensrud, who is the CEO and Founder of Qualified.com:

You need to check off the Three T’s. (1) Team – We had built the core team, the first 10 employees, that could lead and grow the business to $10M in revenue. (2) Timing – We had built a product, and our first 50 customers tell us that our product presents a real and meaningful opportunity for change in their business right NOW. Therefore, we can see a path to product-market fit within months. (3) TAM – If our grand vision is more or less accurate, and our product delivers on that vision, the Total Addressable Market is huge.

Tim Eades, who is the CEO of vArmour:

I have led three companies through fundraising rounds and the most important lesson I have learned, as basic as it sounds, is to create a fully fleshed out fundraising strategy. You would think that is easy to do, but you’d be surprised how many people don’t have a well-thought-out plan. Start with building out a timeline of around five to six months and identify the key metrics you want to achieve each month. Then you need to identify your top targets. I usually go with around five VCs, but make sure you do your homework! It’s not enough for a VC to have a big name—they need to understand your market, not be currently investing in a competitor of yours, and have a proven track record of helping companies grow and scale to market. Finally, don’t be afraid to own the process. VCs want to see you take initiative, so be proactive and meet with them every six weeks to show your progress as well as talk through what your next steps are.

Chris Nicholson, who is the founder and CEO of Skymind:

VCs are looking for traction in the form of users or revenue, and the best way to help them understand all that—the problem, the product, the team—is in the form of a story, which you tell in a slide deck. The best slide deck I saw was from Front founder Mathilde Collin. I structured ours like hers.

Tomer Tagrin, who is the CEO and Cofounder of Yotpo:

Tactically, it’s good practice to bring someone with you to meetings to take notes. Then do a debrief on what worked and what didn’t to improve for the next meeting. You might also discover something new to incorporate into your pitch.

Jake Stein, who is the SVP of Stitch, Talend:

For Series A, for example, you’re partially selling investors on the dream of what could come in the future and partially providing evidence that you’ve made tangible progress with your product, service, and users to back you up. For any company, it’s useful to be rigorous about where you are now, where you need to go, and when you will switch phases.

Martin Hitch, who is the Chief Business Officer and co-founder at Bossa Nova:

The most important advice for raising a Series A is to make sure it’s the number one project for the funding lead (usually the CEO). While CEOs often juggle a number of responsibilities, they need to carve out dedicated time to prioritize fundraising.

Ross Schibler, who is the CEO and co-founder of Opsani:

I have founded three companies over the past 25 years in Silicon Valley—two with successful exits, and my current company, Opsani has just closed a $10M round from Redpoint, Zetta, and Bain. One piece of advice I would pass on to fellow entrepreneurs is to consider the quality of the partner making the investment because that will greatly affect the quality of the outcome. I would go so far as to say that a dollar from one investor might be worth 2x the same dollar from a higher quality investor. Folks tend to get hung up on valuations and percentages when what they should be focused on is the question: “Do I want to work with this partner and will they help me build a great company?”

Bipul Sinha, who is the Co-Founder & CEO at Rubrik and a Venture Partner at Lightspeed Venture Partners:

When I was in VC, I had a strong thesis about risk capital that when someone gives you VC funding, the purpose is for you to take risks and create high growth, especially in the early days of a company. Entrepreneurs often get bad advice around risk and become more conservative than what the situation demands—a startup taking capital should by nature be taking risks, so if the idea works, you can create a massive company.

Adam Karp, who is the CEO of Lively:

When pitching an investor, the most important thing is to know what impact your business is going to make in the lives of your customers. Yes, the fundamentals of your business must be solid. Your product needs to work. But until you can truly put yourself in the shoes of your customer, and understand what they lose by not having your solution, you’re not finished. For Lively, we share stories of our earliest customers. A man who, after he became a customer, sat on his porch and heard the birds sing for the first time in years. A mom who had given up on phone calls, and was finally able to hear her kids again once she tried our Bluetooth-enabled device. These stories help explain why Lively’s hearing aids are more than just another gadget, and they help investors see why our customers are so wildly passionate about the service that they get from us. Don’t enter a pitch without knowing why exactly your customers love you.

Ways To Successfully Pitch The Media

I get a fair amount of media pitches in my inbox. I’d like to think this is due to my own inherent popularity. But of course, the main reason is that I write for Forbes.com!

Some of the pitches I receive come directly from the founder, which is fine with me. But most come from a company’s PR agency.

For the most part, the pitches are solid and helpful. But as with anything, there are times when things go off the rails.

Then what are some of the ways to boost the odds of getting coverage? What should be avoided?  Well, for me, here are some approaches that work:

Avoid Follow-Ups: I really don’t have the time to respond to all pitches. And besides, I’m pretty sure many of them are being sent to multiple writers anyway.

But sometimes a PR person will follow up with another email, writing something like: “I’m resending this to make sure you did not miss it.” Oh, and this may not be the end of it. Sometimes I get three or even four of these emails.

Note that I do check and retain all my emails. And in some cases, I revisit them. There have been times when I have gone back to an email months after it was sent and used the source for a story.

Relevancy: I think this is the most important factor for a pitch. Now this does not mean you need to read everything from a writer. Rather, spending time going over their headlines is a good approach.

Top Three Mistakes Managers Make With Their Teams

For over 35 years, Shelle Rose Charvet has researched the power of words and language. At the heart of this is the Language and Behavior Profile (LAB) system, which you can learn from her best-selling book, Words That Change Minds: The 14 Patterns for Mastering the Language of Influence (I included this in my Forbes.com post regarding the best management books for entrepreneurs and executives). It has proven quite effective for just about any context, whether for business or personal relationships. Keep in mind that Charvet’s focus is on finding hidden triggers that motivate people at the unconscious level.

Sounds kind of heady, right? Actually, Charvet has a way of making her concepts easy and understandable.

For me, I was particularly interested in how these concepts could help entrepreneurs, who are usually new to management. Yet if they get things wrong, the consequences can be devastating.

So in an interview with Charvet, she provided the following helpful advice:

Mistake #1:  Many managers think if you give someone the correct information, it will change their behavior. If you tell someone what to do, they will do it.

Yet research has repeatedly shown that when given the correct information, people double down on erroneous beliefs. Most people dislike being told what to think, what to do or what to believe, especially by their boss (or spouse!). Even a simple statement of facts can be perceived as “Command Language” and raise hackles.

Then managers end up spending a frustrating amount of time checking if tasks were completed, and done correctly.

I suggest using the Suggestion Model™ to avoid this passive resistance, get more buy-in, and get more things done on time and done well. It’s an easy four-step process.

  • Make a suggestion
  • State what problem it avoids or solves
  • State the benefit
  • State why it’s easy to do

Here’s an example: “I believe this version of the software makes the most sense right now because it doesn’t have the issues the other ones have, plus it integrates well with the other software you are using, and it will be fairly easy to implement.”

Mistake #2: Managers are still using the “Feedback Sandwich” that has demoralized millions of team members around the world.

You know the drill: Your boss compliments you, and then you brace for the expected “improvement point” (lightly-disguised criticism), followed by a vaguely-worded bit of praise. This ubiquitous method has trained people to be immediately suspicious of any positive comments and to feel bad even when the news is only good.

Instead, I recommend completely separating praise from critique, so that people won’t cringe whenever you say something nice. They will be more likely to take in the critique and do something about it and it’s easier for you to deliver.

  • For praise: Go into your team member’s office or phone them when they are unlikely to pick up the phone. Tell them what they did well, and the positive consequence of that, and then immediately leave the room or hang up the phone, saying “Thanks, gotta go.” (If you stay, they will be waiting for the other shoe to drop.)
  • For critique: You could use the Bad News Formula™ to reduce bad feelings while still getting your message across. This involves clearly telling the bad news first and then adding “but,” followed by some genuine pieces of good news. It’s so much easier than criticizing.

Mistake #3: A manager often assumes that what is important for him or her is also important for the team.

Having worked with hundreds of teams, I notice that the team leader is often disappointed with the lower level of engagement from team members. When the team discusses what everyone thinks the objectives are, and which operating values are important, they are all surprised at how different the points of view are.

The first step is to look at the goals and objectives of the team and get input from everyone as to why they are important. Find out the personal motivations of the team members. It’s not just business. It’s personal!

Bias: The Silent Killer Of AI (Artificial Intelligence)

When it comes to AI (Artificial Intelligence), there’s usually a major focus on using large datasets, which allow for the training of models. But there’s a nagging issue: bias. What may seem like a robust dataset could instead be highly skewed, such as in terms of race, wealth and gender.

Then what can be done? Well, to help answer this question, I reached out to Dr. Rebecca Parsons, who is the Chief Technology Officer of ThoughtWorks, a global technology company with over 6,000 employees in 14 countries. She has a strong background in both the business and academic worlds of AI.

So here’s a look at what she has to say about bias:

Can you share some real-life examples of bias in AI systems and explain how it gets there?

It’s a common misconception that the developers who are responsible for infusing bias in AI systems are either prejudiced or acting out of malice—but in reality, bias is more often unintentional and unconscious in nature. AI systems and algorithms are created by people with their own experiences, backgrounds and blind spots, which can unfortunately lead to the development of fundamentally biased systems. The problem is compounded by the fact that the teams responsible for the development, training, and deployment of AI systems are largely not representative of society at large. According to a recent research report from NYU, women comprise only 10% of AI research staff at Google and only 2.5% of Google’s workforce is black. This lack of representation is what leads to biased datasets and ultimately algorithms that are much more likely to perpetuate systemic biases.

One example that demonstrates this point well is voice assistants like Siri or Alexa, which are trained on huge databases of recorded speech that are unfortunately dominated by speech from white, upper-middle-class Americans—making it challenging for the technology to understand commands from people outside that category. Additionally, studies have shown that algorithms trained on historically biased data have significant error rates for communities of color, especially in over-predicting the likelihood that convicted criminals will reoffend, which can have serious implications for the justice system.

How do you detect bias in AI and guard against it?

The best way to detect bias in AI is by cross-checking the algorithm you are using to see if there are patterns that you did not necessarily intend. Correlation does not always mean causation, and it is important to identify patterns that are not relevant so you can amend your dataset. One way you can test for this is by checking if there is any under- or overrepresentation in your data. If you detect a bias in your testing, then you must overcome it by adding more information to supplement that underrepresentation.
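The under- or overrepresentation check described above can be sketched as a simple audit of group shares in a training set. The field name and the 30% threshold below are illustrative assumptions, not a standard; an appropriate threshold depends on the application.

```python
# Audit a dataset for representation: compute each group's share of the
# records and flag groups that fall below a chosen minimum share.
from collections import Counter

def representation_report(records, field, min_share=0.3):
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: {"share": n / total,
                    "underrepresented": n / total < min_share}
            for group, n in counts.items()}

# Toy candidate pool echoing the hiring example earlier in the article.
candidates = [{"gender": "female"}] * 100 + [{"gender": "male"}] * 300
report = representation_report(candidates, "gender")
print(report)  # female: 25% share, flagged; male: 75% share, not flagged
```

A flag from a check like this would prompt supplementing the dataset, as Parsons suggests, before retraining.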

The Algorithmic Justice League has also done interesting work on how to correct biased algorithms. They ran tests on facial recognition programs to see if they could accurately determine race and gender. Interestingly, lighter-skinned males were almost always correctly identified, while 35% of darker-skinned females were misidentified. The reason? One of the most widely used facial-recognition data training sets was estimated to be more than 75% male and more than 80% white. Because the categorization had far more definitive and distinguishing categories for white males than it did for any other race, it was biased in correctly identifying these individuals over others. In this instance, the fix was quite easy. Programmers added more faces to the training data and the results quickly improved.

While AI systems can get quite a lot right, humans are the only ones who can look back at a set of decisions and determine whether there are any gaps in the datasets or oversights that led to a mistake. This exact issue was documented in a study where a hospital was using machine learning to predict the risk of death from pneumonia. The algorithm came to the conclusion that patients with asthma were less likely to die from pneumonia than patients without asthma. Based on this data, hospitals could decide that it was less critical to hospitalize patients with both pneumonia and asthma, given that those patients appeared to have a higher likelihood of recovery. However, the algorithm overlooked another important insight: patients with asthma typically receive faster and more intensive care than other patients, which is why they have a lower mortality rate connected to pneumonia. Had the hospital blindly trusted the algorithm, it may have incorrectly assumed that it’s less critical to hospitalize asthmatics, when in reality they require even more intensive care.

What could be the consequences to the AI industry if bias is not dealt with properly?

As detailed in the asthma example, if biases in AI are not properly identified, the difference can quite literally be life and death. The use of AI in areas like criminal justice can also have devastating consequences if left unchecked. Another less-talked about consequence is the potential of more regulation and lawsuits surrounding the AI industry. Real conversations must be had around who is liable if something goes terribly wrong. For instance, is it the doctor who relies on the AI system that made the decision resulting in a patient’s death, or the hospital that employs the doctor? Is it the AI programmer who created the algorithm, or the company that employs the programmer?

Additionally, the “witness” in many of these incidents cannot even be cross-examined since it’s often the algorithm itself. And to make things even more complicated, many in the industry are taking the position that algorithms are intellectual property, therefore limiting the court’s ability to question programmers or attempt to reverse-engineer the program to find out what went wrong in the first place. These are all important discussions that must be had as AI continues to transform the world we live in.

If we allow this incredible technology to continue to advance but fail to address questions around biases, our society will undoubtedly face a variety of serious moral, legal, practical and social consequences. It’s important we act now to mitigate the spread of biased or inaccurate technologies.

What Do VCs Look For In An AI (Artificial Intelligence) Deal?

SoftBank Group recently launched its latest fund, Vision Fund 2, which has $108 billion in assets. The focus is primarily on making investments in AI (Artificial Intelligence). No doubt, the fund will have a huge impact on the industry.

But of course, VCs are not the only ones ramping up their investments. So are mega tech companies. For example, Microsoft has announced a $1 billion equity stake in OpenAI (here’s a post I wrote for Forbes.com on the deal).

The irony is that, until a decade ago, AI was mostly a backwater that had suffered several winters. But with the surge in Big Data, new innovations in academic research and improvements in GPUs, the technology has become a real force.

“I’m convinced that the current AI revolution will be the largest technology trend driving innovation in enterprise software over the next decade,” said Jeremy Kaufmann, who is a Principal at ScaleVP. “The magnitude of this trend will be at least as large as what we observed with cloud software overtaking on-prem software over the last two decades, which created over 200 billion dollars in value. While certain subfields like autonomous driving may be overhyped with irrational expectations on timing, I would argue that progress in the discipline has actually exceeded expectations since the deep learning revolution in 2012.”

Given all this, many AI startups are springing up to capitalize on the megatrend. So what factors improve the odds of getting funding?

Well, in the AI wave, things may be different from what we saw with the cloud revolution.

“Unlike the shift from on-prem software to SaaS, though, progress in AI will not rewrite every business application,” said Kaufmann. “For example, SaaS companies like Salesforce and Workday were able to get big by fundamentally eating old-school on-prem vendors like Oracle and ADP. In this AI revolution, however, do not expect a new startup to displace an incumbent by offering an ‘AI-first’ version of Salesforce or Workday, as AI does not typically replace a core business system of record. Rather, the most logical area for AI to have an impact in the world of enterprise software will be to sit on top of multiple systems of record where it can act as a system of prediction. I am excited about conversations around the likelihood of sales conversion, the most effective marketing channels, the probability that a given credit card transaction is fraudulent, and whether a particular website visitor might be a malicious bot.”

Data, The Team And Business Focus

An AI startup will also need a rock-solid data strategy that allows for training of its models. Ideally, this means having a proprietary source.

“At the highest level, a lot of our diligence comes down to who has proprietary data,” said Kaufmann. “In every deal, we ask, ‘Does this startup have domain-specific understanding and data or could a Google or Amazon sweep in and replicate what they do?’ It’s one of the biggest challenges for startups—in order to succeed, you need a data set that can’t easily be replicated. Without the resources of a larger, more established company, that’s a very big challenge—but it can be achieved with a variety of hacks, including ‘selling workflow first, AI second,’ scraping publicly available data for a minimum viable product, incentivizing your customers to share their data with you in return for a price discount, or partnering with the relevant institutions in the field who possess the key data in question.”

And even when you have robust data, other challenges remain, such as tagging and labeling. There are also inherent problems with bias, which can lead to unintended consequences.

All this means that—as with any venture opportunity—the team is paramount. “We look at the academic backgrounds,” said Rama Sekhar, who is a partner at Norwest Venture Partners. “We also like a team that has worked with models at scale, say from companies like Google, Amazon or Apple.”

But AI startups can often focus too much on the technology, which can stymie progress. “Some of the red flags for me are when a pitch does not define a market target clearly or there is no differentiation,” said David Blumberg, who is the Founder and Managing Partner of Blumberg Capital. “Startups usually fail not because of the technology but because of a lack of product-market fit.”

Microsoft Bets $1 Billion On The Holy Grail Of AI (Artificial Intelligence)

Microsoft is back to its winning ways. And with its surging profits, the company is looking for ways to marshal its enormous resources to keep up the momentum. Perhaps one of the most important recent deals is a $1 billion investment in OpenAI. The goal is to build a next-generation AI platform that is not only powerful but also ethical and trustworthy.

Founded in 2015, OpenAI is one of the few companies — along with Google’s DeepMind — that is focused on AGI (Artificial General Intelligence), which is really the Holy Grail of the AI world. According to a blog post from the company: “AGI will be a system capable of mastering a field of study to the world-expert level, and mastering more fields than any one human — like a tool which combines the skills of Curie, Turing, and Bach. An AGI working on a problem would be able to see connections across disciplines that no human could. We want AGI to work with people to solve currently intractable multi-disciplinary problems, including global challenges such as climate change, affordable and high-quality healthcare, and personalized education. We think its impact should be to give everyone economic freedom to pursue what they find most fulfilling, creating new opportunities for all of our lives that are unimaginable today.”

It’s a bold vision. But OpenAI has already made major advances in AI.  “It has pushed the limits of what AI can achieve in many fields, but there are two in particular that stand out,” said Stephane Rion, who is the Senior Deep Learning Specialist of Emerging Practices at Teradata. “The first one is reinforcement learning, where OpenAI has driven some major research breakthroughs, including designing an AI system capable of defeating most of its human challengers in the video game Dota 2. This project doesn’t just show the promise of AI in the video game industry, but how Reinforcement Learning can be used for numerous other applications such as robotics, retail and manufacturing. OpenAI has also made some major advances in the area of Natural Language Processing (NLP), specifically in unsupervised learning and attention mechanism. This can be used to build systems that achieve many language-based tasks such as translation, summarization and generation of coherent paragraphs of text.”

But keep in mind that AI is still fairly weak, with applications for narrow use cases. The fact is that AGI is likely years away. “In fact, we’re far from machines learning basic things about the world in the same way animals can,” said Krishna Gade, who is the CEO and Co-founder at Fiddler Labs. “It is true, machines can beat humans on some tasks by processing tons of data at scale. However, as humans, we don’t understand or approach the world this way — by processing massive amounts of labeled data. Instead, we use reasoning and predictions to infer the future from available information. We’re able to fill in gaps, extrapolate things with common sense and work with incomplete premises — which machines simply can’t do yet. So while it is interesting, I do believe that we’re quite far from AGI.”

Guy Caspi, who is the CEO of Deep Instinct, agrees on this. “While deep learning has drastically improved the state-of-the-art in many fields, we have seen little progress in AGI,” he said. “A deep learning model trained for computer vision cannot suddenly learn to understand Japanese, and vice versa.”

Regardless, with OpenAI and Microsoft willing to take on tough challenges, there will likely be an acceleration of innovation and breakthroughs — providing benefits in the near-term. “I love to see a company like Microsoft, which the market has rewarded with enormous returns on capital, making investments that aim to benefit society,” said Dave Costenaro, who is the head of artificial intelligence R&D at Jane.ai. “Funding OpenAI, creating more open source content, and sponsoring their ‘AI for Earth’ grants are all examples that Microsoft has recently spun up in earnest. Microsoft is not a charity, however, so the smart money will take this as signal of how strategically important these issues are.”

What’s more, the emphasis on ethics will also be impactful. As AI becomes pervasive, there needs to be more attention to the social implications. We are already seeing problems such as bias and deepfakes.

“With the fast pace of AI innovation, we’re continually encountering new, largely unforeseen ethical implications,” said Alexandr Wang, who is the CEO of Scale. “In the absence of traditional governing bodies, creating ethical guidelines for AI is both a shared and a perpetual responsibility.”

AI (Artificial Intelligence): An Extinction Event For The Corporate World?

Back in 1973, Daniel Bell published a pioneering book, called The Coming of Post-Industrial Society.  He described how society was undergoing relentless change, driven by rapid advances in technology. Services would become more important than goods. There would also be the emergence of a “knowledge class” and gender roles would evolve. According to Bell: “The concept of the post-industrial society deals primarily with changes in the social structure, the way in which the economy is being transformed and the occupational system reworked, and with the new relations between theory and empiricism, particularly science and technology.”

Among the many readers of the book was a young Tom Siebel. And yes, it would rock his world. It’s what inspired him to enroll at the graduate school of engineering at the University of Illinois and get a degree in Computer Science. He would then join Oracle during the 1980s, where he helped lead the wide-scale adoption of relational databases. Then in 1993, he started his own business, called Siebel Systems, which capitalized on the Internet wave, and after this, he launched C3.ai, a top enterprise AI software provider.

No doubt, he has a great view of tech history (during his career, the IT business has grown from $50 billion to $4 trillion). But he also has an uncanny knack for anticipating major trends and opportunities. Bell’s book was certainly a big help. Keep in mind that Bell said the book was essentially about “social forecasting.”

OK then, so what now for Siebel? Well, he has recently published his own book, Digital Transformation: Survive and Thrive in an Era of Mass Extinction.

In it, he contends that technology is at an inflection point. While the PC and Internet eras were mostly about streamlining procedures and operations, the new era is much different. It’s about the convergence of four megatrends: cloud computing, big data, artificial intelligence (AI) and IoT (Internet-of-Things). Together, these are making systems and platforms increasingly smart.

Siebel provides various case studies to show what’s already being done, such as:

  • Royal Dutch Shell: The oil giant has created an AI app that monitors some 500,000 pieces of equipment across its refineries worldwide, allowing for predictive maintenance.
  • Caterpillar: With an iPhone, a customer can easily get the vitals on a tractor. Caterpillar has also leveraged telemetry to monitor assets in order to predict failures.
  • 3M: The company is leveraging AI to anticipate complaints from invoices.

All these initiatives have had a major impact, resulting in hundreds of millions in savings. Of course, this does not include the positive impact from improved customer service, safety and environmental protection.

In fact, Siebel believes that if companies do not engage in digital transformation, the prospects for success will be bleak. This means that CEOs must lead the process and become much more knowledgeable about technology. In other words, all CEOs will need to deeply rethink their businesses.

Now there will definitely be challenges and resistance. Many companies simply lack the talent to transition towards next-generation technologies like AI. Even worse, the current technologies are often scattered, disconnected and complex.

Yet Siebel does point out that large companies have inherent advantages. One is that there is often a large amount of quality data, which can be a powerful moat against the competition. What’s more, large companies have the resources, distribution and infrastructure to scale their efforts.

But for the most part, CEOs can no longer wait as the pace of change is accelerating.

According to Siebel:  “New business models will emerge.  Products and services unimaginable today will be ubiquitous.  New opportunities will abound.  But the great majority of corporations and institutions that fail to seize this moment will become footnotes in history.”

Surprises With Tech Compensation

The IPO market continues to be red hot. This week, Medallia (MDLA) soared 76% on its debut and Phreesia (PHR) rose 39%. And yes, this bullish activity is driving up compensation in Silicon Valley, especially from stock option packages.

Yet this is also causing some problems. After all, non-tech companies realize they need to hire technical talent but have fewer resources to compete. Hey, even startups are feeling the pressure.

But when you look at compensation data, there are some other interesting trends that are emerging. Consider the findings from a recent report from Hired, which is a marketplace for matching tech talent. It is based on more than 420,000 interview requests and job offers across 10,000 companies and 98,000 job seekers.

So then what are some of the takeaways? Well, let’s take a look:

Boston, Austin and D.C. are gaining momentum: As should be no surprise, the San Francisco Bay Area is the highest paying market for tech workers (the salary levels rose 2% last year). But the fact is that the market is dynamic and workers are looking elsewhere. For example, tech salaries in Boston jumped 9% last year and Austin and Washington D.C. saw 6% increases.

“The salary growth in up-and-coming tech hubs is a clear sign of these cities doing everything they can to attract the best tech talent and compete with the Bay Area — and from our analysis, it’s working too,” said Mehul Patel, who is the CEO of Hired. “Our data shows that Austin, where average tech salaries grew from $118K in 2017 to $125K in 2018, is the most appealing place for tech talent to work.”

IPOs and equity compensation: Keep in mind that the IPO boom is not creating a frenzy for equity. “We also looked at tech worker sentiment around the importance of equity in a compensation package and despite this year’s IPO wave, our results found that more than half of global tech workers (54%) are on the fence about forgoing a higher salary for company equity, suggesting that added pressures could be boosting salaries,” said Patel.

Tech workers will move: The situation in the Bay Area is creating something odd. That is, tech workers feel underpaid because the cost of living is increasing even more, the taxes are high and real estate prices are at extreme levels.

According to Hired, when you adjust for these factors, an average Austin salary is worth $208,000 in Bay Area terms. In other words, a typical Bay Area tech worker would need an $83,000 raise to reach parity.
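For what it’s worth, the parity figure follows directly from the numbers quoted in the report, on the (assumed) reading that the $208,000 is the cost-of-living-adjusted equivalent of Austin’s $125,000 average cited earlier:

```python
# Back-of-the-envelope check of the parity gap, assuming the $208K figure
# is the Bay Area cost-of-living-adjusted equivalent of Austin's average.
austin_avg = 125_000           # average Austin tech salary (Hired, 2018)
bay_area_equivalent = 208_000  # Bay Area equivalent after adjustments
raise_needed = bay_area_equivalent - austin_avg
print(f"${raise_needed:,}")    # $83,000
```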

Currently, the most popular cities to relocate to are Austin, Seattle and Denver. Oh, and 60% of tech workers plan to relocate within the next five years.

Age: Tech remains generally biased toward younger people. The Hired report indicates that the average tech salary plateaus at age 40 in the US.

Education: The requirements are changing rapidly. For the most part, tech workers do not see as much value in advanced degrees. The Hired survey shows that 31% believe they could have the exact same job without their degree, and only 23% of those with master’s degrees or doctorates believe they command higher salaries because of their advanced degree.

Instead, tech workers are looking to alternative forms of education, such as coding bootcamps and online learning platforms.

The Hired report notes: “Tech giants like Apple, Google and PayPal have moved away from traditional education requirements and are increasingly interested in candidates with specific in-demand tech skills and on-the-job experience that may not be acquired through higher education.”