My Course On The Python Language

Have you always wanted to learn Python — but never seemed to have the time to do so? Well, I have a solution: my new online course, which only takes two hours to complete. In it, you will learn about the core basics, such as variables, loops, data structures, functions and so on. I also provide numerous cheat sheets.
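
To give you a taste of how approachable the language is, here is the kind of tiny example the basics cover: variables, a loop, a data structure and a function (an illustrative snippet, not taken verbatim from the course):

  def average(numbers):
      """Return the arithmetic mean of a list of numbers."""
      return sum(numbers) / len(numbers)

  scores = [88, 92, 79, 95]        # a list, one of Python's core data structures
  for score in scores:             # a loop
      print(f"Score: {score}")

  print(f"Average: {average(scores):.1f}")  # Average: 88.5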

Python is one of the world’s most popular programming languages. A big reason is that it has become essential for data science, machine learning and AI.

Interested? I hope so.

To help things along, I also have a special offer: the next ten signups get the course for free! You can click here for it.

BTW, if you do sign up, reviews are very helpful 🙂

Tecton.ai Snags $20 Million To Solve AI’s Data Problem

While the COVID-19 pandemic has halted many venture fundings, Tecton.ai has been able to buck the trend. This week the company announced a $20 million investment from Andreessen Horowitz and Sequoia (last year there was a $5 million angel round).

Tecton is a platform that morphs raw data into AI models that can be successfully deployed.  And yes, this is far from a trivial process.

“The foundational success of an AI-based technology revolution or even the build of a very simple algorithm ultimately lies in the health of the data,” said Kim Kaluba, who is the Senior Manager for Data Management Solutions at SAS. “However, in survey after survey organizations continue to report problems with accessing, preparing, cleansing and managing data, ultimately stalling the development of trustworthy and transparent analytical models.”

Consider that data wrangling is often the most time-consuming and expensive part of the AI process. “Some data scientists report spending 80% of their time collecting and cleaning data,” said Jen Snell, who is the Vice President of Product Marketing and Intelligent Self Service at Verint. “This problem has become so ubiquitous that it’s now called the ‘80/20 rule’ of data science.”
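
To make the 80/20 point concrete, here is a minimal sketch, using Python's pandas library, of the kind of routine cleanup that consumes so much of that time (the file name and column names are hypothetical):

  import pandas as pd

  # Hypothetical raw export; in practice the data may be spread across silos.
  df = pd.read_csv("transactions_raw.csv")

  # Typical wrangling steps that eat up a data scientist's day:
  df = df.drop_duplicates()                                     # remove duplicate records
  df["amount"] = pd.to_numeric(df["amount"], errors="coerce")   # fix mis-typed values
  df["date"] = pd.to_datetime(df["date"], errors="coerce")      # normalize timestamps
  df["merchant"] = df["merchant"].str.strip().str.lower()       # normalize text fields
  df = df.dropna(subset=["amount", "customer_id", "date"])      # discard unusable rows

  print(f"{len(df)} clean rows ready for modeling")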

Regarding Tecton, the technology draws on the deep experience of its three founders, who helped build Uber’s AI platform, Michelangelo. “When we got to Uber, everything was breaking because of the extreme growth,” said Mike Del Balso, who is the CEO and co-founder of Tecton. “Data was spread across silos and there were challenges with the deployment of models. With Michelangelo, we made an end-to-end platform that was targeted for the average data science person. We didn’t want to create huge engineering teams. We also built Michelangelo with the focus on production, collaboration, visibility and reusability.”

Within a couple of years, the platform led to the development of thousands of AI models, powering capabilities such as ETA predictions and safety and fraud scores. The result was more sustainable growth and stronger competitive advantages for Uber.

Why Is Data So Complicated?

Data is actually fairly simple. It’s just a string of numbers, right?

This is true. But data does present many tough challenges for enterprises, even for some of the most advanced technology companies.

“Oftentimes the data that we receive is ‘dirty,’” said Melissa McSherry, who is the SVP Global Head of Credit and Data Products at Visa. “Think about your credit card statement. The merchant names are sometimes unrecognizable—that has to do with the way merchants are set up in the system. When we clean up the data, we can often generate amazing insight. But that is significant work. Oftentimes organizations don’t understand how much work is required and are disappointed in what it takes to actually get results.”

Another issue with data is organizational. “Enterprises enforce data security and governance policies that weren’t designed to feed data science teams with a steady stream of up-to-date, granular business data,” said Bethann Noble, who is the Senior Director of Product Marketing and Machine Learning at Cloudera. “As data science teams start new projects with different stakeholders, they have to solve for data access once again, which could mean a different journey through a different bureaucratic maze every time. And the necessary data can be anywhere, in any form—residing across different data centers, cloud platforms, or edge devices. It needs to be moved and pre-processed to be ready for machine learning, which can involve complex analytical pipelines across physical and organizational silos.”

Keep in mind that the data problem is only getting more complicated. Based on research from IDC, the total amount of global data will reach 175 zettabytes by 2025, up from 33 zettabytes in 2018 (a zettabyte is 10^21, or 1 sextillion, bytes).

“In this digital age, we are suffering from ‘InfoObesity’—gorging ourselves on an inconsumable amount of data that is not just unwieldy but can become dysfunctional, especially as we increase the amount of data we collect without scaling our ability to support, filter and manage it,” said Michael Ringman, who is the CIO of TELUS International. “While investing in Big Data is easy, efficient and effective use of it has become difficult.”

Oh, and then there are the privacy and security issues. “Given the mass amounts of data used for complex algorithms, data science platforms can be hot targets for data breaches,” said Ross Ackerman, who is the Director of Analytics and Transformation at NetApp. “Often, the most important data for algorithms contain or can be mapped to CII (Customer Identifiable Information) or PII (Personally Identifiable Information).”

Tecton’s Capabilities

For enterprise AI applications, there are really two main approaches. First, there are analytical models, which provide insights like forecasted churn rates. These types of applications do not need real-time data.

Next, there are operational models. These are embedded in a company’s product, such as a mobile app. They need highly sophisticated data systems and scale. “This is where you can create magical experiences,” said Del Balso.

For the most part, Tecton is about operational models, which are the most demanding but can also provide the most benefits. “It’s high stakes,” said Del Balso.

Tecton is built to streamline the data pipeline, which means data scientists can spend more time building effective models. An essential part of this is a feature store that allows for a seamless handoff between data scientists and data engineers. Tecton, of course, has other cutting-edge features, and the funding will certainly accelerate the innovation (the platform is currently in private beta).
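
To give a feel for the feature store idea, here is a minimal sketch in Python. The FeatureStore class below is a toy registry written for illustration; it is not Tecton's actual API, which is still in private beta:

  from typing import Callable, Dict

  class FeatureStore:
      """Toy feature store: a shared registry so data scientists and
      data engineers work against the same feature definitions."""
      def __init__(self):
          self._features: Dict[str, Callable] = {}

      def register(self, name: str):
          def wrapper(fn: Callable):
              self._features[name] = fn
              return fn
          return wrapper

      def get(self, name: str, **kwargs):
          return self._features[name](**kwargs)

  store = FeatureStore()

  @store.register("user_7d_trip_count")
  def user_7d_trip_count(user_id: str) -> int:
      # A real system would query a stream or warehouse here;
      # this returns a canned value for illustration.
      return 12

  # Training and serving code both read the same definition:
  print(store.get("user_7d_trip_count", user_id="u123"))

Because both sides share one registry, a feature computed for training is guaranteed to be the same feature served in production, which is the core promise of a feature store.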

“For decades, companies have worked to develop technology, knowledge, skills and infrastructure to handle and harvest unstructured data in pursuit of unlocking answers to the most difficult questions,” said Michal Siwinski, who is a Corporate VP at Cadence Design Systems. “However, there’s more work to be done. Because the technology is still continuing to evolve, data is a virtually untapped resource with only as high as 4% of today’s data being analyzed.”

AI (Artificial Intelligence) Projects: Where To Start?

Artificial Intelligence (AI) is clearly a must-have when it comes to being competitive in today’s markets. But implementing this technology has been challenging, even for some of the world’s top companies. There are issues with data, finding the right talent and creating models that generate sufficient ROI.

As a result, many AI projects fail. According to IDC, only about 35% of organizations succeed in getting models into production.

“While we see AI technologies performing a swath of incredible feats such as Google Translate, AlphaGo, and solving a Rubik’s Cube, it can be hard to tell which business problems AI is apt to solve,” said Ankur Goyal, who is the CEO at Impira. “This has led to a lot of confusion—and a vendor community that has taken advantage of it by labeling things as AI when they aren’t. It’s very reminiscent of early last decade when cloud technologies took off and we had a lot of cloud washing going on. We had vendors marketing themselves as cloud players when their offerings were vaporware. Similarly, we are going through a period of AI washing now.”

So then, if your company is thinking of implementing AI, what is the best way to start? How can you help boost the odds of success and avoid the pitfalls?

Here’s a look at some strategies:

Beware Of The Hype

AI is not magic. It will not solve all your company’s problems.  Rather, you need to take a realistic approach to the technology.

“Unlike traditional data analytics, machine learning (ML) models that power AI are not always going to offer clear-cut answers,” said Santiago Giraldo, who is the Senior Product Marketing Manager of Data Engineering at Cloudera. “Implementing AI into the business requires experimentation and an understanding that not every experiment is going to drive ROI. When an AI project is successful, it is often built on top of many failed data science experiments. Taking a portfolio approach to ML and AI enables greater longevity in projects and the ability to build on the successes more effectively in the future.”

Interestingly enough, there are often situations when the technology is really just overkill!

“Oftentimes businesses take on AI projects not realizing that it might have been cheaper to continue a process manually instead of investing large amounts of time and money into building a system that doesn’t save the company time or money,” said Gus Walker, who is the Senior Director of Product Management at Veritone.

Governance First

You don’t want to spend time and money on a project and then realize there are legal or compliance restrictions. This could easily mean having to abandon the effort.

“First, customer data should not be used without permission,” said Debu Chatterjee, who is the senior director of platform AI engineering at ServiceNow. “Secondly, bias from data should be mitigated. Any model which is a black box and cannot be tested through APIs for bias should be avoided. The risk of bias is present in nearly any AI model, even in an algorithmic decision, regardless of whether the algorithm was learned from data or written by humans.”

Identify The Problem To Be Solved

In the early phases of an AI project, there should be lots of brainstorming. This should also involve a cross-section of people in the organization, which will help with buy-in. The goal is to identify a business problem to be solved. 

“For many companies, the problem is that they start with a need for technology, and not with an actual business need,” said Colin Priest, who is the VP of AI Strategy at DataRobot. “It reminds me of this famous quote from Steve Jobs, ‘You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to sell it.’”

The problem to be solved should also be specific (that is, something that can be measured) and narrow. Don’t boil the ocean.

“It is the small steps that count,” said Mike Brooks, who is the Senior Director of APM Consulting at Aspen Technology. “Do not make the mistake of trying to make AI work for everything, all at once. After analyzing value for each AI initiative, the real benefits come when it solves a very specific goal.”

Costs

While it is important to estimate the ROI of a project, there is often too little attention paid to the cost side of the equation. This oversight can lead to disappointment. After all, it is never fun to be over budget on a corporate initiative.

“Companies looking to implement an AI project should start by looking at the cost of the operation and doing an analysis on how that cost structure compares to best practices,” said Jerry Kurtz, who is the EVP and Head of I&D at Capgemini North America. “The cost of storing and transforming data is typically 70% of the budget, and only brings 10% of the value. Being able to leverage AI to solve business problems is only 30% of the cost, and brings 90% of the value. If an organization can reduce data costs and improve data quality, they’ll have more budget to put toward leveraging AI to solve those business problems, like improving productivity and efficiency.”
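
Kurtz's ratios are easier to appreciate with concrete numbers. Here is a rough back-of-the-envelope sketch in Python, assuming a hypothetical $1 million AI budget:

  budget = 1_000_000  # hypothetical total AI budget

  # Kurtz's figures: data storage/transformation is ~70% of cost but ~10% of
  # the value; applying AI to business problems is ~30% of cost, ~90% of value.
  data_cost, data_value = 0.70 * budget, 0.10
  ai_cost, ai_value = 0.30 * budget, 0.90

  print(f"Data plumbing: ${data_cost:,.0f} for {data_value:.0%} of the value")
  print(f"AI modeling:   ${ai_cost:,.0f} for {ai_value:.0%} of the value")

  # Value per dollar: 0.90/0.30 vs 0.10/0.70, roughly a 21x difference, which
  # is why cutting data costs frees up the highest-leverage budget.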

Buy-In

Implementing AI can be wrenching for an organization.  Employees may be skeptical of the technology and could fear for their jobs. 

This is why there needs to be a focus on getting buy-in, which means clearly communicating the benefits. It should also involve a commitment from the C-Suite. Consider that—according to a recent survey from O’Reilly—the biggest bottleneck for AI is an unsupportive culture.

“For AI to succeed you must have the buy-in of your workforce and the right employee upskilling programs,” said Anand Rao, who is the Global AI Lead at PwC. “You can’t simply offer AI training courses to employees; you need to go further and offer both immediate opportunities and incentives to apply what they’ve learned. Furthermore, business stakeholders and end-users—not just the tech staff—need to be included from the beginning of any project. If they’re not brought in at the start, your organization risks building a solution that does not work for the people who will be using it.”

AI (Artificial Intelligence) Companies That Are Combating The COVID-19 Pandemic

AI (Artificial Intelligence) has a long history, going back to the 1950s, when the computer industry got its start. It’s interesting to note that much of the innovation came from government programs, not private industry, with the focus on leveraging technology to fight the Cold War and put a man on the moon.

The impact of these programs was far-reaching: they would lead to the creation of the Internet and the PC revolution.

So fast forward to today: Could the COVID-19 pandemic have a similar impact? Might it be our generation’s Space Race?

I think so. And of course, it’s not just the US this time. This is a worldwide effort.

The Catalysts

Wide-scale availability of data will be key. The White House Office of Science and Technology Policy has formed the COVID-19 Open Research Dataset (CORD-19), which has over 24,000 papers and is constantly being updated. The effort has the support of the National Library of Medicine (NLM), the National Institutes of Health (NIH), Microsoft and the Allen Institute for Artificial Intelligence.

“This database helps scientists and doctors create personalized, curated lists of articles that might help them, and allows data scientists to apply text mining to sift through this prohibitive volume of information efficiently with state-of-the-art AI methods,” said Noah Giansiracusa, who is an assistant professor at Bentley University.

Yet there needs to be an organized effort to galvanize AI experts to action. The good news is that there are already groups emerging. For example, there is the C3.ai Digital Transformation Institute, which is a new consortium of research universities, C3.ai (a top AI company) and Microsoft. The organization will be focused on using AI to fight pandemics. 

There are even competitions being set up to spur innovation. One is Kaggle’s COVID-19 Open Research Dataset Challenge, which is a collaboration with the NIH and the White House. This will leverage Kaggle’s community of more than 4 million data scientists. The first contest was to help provide better forecasts of the spread of COVID-19 across the world.

Next, the Decentralized Artificial Intelligence Alliance, which is led by SingularityNET, is putting together an AI hackathon to fight the pandemic. The organization has more than 50 companies, labs and nonprofits.

And then there is MIT Solve, which is a marketplace for social impact innovation. It has established the Global Health Security & Pandemics Challenge. In fact, a member of this organization, Ada Health, has developed an AI-powered COVID-19 personalized screening test. 

Free AI Tools

AI tools and infrastructure services can be costly. This is especially the case for models that target complex areas like medical research.

But AI companies have stepped up—that is, by eliminating their fees:

  • NVIDIA is providing a free 90-day license for Parabricks, which allows for using AI for genomics purposes. Consider that the technology can significantly cut down the time for processing. The program also involves free support from Oracle Cloud Infrastructure and Core Scientific (a provider of NVIDIA DGX systems and NetApp cloud-connected storage).
  • DataRobot is offering its platform for no charge. This allows for the deployment, monitoring and management of AI models at scale. The technology is also provided to the Kaggle competition. 
  • Run:AI is offering its software for free to help with building virtualization layers for deep learning models. 
  • DarwinAI has collaborated with the University of Waterloo’s VIP Lab to develop COVID-Net, a convolutional neural network that detects COVID-19 using chest radiography. DarwinAI is also making this technology open source.

Patient Care

Patient care is an area where AI could be essential. An example is Biofourmis. In a two-week period, this startup created a remote monitoring system that pairs a biosensor worn on the patient’s arm with an AI application to help with diagnosis. Because the monitoring is remote, it can help reduce infection rates for doctors and medical support personnel. Keep in mind that, in China, about 29% of COVID-19 deaths were healthcare workers.

Another promising innovation to help patients is from Vital.  The founders are Aaron Patzer, who is the creator of Mint.com, and Justin Schrager, an ER doc. Their company uses AI and NLP (Natural Language Processing) to manage overloaded hospitals. 

Vital is now devoting all its resources to creating C19check.com. The app, which was built in a partnership with Emory Department of Emergency Medicine’s Health DesignED Center and the Emory Office of Critical Event Preparedness and Response, provides guidance to the public for self-triage before going to the hospital. So far, it’s been used by 400,000 people.

And here are some other interesting patient care innovations:

  • AliveCor: The company has launched KardiaMobile 6L, which uses AI to measure QTc (the heart-rate-corrected QT interval) in COVID-19 patients, helping to detect the risk of sudden cardiac arrest. The launch is based on the FDA’s recent guidance to allow wider availability of non-invasive remote monitoring devices during the pandemic.
  • CLEW: The company has launched TeleICU, which uses AI to identify respiratory deterioration in advance.

Drug Discovery

While drug discovery has made many advances over the years, the process can still be slow and onerous. But AI can help out.

For example, one startup using AI to accelerate drug development is Gero Pte. It has used the technology to better isolate compounds for COVID-19 by testing treatments that are already used in humans.

“Mapping the virus genome has seemed to happen very quickly since the outbreak,” said Vadim Tabakman, who is the Director of Technical Evangelism at Nintex. “Leveraging that information with Machine Learning to explore different scenarios and learn from those results could be a game changer in finding a set of drugs to fight this type of outbreak. Since the world is more connected than ever, having different researchers, hospitals and countries providing data into the datasets that get processed could also speed up the results tremendously.”

Robotic Process Automation (RPA): Is It Recession Proof?

Recession seems all but inevitable, as stocks have plunged to bear market levels. Yet there are certain industries that could be insulated. There will also likely be changes in consumer and business behavior that will be lasting.

No doubt, it seems like video conferencing and remote work will become increasingly mainstream. But there are other corners of the tech industry that could be poised for transformations. One is actually Robotic Process Automation (RPA). 

“RPA can help save an organization money by automating any repetitive task that a human does with keyboard and mouse, as well as tasks in legacy systems that can’t be accessed via APIs and Web Services,” said Vadim Tabakman, who is the Director of Technical Evangelism at Nintex. “RPA bots accelerate ‘low-hanging fruit’ processes in every business like opening email and attachments, filling in forms, reading from and writing to databases, making calculations, collecting social media statistics, and extracting data from documents, all very quickly.”

Such capabilities are likely to be essential as companies struggle. If anything, there may be a dramatic boost in adoption of RPA.

“The RPA industry is at an inflection point right now because economic uncertainty makes efficiency, accuracy, and—above all—maximizing human intellect critical to survival and growth,” said Kyle Kim-Hays, who is the CMO of Softomotive. “Specifically, RPA will move increasingly from the ‘back office’ and IT-focused tasks, to the ‘front office’ and attended use case scenarios where business end users directly invoke and monitor automated tasks. So, RPA will become democratized as more people incorporate it into their day-to-day activities.”

But RPA is not just about efficiency. It can help with improving customer experiences, which will be essential in retaining revenues.  “Current Automation Anywhere customers are already increasing investments to increase ROI and potentially hedge against a declining economy,” said Prince Kohli, who is the CTO of Automation Anywhere.

Pat Geary, the Chief Evangelist at Blue Prism, agrees. “As long as economic uncertainty runs high, and companies face even greater pressure to keep profits strong without incurring costs, RPA vendors will thrive. We provide the operational agility to do more with less through automation. When companies can automate manual processes at scale, they can redeploy workers and focus on more strategic business initiatives, ones that will help them weather the storm that may come.”

RPA The Right Way

As is the case with any technology, RPA is not a cure-all. It certainly has its disadvantages, like high license costs and difficulties with scaling. Successful implementations also require a strong focus on change management within the enterprise.

“We are hearing from our clients that RPA/digital workforce has some significant benefits for an economic downturn and many have little to do with the actual technology,” said Tim Kulp, who is the Vice President of Innovation & Strategy at Mind Over Machines. “The process to build a bot requires detailed process documentation that provides people a chance to examine ‘business as usual’ with a new set of eyes and eliminate waste in the process.”

Tom Thaler, who is the Senior Product Manager for ARIS at Software AG, believes that RPA runs the risk of delivering only short-term results. This is why he recommends a comprehensive strategy, which includes:

  • Capturing impact: “Companies that successfully apply RPA start with a top-down assessment of the automation candidates. They have a clear understanding of their ambitions and how to predict and capture the impact. The focus is no longer on simple back-office processes, but is increasingly shifting towards core competencies and customer relevance. Process mining tools make it easier to spot opportunities for automation based on the right relevance criteria, but also deliver the insights for a post assessment of the achievements.”
  • Managing complexity: “A holistic automation approach makes end-to-end use cases available for automation. Handling that requires a specialized team at the core–a center of excellence (CoE). These competence teams act as enablers for the business and manage the complexity of the robotic process landscape. Following a structured approach, they produce new insights that are leveraged for continuous process improvement initiatives. This is addressed by enterprise business process analysis (EBPA) tools.”

Survival Of The Fittest?

During recessions, CEOs generally favor technology from companies with strong balance sheets and broad-based solutions. And this will likely be the case with RPA. So yes, Microsoft should do quite well, especially since it has been forward-thinking with its own automation platform.

Then there are mid-size operators, like Appian, which should gain more traction. The company, which has been around for over 20 years, has a hefty cash balance, is publicly traded and has a full suite of automation applications.

“We are a one-stop shop for the modern workforce,” said Matt Calkins, who is the CEO of the company. “While RPA is a pretty good option for efficiency, it still does not handle exceptions well. This is why you need an integrated solution, such as with business process management, workflow, AI and case management.”

Keep in mind that this week the company announced a host of new features, including: 

  • Full-stack automation that orchestrates workflows for people, bots and AI.
  • Intelligent Document Processing (IDP), which is an out-of-the-box document understanding system powered by AI. 
  • Governance that provides more control over all enterprise automation technologies.
  • DevSecOps to perform testing, packaging and deployment of apps much more quickly.

In other words, RPA must go beyond just task automation. There must be a broader approach.

“The fourth industrial revolution (4IR) is disrupting every industry but it also represents a major opportunity to address these very needs—taking advantage of new technologies from data and analytics to RPA,” said Mohamed Kande, who is PwC’s Vice Chair, US and Global Advisory Leader. “4IR investments can help companies weather any downturn while also positioning them to emerge stronger. And business leaders agree. Based on a recent survey we conducted, 63% of business leaders believed that 4IR technologies will provide protection against an economic downturn.”

Microsoft Goes All-In On RPA (Robotic Process Automation)

Robotic Process Automation (RPA), which involves automating repetitive and tedious processes within organizations, is dominated by three pure-play software vendors: UiPath, Blue Prism and Automation Anywhere. These companies are some of the fastest growing in the tech industry and have raised substantial amounts of venture capital.

But the mega software companies want to get a piece of the RPA opportunity. And the one that is perhaps best positioned is Microsoft. 

This should be no surprise. The company has a massive roster of corporate customers, a strong global infrastructure and a vast ecosystem of partners and developers. It also helps that Microsoft has been aggressively bolstering its cloud business, which is now second only to Amazon.

The key to the strategy for RPA has been to leverage the Power Automate platform, which helps automate legacy systems. Features include the understanding of structured and unstructured data (say, for invoices) and integrations with more than 300 modern apps and services. There are also numerous AI capabilities.

Ok then, so what about RPA? Well, it was added last year. It’s called UI Flows, which has both attended and unattended automation. The application is also fairly easy to use, as it allows for the recording of workflows (keystrokes, mouse movements, data entry, etc.) and provides for low-code and no-code approaches. For example, Schlumberger has used the technology to drive efficiency with 13,000 bots, a majority of them built outside of IT.

“Everybody can be a developer,” said Charles Lamanna, who is the CVP of the Citizen Developer Platform at Microsoft. “It takes less than 30 seconds to sign up. You can then create a bot in a few minutes.”

However, might the accessibility of this technology lead to security issues? For instance, could an employee do something like put payroll information in Dropbox storage? 

Microsoft is certainly mindful of the risks and has created a system to enforce compliance. This is possible since the platform is cloud native. “You have complete visibility with every bot,” said Lamanna.

The Disruption

So how big is UI Flows in the RPA market? Well, it’s not clear. But in a blog post, Microsoft noted: “Power Automate already helps hundreds of thousands of organizations automate millions of processes every day.”

For example, Ingram Micro uses Power Automate across its organization to help with onboarding, account creation, management of credit lines, and other critical workflows. About 75% of the projects took less than 30 days to develop.

Yet I suspect we will see accelerated growth of UI Flows—and soon. A big part of this will certainly be the core technology. But I think the business model is also likely to be disruptive to the RPA industry.

Consider that it’s typical for a software vendor to charge on a per-bot basis, which could come to over $1,000 per month. This does not include the fees for orchestration and other modules.

But Microsoft is breaking this model with two pricing tiers. First, there is a $40-per-user monthly fee for running attended or unattended bots. Next, you can elect to pay $150 per month for each unattended bot.
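
Using the price points cited here, a quick hypothetical comparison in Python shows why the pricing is disruptive (illustrative arithmetic only; the mix of licenses in a real deployment will vary):

  # Cited monthly price points:
  incumbent_per_bot = 1_000   # typical per-bot fee from an incumbent vendor
  ms_per_user       = 40      # Microsoft: per-user fee for attended/unattended bots
  ms_per_unattended = 150     # Microsoft: fee for each unattended bot

  bots = 10  # hypothetical fleet of unattended bots, one user license

  incumbent_annual = incumbent_per_bot * bots * 12
  microsoft_annual = (ms_per_user + ms_per_unattended * bots) * 12

  print(f"Incumbent: ${incumbent_annual:,}/year")   # $120,000/year
  print(f"Microsoft: ${microsoft_annual:,}/year")   # $18,480/year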

In other words, this low-cost strategy should greatly expand adoption. It will also likely have a major impact across the RPA landscape. Cost has certainly been a major point of concern for customers, especially those that are looking to scale the automation.

“There are three trends on the horizon for RPA,” said Lamanna. “First, cloud is inevitable and cloud hosting will be the only environment that matters end-to-end. Second, if RPA wants to become mainstream, it has to be democratized. The reality is Windows didn’t become a big deal until it was on everybody’s desk. For RPA to be transformative, it has to be on everybody’s desk. And the need for RPA is real. Over 60% of all positions for information workers spend over 30% of their time doing rote, automatable tasks. The economic benefit for companies, and even more importantly the fulfillment at work for employees, is very, very large. We just have to make it possible and affordable. And third, automation is going to stretch beyond UI automation. True automation has elements like chatbots and forms that collect information and these will all start to mix together with digital process automation and robotic process automation. Customers want to solve an automation problem with one integrated solution.”

How AI Is Supercharging RPA (Robotic Process Automation)

Robotic Process Automation (RPA), which allows for the automation of repetitive worker tasks, has been one of the hottest categories in tech. The reason is actually simple: the ROI (Return on Investment) has generally been fairly high.

Yet there are some nagging issues. And perhaps the biggest is the scaling of the technology. For the most part, companies max out at 20 to 30 bots within the organization.

But AI (Artificial Intelligence) is likely to help out. To see how, consider one of the leaders in the space, Automation Anywhere. Keep in mind that the company has been investing in AI for many years and has launched several interesting applications, such as IQ Bot. It essentially helps to process huge amounts of unstructured data by using Computer Vision, Natural Language Processing (NLP), fuzzy logic and Machine Learning (ML). Even more impressive is that there is no need for AI experts to use it successfully. It’s an out-of-the-box solution. That said, you can still embed your own AI models in IQ Bot if you want.
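
To illustrate just one of those ingredients, here is a minimal Python sketch of the fuzzy-logic idea: matching garbled OCR labels against known invoice fields. This is a generic illustration, not IQ Bot's implementation:

  from difflib import SequenceMatcher

  # OCR output from a scanned invoice is rarely clean; labels come back garbled.
  ocr_lines = ["lnvoice Nurnber: 4471", "Tota1 Due: $1,280.50"]
  target_fields = ["invoice number", "total due"]

  def fuzzy_match(label, candidates, threshold=0.7):
      """Return the known field most similar to the OCR'd label, if any."""
      best = max(candidates,
                 key=lambda c: SequenceMatcher(None, label.lower(), c).ratio())
      score = SequenceMatcher(None, label.lower(), best).ratio()
      return best if score >= threshold else None

  for line in ocr_lines:
      label, _, value = line.partition(":")
      field = fuzzy_match(label, target_fields)
      if field:
          print(f"{field} -> {value.strip()}")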

Yet perhaps the most important AI system from Automation Anywhere is the Discovery Bot. It is currently in beta and will be rolled out in the coming months. 

The Discovery Bot cuts through some of the biggest challenges with RPA. For example, the system can identify processes to automate, which is done by recording and analyzing the keystrokes from workers’ desktops. 

Over time, the Discovery Bot will then find repeated processes that are good candidates for automation, along with other details like the estimated ROI. This makes it much easier for an RPA lead or CoE (Center of Excellence) to prioritize the efforts.
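
The underlying idea can be sketched simply: mine the recorded event stream for action sequences that repeat. Here is a toy version in Python (Automation Anywhere's actual system is, of course, far more sophisticated):

  from collections import Counter

  # A recorded stream of desktop actions (heavily simplified):
  events = ["open_email", "copy_field", "open_erp", "paste_field", "save",
            "open_email", "copy_field", "open_erp", "paste_field", "save",
            "open_browser", "open_email", "copy_field", "open_erp",
            "paste_field", "save"]

  def frequent_sequences(events, length=5, min_count=2):
      """Count every contiguous action sequence of a given length and return
      the ones repeated often enough to be automation candidates."""
      grams = Counter(tuple(events[i:i + length])
                      for i in range(len(events) - length + 1))
      return [(seq, n) for seq, n in grams.items() if n >= min_count]

  for seq, n in frequent_sequences(events):
      print(f"{n}x: {' -> '.join(seq)}")
  # -> 3x: open_email -> copy_field -> open_erp -> paste_field -> save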

So how is the Discovery Bot different from process mining? Well, that technology, which has gained much traction in Europe with companies like Celonis, is focused on analyzing log events from ERPs, CRMs and other systems of record.

“Our approach is about observing human behavior to better automate end-to-end business processes,” said Prince Kohli, who is the CTO and Head of Products at Automation Anywhere.

Note that process mining is more of a complementary technology for the Discovery Bot. After all, Automation Anywhere is a partner with various players in the market.

But the Discovery Bot can do something else: with just one click, it will create a bot! Because of this, the process automation journey can be accelerated by as much as 5X. This essentially means reducing much tedious work and allowing the RPA team to focus on more value-added tasks.

Interestingly enough, Automation Anywhere thinks that there is an Automation Law emerging, similar to Moore’s Law. It’s about how hours of work automated by software bots will double every 12 to 18 months. If so, then RPA and AI will certainly provide deep value for organizations.

The AI Factor

Of course, there continues to be intense hype around AI. The fact is that the technology is extremely complex, requiring highly technical talent. In the meantime, mega tech companies like Google, Amazon, Facebook and Microsoft are driving up the compensation levels.

This has made it tough for many companies to engage in true digital transformation. But sophisticated out-of-the-box solutions like the Discovery Bot should make a big difference.

“With over 3,000 customers, we are in a position to learn the interactions on many different platforms,” said Kohli. “This allows us to build strong out-of-the-box models.”

AIOps: What You Need To Know

AIOps, which is a term that was coined by Gartner in 2017, is increasingly becoming a critical part of next-generation IT. “In a nutshell, AIOps is applying cognitive computing like AI and Machine learning techniques to improve IT operations,” said Adnan Masood, who is the Chief Architect of AI & Machine Learning at UST Global. “This is not to be confused with the entirely different discipline of MLOps, which focuses on the Machine learning operationalization pipeline. AIOps refers to the spectrum of AI capabilities used to address IT operations challenges–for example, detecting outliers and anomalies in the operations data, identifying recurring issues, and applying self-identified solutions to proactively resolve the problem, such as by restarting the application pool, increasing storage or compute, or resetting the password for a locked-out user.”
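
To illustrate the first capability Masood mentions, here is a minimal Python sketch of flagging outliers in operations data using a rolling z-score (real AIOps platforms use far richer models, but the principle is the same):

  from statistics import mean, stdev

  # Hypothetical per-minute latency readings from an operations data stream.
  latency_ms = [102, 98, 105, 101, 99, 103, 97, 100, 240, 104]

  def zscore_outliers(series, window=8, threshold=3.0):
      """Flag points more than `threshold` standard deviations away from
      the mean of the trailing window - a simple AIOps-style detector."""
      flagged = []
      for i in range(window, len(series)):
          baseline = series[i - window:i]
          mu, sigma = mean(baseline), stdev(baseline)
          if sigma and abs(series[i] - mu) / sigma > threshold:
              flagged.append((i, series[i]))
      return flagged

  print(zscore_outliers(latency_ms))  # -> [(8, 240)]

In a production system, a flagged point like this would trigger the kind of self-healing action Masood describes, such as restarting an application pool or provisioning more compute.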

The fact is that IT departments are often stretched and starved for resources. Traditional tools have usually been rule-based and inflexible, which has made it difficult to deal with the flood of new technologies.

“IT teams have adopted microservices, cloud providers, NoSQL databases, and various other engineering and architectural approaches to help support the demands their businesses are putting on them,” said Shekhar Vemuri, who is the CTO of Clairvoyant. “But in this rich, heterogeneous, distributed, complex world, it can be a challenge to stay on top of vast amounts of machine-generated data from all these monitoring, alerting and runtime systems.  It can get extremely difficult to understand the interactions between various systems and the impact they are having on cost, SLAs, outages etc.”

So with AIOps, there is the potential for achieving scale and efficiencies.  Such benefits can certainly move the needle for a company, especially as IT has become much more strategic.

“From our perspective, AIOps equips IT organizations with the tools to innovate and remain competitive in their industries, effectively managing infrastructure and empowering insights across increasingly complex hybrid and multi-cloud environments,” said Ross Ackerman, who is the NetApp Director of Analytics and Transformation. “This is accomplished through continuous risk assessments, predictive alerts, and automated case opening to help prevent problems before they occur. At NetApp, we’re benefiting from a continuously growing data lake that was established over a decade ago. It was initially used for reactive actions, but with the introduction of more advanced AI and ML, it has evolved to offer predictive and prescriptive insights and guidance. Ultimately, our capabilities have allowed us to save customers over two million hours of lost productivity due to avoided downtime.”

As with any new approach, though, AIOps does require much preparation, commitment and monitoring. Let’s face it, technologies like AI can be complex and finicky. 

“The algorithms can take time to learn the environment, so organizations should seek out those AIOps solutions that also include auto-discovery and automated dependency mapping as these capabilities provide out-of-the-box benefits in terms of root-cause diagnosis, infrastructure visualization, and ensuring CMDBs are accurate and up-to-date,” said Vijay Kurkal, who is the CEO of Resolve. “These capabilities offer immediate value and instantaneous visibility into what’s happening under the hood, with machine learning and AI providing increasing richness and insights over time.”

As a result, there should be a clear-cut framework when it comes to AIOps. Here’s what Appen’s Chief AI Evangelist Alyssa Simpson Rochwerger recommends: 

  • Clear ability to measure product success (business value outcomes).
  • Ability to measure and report on associated performance metrics such as accuracy, throughput, confidence and outcomes.
  • Technical infrastructure to support—including but not limited to—model training, hosting, management, versioning and logging.
  • Dataset management, including traceability, data provenance and transparency.
  • Low-confidence/fallback data handling (this could be a data annotation or other human-in-the-loop process, or a default when the AI system can’t handle a task or has a low-confidence output).

All this requires a different mindset. It’s really about looking at things in terms of software application development. 

“Most enterprise businesses are struggling with a wall to production, and need to start realizing a return on their machine learning and AI investments,” said Santiago Giraldo, who is a Senior Product Marketing Manager at Cloudera. “The problem here is two-fold. One issue is related to technology: Businesses must have a complete platform that unifies everything from data management to data science to production. This includes robust functionalities for deploying, serving, monitoring, and governing models. The second issue is mindset: Organizations need to adopt a production mindset and approach machine learning and AI holistically in everything from data practices to how the business consumes and uses the resulting predictions.”

So yes, it is still early days for AIOps, and there will be lots of trial and error. But the approach is likely to be essential.

“While the transformative promise of AI has yet to materialize in many parts of the business, AIOps offers a proven, pragmatic path to improved service quality,” said Dave Wright, who is the Chief Innovation Officer at ServiceNow. “And since it requires little overhead, it’s a great pilot for other AI initiatives that have the potential to transform a business.”

Coronavirus: Can AI (Artificial Intelligence) Make A Difference?

The mysterious coronavirus is spreading at an alarming rate. There have been at least 305 deaths, and more than 14,300 people have been infected.

On Thursday, the World Health Organization (WHO) declared the coronavirus a global emergency. To put things into perspective, the outbreak has already exceeded the number infected during the 2002-2003 outbreak of SARS (Severe Acute Respiratory Syndrome) in China.

Many countries are working hard to quell the virus. There have been quarantines, lock-downs on major cities, limits on travel and accelerated research on vaccine development. 

However, could technologies like AI (Artificial Intelligence) help out? Well, interestingly enough, it already has.

Just look at BlueDot, which is a venture-backed startup. The company has built a sophisticated AI platform that processes billions of pieces of data, such as from the world’s air travel network, to identify outbreaks.

In the case of the coronavirus, BlueDot made its first alert on December 31st. This was ahead of the US Centers for Disease Control and Prevention, which made its own determination on January 6th.

BlueDot is the brainchild of Kamran Khan, who is an infectious disease physician and professor of Medicine and Public Health at the University of Toronto. Keep in mind that he was a frontline healthcare worker during the SARS outbreak.

“We are currently using natural language processing (NLP) and machine learning (ML) to process vast amounts of unstructured text data, currently in 65 languages, to track outbreaks of over 100 different diseases, every 15 minutes around the clock,” said Khan. “If we did this work manually, we would probably need over a hundred people to do it well. These data analytics enable health experts to focus their time and energy on how to respond to infectious disease risks, rather than spending their time and energy gathering and organizing information.”
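
As a toy illustration of the concept (scanning multilingual text for outbreak signals), consider the following Python sketch. BlueDot's production NLP and ML are vastly more sophisticated; this just shows the shape of the task:

  # Disease terms per language; a real system would use trained multilingual
  # models rather than a hand-built keyword list.
  DISEASE_TERMS = {
      "en": ["pneumonia", "outbreak", "coronavirus"],
      "es": ["neumonía", "brote", "coronavirus"],
      "fr": ["pneumonie", "épidémie", "coronavirus"],
  }

  headlines = [
      ("en", "Cluster of pneumonia cases of unknown cause reported in Wuhan"),
      ("es", "Detectan brote de neumonía en un mercado local"),
      ("en", "Local sports team wins championship"),
  ]

  for lang, text in headlines:
      hits = [term for term in DISEASE_TERMS.get(lang, []) if term in text.lower()]
      if hits:
          print(f"ALERT [{lang}]: {hits} in: {text!r}")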

But of course, BlueDot will probably not be the only organization to successfully leverage AI to help curb the coronavirus. In fact, here’s a look at what we might see:

Colleen Greene, the GM of Healthcare at DataRobot:

“AI could predict the number of potential new cases by area and which types of populations will be at risk the most. This type of technology could be used to warn travelers so that vulnerable populations can wear proper medical masks while traveling.”

Vahid Behzadan, the Assistant Professor of Computer Science at the University of New Haven:

“AI can help with the enhancement of optimization strategies. For instance, Dr. Marzieh Soltanolkottabi’s  research is on the use of machine learning to evaluate and optimize strategies for social distancing (quarantine) between communities, cities, and countries to control the spread of epidemics. Also, my research group is collaborating with Dr. Soltanolkottabi in developing methods for enhancement of vaccination strategies leveraging recent advances in AI, particularly in reinforcement learning techniques.”

Dr. Vincent Grasso, who is the IPsoft Global Practice Lead for Healthcare and Life Sciences:

“For example, when disease outbreaks occur, it is crucial to obtain clinical related information from patients and others involved such as physiological states before and after, logistical information concerning exposure sites, and other critical information. Deploying humans into these situations is costly and difficult, especially if there are multiple outbreaks or the outbreaks are located in countries lacking sufficient resources. Conversational computing working as an extension of humans attempting to get relevant information would be a welcome addition. Conversational computing is bidirectional—it can engage with a patient and gather information, or the reverse, provide information based upon plans that are either standardized or modified based on situational variations. In addition, engaging in a multilingual and multimodal manner further extends the conversational computing deliverable. In addition to this ‘front end’ benefit, the data that is being collected from multiple sources such as voice, text, medical devices, GPS, and many others, are beneficial as datapoints and can help us learn to combat a future outbreak more effectively.”

Steve Bennett, the Director of Global Government Practice at SAS and former Director of National Biosurveillance at the U.S. Department of Homeland Security:

“AI can help deal with the coronavirus in several ways. AI can predict hotspots around the world where the virus could make the jump from animals to humans (also called a zoonotic virus). This typically happens at exotic food markets without established health codes.  Once a known outbreak has been identified, health officials can use AI to predict how the virus is going to spread based on environmental conditions, access to healthcare, and the way it is transmitted. AI can also identify and find commonalities within localized outbreaks of the virus, or with micro-scale adverse health events that are out of the ordinary. The insights from these events can help answer many of the unknowns about the nature of the virus.

“Now, when it comes to finding a cure for coronavirus, creating antivirals and vaccines is a trial and error process. However, the medical community has successfully cultivated a number of vaccines for similar viruses in the past, so using AI to look at patterns from similar viruses and detect the attributes to look for in building a new vaccine gives doctors a higher probability of getting lucky than if they were to start building one from scratch.”

Don Woodlock, the VP of HealthShare at InterSystems:

“With ML approaches, we can read the tens of billions of data points and clinical documents in medical records and establish the connections to patients that do or do not have the virus. The ‘features’ of the patients that contract the disease pop out of the modeling process, which can then help us target patients that are higher risk.

“Similarly, ML approaches can automatically build a model or relationship between treatments documented in medical records and the eventual patient outcomes. These models can quickly identify treatment choices that are correlated to better outcomes and help guide the process of developing clinical guidelines.”

Prasad Kothari, who is the VP Data Science and AI for The Smart Cube:

“The coronavirus can cause severe symptoms such as pneumonia, severe acute respiratory syndrome, kidney failure etc. AI-empowered algorithms such as genome-based neural networks already built for personalized treatment can prove very helpful in managing these adverse events or symptoms caused by coronavirus, especially when the effect of the virus depends on the immunity and genome structure of the individual and no standard treatment can treat all symptoms and effects in the same way.

“In recent times, immunotherapy and gene therapy empowered through AI algorithms such as Boltzmann machines (entropy-based combinatorial neural networks) have shown stronger evidence of treating such diseases by stimulating the body’s immune system. For this reason, AbbVie’s Aluvia HIV drug is one possible treatment. If you look at data of affected patients and profile the virus mechanics and cellular mechanisms affected by the coronavirus, there are some similarities in the biological pathways and treatment efficacy. But this is yet to be tested.”

CES: The Coolest AI (Artificial Intelligence) Announcements

As seen at this week’s CES 2020 mega conference, the buzz for AI continues to be intense. Here are just a few comments from the attendees:

  • Nichole Jordan, who is Grant Thornton’s Central region managing partner: “From AI-powered agriculture equipment to emotion-sensing technology, walking the exhibit floors at CES drives home the fact that artificial intelligence is no longer a vision of the future. It is here today and is clearly going to be more integrated into our world going forward.”
  • Derek Kennedy, the Senior Partner and Global Technology Leader at Boston Consulting Group: “AI is increasingly playing a role in every intelligent product, such as upscaling video signals for an 8K TV as well as every business process, like predicting consumer demand for a new product.”
  • Houman Haghighi, the Business Development Partner at Menlo Ventures: “Voice, natural language and predictive actions are continuing to become the new—and sometimes the only—user interface within the home, automobile, and workplace.”

So what were some of the standout announcements at CES? Well, given that there were over 4,500 exhibitors, this is a tough question to answer. But here are some innovations that certainly do show the power of AI:

Prosthetics: Using AI along with EMG technology, BrainCo has built a prosthetic arm that learns. In fact, it can allow people to play the piano or even do calligraphy.

“This is an electronic device that allows you to control the movements of an artificial arm with the power of thought alone,” said Max Babych, who is the CEO of SpdLoad.

The cost for the prosthetic is quite affordable at about $10,000, compared to about $100,000 for alternatives.

SelfieType: One of the nagging frictions of smartphones is the keyboard. But Samsung has a solution: SelfieType. It leverages cameras and AI to create a virtual keyboard on a surface (such as a table) that learns from hand movements. 

“This was my favorite and simplest AI use case at CES,” said R. Mordecai, who is the Head of Innovation and Partnerships at INNOCEAN USA. “I wish I had it for the flight home so I could type this on the plane tray.”

Lululab’s Lumine: This is a smart mirror for skin care. Lumine uses deep learning to analyze six categories (wrinkles, pigment, redness, pores, sebum and trouble spots) and then recommends products to help.

Whisk: This is powered by AI to scan the contents of your fridge so as to think up creative dishes to cook (it is based on research from over 100 nutritionists, food scientists, engineers and retailers). Not only does this technology allow for a much better experience, but it should also help reduce food waste. Keep in mind that the average person throws away 238 pounds of food every year.

Wiser: Developed by Schneider Electric, this is a small device that you install in your home’s circuit breaker box. With the use of machine learning, you can get real-time monitoring of usage by appliance, which can lead to money savings and optimization for a solar system.

Vital Signs Monitoring: The Binah.ai app analyzes a person’s face to get medical-grade insights, such as oxygen saturation, respiration rate, heart rate variability and mental stress. The company also plans to add monitoring for hemoglobin levels and blood pressure.

Neon: This is a virtual assistant that looks like a real person and can engage in intelligent conversation and show emotion. While still in the early stages, the technology is actually kind of scary. The creator of Neon, Samsung-backed Star Labs, thinks that it will replace doctors, lawyers and other white collar professionals. No doubt, this appears to be a recipe for wide-scale unemployment, not to mention a way to unleash a torrent of deepfakes!