Presented by Cloudera
“I’m extremely excited about the future of the intersection between conversational AI and the multitude of platforms that are being developed around these capabilities,” said Linden Hillebrand, VP of Global Customer Success and Support at Cloudera, during his opening remarks at the Transform 2020 Conversational AI Summit.
Over the course of the day tech giants from Adobe and Capital One to Google, Amazon, and Twitter spoke about how they’re using conversational AI to solve problems for their businesses in new and innovative ways.
The technology is being leveraged for both text chatbots and the NLP-powered voice assistants that are increasingly able to understand intent and offer a seamless, personalized user experience, helping automate the majority of customer interactions. But in most sessions, panelists emphasized that implementing these AI technologies also means tackling some of the bigger picture issues, including fairness, explainability, and elimination of bias.
Here’s a look at some of the top panels of the day, featuring leaders from Capital One, Google Assistant, and more.
Zero to helpful in 2.2 seconds: A platform approach to business-specific AI capabilities
Data company Cloudera had a head start in developing a conversational AI platform: the vast data sets they had stored from past customer issues and solutions. To feed their new chatbot, they were able to extract the semantic context of both the conversations between the customer and the support person as well as the specifics of the problem, giving them a running start.
To ensure they were using relevant data, their subject matter experts manually labeled and classified the millions-strong data set over the course of two weeks, enabling the team to hit its goal of 90% accuracy. They’ve also boosted their knowledge results by 300% and, in some cases, cut time-to-resolution for customers by over 90%, said Hillebrand.
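That labeling-then-classifying workflow can be illustrated with a minimal sketch. The ticket texts, labels, and the tiny Naive Bayes classifier below are invented stand-ins for illustration; Cloudera's actual models and data are not described in detail here:

```python
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

class TicketClassifier:
    """Toy multinomial Naive Bayes over SME-labeled support tickets."""

    def __init__(self):
        self.label_counts = Counter()            # tickets seen per label
        self.word_counts = defaultdict(Counter)  # word frequencies per label
        self.vocab = set()

    def train(self, labeled_tickets):
        for text, label in labeled_tickets:
            self.label_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, count in self.label_counts.items():
            score = math.log(count / total)  # class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in tokenize(text):
                # Laplace smoothing so unseen words don't zero out the score
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical SME-labeled tickets
clf = TicketClassifier()
clf.train([
    ("cluster node will not start after upgrade", "cluster-ops"),
    ("node crashed and will not start", "cluster-ops"),
    ("kerberos ticket expired cannot authenticate", "security"),
    ("user cannot authenticate with kerberos", "security"),
])
```

With enough labeled examples, routing a new ticket becomes a single `clf.predict(...)` call; at Cloudera's scale the same idea would be carried by far richer semantic models, but the labeling effort the SMEs put in plays the same role as the training lists above.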
They went from inception to chatbot in under a month because they hit the ground running with pertinent models they could press into service, as well as an unsiloed, modern data architecture. The company will continue to update and refine the chatbot as data comes in from real customer interactions, improving quality even further.
They recommend that companies looking to develop their own chatbot platform rely on subject matter experts to ensure the accuracy and relevancy of data, focus on a specific problem or set of problems to solve, and start with a data architecture that grants agility. It’s also important to start simple: you can provide useful results to customers very quickly when you focus more on accuracy than on bells and whistles.
How Adobe’s 22K+ employees are leveraging conversational AI for the shift to work-from-home
In the wake of the COVID-19 pandemic, much like other companies, Adobe had to pivot from gathering in offices to working from home in the span of a weekend, said Cynthia Stoddard, CIO and Senior Vice President at Adobe. That meant a major uptick in internal IT requests as employees got settled in their new home offices.
In order to keep their IT department from getting overwhelmed, the company deployed a chatbot powered by AI and machine learning to answer employee questions. With natural language processing helping to interpret employee requests, the bot was able to provide answers or link to relevant knowledge base articles. With email and chatbots rather than phone calls to IT, the company improved its average response time from 10 hours to 1 hour, a 90% improvement that has also significantly boosted employee productivity.
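The request-to-article matching described above can be approximated with a simple bag-of-words similarity search. The knowledge-base titles and bodies below are hypothetical, and Adobe's actual NLP stack is not specified in the talk; this is only a sketch of the retrieval idea:

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_article(request, kb):
    """Return the knowledge-base title whose body best matches the request."""
    query = vectorize(request)
    return max(kb, key=lambda title: cosine(query, vectorize(kb[title])))

# Hypothetical knowledge-base entries
kb = {
    "Reset your VPN password": "how to reset your vpn password from home",
    "Set up an external monitor": "connect an external monitor to your laptop",
}
```

A real deployment would use trained language models rather than raw word overlap, but the shape is the same: interpret the employee's free-text request, score it against the knowledge base, and surface the closest article.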
In addition to triaging questions, Adobe is also using AI/ML bots to “eliminate toil,” Stoddard explained, improving overall business efficiency. An AI-based catalog ordering system for hardware eliminated around 76% of the work of creating purchase orders, Stoddard said, while a contract creation system eliminated about 82% of the prior workload.
The chatbots will reduce or eliminate queues for IT services, but they won’t replace IT departments: with self-service, employees can resolve routine issues on their own, freeing IT staff to focus on higher-value work.
The power of personalized proactivity in AI assistants
About 15% of the time, Capital One discovered, customers weren’t responding yes or no to their SMS fraud alerts; they were using full sentences, offering explanations, and even using emojis, said Ken Dodelin, VP of Conversational AI Products at Capital One. That was the impetus for the company’s initial exploration of natural language processing.
Their current conversational AI product, the ungendered ENO, is an assistant designed to make money management easier for their customers. Via SMS or push notifications, ENO will let you know you were charged twice for a purchase, check that your generous tip was deliberate rather than a decimal-point slip, or warn you that your free trial is about to expire. And ENO now understands 99% of customer replies, Dodelin said.
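Understanding those free-form replies starts with mapping arbitrary text, full sentences, and emoji onto a small set of intents. The keyword and emoji lists below are an invented toy, not Capital One's model, which relies on trained NLP rather than keyword matching:

```python
import re

# Hypothetical reply vocabularies for a fraud alert
CONFIRM_WORDS = {"yes", "y", "yeah", "yep", "sure", "correct", "👍", "✅"}
DENY_WORDS = {"no", "n", "nope", "nah", "wrong", "fraud", "👎"}

def classify_reply(reply):
    """Map a free-text fraud-alert reply to 'confirm', 'deny', or 'unclear'."""
    # Pull out word tokens plus the emoji we know about
    tokens = re.findall(r"\w+|[👍👎✅]", reply.lower())
    has_confirm = any(t in CONFIRM_WORDS for t in tokens)
    has_deny = any(t in DENY_WORDS for t in tokens)
    if has_confirm and not has_deny:
        return "confirm"
    if has_deny and not has_confirm:
        return "deny"
    # Ambiguous or conflicting replies get escalated to a richer model or a human
    return "unclear"
```

Rule lists like these plateau quickly; getting to the 99% understanding Dodelin cites requires trained models and, as he notes, a lot of customer context and real-time data.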
ENO was built as a cloud-native application, which has lowered costs, allowed Capital One to run its infrastructure more efficiently, and provided the compute power needed to extract and analyze data most effectively.
There’s been a lot of trial and error and learning in the process of building ENO, Dodelin said. Part of what has made ENO so successful is the data and customer context the company already had on hand to teach the bot. As more real-time data becomes available, they continue to refine their models, with the goal of getting ENO closer to a one-to-one personalized experience.
Can we trust a machine to be fair? Addressing the challenge of bias in conversational AI algorithms
As a leader in artificial intelligence, Google has a responsibility to address machine learning bias, which has produced skewed results around race, sex, and gender across many areas, including conversational AI, said Barak Turovsky, director of product for Google AI.
In particular, the results from Translate have a big global impact. Roughly 50% of the content on the internet is in English, but only 20% of the world’s population speaks English. Google translates 140 billion words every single day, and 95% of its users are outside the U.S.
Two of the major problems with the engine: it did not know how to appropriately translate gender into gendered languages, and some heavily gender-biased source material, including the Bible, was used in training. As a result, the algorithm will return, for example, a default translation in English that says, “He’s a doctor, she’s a nurse.”
There are only imperfect ways to address this currently; Google has opted to provide multiple responses and let users choose the best one. If someone wants to translate the word nurse from English to Spanish, the engine will return both “enfermera” and “enfermero.”
This sounds simple, he said, but required the team to build three new machine learning models. The models detect gender-neutral queries, generate gender-specific translations, and then check for accuracy. The first model trained algorithms on which words could potentially express gender and which ones would not; the second required training data to be tagged as male or female; and the third model then filters out suggestions that could potentially change the underlying meaning.
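The three-stage pipeline Turovsky describes can be sketched end to end. The word lists, the tiny lexicon, and the stubbed accuracy check below are invented placeholders standing in for what are, at Google, three large trained models:

```python
# Stage 1 data: English words that are gender-neutral but gendered in Spanish
GENDER_NEUTRAL_EN = {"nurse", "doctor"}

# Stage 2 data: hypothetical EN->ES lexicon with one candidate per gender
LEXICON = {
    "nurse": {"f": "enfermera", "m": "enfermero"},
    "doctor": {"f": "doctora", "m": "doctor"},
}

def detect_neutral(word):
    """Stage 1: does this query need gender-specific translations?"""
    return word in GENDER_NEUTRAL_EN

def generate_candidates(word):
    """Stage 2: produce a feminine and a masculine translation."""
    return LEXICON.get(word, {})

def preserves_meaning(word, candidate):
    """Stage 3: filter out candidates that would change the underlying
    meaning (stubbed here as simple lexicon membership)."""
    return candidate in LEXICON.get(word, {}).values()

def translate(word):
    if not detect_neutral(word):
        return [word]  # toy fallback: pass non-ambiguous queries through
    candidates = generate_candidates(word).values()
    return sorted(c for c in candidates if preserves_meaning(word, c))
```

Presenting both surviving candidates, rather than silently picking one, is exactly the user-facing behavior described above for “nurse”: the engine returns both “enfermera” and “enfermero” and lets the user choose.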
Results aren’t perfect, but they’ve already resulted in a vast improvement on the original. Google continues to fine-tune all three models and how they interact with each other to continue to improve results.
Check out all the sessions from the Conversational AI Summit here. Learn more from industry-leading practitioners about conversational AI technology, the ways they’ve unlocked ROI from it, and their thoughts on what the future holds.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]