Please tell us about your current role and the team/technology you handle at Salesforce.
I lead the Conversation Design practice within Salesforce Experience. My academic background is in Linguistics and my primary research focus was in Interactional Sociolinguistics, or the study of how people use language in everyday interaction. This is the lens through which I drive our strategy and vision for Conversation Design at Salesforce. Salesforce has several product offerings that facilitate and enable conversational experiences for our customers, the flagship being Einstein Bots and the Einstein Bot Builder.
What makes this role unique compared to my experience in Conversation Design and Research at other companies is that Salesforce, first and foremost, offers our customers a tool with which they can build bots themselves.
That said, our customers who are new to conversational experiences seek guidance and best practices for how to optimize the conversational experience for their users.
One of the most frequent questions customers ask me is, “What should my bot say to the customer?” That question is often followed quickly by, “How should my bot say it to the customer?” That is where Conversation Design and Linguistics play a huge role in setting direction. I deliver these resources to our customers by working with our Product, Engineering, Design, and Data Science teams to craft the setup experience of Einstein Bot Builder such that it is optimized for adhering to users’ expectations of conversational behavior.
Additionally, I publish best practices for conversation design through Salesforce outlets such as Help & Documentation, Trailhead, the Salesforce.com Blog, and our Admin Podcast. I also evangelize our best practices and vision for the future of Conversation Design at scale at industry conferences and in academic publications.
How have chatbot technologies evolved in the last 2 years around Automation, CX Management and Personalization?
I may be a little biased here, but I honestly think that the greatest innovation in this space has been around Conversation Design. About three years ago, we had few resources in terms of cross-platform and cross-device Conversation Design prototyping (in other words, the conversational equivalent of InVision for graphical user interfaces).
Now, tools like Botsociety and Botmock have exploded onto the scene and helped Conversation Designers develop quick prototypes across channels in a lightweight fashion. The Einstein Bots team has invested heavily in the dialog-building functionality of the Bot Builder, and the Template feature we recently announced in our Winter Release Notes goes a long way in helping our customers get up and running quickly without starting completely from scratch. This is in line with a broader trend I’m seeing around pre-built bots, templates, and components across the chatbot industry that help scale Conversation Design efforts.
What is the difference between Conversational AI tools and Chatbots? How does Salesforce AI see these two digital engineering marvels from its current position?
I think of Conversational AI and Chatbots as a Venn diagram.
Conversational AI encompasses a broad scope of technology that can produce, parse, and analyze conversational language across channels. Some organizations implement chatbots that are powered by Conversational AI (e.g., intent-enabled bots, NLP-powered bots, bots configured with NLG); they operate in the center of the Venn diagram. But chatbots don’t have to be powered by Conversational AI in order to be effective; if the use case is simple enough, they can also be rule-based and menu-driven, without any AI capabilities, and still fulfill the user’s needs.
As a Conversation Designer, I’m always thinking about which solution is the best fit for a given user need. Just because we have the Conversational AI technology doesn’t mean it’s the only, let alone ideal, solution for the context at hand. For example, in research, we found that users seek guardrails from the chatbot to guide the experience; a blank canvas to type into can leave users anxious and unsure of what to input to get the chatbot to react the way they desire. Providing the user a menu can help guide them through the experience. In that case, the Conversational AI technology itself can take a backseat in powering the interaction: if the user has a request outside the scope of what the chatbot presents in the first menu and types a freeform utterance into the composer to convey their intent, a well-trained intent model paired with Named Entity Recognition (NER) technology can serve as a failsafe. Giving users examples can go a long way in setting their expectations for the interaction at hand.
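To make the pattern concrete, here is a minimal sketch of the menu-first approach with a freeform fallback. The menu keys, intents, and keyword-based `classify_intent` stub are all hypothetical illustrations, not Salesforce's implementation; a production bot would call a trained intent model here.

```python
# Hypothetical sketch: a menu guides the user, and an intent model
# only steps in when the typed utterance falls outside the menu.

MENU = {
    "1": "Reset my password",
    "2": "Check order status",
    "3": "Talk to an agent",
}

def classify_intent(utterance: str) -> str:
    """Stand-in for a trained intent model (assumption: keyword match)."""
    keywords = {"refund": "Start a return", "return": "Start a return"}
    for word, intent in keywords.items():
        if word in utterance.lower():
            return intent
    # Failsafe when no intent is recognized: escalate rather than guess.
    return "Escalate to a human"

def route(user_input: str) -> str:
    # Prefer the guardrails: exact menu selections are unambiguous.
    choice = user_input.strip()
    if choice in MENU:
        return MENU[choice]
    # Freeform text falls back to the intent model.
    return classify_intent(user_input)
```

The design choice mirrors the point above: the menu handles the common, in-scope requests deterministically, while the intent model acts as a safety net for everything else.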
What are the key features of a human-like chatbot? What unique experiences do you focus on while building a chatbot for customers?
I think it’s less about making a chatbot human-like and more about designing the conversation purposefully, based on research, such that it adheres to users’ expectations for conversational behavior. Conversation is fundamentally a human behavior, but I think that it’s a jump in logic to assume by that token alone that chatbot designers should, therefore, aspire to build chatbots that are human-like. Chatbots are machines and should be framed as such. At Salesforce, our number one core value is Trust, and as a result, we’ve included in our Acceptable Use Policy for Einstein Bots that our customers must disclose to their users in the first turn of the chat that they are indeed talking with a chatbot and not a human.
To pass off the bot as human when it is certainly not is a violation of customer trust. That said, I think there are some key linguistic patterns that Conversation Designers can leverage to design conversations that adhere closely to users’ expectations for conversational behavior. One example would be pausing and pacing in the conversation.
Pauses are vital to maintaining conversation. In vocal interactions, they provide space for speakers to breathe (because that’s really important!), but they also leave listeners time to process what is being said and convey understanding back to the speaker. The same goes for text-based chat conversations. If only one person is taking turns in the conversation, then it’s more of a monologue; conversation is fundamentally a collaborative achievement between more than one individual. Luckily, the graphical UI of the chat window already lends itself to displaying turn-taking, with messages emerging from the left and right sides of the frame. But in order for the interaction to truly “feel” conversational, the turns must flow at a natural pace. I posit that one of the leading patterns that makes a chat feel “artificial” to users is when a chatbot takes little to no pause before responding.
Doing so crowds the chat UI such that the user must scroll to read and catch up to the bot. In order to avoid this, we included the Bot Response Delay feature in the Einstein Bot Builder (which, as a conversation nerd, is probably my favorite feature in the product). It allows the system admin to customize the length of the pause a chatbot takes before responding to the user in the chat. This isn’t to pass off the chatbot as human—rather, it’s to adhere to user expectations of pacing and flow of conversation.
I found in my research of chat agents in service centers in Manila that the fastest response they produced in chat with customers took four seconds.
By that standard, if we want to leave the user time to read but also be efficient, then we’d want the Bot Response Delay to fall somewhere between one and three seconds, which, incidentally, is exactly the setting boundary of the feature in the product. All of this might seem like a lot of thought about a minute aspect of the experience, but the implications of pause length in conversation are enormous: they can mean the difference between a high and a low CSAT score, depending on whether the user felt the pace was natural or artificial and rushed. These are the types of experiences I work with my stakeholders to unpack and account for in the tools we provide our customers. I want to give them enough guidance to build the best experiences for their users.
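As an illustration of the one-to-three-second pacing discussed above, here is a hedged sketch of how a response delay might be computed. The `seconds_per_word` reading-rate constant and the function itself are assumptions for demonstration, not the actual Einstein Bot Builder logic (which simply exposes the delay as an admin setting).

```python
# Hypothetical sketch: scale a bot's response delay with message length
# (to leave the user reading time), clamped to a 1-3 second window.

def response_delay_seconds(message: str,
                           seconds_per_word: float = 0.2,
                           minimum: float = 1.0,
                           maximum: float = 3.0) -> float:
    """Estimate a conversational pause before sending `message`."""
    # Rough reading-time estimate: word count times an assumed rate.
    estimated_reading_time = len(message.split()) * seconds_per_word
    # Clamp so the pause never feels instant or sluggish.
    return max(minimum, min(maximum, estimated_reading_time))
```

A short greeting would get the one-second floor, while a long policy explanation would cap out at three seconds, keeping the turn-taking rhythm within the range users perceive as natural.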
How is a chatbot for employees different from that used to engage with customers? Any specific user case scenarios that you want to list out?
From a conversational perspective, I don’t see a difference between the two.
I would like to think that companies would extend the same courtesies in conversation to their employees as they do to their customers.
Customers and employees alike may need to log a case, reset a password, look up a policy, or return an item; it’s just that the entities may differ for each group. That said, from an ethical standpoint, as a Conversation Designer, I would still want to make sure that I am enabling users to be understood by the system regardless of their dialect or language variety, get the information they need, escalate to a human when needed, and fulfill their intent, whether they’re a customer or an employee. I would also want to put boundaries in place to keep the company from collecting sensitive information from users that it should not have. For example, given how the pandemic has impacted society at large, the likelihood that health-related discourse finds its way into chat is higher than it was pre-COVID.
For example, rather than asking a user an open-ended question about their issue, I might list COVID-related issues in a menu with something like, “Learn more about our COVID-19 policies” to avoid collecting any potentially sensitive or revealing health information.
As Head of Conversation Design technology at Salesforce, how do you tackle the challenges arising from current uncertainties during the pandemic? What is your message for all AI Product Engineering teams working remotely?
The pandemic presented challenges to everyone, including our customers as well as our own team and internal partners. Generally, though, we want to focus our efforts in the right places. We need to truly understand the needs of our customers, be proactive in anticipating future issues that might arise, and plan our technical roadmaps accordingly.
The product, engineering, and AI research teams must collaborate closely in shaping product priorities so that we don’t get stuck in reactive firefighting mode, nor chase after the first shiny new object we see. It comes down to saying “yes” to the truly important initiatives and “no” to the great-but-not-great-enough ones.
For our own team, the key is to be mindful that the challenges facing each team member are unique, and to approach each decision and interaction with empathy and care. For example, some team members may live alone, so virtual social activities might be a welcome change of pace; others might be working while caring for three school-age children, so personal alone time is precious. Well-intentioned efforts might not always yield the best outcome if attention to individual conditions and needs is missing. For AI Product Engineering teams: please take good care of yourselves and each other, first and foremost. Then stay hungry for knowledge, both of AI and about customers; this community holds the solution to many of today’s challenges.
What are your thoughts on Enterprise AI versus Open Source AI ML Projects building chatbots and virtual assistants — any specific advice for all young AI developers?
Salesforce Research has been an active contributor to the academic research literature as well as the open source communities, from which we have also benefited greatly. In particular, we actively share our research work in NLP and Conversational AI with the community, by sharing code and releasing large-scale pre-trained models (e.g., CTRL), in the hope that it can be of help for other AI researchers and developers, hence advancing the field.
We also base our work on existing research and open source projects to create newer and better AIs and products.
The recent advancement in AI research and productization is in part due to the culture of transparency and reproducibility: it’s not good enough to claim state-of-the-art results; we need to make it easy for the community to reproduce and build on top of our work. Last but not least, participating in open source communities is consistent with our value of Trust. Without trust, AI would not be able to truly make a positive impact and realize its potential to improve the human condition. For aspiring and practicing AI developers, I would say: master your craft by participating in open source, benefit from it, and give back.
Thank you, Greg, for answering all our questions!
Greg Bennett is Conversation Design Principal at Salesforce, leading the company’s first dedicated department for Conversation Design since its inception. As a linguist, Greg focuses his work on empowering businesses to create chatbots that feel natural and helpful, build user trust, and meet customer expectations for conversational behavior.
Greg works with Salesforce’s Product teams, customers, and partners to tailor their conversation designs for cross-cultural differences across channels and user populations, and to effectively express personality and conversational style.
Salesforce.com, Inc. is an American cloud-based software company headquartered in San Francisco, California. It provides customer relationship management services and also sells a complementary suite of enterprise applications focused on customer service, marketing automation, analytics, and application development.