LaMDA: Google’s language model is elevating human-to-machine conversations

Image credit: iStock

Language Model for Dialogue Applications (LaMDA) might enable artificial intelligence to sound more humanlike.
    • Author: Quantumrun Foresight
    • January 3, 2023

    Google’s LaMDA aims to simulate human conversations that are organic and meaningful. To achieve this, the company’s engineers trained the model to synthesize information from open-ended dialogue rather than follow rigid, pre-scripted rules, allowing it to grasp context more easily and respond accordingly.

    LaMDA context

    Human speech’s unpredictable and often unstructured nature presents a real challenge to chatbots and virtual assistants. Because traditional language models rely on pre-programmed information to engage in human conversations, they tend to hit sudden dead ends when a dialogue drifts beyond their training data, losing track of human reasoning and intent. Google is trying to change this unnatural progression through LaMDA. The language model is built on Transformer, Google Research’s open-sourced neural network architecture. A Transformer-based model can be trained to read many words at once (a sentence or a paragraph, for example), attend to how those words relate to one another, and then predict which words are likely to follow.
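    To make that mechanism concrete, the sketch below implements scaled dot-product self-attention, the core Transformer operation, in plain NumPy. This is a minimal illustration, not Google’s code: the toy embeddings are random stand-ins for learned parameters, and a full Transformer derives separate query, key, and value projections rather than reusing the raw inputs as done here.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Each word vector in X attends to every word in the sequence,
    weighting them by similarity, so each output position mixes in
    context from the terms it is most related to."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # pairwise relatedness of words
    weights = softmax(scores, axis=-1)  # how much each word attends to the others
    return weights @ X                  # context-aware representations

# Toy example: a "sentence" of 4 words with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
contextual = self_attention(X)
print(contextual.shape)  # (4, 8)

# A trained model would project the final contextual vector through an
# output layer and a softmax over the vocabulary to score the next word.
```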

    During Google’s 2022 annual developer conference (I/O), CEO Sundar Pichai demonstrated the enhanced capabilities of LaMDA 2.0. The company also released AI Test Kitchen, an app in which three demos showcase LaMDA’s capabilities. 

    The first demo, Imagine It, asks LaMDA to describe or “imagine” different kinds of scenarios. For example, a user can ask the model to describe the sights, sounds, and feel of being in the Mariana Trench. 
    The second, Talk About It, has LaMDA hold a conversation anchored to one main topic. No matter how often the user introduces off-topic ideas, the model steers the conversation back to the original subject (a pattern sketched in the code below). 
    Finally, List It has LaMDA break one primary goal into relevant sub-tasks. For example, a user can ask for tips on building a vegetable garden, and the model suggests mini-tasks to start with, such as listing the vegetables to plant and finding where to buy the best seeds. 
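    As a rough sketch of how a Talk About It-style constraint could be expressed, the snippet below injects a fixed topic into every prompt so the model keeps steering back to it. Everything here is hypothetical, including the `generate` callback, which stands in for any text-generation backend; Google has not published how LaMDA implements topic grounding.

```python
# Hypothetical "Talk About It" pattern: keep a dialogue anchored to one topic.

TOPIC = "dogs"

def build_prompt(topic: str, history: list[str], user_turn: str) -> str:
    """Prepend a topic instruction to the running transcript."""
    lines = [
        f"Hold a conversation that stays on the topic of {topic}.",
        "If the user drifts off-topic, steer the conversation back.",
        *history,
        f"User: {user_turn}",
        "Assistant:",
    ]
    return "\n".join(lines)

def respond(history: list[str], user_turn: str, generate) -> str:
    reply = generate(build_prompt(TOPIC, history, user_turn))
    history += [f"User: {user_turn}", f"Assistant: {reply}"]
    return reply

if __name__ == "__main__":
    # Stand-in backend so the sketch runs without a real model.
    def fake_generate(prompt: str) -> str:
        return "Speaking of dogs: have you ever wondered whether they dream?"

    history: list[str] = []
    print(respond(history, "Let's talk about cars instead.", fake_generate))
```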

    Disruptive impact

    According to Google’s article on LaMDA, the tool was designed to adhere to the company’s AI principles. Although language is an incredible tool, it can also be abused, and models that learn from language can perpetuate that abuse by absorbing and repeating biases, hateful speech, or misinformation. Even a model trained only on accurate data can be tweaked for unethical purposes. Google’s solution is to build open-source resources that invite other researchers to analyze LaMDA’s training data. 

    The tool’s increasing levels of sensibleness, specificity, and interestingness (SSI, evaluated by human raters) create more useful avenues for virtual assistants and chatbots. Instead of simply obeying orders, these bots may soon hold open-ended conversations, suggest alternative solutions, ask for clarification, and be engaging conversationalists overall. 
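    As a rough illustration of how such a metric is aggregated, the sketch below averages binary human-rater labels into per-dimension SSI scores. The `Rating` fields and the 0/1 labeling scheme are assumptions for illustration; Google’s actual rating pipeline is more involved.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    sensible: int     # 1 if the response makes sense in context, else 0
    specific: int     # 1 if it is specific to the prompt, not generic
    interesting: int  # 1 if raters found it insightful, witty, or unexpected

def ssi_scores(ratings: list[Rating]) -> dict[str, float]:
    """Average binary rater labels into a score per SSI dimension."""
    n = len(ratings)
    return {
        "sensibleness": sum(r.sensible for r in ratings) / n,
        "specificity": sum(r.specific for r in ratings) / n,
        "interestingness": sum(r.interesting for r in ratings) / n,
    }

ratings = [Rating(1, 1, 0), Rating(1, 0, 0), Rating(1, 1, 1)]
print(ssi_scores(ratings))
# sensibleness 1.0, specificity ~0.67, interestingness ~0.33
```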

    These characteristics make such bots better suited for client-facing conversations. Virtual tour guides, for example, could present background or historical information more cohesively, tailored to the questions tourists ask. Business chatbots could handle customer concerns regardless of complexity, and government agencies could build AI guides that help citizens access public services. While LaMDA still has a long way to go before reaching this commercial level of usefulness, its continued progress is promising for the natural language processing (NLP) field in general. 

    Implications of LaMDA

    Wider implications of LaMDA may include: 

    • Customer chatbots and digital assistants continuing to improve year over year, which may lead people to believe they are talking to another human being online or over the phone.
    • LaMDA being continually trained to recognize nuances in accents, dialects, cultural word usage, slang, and other speech patterns.
    • More customers demanding full disclosure and transparency whenever a chatbot engages them over the phone.
    • Fraudsters using intelligent chatbots to trick victims into revealing sensitive information by mimicking voices or speech patterns.
    • An increasing risk of algorithmic bias stemming from human-written training data, which may reinforce racism and discrimination.

    Questions to comment on

    • How might LaMDA or other AI conversationalists improve public services?
    • In what other ways could a better AI conversationalist make your life easier?
