Gesture recognition: From touch-sensitive to touchless

Image credit: iStock

Companies are creating more natural device interfaces that can easily recognize gestures.
    Author: Quantumrun Foresight
    February 3, 2023

    With the invention and expansion of touch-sensitive technologies, such as smartphones, smartwatches, and wall screens, gesture-based user interfaces have become increasingly popular. This trend has encouraged researchers to invent recognizer technology that is not only fast and accurate for end-users but also simple enough for practitioners.

    Gesture recognition context

    Natural user interfaces (NUIs) are essential to many future technologies. They can identify us and make decisions based on our gestures. Imagine picking up a digital object, controlling a remote robotic arm without wires, or snapping your fingers to trigger your coffee maker to brew a fresh cup. 

    Various approaches and techniques have been created to classify single-stroke and multiple-stroke gestures, building on the pioneering work done in two-dimensional (2D) stroke gesture recognition. These new methods account for sources of variance, such as articulation, rotation, scale, and translation. Hand gestures are a natural way of communicating, so it’s not surprising that we want to use them to interact with technology. 
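    To make the idea concrete, the classic family of 2D stroke recognizers handles scale and translation variance by resampling each stroke to a fixed number of points, normalizing its size and position, and then matching it against stored templates. The sketch below is a minimal, simplified illustration of that pipeline (it omits rotation correction); the template names and sample strokes are hypothetical, chosen only for the example.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n roughly equidistant points along its path."""
    dists = [math.dist(points[i - 1], points[i]) for i in range(1, len(points))]
    interval = sum(dists) / (n - 1)
    pts = list(points)
    new_points = [pts[0]]
    d_accum = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d_accum + d >= interval and d > 0:
            # Interpolate a new point at the exact interval boundary.
            t = (interval - d_accum) / d
            qx = pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0])
            qy = pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1])
            new_points.append((qx, qy))
            pts.insert(i, (qx, qy))
            d_accum = 0.0
        else:
            d_accum += d
        i += 1
    while len(new_points) < n:  # guard against floating-point shortfall
        new_points.append(pts[-1])
    return new_points[:n]

def normalize(points, n=32):
    """Resample, scale to a unit box, and translate the centroid to the origin."""
    pts = resample(points, n)
    xs, ys = zip(*pts)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    pts = [(x / scale, y / scale) for x, y in pts]
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    return [(x - cx, y - cy) for x, y in pts]

def recognize(stroke, templates):
    """Return the template name with the smallest average point-to-point distance."""
    candidate = normalize(stroke)
    best, best_d = None, float("inf")
    for name, template in templates.items():
        t = normalize(template)
        d = sum(math.dist(p, q) for p, q in zip(candidate, t)) / len(t)
        if d < best_d:
            best, best_d = name, d
    return best

# Hypothetical gesture templates: a horizontal and a vertical swipe.
templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "swipe_down": [(0, 0), (0, 1), (0, 2), (0, 3)],
}
print(recognize([(5, 5), (9, 5.2)], templates))  # a rough rightward stroke
```

    Because both the candidate stroke and every template pass through the same normalization, the comparison is insensitive to where on the screen the gesture was drawn and how large it was, which is exactly the scale- and translation-invariance the survey literature describes.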

    The gesture recognition market is anticipated to grow from USD 9.8 billion in 2020 to USD 32.3 billion by 2025, according to market intelligence firm Markets and Markets. The top producers of gesture interface products are Intel, Apple, Microsoft, and Google. Automotive, healthcare, and consumer electronics are the key industries driving the mass adoption of touchless tech.

    However, there are some significant challenges in perfecting gesture recognition. These limitations range from having to wave your hands in front of a small smartphone screen to the complex machine learning (ML) algorithms needed to recognize more than a simple thumbs-up gesture. Nonetheless, the potential of gesture recognition in assistive technology and smart homes motivates companies to invest in this space.

    Disruptive impact

    One potential benefit of gesture recognition is in sorting datasets. For example, security analysts can investigate incidents using physical gestures to sort files within a virtual setting. Hand motions, such as “grab and drag” or “tap on,” would control data in this environment. 

    This file system can further expand into standard office settings. This development can eliminate the need for physical monitors and other peripherals, such as keyboards and mice. For example, Microsoft and Meta teamed up to integrate Meta’s Oculus headset with Microsoft’s Teams and 365 apps to build a virtual reality (VR) workflow platform. The companies hope that this partnership can encourage people to explore virtual interfaces.

    Other form factors that can enhance usability are also being developed. A 2021 study by researchers from the Nara Institute of Science and Technology (Japan) and Arizona State University (US) resulted in a projection system that can turn any surface into a touch screen. This system uses a scanning laser, a camera, and an image-processing algorithm to accurately detect the location of a user’s finger. This budget-friendly solution (USD 500 for the prototype) offers users VR experiences without requiring headsets. 

    Other startups are developing wearables for gesture recognition. An example is Italy-based Deed, which created a smart bracelet called Get that uses bone conduction technology to interpret hand gestures. This feature allows people to make phone calls or digital payments through the device.

    Implications of gesture recognition

    Wider implications of gesture recognition may include: 

    • More companies investing in discovering NUIs for assistive technology, particularly for people with audiovisual and mobility impairments.
    • Smart home appliances whose sensors quickly detect gestures and relay signals to one another, creating a more intuitive home living experience.
    • Future workspaces that are entirely virtual, either through specialized headsets or through most surfaces becoming interactive.
    • Companies developing skin interface technologies that completely eliminate devices and rely on embedded microchips.
    • An increasing gap between the younger generation adopting touchless tech and the older generation preferring touch-based interfaces.
    • Modest population-scale reductions in viral infections as fewer people touch physical objects throughout the day.
    • The emergence of new social norms and the eventual widespread adoption of touchless gestures that may be unique to each country.

    Questions to comment on

    • How do you think gesture recognition is going to make your life easier?
    • What are the other potential limitations of these technologies?

    Insight references

    The following popular and institutional links were referenced for this insight:

    • Association for Computing Machinery: Two-dimensional Stroke Gesture Recognition: A Survey