Gestures, holograms, and Matrix-style mind uploading

    First, it was punch cards; then it was the iconic mouse and keyboard. The tools and systems we use to engage with computers are what allow us to control and build the world around us in ways unimaginable to our ancestors. We’ve come a long way, to be sure, but when it comes to the field of user interface (UI, or the means by which we interact with computer systems), we really haven’t seen anything yet.

    Over the last two installments of our Future of Computers series, we explored how the coming innovations set to reshape the humble microchip and disk drive will, in turn, launch global revolutions in business and society. But these innovations will pale in comparison to the UI breakthroughs now being tested in science labs and garages throughout the world.

    Every time humanity has invented a new form of communication—be it speech, the written word, the printing press, the phone, the Internet—our collective society blossomed with new ideas, new forms of community, and entirely new industries. The coming decade will see the next evolution, the next quantum leap in communication and interconnectivity … and it may just reshape what it means to be human.

    What makes a good user interface, anyway?

    The era of poking, pinching, and swiping at computers to get them to do what we want began a decade ago. For many, it started with the iPod. Where once we were accustomed to clicking, typing, and pressing down against sturdy buttons to communicate our wills to machines, the iPod popularized the concept of sliding a thumb around a circular wheel to select the music you wanted to listen to.

    Touchscreen smartphones began entering the market around that time as well, introducing a range of other tactile commands like the poke (to simulate pressing a button), the pinch (to zoom in and out), and the press-hold-and-drag (usually to skip between programs). These tactile commands quickly gained traction among the public for a number of reasons: They were new. All the cool (famous) kids were doing it. Touchscreen technology became cheap and mainstream. But most of all, the movements felt natural, intuitive.

    That’s what good computer UI is all about: Building more natural and intuitive ways to engage with software and devices. And that’s the core principle that will guide the future UI devices you’re about to learn about.

    Poking, pinching, and swiping at the air

    As of 2015, smartphones have replaced standard mobile phones in much of the developed world. This means a large portion of the world is now familiar with the various tactile commands mentioned above. Through apps and through games, smartphone users have learned a large variety of abstract skills to control the supercomputers in their pockets.

    It’s these skills that will prepare consumers for the next wave of devices—devices that will allow us to more easily merge the digital world with our real world environments. So let’s take a look at some of the tools we’ll use to navigate our future world.

    Open-air gesture control. As of 2015, we’re still in the micro-age of touch control. We still poke, pinch, and swipe our way through our mobile lives. But that touch control is slowly giving way to a form of open-air gesture control. For the gamers out there, your first interaction with this may have been playing overactive Nintendo Wii games or the latest Xbox Kinect games—both consoles use motion-sensing technology to match player movements with in-game avatars.

    Well, this tech isn’t staying confined to videogames and green screen filmmaking; it will soon enter the broader consumer electronics market. One striking example of what this might look like is a Google venture named Project Soli (watch its amazing and short demo video here). Developers of this project use miniature radar to track the fine movements of your hand and fingers to simulate the poke, pinch, and swipe in open air instead of against a screen. This is the kind of tech that will help make wearables easier to use, and thus more attractive to a wider audience.
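    To make the idea concrete, here is a minimal, hypothetical sketch of how an open-air gesture pipeline might map sensor readings onto the familiar poke, pinch, and swipe. It does not use Project Soli’s actual API; the RadarFrame fields, thresholds, and classification rules are illustrative assumptions about what an upstream radar-tracking stage could provide.

```python
# Hypothetical open-air gesture classification sketch (not the Soli API).
# Assumes an upstream radar pipeline already yields per-frame estimates of
# fingertip range, lateral hand position, and thumb-to-index spread.

from dataclasses import dataclass
from typing import List

@dataclass
class RadarFrame:
    distance_m: float   # fingertip-to-sensor range estimate
    lateral_m: float    # left/right offset of the hand
    spread_m: float     # estimated spread between thumb and index finger

def classify_gesture(frames: List[RadarFrame]) -> str:
    """Map a short window of radar frames to 'poke', 'pinch', 'swipe', or 'none'."""
    if len(frames) < 2:
        return "none"

    d_range = frames[-1].distance_m - frames[0].distance_m
    d_lateral = frames[-1].lateral_m - frames[0].lateral_m
    d_spread = frames[-1].spread_m - frames[0].spread_m

    # Quick motion toward the sensor reads as a poke (button press).
    if d_range < -0.05 and abs(d_lateral) < 0.02:
        return "poke"
    # Thumb and index finger closing together reads as a pinch (zoom).
    if d_spread < -0.02:
        return "pinch"
    # Sideways motion with little range change reads as a swipe.
    if abs(d_lateral) > 0.05 and abs(d_range) < 0.02:
        return "swipe"
    return "none"

# Example: a hand moving steadily to the right across the sensor.
frames = [RadarFrame(0.30, 0.02 * i, 0.04) for i in range(5)]
print(classify_gesture(frames))  # -> "swipe"
```

    A production system would replace these hand-tuned thresholds with a trained classifier, but the basic shape of the problem is the same: turn a stream of sensor frames into a small vocabulary of intuitive commands.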

    Three-dimensional interface. Taking this open-air gesture control further along its natural progression, by the mid-2020s we may see the traditional desktop interface—the trusty keyboard and mouse—slowly replaced by the gesture interface, in the style popularized by the movie Minority Report. In fact, John Underkoffler, UI researcher, science advisor, and inventor of the holographic gesture interface scenes from Minority Report, is currently working on the real-life version—a technology he refers to as a spatial operating environment for human-machine interaction.

    Using this technology, you will one day sit or stand in front of a large display and use various hand gestures to command your computer. It looks really cool (see link above), but as you might guess, hand gestures are great for skipping TV channels, pointing and clicking on links, or designing three-dimensional models; they won’t work so well for writing long essays. That’s why, as open-air gesture technology is gradually incorporated into more and more consumer electronics, it will likely be joined by complementary UI features like advanced voice command and iris tracking technology.

    Yes, the humble, physical keyboard may yet survive into the 2020s … at least until these next two innovations fully digitize it by the end of that decade.

    Haptic holograms. The holograms we’ve all seen in person or in the movies tend to be 2D or 3D projections of light that show objects or people hovering in the air. What these projections all have in common is that if you reached out to grab them, you would only get a handful of air. That won’t be the case for much longer.

    New technologies (see examples: one and two) are being developed to create holograms you can touch (or that at least mimic the sensation of touch, i.e., haptics). Depending on the technique used, be it ultrasonic waves or plasma projection, haptic holograms will open up an entirely new industry of digital products that can be used in the real world.

    Think about it: instead of a physical keyboard, you could have a holographic one that gives you the physical sensation of typing, wherever you’re standing in a room. This technology is what will mainstream the Minority Report open-air interface and end the age of the traditional desktop.

    Imagine this: Instead of carrying around a bulky laptop, you could one day carry a small square wafer (maybe the size of a CD case) that would project a touchable display screen and keyboard. Taken one step further, imagine an empty room with only a desk and a chair; then, with a simple voice command, an entire office projects itself around you—a holographic workstation, wall decorations, plants, etc. Shopping for furniture or decoration in the future may involve a visit to the app store along with a visit to Ikea.

    Virtual and augmented reality. Like the haptic holograms described above, virtual and augmented reality will play their own roles in the UI of the 2020s. Each will get its own article to explain it fully, but for the purpose of this article, it’s useful to know the following: Virtual reality will largely be confined to advanced gaming, training simulations, and abstract data visualization for the next decade.

    Meanwhile, augmented reality will have far broader commercial appeal, as it overlays digital information onto the real world; if you’ve ever seen the promo video for Google Glass (video), then you’ll understand how useful this technology can one day be once it matures by the mid-2020s.

    Your virtual assistant

    We’ve covered the touch and movement forms of UI set to take over our future computers and electronics. Now it’s time to explore another form of UI that might feel even more natural and intuitive: speech.

    Those who own the latest smartphone models have most likely already experienced speech recognition, whether in the form of the iPhone’s Siri, Android’s Google Now, or Windows’ Cortana. These services are designed to let you interface with your phone and access the knowledge bank of the web simply by verbally telling these ‘virtual assistants’ what you want.

    It’s an amazing feat of engineering, but it’s also not quite perfect. Anyone who’s played around with these services knows they often misinterpret your speech (especially for people with thick accents) and occasionally give you an answer you weren’t looking for.

    Luckily, these failings won’t last much longer. Google announced in May 2015 that its speech recognition technology now has an error rate of only eight per cent, and shrinking. When you combine this falling error rate with the massive innovations happening in microchips and cloud computing, we can expect virtual assistants to become frighteningly accurate by 2020.
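    For context on what that figure measures: assuming it refers to word error rate (WER), the standard speech recognition metric, the short sketch below shows how WER is computed—the word-level edit distance between what was actually said and what the recognizer heard, divided by the length of the reference sentence. The example sentences are made up for illustration.

```python
# Minimal word error rate (WER) sketch: substitutions, insertions, and
# deletions are counted via a word-level edit distance, then divided by
# the number of words in the reference transcript.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # substitution
    return dist[len(ref)][len(hyp)] / max(len(ref), 1)

# One wrong word out of ten reference words is a 10% (0.1) error rate.
print(word_error_rate("turn on the lights in the living room please now",
                      "turn on the light in the living room please now"))
```

    By this measure, an eight per cent error rate means roughly one word in twelve still comes out wrong—good, but noticeably imperfect in a long dictated sentence, which is why the continued shrinkage matters.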

    Watch this video for an example of what’s possible and what will become publicly available in a few short years.

    It might be shocking to realize, but the virtual assistants currently being engineered will not only understand your speech perfectly, but they will also understand the context behind the questions you ask; they will recognize the indirect signals given off by your tone of voice; they will even engage in long-form conversations with you, Her-style.

    Overall, voice-based virtual assistants will become the primary way we access the web for our day-to-day informational needs. Meanwhile, the physical forms of UI explored earlier will likely dominate our leisure and work-focused digital activities. But this isn’t the end of our UI journey; far from it.

    Enter the Matrix with Brain-Computer Interface

    Just when you thought we’d covered it all, there’s yet another form of communication that’s even more intuitive and natural than touch, movement, and speech when it comes to controlling machines: thought itself.

    This technology comes from a field of bioelectronics called Brain-Computer Interface (BCI). It involves using an implant or a brain-scanning device to monitor your brainwaves and associate them with commands to control anything that’s run by a computer.
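    To illustrate that brainwave-to-command mapping in the simplest possible terms, here is a deliberately stripped-down sketch. It is not the method used by any particular lab or device; the sampling rate, frequency bands, and the relaxed-versus-focused rule are illustrative assumptions, and real systems rely on trained classifiers, many electrodes, and far more careful signal processing.

```python
# Simplified, hypothetical BCI sketch: turn a one-second window of a brain
# signal (e.g. EEG) into a band-power feature, then map that feature onto a
# discrete command such as steering a motorized wheelchair.

import numpy as np

SAMPLE_RATE_HZ = 256  # assumed EEG sampling rate

def band_power(signal: np.ndarray, low_hz: float, high_hz: float) -> float:
    """Average spectral power of the signal between low_hz and high_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE_HZ)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return float(spectrum[mask].mean())

def to_command(eeg_window: np.ndarray) -> str:
    """Map relative alpha (8-12 Hz) vs beta (13-30 Hz) power to a command."""
    alpha = band_power(eeg_window, 8, 12)
    beta = band_power(eeg_window, 13, 30)
    # Hypothetical rule: relaxed (alpha-dominant) = stop, focused = forward.
    return "stop" if alpha > beta else "forward"

# Example with a synthetic 1-second window dominated by a 10 Hz rhythm.
t = np.arange(SAMPLE_RATE_HZ) / SAMPLE_RATE_HZ
window = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(SAMPLE_RATE_HZ)
print(to_command(window))  # -> "stop" (alpha rhythm dominates)
```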

    In fact, you might not have realized it, but the early days of BCI have already begun. Amputees are now testing robotic limbs controlled directly by the mind, instead of through sensors attached to the wearer’s stump. Likewise, people with severe disabilities (such as quadriplegics) are now using BCI to steer their motorized wheelchairs and manipulate robotic arms. But helping amputees and persons with disabilities lead more independent lives isn’t the extent of what BCI will be capable of. Here’s a short list of the experiments now underway:

    Controlling things. Researchers have successfully demonstrated how BCI can allow users to control household functions (lighting, curtains, temperature), as well as a range of other devices and vehicles. Watch the demonstration video.

    Controlling animals. A lab successfully tested a BCI experiment where a human was able to make a lab rat move its tail using only his thoughts.

    Brain-to-text. Teams in the US and Germany are developing a system that decodes brain waves (thoughts) into text. Initial experiments have proven successful, and they hope this technology could not only assist the average person, but also give people with severe disabilities (like the renowned physicist Stephen Hawking) the ability to communicate with the world more easily.

    Brain-to-brain. An international team of scientists was able to mimic telepathy by having one person in India think the word “hello”; through BCI, that word was converted from brain waves into binary code, emailed to France, and there converted back into brain waves to be perceived by the receiving person (a small sketch of the digital leg of that relay follows this list). Brain-to-brain communication, people!

    Recording dreams and memories. Researchers at UC Berkeley have made unbelievable progress converting brain waves into images. Test subjects were presented with a series of images while connected to BCI sensors, and those same images were then reconstructed on a computer screen. The reconstructed images were super grainy, but given about a decade of development time, this proof of concept will one day allow us to ditch our GoPro cameras or even record our dreams.
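    As referenced in the brain-to-brain item above, here is a minimal sketch of the digital leg of that relay: encoding a word into binary for transmission and decoding it back into text at the far end. The brain-wave capture and stimulation steps—the genuinely hard parts—are not modelled here, and the encoding shown is just ordinary UTF-8, an assumption for illustration.

```python
# The digital middle of a brain-to-brain relay: text -> binary -> text.

def text_to_bits(text: str) -> str:
    return " ".join(format(byte, "08b") for byte in text.encode("utf-8"))

def bits_to_text(bits: str) -> str:
    return bytes(int(b, 2) for b in bits.split()).decode("utf-8")

payload = text_to_bits("hello")
print(payload)                # 01101000 01100101 01101100 01101100 01101111
print(bits_to_text(payload))  # hello
```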

    We’re Going to Become Wizards, You Say?

    That’s right, everyone: starting in the 2030s and going mainstream by the late 2040s, humans will begin to communicate with each other and with animals, control computers and electronics, share memories and dreams, and navigate the web, all by using our minds.

    I know what you’re thinking: Yes, that did escalate quickly. But what does this all mean? How will these UI technologies reshape our shared society? Well, I guess you’ll just have to read the final installment of our Future of Computers series to find out.

    FUTURE OF COMPUTERS SERIES LINKS

    Moore’s Law’s Slowing Appetite for Bits, Bytes, and Cubits: The Future of Computers P1

    The Digital Storage Revolution: The Future of Computers P2

    Society and the Hybrid Generation: The Future of Computers P4

    Next scheduled update for this forecast

    2023-01-26
