AI startup 'Vicarious' excites Silicon Valley elite – but is it all hype?

IMAGE CREDIT: Image via tb-nguyen.blogspot.com

    • Author: Loren March
    • Twitter: @Quantumrun


    The artificial intelligence startup Vicarious has been getting a lot of attention lately, and it is not entirely clear why. Silicon Valley bigwigs have been opening their personal pocketbooks and dishing out big bucks in support of the company’s research. Its website flaunts a recent influx of funding from such notables as Amazon CEO Jeff Bezos, Yahoo co-founder Jerry Yang, Skype co-founder Janus Friis, Facebook founder Mark Zuckerberg and... Ashton Kutcher. It’s not really known where all this money is going. AI has lately been a highly secretive and closely guarded area of technological development, but the public debate about the arrival and use of highly anticipated AI in the real world has been anything but hushed.

    Vicarious has been a bit of a dark horse on the tech scene.

    While there’s been lots of buzz about the company, especially since their computers cracked “CAPTCHA” last fall, they’ve managed to remain an elusive and mysterious player. For example, they don’t give out their address for fear of corporate espionage, and even a visit to their website will leave you confused about what they actually do. All this playing hard to get has still got investors lining up. Vicarious’ main project has been the construction of a neural network capable of replicating the part of the human brain that controls vision, body movement and language.

    Co-founder Scott Phoenix has said the company is trying to “build a computer that thinks like a person, except it doesn't have to eat or sleep.” Vicarious’ focus so far has been on visual object recognition: first with photos, then with videos, before moving on to other aspects of human intelligence and learning. Co-founder Dileep George, previously the lead researcher at Numenta, has stressed the analysis of perceptual data processing in the company’s work. The plan is to eventually create a machine that can learn to “think” through a series of efficient, unsupervised algorithms. Naturally, this has people pretty freaked out.

    For years, the possibility of AI becoming a part of real life has drawn knee-jerk Hollywood references. On top of fears about human jobs being lost to robots, people are genuinely concerned that it won’t be long before we find ourselves in a situation not unlike the one presented in The Matrix. Tesla Motors and PayPal co-founder Elon Musk, also a Vicarious investor, expressed concerns about AI in a recent CNBC interview.

    “I like to just keep an eye on what’s going on with artificial intelligence,” Musk said. “I think there is potentially a dangerous outcome there. There have been movies about this, you know, like Terminator. There are some scary outcomes. And we should try to make sure the outcomes are good, not bad.”

    Stephen Hawking put in his two cents, essentially confirming that we should be afraid. His recent comments in The Independent sparked a media frenzy, with headlines like Huffington Post’s “Stephen Hawking is Terrified of Artificial Intelligence” and MSNBC’s brilliant “Artificial Intelligence Could End Mankind!” Hawking’s actual comments were considerably less apocalyptic, amounting to a sensible warning: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. The long-term impact of AI depends on whether it can be controlled at all.”

    This question of “control” brought a lot of robot rights activists out of the woodwork, advocating for robot freedom. They argue that trying to “control” these thinking beings would be cruel and amount to a form of slavery, and that we need to let the robots live their lives to their fullest potential. (Yes, these activists exist.)

    Many loose ends need to be addressed before people get carried away. For one, Vicarious is not creating a league of robots that will have feelings, thoughts and personalities, or a desire to rise up against the humans who made them and take over the world. Today’s computers can barely understand jokes. So far it has been nearly impossible to teach computers anything resembling street sense, human “meaningfulness” or human subtlety.

    For example, a Stanford project called “Deeply Moving,” meant to interpret movie reviews and give films a thumbs-up or thumbs-down, has been totally incapable of reading sarcasm or irony. In the end, Vicarious is not talking about a simulation of the human experience. The sweeping claim that Vicarious’ computers will “think” like people is pretty vague; we need another word for “think” in this context. We’re talking about computers that can learn through recognition – at least for now.

    So what does this mean? The kinds of developments we are realistically moving towards have more practical, applicable characteristics: face recognition, self-driving cars, medical diagnosis, text translation (we could definitely use something better than Google Translate, after all) and tech hybridization. The silly thing about all of this is that none of it is new. Tech guru Dr. Ben Goertzel, chairman of the Artificial General Intelligence Society, points out on his blog: “If you picked other problems like being a bicycle messenger on a crowded New York Street, writing a newspaper article on a newly developing situation, learning a new language based on real-world experience, or identifying the most meaningful human events among all the interactions between people in a large crowded room, then you would find that today’s statistical [Machine Learning] methods aren't so useful.”

    There are just certain things that machines don’t yet understand, and some things that can’t quite be captured in an algorithm. We’re seeing a rolling-snowball kind of hype that has, so far at least, proven to be mostly fluff. But hype itself can be dangerous. As Yann LeCun, Facebook’s director of AI research and founding director of the NYU Center for Data Science, publicly posted to his Google+ page: “Hype is dangerous to AI. Hype killed AI four times in the last five decades. AI hype must be stopped.”

    When Vicarious cracked CAPTCHA last fall, LeCun was skeptical of the media frenzy, pointing out a couple of very important realities: “1. Breaking CAPTCHAs is hardly an interesting task, unless you are a spammer; 2. It’s easy to claim success on a dataset you cooked up yourself.” He went on to advise tech journalists, “Please, please do not believe vague claims by AI startups unless they produce state of the art results on widely accepted benchmarks,” and warned them to beware of fancy or vague jargon like “machine learning software based on computational principles of the human brain” or “recursive cortical network.”

    By LeCun’s standards, object and image recognition is a far more impressive step in AI development. He has more faith in the work of groups like Deep Mind, which has a good track record in prestigious publications and tech development, and an excellent team of scientists and engineers. “Perhaps Google overpaid for Deep Mind,” says LeCun, “but they did get a good chunk of smart people with the money. Although some of what Deep Mind does is kept secret, they do publish papers at major conferences.” LeCun’s opinion of Vicarious is quite different. “Vicarious is all smoke and mirrors,” he says. “The people have no track record (or if they have one, it’s a track record of hyping and not delivering).

    “They have never made any contributions to AI, machine learning or computer vision. There is zero information about the methods and algorithms they are using. And there is no result on standard datasets that could help the community assess the quality of their methods. It’s all hype. There are lots of AI/deep learning startups that do interesting stuff (mostly applications of methods recently developed in academia). It’s baffling to me that Vicarious attracts so much attention (and money) with nothing but wild unsubstantiated claims.”

    Maybe it’s the resemblance to pseudo-cult spiritual movements that gets celebrities involved. It makes the whole thing seem a little hokey, or at least partly fantastical. I mean, how seriously can you take an operation that involves Ashton Kutcher and about a million Terminator references? In the past, a lot of the media coverage has been hugely enthusiastic, the press perhaps overly excited to be using words like “biologically inspired processor” and “quantum computation.”

    But this time around, the hype machine is a little more reluctant to shift into gear. As Gary Marcus pointed out recently in The New Yorker, a lot of these stories are “confused at best,” failing to dish out anything new and rehashing information about technology we already have and use. And this stuff has been going on for decades; just look up the Perceptron to get an idea of how rusty this tech-train actually is. That said, rich people are jumping aboard the money train, and it doesn’t seem like it’s going to stop any time soon.
