Supercomputing advancements: using neuromorphic optical networks

    • Author: Jasmin Saini
    • Twitter: @Quantumrun

    Over the last few decades, Moore’s Law, the famously accurate trend articulated in 1965 by Gordon Moore (who went on to co-found Intel), has slowly become a defunct measure of computing performance. Moore’s Law predicted that the number of transistors in an integrated circuit would double roughly every two years: more transistors would fit in the same amount of space, leading to increased computation and thus better computer performance. In an April 2005 interview, Gordon Moore himself stated that his projection would likely no longer be sustainable: “In terms of size [of transistors] you can see that we're approaching the size of atoms which is a fundamental barrier, but it will be two or three generations before we get that far—but that is as far out as we have ever been able to see. We have another 10 to 20 years before we reach a fundamental limit.”

    Although Moore’s Law is destined to hit a dead end, other indicators of computing progress are becoming more relevant. With the technology we use in our daily lives, we can all see the trend of computers getting smaller and smaller, but also of device batteries lasting longer and longer. The latter trend is described by Koomey’s Law, named after Stanford University professor Jonathan Koomey. Koomey’s Law predicts that "… at a fixed computing load, the amount of battery you need will fall by a factor of two every year and a half." In other words, the energy efficiency of computing roughly doubles every 18 months. Taken together, these trends point towards the future of computing.
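
    As a rough illustration of the arithmetic behind these two doubling laws, the short Python sketch below projects a quantity forward under each doubling period. The starting values are purely illustrative assumptions, not historical figures.

```python
# Toy arithmetic for exponential doubling laws (illustrative values only).
# Moore's Law: transistor count doubles roughly every 2 years.
# Koomey's Law: computations per joule double roughly every 1.5 years.

def doublings(initial, years, doubling_period_years):
    """Project a quantity forward assuming it doubles every `doubling_period_years`."""
    return initial * 2 ** (years / doubling_period_years)

# Hypothetical starting points, chosen only to show the shape of the curves.
transistors_now = 1e9          # transistors on a chip today (assumed)
ops_per_joule_now = 1e10       # computations per joule today (assumed)

for years in (2, 6, 10):
    print(f"after {years:2d} years: "
          f"~{doublings(transistors_now, years, 2.0):.1e} transistors, "
          f"~{doublings(ops_per_joule_now, years, 1.5):.1e} ops/joule")
```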

    The future of computing

    We have come to a point in history where we must redefine computing, as the trends and laws predicted several decades ago no longer apply. Also, as computing pushes towards the nano and quantum scales, there are obvious physical limitations and challenges to overcome. Perhaps the most notable attempt at supercomputing, quantum computing, faces the challenge of truly harnessing quantum entanglement for parallel computation, that is, performing computations before quantum decoherence sets in. Despite these challenges, quantum computing has made considerable progress in the past few decades, and one can find models of the traditional John von Neumann computer architecture applied to quantum computing. But there is another, less well-known realm of (super)computing, termed neuromorphic computing, that does not follow the traditional von Neumann architecture.

    Neuromorphic computing was envisioned by Caltech professor Carver Mead in his seminal 1990 paper. Fundamentally, the principles of neuromorphic computing are based on theorized biological principles of action, like those thought to be used by the human brain during computation. A succinct distinction between neuromorphic computing theory and classical von Neumann computing theory was given in an article by Don Monroe in the journal of the Association for Computing Machinery: “In the traditional von Neumann architecture, a powerful logic core (or several in parallel) operates sequentially on data fetched from memory. In contrast, ‘neuromorphic’ computing distributes both computation and memory among an enormous number of relatively primitive ‘neurons,’ each communicating with hundreds or thousands of other neurons through ‘synapses.’”
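
    To make that contrast concrete, the toy Python sketch below spreads both state (memory) and computation across many simple "neurons" connected by weighted "synapses", rather than funneling everything through one logic core. It is my own illustrative model, not Monroe's example or any specific neuromorphic chip.

```python
# Toy contrast with the von Neumann picture: many simple "neurons" each keep
# their own state and exchange signals over weighted "synapses", so computation
# and memory stay distributed rather than centralized. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1000

# Sparse random synaptic weights: each neuron listens to a few hundred others.
weights = rng.normal(0, 0.05, (n_neurons, n_neurons))
weights *= rng.random((n_neurons, n_neurons)) < 0.1   # keep ~10% of connections

state = rng.random(n_neurons)          # each neuron stores its own activation
external_input = rng.random(n_neurons) # stimulus broadcast to the network

for step in range(20):
    # Every neuron updates from its neighbours' outputs; there is no central
    # core fetching data from a separate memory.
    state = np.tanh(weights @ state + external_input)

print("mean activation after 20 steps:", state.mean())
```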

    Other key features of neuromorphic computing include fault tolerance, which aims to model the human brain’s ability to lose neurons and still function. In traditional computing, by contrast, the loss of a single transistor can break proper functioning. Another envisioned advantage of neuromorphic computing is that it does not need to be programmed; this aim again models the human brain’s ability to learn, respond, and adapt to signals. Thus, neuromorphic computing is currently the best candidate for machine learning and artificial intelligence tasks.
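
    The graceful degradation behind fault tolerance can be shown with a minimal toy example (my own construction, not from any cited work): when a value is encoded redundantly across many simple units, deleting a sizeable fraction of them barely changes the decoded result.

```python
# Minimal illustration of graceful degradation: a target value is encoded
# redundantly across many noisy units, so removing 30% of them at random
# barely moves the decoded output, whereas a conventional circuit can fail
# outright from a single lost transistor. Numbers here are assumptions.
import numpy as np

rng = np.random.default_rng(1)
target = 0.7
units = target + rng.normal(0, 0.05, 10_000)   # 10,000 noisy redundant "neurons"

alive = rng.random(units.size) > 0.30          # kill ~30% of the units at random
print("decoded (all units):   ", units.mean())
print("decoded (30% removed): ", units[alive].mean())
```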

    Advancements of neuromorphic supercomputing

    The rest of this article delves into advancements in neuromorphic supercomputing. Specifically, recently published research on arXiv from Alexander Tait et al. of Princeton University shows that a silicon-based photonic neural network model outperforms a conventional computing approach by nearly 2,000-fold. This neuromorphic photonic computing platform could lead to ultrafast information processing.

    The Tait et al. paper, entitled Neuromorphic Silicon Photonics, starts off by describing the pros and cons of using the photonic form of electromagnetic radiation for computing. The initial main point of the paper is that light has been widely used for information transmission but not for information transformation, i.e. digital optical computing. As with quantum computing, digital optical computing faces fundamental physical challenges. The paper then goes into the details of an earlier proposed neuromorphic photonic computing platform that the Tait et al. team published in 2014, entitled Broadcast and weight: An integrated network for scalable photonic spike processing. Their newer paper describes the results of the first experimental demonstration of an integrated photonic neural network.

    In the “broadcast and weight” computing architecture, each “node” is assigned a unique “wavelength carrier” that is “wavelength division multiplexed (WDM)” and then broadcast to the other “nodes”. The “nodes” in this architecture are meant to simulate neuron behavior in the human brain. The “WDM” signals are then processed by continuous-valued filters called “microring (MRR) weight banks” and summed electrically into a measured total power detection value. The non-linearity of this last electro-optic transformation/computation is precisely the non-linearity required to mimic neuron functionality, which is essential to computing under neuromorphic principles.
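
    The sketch below is a purely numerical toy of that broadcast-and-weight loop: each node emits on its own carrier, every node receives the full WDM broadcast, a per-wavelength weight bank (standing in for the MRR filters) scales each carrier, the weighted sum is detected, and a nonlinearity on the detected signal drives the node’s next output. The weight values and the sigmoid nonlinearity are my assumptions, not parameters from the Tait et al. experiment.

```python
# Numerical toy of the "broadcast and weight" idea: wavelength carriers,
# per-node weight banks, summed detection, and a nonlinear drive.
# All parameter choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_nodes = 4                                        # one wavelength carrier per node
weights = rng.uniform(-1, 1, (n_nodes, n_nodes))   # stand-in for MRR weight banks

def nonlinearity(detected):
    # Stand-in for the measured electro-optic transfer function (assumed sigmoid).
    return 1.0 / (1.0 + np.exp(-detected))

outputs = rng.random(n_nodes)               # signal power on each carrier
for step in range(10):
    broadcast = outputs                     # every node sees all carriers (WDM)
    detected = weights @ broadcast          # weight bank + summed detection
    outputs = nonlinearity(detected)        # nonlinear drive of each node

print("node outputs after 10 round trips:", np.round(outputs, 3))
```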

    In the paper, they discuss that these experimentally verified electro-optic transformation dynamics are mathematically identical to a “2-node continuous-time recurrent neural network” (CTRNN) model. These pioneering results suggest that programming tools developed for CTRNN models could be applied to silicon-based neuromorphic platforms. This discovery opens the path to adapting CTRNN methodology to neuromorphic silicon photonics. In their paper, they do just such a model adaptation onto their “broadcast and weight” architecture. The results show that the CTRNN model simulated onto their 49-node architecture allows the neuromorphic computing architecture to outperform classical computing models by three orders of magnitude.
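
    For readers unfamiliar with CTRNNs, the standard dynamics take the form tau * dx/dt = -x + W·sigma(x) + I, integrated here with forward Euler for two nodes. The weights, time constants, and inputs below are illustrative choices, not the parameters of the Tait et al. demonstration.

```python
# Sketch of a 2-node continuous-time recurrent neural network (CTRNN):
#     tau * dx/dt = -x + W @ sigmoid(x) + I
# integrated with forward Euler. Parameter values are illustrative only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

tau = np.array([1.0, 1.0])          # neuron time constants
W = np.array([[0.0, -2.0],          # synaptic weight matrix (2 nodes)
              [2.0,  0.0]])
I = np.array([0.5, 0.5])            # constant external input

x = np.zeros(2)                     # neuron states
dt = 0.01
for step in range(5000):            # integrate 50 time units
    dxdt = (-x + W @ sigmoid(x) + I) / tau
    x = x + dt * dxdt

print("final CTRNN state:", np.round(x, 3))
```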
