Novel Computing

In our quest for future computing systems that efficiently solve complex real-world problems, we are developing innovative computing paradigms and systems that deliver exponential speedups and energy efficiency for applications such as pattern recognition and natural language processing, helping to realize a "Human Centric Intelligent Society".

Traditional sequential von Neumann machines, such as today's microprocessors, are excellent at crunching numbers one at a time, but inefficient for cognitive tasks that involve massively parallel data. As a result, software-based artificial neural networks running on today's machines require enormous computing resources, sometimes even a supercomputer, which makes them impractical for consumer and many industrial applications. Neural computing systems are designed from the ground up to deal with noisy parallel data, just as our brains do, and are therefore ideally suited to the ever-increasing demand for machines that perform cognitive tasks, such as speech recognition and decision making, that are second nature to humans. There is no scarcity of data to process: ubiquitous sensors and the Internet of Things are projected to generate about 40 zettabytes of data in 2020.

Neural computing systems use massively parallel and distributed architectures to mimic the neocortical function of mammalian brains. We believe these accelerators can be made compact, agile, and energy-efficient enough to perform real-world tasks such as pattern recognition, business analytics, and weather prediction. These tasks deal with high-volume, unstructured, and noisy data, and some target applications, such as self-driving cars, require that the data be processed in real time. Many innovative technologies still need to be developed for novel computing systems, including silicon neurons, effective learning algorithms, hardware-friendly neural networks, high-bandwidth communication, and an efficient memory hierarchy.
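As a rough illustration of what a silicon neuron computes, the sketch below simulates a leaky integrate-and-fire model, a common starting point for neuromorphic hardware. It is a minimal, hypothetical software example for intuition only; the function name, parameters, and values are assumptions and do not describe any specific Fujitsu system.

```python
# Minimal leaky integrate-and-fire (LIF) neuron simulation.
# Illustrative only: parameters and structure are assumptions, not a
# description of any particular neural-computing hardware.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return the list of time steps at which the neuron spikes.

    input_current : sequence of input values, one per time step
    dt            : simulation time step (ms)
    tau           : membrane time constant (ms)
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # The membrane potential leaks toward rest while integrating input.
        v += (dt / tau) * (-(v - v_rest) + i_in)
        if v >= v_threshold:
            spikes.append(t)  # emit a spike ...
            v = v_reset       # ... and reset the membrane potential
    return spikes


if __name__ == "__main__":
    # A constant supra-threshold input produces a regular spike train.
    print(simulate_lif([1.5] * 200))
```

A hardware neuron implements this leak-integrate-threshold loop directly in circuitry, so many such units can run in parallel at a fraction of the energy a sequential processor would need to simulate them.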

Press Release

  1. "Fujitsu Doubles Deep Learning Neural Network Scale with Technology to Improve GPU Memory Efficiency", Fujitsu Laboratories Ltd., Kawasaki, Japan, September 21, 2016. http://www.fujitsu.com/global/about/resources/news/press-releases/2016/0921-02.html

Publications

  1. M. Tomono, K. Yoda, I. Makiko, T. Notsu, R. Yamanaka, T. Ishihara, "Circuit Technology to Improve Energy Efficiency of Hardware Used for Deep Learning", The 1st Cross-disciplinary Workshop on Computing Systems, Infrastructures, and Programming (xSIG), April 2017.
  2. K. Shirahata, Y. Tomita, A. Ike, "Memory Reduction Method for Deep Neural Network Training", Machine Learning for Signal Processing (MLSP), September 2016.
  3. M. Yamazaki, A. Kasagi, T. Tabaru, T. Nakahira, "High-Speed Technology to Process Deep Learning using MPI", Summer United Workshops on Parallel, Distributed and Cooperative Processing, August 2016.
  4. A. Hayakawa, M. Kibune, A. Toda, S. Tanaka, T. Simoyama, Y. Chen, T. Akiyama, S. Okumura, T. Baba, T. Akahoshi, S. Ueno, K. Maruyama, M. Imai, Jian Hong Jiang, P. Thachile, T. Riad, S. Sekiguchi, S. Akiyama, Y. Tanaka, K. Morito, D. Mizutani, T. Mori, T. Yamamoto, H. Ebe, "A 25 Gbps silicon photonic transmitter and receiver with a bridge structure for CPU interconnects", Optical Fiber Communications Conference and Exhibition (OFC), March 2015.
  5. Y. Chen, M. Kibune, A. Toda, A. Hayakawa, T. Akiyama, S. Sekiguchi, H. Ebe, N. Imaizumi, T. Akahoshi, S. Akiyama, S. Tanaka, T. Simoyama, K. Morito, T. Yamamoto, T. Mori, Y. Koyanagi, H. Tamura, "A 25Gb/s hybrid integrated silicon photonic transceiver in 28nm CMOS and SOI", Solid-State Circuits Conference (ISSCC), February 2015.
  6. Y. Hidaka, M. Lionbarger, T. Akahoshi, T. Yamada, D. Mizutani, T.-K. Chen, M. Lee, T. Yamamoto, T. Fukumori, H. Nagaoka, K. Kawai, Y. Mizutani, "Mitigation of Fiber-Weave Effects by Broadside-Coupled Differential Striplines", DesignCon, January 2015.
  7. J. H. Jiang, S. Parikh, M. Lionbarger, N. Nedovic, T. Yamamoto, "A DC-46Gb/s 2:1 multiplexer and source-series terminated driver in 20nm CMOS technology", Solid-State Circuits Conference (A-SSCC), November 2014.