Fujitsu Laboratories carries out research and development for the realization of Intelligent Society, which provides people with a more secure and affluent life by making use of ICT. In research on using big data, Fujitsu Laboratories is working on leading-edge technologies including temporal and spatial data processing, complex multi-series data analysis, and dynamic optimization. These technologies use advanced ICT to analyze massive amounts of data, such as social media and sensor information gathered from the real world, and help achieve prediction, optimization and other sophisticated decision support. Fujitsu Laboratories is also researching social innovation, which is intended to discover and create affluence and value for individuals and society by observing and analyzing people, organizations and communities. It is also developing machibata.net, a new social medium that helps individuals and groups engaged in town building to cooperate with one another. By integrating these types of research, Fujitsu Laboratories intends to offer solutions to complex social problems that are difficult for individuals and independent enterprises to solve, such as energy and security issues. In this way, it aims to realize a truly affluent and secure Intelligent Society.
Fujitsu Laboratories is analyzing and utilizing various types of data on the behavior and actions of people and society, as well as environmental change. In this way, it is proceeding with R&D on Intelligent Society to achieve a more prosperous and secure society. This paper focuses on two new types of data. The first one is social media including blogs, Twitter, and social networking services (SNS). The second is data obtained from various types of sensors such as mobile phones, automobiles, and environmental sensors. These data are very different from business data that traditional analytic technologies deal with in business intelligence applications. To realize Intelligent Society, we are researching new and advanced technologies to analyze such data. This paper introduces three of the technologies: social media analysis, optimization and spatiotemporal data processing.
Fujitsu Laboratories is developing social solutions to help establish the Human-Centric Intelligent Society. By collecting, unifying and analyzing data on personal activities, business activities and social circumstances, social solutions provide answers to complex social problems that cannot be solved by individuals or single enterprises. The application area of social solutions is vast, and we have only just started our research activities on them. However, some solutions that focus on realizing a safe and wealthy society are mature enough for demonstration experiments or user tests. In this paper, we introduce four solutions (proactive risk management, traffic safety management, market quality management and community energy management) to illustrate our approach to social solutions.
Human-Centric Computing is a new technology paradigm in which computing resources are provided to humans anywhere and at any time in accordance with their circumstances. By shifting the paradigm from technology-centric to human-centric, new value will be created in the real world. Hence, large and new markets are expected to be developed in areas where information and communications technology (ICT) has yet to reach. Based on this new vision, Fujitsu is conducting research and development, vertically integrating mobile terminals and the cloud, so as to provide adequate services to humans anywhere and anytime in a natural way. This paper describes the goal of Human-Centric Computing and three fundamental research activities: context-aware service, multi-device collaboration and human interaction technologies.
Fujitsu Laboratories is developing several new solutions to social problems that help enhance information and communications technology (ICT) capability using mobile terminals, to establish a Human-Centric Society. Each solution consists of the following three steps: 1) sensing to acquire real-world information, 2) analyzing that information, and 3) acting on people or their circumstances at the proper time and according to the situation. These three steps appear to be applicable to other solutions as well. Each solution also targets new areas that, for various reasons, ICT has yet to reach. In this paper, we describe the research and development status of the above three steps for solutions in three areas: energy management in office buildings, agriculture, and healthcare. The solutions are still being developed.
Fujitsu Laboratories of America, Inc. and Fujitsu Laboratories of Europe Ltd. have been doing R&D on healthcare. We investigated what should be done in this area as an ICT company and identified two important areas: improving medical systems and promoting preventive medicine. Both could be addressed very efficiently by utilizing ICT. To improve medical systems, we introduced the new notion of a Q-Score, which represents the quality of each provision of healthcare so that it can be evaluated. With respect to preventive medicine, we are investigating how to measure biodata such as blood pressure, pulse rate and weight in various aspects of daily life without inconveniencing the people being measured. We aim to develop a method for giving people advice based on such biodata as a way to promote health management. This paper reports an overview of the current status of our activities.
Cloud computing is expected to develop from single-purpose clouds to hybrid clouds that link clouds or existing systems, or to a fusion of two or more clouds in the future. Fujitsu Laboratories named this advanced form of coordination "Cloud Fusion" at the start of 2010. This paper explains the aim of this coordination and the direction in which research should head. It goes on to describe the relationship between Cloud Fusion and Fujitsu's and Fujitsu Laboratories' vision of enabling a Human-Centric Intelligent Society. It then outlines the five pillars of research on Cloud Fusion and, in particular, details the development and execution environment, which is one of those pillars.
With the progress of virtualization technology, cloud systems have started to be deployed on a full scale. However, managing cloud systems in a stable, high-quality way raises many issues, because the number of servers becomes immense and the dependencies between servers become complex. Conventionally, individual business applications and services have been operated on separate systems, but in the cloud there is a degree of uniformity in the infrastructure that makes up systems. Consequently, there are hopes that common management platforms and methods can be prepared, such as application life-cycle management and predictive fault detection technology. This paper introduces technology that integrates the development and system management phases in the PaaS domain by leveraging these characteristics of clouds. This technology makes it possible to configure and deploy applications on the cloud according to the characteristics of the applications or their individual service level agreements (SLAs). This paper also introduces technology that allows operators to automatically or easily build a test environment identical to the production environment when changing applications, and to run automated tests. In addition, it touches on technology to monitor and visualize operations, which is a core technology for life-cycle management. Finally, it describes technology that statistically processes the logs issued by a system during operation to detect signs of impending faults.
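The statistical log processing mentioned above is not specified in detail here; a minimal sketch of one plausible approach, assuming a simple per-window error-count model with a z-score threshold (our assumption, not the paper's exact algorithm), might look like this:

```python
import math

def detect_anomalous_windows(log_lines, window_size=100, threshold=3.0):
    """Count ERROR/WARN messages per fixed-size window of log lines and
    flag windows whose count deviates from the mean by more than
    `threshold` standard deviations -- a toy predictive-fault signal."""
    windows = [log_lines[i:i + window_size]
               for i in range(0, len(log_lines), window_size)]
    counts = [sum(1 for line in w if "ERROR" in line or "WARN" in line)
              for w in windows]
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    sigma = math.sqrt(var) or 1.0  # avoid division by zero on flat logs
    return [i for i, c in enumerate(counts) if (c - mean) / sigma > threshold]
```

A window with an unusually high error rate is flagged before an outright failure, which is the essence of detecting signs of impending faults from operational logs.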
In recent years, accompanied by lower prices of information and communications technology (ICT) equipment and networks, various kinds of data gleaned from the real world have come to be accumulated in cloud data centers. There are increasing hopes that analysis of this massive amount of data will provide insights valuable to both businesses and society. Since making full use of this big data means handling tens of terabytes to tens of petabytes, a new type of technology different from ordinary ICT is needed. Furthermore, as important services such as social infrastructure services must keep running 24 hours a day, 7 days a week, technology to dynamically change system configurations is also required. Fujitsu and Fujitsu Laboratories are working on basic technologies and application-promoting technologies for processing big data in a cloud environment. In this paper, we introduce two fundamental technologies: 1) distributed data store and complex event processing and 2) workflow description for distributed data processing. We hope this gives a perspective on the direction in which this new field should head.
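Complex event processing matches patterns over streams of incoming events rather than queries over stored data. As a minimal sketch, assuming a made-up rule ("N consecutive sensor readings above a threshold") that is not taken from the paper:

```python
from collections import deque

def cep_high_temp_alerts(events, threshold=80.0, run_length=3):
    """Emit an alert index whenever `run_length` consecutive readings
    exceed `threshold` -- a minimal complex-event-processing rule
    evaluated incrementally over a stream."""
    window = deque(maxlen=run_length)   # sliding window of recent readings
    alerts = []
    for i, reading in enumerate(events):
        window.append(reading)
        if len(window) == run_length and all(r > threshold for r in window):
            alerts.append(i)            # index of the event completing the pattern
    return alerts
```

Because the window is bounded, the rule runs in constant memory per stream, which is what makes this style of processing viable at big-data event rates.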
With the advent of cloud computing, the boundary separating internal and external data has become increasingly blurred due to the utilization of external services. As a result, existing methods of preventing data leakage, such as only using a gateway to block the outflow of confidential data, have become insufficient. Therefore, there is increased demand for new security technology to allow confidential data to be safely used even in the cloud. We have developed new cloud information gateway and access gateway technologies that can mask confidential information contained within data before it is processed in the cloud. They can also transfer applications from the cloud to inside the company for internal processing. In this way, they make it possible to utilize cloud services without transmitting actual data. These technologies enable users to safely utilize confidential data in the cloud, encouraging new uses of cloud computing, such as cross-industry collaborations and specialized uses in specific industries.
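The idea of masking confidential information before it leaves the company can be sketched as follows. This is an illustrative toy, not the gateway's actual mechanism: the field list, the token scheme (truncated salted hash) and the local restore map are all our assumptions.

```python
import hashlib

CONFIDENTIAL_FIELDS = {"name", "address", "phone"}  # illustrative policy

def mask_record(record, secret="gateway-secret"):
    """Replace confidential fields with opaque tokens before the record
    is sent to the cloud; keep a local map so results can be restored
    inside the company without the cloud ever seeing the real values."""
    masked, mapping = {}, {}
    for key, value in record.items():
        if key in CONFIDENTIAL_FIELDS:
            token = hashlib.sha256((secret + str(value)).encode()).hexdigest()[:12]
            masked[key] = token
            mapping[token] = value
        else:
            masked[key] = value          # non-confidential data passes through
    return masked, mapping

def unmask(value, mapping):
    """Restore a confidential value in results returned from the cloud."""
    return mapping.get(value, value)
```

The cloud service can still join, count or sort on the tokens, while the mapping needed to recover the originals never leaves the gateway.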
There has been rapid growth in demand for data centers, and in recent years this has led to the problems of increased energy consumption and higher operational costs. To solve these problems, we are developing a new data center architecture called the "Next-Generation Green Data Center," which conserves energy and also reduces operational costs. In this development, we aim to realize performance beyond what can be achieved by optimizing equipment alone: facilities are optimized together with ICT equipment to eliminate duplicated functions and correct mismatched interfaces. To achieve this, we concentrated on five areas: resource pooling, hardware as middleware, utilization of commodities, facility-ICT co-optimization, and unified management. We hope the system will help bring about a Human-Centric Intelligent Society.
We propose new system architecture for Next-Generation Green Data Centers. It integrates servers, storage, networks, middleware and facilities into one consolidated system. This new system architecture, called Mangrove, realizes green data center concepts such as "resource pooling" and "offloading to middleware." An IT platform based on Mangrove allows flexible and effective resource usage, agile configuration, high availability and high reliability. These features lead to reduced costs and power savings at data centers. Mangrove consists of several basic elements including server and storage architecture, which enables hardware resource pooling; storage functions on the resource pool as middleware; scalable data center networks; high-performance interfaces achieved by low-cost and high-density optical interconnects; and system management with virtual machine (VM) placement optimization. This paper describes the aims and features of the basic elements of Mangrove.
Data centers (DCs) are important as a key part of the infrastructure providing advanced network services in the era of cloud computing. However, the amount of energy consumed by these DCs is expected to increase rapidly. To improve energy efficiency in DCs, it is necessary to build an energy-saving value chain as a total system from IT equipment at the device level, to DC facilities including the power supply and cooling systems. This must be done while taking into account the environmental setting of the DC as well. Fujitsu's global network of laboratories takes a holistic approach to designing next-generation environmentally friendly DCs. We are also developing a total unified system consisting of IT systems and facility functions, such as cooling and power supply technologies, aiming to improve them overall. As R&D on element technologies for constructing energy efficient DCs, this paper introduces micro-channel water cooling, green uninterruptible power supplies (UPS), unified power supply units, super-multipoint temperature measurement technology using optical fiber, and simulation technologies.
Improving the flow of traffic is becoming increasingly important in preventing traffic congestion and accidents, as well as in reducing automobile-related CO2 (carbon dioxide) emissions. Various new traffic management measures are being studied to resolve these issues. However, the huge computational complexity involved makes simulating a driving experience in real time over a wide area difficult, though this is necessary for evaluating large-area traffic management measures. We have developed a wide-area traffic simulator featuring a virtual driving experience to evaluate and improve these traffic management measures from both subjective and objective perspectives. The virtual driving experience requires two real-time processes: one is the real-time simulation of tens of thousands of vehicles using parallel computing, and the other is real-time video generation of the traffic situation from the driver's viewpoint according to the driver's operation. We applied real-time synchronization in a parallel computing simulation that considered the interaction between the vehicle driven by the driver and other vehicles. We also added a driving simulation function to a microscopic traffic simulator that we had developed. As a result, the simulator provided a driving experience in a wide-area road network. Finally, our simulator was shown to be effective by evaluating a non-stop driving assistance service which is an example of a traffic management measure.
In 2010, 3D televisions went on the market, and 3D content has been supplied via Blu-ray packages and broadcast programs. Moreover, digital cameras and camcorders that can take stereoscopic pictures are coming out. These new products have two cameras corresponding to the left and right eyes. But if it becomes possible to take stereoscopic pictures with appliances having monocular cameras, like digital still cameras or mobile phone cameras, 3D imaging will become more common. Based on this consideration, we set a target to develop a new technology that enables people to take 3D photographs with a monocular camera. Our technology consists of two techniques: 1) automatically selecting two photographs with an appropriate parallax, and 2) correcting the misregistration between the two images. To take a 3D photograph, the user presses the shutter of the camera first and then swings it horizontally. The camera automatically takes the second picture at an appropriate position and adjusts the images so that the pair is viewed naturally. This technology has been adopted in the FOMA F-09C, a product that will let many people enjoy the 3D world.
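The first technique, selecting the second photograph at an appropriate parallax, can be sketched as a simple search over the estimated horizontal shift of candidate frames. The target parallax, the tolerance and the pixel-shift input are all illustrative assumptions; the actual shift estimation in the camera is not described here.

```python
def pick_second_frame(frame_shifts_px, target_parallax_px=60, tol_px=5):
    """Given the estimated horizontal shift (in pixels) of each candidate
    frame relative to the first shot, return the index of the frame whose
    parallax is closest to the target, or None if no frame is within
    the tolerance."""
    best, best_err = None, tol_px + 1    # anything >= tol_px + 1 is rejected
    for i, shift in enumerate(frame_shifts_px):
        err = abs(shift - target_parallax_px)
        if err < best_err:
            best, best_err = i, err
    return best
```

In the real camera the user supplies the horizontal swing; the selection logic only has to decide which frame along that swing yields a natural stereo baseline.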
Achieving highly accurate handwriting optical character recognition (OCR) is still a challenge in real applications, especially for non-Western languages like Chinese and Japanese. We proposed an advanced recognition algorithm using modified LDA, subspace-based similar-character discrimination, multi-classifier combination and mutual-information-based adaptive rejection. As an application, our technologies were adopted by the Chinese government in the Sixth National Population Census (the largest census in the world) in 2010. As shown in the paper, by combining these technologies with knowledge about addresses and nationalities, our algorithms can achieve an accuracy of over 99% with a low rejection rate. This is the first time that Chinese character recognition technology has been used in a large-scale Chinese census project. This paper will introduce Fujitsu Research and Development Center's highly accurate handwriting OCR techniques applied in the Sixth National Population Census.
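Multi-classifier combination, one of the components listed above, can be illustrated by a weighted sum of per-class scores; the weights, labels and score format here are our own toy assumptions, not the census system's actual configuration.

```python
def combine_classifiers(score_lists, weights):
    """Weighted-sum combination of per-class confidence scores from
    several classifiers; returns the winning class label."""
    combined = {}
    for scores, w in zip(score_lists, weights):
        for label, s in scores.items():
            combined[label] = combined.get(label, 0.0) + w * s
    return max(combined, key=combined.get)
```

For visually similar Chinese characters, a classifier that is weak on one confusion pair can be outvoted by one that is strong on it, which is the intuition behind combining classifiers before applying similar-character discrimination.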
Coherent optical fiber transmission systems that use digital signal processing technology are attractive for 100G and beyond-100G long-haul transmission. One of the key features of such systems is the flexibility of the modulation format, which can be adaptively changed depending on the transmission distance and bit rate between the ingress and egress nodes. This feature provides higher frequency utilization than conventional systems with a fixed modulation format and bit rate. To realize such a flexible photonic network, it is essential to assign wavelength paths based on the transmission characteristics of each modulation format and to evaluate the transmission characteristics of signal processing algorithms in a quasi-field environment. In this paper, we first discuss the effects of flexible modulation-format transmitters and receivers and the feasibility of such a flexible network. Then, we show an evaluation platform for digital coherent systems that integrates an FPGA-based receiver, PMD/PDL emulators and a recirculating loop, and that emulates real field conditions to evaluate digital signal processing algorithms and transmission characteristics.
This paper explains the need for continuous bandwidth improvement in computer server interconnects, and the work that is being done at Fujitsu Laboratories to provide the bandwidth, both electrically and, in the future, optically. It explains concepts of frequency dependent channel loss and equalization, and the hurdles to be overcome to reach 25 Gbps per lane using electrical HSIO and 40 Gbps per lane using optical HSIO while improving energy efficiency.
SPARC64VIIIfx is a processor chip in the SPARC64 series and is intended for use in Fujitsu's next-generation supercomputer. SPARC64VIIIfx has eight processor cores operating at a 2 GHz clock frequency. Compared with the previous-generation SPARC64 processor, SPARC64VII for Unix servers, the performance of the processor cores has been enhanced with HPC extensions (which include SIMD instruction support and register extensions). SPARC64VIIIfx achieves power consumption as low as 58 W with a peak performance of 128 GFLOPS by employing leakage-power reduction methods such as water cooling, together with various dynamic power reduction techniques guided by a gate-level power analysis flow. The achieved performance per watt is six times that of the previous processor chip, SPARC64VII. This paper introduces the SPARC64VIIIfx power analysis flow and some of the leakage and dynamic power reduction techniques applied in the SPARC64VIIIfx design. It goes on to show power analysis results at the chip and core levels and the measured power consumption of a sample chip, and compares the measurements with the power analysis results.
The conventional enhancement of LSIs based on Moore's Law is approaching its limits in terms of high-speed inter-chip buses and low power consumption, as well as the physical limits of device operation. Three-dimensional integration (3DI) has been actively researched recently as an innovative device manufacturing technique. The technology enables functions and performance different from those offered by existing devices. It achieves this by stacking LSI chips and connecting the top and bottom devices with through-silicon vias (TSVs). This paper presents the wafer-level 3D stacking technology that Fujitsu Laboratories is developing by participating in the Wafer-on-Wafer (WOW) Alliance centered on the University of Tokyo, focusing on device thinning and bump-less TSV process technologies. Fujitsu has helped develop various technologies. They include ultra-thin wafer transfer technology, in which device wafers such as 45 nm CMOS logic LSIs and FeRAMs are thinned to 10 µm or less for stacking. Another example is bump-less TSV technology that uses a low-temperature process of up to 200˚C and a dual damascene method. High yield and reliability have been demonstrated, and the feasibility of high-bandwidth, low-power-consumption 3D LSIs has been verified.
This paper presents an intuitive, two-parameter metric for fully describing the energy efficiency of data centers (DCs). The metric accurately characterizes the energy performance of a DC from when it is first commissioned through to full capacity. Thus, the metric can be used to predict future performance and to form the basis of deployment policy. The metric also describes the theoretical ideal performance of DCs. It can therefore be used to compare DCs of different sizes, at different stages of deployment, or in different phases of design and development. The paper then verifies this metric by applying it to two Fujitsu DCs, ranging in size from 600 kW to 3 MW (IT power) and demonstrating it to be accurate against both detailed simulation and measurement.
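The abstract does not define the two parameters, so as an illustrative assumption only, a natural two-parameter model of DC energy use is a fixed overhead plus a load-proportional component, from which efficiency at any deployment stage follows:

```python
def dc_efficiency(fixed_kw, proportional, it_load_kw):
    """Illustrative two-parameter data-center power model (an assumption,
    not necessarily the paper's metric):
        total power = fixed_kw + proportional * IT load.
    Returns (total_kw, pue) at the given IT load."""
    total = fixed_kw + proportional * it_load_kw
    pue = total / it_load_kw                 # power usage effectiveness
    return total, pue
```

Such a model captures why a partially deployed DC looks inefficient (the fixed overhead is spread over little IT load) and why efficiency improves toward full capacity, which is the kind of commissioning-to-full-capacity behavior the metric is meant to characterize.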
Fujitsu Laboratories is conducting R&D on a new cooling technology based on the adsorption heat pump (AHP) process. With this technology, waste heat from IT equipment such as servers is recovered and then reused to produce chilled water below 20˚C. The power consumed by air-conditioning systems in data centers can be decreased drastically by using the aforementioned chilled water to cool servers. So far, we have developed highly efficient water adsorbents and a system to circulate the waste heat, and successfully constructed an AHP system that utilizes the waste heat of servers. We confirmed that this system can produce chilled water from a waste heat source temperature of 60˚C, which is 10˚C lower than our previous lowest possible temperature. This suggests that this system can be used to generate chilled water using waste heat not only from IT equipment but also from factories, buildings, domestic solar water heaters and so on. Therefore, this technology is expected to have potential applications in areas such as air-conditioning and refrigerator systems. This paper introduces the current state of the new cooling technology using waste heat from IT equipment, and future developments in this field.
Easy and fast measurement of proteins, such as cancer marker proteins in blood or food-poisoning proteins in foods, is in strong demand. Fujitsu Laboratories has been working for years to develop a novel technology enabling such protein measurement using DNA materials. As one outcome, modified DNA aptamer technology has been established; it will be utilized to make substitutes for the antibodies used in the current methodology of protein detection. As another outcome, a new measurement principle has been established in collaboration with the Technical University of Munich that utilizes the artificially induced molecular motion of DNA to measure protein concentration. By combining these two functional units through DNA double-helix formation, we can assemble protein sensors easily. The technology has proven useful for rapidly detecting food-poisoning toxins in collaborative research conducted with Nagoya University.