NOTE: this is an archived page and the content is likely to be out of date.
Fujitsu Laboratories Ltd. has been promoting research and development of information integration and utilization technology using "Linked Data," a standard method for publishing data on the Web. Linked Data is recommended by the World Wide Web Consortium (W3C), the main international standards organization for the Web, and uses a structured data format that machines can process. Recently, highly public data such as academic and government data have been released as Linked Data, creating a global data space on the Web called "Linked Open Data (LOD)." In this paper, we describe our basic LOD utilization technology, together with applications, for collecting, storing and cross-searching worldwide Linked Data. This technology was jointly developed by the authors and the Digital Enterprise Research Institute of the National University of Ireland, Galway.
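The cross-searching described above rests on matching triple patterns over collected Linked Data. The following is a minimal sketch of that idea in Python; the `TripleStore` class, the `ex:` names, and the sample triples are illustrative assumptions, not artifacts from the paper, and a real system would use a SPARQL engine over a distributed store.

```python
# Minimal illustration of cross-searching collected Linked Data.
# Triples are (subject, predicate, object); None in a query pattern
# acts as a wildcard, mimicking a basic SPARQL triple pattern.

class TripleStore:
    def __init__(self):
        self.triples = []

    def add(self, s, p, o):
        self.triples.append((s, p, o))

    def match(self, s=None, p=None, o=None):
        # Return all triples agreeing with the non-None pattern parts.
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

# Hypothetical triples gathered from two different LOD sources.
store = TripleStore()
store.add("ex:Galway", "ex:locatedIn", "ex:Ireland")   # government data
store.add("ex:DERI", "ex:basedIn", "ex:Galway")        # academic data

# Cross-search across sources: where is the institute based,
# and in which country does that place lie?
city = store.match(s="ex:DERI", p="ex:basedIn")[0][2]
country = store.match(s=city, p="ex:locatedIn")[0][2]
print(country)  # ex:Ireland
```

The point of the sketch is that a query spanning two datasets becomes a chain of pattern matches once both are expressed as triples in one space.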
Many difficulties are encountered along all three axes of Big Data (Volume, Variety and Velocity), which limit the applicability of established technology. BigGraph is a research project and platform for realizing the vision of Intelligent Society, in which requirements along all three axes can be accommodated. In the Fraud Detection scenario, working with Fujitsu UK & Ireland, BigGraph considers different types of information, links external data sources as streams, and consolidates existing knowledge in order to find previously undiscovered fraudulent anomalies in Big Data.
In recent years, drastic improvements in network and computing speed, combined with the development of sensor technology, have contributed to the growth of Internet-connected devices. As a result, there is strong demand to extract valuable information from the massive amounts of diverse time-series data generated from the Web and other sensors, and to quickly re-use this information in services such as navigation. Fujitsu Laboratories Ltd. has set the achievement of "a human-centric intelligent society" as its vision and aims to establish a cloud platform to support this goal. In this paper, we describe the aim, characteristics, impact and future orientation of the following technologies: an integrated development and runtime platform that supports the utilization of massive amounts of data; parallelism extraction technologies for complex event processing; and distributed complex event processing technology to realize high-speed processing.
In recent years, high-throughput time-series data, such as sensor data and machine logs, have come to be widely used in Real-time Analytical Processing (RTAP). Enterprises increasingly need to save and retrieve vast amounts of such data so that it can be repeatedly analyzed from varying perspectives to promote the discovery of new findings. However, because each data item is generally very small and the data is transmitted repeatedly from many sources, satisfying the throughput and capacity demands with existing storage is quite difficult. In addition, no existing storage provides high-speed access optimized for data arranged in chronological order. We have therefore developed StreamStorage, a high-throughput, scalable storage for streaming data. To achieve high performance and scale-out, a stream is partitioned into blocks and stored in a distributed key-value store, and the stream is reconstructed by gathering and merging blocks upon read access. This paper presents an overview of the architecture of StreamStorage and an example of its application to high-availability RTAP.
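The write/read path described above can be sketched very simply: blocks keyed by (stream ID, block index) go into a key-value store, and a read gathers them back in order. This is a toy model under stated assumptions; a plain dict stands in for the distributed key-value store, and the block size, function names, and stream IDs are all illustrative, not from StreamStorage itself.

```python
# Toy sketch of StreamStorage's approach: partition a stream into
# blocks, store each block under (stream_id, block_index) in a
# key-value store, and reconstruct the stream on read by gathering
# and concatenating the blocks in chronological order.

BLOCK_SIZE = 4  # records per block; real block sizes would be far larger

kv_store = {}       # (stream_id, block_index) -> list of records
block_counts = {}   # stream_id -> number of blocks written so far

def append(stream_id, records):
    """Pack incoming records into blocks and store them."""
    idx = block_counts.get(stream_id, 0)
    for i in range(0, len(records), BLOCK_SIZE):
        kv_store[(stream_id, idx)] = records[i:i + BLOCK_SIZE]
        idx += 1
    block_counts[stream_id] = idx

def read(stream_id):
    """Reconstruct the stream by merging blocks in index order."""
    out = []
    for idx in range(block_counts.get(stream_id, 0)):
        out.extend(kv_store[(stream_id, idx)])
    return out

append("sensor-1", list(range(10)))
print(read("sensor-1"))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Keying blocks by a monotonically increasing index is what makes chronological reads cheap: a sequential range scan over indices, rather than a search, reassembles the stream.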
Fujitsu Laboratories Limited has developed technologies that protect the privacy of sensor data, such as data from intelligent home appliances and location data, from the data collection stage through to the utilization of analytic results. With data leaks and other breaches of privacy during data utilization becoming an increasing cause for concern, the protection of sensitive data is vital. These technologies consist of partial decryption technology, which takes encrypted sensor data and either masks a portion of it, replaces the data with other data, or converts the encryption key, all while maintaining encryption; and anonymous access technology, through which users are able to receive their own analytic results from the utilization service without divulging their ID to the service. These technologies enable users to effectively control their private data within a pool of sensor data, making it safe to use third-party contractors for the processing involved in data utilization. This article describes these newly developed technologies.
Recently, users have come to demand natural user interfaces (NUIs) that let them operate devices naturally, as an alternative to graphical user interfaces (GUIs). At Fujitsu Laboratories, with the aim of achieving an NUI, we are developing touchless user interfaces that monitor users' behavior through intelligent sensing technologies, such as gesture recognition, eye tracking and speech recognition, and then understand users' intent so they can operate a device in a natural way. This is difficult to achieve with individual sensing technologies alone. We have therefore developed a way to ensure reliable device operation by integrating multiple sensing technologies for natural motion detection. The developed interface combines gesture recognition and eye tracking, giving users the feeling that they are interacting with devices more naturally and effectively than they would if only individual sensing technologies were used. This paper gives an overview of the developed sensing technologies, the merits and problems of combining them, the developed interface, and future work.
Recently, opportunities to see videos in various places have increased with faster networks and the spread of digital signage. However, most of these videos are broadcast to viewers non-interactively, and viewers who want information related to a video currently have to enter related keywords into a search site after watching it. So that viewers can easily obtain information related to videos, Fujitsu Laboratories Ltd. has developed a new data transfer technology that enables communication between videos and smart devices by invisibly embedding communication information into the videos and extracting it with the camera application of a smart device. Viewers can acquire information related to a video simply by filming it. In this paper, we introduce an outline and usage scenarios of this data transfer technology via video data.
With the rapid spread of smart devices such as smartphones and tablets, corporate IT systems are increasingly shifting away from PCs towards smart devices. For businesses to take full advantage of smart device capabilities, new security features are needed that differ from those conventionally designed for PCs. Accordingly, Fujitsu Laboratories Ltd. has developed technology for application management and execution that provides flexible and secure use of business applications without sacrificing usability. In this paper, we introduce the concept and technical mechanism along with use cases. This technology provides the secure environment necessary for corporate use by distributing applications to devices according to users' contexts, and by properly protecting and controlling the execution of those distributed applications, which include confidential information.
Recently, sophisticated cyber-attacks such as Advanced Persistent Threats (APTs) have become a menace to enterprise systems. An APT attack usually uses a variety of intelligent techniques to gain access to a specific target, and the number of incidents involving such attacks has increased greatly. Confronting such novel cyber-attacks requires a new approach. Conventional countermeasures walled off the enterprise system to prevent outsiders from intruding. In contrast, the new approach aims to reduce the damage of a cyber-attack through technologies for early detection and avoidance. This paper presents four research activities on cybersecurity based on this approach. It describes endpoint alerting technology that detects a targeted e-mail pretending to be from a close associate by analyzing the e-mail header. It also describes a risk management method that analyzes the individual characteristics and behaviors of people who are easily deceived by fake e-mails. The paper then describes a method for detecting malware's espionage activity in accessing sensitive information inside the enterprise network, and covers an extrusion detection system that detects improper HTTP tunneling communication at the edge of the network. We believe these technologies will allow enterprise systems to reduce the damage they may suffer from a cyber-attack.
Bad weather conditions such as fog, haze, and dust often reduce the performance of outdoor cameras. To improve the visibility of surveillance and on-vehicle cameras, we propose a fast image-defogging method based on the dark channel prior. It first estimates the atmospheric light by searching for the sky area in the foggy image. It then estimates the transmission map by refining a coarse map into a fine one. Finally, it produces a clear image from the foggy image using the atmospheric light and the transmission map. We achieved a processing speed of 50 fps at 720 × 480 resolution with a software implementation on a central processing unit (CPU) combined with a graphics processing unit (GPU). This fast image-defogging method can be used in surveillance and driving systems in real time.
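The three steps above follow the standard dark-channel-prior recovery model. The following is a heavily simplified per-pixel sketch, not the paper's method: the dark channel here is the per-pixel minimum over the color channels (the prior normally uses a local patch minimum), the atmospheric light is taken as the brightest dark-channel value instead of a detected sky region, and the constants and sample pixels are assumptions for illustration.

```python
# Per-pixel sketch of defogging with the dark channel prior.
# Pixel values are floats in [0, 1]; an image is a list of RGB tuples.

OMEGA = 0.95   # keep a little haze so results look natural
T_MIN = 0.1    # lower bound on transmission to avoid amplifying noise

def defog(image):
    # Step 1: estimate atmospheric light A (simplified: brightest
    # per-pixel channel minimum stands in for the detected sky area).
    dark = [min(px) for px in image]
    A = max(dark)
    # Step 2: estimate per-pixel transmission t = 1 - omega * dark / A.
    t = [1.0 - OMEGA * d / A for d in dark]
    # Step 3: recover scene radiance J = (I - A) / max(t, T_MIN) + A.
    return [tuple((c - A) / max(ti, T_MIN) + A for c in px)
            for px, ti in zip(image, t)]

foggy = [(0.8, 0.8, 0.9), (0.6, 0.7, 0.8)]   # two hypothetical pixels
clear = defog(foggy)
```

The clamp `max(t, T_MIN)` is the usual guard in this model: where the estimated transmission approaches zero, dividing by it would blow small sensor noise up into large artifacts.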
When failures occur in a cloud datacenter accommodating a large number of virtual resources, they tend to spread rapidly and widely, impacting many cloud services and their users. One of the best ways to prevent a failure from spreading through the system is to identify signs of the failure in advance and deal with it proactively before it causes serious problems. Although several approaches have been proposed to predict failures by analyzing past logs of system messages and identifying the relationship between the messages and the failures, automatic failure prediction remains difficult for several reasons, such as variation in log message formats and frequent changes in system configurations. Based on this understanding, we propose a new failure prediction method that Fujitsu Laboratories has developed. The method automatically learns message patterns as signs of failure by classifying messages by their similarity regardless of format, and by re-learning the message patterns in frequently changed configurations. We evaluated our method in an actual cloud datacenter. The experimental results showed that our approach predicted failures with 80% precision and 90% recall in the best case.
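One common way to classify messages by similarity regardless of format is to collapse variable fields into wildcards so that messages differing only in IDs or counters share one pattern. The sketch below illustrates that idea under stated assumptions; the `signature` function, the wildcard tokens, and the sample log lines are invented for illustration and are not the paper's actual classifier.

```python
# Group log messages by a format-agnostic signature: digit and hex
# tokens are replaced with wildcards, so the same underlying message
# pattern is recognized even when hostnames, IDs, or counters vary.

import re

def signature(message):
    """Collapse variable fields so similar messages share one pattern."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)
    sig = re.sub(r"\d+", "<NUM>", sig)
    return sig

# Hypothetical log lines observed shortly before past failures.
pre_failure_logs = [
    "disk 3 timeout on node 17",
    "disk 9 timeout on node 02",
    "fan speed normal on node 17",
    "disk 1 timeout on node 44",
]

# Count how often each pattern preceded a failure; frequently
# recurring patterns become learned "signs of failure".
counts = {}
for line in pre_failure_logs:
    sig = signature(line)
    counts[sig] = counts.get(sig, 0) + 1

print(counts["disk <NUM> timeout on node <NUM>"])  # 3
```

Because the signature step is independent of any fixed message format, the same counting loop keeps working after a configuration change, which is the property the method above relies on for re-learning.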
Efficient use of energy in a smart city is essential in order to contribute to the environment and decrease the social costs associated with inefficient energy use. Realizing this requires increases in renewable energy and a change from consumers to prosumers, who not only consume electric power but also generate electricity to sell. This means that variability will increase on both the supply side and the demand side of electric power, making supply and demand difficult to control. To cope with this problem, the power distribution system is changing from a pure load-following generation model to one that incorporates supply-following load control. That is, in next-generation power distribution networks, both power generation and demand should be controlled to maintain the balance. For such a system, Demand Response is a promising technology. In this paper, we present our R&D activities on power demand analysis and forecast technologies based on building simulation, and the development of a DRAS (Demand Response Automation Server) compliant with the international standard OpenADR.
As technology advances to address the various social challenges arising from an aging society, expectations are rising that it can assist both the elderly and patients in maintaining and improving their daily functions. It is envisioned that such technologies will expand the applications of advanced mobile devices, allow wearable sensors to evolve, or use the numerous sensors embedded in the environment; as a result, systems that support people's activities and health in their daily lives will become widespread. In such systems, one important technological element is continuously sensing and understanding the conditions of people and their environment. Realizing this requires a user-friendly approach that lets sensors readily and continuously gather information with hassle-free operation while still capturing the necessary information. This paper presents technologies for readily grasping people's activities and health conditions without hassling them, under the theme of health support in daily life. It covers a technology for detecting a pulse from a facial image, and in-home monitoring with small, lightweight, multifunctional wearable sensors. A joint research project for assisted independent living in a smart house is also described.
Advances in ICT are expected to help people lead a healthy life amidst the many stressors of today's fast-paced society. With smartphones and tablets surpassing desktops and laptops globally in 2013, the next computing revolution, human-centric computing, has already taken root. This shift is made possible by the increasing ubiquity of sensors around us, on us, and even in us. Deploying intelligent, human-centric services using these connected sensors requires advancements in the underlying IT infrastructure itself. In this paper, we describe various novel services built atop a general-purpose mobile platform developed by Fujitsu Laboratories of America, Inc. for continuous mobile monitoring. Our platform was developed with next-generation healthcare services in mind, but applies more broadly as an extensible platform for deploying real-time services that incorporate data from arbitrary sensors. We provide an overview of our platform and highlight a few services that act as new points of contact between a user and the IT infrastructure. In the domain of health and wellness, we show how continuous bio-monitoring allows us to measure stress and enable stress management services. We describe how such services may be used in a typical day, contributing to a new and improved quality of life.
Energy Harvesting Technology obtains electric power from the ambient environment, from sources such as sunshine, machine vibration and heat. It extends the possible range of application of Machine-to-Machine (M2M) wireless sensor networks by offering maintenance-free operation, saving energy, and requiring less wiring. Moreover, Energy Harvesting Technology that achieves battery-less operation and reduces CO2 emissions is useful for the next-generation Smart City and the Sustainable Society. This paper describes the state of research and development at Fujitsu Laboratories on oxide thermoelectric material technology, all-solid-state secondary battery technology, and environmental power generation tester technology, which are used in Energy Harvesting Technology for maintenance-free M2M wireless sensor modules.
Recently, various new information services have emerged to sustain the significant growth of modern applications spearheaded by the cloud and smartphones. These new services rely heavily on the underlying network and data center infrastructures. Thus, core networks must further expand their capacity to support the rapid increase in data traffic. Meanwhile, it is becoming increasingly important for core networks to be more flexible in order to accommodate feature-rich services such as bandwidth-on-demand. Additionally, for the realization of a sustainable human society and eco-friendly IT services, products with low energy consumption are strongly desired. In the near future, conventional optical networks, which operate on a rigid fixed-channel basis, are expected to be replaced by flexible optical networks in which signals can be freely allocated to arbitrary frequency slots of the optical spectrum. This flexibility enables more dynamic and efficient utilization of resources, which in turn leads to lower energy consumption, higher usable capacity, and superior agility, making it possible to provide adaptive networking services based on the dynamism of user requests. In flexible optical networks, due to the frequent setup and teardown of optical signals occupying different spectrum slots, slot utilization can become heavily fragmented. This so-called spectrum fragmentation dramatically degrades resource utilization and reduces usable network capacity. Therefore, spectrum defragmentation technology, which Fujitsu Laboratories has developed, is needed to restore efficient resource utilization by reallocating the fragmented slots to more contiguous ones.
In this paper, we discuss a photonic network defragmentation technology that can improve resource utilization during network operation by continuous and in-sync reconfiguration of flexible optical nodes (transceivers and optical switches, etc.). We show the effectiveness of this technology through network simulations, as well as experimental results of hitless defragmentation.
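The reallocation at the heart of defragmentation can be pictured with a toy repacking strategy: slide every remaining signal down toward the lowest free slots, restoring one contiguous free region. This is only one hedged illustration of the goal; the actual hitless, in-sync reconfiguration discussed above is far more involved, and the function name, slot counts, and signal list below are assumptions for illustration.

```python
# Toy spectrum defragmentation: optical signals occupy frequency-slot
# ranges (start_slot, width). After tear-downs leave scattered gaps,
# repacking the surviving signals toward slot 0, preserving spectral
# order, consolidates the free slots into one contiguous region.

def defragment(signals, total_slots):
    """signals: list of (name, start_slot, width) tuples.
    Returns (repacked signal list, size of contiguous free region)."""
    repacked = []
    next_free = 0
    # Process signals in spectral order, sliding each down to next_free.
    for name, _start, width in sorted(signals, key=lambda s: s[1]):
        repacked.append((name, next_free, width))
        next_free += width
    contiguous_free = total_slots - next_free
    return repacked, contiguous_free

# Fragmented spectrum: three signals scattered across 16 slots.
fragmented = [("s1", 0, 2), ("s2", 5, 3), ("s3", 12, 2)]
repacked, free = defragment(fragmented, total_slots=16)
print(free)  # 9 contiguous free slots instead of scattered gaps
```

Before repacking, the largest contiguous gap here is only 5 slots wide, so a 6-slot demand would be blocked despite 9 slots being free overall; after repacking, the full 9-slot region is usable, which is exactly the capacity loss defragmentation recovers.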
To improve the performance of the storage systems and servers that make up the cloud, high-bandwidth interconnects between systems are essential. Fujitsu has already marketed a CMOS high-speed interconnect product that operates between CPUs in the UNIX server (SPARC M10), with a data rate of 14.5 Gb/s per wire. Fujitsu has recently researched a CMOS interconnect that can operate at over 32 Gb/s per lane to achieve a higher data rate. We investigated new techniques for high-speed interconnects, such as a data-interleaved driver in the transmitter, a wideband loss-compensation equalizer, and a clock and data recovery system using a data interpolator. When these technologies are implemented in a CPU chip, we can expect to double the performance of server systems. This paper introduces the features of ultra-high-speed interconnect technologies for 32 Gb/s serial interconnects implemented in 28 nm CMOS technology.
In the near future, improvements in the performance and integration of servers (for example, blade servers) will expand the amount of data exchanged wherever data connections are required. Furthermore, the data transmission rate is growing to 25 Gbps. These trends will push the interconnect bandwidth in servers to the order of 10 Tbps. We propose data connection using optical fibers to avoid the bandwidth limitation caused by waveform degradation and interference in copper wiring. Optical fibers can transmit data at high speed and excel in wiring density and transmission distance. To apply such fibers to servers, many optical channels must be integrated in the limited space of a server chassis, which requires low-cost and compact optical components. In this paper, we propose optical technology (an optical transceiver module, optical connector and optical mid-plane) that makes low-cost, high-density optical interconnects possible, and we describe a verification of their practical use in trial manufacturing.
In the near future, due to successive increases in the processing capacity of high-end CPUs, a large I/O bandwidth of more than 1 Tbps will be required in HPC systems and high-end servers. For this, optical I/O technology that overcomes the limitations of conventional electrical I/O is attracting much attention. In particular, a large-scale integrated optical I/O chip based on silicon (Si) photonics technology is a very promising candidate for a Tbps-class I/O co-packaged with a CPU. This paper describes the current status of our development of a Si optical transmitter for Tbps-class optical I/O. To place the Tbps-class optical I/O inside a CPU package, we have to develop a low-power-consumption Si optical transmitter that can operate under unstable temperature conditions. We therefore proposed a novel transmitter scheme that enables stable operation of a highly energy-efficient Si ring modulator without a complex wavelength-tuning procedure. With this scheme, we successfully demonstrated wavelength-tuning-free 10 Gbps operation of an integrated Si optical transmitter chip over a temperature range of 25 to 60°C. Additionally, we report recent progress in developing a compact, high-performance Si hybrid laser using precise flip-chip bonding technology, and a 4-ch integrated Si hybrid laser array for a large-capacity coarse wavelength division multiplexing (CWDM) optical transmitter.
Recently, the transmission rate for handheld devices has been increasing with Long Term Evolution (LTE), and baseband LSIs have come to need higher performance. In addition, handheld devices will continue to use second- and third-generation communication methods, so a baseband LSI will need to handle multiple communication methods. Because implementing all the communication circuits results in a large area, we have been developing Software Defined Radio (SDR), which switches between communication methods in software. Implementing SDR in handheld devices requires a high-performance, low-power Digital Signal Processor (DSP). We have developed a DSP that inherits the architecture of vector supercomputers, which has advantages in low power consumption and application development, and we have downsized the vector architecture for embedded systems. The peak performance is 12 GOPS at 250 MHz, and the power consumption is relatively low at an average of 30 mW in 28 nm process technology. This paper presents the vector processor that we developed.
Because of their high breakdown voltage, high switching speed and low on-state resistance (Ron), GaN high-electron-mobility transistors (HEMTs) have come to be widely used as high-frequency power amplifiers in recent wireless communication systems. There are also high expectations for them as next-generation power conversion devices as we aim for a green ICT society. In this paper, we first review high-frequency, high-efficiency GaN HEMTs and their MMIC power amplifier applications. Second, we explain high-efficiency GaN HEMTs on Si for power conversion applications; GaN HEMTs on Si have attracted much attention for their suitability for mass production and their cost performance. We report their power conversion characteristics in a power factor correction (PFC) circuit of a power supply unit for a high-performance server system. Finally, we describe the outlook for GaN HEMT technology as a means of achieving a green ICT society.