NOTE: this is an archived page and the content is likely to be out of date.
Vol. 53, No. 5, September 2017
Fujitsu Laboratories' mission is to drive the growth of the Fujitsu Group by continuously developing emerging technologies with global application aimed at addressing current and future social issues. With this aim in mind, Fujitsu Laboratories is researching and developing technologies ranging from advanced materials, next-generation devices, computers, networks, and information and communications technology (ICT) systems to the creation of next-generation solutions, new services, and novel business models.
This special issue introduces the future perspective of AI technologies covering all layers from hardware and middleware to applications as well as the latest achievements in leading-edge basic research and application examples of ICT such as blockchain technology.
In today's world, there are many situations in which difficult decisions must be made under constraints such as limited resources and limited time. These situations include disaster response planning, economic policy decision-making, and investment portfolio optimization. In such situations, it is often necessary to solve a "combinatorial optimization problem," which involves evaluating different combinations of various factors and selecting the optimum combination. Since the number of combinations increases explosively as the number of factors increases, it becomes difficult to find the best answer in a realistic amount of time using a von Neumann-type processor. To solve such problems quickly, we developed two schemes to speed up computation of a 1024-bit Ising model and implemented them in a field-programmable gate array (FPGA). Testing demonstrated that a system using this architecture can solve a 32-city traveling salesman problem 12,000 times faster than the same algorithm running on a 3.5-GHz Intel Xeon E5-1620 v3 processor.
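The single-spin-flip Metropolis annealing that such Ising hardware accelerates can be sketched in software. This is a generic illustration, not the paper's 1024-bit formulation or its two speed-up schemes; the 3-spin ferromagnetic couplings in the usage example are invented for illustration.

```python
import math
import random

def ising_energy(spins, J, h):
    """E = -sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i, for spins s_i in {-1, +1}."""
    n = len(spins)
    e = -sum(h[i] * spins[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= J[i][j] * spins[i] * spins[j]
    return e

def anneal(J, h, steps=20000, t_start=5.0, t_end=0.01, seed=0):
    """Metropolis annealing with a geometric cooling schedule.
    J is a symmetric coupling matrix; returns the lowest-energy state visited."""
    rng = random.Random(seed)
    n = len(h)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    energy = ising_energy(spins, J, h)
    best_energy, best_spins = energy, spins[:]
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.randrange(n)
        # Flipping s_i changes the energy by 2 * s_i * (h_i + sum_j J_ij * s_j).
        local = h[i] + sum(J[i][j] * spins[j] for j in range(n) if j != i)
        delta = 2 * spins[i] * local
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            spins[i] = -spins[i]
            energy += delta
            if energy < best_energy:
                best_energy, best_spins = energy, spins[:]
    return best_energy, best_spins
```

For a 3-spin ferromagnet (all couplings +1, no external field), the annealer finds the aligned ground state with energy -3; hardware schemes of the kind described above parallelize updates like these across many spins.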
Deep learning, a machine learning method, is attracting more and more attention. Research and development of deep learning have accelerated as it achieves recognition accuracy that far surpasses that of conventional methods based on manually extracted features. Two issues are affecting its practical application: lengthy training times and limited graphics processing unit (GPU) memory. As neural networks are enlarged to achieve higher recognition accuracy, these two issues are becoming more and more serious. In this paper, we introduce three technologies addressing them: distributed parallel processing for faster training, memory-efficiency optimization to make better use of limited GPU memory, and a dedicated hardware engine architecture for data size reduction.
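The core idea of distributed parallel training, data-parallel gradient averaging, can be illustrated with a toy example. Everything here is an illustrative assumption: the linear model stands in for a neural network, and the sequential loop stands in for real parallel workers exchanging gradients via an all-reduce.

```python
def local_gradient(w, b, shard):
    """Gradient of the mean-squared-error loss on one worker's data shard,
    for the model y = w*x + b."""
    gw = gb = 0.0
    for x, y in shard:
        err = (w * x + b) - y
        gw += 2 * err * x / len(shard)
        gb += 2 * err / len(shard)
    return gw, gb

def train_data_parallel(data, workers=4, lr=0.2, steps=500):
    """Split data into equal shards; each 'worker' computes a local gradient,
    gradients are averaged (the all-reduce step), and all workers apply the
    same update, keeping their model replicas in sync."""
    shards = [data[i::workers] for i in range(workers)]
    w = b = 0.0
    for _ in range(steps):
        grads = [local_gradient(w, b, s) for s in shards]  # parallel in practice
        gw = sum(g[0] for g in grads) / workers            # all-reduce: average
        gb = sum(g[1] for g in grads) / workers
        w -= lr * gw
        b -= lr * gb
    return w, b
```

With equal shard sizes, the averaged gradient equals the full-batch gradient, so the parallel run follows the same trajectory as a single-machine run while each worker touches only its own shard.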
Domain-specific computing is one approach to greatly improving server performance by specially designing a server architecture for a particular application domain. A good way to implement such an approach is to use a field-programmable gate array (FPGA), a commodity device in which processing units and memory blocks can be configured depending on the application requirements. We have developed a domain-specific server for media processing that is specialized for high-speed image retrieval by using an FPGA accelerator. We also developed a design support environment that facilitates FPGA architecture design by enabling data dependencies between modules and performance bottlenecks to be visualized. This technology supports efficient design of high-performance domain-specific FPGA accelerators. In this paper, we describe a partial image retrieval accelerator that serves as a key component of a domain-specific server for media processing and its application to a high-speed image-based document search system. We also describe the design support environment that enables such high-performance FPGA accelerators to be designed.
An important problem in information and communications technology (ICT) is classifying graph data that expresses the relationships between people and things. For example, how can cyberattacks be detected using network traffic logs showing the relationships between source IP addresses and destination IP addresses and ports, and how can fraudulent activities be detected using banking transactions showing the relationships between senders, receivers, and bank branches? When classifying large volumes of graph data, however, the partial graphs used in conventional graph learning methods leave many relevant features unexpressed, which limits classification accuracy. We propose "Deep Tensor," a novel tensor decomposition method that enables a deep neural network to automatically extract these features of graph data. Experiments in three different domains demonstrated that this decomposition method achieves high accuracy for various types of graph data and enables interpretation based on the activity of the neural network.
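The first step of such an approach, encoding relational records as a multi-way tensor, can be sketched as follows. The Deep Tensor decomposition itself and the neural network are not shown, and the traffic-log field layout (source, destination, port) is an illustrative assumption.

```python
def graph_to_tensor(edges):
    """Encode (src, dst, port) triples as a dense 3-way binary tensor.
    Each axis is index-mapped from the distinct values observed on it,
    so one entry t[i][j][k] records whether that triple occurred."""
    srcs = sorted({e[0] for e in edges})
    dsts = sorted({e[1] for e in edges})
    ports = sorted({e[2] for e in edges})
    si = {v: i for i, v in enumerate(srcs)}
    di = {v: i for i, v in enumerate(dsts)}
    pi = {v: i for i, v in enumerate(ports)}
    t = [[[0.0] * len(ports) for _ in dsts] for _ in srcs]
    for s, d, p in edges:
        t[si[s]][di[d]][pi[p]] = 1.0
    return t, (srcs, dsts, ports)
```

A classifier then needs a consistent, fixed-shape representation of this tensor across samples, which is the role the tensor decomposition plays before the data reaches the neural network.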
Security games are used for mathematically optimizing security measures aimed at minimizing the effects of criminal activity. Their use has been attracting attention in the fields of artificial intelligence (AI) and multi-agent systems, and they are now being put to practical use by several U.S. public agencies. However, the use of urban network security games to analyze the problem of catching criminals at road checkpoints is difficult, which has hindered their application to city-scale networks. To overcome this difficulty, we have developed min-cut arrangement and graph contraction algorithms. The min-cut arrangement algorithm identifies candidate checkpoint locations that maximize security. The graph contraction algorithm reduces the problem, leading to a dramatic reduction in computational time for urban network security games, for which the computational cost increases exponentially with the size of the problem. In this paper, we introduce these algorithms and present results for a 200,000-node problem centered on the 23 wards of Tokyo.
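The min-cut idea behind checkpoint placement can be illustrated with a textbook max-flow computation (Edmonds-Karp): by the max-flow min-cut theorem, the saturated edges separating the source side from the sink side form a minimum cut, i.e. the cheapest set of roads to block. The toy road graph and capacities below are invented for illustration; the paper's min-cut arrangement and graph contraction algorithms are not reproduced here.

```python
from collections import defaultdict, deque

def max_flow_min_cut(capacity, source, sink):
    """Edmonds-Karp max flow. capacity: {u: {v: cap}} directed graph.
    Returns (max flow value, set of min-cut edges (u, v))."""
    residual = defaultdict(lambda: defaultdict(int))
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] += c
            residual[v]  # make sure every node has an adjacency dict

    def bfs():
        """Shortest augmenting path from source to sink; returns parent map."""
        parent = {source: None}
        q = deque([source])
        while q:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    if v == sink:
                        return parent
                    q.append(v)
        return None

    flow = 0
    while True:
        parent = bfs()
        if parent is None:
            break
        # Find the bottleneck capacity along the augmenting path.
        v, path, bottleneck = sink, [], float("inf")
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            path.append((u, v))
            v = u
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

    # Min cut: edges from the source-reachable side to the rest.
    reach, q = {source}, deque([source])
    while q:
        u = q.popleft()
        for v, c in residual[u].items():
            if c > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    cut = {(u, v) for u in capacity for v in capacity[u]
           if u in reach and v not in reach}
    return flow, cut
```

In the checkpoint reading, nodes are intersections, capacities reflect how hard a road is to monitor, and the min-cut edges are the candidate checkpoint locations that separate the criminal's start from the escape target.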
The growing use of smartphones and social media is complicating and diversifying the purchasing behaviors of customers. They can now access products and services at various customer contact points such as physical stores and e-commerce sites. Furthermore, digital marketing through smartphones that reflects customer concerns and recommends products that suit individual customers is growing steadily. Of particular interest is the growing use of omni-channel retailing, which enables customer data collected at various contact points to be used collectively, thereby improving service throughout the channels. Given this background, Fujitsu Laboratories is researching "Affective Digital Marketing," which estimates the customer's state of mind (concerns, satisfaction, etc.), motivates the customer in accordance with his or her stage of purchasing and/or state of mind, and optimizes the customer's experience. In this paper, we explain the technical features of three media processing technologies that support Affective Digital Marketing: "Advertising Copy Creation Assistive Technology," "Touch Emotion Analysis Technology," and "Speech Emotion Analysis Technology." We also introduce applied examples in the field of digital marketing.
Communication robots capable of talking to users have been put into practical use, and a movement is underway to introduce them into businesses. However, because most communication robots work only in response to user instructions, users have to learn the instruction set of each robot. To overcome this problem, Fujitsu Laboratories developed service robot platform technologies that enable robots to actively talk to users and introduce potentially useful services. With this platform, once many applications have been deployed, a robot chooses one that matches the interests and concerns of the customer and the circumstances of the interaction. The robot senses the customer's reactions and utilizes them to introduce more suitable choices. This platform enables service providers to enhance customer contact points, where customers and services come together, by presenting various services. This paper explains the service robot platform technologies developed by Fujitsu Laboratories that lead a customer to appropriate services by using the customer's preferences and circumstances as obtained through interactions between the user and the robot.
Blockchain technology, which supports low-cost decentralized distributed data management featuring tamper resistance, high availability, and transparency, is a breakthrough technology that will lead to the next generation of information and communications technology (ICT). Originally devised to support the Bitcoin digital currency, it is expected to be applied to a broad range of financial applications as well as in various other sectors such as distribution and sharing economies. This broader application requires that several technical challenges including data privacy protection and better processing performance be addressed, and Fujitsu Laboratories is working on several relevant R&D projects. This paper introduces blockchain technology and example applications, describes a technology for achieving security in a business context, and examines Fujitsu's efforts in commercializing this technology and an accompanying service as well as the open source software (OSS) project.
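The tamper resistance mentioned above comes from hash chaining: each block commits to the hash of its predecessor, so altering any earlier block invalidates every later hash. The sketch below is a toy model of that mechanism only, not Fujitsu's implementation; consensus and proof-of-work are omitted.

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Build a block whose hash covers both its payload and the previous
    block's hash, linking the chain together."""
    body = {"transactions": transactions, "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain):
    """Recompute every hash and check each link to the previous block."""
    for i, block in enumerate(chain):
        body = {"transactions": block["transactions"],
                "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

Because any participant can recompute the hashes, tampering with a recorded transaction is immediately detectable, which is the property the business applications above rely on.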
The number of targeted attacks aimed at stealing information from government and municipal offices, specific enterprises, and individuals is growing year by year, and the attack methods are becoming increasingly sophisticated. In a targeted attack, the attacker persistently attacks the target after thoroughly investigating it beforehand. The risk of malware (malicious and illegal code) infecting an internal network is thus increasing. There is therefore a pressing need for countermeasures against malware infection that detect attack activity as soon as possible and respond effectively to prevent or minimize damage before the attack proceeds further. We have developed high-speed forensic technology that promptly analyzes the situation after an attack has been detected, an analysis that previously took a long time. Application of this high-speed forensic technology enables comprehensive countermeasures to be implemented promptly before the damage spreads.
As business digitalization continues to accelerate, adapting existing enterprise systems to changing business practices and advances in information and communications technology (ICT) has become a significant problem. Not only must current systems be virtualized for execution on cloud computing systems, but software must also be restructured with higher flexibility to meet expanding business requirements, such as coordination with other services. However, it is typically not feasible to re-implement an entire system due to the high cost and risk of system malfunction. In this paper, we propose an approach to transforming a system in order to enhance its flexibility and expandability. Our approach works by extracting each part of the system individually, analyzing its characteristics, and identifying an appropriate implementation strategy based on those characteristics. Three techniques are used to support this approach. First, the structure of the system and the relationships between functions are visualized by analyzing the program files. Next, each program is characterized in terms of business logic complexity, update frequency, and other metrics. The feature values obtained are represented as the heights of the corresponding structures on a software map and are used to characterize and prioritize functions or programs. Finally, the identified functions are analyzed using symbolic execution, and the rules and calculation methods used in the business are extracted as decision tables in a readable format. This approach enables an existing system, the system of record (SoR), to be transformed by extracting its features and identifying the best solution for each feature, such as defining it as a service, using it with a business rules management system (BRMS), or using the program as is.
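A decision table of the kind such analysis would output can be sketched as follows. The discount rules, condition names, and action labels are entirely hypothetical, and the symbolic execution that would derive the rows from legacy code is not shown.

```python
# Each row pairs a complete assignment of condition outcomes with an action,
# mirroring the readable tables that rule extraction produces.
DISCOUNT_TABLE = [
    ({"member": True,  "total >= 100": True},  "discount_15"),
    ({"member": True,  "total >= 100": False}, "discount_5"),
    ({"member": False, "total >= 100": True},  "discount_5"),
    ({"member": False, "total >= 100": False}, "no_discount"),
]

def evaluate(table, member, total):
    """Evaluate the facts for one transaction and return the first matching
    row's action; a complete table matches every possible input."""
    facts = {"member": member, "total >= 100": total >= 100}
    for conditions, action in table:
        if all(facts[c] == v for c, v in conditions.items()):
            return action
    raise ValueError("incomplete decision table")
```

Once business logic lives in a table like this rather than buried in program code, it can be reviewed by non-programmers and handed to a business rules management system (BRMS) instead of being re-implemented.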
The emergence of 5th generation mobile networks (5G) and Internet of Things (IoT) technologies, as well as the continuous evolution of cloud services, is creating an increasing need for easy and on-demand creation of network slices, which are logically isolated (closed) network spaces spanning user devices and the cloud, tailored for individual businesses and services. While creating network slices within a single network infrastructure using software-defined networking (SDN) is possible, creating an end-to-end network slice across multiple network infrastructures remains a challenging task. Skilled network engineers must configure each infrastructure using different protocols and procedures and configure an array of virtual private networks (VPNs) for interconnecting them. In this paper, we explain the issues associated with the creation, operation, and management of end-to-end network slices and our approach to addressing them by using virtualization and softwarization. Specifically, we present the One Network Architecture, which virtualizes multiple network infrastructures as one infrastructure and facilitates the creation of network slices on top of it. We also discuss several of the latest research topics such as information centric networking technology deployed over such network slices.
Enhancing the capacity of optical communication networks is essential to achieving a hyper-connected world in which people, information, and things are connected and to enabling continued development of information and communications technologies (ICT) such as the Internet of Things (IoT), big data, artificial intelligence, and 5G mobile communications. In particular, increasing optical fiber transmission capacity to more than 100 Tbps by 2020 or shortly thereafter is needed to handle the ever-increasing volume of digital data traffic. Since conventional technologies are getting close to the transmission limit, technological breakthroughs enabling higher capacity must be made. Given this requirement, we are researching and developing key technologies for optical transceivers and optical nodes that will enable transmission capacity to be increased. In this paper, we introduce our recent advances in optical modulation and demodulation technologies for sending and receiving large-capacity signals and in optical node technologies for achieving energy-saving broadband optical signal switching.