Today, accelerating digitalization is making the connections among companies, individuals, businesses, and systems increasingly complex, while the amount of generated data grows at an explosive rate. As a result, it is becoming extremely difficult to verify the quality, authenticity, and qualification of all sorts of interconnected elements. To achieve trust in a digital era of highly dispersed and complex relationships, it is important to use ICT to simplify the verification mechanism.
This issue introduces digital technologies to provide our customers with an environment of trust, the latest technologies for continuously verifying the level of trust in ICT and people, and cutting-edge technologies essential to achieving trust in the digital era.
Digital transformation is changing the world continuously and at unprecedented speed, giving rise to a paradigm distinct from the past. Companies, individuals, businesses, and systems are intricately connected, which requires the handling of rapidly increasing volumes of data. As a result, verifying the quality, authenticity, and eligibility of all these elements has grown difficult. To resolve this issue, it is important to have ICT simplify the mechanism for achieving "trust" in a digital era whose relationships are distributed and correlated in a complicated manner. This paper presents Fujitsu Laboratories' approach to using advanced technology in the realization and social implementation of "trust" in the digital era, which has taken on more significance than ever before in our increasingly complex world.
Trust plays an important role in maintaining the norms of nations and communities. When trust is present, norms in communities are maintained and civic society functions smoothly. Trust can also reduce the complexity of social mechanisms, which allows things to run smoothly in societies, in companies, or even among friends. Furthermore, trust contributes to future prospects as well as to the present. On the other hand, trust does not necessarily deepen indefinitely over time; it may be formed or lost. This paper first reviews the meaning of trust and then introduces some prior studies, including the work of Niklas Luhmann, to discuss the role of relationships and trust in achieving a digital society. It then goes on to describe how trust is changing at Fujitsu.
While AI technology provides significant benefits, downsides of AI have also been reported, including cases in which AI trained on biased data makes unfair decisions. To prevent further advancements in the technology from causing serious side effects, AI ethics are being discussed and many organizations and companies have recently announced new guidelines regarding AI. When Fujitsu released FUJITSU Human Centric AI Zinrai in 2015, we proposed "collaborative, human centric AI" and have paid attention to the ethical aspects of AI ever since. In March 2019, we announced the Fujitsu Group AI Commitment, which expresses our approach to AI ethics in a concrete and easy-to-understand manner based on the results of over 30 years of R&D and social implementation of AI. The promises of the Fujitsu Group AI Commitment were formulated in cooperation with AI4People, Europe's expert forum, and are in accordance with the AI ethics principles that the forum advocates. They are Fujitsu's message to its major stakeholders, including customers, people, society, shareholders, and employees. This paper presents the social trends related to AI ethics and describes the basic concept of Fujitsu's AI Commitment.
As expectations for data utilization are rising around the world, the Japanese Government is promoting information banks, which are businesses that facilitate the secure distribution and utilization of personal data, expressly with the individual's involvement. To ensure the reliability of these information banks, the government is encouraging the private sector to establish a certification system. In response, the private sector has launched projects to review and certify information bank businesses. In view of this worldwide trend, Fujitsu Laboratories has developed privacy risk assessment technology that makes it possible to manage data by quantifying data leakage risks, and consent management technology that is capable of aggregating and managing individual consent for distributed personal data. These technologies allow for secure distribution of personal data and contribute to the realization of a data-driven society that gives consideration to privacy. This paper provides overviews of the two developed technologies, example applications of privacy risk assessment technology, and a performance evaluation of consent management technology.
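The abstract does not specify which leakage-risk metric the privacy risk assessment technology quantifies, but k-anonymity is a standard measure of re-identification risk and illustrates the idea: the fewer records that share the same combination of quasi-identifiers, the easier an individual is to single out. The dataset and field names below are invented for the sketch.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the chosen quasi-identifiers.
    A lower k means a higher re-identification (leakage) risk."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Toy personal-data table; "zip" and "age" act as quasi-identifiers.
records = [
    {"zip": "100", "age": 34, "diagnosis": "flu"},
    {"zip": "100", "age": 34, "diagnosis": "cold"},
    {"zip": "200", "age": 51, "diagnosis": "flu"},
]
# The zip=200 record is unique on (zip, age), so k = 1: high risk.
print(k_anonymity(records, ["zip", "age"]))
```

A risk manager could require, say, k ≥ 5 before a dataset is released for distribution; generalizing or suppressing values raises k at the cost of utility.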
Blockchain was devised as the fundamental technology underpinning Bitcoin. It ensures reliability in a decentralized manner, and expectations are high for its application in various areas beyond virtual currency, such as real estate and healthcare. However, any problem in the boundaries between the elements that constitute an entire system, or in the smart contracts that are often introduced as subsystems, may lead directly to significant losses in business, such as the theft of the virtual currency managed by a blockchain. Accordingly, the reliability of the system as a whole, including applications, needs to be improved. Fujitsu Laboratories has developed a threat analysis method that checks a blockchain system for any problems at the time of its construction and operation, and a smart contract verification technology that exhaustively detects threats in smart contracts by using static analysis technology. These will enable blockchain developers to quickly develop systems that use blockchains with greater security. This paper describes the threat analysis method and smart contract verification technology that we have developed.
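Fujitsu's verification technology itself is not public, but the flavor of static analysis on smart contracts can be sketched with a naive pattern check for the classic reentrancy hazard: an external value transfer that happens before the contract updates its own balance bookkeeping. The function and the Solidity snippet below are illustrative assumptions, not the actual analyzer.

```python
import re

def flag_reentrancy(source: str) -> bool:
    """Naive static check: an external value transfer followed by a later
    write to a balances mapping suggests a reentrancy hazard (the
    checks-effects-interactions pattern is violated)."""
    call_at = None
    for i, line in enumerate(source.splitlines()):
        if re.search(r"\.call\{value:", line) or ".call.value(" in line:
            call_at = i  # external call seen on this line
        if call_at is not None and i > call_at and \
                re.search(r"balances\[[^\]]+\]\s*[-+]?=", line):
            return True  # state update AFTER the external call
    return False

vulnerable = """
function withdraw(uint amount) public {
    (bool ok, ) = msg.sender.call{value: amount}("");
    balances[msg.sender] -= amount;
}
"""
print(flag_reentrancy(vulnerable))  # True
```

A real analyzer works on the control-flow graph rather than raw text, which is what allows the exhaustive detection the abstract describes; this sketch only conveys the kind of property being checked.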
In recent years, business and lifestyles have been undergoing significant changes with the progress of networks and ICT. Fundraising and information provision, conventionally difficult for individuals, have become easier with the use of affiliate marketing, crowdfunding, websites, social media, and so on. In addition, the emergence of the sharing economy has made various services available at low cost. As this trend becomes more widespread, however, problems between the people involved in transactions and communication on the Internet are emerging. In response to this issue, Fujitsu Laboratories has developed an ICT risk assessment technology for quantifying users' ICT risks based on the psychological and behavioral characteristics observed in people who have experienced harm from cyber attacks, as well as a reliability scoring technology for expanding the scope of application of this assessment technology to various other risks besides ICT risks. These technologies allow for the scoring of a variety of risks, including delays and inadequate service provision even for people with whom transactions are conducted for the first time. The scores obtained in this way can be utilized to develop and implement measures aimed at preventing problems. This paper describes these ICT risk assessment and reliability scoring technologies as well as examples of their application.
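One simple way to turn psychological and behavioral indicators into a quantitative score, as the abstract describes, is a logistic model over weighted features. The feature names, weights, and bias below are entirely hypothetical; the paper's actual model and feature set are not given in the abstract.

```python
import math

# Hypothetical behavioral indicators and weights -- illustrative only.
WEIGHTS = {
    "clicks_unverified_links": 0.8,
    "reuses_passwords": 0.6,
    "past_incident_reports": 1.2,
}
BIAS = -2.0

def ict_risk_score(features: dict) -> float:
    """Map behavioral indicators to a 0-1 risk score with a logistic model."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

cautious = ict_risk_score({"clicks_unverified_links": 0,
                           "reuses_passwords": 0,
                           "past_incident_reports": 0})
risky = ict_risk_score({"clicks_unverified_links": 1,
                        "reuses_passwords": 1,
                        "past_incident_reports": 1})
print(round(cautious, 3), round(risky, 3))
```

Extending the same scoring scheme to non-ICT risks, such as delivery delays or inadequate service, is then a matter of swapping in a different feature set and fitted weights.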
While AI performance continues to improve, the inferences it presents may not give experts sufficient information for decision-making in practical applications. This lack of assured reliability has driven R&D on explainable AI on a global scale in recent years. In response to this problem, Fujitsu has achieved the world's first "trustworthy and explainable AI" that makes use of knowledge graphs, which can represent expertise in a systematic manner, to explain the reason and basis for an inference. This provides users with sufficient information to make decisions and improves AI reliability. This paper outlines knowledge graphs as a basis for Fujitsu's trustworthy and explainable AI. It also introduces application examples in the fields of finance and chemistry.
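A knowledge graph stores expertise as subject-relation-object triples, and one simple way to produce a "reason and basis" is to return the chain of relations linking two entities. The toy triples and entity names below are invented; the structure of Fujitsu's actual knowledge graphs is not described in the abstract.

```python
from collections import deque

# Toy knowledge graph as (subject, relation, object) triples.
triples = [
    ("CompanyA", "late_payment", "Q3_invoice"),
    ("Q3_invoice", "issued_by", "SupplierB"),
    ("CompanyA", "sector", "retail"),
]

def explain_path(start, goal):
    """Breadth-first search returning the chain of relations linking two
    entities, usable as a human-readable basis for an inference."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for s, r, o in triples:
            if s == node and o not in seen:
                seen.add(o)
                queue.append((o, path + [(s, r, o)]))
    return None  # no connection found

print(explain_path("CompanyA", "SupplierB"))
```

The returned path reads as an explanation: CompanyA is linked to SupplierB via a late payment on an invoice SupplierB issued.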
With the recent emergence of the AI black box problem, the fairness, accountability, and transparency (FAT) of AI have been vigorously debated. Companies and academic institutions have developed various types of explainable AI. Fujitsu Laboratories has developed a new technology, Wide Learning, which is not only capable of dealing with the black box problem but also offers explainability that achieves knowledge discovery. Wide Learning uses "enumeration," a major technique of discovery science, to exhaustively enumerate pieces of knowledge, called knowledge chunks, described in a human-understandable format. By using this knowledge for judgment, the overlooking of potential candidates is significantly reduced and high-accuracy prediction and classification are achieved. If knowledge can be discovered continuously and without omission, the trust placed in service systems can be strengthened, and reusing that knowledge should in turn build trust between service systems. This paper first outlines the history of how Wide Learning came into being and introduces its technical characteristics. It then reports on a demonstration of knowledge discovery using Wide Learning in Fujitsu and presents future prospects.
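The enumeration idea can be sketched concretely: exhaustively generate combinations of feature values, and keep as "knowledge chunks" those combinations whose matching rows are mostly positive and that are supported by enough examples. The dataset, thresholds, and chunk format below are invented for the sketch, not Wide Learning's actual representation.

```python
from itertools import combinations

# Toy dataset: each row is (features, label). Illustrative only.
data = [
    ({"age": "young", "owns_car": "no",  "region": "east"}, 1),
    ({"age": "young", "owns_car": "no",  "region": "west"}, 1),
    ({"age": "old",   "owns_car": "yes", "region": "east"}, 0),
    ({"age": "young", "owns_car": "yes", "region": "west"}, 0),
]

def enumerate_chunks(rows, max_len=2, min_support=2, min_hit=0.9):
    """Exhaustively enumerate feature-value combinations ('knowledge
    chunks') matched by enough rows, most of which are positive."""
    chunks = []
    items = sorted({(k, v) for feats, _ in rows for k, v in feats.items()})
    for n in range(1, max_len + 1):
        for combo in combinations(items, n):
            matched = [y for feats, y in rows
                       if all(feats.get(k) == v for k, v in combo)]
            if len(matched) >= min_support and \
                    sum(matched) / len(matched) >= min_hit:
                chunks.append(combo)
    return chunks

for chunk in enumerate_chunks(data):
    print(chunk)  # e.g. (('owns_car', 'no'),) -- readable as a rule
```

Because every combination up to the length limit is examined, no qualifying rule is overlooked, which is exactly the "knowledge without omission" property the abstract emphasizes; the cost is combinatorial, which is why enumeration techniques from discovery science matter.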
Today, companies are overflowing with a variety of data from both inside and outside their organizations. Recently, there is a growing movement to drive the co-creation of innovative digital businesses by distributing and utilizing, between different companies, valuable data that no single company could obtain on its own. The realization of such a world, however, requires the data shared between companies to be reliable. Specifically, it must be possible to verify where the data have come from and how they have been processed and, when personal data are included, whether or not individuals have given consent for the data to be provided. Fujitsu Laboratories is moving ahead with research and development relating to data distribution and utilization technology that resolves these issues and enables the safe use of data. This paper first presents the issues relating to data reliability in data distribution. Then, it describes the Fujitsu-developed Chain Data Lineage technology, which improves the reliability of data exchanged across different categories of business and industries, together with examples of its application.
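Verifying where data came from and how they were processed is commonly done by hash-chaining lineage records, so that tampering with any upstream event invalidates every later link. The record schema and event fields below are assumptions for illustration; they are not Chain Data Lineage's actual format.

```python
import hashlib
import json

def record(prev_hash, event):
    """Append a lineage event whose hash covers the previous record's
    hash, chaining the provenance history together."""
    body = {"prev": prev_hash, "event": event}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"hash": digest, **body}

genesis = record("0" * 64, {"op": "collected", "source": "sensor-17",
                            "consent": True})
step2 = record(genesis["hash"], {"op": "anonymized", "by": "CompanyA"})
step3 = record(step2["hash"], {"op": "shared", "with": "CompanyB"})

def verify(chain):
    """Recompute each hash to confirm the lineage has not been altered."""
    prev = "0" * 64
    for rec in chain:
        body = {"prev": rec["prev"], "event": rec["event"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

print(verify([genesis, step2, step3]))  # True
```

A receiving company can rerun `verify` on the chain accompanying a dataset to confirm its processing history, including the consent recorded at collection time, before using it.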
In order to realize and promote a cashless society, payment using only the biometric information of an individual without the need for identification by ID card or smartphone is attracting attention. In cases where authentication on a scale of one million users is assumed, such as payments at brick-and-mortar stores and admission to event venues, it is necessary to search for and authenticate a single individual from a huge volume of personal data. However, high-speed searches of large volumes of data were difficult with one biometric modality alone. For that reason, inputting additional information such as IDs and personal identification numbers (PINs) was necessary, which compromised user convenience. Accordingly, Fujitsu Laboratories has developed an integrated biometric authentication technology that combines palm vein and face recognition. This technology uses facial images effortlessly captured during payment terminal use to narrow down the possible matches, thereby allowing the number of system users to be increased without the need for the input of information other than biometric data. This paper describes the present state and issues of payments using biometric authentication, and the integrated biometric authentication technology developed.
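The two-stage structure the abstract describes, using a face image to narrow the candidate list and palm veins to make the final decision, can be sketched as follows. All scores, thresholds, and names are made up for the sketch; real systems compare feature templates, not precomputed similarity numbers.

```python
# Hypothetical per-user similarity scores against the probe (0..1).
enrolled = {
    "alice": {"face": 0.91, "vein": 0.97},
    "bob":   {"face": 0.40, "vein": 0.30},
    "carol": {"face": 0.88, "vein": 0.35},
}

def identify(face_scores, vein_scores, face_top=2, vein_threshold=0.9):
    """Two-stage 1:N identification: a fast, coarse face match narrows
    the shortlist; a precise palm-vein match decides."""
    # Stage 1: keep only the top candidates by face similarity.
    shortlist = sorted(face_scores, key=face_scores.get,
                       reverse=True)[:face_top]
    # Stage 2: match palm veins against the shortlist only.
    best = max(shortlist, key=lambda u: vein_scores[u])
    return best if vein_scores[best] >= vein_threshold else None

face = {u: s["face"] for u, s in enrolled.items()}
vein = {u: s["vein"] for u, s in enrolled.items()}
print(identify(face, vein))  # alice
```

The design point is that the expensive, high-precision vein comparison runs against only a handful of candidates instead of a million, which is what removes the need for IDs or PINs at scale.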
Recently, a movement is gaining momentum to create business reforms and innovations by storing and utilizing the large volumes of data generated at various sites. In addition to the structured data handled in conventional databases, advancements are being made in the use of video and other unstructured data. There are three requirements for efficient utilization of unstructured data: data processing performance, cost-performance ratio, and data management. Fujitsu Laboratories is working on the R&D of "Dataffinic Computing," which uses distributed storage systems to achieve high-speed processing of large volumes of data, as a data-centric architecture that supports the utilization of massive volumes of data. This paper outlines data neighborhood processing technology, large-volume memory technology, and high-speed thin client technology, which are elemental technologies of Dataffinic Computing. It also presents an approach to video monitoring systems as an application example.
Systems of engagement (SoE) are ICT systems for creating and maintaining connections between companies and customers and building trust. SoE must be able to offer flexibility and quickness in responding to changes in business, in addition to availability comparable to that of systems of record (SoR), the ICT systems that support mission-critical operations in an organization. For services provided by SoE to be used by more users, a system must satisfy the security and integrity requirements that give users a greater sense of security. However, service developers generally find it difficult to grasp the configuration and requirements of an entire system, and redesigning a system for security and performance every time a service is improved has been impractical. Therefore, Fujitsu Laboratories has developed a system for building SoE and supporting their stable operation, so that such service developers can quickly and flexibly provide a secure service in a multi-cloud environment. This paper presents network as code (NaC) and network verification technologies that assist even those service developers who lack an understanding of the entire system with the construction of a secure system. It also describes traffic prediction and anomaly detection technologies, which promptly recognize any failures and performance degradation in an SoE system to allow for the provision of integrity.
With digital innovation, companies, individuals, businesses, systems, and so on, are expected to connect organically through data. The key to realizing these connections is to create cyber-physical systems (CPS) that create stable connections between physical space (real space), where people and things exist, and cyberspace (virtual space), where processing is performed. In particular, connecting mobile entities such as people, things, robots, and cars to cyberspace requires the utilization of wireless technology. Wireless technology is evolving and diversifying on a daily basis, improving its convenience. At the same time, expertise in wireless technology is needed to obtain sufficient performance in accordance with the target application. Fujitsu Laboratories has been developing technology that enables wireless network managers without advanced expertise to easily design, construct, and operate wireless networks. This paper describes automatic design technology, interference control technology, and our approach to autonomous operation, which provide anyone with the ability to easily execute each phase of the design, construction, and operation of wireless networks.
Inference results provided by AI require accountability in terms of the reasons and basis behind the inference. For AI to be accepted in areas where accountability is needed, such as the medical and financial sectors in particular, the reasons and basis for an inference must be shown to earn sufficient trust. Unfortunately, explaining the reasons or basis for an inference is difficult for many of the AI methods that provide highly accurate inferences, such as deep learning. Deep Tensor, an AI technology developed by Fujitsu Laboratories, is capable of highly accurate analysis of graph data representing connections between people and things. With Deep Tensor, accounting for the reasons behind inferences is still an important issue. Accordingly, Fujitsu Laboratories has developed inference factor identification technology as a means of resolving this issue. The technology uses feature values called core tensors generated by Deep Tensor to indicate which elements of graph data contributed to the results of an inference, thereby providing an explanation. This paper describes inference factor identification technology and presents examples of its application in the medical and financial sectors.
In recent years, AI technologies have been applied to various fields. At the same time, the difficulty in appropriately selecting AI technologies on the basis of actual business issues and implementing them in society is also becoming clear. In addition to the provision of reliable AI solutions come security issues and other new problems. The key to solving these issues is co-creation, where a company with a wealth of business knowledge suited for business issues cooperates with another company that creates AI technologies to explore possible solutions. Fujitsu Laboratories has developed AIEcosystem, a platform that closely connects sites that create AI technology with business sites to facilitate quick provision of AI solutions. AIEcosystem is characterized by its capability of storing AI technologies and past case examples in forms that actually function as knowledge and holding trials immediately without the need for implementation and deployment. This paper provides a technology outline of AIEcosystem and discusses the platform's usefulness.