Fujitsu Research of America
AI Lab
Collaborating with people to create new value
Our focus is on realizing a sustainable society by developing cutting-edge AI technologies that not only create new value but also contribute to the transformation of society and business.
Our research focuses on the next frontier: developing self-improving machine learning systems that leverage AI to optimize and enhance AI itself. Our core thesis is that by enabling AI systems to autonomously refine their own architecture, learning algorithms, and data utilization, we can achieve orders-of-magnitude improvements over current methodologies.
An AI system is built upon three fundamental layers: the infrastructure layer, encompassing both hardware and software infrastructure; the modeling layer, which defines model architectures and training methodologies; and the data layer, where data selection and supervision strategies come into play. Traditionally, each of these layers relies heavily on human intervention, demanding specialized expertise and manual tuning. Yet, self-improving ML systems will be able to optimize these elements with unparalleled precision and efficiency, surpassing the capabilities of human experts.
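To make the three-layer picture concrete, the sketch below shows how a self-improving loop might treat infrastructure, modeling, and data choices as a single searchable configuration that the system perturbs and evaluates on its own. It is a minimal, illustrative toy (simple hill climbing), not our actual system; the names LayerConfig, evaluate, and propose_mutation are hypothetical.

```python
# Illustrative sketch only: a self-improvement loop over the three layers.
# All names here (LayerConfig, evaluate, propose_mutation) are hypothetical.
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class LayerConfig:
    batch_size: int        # infrastructure layer: hardware/software settings
    learning_rate: float   # modeling layer: architecture and training choices
    data_fraction: float   # data layer: data selection / supervision strategy

def evaluate(cfg: LayerConfig) -> float:
    """Stand-in for training plus validation; returns a score to maximize."""
    return (-abs(cfg.batch_size - 64) / 64
            - abs(cfg.learning_rate - 3e-4) * 1e3
            - abs(cfg.data_fraction - 0.8))

def propose_mutation(cfg: LayerConfig) -> LayerConfig:
    """The system perturbs one of its own layers instead of a human tuning it."""
    layer = random.choice(["infra", "model", "data"])
    if layer == "infra":
        return replace(cfg, batch_size=max(1, cfg.batch_size + random.choice([-16, 16])))
    if layer == "model":
        return replace(cfg, learning_rate=cfg.learning_rate * random.choice([0.5, 2.0]))
    return replace(cfg, data_fraction=min(1.0, max(0.1, cfg.data_fraction + random.uniform(-0.1, 0.1))))

best = LayerConfig(batch_size=128, learning_rate=1e-2, data_fraction=0.5)
best_score = evaluate(best)
for _ in range(200):                      # self-improvement loop
    candidate = propose_mutation(best)
    score = evaluate(candidate)
    if score > best_score:                # keep configurations that improve the system
        best, best_score = candidate, score
print(best, best_score)
```

In practice the proposal and evaluation steps would themselves be learned components rather than the random perturbations and analytic score used in this toy, which is exactly the gap our research on self-improving ML aims to close.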
By pioneering self-improving ML, we aim to make AI not just a powerful tool for automatically solving today’s challenges but a dynamic entity that can evolve independently to tackle the complex problems of tomorrow. This evolution goes beyond technological advancement; we believe it can drive meaningful change. Our research contributes to more sustainable use of resources, reduces energy consumption, and ultimately enables AI to address environmental and societal challenges more effectively. Through this work, we aspire to harness AI’s potential to make the world not only smarter but also more sustainable for future generations.
Researchers in the AI Lab
Avraham Cooper (Avi): Self-improving ML, multi-modal foundation models, large-scale AI system design
Hiromichi Kobashi: Machine Learning (ML), distributed systems, databases
Ian Mason: Self-improving ML, continual learning, foundation models
Jin Yamanaka: Machine Learning (ML), Computer Vision
Kanji Uchino: NLP (information retrieval, semantic web), open education, graph AI, AI ethics, AutoML
Kasper Vinken: Self-improving ML, foundational models, neuroscience
Katie Hahm: Self-improving ML, computer vision, natural language processing
Lei Liu: Natural Language Processing (NLP), Machine Learning (ML), communication networks
Maria Xenochristou: Machine Learning (ML), Computer Vision, multimodal learning
Mehdi Bahrami: Self-improving ML, natural language processing, computer vision
Ramya Srinivasan: Machine Learning (ML), computer vision, AI ethics, Natural Language Processing (NLP), AI and creativity
Shailaja Sampat: Natural Language Processing (NLP), Computer Vision, multimodal learning
So Hasegawa: Machine Learning (ML), Computer Vision, generative models
Trishala Ahalpara: Machine Learning (ML), customer segmentation, AutoML, bias in AI, feature selection
Wei-Peng Chen: Machine Learning (ML), optimization, mobile networks
Will Xiao: Self-improving ML, neuroscience, AI for science
Wing Yee Au: Machine Learning (ML), relational and graph data
Xavier Boix: Director of Research, The Self-Improving Machine Learning group
Publications
- Lei Liu, So Hasegawa, Shailaja Keyur Sampat, Maria Xenochristou, Wei-Peng Chen, Takashi Kato, Taisei Kakibuchi, Tatsuya Asai, "AutoDW: Automatic Data Wrangling Leveraging Large Language Models", ASE '24: Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering 2024
- Anay Majee, Maria Xenochristou, Wei-Peng Chen, "TabGLM: Tabular Graph Language Model for Learning Transferable Representations through Multi-Modal Consistency Minimization", Proceedings of the AAAI Conference on Artificial Intelligence, 2025
- Ece Ozkan, Xavier Boix, "Multi-domain improves classification in out-of-distribution and data-limited scenarios for medical image analysis", Scientific Reports
- Amir Rahimi, Vanessa D'Amario, Moyuru Yamada, Kentaro Takemoto, Tomotake Sasaki, Xavier Boix, "D3: Data Diversity Design for Systematic Generalization in Visual Question Answering", Transactions on Machine Learning Research (TMLR)
- Moyuru Yamada, Vanessa D'Amario, Kentaro Takemoto, Xavier Boix*, Tomotake Sasaki* (*equal authorship), "Transformer Module Networks for Systematic Generalization in Visual Question Answering", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)