AI Validated Designs
A Clear Path from Data to Solution.

Demystifying AI - supported by AI Benchmarks and Validated Designs for transparent AI adoption
In a move aimed at providing unparalleled transparency and insights into the world of Large Language Models (LLMs), Fsas Technologies, in collaboration with investment partners Intel, NVIDIA, and NetApp, has embarked on an ambitious project with VAGO Solutions to benchmark, build, and optimize AI solutions. This initiative seeks to empower customers with the knowledge needed to make informed decisions about their AI investments, focusing on performance, efficiency, and sustainability.
The project addresses a critical need in the rapidly evolving AI landscape: the lack of clear and readily available benchmarks for LLMs. Businesses are often faced with the daunting task of navigating a complex ecosystem of models and hardware, without a clear understanding of their performance characteristics in real-world scenarios.
We understand the challenges our customers face when trying to build and deploy AI solutions. This project is designed to demystify the process, providing transparent data and validated designs that empower them to make informed decisions and achieve their AI goals.
Benchmarking for Transparency and Optimization
The initial phase of the project will focus on rigorously benchmarking Private GPT. VAGO Solutions will leverage cutting-edge methods for fine-tuning and inferencing, meticulously tracking key performance indicators such as the following (an illustrative measurement sketch appears after the list):
- Time to Completed Vectorization: Measuring the speed of data preparation.
- Time to First Token: Assessing the responsiveness of the model.
- Tokens Per Second: Evaluating the throughput of the LLM.
- Success Rate: Gauging the accuracy and reliability of the model's outputs.
- Hallucination Rate: Identifying the tendency of the model to generate factually incorrect or nonsensical information.
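To make the latency metrics concrete, here is a minimal, illustrative Python sketch of how Time to First Token and Tokens Per Second can be measured around any streaming LLM client. The generate_stream stub and the prompt are hypothetical placeholders standing in for a real inference call (for example, to a locally hosted Private GPT instance); this is not the project's actual benchmark harness.

```python
# Minimal measurement sketch: Time to First Token (TTFT) and Tokens Per Second.
# `generate_stream` is a hypothetical stand-in for a real streaming inference
# call; replace it with your own client (e.g. an OpenAI-compatible endpoint).
import time
from typing import Iterator


def generate_stream(prompt: str) -> Iterator[str]:
    """Dummy token stream used only to make this sketch runnable."""
    for token in ["The", " answer", " is", " forty", "-two", "."]:
        time.sleep(0.05)  # simulate per-token model latency
        yield token


def benchmark(prompt: str) -> dict:
    start = time.perf_counter()
    time_to_first_token = None
    n_tokens = 0

    for _ in generate_stream(prompt):
        now = time.perf_counter()
        if time_to_first_token is None:
            time_to_first_token = now - start  # responsiveness of the model
        n_tokens += 1

    total = time.perf_counter() - start
    return {
        "time_to_first_token_s": round(time_to_first_token, 3),
        "tokens_per_second": round(n_tokens / total, 2),  # throughput
        "total_tokens": n_tokens,
    }


if __name__ == "__main__":
    print(benchmark("What does the benchmark measure?"))
```

Success Rate and Hallucination Rate, by contrast, are scored against a labelled evaluation set with reference answers rather than derived from timing alone.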
AI Validated Designs: Blueprints for Success
The project extends beyond benchmarking to encompass the development of AI Validated Designs. These blueprints provide organizations with a step-by-step guide to building AI solutions from scratch, covering all the essential aspects of the process.
The analysis begins with a high-level overview of the fine-tuning use case, followed by a progressively deeper dive into its intricacies; additional use cases will be presented in future updates. The essential steps, illustrated by a brief fine-tuning sketch after the list, are:
- Starting with Data: The foundation of any AI solution.
- Choosing and Setting Up the Inference Engine: The core component for running the LLM.
- Choosing and Fine-Tuning an LLM: Selecting the right model and tailoring it to specific needs.
- Setting Up the LLM for Inferencing: Configuring the system for efficient and accurate predictions.
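As a concrete illustration of the "Choosing and Fine-Tuning an LLM" step, the sketch below shows one common approach: parameter-efficient fine-tuning with LoRA using the Hugging Face transformers, datasets, and peft libraries. The model name (distilgpt2), the toy dataset, and the hyperparameters are hypothetical placeholders chosen for brevity; they do not represent the configuration used by VAGO Solutions in this project.

```python
# Illustrative LoRA fine-tuning sketch (placeholder model, data, and settings).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

model_name = "distilgpt2"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 models have no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with a small LoRA adapter so only a few
# million parameters are actually trained.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Tiny illustrative dataset; a real run would use curated domain documents.
texts = ["Q: What is PRIMERGY? A: A server family.",
         "Q: What does RAG mean? A: Retrieval-augmented generation."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("out/lora-adapter")  # saves only the adapter weights
```

Training only a small LoRA adapter rather than all model weights keeps memory and compute requirements modest, which makes this approach a common starting point for domain-specific fine-tuning.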
Benchmark Information
We're committed to providing our customers with a clear and comprehensive understanding of our AI solutions. By openly sharing these benchmarks, we aim to foster trust and enable customers to optimize their deployments for maximum performance and efficiency. The results of this benchmark will be incorporated into our upcoming whitepaper.
However, it's important to note that these benchmarks represent a snapshot of performance under specific conditions. Our analysis extends beyond these individual results to encompass the broader methodology of benchmarking AI RAG/LLM systems. This includes understanding the key factors influencing performance, enabling replication of benchmarks to meet specific customer needs, and identifying limitations of current solution implementations (such as Private GPT).
Furthermore, we are actively investigating performance boundaries to inform sizing recommendations and exploring potential improvements through alternative models, hardware configurations, and architectural optimizations.
AI Validated Designs
These validated designs are accompanied by benchmarks on PRIMERGY hardware, providing valuable insights into performance, efficiency, and sustainability. Customers wanting to build an AI solution from scratch gain the results, insights, and benchmarks they need to make informed decisions about their hardware investments and to optimize their AI infrastructure.
A Long-Term Commitment to AI Innovation
This project represents a long-term commitment from Fsas Technologies and its partners to advancing the field of AI and empowering customers with the tools and knowledge they need to succeed. By sharing transparent outcomes and providing validated designs, Fsas Technologies aims to foster innovation and accelerate the adoption of AI across a wide range of industries.
Look out for more information and updates as this exciting project continues to gather momentum.
What's the next step? - Register for a Test Drive
The AI Test Drive, supported by a broad ecosystem, gives you direct access to the technology and experts needed to develop, validate, and test AI solutions.
Contact us!
Not yet ready to register for a Test Drive? Still in the idea generation phase? Contact us to discuss your situation and see how we can help your organization. Together, we will discover the potential for your business.