Democratizing AI: The Journey to AI Adoption

At Fsas Technologies, we believe that the true democratization of Artificial Intelligence isn’t just about making algorithms available; it’s about making AI performant and accessible for everyone, from individual innovators to large enterprises. Our core aim is to put performance for innovation into everyone’s hands. It’s a commitment to accelerating the entire AI lifecycle, from data synthesis to deployment, and empowering a broader audience to harness its transformative power.

The Foundation: Unlocking AI Potential with Robust Hardware

The journey to democratizing AI begins with overcoming performance bottlenecks. While optimized algorithms are crucial, they can only go so far without a robust hardware foundation. This is where Fsas Technologies truly differentiates itself. We’ve cultivated a solid portfolio of diverse hardware within our PRIMERGY brand, and we actively collaborate with industry leaders like Supermicro. This strategic approach ensures we can support a vast spectrum of AI workloads—from the initial stages of data synthesis and fine-tuning to intensive training and final deployment. Our hardware isn’t just powerful; it’s designed to be the bedrock on which innovation thrives, with seamless hybrid cloud integration offering flexibility and scalability.

Beyond Hardware: Demystifying AI Adoption and Reducing Risk

It’s a common misconception that our approach is solely hardware-centric. In reality, it’s about the efficient utilization of a powerful hardware platform and portfolio. We understand that the complexity of scoping hardware requirements for AI workloads can be daunting, often leading to overspending or underspending. This is where we step in to demystify the process.

Our strategy is a measured approach to AI adoption: from ‘bring your own data’, where we provide a full solution stack (e.g., Private GPT), to ‘bring your own AI’, where we focus on the infrastructure for your workloads. We provide real-world benchmarks and scenarios, taking the guesswork out of investment and adoption. This reduces both the financial risk and ongoing resource costs, such as energy consumption. By providing clear, data-driven insights, we ensure our customers make informed decisions, minimizing waste while maximizing AI investment.

Real-World Impact: Transparency Through AI Validated Designs

How do we put this into practice? Through concrete, transparent examples. Take our recent work with the PRIMERGY GX, where we trained a 55-million-parameter Late-Interaction (ColBERT-style) encoder. This compact, efficient model, optimized for semantic search and reasoning-intensive retrieval, was trained in just hours on 4x H100 GPUs. Some of the key highlights of this model are:

    • Compact and Efficient: The ~55M parameter model achieves high retrieval accuracy while maintaining a small memory footprint suitable for edge devices.
    • Retrieval-Oriented: Designed for multi-vector semantic search, semantic textual similarity (STS) and reasoning-intensive retrieval, with optional re-ranking.
    • Edge Optimized: With low latency and minimal storage requirements.
    • Training Resources: Base model trained for 6 hours 25 minutes on 4x H100 GPUs; Reasoning model fine-tuned for 3 hours 25 minutes on the same hardware.
    • Data Volume: ~9.6B English tokens for base pretraining, plus 210k synthetic pairs for reasoning specialization.
    • Context Length: 2,048 tokens with absolute positional embeddings.
    • Use Cases: Semantic search, sparse retrieval, reasoning-heavy query answering, and STS.

We’re not just telling you it works; we’re showing you exactly what happened.
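To give a flavor of what “late-interaction” means in practice: a ColBERT-style encoder scores a document by comparing every query token embedding against every document token embedding, rather than collapsing each text to a single vector. The following is a minimal sketch of the MaxSim scoring step; the function name, dimensions, and random embeddings are illustrative only and are not taken from the model described above.

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Late-interaction (MaxSim) relevance: for each query token
    embedding, take its maximum cosine similarity over all document
    token embeddings, then sum across query tokens."""
    # Normalize rows so dot products become cosine similarities.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T                        # (n_query_tokens, n_doc_tokens)
    return float(sim.max(axis=1).sum())  # best match per query token, summed

# Toy example: 2 query tokens and 3 document tokens in a 4-dim space.
rng = np.random.default_rng(0)
query = rng.normal(size=(2, 4))
doc = rng.normal(size=(3, 4))
score = maxsim_score(query, doc)  # bounded above by the query token count
```

Because documents are stored as per-token multi-vectors, candidates can be retrieved per query token and the MaxSim sum used for re-ranking — one reason this style of model suits the reasoning-intensive retrieval and optional re-ranking mentioned in the highlights.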

Vertical Applications: Adapting AI for Healthcare and Beyond

As a secondary step, we’re fine-tuning another model with a vertical focus on healthcare, demonstrating the adaptability and impact of our approach across different sectors. With this, we will demonstrate the resources required to train and fine-tune these vertical-specific embedding models, and show how a language- and terminology-specific focus, while maintaining reasoning-like capabilities, can advance existing solutions or form part of a new solution built from the ground up.

We will release a comprehensive technical whitepaper, an article, and even the embedding model itself on Hugging Face. Crucially, in this material we will publish the “AI Validated Design” or blueprint of the hardware, software and processes used, offering complete transparency on its performance. This level of detail empowers our customers to understand the exact capabilities and requirements to undertake a similar venture.

What’s Next?

Looking ahead, our next big project involves benchmarking our full-stack Private GPT solution. We’ll rigorously test its performance across various hardware configurations—from single L40S cards to multiple units and H100s—and publish all the results. We will also explore how we can further advance this solution offering with our alliance partners, including industry-leading storage solutions and different core large language models. This will give our customers a clear picture of current capabilities and future expansion possibilities. Future phases will explore node scaling, as well as potential branches from the main turnkey offering that deliver multi-modality and multi-tenancy.

The Future: Private AI and Continuous Innovation

Our commitment to making AI easier to adopt remains unwavering. We will continue to train new models, publish results, and share the models themselves, ensuring a cycle of continuous improvement and transparency. We’re constantly exploring new solutions and features to enrich our offerings and serve our customers better.

Looking forward, a significant focus will be on addressing the critical needs of sensitive data environments such as legal, public, and defense sectors. We recognize that these fields often cannot rely solely on cloud-based solutions due to risk concerns. Therefore, we are dedicated to offering new ways to serve “Private AI” by expanding our Private GPT offering, alongside alternative solutions in the future.

Ready to take the next step in your AI journey?

At Fsas Technologies, we believe the journey to AI adoption isn’t just about today’s solutions; it’s about building a future where AI is truly accessible, understandable, and transformative for everyone, without compromise.

If you would like to follow our journey as we break down barriers to AI adoption, keep an eye on our dedicated website for AI Validated Designs, which we will continue to update as we progress. Would you like to know more about our Private GPT offering? Then visit our Private GPT website. Or would you like to register for an AI Test Drive? You can do so here.

Have a more complex case to discuss? Reach out directly to our AI consultants at ai.team@fujitsu.com to discuss your specific use case and see how we can support you, whether it’s solution design, workshops, or Proofs of Concept.

Author

  • Pete Auker

    As a Business Growth Lead for Artificial Intelligence in Europe, Pete leverages three decades of IT industry experience to drive innovation. He is deeply embedded in the AI ecosystem, collaborating with customers, partners, data scientists, and technical consultants.

    Pete's passion lies in transforming highly technical information into compelling narratives that resonate with diverse audiences – from industry specialists to the general public. His aim is to foster a greater appreciation and understanding of emerging technologies and make the world of AI accessible to all.
