Qumulo Boosts AI Efficiency with Azure Native Qumulo, Achieves Top Benchmark Results

Qumulo, the simple way to manage exabyte-scale data anywhere, announced the industry’s fastest and most cost-effective cloud-native storage solution, as demonstrated by the latest SPECstorage Solution 2020 AI_IMAGE Benchmark result. Qumulo’s Azure Native Qumulo (ANQ) achieved an Overall Response Time (ORT) of 0.84ms with a total customer cost of just $400 for a 5-hour burst period.


Deploying cost-effective AI training infrastructure in the public cloud typically requires staging data from inexpensive, scalable object storage into limited, expensive file caches. This adds complexity and can leave GPUs idle up to 40% of the time while data is staged into local file caches.

ANQ acts as an intelligent data accelerator for the object store, executing parallelized, prefetched reads served directly from Azure's underlying infrastructure through the Qumulo filesystem to GPUs running AI training models. This architecture improves GPU-side performance by accelerating load times between the object layer and the filesystem, fundamentally changing how file-dependent AI training in the cloud should be architected.
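The general technique described here, keeping multiple reads in flight so the training loop rarely waits on the object store, can be sketched in a few lines. This is a minimal illustration only, not Qumulo's implementation; `fetch_object` is a hypothetical stand-in for a real object-store GET call.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_object(key):
    # Placeholder for an object-store read (e.g. a blob GET);
    # returns the payload bytes for the given key.
    return b"data-for-" + key.encode()

def prefetched_reads(keys, depth=8):
    """Yield payloads in order while keeping up to `depth` reads
    in flight, so the consumer mostly overlaps network latency
    with computation instead of waiting on each read."""
    with ThreadPoolExecutor(max_workers=depth) as pool:
        futures = [pool.submit(fetch_object, k) for k in keys]
        for f in futures:
            yield f.result()

# A training loop would consume these batches as they arrive.
batches = list(prefetched_reads([f"img-{i}" for i in range(4)]))
```

The key idea is that read requests are issued ahead of consumption, which is what lets a filesystem layer hide object-store latency from the GPUs.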


Benchmark-Proven Performance and Cost Efficiency

The SPECstorage Solution 2020 AI_IMAGE Benchmark results underscore ANQ’s superior performance and cost efficiency, as noted in our blog. Key highlights include:

  • Achieving an ORT of 0.84ms at 700 jobs, the fastest result of its kind run on Microsoft Azure infrastructure.
  • Under its SaaS pay-as-you-go (PAYGO) model, metering stops when performance isn’t needed, resulting in a cost of roughly $400 at list pricing to run the benchmark.

Three things set ANQ apart from other cloud-native file solutions serving AI workloads:

  1. True Elastic Scalability: ANQ allows customers to focus on business and technology concerns, rather than cloud-native storage infrastructure. Storage performance scales with the AI application stack demands, saving costs when there is no demand. Unlike other cloud file systems, ANQ operates without pre-provisioned volumes.
  2. Disruptive Pricing: Qumulo passes cloud economics savings directly to customers. The pricing model is based on actual storage usage (GB) and performance consumed (throughput and IOPS), without the need for pre-provisioned capacity.
  3. Linear Performance Scaling: ANQ’s architecture ensures that performance increases linearly as workloads grow. With an average cache-hit ratio above 95%, ANQ accelerates GPU-side scalability and performance, minimizing load times between the object layer and the filesystem.



The post Qumulo Boosts AI Efficiency with Azure Native Qumulo, Achieves Top Benchmark Results appeared first on AiThority.
