Storage

Get performant, secure, and reliable storage for AI.

<Tailor-made for AI workloads>

Don’t let storage performance slow your cluster down.

Get higher performance for containerized workloads and virtual servers.

Feed data into GPUs ASAP

GenAI models need a lot of data, and they need it fast. Handle massive datasets with reliability and ease, enabling better performance and faster training times.

Pick up where you left off

Reduce major delays after hardware interruptions. Strategic checkpointing of intermediate results keeps your teams on track when failures happen.
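The idea behind checkpointing can be sketched in a few lines of Python. Everything here is illustrative (the path, the state format, the function names are hypothetical, not a CoreWeave API); the key point is persisting intermediate results atomically so a job can resume after an interruption instead of restarting from scratch.

```python
import os
import pickle

# Hypothetical checkpoint location on shared storage (illustrative path).
CHECKPOINT_PATH = "/mnt/storage/checkpoint.pkl"

def save_checkpoint(step, model_state, path=CHECKPOINT_PATH):
    """Persist intermediate results so training can resume after a failure."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "model_state": model_state}, f)
    os.replace(tmp, path)  # atomic rename: never leaves a half-written checkpoint

def load_checkpoint(path=CHECKPOINT_PATH):
    """Resume from the last saved step, or start fresh if no checkpoint exists."""
    if not os.path.exists(path):
        return {"step": 0, "model_state": {}}
    with open(path, "rb") as f:
        return pickle.load(f)
```

The write-to-temp-then-rename pattern matters: a crash mid-write leaves the previous checkpoint intact rather than a corrupted file.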

Get top-tier reliability

Keep production on track and avoid massive data losses. Automated snapshots run every 6 hours and are retained for 3 days, so your teams can rest assured their work is saved. Plus, we manage storage separately from compute, making data easier to move and track.
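At that cadence, simple arithmetic on the figures above (not a CoreWeave API, just the math) shows how many rolling snapshots are available at any time:

```python
# Snapshots every 6 hours, retained for 3 days (figures from the text above).
interval_hours = 6
retention_days = 3

# 3 days x 24 hours / 6-hour interval = 12 rolling snapshots retained.
snapshots_retained = retention_days * 24 // interval_hours
print(snapshots_retained)  # 12
```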

Stay secure

CoreWeave Storage follows industry security best practices. Encryption at rest and in transit, identity and access management, authentication, and role-based access policies keep your data protected and secure.

CoreWeave AI Object Storage

Train, fine-tune, and deploy models faster and more reliably at any scale. Eliminate replication costs and performance tradeoffs with AI-native object storage.

Local Object Transport Accelerator (LOTA)

LOTA brings data closer to your GPUs by caching directly on GPU nodes, delivering near-local throughput of up to 7 GB/s per GPU. The result: faster training, lower latency, and efficiently utilized clusters without complex replication layers.
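Conceptually, LOTA behaves like a read-through cache colocated with the GPUs: the first read of an object pulls it from the backing store, and subsequent reads are served from the node. The toy Python sketch below illustrates that pattern only; the class, the in-memory "remote store," and all names are hypothetical, not the LOTA implementation.

```python
class ReadThroughCache:
    """Toy read-through cache: fetch from the backing store on a miss,
    serve locally on a hit (illustrative only, not LOTA itself)."""

    def __init__(self, backing_store):
        self.backing_store = backing_store  # stands in for remote object storage
        self.local = {}                     # stands in for node-local SSD/NVMe
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.local:
            self.hits += 1          # near-local read, no network round trip
            return self.local[key]
        self.misses += 1            # first access: pull from the backing store
        value = self.backing_store[key]
        self.local[key] = value     # cache on the node for the next read
        return value

# Usage: two reads of the same shard -- one miss, then one local hit.
remote = {"shard-0": b"training data"}
cache = ReadThroughCache(remote)
cache.get("shard-0")
cache.get("shard-0")
print(cache.hits, cache.misses)  # 1 1
```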

Automated, usage-based billing

Seamlessly store inactive data at a lower rate while keeping it instantly accessible when needed, lowering TCO without impacting performance. No more manual tiering or multiple APIs. Just simple pricing without egress, ingress, or request fees.

Cross-region and multi-cloud flexibility

Access a single global dataset without replication to simplify operations, cut costs, and keep GPUs efficiently utilized—with LOTA acceleration available across regions today and coming soon across clouds.

Industry-leading performance

Keep compute efficiently utilized with throughput up to 7 GB/s per GPU, far beyond traditional object storage.

Simplified scaling

One global dataset and comprehensive observability tools make scaling effortless wherever your workloads are.

Lower TCO

Reduce costs with automated usage-based billing for inactive data and transparent, straightforward pricing.

Enterprise-grade reliability

Protect critical AI workloads with SSO/SAML authentication, 11 nines durability, 99.9% uptime, and encryption at rest and in transit.


Distributed file storage

CoreWeave distributed file storage supports the centralized asset storage and parallel computation setups that GenAI requires.

A network made for AI

Our highly performant, ultra-low latency networking architecture is built from the ground up to get data to GPUs with the speed and efficiency GenAI models need to develop, train, and deploy.

Strategic partnerships

Our partnership with VAST enables us to manage and secure hundreds of petabytes of data at a time. Plus, our file storage is POSIX-compliant and suitable for shared access across multiple instances.

Ultra-fast access at scale

Leverage a petabyte-scale shared file system with up to 1 GB/s per GPU. Our benchmarks show strong results for up to 64-node NVIDIA H200 GPU clusters.


Dedicated Storage Cluster

Need your own cluster? Our flexible tech stack provides access to dedicated storage clusters.

We work with a broad ecosystem of leading storage partners to deliver performance, security, and isolation for your workloads.

VAST Data
WEKA
DDN
IBM Spectrum Scale
Pure Storage

Local storage

Our Kubernetes solution uniquely allows customers to access local storage. CoreWeave supports ephemeral storage up to 60TB, depending on node type.

All physical nodes have SSD or NVMe ephemeral (local) storage. No need for Volume Claims to allocate ephemeral storage. Simply write anywhere in the container file system.

That’s all included at no additional cost.
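Because ephemeral storage is allocated through standard Kubernetes resource requests rather than Volume Claims, reserving scratch space looks like the generic manifest below. This is a sketch with hypothetical names and sizes, not a CoreWeave-specific spec.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-example            # hypothetical name
spec:
  containers:
    - name: trainer
      image: my-training-image:latest   # hypothetical image
      resources:
        requests:
          ephemeral-storage: "500Gi"    # scratch on the node's local SSD/NVMe
        limits:
          ephemeral-storage: "1Ti"
      # Data written anywhere in the container filesystem lands on local storage.
```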

Bring your own storage

Have a storage provider you like working with for dedicated clusters? No problem.

CoreWeave works with an ecosystem of storage partners to support generative AI’s complex needs. Bring your own complete stack, and we’ll make sure it runs with our infrastructure.

Specialized storage for AI

All CoreWeave infrastructure works in tandem to power the future of GenAI.