On-Demand

Accelerating AI Innovation
Session Recordings

On-demand sessions from the virtual event are available now.

Q&A

Welcome & Interview with CoreWeave CEO

Mike Intrator (CEO) and host Mario Armstrong discuss why today’s breakthroughs in AI demand a fundamentally new approach to infrastructure. Learn what’s driving the shift in cloud computing, what it means for builders and enterprises on the front lines of AI, and why this moment represents a rare opportunity to gain a competitive edge in this rapidly accelerating field.

Speakers: Mario Armstrong, Mike Intrator

Session

The CoreWeave Cloud Advantage

AI workloads aren’t just bigger—they’re fundamentally different. That’s why AI teams across industries turn to CoreWeave to train faster, deploy smarter, and cut infrastructure costs at scale. In this session, you’ll discover how CoreWeave delivers those results by reimagining every layer of the cloud and see a live demo of the platform in action. If you’re looking to push boundaries on performance, reliability, and cost-efficiency, this is your blueprint.

Speakers: Chen Goldberg, Lukas Biewald

Customer Story

AI Innovators in Action: OpenAI

OpenAI depends on CoreWeave to deliver the infrastructure needed to serve its most advanced models at scale. In this spotlight, CEO Sam Altman shares why CoreWeave’s reliability, speed, and deep expertise make it the ideal partner to help bring OpenAI’s cutting-edge innovations to the world, from breakthrough through production.

Speaker: Sam Altman

Session

Performance Deep Dive: Breaking Barriers in Training & Inference

What if you could retrain a model in hours instead of days—or serve 30x more tokens per second? Whether you're running trillion-parameter training jobs or building real-time AI products, make infrastructure performance your next competitive edge. In this session, CoreWeave and NVIDIA unpack the latest MLPerf benchmark results and what they unlock for AI teams: faster iteration, lower latency at scale, and a clearer path to production.

Speakers: Chetan Kapoor, Dion Harris

Customer Story

AI Innovators in Action: IBM

Training state-of-the-art foundation models like Granite requires serious scale, speed, and reliability. In this customer spotlight, IBM leaders share how partnering with CoreWeave enabled their teams to push the boundaries of model performance, accelerate development timelines, and deliver cutting-edge AI capabilities—all powered by the latest NVIDIA GPUs.

Speakers: Hillery Hunter, Danny Barnett

Fireside Chat

What’s Next for AI Infrastructure?

Ready to move faster, scale smarter, and stay ahead? You need a strong, purpose-built foundation of AI infrastructure architected by two of the biggest names in accelerated computing. In this candid conversation, CoreWeave and NVIDIA lay out what’s broken in traditional cloud design and what it takes to support the next wave of AI breakthroughs, from trillion-parameter models to advanced reasoning and real-time inference.

Speakers: Ian Buck, Brian Venturo

Customer Story

AI Innovators in Action: Mistral AI

For Mistral, speed is everything. In this spotlight, co-founder and CTO Timothée Lacroix shares how CoreWeave helped cut training time in half for their open-source reasoning models—while also delivering fewer interruptions and more consistent performance at scale. Discover how Mistral moves faster, trains smarter, and brings cutting-edge AI into production at record speed.

Speaker: Timothée Lacroix

Keynote

The State of AI

Is AI moving faster than the infrastructure intended to support it? In this provocative keynote, renowned researcher Kate Crawford joins CoreWeave CMO Jean English to explore how AI is reshaping everything from energy grids to data center design. Together, they examine the scale of transformation ahead, the urgent need for sustainable infrastructure, and the role enterprises must play in building a future where AI can truly deliver on its promise.

Speakers: Kate Crawford, Jean English

Q&A

Q&A with CoreWeave Co-Founders

Straight from the source. In this live Q&A, CoreWeave’s co-founders field questions from the audience on everything from scaling AI workloads to staying ahead of GPU innovation cycles. Hear how CoreWeave is navigating the AI infrastructure boom, what’s driving customer adoption, and why specialization—not generalization—is the key to outperforming hyperscalers.

Speakers: Mike Intrator, Brian Venturo

Featured speakers

Mike Intrator

CEO, CoreWeave

Chen Goldberg

SVP Engineering, CoreWeave

Brian Venturo

CSO, CoreWeave

Chetan Kapoor

CPO, CoreWeave

Jean English

CMO, CoreWeave

Max Hjelm

SVP of Revenue, CoreWeave

Lukas Biewald

CEO & Co-Founder, Weights & Biases

Dion Harris

Sr. Director, AI and HPC Infrastructure, NVIDIA

Ian Buck

VP, Hyperscale and HPC Computing, NVIDIA

Kate Crawford

Guest Speaker, Leading Scholar in AI

Mario Armstrong

Host