NVIDIA Announces Major Release of Cosmos World Foundation Models and Physical AI Data Tools

  • New Models Enable Prediction, Controllable World Generation and Reasoning for Physical AI
  • Two New Blueprints Deliver Massive Physical AI Synthetic Data Generation for Robot and Autonomous Vehicle Post-Training
  • 1X, Agility Robotics, Figure AI, Skild AI Among Early Adopters

SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced a major release of new NVIDIA Cosmos™ world foundation models (WFMs), introducing an open and fully customizable reasoning model for physical AI development and giving developers unprecedented control over world generation.

NVIDIA is also launching two new blueprints — powered by the NVIDIA Omniverse™ and Cosmos platforms — that provide developers with massive, controllable synthetic data generation engines for post-training robots and autonomous vehicles.

Industry leaders including 1X, Agility Robotics, Figure AI, Foretellix, Skild AI and Uber are among the first to adopt Cosmos to generate richer training data for physical AI faster and at scale.

“Just as large language models revolutionized generative and agentic AI, Cosmos world foundation models are a breakthrough for physical AI,” said Jensen Huang, founder and CEO of NVIDIA. “Cosmos introduces an open and fully customizable reasoning model for physical AI and unlocks opportunities for step-function advances in robotics and the physical industries.”

Cosmos Transfer for Synthetic Data Generation

Cosmos Transfer WFMs ingest structured video inputs such as segmentation maps, depth maps, lidar scans, pose estimation maps and trajectory maps to generate controllable photoreal video outputs.

Cosmos Transfer streamlines perception AI training, transforming 3D simulations or ground truth created in Omniverse into photorealistic videos for large-scale, controllable synthetic data generation.

Agility Robotics will be an early adopter of Cosmos Transfer and Omniverse for large-scale synthetic data generation to train its robot models.

“Cosmos offers us an opportunity to scale our photorealistic training data beyond what we can feasibly collect in the real world,” said Pras Velagapudi, chief technology officer of Agility Robotics. “We’re excited to see what new performance we can unlock with the platform, while making the most use of the physics-based simulation data we already have.”

The NVIDIA Omniverse Blueprint for autonomous vehicle simulation uses Cosmos Transfer to amplify variations of physically based sensor data. With the blueprint, Foretellix can enhance behavioral scenarios by varying conditions like weather and lighting for diverse driving datasets. Parallel Domain is also using the blueprint to apply similar variation to its sensor simulation.

The NVIDIA GR00T Blueprint for synthetic manipulation motion generation combines Omniverse and Cosmos Transfer to generate diverse datasets at scale, benefiting from OpenUSD-powered simulations and reducing data collection and augmentation time from days to hours.

Cosmos Predict for Intelligent World Generation

Announced at the CES trade show in January, Cosmos Predict WFMs generate virtual world states from multimodal inputs like text, images and video. New Cosmos Predict models will enable multi-frame generation, predicting intermediate actions or motion trajectories when given start and end input images. Purpose-built for post-training, these models can be customized using NVIDIA’s openly available physical AI dataset.

With the inference compute power of NVIDIA Grace Blackwell NVL72 systems and their large NVIDIA NVLink™ domain, developers can achieve real-time world generation.

1X is using Cosmos Predict and Cosmos Transfer to train its new humanoid robot NEO Gamma. Robot brain developer Skild AI is tapping into Cosmos Transfer to augment synthetic datasets for its robots. Plus, Nexar and Oxa are using Cosmos Predict to advance their autonomous driving systems.

Multimodal Reasoning for Physical AI

Cosmos Reason is an open, fully customizable WFM with spatiotemporal awareness that uses chain-of-thought reasoning to understand video data and predict the outcomes of interactions — such as a person stepping into a crosswalk or a box falling from a shelf — in natural language.

Developers can use Cosmos Reason to improve physical AI data annotation and curation, enhance existing world foundation models or create new vision language action models. They can also post-train it to build high-level planners to tell the physical AI what it needs to do to complete a task.

Accelerating Data Curation and Post-Training for Physical AI

Based on their downstream task, developers can post-train Cosmos WFMs using native PyTorch scripts or the NVIDIA NeMo framework on NVIDIA DGX™ Cloud.
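Post-training is conceptually a fine-tuning step: start from pretrained weights and continue optimizing them on a smaller, task-specific dataset. The stdlib-only Python toy below illustrates that idea with a one-dimensional linear model. It is a conceptual sketch only, not the Cosmos or NeMo API; every name and value in it is illustrative.

```python
# Conceptual sketch of post-training: begin from "pretrained" weights and
# fine-tune them on downstream task data via gradient descent. Real Cosmos
# post-training uses PyTorch scripts or the NeMo framework; this toy model
# (y = w*x + b) only illustrates the workflow.

def fine_tune(w, b, data, lr=0.1, epochs=2000):
    """Gradient descent on mean squared error for y = w*x + b."""
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Pretrained" weights from some generic prior task...
w0, b0 = 1.0, 0.0

# ...post-trained on downstream data that follows y = 2x + 1.
task_data = [(x / 10, 2 * (x / 10) + 1) for x in range(10)]
w, b = fine_tune(w0, b0, task_data)
print(round(w, 2), round(b, 2))  # weights converge to the task optimum (2.0, 1.0)
```

The pretrained starting point is what makes this cheaper than training from scratch: only the task-specific refinement remains to be learned.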

Cosmos developers can also use NVIDIA NeMo Curator on DGX Cloud for accelerated data processing and curation. Linker Vision and Milestone Systems are using it to curate large amounts of video data for training the large vision language models behind visual agents built on the NVIDIA AI Blueprint for video search and summarization. Virtual Incision is exploring its deployment in future surgical robots, while Uber and Waabi are using it to advance autonomous vehicle development.

Driving Responsible AI and Content Transparency

In line with NVIDIA’s trustworthy AI principles, NVIDIA enforces open guardrails across all Cosmos WFMs. In addition, NVIDIA is collaborating with Google DeepMind to integrate SynthID to watermark and help identify AI-generated outputs from the Cosmos WFM NVIDIA NIM™ microservice featured on build.nvidia.com.

Availability

Cosmos WFMs are available for preview in the NVIDIA API catalog and now listed in the Vertex AI Model Garden on Google Cloud. Cosmos Predict and Cosmos Transfer are openly available on Hugging Face and GitHub. Cosmos Reason is available in early access.

Learn more by watching the NVIDIA GTC keynote and by registering for Cosmos sessions and training from NVIDIA and industry leaders at the show, including “An Introduction to Cosmos World Foundation Models” with Ming-Yu Liu, vice president of generative AI research at NVIDIA.

About NVIDIA

NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

For further information, contact:

Paris Fox
Corporate Communications
NVIDIA Corporation
+1-408-242-0035
[email protected]

Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting NVIDIA’s products and technologies and the benefits and impact thereof; and Cosmos opening opportunities for step-function advances in robotics and the physical industries are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

© 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA Cosmos, NVIDIA DGX, NVIDIA NeMo, NVIDIA NIM, NVIDIA Omniverse and NVLink are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/6c781321-9544-4bbf-bb47-8bab73fe2f63



NVIDIA Launches Family of Open Reasoning AI Models for Developers and Enterprises to Build Agentic AI Platforms

  • Post-Trained by NVIDIA, New Llama Nemotron Reasoning Models Provide Business-Ready Foundation for Agentic AI
  • Accenture, Amdocs, Atlassian, Box, Cadence, CrowdStrike, Deloitte, IQVIA, Microsoft, SAP and ServiceNow Pioneering Reasoning AI Agents With NVIDIA to Transform Work

SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced the open Llama Nemotron family of models with reasoning capabilities, designed to provide developers and enterprises a business-ready foundation for creating advanced AI agents that can work independently or as connected teams to solve complex tasks.

Built on Llama models, the NVIDIA Llama Nemotron reasoning family delivers on-demand AI reasoning capabilities. NVIDIA enhanced the new reasoning model family during post-training to improve multistep math, coding, reasoning and complex decision-making.

This refinement process boosts model accuracy by up to 20% compared with the base model and improves inference speed by 5x compared with other leading open reasoning models. The improvements in inference performance mean the models can handle more complex reasoning tasks, enhance decision-making capabilities and reduce operational costs for enterprises.

Leading agentic AI platform pioneers — including Accenture, Amdocs, Atlassian, Box, Cadence, CrowdStrike, Deloitte, IQVIA, Microsoft, SAP and ServiceNow — are collaborating with NVIDIA on its new reasoning models and software.

“Reasoning and agentic AI adoption is incredible,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA’s open reasoning models, software and tools give developers and enterprises everywhere the building blocks to create an accelerated agentic AI workforce.”

NVIDIA Post-Training Boosts Accuracy and Reliability for Enterprise Reasoning

Built to deliver production-ready AI reasoning, the Llama Nemotron model family is available as NVIDIA NIM™ microservices in Nano, Super and Ultra sizes — each optimized for different deployment needs.

The Nano model delivers the highest accuracy on PCs and edge devices, the Super model offers the best accuracy and highest throughput on a single GPU, and the Ultra model will provide maximum agentic accuracy on multi-GPU servers.

NVIDIA conducted extensive post-training on NVIDIA DGX™ Cloud using high-quality curated synthetic data generated by NVIDIA Nemotron™ and other open models, as well as additional curated datasets cocreated by NVIDIA.

The tools, datasets and post-training optimization techniques used to develop the models will be openly available, giving enterprises the flexibility to build their own custom reasoning models.

Agentic Platforms Team With NVIDIA to Enhance Reasoning for Industries

Agentic AI platform industry leaders are working with the Llama Nemotron reasoning models to deliver advanced reasoning to enterprises.

Microsoft is integrating Llama Nemotron reasoning models and NIM microservices into Microsoft Azure AI Foundry. This expands the Azure AI Foundry model catalog with options for customers to enhance services like Azure AI Agent Service for Microsoft 365.

SAP is tapping Llama Nemotron models to advance SAP Business AI solutions and Joule, the AI copilot from SAP. Additionally, it is using NVIDIA NIM and NVIDIA NeMo™ microservices to improve code-completion accuracy for SAP ABAP programming language models.

“We are collaborating with NVIDIA to integrate Llama Nemotron reasoning models into Joule to enhance our AI agents, making them more intuitive, accurate and cost effective,” said Walter Sun, global head of AI at SAP. “These advanced reasoning models will refine and rewrite user queries, enabling our AI to better understand inquiries and deliver smarter, more efficient AI-powered experiences that drive business innovation.”

ServiceNow is harnessing Llama Nemotron models to build AI agents that offer greater performance and accuracy to enhance enterprise productivity across industries.

Accenture has made NVIDIA Llama Nemotron reasoning models available on its AI Refinery platform — including new industry agent solutions announced today — to enable clients to rapidly develop and deploy custom AI agents tailored to industry-specific challenges, accelerating business transformation.

Deloitte plans to incorporate Llama Nemotron reasoning models into Zora AI, its recently announced agentic AI platform designed to support and emulate human decision-making and action, with agents that combine deep functional and industry-specific business knowledge with built-in transparency.

NVIDIA AI Enterprise Delivers Essential Tools for Agentic AI

Developers can deploy NVIDIA Llama Nemotron reasoning models with new NVIDIA agentic AI tools and software to streamline the adoption of advanced reasoning in collaborative AI systems.

All part of the NVIDIA AI Enterprise software platform, the latest agentic AI building blocks include:

  • The NVIDIA AI-Q Blueprint, which enables enterprises to connect knowledge to AI agents that can autonomously perceive, reason and act. Built with NVIDIA NIM microservices, the blueprint integrates NVIDIA NeMo Retriever™ for multimodal information retrieval and enables agent and data connections, optimization and transparency using the open-source NVIDIA AgentIQ toolkit.
  • The NVIDIA AI Data Platform, a customizable reference design for a new class of enterprise infrastructure with AI query agents built with the AI-Q Blueprint.
  • New NVIDIA NIM microservices, which optimize inference for complex agentic AI applications and enable continuous learning and real-time adaptation across any environment. The microservices ensure reliable deployment of the latest models from leading model builders including Meta, Microsoft and Mistral AI.
  • NVIDIA NeMo microservices, which provide an efficient, enterprise-grade solution to quickly establish and maintain a robust data flywheel that enables AI agents to continuously learn from human- and AI-generated feedback. The NVIDIA AI Blueprint for building a data flywheel will offer a reference architecture for developers to easily build and optimize data flywheels using NVIDIA microservices.

Availability

The NVIDIA Llama Nemotron Nano and Super models and NIM microservices are available as a hosted application programming interface from build.nvidia.com and Hugging Face. Access for development, testing and research is free for members of the NVIDIA Developer Program.

Enterprises can run Llama Nemotron NIM microservices in production with NVIDIA AI Enterprise on accelerated data center and cloud infrastructure. Developers can sign up to be notified when NVIDIA NeMo microservices are publicly available.

The NVIDIA AI-Q Blueprint is expected to be available in April. The NVIDIA AgentIQ toolkit is available now on GitHub.

About NVIDIA

NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

For further information, contact:

Anna Kiachian
NVIDIA Corporation
+1-650-224-9820
[email protected]

Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting NVIDIA’s products and technologies and the benefits and impact thereof; NVIDIA’s open reasoning models, software and tools giving developers and enterprises everywhere the building blocks to create an accelerated agentic AI workforce are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

© 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, DGX, NVIDIA NeMo, NVIDIA Nemotron, NVIDIA NeMo Retriever and NVIDIA NIM are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/6b111210-07b7-4296-83fa-8c18c9acfbfc



NVIDIA Announces Isaac GR00T N1 — the World’s First Open Humanoid Robot Foundation Model — and Simulation Frameworks to Speed Robot Development

  • Now Available, Fully Customizable Foundation Model Brings Generalized Skills and Reasoning to Humanoid Robots
  • NVIDIA, Google DeepMind and Disney Research Collaborate to Develop Next-Generation Open-Source Newton Physics Engine
  • New Omniverse Blueprint for Synthetic Data Generation and Open-Source Dataset Jumpstart Physical AI Data Flywheel

SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced a portfolio of technologies to supercharge humanoid robot development, including NVIDIA Isaac GR00T N1, the world’s first open, fully customizable foundation model for generalized humanoid reasoning and skills.

The other technologies include simulation frameworks and blueprints such as the NVIDIA Isaac GR00T Blueprint for generating synthetic data, as well as Newton, an open-source physics engine — under development with Google DeepMind and Disney Research — purpose-built for developing robots.

Available now, GR00T N1 is the first of a family of fully customizable models that NVIDIA will pretrain and release to worldwide robotics developers — accelerating the transformation of industries challenged by global labor shortages estimated at more than 50 million people.

“The age of generalist robotics is here,” said Jensen Huang, founder and CEO of NVIDIA. “With NVIDIA Isaac GR00T N1 and new data-generation and robot-learning frameworks, robotics developers everywhere will open the next frontier in the age of AI.”

GR00T N1 Advances Humanoid Developer Community

The GR00T N1 foundation model features a dual-system architecture, inspired by principles of human cognition. “System 1” is a fast-thinking action model, mirroring human reflexes or intuition. “System 2” is a slow-thinking model for deliberate, methodical decision-making.

Powered by a vision language model, System 2 reasons about its environment and the instructions it has received to plan actions. System 1 then translates these plans into precise, continuous robot movements. System 1 is trained on human demonstration data and a massive amount of synthetic data generated by the NVIDIA Omniverse™ platform.
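In code, the dual-system split can be pictured as a slow planner feeding a fast controller: System 2 decomposes an instruction into discrete subgoals, and System 1 turns each subgoal into a stream of continuous actions. The Python sketch below is purely illustrative of that architecture; it is not the GR00T N1 interface, and every function name and value is hypothetical.

```python
# Illustrative dual-system robot policy (hypothetical, not the GR00T N1 API).
# System 2: slow, deliberate planning over an instruction.
# System 1: fast, reflexive control emitting continuous action targets.

def system2_plan(instruction: str) -> list[str]:
    """Slow thinking: decompose a natural-language instruction into subgoals."""
    if instruction == "move the box to the shelf":
        return ["reach box", "grasp box", "move to shelf", "release box"]
    return ["idle"]

def system1_act(subgoal: str, steps: int = 3) -> list[tuple[float, float]]:
    """Fast thinking: emit a short stream of continuous joint targets."""
    base = hash(subgoal) % 100 / 100.0  # stand-in for a learned action model
    return [(base, base + 0.01 * t) for t in range(steps)]

plan = system2_plan("move the box to the shelf")
trajectory = [action for subgoal in plan for action in system1_act(subgoal)]
print(len(plan), len(trajectory))  # 4 subgoals expand into 12 low-level actions
```

The design point the architecture makes is that deliberate reasoning and reactive control run at different rates: the planner is invoked once per instruction, while the controller streams actions continuously.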

GR00T N1 can easily generalize across common tasks — such as grasping, moving objects with one or both arms, and transferring items from one arm to another — or perform multistep tasks that require long context and combinations of general skills. These capabilities can be applied across use cases such as material handling, packaging and inspection.

Developers and researchers can post-train GR00T N1 with real or synthetic data for their specific humanoid robot or task.

In his GTC keynote, Huang demonstrated 1X’s humanoid robot autonomously performing domestic tidying tasks using a post-trained policy built on GR00T N1. The robot’s autonomous capabilities are the result of an AI training collaboration between 1X and NVIDIA.

“The future of humanoids is about adaptability and learning,” said Bernt Børnich, CEO of 1X Technologies. “NVIDIA’s GR00T N1 model provides a major breakthrough for robot reasoning and skills. With a minimal amount of post-training data, we were able to fully deploy on NEO Gamma — furthering our mission of creating robots that are not tools, but companions that can assist humans in meaningful, immeasurable ways.”

Among the additional leading humanoid developers worldwide with early access to GR00T N1 are Agility Robotics, Boston Dynamics, Mentee Robotics and NEURA Robotics.

NVIDIA, Google DeepMind and Disney Research Focus on Physics

NVIDIA announced a collaboration with Google DeepMind and Disney Research to develop Newton, an open-source physics engine that lets robots learn how to handle complex tasks with greater precision.

Built on the NVIDIA Warp framework, Newton will be optimized for robot learning and compatible with simulation frameworks such as Google DeepMind’s MuJoCo and NVIDIA Isaac™ Lab. Additionally, the three companies plan to enable Newton to use Disney’s physics engine.

Google DeepMind and NVIDIA are collaborating to develop MuJoCo-Warp, which is expected to accelerate robotics machine learning workloads by more than 70x and will be available to developers through Google DeepMind’s MJX open-source library, as well as through Newton.

Disney Research will be one of the first to use Newton to advance its robotic character platform that powers next-generation entertainment robots, such as the expressive Star Wars-inspired BDX droids that joined Huang on stage during his GTC keynote.

“The BDX droids are just the beginning. We’re committed to bringing more characters to life in ways the world hasn’t seen before, and this collaboration with Disney Research, NVIDIA and Google DeepMind is a key part of that vision,” said Kyle Laughlin, senior vice president at Walt Disney Imagineering Research & Development. “This collaboration will allow us to create a new generation of robotic characters that are more expressive and engaging than ever before — and connect with our guests in ways that only Disney can.”

NVIDIA and Disney Research, along with Intrinsic, announced an additional collaboration to build OpenUSD pipelines and best practices for robotics data workflows.

More Data to Advance Robotics Post-Training

Large, diverse, high-quality datasets are critical for robot development but costly to capture. For humanoids, real-world human demonstration data is limited by a person’s 24-hour day.

Announced today, the NVIDIA Isaac GR00T Blueprint for synthetic manipulation motion generation helps address this challenge. Built on Omniverse and NVIDIA Cosmos Transfer world foundation models, the blueprint lets developers generate exponentially large amounts of synthetic motion data for manipulation tasks from a small number of human demonstrations.

Using the first components available for the blueprint, NVIDIA generated 780,000 synthetic trajectories — the equivalent of 6,500 hours, or nine continuous months, of human demonstration data — in just 11 hours. Then, combining the synthetic data with real data, NVIDIA improved GR00T N1’s performance by 40%, compared with using only real data.
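The quoted figures are internally consistent, which a quick back-of-the-envelope check confirms: 6,500 hours spread over 780,000 trajectories works out to 30 seconds per trajectory, 6,500 hours is about nine 30-day months of around-the-clock demonstration, and generating that volume in 11 hours is roughly a 590x speedup over real-time collection.

```python
# Sanity-check the synthetic-data figures quoted above.
trajectories = 780_000
total_hours = 6_500
generation_hours = 11

seconds_per_trajectory = total_hours * 3600 / trajectories
months_continuous = total_hours / (30 * 24)   # 30-day months, 24 h per day
speedup = total_hours / generation_hours      # vs. generating in 11 hours

print(seconds_per_trajectory)        # 30.0 seconds per trajectory
print(round(months_continuous, 1))   # 9.0, i.e. about nine continuous months
print(round(speedup))                # 591, roughly 590x faster than real time
```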

To further equip the developer community with valuable training data, NVIDIA is releasing the GR00T N1 dataset as part of a larger open-source physical AI dataset — also announced at GTC and now available on Hugging Face.

Availability

NVIDIA GR00T N1 training data and task evaluation scenarios are now available for download from Hugging Face and GitHub. The NVIDIA Isaac GR00T Blueprint for synthetic manipulation motion generation is also now available as an interactive demo on build.nvidia.com or to download from GitHub.

The NVIDIA DGX Spark personal AI supercomputer, also announced today at GTC, provides developers a turnkey system to expand GR00T N1’s capabilities for new robots, tasks and environments without extensive custom programming.

The Newton physics engine is expected to be available later this year.

Learn more by watching the NVIDIA GTC keynote and by registering for key Humanoid Developer Day sessions at the show.

About NVIDIA

NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

For further information, contact:

Kristin Bryson
Enterprise Communications
NVIDIA Corporation
+1-203-241-9190
[email protected]

Certain statements in this press release including, but not limited to, statements as to the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; NVIDIA’s collaborations with third parties; third parties adopting or offering NVIDIA’s products and technologies; and with NVIDIA Isaac GR00T N1 and new data generation and robot-learning frameworks, robotics developers everywhere opening the next frontier in the age of AI are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

© 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA Omniverse and NVIDIA Isaac are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/65cf6342-e940-44c0-a40c-dfe09ac433c9



NVIDIA Blackwell RTX PRO Comes to Workstations and Servers for Designers, Developers, Data Scientists and Creatives to Build and Collaborate With Agentic AI

Groundbreaking AI and Graphics Performance Redefine Visualization, Simulation and Scientific Computing From Desktop to Data Center for Millions of Professionals Worldwide

SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced the NVIDIA RTX PRO™ Blackwell series — a revolutionary generation of workstation and server GPUs redefining workflows for AI, technical, creative, engineering and design professionals with breakthrough accelerated computing, AI inference, ray tracing and neural rendering technologies.

For everything from agentic AI, simulation, extended reality, 3D design and complex visual effects to developing physical AI powering autonomous robots, vehicles and smart spaces, the RTX PRO Blackwell series provides professionals across industries the latest and greatest compute power, memory capacity and data throughput right at their fingertips — from their desktop, on the go with mobile workstations or powered by data center GPUs.

The new lineup includes:

  • Data center GPU: NVIDIA RTX PRO 6000 Blackwell Server Edition
  • Desktop GPUs: NVIDIA RTX PRO 6000 Blackwell Workstation Edition, NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition, NVIDIA RTX PRO 5000 Blackwell, NVIDIA RTX PRO 4500 Blackwell and NVIDIA RTX PRO 4000 Blackwell
  • Laptop GPUs: NVIDIA RTX PRO 5000 Blackwell, NVIDIA RTX PRO 4000 Blackwell, NVIDIA RTX PRO 3000 Blackwell, NVIDIA RTX PRO 2000 Blackwell, NVIDIA RTX PRO 1000 Blackwell and NVIDIA RTX PRO 500 Blackwell

“Software developers, data scientists, artists, designers and engineers need powerful AI and graphics performance to push the boundaries of visual computing and simulation, helping tackle incredible industry challenges,” said Bob Pette, vice president of enterprise platforms at NVIDIA. “Bringing NVIDIA Blackwell to workstations and servers will take productivity, performance and speed to new heights, accelerating AI inference serving, data science, visualization and content creation.”

NVIDIA Blackwell Technology Comes to Workstations and Data Centers

RTX PRO Blackwell GPUs unlock the potential of generative, agentic and physical AI by delivering exceptional performance, efficiency and scale.

NVIDIA RTX PRO Blackwell GPUs feature:

  • NVIDIA Streaming Multiprocessor: Offers up to 1.5x faster throughput and new neural shaders that integrate AI inside of programmable shaders to drive the next decade of AI-augmented graphics innovations.
  • Fourth-Generation RT Cores: Deliver up to 2x the performance of the previous generation to create photoreal, physically accurate scenes and complex 3D designs with optimizations for NVIDIA RTX™ Mega Geometry.
  • Fifth-Generation Tensor Cores: Deliver up to 4,000 trillion AI operations per second (TOPS) and add support for FP4 precision and NVIDIA DLSS 4 Multi Frame Generation, enabling a new era of AI-powered graphics and the ability to run and prototype larger AI models faster.
  • Larger, Faster GDDR7 Memory: Boosts bandwidth and capacity — up to 96GB for workstations and servers and up to 24GB on laptops. This enables applications to run faster and work with larger, more complex datasets for everything from tackling massive 3D and AI projects to exploring large-scale virtual reality environments.
  • Ninth-Generation NVIDIA NVENC: Accelerates video encoding speed and improves quality for professional video applications with added support for 4:2:2 encoding.
  • Sixth-Generation NVIDIA NVDEC: Provides up to double the H.264 decoding throughput and offers support for 4:2:2 H.264 and HEVC decode. Professionals can benefit from high-quality video playback, accelerate video data ingestion and use advanced AI-powered video editing features.
  • Fifth-Generation PCIe: Support for fifth-generation PCI Express provides double the bandwidth over the previous generation, improving data transfer speeds from CPU memory and unlocking faster performance for data-intensive tasks.
  • DisplayPort 2.1: Drives high-resolution displays at up to 4K at 480Hz and 8K at 165Hz. Increased bandwidth enables seamless multi-monitor setups, while high dynamic range and higher color depth support deliver more precise color accuracy for tasks like video editing, 3D design and live broadcasting.
  • Multi-Instance GPU (MIG): The RTX PRO 6000 data center and desktop GPUs and 5000-series desktop GPUs feature MIG technology, enabling secure partitioning of a single GPU into up to four instances (6000 series) or two instances (5000 series). Fault isolation is designed to prevent workload interference, enabling secure, efficient resource allocation across diverse workloads and maximizing performance and flexibility.
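As a rough sanity check on the memory and partitioning figures above (an illustrative sketch, not an official sizing tool; the even split of the frame buffer across MIG instances and the PCIe per-direction figures are assumptions, not from this release):

```python
# Back-of-the-envelope checks on the spec figures quoted above.
# Assumption (not stated in the release): MIG splits the frame buffer evenly.

workstation_memory_gb = 96   # top-end GDDR7 capacity for workstations/servers
mig_instances = 4            # RTX PRO 6000 supports up to four MIG instances

per_instance_gb = workstation_memory_gb / mig_instances
print(f"Even 4-way MIG split: {per_instance_gb:.0f}GB per instance")  # 24GB

# PCIe Gen 5 doubles Gen 4's bandwidth: ~32GB/s -> ~64GB/s per direction at x16
pcie_gen4_x16_gbps = 32      # GB/s per direction (approximate, assumed)
pcie_gen5_x16_gbps = pcie_gen4_x16_gbps * 2
print(f"PCIe Gen 5 x16: ~{pcie_gen5_x16_gbps}GB/s per direction")
```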

The new laptop GPUs also support the latest NVIDIA Blackwell Max-Q technologies, which intelligently and continually optimize laptop performance and power efficiency with AI.

With neural rendering and AI-augmented tools, NVIDIA RTX PRO Blackwell GPUs enable the creation of stunning visuals, digital twins of real-world environments and immersive experiences with unprecedented speed and efficiency. The GPUs are built to elevate 3D computer-aided design and building information model workflows, offering designers and engineers exceptional performance for complex modeling, rendering and visualization.

Designed for enterprise data center deployments, the RTX PRO 6000 Blackwell Server Edition features a passively cooled thermal design and can be configured with up to eight GPUs per server. For workloads that require the compute density and scale that data centers offer, the RTX PRO 6000 Blackwell Server Edition delivers powerful performance for next-generation AI, scientific and visual computing applications across industries such as healthcare, manufacturing, retail and media and entertainment.

In addition, this powerful data center GPU can be combined with NVIDIA vGPU™ software to power AI workloads across virtualized environments and deliver high-performance virtual workstation instances to remote users. NVIDIA vGPU support for the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU is expected in the latter half of this year.

“Foster + Partners has tested the NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition GPU on Cyclops, our GPU-based ray-tracing product,” said Martha Tsigkari, head of applied research and development and senior partner at Foster + Partners. “The new NVIDIA Blackwell GPU has managed to outperform everything we have tested before. For example, when using it with Cyclops, it has performed at 5x the speed of NVIDIA RTX A6000 GPUs. Rendering speeds also increased 5x, allowing tools like Cyclops to provide feedback on how well our design solutions perform in real time as we design them and resulting in intuitive yet informed decision-making from early conceptual stages.”

“Early evaluation of the RTX PRO 6000 Blackwell technology by GE HealthCare’s engineering team has found the potential for up to 2x GPU processing time improvement on reconstruction algorithms, which could lead to significant benefit to customers,” said Rekha Ranganathan, senior executive and general manager of platforms and digital solutions at GE HealthCare.

“NVIDIA RTX PRO 6000 Blackwell Workstation Edition GPUs enable incredibly sharp and photorealistic graphics,” said Jeff Hammoud, chief design officer at Rivian. “In conjunction with a Varjo XR4 headset and Autodesk VRED, the system delivered the level of crispness necessary for immersive automotive design reviews. With NVIDIA Blackwell support for PCIe Gen 5, we used two powerful 600W GPUs via VR SLI, allowing us to achieve the highest pixel density and the most stunning visuals we have ever experienced in VR.”

“The 96GB memory and massive AI processing power in the NVIDIA RTX PRO 6000 Blackwell Workstation Edition GPU have boosted our productivity up to 3x with AI models like Llama 3.3-70B and Mixtral 8x7B, the NVIDIA Omniverse platform and industrial copilots,” said Shaun Greene, director of industry solutions at SoftServe. “We’ve seen immediate performance improvements and, using workstations, can now handle AI workloads that were previously only possible in the cloud or on rack servers — unlocking new possibilities for interactive demos and production workloads in retail, manufacturing and industrial edge applications.”

RTX PRO GPUs run on the NVIDIA AI platform and feature larger memory capacity and the latest Tensor Cores to accelerate a deep ecosystem of AI-accelerated applications built on NVIDIA CUDA® and RTX technology. From the latest AI-based content creation tools to new reasoning models, such as the NVIDIA Llama Nemotron Reason family and the NVIDIA NIM™ microservices unveiled today, inference is faster than ever. And with over 400 NVIDIA CUDA-X™ libraries, developers can easily build, optimize, deploy and scale new AI applications, from workstations to the data center or cloud.

Enterprises can fast-track their AI development and deployments by prototyping locally with an NVIDIA RTX PRO GPU and the NVIDIA Omniverse™ and NVIDIA AI Enterprise platforms, NVIDIA Blueprints and NVIDIA NIM, which provide access to easy-to-use inference microservices backed by enterprise-level support. They can then run these applications at scale on the RTX PRO 6000 Blackwell Server Edition, the ultimate universal data center GPU for AI and visual computing, which delivers breakthrough acceleration for the most demanding compute-intensive enterprise workloads.

Availability

The NVIDIA RTX PRO 6000 Blackwell Server Edition will soon be available in server configurations from leading data center system partners including Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro.

Cloud service providers and GPU cloud providers including AWS, Google Cloud, Microsoft Azure and CoreWeave will be among the first to offer instances powered by the NVIDIA RTX PRO 6000 Blackwell Server Edition later this year. In addition, the server edition GPU will be available in data center platforms from ASUS, GIGABYTE, Ingrasys, Quanta Cloud Technology (QCT) and other global system partners.

The NVIDIA RTX PRO 6000 Blackwell Workstation Edition and NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition will be available through global distribution partners such as PNY and TD SYNNEX starting in April, with availability from manufacturers, such as BOXX, Dell, HP Inc., Lambda and Lenovo, starting in May.

The NVIDIA RTX PRO 5000, RTX PRO 4500 and RTX PRO 4000 Blackwell GPUs will be available in the summer from BOXX, Dell, HP and Lenovo and through global distribution partners.

NVIDIA RTX PRO Blackwell laptop GPUs will be available from Dell, HP, Lenovo and Razer starting later this year.

To learn more about the NVIDIA RTX PRO Blackwell GPUs, watch the GTC keynote and register to attend sessions from NVIDIA and industry leaders at the show, which runs through March 21. Plus, explore extended-reality demos running on RTX PRO Blackwell GPUs at the XR Pavilion at The Tech Interactive museum.

About NVIDIA


NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

For further information, contact:

Pearlina Boc
NVIDIA Corporation
+1-562-275-5781
[email protected]

Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, and performance of NVIDIA’s products, services, and technologies; third parties adopting or offering NVIDIA’s products and technologies; and bringing Blackwell to workstations and servers taking productivity, performance and speed to new heights, accelerating AI inference serving, data science, visualization and content creation are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

© 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, CUDA, CUDA-X, NVIDIA NIM, NVIDIA Omniverse, NVIDIA RTX, NVIDIA RTX PRO and vGPU are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

A photo accompanying this announcement is available at:
https://www.globenewswire.com/NewsRoom/AttachmentNg/155918f9-2121-4220-9f20-6b968e34a460



NVIDIA Announces DGX Spark and DGX Station Personal AI Computers

Powered by NVIDIA Grace Blackwell, Desktop Supercomputers Place Accelerated AI in the Hands of Developers, Researchers and Data Scientists; Systems Coming From Leading Computer Makers Including ASUS, Dell Technologies, HP and Lenovo

SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today unveiled NVIDIA DGX™ personal AI supercomputers powered by the NVIDIA Grace Blackwell platform.

DGX Spark — formerly Project DIGITS — and DGX Station™, a new high-performance NVIDIA Grace Blackwell desktop supercomputer powered by the NVIDIA Blackwell Ultra platform, enable AI developers, researchers, data scientists and students to prototype, fine-tune and run inference on large models at their desktops. Users can run these models locally or deploy them on NVIDIA DGX Cloud or any other accelerated cloud or data center infrastructure.

DGX Spark and DGX Station bring the power of the Grace Blackwell architecture, previously only available in the data center, to the desktop. Global system builders developing DGX Spark and DGX Station include ASUS, Dell, HP Inc. and Lenovo.

“AI has transformed every layer of the computing stack. It stands to reason a new class of computers would emerge — designed for AI-native developers and to run AI-native applications,” said Jensen Huang, founder and CEO of NVIDIA. “With these new DGX personal AI computers, AI can span from cloud services to desktop and edge applications.”

Igniting Innovation With DGX Spark

DGX Spark is the world’s smallest AI supercomputer, empowering millions of researchers, data scientists, robotics developers and students to push the boundaries of generative and physical AI with massive performance and capabilities.

At the heart of DGX Spark is the NVIDIA GB10 Grace Blackwell Superchip, optimized for a desktop form factor. GB10 features a powerful NVIDIA Blackwell GPU with fifth-generation Tensor Cores and FP4 support, delivering up to 1,000 trillion operations per second of AI compute for fine-tuning and inference with the latest AI reasoning models, including the NVIDIA Cosmos Reason world foundation model and NVIDIA GR00T N1 robot foundation model.

The GB10 Superchip uses NVIDIA NVLink™-C2C interconnect technology to deliver a CPU+GPU-coherent memory model with 5x the bandwidth of fifth-generation PCIe. This lets the superchip move data between the GPU and CPU at high bandwidth, optimizing performance for memory-intensive AI developer workloads.
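Taking the 5x claim at face value, the implied coherent-link bandwidth can be estimated from typical PCIe Gen 5 figures (a hedged estimate only; the x16 lane width and the ~64GB/s per-direction figure are assumptions, not stated in this release):

```python
# Rough estimate of GB10 NVLink-C2C bandwidth implied by the "5x Gen 5 PCIe" claim.
# Assumption: PCIe Gen 5 x16 at ~64GB/s per direction.
pcie_gen5_x16_gbps = 64                      # GB/s, one direction (assumed)
nvlink_c2c_gbps = 5 * pcie_gen5_x16_gbps     # implied coherent-link bandwidth
print(f"Implied NVLink-C2C bandwidth: ~{nvlink_c2c_gbps}GB/s")  # ~320GB/s
```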

NVIDIA’s full-stack AI platform enables DGX Spark users to seamlessly move their models from their desktops to DGX Cloud or any accelerated cloud or data center infrastructure — with virtually no code changes — making it easier than ever to prototype, fine-tune and iterate on their workflows.

Full Speed Ahead With DGX Station


NVIDIA DGX Station brings data-center-level performance to desktops for AI development. The first desktop system to be built with the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, DGX Station features a massive 784GB of coherent memory space to accelerate large-scale training and inferencing workloads. The GB300 Desktop Superchip features an NVIDIA Blackwell Ultra GPU with latest-generation Tensor Cores and FP4 precision — connected to a high-performance NVIDIA Grace™ CPU via NVLink-C2C — delivering best-in-class system communication and performance.

DGX Station also features the NVIDIA ConnectX®-8 SuperNIC, optimized to supercharge hyperscale AI computing workloads. With support for networking at up to 800Gb/s, the ConnectX-8 SuperNIC delivers extremely fast, efficient connectivity, linking multiple DGX Stations to tackle even larger workloads and accelerating network data transfers for AI.
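To put 800Gb/s in context, a quick back-of-the-envelope calculation shows how long it would take to move the DGX Station's full 784GB coherent memory between systems (illustrative only; assumes the link runs at full line rate with no protocol overhead):

```python
# Best-case time to transfer DGX Station's 784GB coherent memory over 800Gb/s.
# Assumption: full line rate, zero protocol overhead (a theoretical floor).
memory_gb = 784            # gigabytes of coherent memory (from the release)
link_gbps = 800            # link speed in gigabits per second
transfer_seconds = memory_gb * 8 / link_gbps   # bytes -> bits, then divide
print(f"Best-case transfer time: {transfer_seconds:.2f} s")  # 7.84 s
```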

Combining these state-of-the-art DGX Station capabilities with the NVIDIA CUDA-X™ AI platform, teams can achieve exceptional desktop AI development performance.

In addition, users gain access to NVIDIA NIM™ microservices with the NVIDIA AI Enterprise software platform, which offers highly optimized, easy-to-deploy inference microservices backed by enterprise support.

Availability

Reservations for DGX Spark systems open today at nvidia.com.

DGX Station is expected to be available from manufacturing partners like ASUS, BOXX, Dell, HP, Lambda and Supermicro later this year.

Learn more by watching the NVIDIA GTC keynote and register for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

About NVIDIA


NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

For further information, contact:
Pearlina Boc
NVIDIA Corporation
+1-562-275-5781
[email protected]  

Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting or offering NVIDIA’s products and technologies; by putting the NVIDIA Grace Blackwell Superchip on every desk, and at every AI developer’s fingertips, NVIDIA empowering millions of people to shape the future of AI; and with new DGX AI supercomputers, software providers, government agencies, startups and researchers being able to prototype, fine-tune and run large AI models — transforming the way they work and create are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. 
These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

© 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, ConnectX, CUDA-X, DGX, DGX Station, NVIDIA Grace, NVIDIA NIM and NVLink are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/a1933d3b-32bd-450a-88f9-cd3df95857e1



Cumulus Media’s Westwood One Presents Exclusive Multiplatform Audio Coverage of the 2025 NCAA® Men’s Basketball Tournament


March Madness® Broadcasts Connect College Hoops Fans and Consumers across Multiple Platforms – 500+ Radio Stations, SiriusXM, NCAA March Madness Live™; via Free Online Streaming at WestwoodOneSports.com, the Westwood One Sports App, the Varsity Network App, and on Alexa-Enabled Devices; and on TuneIn Premium


Network’s Coverage Includes the Final Four® in Spanish, sponsored by Werner Ladder

NEW YORK, March 18, 2025 (GLOBE NEWSWIRE) — Cumulus Media’s Westwood One (NASDAQ: CMLS), the largest audio network in the U.S. and the official network audio broadcast partner of the NCAA, will once again be home to every bounce, basket, and buzzer beater of the NCAA Division I Men’s Basketball Championship. Westwood One’s coverage will tip off at 6:30 p.m. ET prior to the First Four® on Tuesday, March 18 and continue with play-by-play action from every game through the National Championship on Monday, April 7. In addition, Westwood One will broadcast live Spanish-language coverage of the NCAA Men’s Final Four® on Saturday, April 5 and the National Championship Game on Monday, April 7.   

Where to Listen  
Westwood One’s complete NCAA men’s tournament coverage can be heard on terrestrial radio stations nationwide and via SiriusXM. All 67 Division I men’s games will also be streamed online for free at NCAA.com/MarchMadness, on westwoodonesports.com and on the newly launched Westwood One Sports app available in the iTunes and Google Play stores. Additionally, the broadcasts will be available for free via the NCAA March Madness Live app or The Varsity Network app. Fans can also access live audio via Alexa-enabled devices by asking to “Open Westwood One Sports”. TuneIn Premium subscribers can also hear all the action live.

All-Star Broadcast Team  
The trio of Kevin Kugler, Robbie Hummel, and P.J. Carlesimo will announce all the action for the Final Four and the National Championship in English. Andy Katz will return to serve as courtside reporter and Jason Horowitz will host the pregame, halftime and postgame shows from the Final Four and National Championship in San Antonio, Texas.   

In addition to Hummel and Carlesimo, Westwood One’s coverage of NCAA March Madness® will also feature a wealth of renowned commentators and analysts including former coach Tom Crean, All-American Casey Jacobsen, and former college stars LaPhonso Ellis, Austin Croshere, Jordan Cornette, Jon Crispin, King McClure and Sarah Kustok. The roster of tournament play-by-play announcers will include Kugler, Ryan Radtke, Scott Graham, John Sadak, Jason Benetti, Noah Eagle, Chris Carrino, Ted Emrich, Spero Dedes, and Nate Gatter. Jason Horowitz and Nick Bahe will host the network coverage from Westwood One’s Los Angeles studios. Brad Heller will anchor Westwood One’s coverage on satellite, online, and mobile platforms.

Westwood One will present the Spanish-language broadcasts of the Final Four and Championship for the seventh time. The games will be called by Rafael Hernandez Brito; former All-American Greivis Vásquez will return as analyst, and former Puerto Rican National Basketball Team coach Carlos Morales will host the broadcasts. 

Westwood One will produce an outstanding lineup of hour-long college basketball programs that air prior to each day’s coverage including “The NCAA Tournament Today,” “The Final Four Show,” and “Championship Monday.” 

Westwood One’s complete schedule of 2025 NCAA Men’s Basketball Tournament broadcasts:    

  • First Four Doubleheaders:
    • Tuesday, March 18, 6:30 pm ET
    • Wednesday, March 19, 6:30 pm ET
  • First Round Quadrupleheaders:
    • Thursday, March 20, 12 noon ET
    • Friday, March 21, 12 noon ET
  • Second Round Quadrupleheaders:
    • Saturday, March 22, 12 noon ET
    • Sunday, March 23, 12 noon ET
  • Regional Semifinal Doubleheaders:
    • Thursday, March 27, 7:00 pm ET
    • Friday, March 28, 7:00 pm ET
  • Regional Final Doubleheaders:
    • Saturday, March 29, 6:00 pm ET
    • Sunday, March 30, 2:00 pm ET
  • Men’s Final Four:
    • Saturday, April 5, 4:00 pm ET
  • Men’s National Championship:
    • Monday, April 7, 7:00 pm ET

A complete schedule of games, announcer bios, exclusive audio content, and a list of radio stations airing Westwood One’s broadcast of the NCAA Men’s Division I Basketball Championship can be found on westwoodonesports.com or the Westwood One Sports app.   

NCAA, First Four, Final Four, and March Madness are trademarks of the National Collegiate Athletic Association.

About Westwood One Sports 
Westwood One Sports is home to some of the most exciting sports broadcasts on radio. In addition to being the exclusive network radio partner to the NFL since 1987—featuring regular and post-season NFL football, including the playoffs and the Super Bowl—its other extensive properties include NCAA Basketball, including the NCAA Men’s and Women’s Tournaments and the Final Four®; U.S. Soccer; The Masters; NCAA Football; and other marquee sports events. Westwood One also distributes and represents Infinity Sports Network. On social media, join the Westwood One Sports community on Facebook at facebook.com/westwoodonesports, on Instagram at instagram.com/westwoodonesports, and X (formerly Twitter) at x.com/westwood1sports. For more information, visit www.westwoodonesports.com or download the Westwood One Sports app in the iTunes or Google Play stores. 

About Cumulus Media

Cumulus Media (NASDAQ: CMLS) is an audio-first media company delivering premium content to over a quarter billion people every month — wherever and whenever they want it. Cumulus Media engages listeners with high-quality local programming through 400 owned-and-operated radio stations across 84 markets; delivers nationally-syndicated sports, news, talk, and entertainment programming from iconic brands including the NFL, the NCAA, the Masters, CNN, AP News, the Academy of Country Music Awards, and many other world-class partners across more than 9,500 affiliated stations through Westwood One, the largest audio network in America; and inspires listeners through the Cumulus Podcast Network, its rapidly growing network of original podcasts that are smart, entertaining and thought-provoking. Cumulus Media provides advertisers with personal connections, local impact and national reach through broadcast and on-demand digital, mobile, social, and voice-activated platforms, as well as integrated digital marketing services, powerful influencers, full-service audio solutions, industry-leading research and insights, and live event experiences. Cumulus Media is the only audio media company to provide marketers with local and national advertising performance guarantees. For more information visit www.cumulusmedia.com.

About the NCAA 
The NCAA is a diverse association of more than 1,100 member colleges and universities that prioritize academics, well-being and fairness to create greater opportunities for nearly half a million student-athletes each year. The NCAA provides a pathway to higher education and beyond for student-athletes pursuing academic goals and competing in NCAA sports. More than 54,000 student-athletes experience the pinnacle of intercollegiate athletics by competing in NCAA championships each year. Visit ncaa.org and ncaa.com for more details about the Association and the corporate partnerships that support the NCAA and its student-athletes. The NCAA is proud to have AT&T, Capital One and Coca-Cola as official corporate champions and the following elite companies as official corporate partners: Buffalo Wild Wings, Buick, GEICO, Great Clips, The Home Depot, Intuit Turbotax, Invesco QQQ, LG, Marriott Bonvoy, Nabisco, Nissan, Pizza Hut, Reese’s, Unilever, and Wendy’s. 

Contact: Lisa Dollinger Strategic Communication for Cumulus Media | Westwood One | [email protected] 



NVIDIA Announces Spectrum-X Photonics, Co-Packaged Optics Networking Switches to Scale AI Factories to Millions of GPUs

  • 1.6 Terabits Per Second Per Port Switches to Deliver 3.5x Energy Savings and 10x Resilience in AI Factories
  • Joint Inventions and Collaborations With TSMC, Coherent, Corning Incorporated, Foxconn, Lumentum and SENKO to Create Integrated Silicon, Optics Process and Supply Chain

SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today unveiled NVIDIA Spectrum-X™ and NVIDIA Quantum-X silicon photonics networking switches, which enable AI factories to connect millions of GPUs across sites while drastically reducing energy consumption and operational costs. NVIDIA has achieved the fusion of electronic circuits and optical communications at massive scale.

As AI factories grow to unprecedented sizes, networks must evolve to keep pace. NVIDIA photonics switches are the world’s most advanced networking solution. They integrate optics innovations with 4x fewer lasers to deliver 3.5x more power efficiency, 63x greater signal integrity, 10x better network resiliency at scale and 1.3x faster deployment compared with traditional methods.

“AI factories are a new class of data centers with extreme scale, and networking infrastructure must be reinvented to keep pace,” said Jensen Huang, founder and CEO of NVIDIA. “By integrating silicon photonics directly into switches, NVIDIA is shattering the old limitations of hyperscale and enterprise networks and opening the gate to million-GPU AI factories.”

NVIDIA silicon photonics networking switches are available as part of the NVIDIA Spectrum-X Photonics Ethernet and NVIDIA Quantum-X Photonics InfiniBand platforms.

The Spectrum-X Ethernet networking platform delivers superior performance and 1.6x bandwidth density compared with traditional Ethernet for multi-tenant, hyperscale AI factories, including the largest supercomputer in the world.

NVIDIA Spectrum-X Photonics switches include multiple configurations, including 128 ports of 800Gb/s or 512 ports of 200Gb/s, delivering 100Tb/s total bandwidth, as well as 512 ports of 800Gb/s or 2,048 ports of 200Gb/s, for a total throughput of 400Tb/s.
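The port-count and aggregate-bandwidth figures quoted above are internally consistent, as a quick check shows (an illustrative sketch; the quoted 100Tb/s and 400Tb/s totals appear to be rounded down from the exact products):

```python
# Verify the Spectrum-X Photonics configurations against the quoted totals.
configs = [
    (128, 800),   # 128 ports x 800Gb/s
    (512, 200),   # 512 ports x 200Gb/s
    (512, 800),   # 512 ports x 800Gb/s
    (2048, 200),  # 2,048 ports x 200Gb/s
]
totals_tbps = [ports * gbps / 1000 for ports, gbps in configs]
for (ports, gbps), total in zip(configs, totals_tbps):
    print(f"{ports} x {gbps}Gb/s = {total:.1f}Tb/s")
# First two configs -> 102.4Tb/s (quoted as 100Tb/s);
# last two -> 409.6Tb/s (quoted as 400Tb/s).
```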

NVIDIA Quantum-X Photonics switches provide 144 ports of 800Gb/s InfiniBand based on 200Gb/s SerDes and use a liquid-cooled design to efficiently cool the onboard silicon photonics. NVIDIA Quantum-X Photonics switches offer 2x faster speeds and 5x higher scalability for AI compute fabrics compared with the previous generation.

A Networked Ecosystem

NVIDIA’s silicon photonics ecosystem includes TSMC, Browave, Coherent, Corning Incorporated, Fabrinet, Foxconn, Lumentum, SENKO, SPIL, Sumitomo Electric Industries and TFC Communication.

“A new wave of AI factories requires efficiency and minimal maintenance to achieve the scale required for next-generation workloads,” said C. C. Wei, chairman and CEO of TSMC. “TSMC’s silicon photonics solution combines our strengths in both cutting-edge chip manufacturing and TSMC-SoIC 3D chip stacking to help NVIDIA unlock an AI factory’s ability to scale to a million GPUs and beyond, pushing the boundaries of AI.”

NVIDIA photonics will drive massive growth for a new wave of state-of-the-art AI factories, alongside pluggable optical transceiver technologies supported by industry leaders including Coherent, Eoptolink, Fabrinet and Innolight.

Availability

NVIDIA Quantum-X Photonics InfiniBand switches are expected to be available later this year, with NVIDIA Spectrum-X Photonics Ethernet switches coming in 2026 from leading infrastructure and system vendors.

Learn more by watching the NVIDIA GTC keynote and register for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

About NVIDIA

NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

For further information, contact:

Alex Shapiro
Enterprise Networking
1-415-608-5044
[email protected]

Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting or offering NVIDIA’s products and technologies; and by integrating silicon photonics directly into switches, NVIDIA is shattering the old limitations of hyperscale and enterprise networks and opening the gate to million-GPU AI factories are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

© 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo and NVIDIA Spectrum-X are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/fff1c2c4-5853-4e6f-a262-e41e95d2301a



Three UBS Financial Advisors in Pacific Desert Market named to Financial Planning Top 40 under 40

SAN DIEGO & PHOENIX–(BUSINESS WIRE)–
UBS is proud to announce that three of the firm’s financial advisors in its Pacific Desert Market areas of Southern California and Arizona have been named to the Financial Planning Top 40 Brokers Under 40 listing for 2025. Nationwide, UBS held 12 of the 40 top spots.

The Top 40 under 40 list honors advisors who were age 39 or younger by Dec. 31, 2024, and who are registered with a wealth management firm in the employee channel. They are considered by Financial Planning to be among the industry’s top-producing young financial advisors at wirehouses and regional firms.

“These talented advisors already have demonstrated expert knowledge and a deep commitment to helping our clients plan for their families’ financial needs now and into the future,” said Justin Frame, Pacific Desert Market Executive, at UBS. “They represent the future of the wealth management industry, and I’d like to congratulate each of them on this achievement. We look forward to their continued success in helping us deliver exceptional services to our clients.”

The advisors named to the 2025 Financial Planning Top 40 under 40 list include:

  • Stockton Schultz, located in Phoenix, ranks 25th on the list at 36 years of age.
  • Bryan Thompson, located in downtown San Diego, ranks 28th on the list. Bryan is 39 years old.
  • Scott MacDonald, located in Scottsdale, ranks 30th on the list nationwide at 39 years old. Scott works with ultra-high-net-worth clients through UBS Private Wealth Management.

Altogether, the 2025 Top 40 Brokers Under 40 generated just over $215 million in annual revenue, up slightly from the $207 million generated by last year’s list. On average, brokers on this year’s list generated nearly $5.4 million, similar to last year. Financial Planning determines the rankings using production totals verified by each broker’s employer. The reported trailing 12-month production and AUM were tabulated as of Sept. 30, 2024.

For the full list, visit: https://www.financial-planning.com/list/financial-plannings-top-40-under-40-in-2025-top-brokers.

Notes to Editors: Photos available upon request

About UBS

UBS is a leading and truly global wealth manager and the leading universal bank in Switzerland. It also provides diversified asset management solutions and focused investment banking capabilities. With the acquisition of Credit Suisse, UBS manages $5.5 trillion of invested assets as of the second quarter of 2023. UBS helps clients achieve their financial goals through personalized advice, solutions and products. Headquartered in Zurich, Switzerland, the firm operates in more than 50 markets around the globe. UBS Group shares are listed on the SIX Swiss Exchange and the New York Stock Exchange (NYSE).

© UBS 2025. All rights reserved. The key symbol and UBS are among the registered and unregistered trademarks of UBS. Although neither UBS Financial Services Inc. nor its employees pay a fee in exchange for these ratings, UBS may hire RJ Shook to be a speaker for events. Past performance is not an indication of future results.

Media Contact:

Hilary McCarthy

774-364-1440

[email protected]

https://www.ubs.com

KEYWORDS: United States North America California Arizona

INDUSTRY KEYWORDS: Finance Consulting Banking Professional Services Asset Management


Eastern Bank Expands Franchise Lending Offering

David Farwell and Randi Baskin Blume Bring Decades of Franchise Lending Experience

BOSTON–(BUSINESS WIRE)–Eastern Bank is pleased to expand its Franchise Lending group with the arrivals of Senior Vice President and Head of Specialty Finance David Farwell and Senior Vice President of Franchise Finance Randi Baskin Blume. Both bring extensive expertise in franchise lending and commercial banking.

“It is an exciting time for Eastern Bank as we build out our franchise lending offering, and we are thrilled to welcome David Farwell and Randi Baskin Blume to the team,” said Greg Buscone, Executive Vice President, Chief Commercial Banking Officer of Eastern Bank. “David’s track record of developing high-performing teams and Randi’s deep industry relationships will enhance how we support franchisees – many of whom are multi-generational family-owned businesses – with customized banking solutions.”

Mr. Farwell joins Eastern Bank with over 20 years of experience in specialty finance. Most recently, he served as Executive Vice President, Head of Specialty Banking – Franchise Finance at Citizens Bank, where he built and led multiple industry-focused finance groups, including McDonald’s Franchise Finance, General Restaurant Finance, Retail Petroleum Finance and Sports Finance. Mr. Farwell also played a pivotal role in shaping strategic credit policies and fostering long-term relationships with top-tier franchise operators. He holds an MBA degree in Finance from Boston College’s Carroll School of Management and a Bachelor of Science degree in Business Administration from the University of New Hampshire.

Ms. Blume brings 25 years of commercial lending experience including over 10 years in franchise finance, most recently serving as Vice President, Franchise Finance – Senior Relationship Manager at BMO Bank, N.A. There, she managed a multi-million dollar loan portfolio, specializing in Quick Service Restaurant (QSR) franchises. She was instrumental in securing BMO’s designation as a McDonald’s Approved Lender, growing the bank’s McDonald’s clientele from three to 50 operators in just three years. Throughout her career, Ms. Blume has been dedicated to supporting franchisees by providing tailored lending solutions and strategic financial insights. She holds an MBA degree in Marketing Management from DePaul University and a Bachelor of Science degree in Finance from the University of Illinois Urbana-Champaign.

Eastern Bank’s Franchise Finance division is built on decades of industry expertise and a deep understanding of the unique financial needs of franchise operators. Eastern serves franchisees across top tier brands, and the team consists of highly experienced and dedicated advisors, portfolio managers and underwriters who specialize in the Quick Service Restaurant (QSR) business.

“Eastern Bank has a long-standing reputation for taking the time to understand the needs of its customers and how it can best support their visions, and I am excited to help expand our reach in this space,” said David Farwell, Senior Vice President, Head of Specialty Finance of Eastern Bank. “Our goal is to provide franchise operators with the financial resources and support they need to grow and succeed.”

“I’m thrilled to join Eastern Bank’s franchise lending team and to bring my experience in franchise finance to help clients achieve their business goals,” added Randi Baskin Blume, Senior Vice President, Franchise Finance of Eastern Bank.

Eastern Bank provides a range of commercial financing offerings to help companies across many industries improve cash flow, increase efficiencies and build for the future. Commercial lending solutions include working capital/lines of credit, equipment/term loans, real estate loans, acquisition financing, asset-based lending, franchise lending and employee stock ownership plan-related financing.

About Eastern Bank

Founded in 1818, Eastern Bank is Greater Boston’s leading local bank with more than 110 locations serving communities in eastern Massachusetts, southern and coastal New Hampshire, Rhode Island and Connecticut. As of December 31, 2024, Eastern Bank had approximately $25.6 billion in assets. Eastern provides a full range of banking and wealth management solutions for consumers and businesses of all sizes including through its Cambridge Trust Wealth Management division, the largest bank-owned independent investment advisor in Massachusetts with approximately $8.3 billion in assets under management, and takes pride in its outspoken advocacy and community support that includes more than $240 million in charitable giving since 1994. An inclusive company, Eastern is comprised of deeply committed professionals who value relationships with their customers, colleagues and communities. Join us for good at www.easternbank.com and follow Eastern on Facebook, LinkedIn, X and Instagram. Eastern Bankshares, Inc. (Nasdaq Global Select Market: EBC) is the holding company for Eastern Bank. For investor information, visit investor.easternbank.com.

Media contact:

Andrea Goodman

Eastern Bank

[email protected]

781-598-7847

Investor contact:

Andrew Hersom

Eastern Bankshares, Inc.

[email protected]

860-707-4432

KEYWORDS: United States North America Massachusetts

INDUSTRY KEYWORDS: Restaurant/Bar Banking Retail Professional Services Small Business


NVIDIA Blackwell Ultra AI Factory Platform Paves Way for Age of AI Reasoning

  • Top Computer Makers, Cloud Service Providers and GPU Cloud Providers to Boost Training and Test-Time Scaling Inference, From Reasoning to Agentic and Physical AI
  • New Open-Source NVIDIA Dynamo Inference Software to Scale Up Reasoning AI Services With Leaps in Throughput, Faster Response Time and Reduced Total Cost of Ownership
  • NVIDIA Spectrum-X Enhanced 800G Ethernet Networking for AI Infrastructure Significantly Reduces Latency and Jitter

SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — NVIDIA today announced the next evolution of the NVIDIA Blackwell AI factory platform, NVIDIA Blackwell Ultra — paving the way for the age of AI reasoning.

NVIDIA Blackwell Ultra boosts training and test-time scaling inference — the art of applying more compute during inference to improve accuracy — to enable organizations everywhere to accelerate applications such as AI reasoning, agentic AI and physical AI.
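The release does not specify how test-time scaling is implemented, but a common form of the technique is sampling many candidate answers and keeping the consensus. The toy sketch below (with a hypothetical stand-in for model sampling, not any NVIDIA API) illustrates why spending more compute at inference can raise accuracy:

```python
import random
from collections import Counter

def sample_answer(question, rng):
    # Hypothetical stand-in for one stochastic model completion:
    # correct ("42") 90% of the time, a wrong digit otherwise.
    return "42" if rng.random() < 0.9 else str(rng.randint(0, 9))

def best_of_n(question, n, seed=0):
    """Self-consistency: draw n independent samples, majority-vote.

    Larger n means more inference-time compute and a lower chance
    that a wrong answer wins the vote.
    """
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n))
    return votes.most_common(1)[0][0]

answer = best_of_n("life, the universe and everything?", n=32)
```

Here a single sample is wrong 10% of the time, but a 32-sample majority vote is wrong almost never; that trade of extra inference compute for accuracy is the essence of test-time scaling.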

Built on the groundbreaking Blackwell architecture introduced a year ago, Blackwell Ultra includes the NVIDIA GB300 NVL72 rack-scale solution and the NVIDIA HGX™ B300 NVL16 system. The GB300 NVL72 delivers 1.5x more AI performance than the NVIDIA GB200 NVL72 and increases the revenue opportunity for AI factories by 50x compared with those built with NVIDIA Hopper™.

“AI has made a giant leap — reasoning and agentic AI demand orders of magnitude more computing performance,” said Jensen Huang, founder and CEO of NVIDIA. “We designed Blackwell Ultra for this moment — it’s a single versatile platform that can easily and efficiently do pretraining, post-training and reasoning AI inference.”

NVIDIA Blackwell Ultra Enables AI Reasoning

The NVIDIA GB300 NVL72 connects 72 Blackwell Ultra GPUs and 36 Arm Neoverse-based NVIDIA Grace™ CPUs in a rack-scale design, acting as a single massive GPU built for test-time scaling. With the NVIDIA GB300 NVL72, AI models can access the platform’s increased compute capacity to explore different solutions to problems and break down complex requests into multiple steps, resulting in higher-quality responses.

GB300 NVL72 is also expected to be available on NVIDIA DGX™ Cloud, an end-to-end, fully managed AI platform on leading clouds that optimizes performance with software, services and AI expertise for evolving workloads. NVIDIA DGX SuperPOD™ with DGX GB300 systems uses the GB300 NVL72 rack design to provide customers with a turnkey AI factory.

The NVIDIA HGX B300 NVL16 features 11x faster inference on large language models, 7x more compute and 4x larger memory compared with the Hopper generation to deliver breakthrough performance for the most complex workloads, such as AI reasoning.

In addition, the Blackwell Ultra platform is ideal for applications including:

  • Agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multistep problems. AI agent systems go beyond instruction-following. They can reason, plan and take actions to achieve specific goals.
  • Physical AI, enabling companies to generate synthetic, photorealistic videos in real time for the training of applications such as robots and autonomous vehicles at scale.
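The agentic pattern described above (reason, plan, act toward a goal) can be sketched as a minimal plan-act loop. This is an illustrative toy, not NVIDIA software; the planner and tools are hypothetical placeholders:

```python
def run_agent(goal, plan_fn, tools, max_steps=8):
    """Iterative plan-act loop: pick the next action, execute a
    tool, record the result, and repeat until the planner is done."""
    state = {"goal": goal, "history": []}
    for _ in range(max_steps):
        step = plan_fn(state)      # reason about what to do next
        if step is None:           # planner signals the goal is met
            break
        name, arg = step
        result = tools[name](arg)  # act by calling a tool
        state["history"].append((name, arg, result))
    return state

# Toy planner and tools for a two-step task: search, then summarize.
def planner(state):
    done = [name for name, _, _ in state["history"]]
    if "search" not in done:
        return ("search", state["goal"])
    if "summarize" not in done:
        return ("summarize", state["history"][-1][2])
    return None

tools = {
    "search": lambda q: f"docs about {q}",
    "summarize": lambda text: text.upper(),
}
final = run_agent("GPU scheduling", planner, tools)
```

The loop structure is what distinguishes an agent from single-shot instruction following: the planner re-inspects accumulated results before choosing each next action.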

NVIDIA Scale-Out Infrastructure for Optimal Performance

Advanced scale-out networking is a critical component of AI infrastructure that can deliver top performance while reducing latency and jitter.

Blackwell Ultra systems seamlessly integrate with the NVIDIA Spectrum-X™ Ethernet and NVIDIA Quantum-X800 InfiniBand platforms, with 800 Gb/s of data throughput available for each GPU in the system, through an NVIDIA ConnectX®-8 SuperNIC. This delivers best-in-class remote direct memory access capabilities to enable AI factories and cloud data centers to handle AI reasoning models without bottlenecks.

NVIDIA BlueField®-3 DPUs, also featured in Blackwell Ultra systems, enable multi-tenant networking, GPU compute elasticity, accelerated data access and real-time cybersecurity threat detection.

Global Technology Leaders Embrace Blackwell Ultra

Blackwell Ultra-based products are expected to be available from partners beginning in the second half of 2025.

Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro are expected to deliver a wide range of servers based on Blackwell Ultra products, in addition to Aivres, ASRock Rack, ASUS, Eviden, Foxconn, GIGABYTE, Inventec, Pegatron, Quanta Cloud Technology (QCT), Wistron and Wiwynn.

Cloud service providers Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure and GPU cloud providers CoreWeave, Crusoe, Lambda, Nebius, Nscale, Yotta and YTL will be among the first to offer Blackwell Ultra-powered instances.

NVIDIA Software Innovations Reduce AI Bottlenecks

The entire NVIDIA Blackwell product portfolio is supported by the full-stack NVIDIA AI platform. The NVIDIA Dynamo open-source inference framework — also announced today — scales up reasoning AI services, delivering leaps in throughput while reducing response times and model serving costs by providing the most efficient solution for scaling test-time compute.

NVIDIA Dynamo is new AI inference-serving software designed to maximize token revenue generation for AI factories deploying reasoning AI models. It orchestrates and accelerates inference communication across thousands of GPUs, and uses disaggregated serving to separate the processing and generation phases of large language models on different GPUs. This allows each phase to be optimized independently for its specific needs and ensures maximum GPU resource utilization.
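Dynamo's internals are not detailed here, but the disaggregated-serving idea it describes can be sketched with two separate worker pools: a compute-bound prefill stage that processes prompts and hands off a KV cache, and a memory-bandwidth-bound decode stage that generates tokens. All names below are hypothetical placeholders, not the Dynamo API:

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Request:
    prompt: str
    kv_cache: str = ""                       # stand-in for the handed-off KV cache
    output: list = field(default_factory=list)

def prefill_worker(req):
    # Compute-bound phase: process the whole prompt in one pass.
    req.kv_cache = f"kv({req.prompt})"
    return req

def decode_worker(req, n_tokens=3):
    # Bandwidth-bound phase: generate output tokens one at a time.
    for i in range(n_tokens):
        req.output.append(f"tok{i}")
    return req

def serve(prompts):
    """Route requests through separate prefill and decode queues, so
    each pool could be sized and optimized independently."""
    prefill_q, decode_q, done = Queue(), Queue(), []
    for p in prompts:
        prefill_q.put(Request(p))
    while not prefill_q.empty():             # prefill pool drains prompts
        decode_q.put(prefill_worker(prefill_q.get()))
    while not decode_q.empty():              # decode pool produces tokens
        done.append(decode_worker(decode_q.get()))
    return done

results = serve(["hello", "world"])
```

Splitting the phases matters because their bottlenecks differ: prefill saturates compute while decode saturates memory bandwidth, so mixing them on the same GPUs leaves one resource idle.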

Blackwell systems are ideal for running new NVIDIA Llama Nemotron Reason models and the NVIDIA AI-Q Blueprint, supported in the NVIDIA AI Enterprise software platform for production-grade AI. NVIDIA AI Enterprise includes NVIDIA NIM microservices, as well as AI frameworks, libraries and tools that enterprises can deploy on NVIDIA-accelerated clouds, data centers and workstations.

The Blackwell platform builds on NVIDIA’s ecosystem of powerful development tools, NVIDIA CUDA-X libraries, over 6 million developers and 4,000+ applications scaling performance across thousands of GPUs.

Learn more by watching the NVIDIA GTC keynote and register for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

About NVIDIA

NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

For further information, contact:

Kristin Uchiyama
NVIDIA Corporation
+1-408-313-0448
[email protected]

Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting or offering NVIDIA’s products and technologies; Blackwell Ultra being able to easily and efficiently do pretraining, post-training and reasoning AI inference; and advanced networking being a critical component of AI infrastructure that can deliver top performance while reducing latency and jitter are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as, a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products are subject to change and remain at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

© 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, BlueField, ConnectX, CUDA-X, NVIDIA DGX, NVIDIA DGX SuperPOD, NVIDIA Grace, NVIDIA HGX, NVIDIA Hopper, NVIDIA NIM and NVIDIA Spectrum-X are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/7bb5b0bf-daad-41dc-8d0f-d1706984d616