{"id":890342,"date":"2025-09-29T11:29:15","date_gmt":"2025-09-29T15:29:15","guid":{"rendered":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/"},"modified":"2025-09-29T11:29:15","modified_gmt":"2025-09-29T15:29:15","slug":"nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries","status":"publish","type":"post","link":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/","title":{"rendered":"NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries"},"content":{"rendered":"<div class=\"mw_release\">\n<p align=\"left\">\u00a0<strong>News Summary:<\/strong><\/p>\n<ul type=\"disc\">\n<li>The open-source Newton Physics Engine \u2014 codeveloped with Google DeepMind and Disney Research, and now available in NVIDIA Isaac Lab \u2014 helps researchers and developers create more capable and adaptable robots.<\/li>\n<li>New NVIDIA Isaac GR00T open foundation model brings humanlike reasoning to robots, allowing them to break down complex instructions and execute tasks using prior knowledge and common sense.<\/li>\n<li>New NVIDIA Cosmos world foundation models enable developers to generate diverse data for accelerating training physical AI models at scale.<\/li>\n<li>Global researchers at leading universities such as Stanford University, ETH Zurich and the National University of Singapore are tapping NVIDIA accelerated computing and software to advance robotics research.<\/li>\n<li>Leading robot developers Agility Robotics, Boston Dynamics, Disney Research, Figure AI, Franka Robotics, Hexagon, Skild AI, Solomon and Techman Robot are adopting NVIDIA Isaac and Omniverse technologies.<\/li>\n<\/ul>\n<p>SEOUL, South Korea, Sept.  
29, 2025  (GLOBE NEWSWIRE) &#8212; <strong>CoRL <\/strong>&#8212;\u00a0NVIDIA today announced that the open\u2011source <a href=\"https:\/\/developer.nvidia.com\/newton-physics\" rel=\"nofollow\" target=\"_blank\"><u>Newton Physics Engine<\/u><\/a> is now available in <a href=\"https:\/\/developer.nvidia.com\/isaac\/lab\" rel=\"nofollow\" target=\"_blank\"><u>NVIDIA Isaac\u2122 Lab<\/u><\/a>, along with the open <a href=\"https:\/\/developer.nvidia.com\/isaac\/gr00t\" rel=\"nofollow\" target=\"_blank\"><u>NVIDIA Isaac GR00T N1.6<\/u><\/a> reasoning vision language action model for robot skills and new AI infrastructure. Together, these technologies provide developers and researchers with an open, accelerated robotics platform that speeds iteration, standardizes testing, unifies training with on\u2011robot inference and helps robots transfer skills safely and reliably from simulation to the real world.<\/p>\n<p>\u201cHumanoids are the next frontier of physical AI, requiring the ability to reason, adapt and act safely in an unpredictable world,\u201d said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA. \u201cWith these latest updates, developers now have the three computers to bring robots from research into everyday life \u2014 with Isaac GR00T serving as robots\u2019 brains, Newton simulating their bodies and NVIDIA Omniverse as their training ground.\u201d<\/p>\n<p>\n        <strong>Newton Sets New Standard for Physical Simulation in Robotics<\/strong><br \/>\n        <br \/>Robots learn faster and more safely in <a href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/robotics-simulation\/\" rel=\"nofollow\" target=\"_blank\"><u>simulation<\/u><\/a>, but <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/humanoid-robot\/\" rel=\"nofollow\" target=\"_blank\"><u>humanoid robots<\/u><\/a> \u2014 with their complex joints, balance and movements \u2014 push today\u2019s physics engines to the limit. 
Over a quarter-million robotics developers worldwide need accurate physics, so the skills they teach robots in simulation can be executed safely and reliably in the real world.<\/p>\n<p>Today, NVIDIA announced the beta release of Newton, an open-source, GPU-accelerated physics engine, <a href=\"https:\/\/www.linuxfoundation.org\/press\/linux-foundation-announces-contribution-of-newton-by-disney-research-google-deepmind-and-nvidia-to-accelerate-open-robot-learning.\" rel=\"nofollow\" target=\"_blank\"><u>managed by the Linux Foundation<\/u><\/a>. Built on the <a href=\"https:\/\/developer.nvidia.com\/warp-python\" rel=\"nofollow\" target=\"_blank\"><u>NVIDIA Warp<\/u><\/a> and <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"nofollow\" target=\"_blank\"><u>OpenUSD<\/u><\/a> frameworks, and codeveloped by Google DeepMind, Disney Research and NVIDIA, Newton is available now.<\/p>\n<p>With <a href=\"https:\/\/developer.nvidia.com\/blog\/train-a-quadruped-locomotion-policy-and-simulate-cloth-manipulation-with-nvidia-isaac-lab-and-newton\/\" rel=\"nofollow\" target=\"_blank\"><u>Newton<\/u><\/a>\u2019s flexible design and ability to work with different types of physics solvers, developers can now simulate extremely complex robot actions, like walking through snow or gravel and handling cups and fruits, and successfully deploy them in the real world.<\/p>\n<p>The latest adopters of Newton are esteemed research labs and universities such as ETH Zurich Robotic Systems Lab, Technical University of Munich and Peking University, robotics company <a href=\"https:\/\/lightwheel.ai\/media\/lightwheel-newton\" rel=\"nofollow\" target=\"_blank\"><u>Lightwheel<\/u><\/a>, and simulation engine company Style3D.<\/p>\n<p>\n        <strong>Cosmos Reason Improves Robot Reasoning for New Open Isaac GR00T N1.6 Model<\/strong><br \/>\n        <br \/>To perform humanlike tasks in the physical world, humanoids must understand ambiguous instructions and deal with the long 
tail of never-before-seen experiences.<\/p>\n<p>The latest release of the open Isaac GR00T N1.6 robot foundation model, available soon on Hugging Face, will integrate <a href=\"https:\/\/github.com\/nvidia-cosmos\/cosmos-reason1\" rel=\"nofollow\" target=\"_blank\"><u>NVIDIA Cosmos\u2122 Reason<\/u><\/a>, an open, customizable reasoning vision language model built for <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/generative-physical-ai\/\" rel=\"nofollow\" target=\"_blank\"><u>physical AI<\/u><\/a>. Acting as the robot\u2019s deep-thinking brain, Cosmos Reason turns vague instructions into step-by-step plans, using prior knowledge, common sense and physics to handle new situations and generalize across many tasks.<\/p>\n<p>Cosmos Reason, downloaded over 1 million times and currently at the top of the <a href=\"https:\/\/huggingface.co\/spaces\/facebook\/physical_reasoning_leaderboard\" rel=\"nofollow\" target=\"_blank\"><u>Physical Reasoning Leaderboard<\/u><\/a> on Hugging Face, can also curate and annotate large sets of real and synthetic data for model training.\u00a0<a href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nim\/teams\/nvidia\/containers\/cosmos-reason1-7b?version=1\" rel=\"nofollow\" target=\"_blank\">Cosmos Reason 1 is now available as an easy-to-use NVIDIA NIM<\/a>\u2122 microservice for AI model deployment.<\/p>\n<p>Isaac GR00T N1.6 now lets <a href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/humanoid-robots\/\" rel=\"nofollow\" target=\"_blank\"><u>humanoids<\/u><\/a> move and handle objects simultaneously, allowing more torso and arm freedom to complete tough tasks like opening heavier doors.<\/p>\n<p>Developers can post-train Isaac GR00T N models using the open-source <a href=\"https:\/\/huggingface.co\/collections\/nvidia\/physical-ai-67c643edbb024053dcbcd6d8\" rel=\"nofollow\" target=\"_blank\"><u>NVIDIA Physical AI Dataset<\/u><\/a> on Hugging Face. 
Downloaded over 4.8 million times, the dataset now includes thousands of synthetic and real-world trajectories.<\/p>\n<p>Leading robot makers such as AeiROBOT, Franka Robotics, LG Electronics, Lightwheel, Mentee Robotics, Neura Robotics, Solomon, Techman Robot and UCR are evaluating <a href=\"https:\/\/huggingface.co\/nvidia\/GR00T-N1.5-3B\" rel=\"nofollow\" target=\"_blank\"><u>Isaac GR00T N models<\/u><\/a> for building general-purpose robots.<\/p>\n<p>\n        <strong>New Cosmos World Foundation Models for Physical AI Development <\/strong><br \/>\n        <br \/>NVIDIA <a href=\"https:\/\/research.nvidia.com\/publication\/2025-09_world-simulation-video-foundation-models-physical-ai\" rel=\"nofollow\" target=\"_blank\"><u>announced new updates<\/u><\/a> to its open <a href=\"https:\/\/www.nvidia.com\/en-us\/ai\/cosmos\/\" rel=\"nofollow\" target=\"_blank\"><u>Cosmos WFMs<\/u><\/a>, downloaded over 3 million times, that let developers generate diverse data for accelerating the training of physical AI models at scale using text, image and video prompts.<\/p>\n<ul type=\"disc\">\n<li>Cosmos Predict 2.5, coming soon, combines three Cosmos WFMs into a single model, cutting complexity, saving time and boosting efficiency. It supports longer video generation \u2014 capable of creating up to 30-second videos \u2014 as well as multi-view camera outputs for richer world simulations.<\/li>\n<li>Cosmos Transfer 2.5, coming soon, delivers faster, higher-quality results than previous models, while being 3.5x smaller. It can generate photorealistic synthetic data from ground-truth 3D simulation scenes and spatial control inputs like depth, segmentation, edges and high-definition maps.<\/li>\n<\/ul>\n<p>\n        <strong>New Workflow for Teaching Robot Grasping <\/strong><br \/>\n        <br \/>Teaching a robot to grasp an object is one of the most difficult challenges in robotics. 
It is not just about moving an arm but turning a thought into a precise action \u2014 a skill robots must learn through trial and error.<\/p>\n<p>The new <a href=\"https:\/\/github.com\/NVlabs\/DEXTRAH\" rel=\"nofollow\" target=\"_blank\"><u>dexterous grasping<\/u><\/a> workflow in the <a href=\"https:\/\/developer.nvidia.com\/blog\/streamline-robot-learning-with-whole-body-control-and-enhanced-teleoperation-in-nvidia-isaac-lab-2-3\/\" rel=\"nofollow\" target=\"_blank\"><u>developer preview of Isaac Lab 2.3<\/u><\/a>, built on the <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\" rel=\"nofollow\" target=\"_blank\"><u>NVIDIA Omniverse<\/u><\/a>\u2122 platform, <a href=\"https:\/\/research.nvidia.com\/publication\/2025-09_world-simulation-video-foundation-models-physical-ai\" rel=\"nofollow\" target=\"_blank\"><u>trains multi-fingered hand and arm robots<\/u><\/a> in a virtual world using an automated curriculum. It starts with simple tasks and gradually ramps up the complexity. The workflow varies aspects like gravity, friction and the weight of an object, training robots to learn skills even in unpredictable environments.<\/p>\n<p>\n        <a href=\"https:\/\/bostondynamics.com\/video\/arm-you-glad-to-see-me-atlas\/\" rel=\"nofollow\" target=\"_blank\"><br \/>\n          <u>Boston Dynamics<\/u><br \/>\n        <\/a>\u2019 Atlas robot learned grasping using this workflow, significantly improving its manipulation capabilities.<\/p>\n<p>Leading robot developers Agility Robotics, Boston Dynamics, Figure AI, Hexagon, Skild AI, Solomon and Techman Robot are adopting NVIDIA Isaac and Omniverse technologies.<\/p>\n<p>\n        <strong>Evaluating Learned Robot Skills in Simulation<\/strong><br \/>\n        <br \/>Getting a robot to master a new skill \u2014 like picking up a cup or walking across a room \u2014 is incredibly difficult, and testing these skills on a physical robot is slow and expensive.<\/p>\n<p>The solution lies in simulation, which offers a way to 
test a robot\u2019s learned skills against countless scenarios, tasks and environments. But even in simulation, developers tend to build fragmented, simplified tests that do not reflect the real world. A robot that learns to navigate a perfect, simple simulation will fail the moment it faces real-world complexity.<\/p>\n<p>To let developers run complex, large-scale evaluations in a simulated environment without having to build the system from scratch, NVIDIA and <a href=\"https:\/\/lightwheel.ai\/media\/lightwheel-benchmark-anouncement\" rel=\"nofollow\" target=\"_blank\"><u>Lightwheel<\/u><\/a> are codeveloping Isaac Lab &#8211; Arena, an open-source policy evaluation framework for scalable experimentation and standardized testing. The framework will be available soon.<\/p>\n<p>\n        <strong>New NVIDIA AI Infrastructure Powers Robotics Workloads Anywhere<\/strong><br \/>\n        <br \/>To enable developers to take full advantage of these advanced technologies and software libraries, NVIDIA announced AI infrastructure designed for the most demanding workloads, including:<\/p>\n<ul type=\"disc\">\n<li>\n          <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/gb200-nvl72\/\" rel=\"nofollow\" target=\"_blank\"><br \/>\n            <u>NVIDIA GB200 NVL72<\/u><br \/>\n          <\/a>, a rack-scale system integrating 36 <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/grace-cpu\/\" rel=\"nofollow\" target=\"_blank\"><u>NVIDIA Grace\u2122 CPUs<\/u><\/a> and 72 <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/technologies\/blackwell-architecture\/\" rel=\"nofollow\" target=\"_blank\"><u>NVIDIA Blackwell GPUs<\/u><\/a>, which is being adopted by major cloud providers to accelerate AI training and inference, including complex reasoning and physical AI tasks.<\/li>\n<li>\n          <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/products\/rtx-pro-server\/\" rel=\"nofollow\" target=\"_blank\"><br \/>\n            <u>NVIDIA RTX PRO\u2122 
Servers<\/u><br \/>\n          <\/a>, which offer a single architecture for every robot development workload across training, <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/synthetic-data-generation\/\" rel=\"nofollow\" target=\"_blank\"><u>synthetic data generation<\/u><\/a>, <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/robot-learning\/\" rel=\"nofollow\" target=\"_blank\"><u>robot learning<\/u><\/a> and <a href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/robotics-simulation\/\" rel=\"nofollow\" target=\"_blank\"><u>simulation<\/u><\/a>. RTX PRO Servers have been adopted by the RAI Institute.<\/li>\n<li>\n          <a href=\"https:\/\/www.nvidia.com\/en-us\/autonomous-machines\/embedded-systems\/jetson-thor\/\" rel=\"nofollow\" target=\"_blank\"><br \/>\n            <u>NVIDIA Jetson Thor<\/u><br \/>\n          <\/a>\u2122, powered by a Blackwell GPU, which enables robots to run multi-AI workflows for real-time, intelligent interactions and unlocks real-time on-robot inference \u2014 a breakthrough for high-performance physical AI workloads and applications such as humanoid robotics. 
Jetson Thor has been adopted by partners including Figure AI, Galbot, Google DeepMind, Mentee Robotics, Meta, Skild AI and Unitree.<\/li>\n<\/ul>\n<p>\n        <strong>NVIDIA Advances Robotics Research<\/strong><br \/>\n        <br \/>NVIDIA technologies, including GPUs, simulation frameworks and CUDA<sup>\u00ae<\/sup>\u2011accelerated libraries, were referenced in nearly half of CoRL\u2019s accepted papers \u2014 with adoption across leading research labs and institutions such as <a href=\"https:\/\/arxiv.org\/abs\/2505.00779\" rel=\"nofollow\" target=\"_blank\"><u>Carnegie Mellon<\/u><\/a>, <a href=\"https:\/\/arxiv.org\/abs\/2508.03890\" rel=\"nofollow\" target=\"_blank\"><u>University of Washington<\/u><\/a>, <a href=\"https:\/\/arxiv.org\/abs\/2506.05168\" rel=\"nofollow\" target=\"_blank\"><u>ETH Zurich<\/u><\/a> and <a href=\"https:\/\/arxiv.org\/abs\/2508.02093\" rel=\"nofollow\" target=\"_blank\"><u>National University of Singapore<\/u><\/a>.<\/p>\n<p>Also highlighted at CoRL are <a href=\"https:\/\/arxiv.org\/abs\/2403.09227\" rel=\"nofollow\" target=\"_blank\"><u>BEHAVIOR<\/u><\/a>, a robotic learning benchmark project by the Stanford Vision and Learning Lab, and <a href=\"https:\/\/taccel-simulator.github.io\/supercharging\" rel=\"nofollow\" target=\"_blank\"><u>Taccel<\/u><\/a>, a high-performance simulation platform for advancing vision-based tactile robotics, developed by Peking University.<\/p>\n<p>Learn more about NVIDIA\u2019s robotics research work at <a href=\"https:\/\/www.nvidia.com\/en-us\/events\/corl\/\" rel=\"nofollow\" target=\"_blank\"><u>CoRL<\/u><\/a>, running Sept. 27-Oct. 
2 in Seoul.<\/p>\n<p>\n        <strong>About NVIDIA<\/strong><br \/>\n        <br \/>\n        <a href=\"https:\/\/www.nvidia.com\/\" rel=\"nofollow\" target=\"_blank\"><br \/>\n          <u>NVIDIA<\/u><br \/>\n        <\/a> (NASDAQ: NVDA) is the world leader in AI and accelerated computing.<\/p>\n<p>\n        <strong>For further information, contact:<\/strong><br \/>\n        <br \/>Paris Fox<br \/>Corporate Communications<br \/>NVIDIA Corporation<br \/>408-242-0035<br \/><a href=\"mailto:pfox@nvidia.com\" rel=\"nofollow\" target=\"_blank\"><u>pfox@nvidia.com<\/u><\/a><\/p>\n<p>Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance, and availability of NVIDIA\u2019s products, services, and technologies; expectations with respect to NVIDIA\u2019s third party arrangements, including with its collaborators and partners; expectations with respect to technology developments; and other statements that are not historical facts are forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended, which are subject to the \u201csafe harbor\u201d created by those sections based on management\u2019s beliefs and assumptions and on information currently available to management and are subject to risks and uncertainties that could cause results to be materially different than expectations. 
Important factors that could cause actual results to differ materially include: global economic and political conditions; NVIDIA\u2019s reliance on third parties to manufacture, assemble, package and test NVIDIA\u2019s products; the impact of technological development and competition; development of new products and technologies or enhancements to NVIDIA\u2019s existing product and technologies; market acceptance of NVIDIA\u2019s products or NVIDIA\u2019s partners\u2019 products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of NVIDIA\u2019s products or technologies when integrated into systems; and changes in applicable laws and regulations, as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company\u2019s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.<\/p>\n<p>Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. 
NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.<\/p>\n<p>\u00a9 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, CUDA, NVIDIA Cosmos, NVIDIA Grace, NVIDIA Isaac, NVIDIA Jetson Thor, NVIDIA NIM, NVIDIA Omniverse and NVIDIA RTX PRO are trademarks and\/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.<\/p>\n<p>A photo accompanying this announcement is available at:<br \/><a href=\"https:\/\/www.globenewswire.com\/NewsRoom\/AttachmentNg\/87ee19eb-0691-419d-aeae-a3fc89436c6d\" rel=\"nofollow\" target=\"_blank\">https:\/\/www.globenewswire.com\/NewsRoom\/AttachmentNg\/87ee19eb-0691-419d-aeae-a3fc89436c6d<\/a><\/p>\n<p>      <img decoding=\"async\" alt=\"\" class=\"__GNW8366DE3E__IMG\" src=\"https:\/\/www.globenewswire.com\/newsroom\/ti?nf=OTUzNjc5OSM3MTY5MDUzIzIwMDY5MTI=\" \/><br \/>\n      <br \/>\n      <img decoding=\"async\" alt=\"\" src=\"https:\/\/ml.globenewswire.com\/media\/ZGU3MTMyN2YtOWZkNS00OTBjLWI2N2YtMDljMmY4ZjFiNjBiLTEwMTg0ODUtMjAyNS0wOS0yOS1lbg==\/tiny\/NVIDIA-CORPORATION.png\" \/>\n    <\/div>\n<div class=\"mw_contactinfo\"><\/div>\n","protected":false},"excerpt":{"rendered":"<p>\u00a0News Summary: The open-source Newton Physics Engine \u2014 codeveloped with Google DeepMind and Disney Research, and now available in NVIDIA Isaac Lab \u2014 helps researchers and developers create more capable and adaptable robots. New NVIDIA Isaac GR00T open foundation model brings humanlike reasoning to robots, allowing them to break down complex instructions and execute tasks using prior knowledge and common sense. 
New NVIDIA Cosmos world foundation models enable developers to generate diverse data for accelerating training physical AI models at scale. Global researchers at leading universities such as Stanford University, ETH Zurich and the National University of Singapore are tapping NVIDIA accelerated computing and software to advance robotics research. Leading robot developers Agility Robotics, Boston Dynamics, Disney Research, Figure AI, &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[],"tags":[],"class_list":["post-890342","post","type-post","status-publish","format-standard","hentry"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.6 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries - Market Newsdesk<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries - Market Newsdesk\" \/>\n<meta property=\"og:description\" 
content=\"\u00a0News Summary: The open-source Newton Physics Engine \u2014 codeveloped with Google DeepMind and Disney Research, and now available in NVIDIA Isaac Lab \u2014 helps researchers and developers create more capable and adaptable robots. New NVIDIA Isaac GR00T open foundation model brings humanlike reasoning to robots, allowing them to break down complex instructions and execute tasks using prior knowledge and common sense. New NVIDIA Cosmos world foundation models enable developers to generate diverse data for accelerating training physical AI models at scale. Global researchers at leading universities such as Stanford University, ETH Zurich and the National University of Singapore are tapping NVIDIA accelerated computing and software to advance robotics research. Leading robot developers Agility Robotics, Boston Dynamics, Disney Research, Figure AI, &hellip; Continue reading &quot;NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries&quot;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/\" \/>\n<meta property=\"og:site_name\" content=\"Market Newsdesk\" \/>\n<meta property=\"article:published_time\" content=\"2025-09-29T15:29:15+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.globenewswire.com\/newsroom\/ti?nf=OTUzNjc5OSM3MTY5MDUzIzIwMDY5MTI=\" \/>\n<meta name=\"author\" content=\"Newsdesk\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Newsdesk\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/\"},\"author\":{\"name\":\"Newsdesk\",\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/#\\\/schema\\\/person\\\/482f27a394d4fda80ecb5499e519d979\"},\"headline\":\"NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries\",\"datePublished\":\"2025-09-29T15:29:15+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/\"},\"wordCount\":1924,\"image\":{\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.globenewswire.com\\\/newsroom\\\/ti?nf=OTUzNjc5OSM3MTY5MDUzIzIwMDY5MTI=\",\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/\",\"url\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/\",\"name\":\"NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries - Market 
Newsdesk\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.globenewswire.com\\\/newsroom\\\/ti?nf=OTUzNjc5OSM3MTY5MDUzIzIwMDY5MTI=\",\"datePublished\":\"2025-09-29T15:29:15+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/#\\\/schema\\\/person\\\/482f27a394d4fda80ecb5499e519d979\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.globenewswire.com\\\/newsroom\\\/ti?nf=OTUzNjc5OSM3MTY5MDUzIzIwMDY5MTI=\",\"contentUrl\":\"https:\\\/\\\/www.globenewswire.com\\\/newsroom\\\/ti?nf=OTUzNjc5OSM3MTY5MDUzIzIwMDY5MTI=\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/\"},{
\"@type\":\"ListItem\",\"position\":2,\"name\":\"NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/#website\",\"url\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/\",\"name\":\"Market Newsdesk\",\"description\":\"Latest Business News in Real Time\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/#\\\/schema\\\/person\\\/482f27a394d4fda80ecb5499e519d979\",\"name\":\"Newsdesk\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a0d0bd5b0f0ca12a265a459b13169dac35f33776d8501eda5e68844a366f2f46?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a0d0bd5b0f0ca12a265a459b13169dac35f33776d8501eda5e68844a366f2f46?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a0d0bd5b0f0ca12a265a459b13169dac35f33776d8501eda5e68844a366f2f46?s=96&d=mm&r=g\",\"caption\":\"Newsdesk\"},\"url\":\"https:\\\/\\\/www.marketnewsdesk.com\\\/index.php\\\/author\\\/newsdesk\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries - Market Newsdesk","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/","og_locale":"en_US","og_type":"article","og_title":"NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries - Market Newsdesk","og_description":"\u00a0News Summary: The open-source Newton Physics Engine \u2014 codeveloped with Google DeepMind and Disney Research, and now available in NVIDIA Isaac Lab \u2014 helps researchers and developers create more capable and adaptable robots. New NVIDIA Isaac GR00T open foundation model brings humanlike reasoning to robots, allowing them to break down complex instructions and execute tasks using prior knowledge and common sense. New NVIDIA Cosmos world foundation models enable developers to generate diverse data for accelerating training physical AI models at scale. Global researchers at leading universities such as Stanford University, ETH Zurich and the National University of Singapore are tapping NVIDIA accelerated computing and software to advance robotics research. 
Leading robot developers Agility Robotics, Boston Dynamics, Disney Research, Figure AI, &hellip; Continue reading \"NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries\"","og_url":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/","og_site_name":"Market Newsdesk","article_published_time":"2025-09-29T15:29:15+00:00","og_image":[{"url":"https:\/\/www.globenewswire.com\/newsroom\/ti?nf=OTUzNjc5OSM3MTY5MDUzIzIwMDY5MTI=","type":"","width":"","height":""}],"author":"Newsdesk","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Newsdesk","Est. reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/#article","isPartOf":{"@id":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/"},"author":{"name":"Newsdesk","@id":"https:\/\/www.marketnewsdesk.com\/#\/schema\/person\/482f27a394d4fda80ecb5499e519d979"},"headline":"NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation 
Libraries","datePublished":"2025-09-29T15:29:15+00:00","mainEntityOfPage":{"@id":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/"},"wordCount":1924,"image":{"@id":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/#primaryimage"},"thumbnailUrl":"https:\/\/www.globenewswire.com\/newsroom\/ti?nf=OTUzNjc5OSM3MTY5MDUzIzIwMDY5MTI=","inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/","url":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/","name":"NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries - Market Newsdesk","isPartOf":{"@id":"https:\/\/www.marketnewsdesk.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/#primaryimage"},"image":{"@id":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/#primaryimage"},"thumbnailUrl":"https:\/\/www.globenewswire.com\/newsroom\/ti?nf=OTUzNjc5OSM3MTY5MDUzIzIwMDY5MTI=","datePublished":"2025-09-29T15:29:15+00:00","author":{"@id":"https:\/\/www.marketnewsdesk.com\/#\/schema\/person\/482f27a394d4fda80ecb5499e519d979"},"breadcrumb":{"@id":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-roboti
cs-research-and-development-with-new-open-models-and-simulation-libraries\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/#primaryimage","url":"https:\/\/www.globenewswire.com\/newsroom\/ti?nf=OTUzNjc5OSM3MTY5MDUzIzIwMDY5MTI=","contentUrl":"https:\/\/www.globenewswire.com\/newsroom\/ti?nf=OTUzNjc5OSM3MTY5MDUzIzIwMDY5MTI="},{"@type":"BreadcrumbList","@id":"https:\/\/www.marketnewsdesk.com\/index.php\/nvidia-accelerates-robotics-research-and-development-with-new-open-models-and-simulation-libraries\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.marketnewsdesk.com\/"},{"@type":"ListItem","position":2,"name":"NVIDIA Accelerates Robotics Research and Development With New Open Models and Simulation Libraries"}]},{"@type":"WebSite","@id":"https:\/\/www.marketnewsdesk.com\/#website","url":"https:\/\/www.marketnewsdesk.com\/","name":"Market Newsdesk","description":"Latest Business News in Real 
Time","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.marketnewsdesk.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.marketnewsdesk.com\/#\/schema\/person\/482f27a394d4fda80ecb5499e519d979","name":"Newsdesk","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/a0d0bd5b0f0ca12a265a459b13169dac35f33776d8501eda5e68844a366f2f46?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a0d0bd5b0f0ca12a265a459b13169dac35f33776d8501eda5e68844a366f2f46?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a0d0bd5b0f0ca12a265a459b13169dac35f33776d8501eda5e68844a366f2f46?s=96&d=mm&r=g","caption":"Newsdesk"},"url":"https:\/\/www.marketnewsdesk.com\/index.php\/author\/newsdesk\/"}]}},"_links":{"self":[{"href":"https:\/\/www.marketnewsdesk.com\/index.php\/wp-json\/wp\/v2\/posts\/890342","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.marketnewsdesk.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.marketnewsdesk.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.marketnewsdesk.com\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.marketnewsdesk.com\/index.php\/wp-json\/wp\/v2\/comments?post=890342"}],"version-history":[{"count":0,"href":"https:\/\/www.marketnewsdesk.com\/index.php\/wp-json\/wp\/v2\/posts\/890342\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.marketnewsdesk.com\/index.php\/wp-json\/wp\/v2\/media?parent=890342"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.marketnewsdesk.com\/index.php\/wp-json\/wp\/v2\/categories?post=890342"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.marketnewsdesk.com\/index.php\/wp-json
\/wp\/v2\/tags?post=890342"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}