Nvidia remains committed to investing in the industrial metaverse and is demonstrating new workflows that connect Omniverse Cloud with Apple Vision Pro.
Nvidia has introduced a software framework called Omniverse Cloud APIs, which lets developers send their OpenUSD scenes from creation tools to the Nvidia Graphics Delivery Network (GDN). According to Nvidia, this cloud-based solution streams real-time renderings to Apple Vision Pro without sacrificing the fidelity of complex, data-heavy scenes.
Today, Nvidia showed a live demonstration at its GTC keynote in which a photorealistic digital replica of a car was streamed to the Apple Vision Pro. The person wearing the headset used a car configurator built on the Omniverse platform by CGI studio Katana, switching between paint and trim options and even virtually stepping inside the vehicle.
The workflow also introduces a hybrid rendering technique that combines local and remote rendering on the device. Developers can build fully interactive experiences in a single application using Apple's native SwiftUI and RealityKit, while the Omniverse RTX renderer streams content in from the GDN.
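To make the hybrid idea concrete, here is a minimal sketch of the local half of such an app on visionOS: a SwiftUI view hosting native RealityKit content, with a placeholder where a remotely streamed layer would be composited. The `attachRemoteStream` call is hypothetical, standing in for whatever client the Omniverse Cloud APIs would provide; only `RealityView` and the RealityKit entity APIs shown are real Apple APIs.

```swift
import SwiftUI
import RealityKit

struct CarConfiguratorView: View {
    var body: some View {
        // RealityView renders native RealityKit content locally on the headset.
        RealityView { content in
            // Locally rendered UI geometry, e.g. a selectable trim swatch.
            let swatch = ModelEntity(
                mesh: .generateSphere(radius: 0.05),
                materials: [SimpleMaterial(color: .red, isMetallic: true)]
            )
            content.add(swatch)

            // Hypothetical: the photorealistic car itself would arrive as a
            // remote stream from the Omniverse RTX renderer via the GDN and
            // be composited alongside the local content.
            // attachRemoteStream(to: content)
        }
    }
}
```

The division of labor is the point of the design: lightweight, latency-sensitive interface elements render on-device, while the ray-traced, data-heavy car model is computed in the cloud and streamed.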
Nvidia sees the metaverse as the 3D evolution of the internet and wants to make it easier for companies in particular to work in it. This “industrial metaverse” can be used to create digital twins of factories or products, simulate rail networks or enable collaborative work on 3D developments.
Omniverse Cloud was introduced by CEO Jensen Huang in 2022 to give both companies and individuals broader access to the Omniverse. For example, teams can use Omniverse Cloud to build 3D workflows and draw on features such as physics simulation, ray tracing, and AI capabilities without needing high-performance edge devices.