If at first glance you thought that NVIDIA NeRF was a sign that the graphics giant was attempting to pivot to the toy dart gun market, no one could blame you. Fortunately, the reality is much cooler. NeRF stands for “Neural Radiance Fields”, an AI-based technology that aims to recreate objects or entire scenes with very little information to go on.
A major goal of NeRF is to deliver a resulting scene with as small a file size as possible. You can feed the technology a handful of images, and AI takes over to create a scene that you can pan around. In the future, deployment of NeRF could dramatically shrink game install sizes, simply because the AI would be smart enough to recreate an accurate representation of a scene from very little initial data.
While NVIDIA has talked about NeRF since last year, it just came back on our radar. YouTube channel Two Minute Papers published a detailed look at what NeRF is capable of, and it’s a great watch:
Naturally, the more image input NeRF has, the more accurate the result will be, but on the whole, very little data is actually required to produce a believable scene. One example shows four photos of a woman taken from different angles, which the AI turns into a 3D model inside an environment that can be panned around.
The NeRF process is reminiscent of photogrammetry, but requires much less information to generate an object or scene. With photogrammetry, even simple objects can require dozens of photos to produce a useful 3D asset, whereas NeRF needs just a fraction of that. Photogrammetry results can also show jagged edges, which generally doesn't matter much if the asset is shrunk down and placed into a game engine, but NeRF aims to recreate a scene with no notable issues.
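To make the comparison above concrete, here is a minimal sketch of the core idea behind a neural radiance field: a small network maps a 3D point to a color and a density, and a pixel is rendered by sampling points along a camera ray and alpha-compositing the results. This is illustrative only, not NVIDIA's actual implementation — the network weights here are random (a real NeRF trains them against the input photos), and the layer sizes, frequency count, and sampling scheme are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, num_freqs=4):
    # Map each coordinate to sin/cos features at increasing frequencies,
    # which helps a small MLP represent fine spatial detail.
    freqs = 2.0 ** np.arange(num_freqs)                     # (F,)
    angles = x[..., None] * freqs                           # (..., 3, F)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(*x.shape[:-1], -1)                 # (..., 3 * 2F)

# A stand-in "radiance field": a tiny randomly initialized MLP mapping an
# encoded 3D point to (r, g, b, sigma). The entire "scene" is just these
# weight matrices -- which is why a trained NeRF can be so compact.
D_IN = 3 * 4 * 2    # 3 coords x 4 frequencies x (sin, cos)
W1 = rng.normal(size=(D_IN, 32)) * 0.1
W2 = rng.normal(size=(32, 4)) * 0.1

def field(points):
    h = np.maximum(positional_encoding(points) @ W1, 0.0)   # ReLU hidden layer
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[..., :3]))               # colors in [0, 1]
    sigma = np.maximum(out[..., 3], 0.0)                    # non-negative density
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=1.0, num_samples=32):
    # Classic volume rendering: sample points along the ray, query the field,
    # and composite colors weighted by accumulated transmittance.
    t = np.linspace(near, far, num_samples)
    points = origin + t[:, None] * direction                # (S, 3)
    rgb, sigma = field(points)
    delta = (far - near) / num_samples
    alpha = 1.0 - np.exp(-sigma * delta)                    # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)             # one RGB pixel

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(pixel.shape)  # (3,)
```

Repeating `render_ray` for every pixel of a virtual camera produces a full image, and during training the rendered pixels are compared against the input photos to update the weights.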
NeRF has evolved dramatically over the past year, with NVIDIA's Instant NeRF GitHub repository brimming with activity all year. The tech even landed a spot on TIME's Best Inventions of 2022 list just a month ago.
While NeRF can recreate a great-looking scene as a 3D environment, the need for strong processing power doesn't go away. You ultimately still need to render the scene, so the more powerful your graphics solution, the better. Further, inference performance is also crucial, which means GPUs with accelerated matrix multiplication stand to benefit heavily – such as NVIDIA's GPUs equipped with Tensor cores.
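The reason matrix-math hardware matters so much is that rendering a NeRF means querying the network for millions of sample points per frame, and batching those queries turns inference into a handful of very large matrix multiplies. A rough sketch of that workload shape, with illustrative (assumed) layer sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical MLP weights: 24 input features -> 64 hidden -> 4 outputs.
W1 = rng.normal(size=(24, 64)).astype(np.float32)
W2 = rng.normal(size=(64, 4)).astype(np.float32)

# 100,000 encoded sample points queried at once -- roughly what a few
# thousand rays with a few dozen samples each would produce.
batch = rng.normal(size=(100_000, 24)).astype(np.float32)

# The whole inference pass is two big matrix multiplies plus cheap
# elementwise ops -- exactly the pattern Tensor cores accelerate.
out = np.maximum(batch @ W1, 0.0) @ W2
print(out.shape)  # (100000, 4)
```

On a GPU, frameworks dispatch these multiplies to dedicated matrix units, which is why hardware like Tensor cores speeds up NeRF inference so heavily.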
What strikes us most about NeRF is just how far AI has come in what seems like a short time. Things that seemed unlikely, if not impossible, just a couple of decades ago are becoming a reality. And no, an AI GPT did not write this post, thank you very much.