Instant NGP, Nerfstudio, and NeRF from nothing: ttftw 2023w18
By Robert Russell
Three things from this week.
This week I’ve had NeRFs on my mind again. Last week I had fun making NeRFs of some plants and a couple weeks ago I wrote about how I make NeRFs with COLMAP and Instant NGP. I follow almost the same process whether my images come from a cellphone, a GoPro, or my gammacam flexible camera array.
I’ve been doing two flavours of NeRF tinkering: understanding how to use the tools better and skimming a lot of papers covering developments beyond the original NeRF paper.
Instant NGP
Since most of the time I’m just using Instant NGP on my own Windows machine, I’ve been doing basic things like trying to figure out how many frames to use, how to position cameras in the Instant NGP UI for nice video fly-throughs, and generally just being a user.
When I started using early builds of Instant NGP I used the devcontainer, but it was too restrictive to work with on a regular basis, and eventually I ran into issues with its very old base image. So instead of building from scratch I switched to the more direct option of just grabbing the latest prebuilt zip for Windows. I do separately clone the repo for the colmap2nerf.py script and to have the source for reference. Generally, anytime I’m developing code I try to keep that on the Linux side, and on Windows I stick to prebuilt software. If I do end up building again, it’s worth mentioning that since WSLg is generally available on Windows 11, it’s now possible to run GUI tools from inside WSL as well.
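For reference, the pose-estimation step I run before dragging a dataset into Instant NGP looks roughly like this. Paths and flag values here are illustrative, not a recipe; check `python scripts/colmap2nerf.py --help` in your clone for the current options:

```shell
# Run from a clone of the instant-ngp repo, with COLMAP on the PATH.
# This runs COLMAP over a folder of frames and writes the transforms.json
# that Instant NGP expects alongside the images.
python scripts/colmap2nerf.py \
    --images /path/to/frames \
    --run_colmap \
    --colmap_matcher exhaustive \
    --aabb_scale 16 \
    --out /path/to/frames/transforms.json
```

`--aabb_scale` bounds the region Instant NGP will consider; larger values suit scenes with distant background.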
The rewarding thing about using the prebuilt binary is that you just drag the dataset folder onto the application window, and as it starts training you get this mysterious moment as if the clouds are parting to reveal your 3D scene.
Nerfstudio
Nerfstudio really lines up with my goals. It’s meant to be a user-friendly way to experiment with different methods of creating NeRFs. Unfortunately it’s been challenging to get it installed, probably because I have an opinionated development environment. I’d tried and given up a couple of times, but this week I watched a talk Angjoo Kanazawa gave at GTC. She leads the AI research lab at Berkeley that produces nerfstudio and a lot of other leading-edge NeRF research. So I gave it another shot and got a demo working with sample data. I’m still wrangling the commands to train on my own data, but I feel like this time it’ll get there.
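The demo flow I got working was roughly the one from the nerfstudio docs; the dataset name and paths below are the documented sample, and training your own capture goes through ns-process-data first (which itself calls COLMAP):

```shell
# Sample-data demo: fetch the "poster" capture and train the default
# nerfacto method on it. Progress is viewable in the browser viewer.
ns-download-data nerfstudio --capture-name=poster
ns-train nerfacto --data data/nerfstudio/poster

# For your own images, convert them to nerfstudio's format first,
# then point ns-train at the processed output.
ns-process-data images --data /path/to/my/images --output-dir /path/to/processed
ns-train nerfacto --data /path/to/processed
```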
The Blender VFX add-on for nerfstudio is particularly interesting to me. It seems like it brings in the NeRF as a point cloud to Blender but I’m not sure. Bringing NeRFs into Blender enables some interesting crossovers.
The API nerfstudio provides also opens up some exciting possibilities for integrating NeRF training with other software.
NeRF from Nothing
The research papers are often not that hard to read - if you don’t have to implement the algorithms yourself. I liked reading this “NeRF from Nothing” article. It’s about a year old now but still relevant. The author builds a minimal implementation of NeRF training and rendering. Walking through salient details of the paper and seeing a corresponding basic implementation in PyTorch makes it feel like a tidy, coherent system.
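The core rendering math really is compact. As one sketch of it (pure Python rather than the article’s PyTorch, with illustrative names): given the density the network predicts at each sample along a ray, the classic volume-rendering formula turns densities into per-sample weights w_i = T_i * (1 - exp(-sigma_i * delta_i)), where T_i is how much light survives to sample i:

```python
import math

def render_weights(sigmas, deltas):
    """Volume-rendering weights along one ray.

    sigmas: density predicted at each sample point
    deltas: distance between consecutive samples
    Returns one weight per sample; the pixel colour is the
    weight-blended sum of the per-sample colours.
    """
    weights = []
    transmittance = 1.0  # fraction of light reaching the current sample
    for sigma, delta in zip(sigmas, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this segment
        weights.append(transmittance * alpha)
        transmittance *= 1.0 - alpha  # light remaining behind the segment
    return weights

# A dense sample mid-ray soaks up most of the weight,
# leaving little for anything behind it.
w = render_weights([0.1, 5.0, 0.1], [0.5, 0.5, 0.5])
```

Because transmittance only ever shrinks, the weights of samples behind a dense region collapse toward zero, which is how a NeRF renders occlusion.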
So that’s it for this week. Here’s one last too-fast flythrough. This is a glamping yurt near Shingletown, California 1 that I took some pictures of last summer. Maybe I’ll go back and NeRF the interior some day.
-
1. In the first revision of this post I thought this was later on in our roadtrip. If you are up in Oregon, I highly recommend Umpqua’s Last Resort.