Given how insanely popular my ‘trying photogrammetry software’ series has been this year, I thought I’d round up what I’ve tried, what’s worked well, and what hasn’t.
In each blog post I gave the software a go with the same standard dataset. That photo set was not ideal – it includes photos taken at different focal lengths, and the object isn’t perfectly or systematically covered. This is partly laziness (I just quickly grabbed a bunch of photos at the time because I wasn’t envisioning the series going on for so long), but also partly deliberate (at least that’s the line I’m going with now): I want software that’s robust enough that students and colleagues in my lab can just go capture photos with minimal training. I’m sure there’s an argument that I’m encouraging bad practice, and a couple of years ago I’d likely have agreed. But these days, with a higher than ideal teaching load [to say the least], I just don’t have time for that.
But, in addition to testing with that dataset, I’ve also gone back throughout the year and tried each program with whatever dataset I happened to be playing with at the time. In most cases, software performed on my test data much as it did on other datasets (which may say more about my habit of always taking photos in a similar manner), but in some cases my opinion of a workflow has improved or diminished.
The best free photogrammetry software (TL;DR)
My go-to software is COLMAP in conjunction with openMVS. COLMAP does the camera matching/alignment, and openMVS constructs the mesh. I find this combination the easiest to use (I wrote a batch file that can just be dropped into a folder full of photos and double-clicked), and the most robust to my photo-taking process. It’s also pretty quick compared with other workflows. My only complaint is that openMVS produces textures with a vast amount of the texture file blank (or rather orange) – it’s not a particularly efficient texture file, which means I usually end up re-baking the textures in Maya (which takes ages!).
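I haven’t reproduced my own batch file here, but a drop-in script of that kind can be sketched roughly as below. The tool names (`colmap`, `InterfaceCOLMAP`, `DensifyPointCloud`, `ReconstructMesh`, `TextureMesh`) are the real COLMAP/openMVS command-line entry points, but the exact flags, paths, and intermediate filenames are assumptions based on the versions I’ve used – check the documentation for your own install before relying on it:

```shell
#!/bin/sh
# Rough sketch of a drop-in COLMAP + openMVS pipeline. Assumes the COLMAP
# and openMVS command-line tools are on the PATH; flags and intermediate
# filenames may differ between versions, so treat this as a starting point.
set -e

IMAGE_DIR="${IMAGE_DIR:-.}"            # the folder full of photos
WORK_DIR="$IMAGE_DIR/reconstruction"   # everything gets written here
mkdir -p "$WORK_DIR"

# If the tools aren't installed, just print the commands that would run.
command -v colmap >/dev/null 2>&1 || DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

# 1. Camera matching/alignment in COLMAP (sparse reconstruction)
run colmap feature_extractor --database_path "$WORK_DIR/db.db" --image_path "$IMAGE_DIR"
run colmap exhaustive_matcher --database_path "$WORK_DIR/db.db"
run colmap mapper --database_path "$WORK_DIR/db.db" --image_path "$IMAGE_DIR" \
    --output_path "$WORK_DIR"

# 2. Hand the sparse model to openMVS: densify, mesh, texture
run InterfaceCOLMAP -i "$WORK_DIR" -o "$WORK_DIR/scene.mvs"
run DensifyPointCloud "$WORK_DIR/scene.mvs"
run ReconstructMesh "$WORK_DIR/scene_dense.mvs"
run TextureMesh "$WORK_DIR/scene_dense_mesh.mvs"
```

On Windows the same sequence works as a `.bat` file (drop the `run` wrapper and call the tools directly); the point is simply that the whole photos-to-textured-mesh chain can be automated into something you double-click.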
Rachel, an undergrad volunteering in my lab, produced this great model with COLMAP and openMVS:
That being said, COLMAP + openMVS isn’t that user-friendly for non-technical people. When I have students in the lab who don’t want to mess around with the command line, I always default to Agisoft PhotoScan. It’s good, and it’s dead easy to use. It doesn’t get my top spot, though, because it’s not free (though it is pretty affordable in the grand scheme of things)!
So, with that out of the way, here’s a round up of everything I’ve tried over the past year.
| Software | User interface (GUI) or command line (CLI)? | Approx. time on sample dataset to textured mesh (minutes) | Qualitative assessment (0–10, 10 being best)^ | Notes |
| --- | --- | --- | --- | --- |
| COLMAP v3.0 | GUI (or CLI) | 50* | 9 | |
| COLMAP v3.3 + openMVS | CLI | 37 | 9 | Timed with latest version of COLMAP, using my script |
| VisualSFM + openMVS | GUI + CLI | ~30 | 3 | Some issues with VSFM input for openMVS, whereas COLMAP’s output works fine |
| VisualSFM + Meshrecon | GUI + CLI | 11.5* | 7 | |
| VisualSFM + PMVS + Meshlab | GUI | 18* | 5 | |
| openMVG + MVE | CLI | 38** | 3 | All my photosets struggle with openMVG |
| MicMac | CLI | 28** | 2 | All my photosets struggle with MicMac |
| Regard3D 0.9.3 | GUI | 60** | 5 | I didn’t write a post about this. Here’s a link to the model on Sketchfab. |
| 3DF Zephyr Free | GUI | 33 | 8 | Limited to 50 photos (I took several out of my photoset) |
^This is completely subjective and in no way quantitative.
*These times do not include texturing, as the workflow ends at mesh generation.
**In poor reconstructions, when not all cameras are matched, the time represents dense/mesh processing of only matched cameras.
The way I take photos is pretty… haphazard, shall we say. All my photosets generally struggle with both openMVG (which is also what Regard3D is based on) and MicMac. I am assured that better results can be achieved with more robust datasets, and I don’t doubt that. But for me, they just don’t seem to match cameras as reliably as the other options.
3DF Zephyr Free is really quite a nice piece of software, but that 50-photo limit is killing me. That being said, the paid version is about the same price as PhotoScan, i.e. not that expensive ($149 at the time of writing, which ups the maximum photo count to 500).
Autodesk ReMake was really good, and ideal for students who just wanted to give photos to software and get a textured model out. But it’s been discontinued, which is a shame. Autodesk’s replacement, ReCap Photo, requires a subscription, and all processing is carried out online.
So, that’s what I’ve tested through the year, and what I’m using at the end of 2017. It’s a bit of a full-time hobby trying to keep on top of everything as it comes out. I’ll occasionally re-test these packages when major versions are released, but I won’t report on them unless I see significant changes in usability, reconstruction quality, or speed. What I will report on is when/if I find new software packages. If I’ve learnt anything since I first published on photogrammetry, it’s that you can get into the groove of using a package only for it to cease being maintained. Meanwhile, new programs are released fairly regularly, and occasionally they happen to be awesome.
My first paper used Bundler and PMVS, and it took a day or so on a massive workstation to process a relatively small photo set. Between submitting that paper and seeing it published, VisualSFM was released, which used the GPU and sped things up immensely. Now it’s unusual for software not to use the GPU.
I haven’t seen anything to suggest that 2018 will see a similar jump in processing speed/hardware utilization, but here’s hoping.