I am Loïc, a French “lots-of-things enthusiast” as I like to describe myself, and recently I got into 3D as a hobby in parallel to my job as a research engineer in scientific computing, for which I am used to “scientific” 3D, especially in applied mathematics.
I have close-to-zero artistic skills, a bit more practical skills related to programming in general, and I really enjoy discovering and messing around with 3D techniques, mostly in Blender.
1 – Introduction
Lately I’ve been creating and sharing a few collections of lowpoly assets by reprocessing some 3D scans hosted on Sketchfab under Creative Commons licensing schemes: body scans, rocks, sticks, mushrooms, but also fruits and vegetables scanned for the Horn of Plenty scanning challenge, or animal skulls from the great collection of UVic Libraries…
In this post I’ll therefore detail the workflow and tools I use to create such collections of assets – using Python in Blender!
Why I think Sketchfab is great 🙂
Sketchfab is a great place to admire amazing models of creature and character sculpts, ancient artifacts, geology sites, technology designs, and wonderful 3D stuff in general… And I guess we all get that 😉
But it is also a great platform to get access to free and high (sometimes very high) quality 3D assets and props that many of us – the community – like to download and re-use for various purposes. Some of us are simply collectors by nature, while others are architects, video game designers, scientists, or simply 3D-geeks looking for nice models to fiddle with (and I definitely fall into this category!)…
And Sketchfab offers all of us a great resource: plenty of freely downloadable models that you can actually inspect before downloading, and re-use under the terms of Creative Commons licenses, the most common being CC Attribution. In a few words: give credit and enjoy!
And there you go: plenty of high quality 3D models to play with, whatever your application is!
What realistic “lowpoly” assets are, and why I think they’re great
I guess that everyone here knows – or has their own definition of – what lowpoly means. LOW-POLY. A model made from a low number of polygons.
A polygon has a straightforward meaning in 3D graphics, and comes down to triangles and quantifiable arrays of coordinates and indices stored on your GPU or processor. But what is a “low” number of such polygons? That is of course totally dependent on your usage, and while in mobile games a few years ago you’d try to stick to a few hundred polygons, a mesh made of millions of triangles can still be too coarse in specific scientific applications…
Although to me lowpoly mostly characterizes an art-style – think bright colors and sharp edges, like on the wonderful model above from tzeshi – I’ll define a lowpoly asset in the context of 3D scanning as a realistic 3D model made from 100 to 5,000 triangles. That satisfies most of my needs, my criteria being based on performance as I use such assets as particles in Blender, and as “game assets” in Unreal Engine, Unity 3D or in WebGL apps.
Those models are often associated with at least an albedo map and a normal map, the latter being fundamental, as it can give a potato-shaped blob of 200 triangles the appearance of a highly realistic carved rock!
Needless to say, using this normal-map trick to reduce the number of polygons from a few million – which is common for raw 3D scan results – down to a few hundred will make your computer happy, and give you back some processing power:
- In the context of video games this means more FPS, a more complex physics system, more evolved interaction mechanisms…
- Concerning web technologies, your app will have a lower bandwidth usage and loading time (who wants to wait 15 minutes for an environment asset to load?), run on less sophisticated computers and more easily on mobile, and you will be allowed a greater complexity for the same “performance budget”.
- And the advantages are even present for offline renderings in your favorite 3D software, especially for animations where decreasing your polycount can drastically cut down the rendering times (and electricity bill)!!
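These savings are easy to ballpark. Here is a minimal sketch of the arithmetic; the 32 bytes of per-vertex attributes and 4-byte indices are assumptions for illustration, and real engines typically store more per vertex:

```python
# Back-of-envelope memory estimate for an indexed triangle mesh:
# positions (3 floats), normals (3 floats) and UVs (2 floats) per vertex,
# plus 3 x 4-byte indices per triangle.
def mesh_size_bytes(n_vertices, n_triangles):
    per_vertex = (3 + 3 + 2) * 4   # 32 bytes of float attributes
    per_triangle = 3 * 4           # 12 bytes of uint32 indices
    return n_vertices * per_vertex + n_triangles * per_triangle

# A raw 2.5M-triangle scan (~1.25M vertices) vs a 1.5k-triangle asset (~750 vertices)
raw = mesh_size_bytes(1_250_000, 2_500_000)
low = mesh_size_bytes(750, 1_500)
print(raw // 1_000_000, "MB vs", low / 1_000, "kB")  # → 70 MB vs 42.0 kB
```

Three orders of magnitude in geometry alone, before even counting texture data.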
But you all know that of course ;), so let’s move to the workflow I use!
2 – Workflow (Blender)
The workflow I use is a classic one in this context, and can be segmented into the following parts:
Import the model
- After checking the model’s license (to make sure that every model is on CC-attribution or another adequate license) on Sketchfab, I download it and unzip it to a specified directory.
- I then import the model in Blender and assign its albedo and – if it exists – its normal map to a new material.
- I usually center the model in my 3D scene, make sure that no rogue elements (disconnected vertices and faces, inside and invisible parts…) persist, and rescale it so that its longest dimension is equal to one Blender unit.
- Optionally, in order to smooth out artifacts and obtain a clean surface – which is not a given with raw scan results – I often remesh the model to a high-poly version (around 500k triangles) with mmgs; on Windows you could replace it with Meshlab and its command-line counterpart, meshlabserver, for instance.
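The math behind the “center and rescale” step above can be sketched as follows; in Blender this amounts to offsetting the object and setting its scale, and the function and its bounding-box inputs here are purely illustrative:

```python
# Given the model's axis-aligned bounding box, translate its center to the
# origin and scale so the longest dimension equals one unit.
def normalize_transform(bbox_min, bbox_max):
    dims = [hi - lo for lo, hi in zip(bbox_min, bbox_max)]
    center = [(lo + hi) / 2 for lo, hi in zip(bbox_min, bbox_max)]
    scale = 1.0 / max(dims)
    # apply as: v' = (v - center) * scale, for every vertex v
    return center, scale

center, scale = normalize_transform((-1.0, 0.0, -3.0), (3.0, 2.0, 1.0))
print(center, scale)  # → [1.0, 1.0, -1.0] 0.25
```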
A note on mmgs:
Mmgs is a neat little command-line tool from the MMG Platform, a suite of open-source software and libraries for Linux and macOS (sorry, Windows users!) used for mesh generation, adaptation, optimization, or level-set discretization.
It relies on the mathematical concept of the Hausdorff distance: instead of fixing a target number of faces, you basically specify the maximal distance you wish to have between the initial model and the remeshed one, and mmgs will do its best to give you a nice surface approximation, either decimating or refining your mesh. On this model for instance, I used a lower Hausdorff distance for the details than for the rest of the horse’s body.
If you wish to give it a go, you can install the software for Linux or macOS from the MMGTools GitHub repository, and install the BakeMyScan add-on to interface it with Blender (as the software usually runs on the command line with the uncommon MEDIT .mesh file format).
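As a hedged illustration, here is roughly how such a call can be assembled from Python; the executable name (`mmgs_O3`), file names and Hausdorff value are assumptions and may differ between MMG versions:

```python
import subprocess

# Build the mmgs command line; the Hausdorff distance is expressed in the
# same units as the mesh (here: a model rescaled to one unit).
def mmgs_command(in_mesh, out_mesh, hausd):
    return ["mmgs_O3", "-in", in_mesh, "-out", out_mesh, "-hausd", str(hausd)]

cmd = mmgs_command("scan.mesh", "remeshed.mesh", 0.005)
print(" ".join(cmd))  # → mmgs_O3 -in scan.mesh -out remeshed.mesh -hausd 0.005
# to actually run it (requires mmgs to be installed):
# subprocess.run(cmd, check=True)
```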
The most critical – and interesting – part of the workflow is the remeshing part: transforming a copy of the initial model to a similar-looking geometry, except with far fewer polygons:
- I first estimate the number of faces I wish my final model to have: a round rock can look convincing at 500 triangles, while a more complex type of model will require more triangles: for the animal skulls embedded above, for instance, I chose to go for 2,500 triangles to keep some of the thin details.
- I then remesh the initial model to a “medium-poly” model (20k to 100k triangles depending on the complexity) with mmgs.
- To get down to the final number of faces, I then iteratively apply a sequence of useful Blender modifiers (for the Blender-agnostics out there, those act like filters on your object). This sequence has the effect of keeping more geometry in the model’s high-curvature zones (highly concave and convex areas) than in the flatter regions:
- Decimate -> Planar: Merge faces separated by an angle under a certain threshold; I usually keep the default value of 5°.
- Triangulate: Retriangulate the model resulting from the planar decimation, which tends to have strange n-gons (faces with more than 4 edges).
- Smooth: At this step, the resulting geometry is often stretched, and a few steps of Laplacian smoothing help to keep a decent topology and avoid intersecting faces!
- Decimate -> Ratio: Automatic merge of faces, according to a ratio which I often leave at the default value of 0.8.
- Shrinkwrap: This has the effect of making the current geometry “stick” to the original one, making sure that the previous decimation and smoothing operations did not alter the geometry too much.
- At each iteration of the above sequence, the mesh is decimated by a variable factor. I therefore repeat these steps until an iteration drops the number of faces below the threshold I set.
- I then cancel the last iteration, and let Blender do a final decimation with the Decimate -> Ratio modifier, in order to obtain the “nearly correct” number of faces.
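The control flow of this loop can be sketched as follows; `one_pass` is a stand-in for the five-modifier stack above, here simulated by a toy function that removes about 40% of the faces per pass:

```python
# Iterate the decimation sequence until it would overshoot the target,
# then cancel that pass and finish with a single Decimate -> Ratio pass
# tuned to land near the target face count.
def iterative_decimate(n_faces, target, one_pass):
    while True:
        candidate = one_pass(n_faces)
        if candidate < target:
            # the last pass overshot: discard it, and return the face count
            # to decimate from, plus the final ratio to feed the modifier
            return n_faces, target / n_faces
        n_faces = candidate

# toy pass removing ~40% of the faces each iteration
faces, final_ratio = iterative_decimate(2_500_000, 1_500, lambda n: int(n * 0.6))
print(faces)                  # → 1958
print(round(final_ratio, 3))  # → 0.766
```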
Here is a gif of the iterative process, transforming a 2.5M-triangle scan of a hand from weareprintlab into a 1.5k-triangle (0.06% ratio) lowpoly model (visible on Sketchfab here; try turning on the wireframe view in the model inspector):
Unwrap, bake and export
- Now that my target number of faces is reached, I unwrap the final lowpoly model with Smart UV Project, Blender’s automatic (and dirty) UV unwrapper.
- I finally bake the normals from the original model – or the high resolution model previously created by mmgs – to the newly created model, as well as the albedo map.
- Time to export directly to Sketchfab (Blender has an add-on for this) or, as I usually do, export the model as a .fbx or .obj file (understood by game engines and most 3D software).
- I usually export the textures to JPEG format, which offers nice compression and a lower file size, in exchange for a slight loss of quality, especially visible on normal maps.
3 – Automation
Many parts of this section are quite “advanced” regarding command-line usage, sorry in advance for the technical bits!
Also, the following scripts are still in an unstable “Work in Progress” state. Their behavior is not consistent and although you might get lucky on the majority of the models you try to play with, plenty of bugs and unexpected errors may – and will – happen.
The workflow exposed above is very classic: except for the iterative decimation trick (of which I’m quite proud, as it works well!), which you could replace with Meshlab filters such as “Quadric Edge Collapse Decimation”, and my use of a specific remeshing tool – mmgs – nothing about it is really special. But…
Here comes the fun: Blender scripting
Blender has a Python API available, which basically allows automation of most, if not all, of the tasks we manually do by selecting objects, dragging the mouse, clicking the mouse, pressing keys… Who still does that in 2018!?
In a few words, when you execute an action manually in Blender, you’ll see a line of code appear in the “Info” area, which you’ll find by dragging the main menu toolbar down or selecting “Info” in the area selector (like when switching between 3D view and Node Editor or UV/Image editor).
- Copy and paste this line into the python console and you’ll see the action you executed repeated.
- Paste it into a new file of the text editor and you’ll be creating a script.
- Paste other lines in, tweak them, and you’ll be a programmer!
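For instance, manually scaling the selected object and adding a Decimate modifier produces lines like these in the Info area (they only run inside Blender’s embedded Python interpreter, not a standalone one):

```python
# Typical operator calls logged by the Info area for two manual actions:
import bpy
bpy.ops.transform.resize(value=(0.5, 0.5, 0.5))
bpy.ops.object.modifier_add(type='DECIMATE')
```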
You’ll find all the information required to start scripting in this Blender manual section, which will provide you with a nice quickstart as well as the full API documentation!
If you wish to reproduce my workflow, you’ll find on github a Blender add-on I wrote to automate most of the process described above, and which I humbly call the
BakeMyScan Add-on 😉
This add-on should work correctly for Blender 2.79 (provided you follow the installation and usage instructions). Note that it might work with older versions, but they are untested and you know the deal: use at your own risk!
If you try it and find a bug or a missing (and somewhat “basic”) feature, please create a new issue on GitHub, and if you have tweaked and improved it, of course feel free to fork the repository and open a pull request so that I can add your modifications!
The fun goes on: using the command line
Writing this add-on came with yet another advantage: each button available in the Blender UI corresponds to a function I wrote, which can be used like any other from the Blender Python API. And as this API can be used without even opening Blender (by using Blender in background mode from the terminal), I wrote a higher-level “wrapper” script incorporating my add-on functions and available here, which can be executed by Blender.
For instance, executing the following command line in a terminal would remesh the model “scan.obj” to an object saved in “scan01.blend” and made of 500 triangles, with the initial textures “model_albedo.jpg” and “model_normals.jpg” baked to 2048px textures:
blender --background --python bakeOne.py -- -i scan.obj -o scan01.blend -t 500 -r 2048 -a model_albedo.jpg -n model_normals.jpg
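Inside such a script, Blender keeps the arguments before “--” for itself, and everything after it is left for the script to parse. A minimal sketch of that pattern (the flag names mirror the example above; the parser itself is illustrative, not the actual bakeOne.py code):

```python
import argparse
import sys

# Parse only the arguments that appear after the "--" separator, since
# Blender consumes everything before it (e.g. --background, --python).
def parse_script_args(argv):
    argv = argv[argv.index("--") + 1:] if "--" in argv else []
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", dest="infile")
    parser.add_argument("-o", dest="outfile")
    parser.add_argument("-t", dest="target", type=int)
    parser.add_argument("-r", dest="resolution", type=int)
    parser.add_argument("-a", dest="albedo")
    parser.add_argument("-n", dest="normals")
    return parser.parse_args(argv)

# In Blender you would call parse_script_args(sys.argv); here, a fixed example:
args = parse_script_args(
    ["blender", "--background", "--python", "bakeOne.py", "--",
     "-i", "scan.obj", "-o", "scan01.blend", "-t", "500",
     "-r", "2048", "-a", "model_albedo.jpg", "-n", "model_normals.jpg"])
print(args.infile, args.target)  # → scan.obj 500
```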
Finally: multiple models!
Equipped with an “automatic” and DIY way to create a lowpoly asset from a high-resolution scan, without even opening Blender, the last step consists of processing multiple models in one shot.
Concerning Sketchfab models, I simply wrote a Python program (available here) which takes as input a directory of sub-directories, each containing a 3D model and – if they exist – the associated albedo and normal texture maps. This script runs the previous Blender script on all available models, and exports everything as .fbx models and JPEG images to a common location on my computer.
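A hedged sketch of what such a batch driver can look like; the directory layout, texture naming patterns and output paths are assumptions for illustration, not the actual script:

```python
from pathlib import Path

# Walk a root directory of per-model sub-directories, each holding a mesh
# and optional albedo/normal textures, and build one Blender command each.
def batch_commands(root, out_dir, target=500, resolution=2048):
    commands = []
    for model_dir in sorted(Path(root).iterdir()):
        if not model_dir.is_dir():
            continue
        mesh = next(model_dir.glob("*.obj"), None)
        if mesh is None:
            continue  # no supported mesh found in this sub-directory
        cmd = ["blender", "--background", "--python", "bakeOne.py", "--",
               "-i", str(mesh),
               "-o", str(Path(out_dir) / (model_dir.name + ".blend")),
               "-t", str(target), "-r", str(resolution)]
        albedo = next(model_dir.glob("*albedo*"), None)
        normals = next(model_dir.glob("*normal*"), None)
        if albedo:
            cmd += ["-a", str(albedo)]
        if normals:
            cmd += ["-n", str(normals)]
        commands.append(cmd)
    return commands

# each command can then be run with subprocess.run(cmd, check=True)
```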
The only task left is to import everything in an orderly fashion, and export to Sketchfab. And there you have it. A nice pack of lowpoly assets made from 3D scans!
All the documentation associated with those Python scripts is available on this page.
Bonus: Directly download a Sketchfab model or collection of models
In order to further ease the automation of the tasks, I tried my hand at writing yet another script which takes as input the URL of a Sketchfab model or collection, and proceeds to automatically download the corresponding model(s).
This script uses the Sketchfab API to retrieve models’ information from a collection and store them in a credits.md file (easing the “attribution” part in “CC-attribution”).
It also uses Python bindings for Selenium, a browser-automation tool used by web developers to automatically test their websites, which automates opening a web browser, navigating to the model page and hitting the download button (useful for collections).
You’ll find the documentation for this script here.
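As an illustration, the credits-file step can be sketched like this; the dictionary fields mimic the kind of data the Sketchfab API returns for a model, but treat the exact schema, names and URL as assumptions:

```python
# Format one markdown attribution line per downloaded model, easing the
# "attribution" part of CC Attribution.
def credit_line(model):
    return '- "{}" by [{}]({}) - licensed under {}'.format(
        model["name"],
        model["user"]["displayName"],
        model["uri"],
        model["license"]["label"],
    )

def write_credits(models, path="credits.md"):
    lines = ["# Credits", ""] + [credit_line(m) for m in models]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

example = {
    "name": "Horse head",
    "user": {"displayName": "someartist"},
    "uri": "https://sketchfab.com/models/xxxx",
    "license": {"label": "CC Attribution"},
}
print(credit_line(example))
```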
4 – Conclusion
Concerning the add-on:
- I can’t guarantee that everything will work fine for you, but I can try to help you if you create an issue on github.
- The latest version and documentation of the add-on will always be found on its github repository, where I might add features or modify scripts, therefore making (minor) parts of this post obsolete…
So here you go, this is how I create some high quality lowpoly assets from great models found on Sketchfab!
I still have to experiment with the photogrammetry process for various types of objects, but so far I’ve found that mixing COLMAP and OpenMVS (as explained by Dr. Peter L. Falkingham in this Sketchfab blog post) gives great results! The next step for me would be to create a cheap automated turntable based on Arduino and Raspberry Pi hardware, but I guess that this will be for another story!
Finally, thank you to the Sketchfab team for creating and maintaining such a badass 3D website, with a special shout-out to Thomas Flynn who invited me to write this post as well as Abby Crawford for the patience and corrections!
But more importantly, a huge THANK YOU to all of the Sketchfab users out there who upload amazing models, and make them available for anyone to re-use. You guys rock!