Tutorial: Low Poly Assets from 3D Scans


I am Loïc, a French “lots-of-things enthusiast”, as I like to describe myself. I recently got into 3D as a hobby, in parallel with my job as a research engineer in scientific computing, where I am used to “scientific” 3D, especially in the field of applied mathematics.

I have close-to-zero artistic skills, a bit more practical skill when it comes to programming in general, and I really enjoy discovering and messing around with 3D techniques, mostly in Blender.

Update (2018/11/10): The addon I am developing, BakeMyScan, is now hosted on bakemyscan.org. I also modified a lot of the python code while improving the addon, but this had the side-effect of making almost every link in this article obsolete… I still hope you’ll enjoy the read!

1 – Introduction

Lately I’ve been creating and sharing a few collections of lowpoly assets by reprocessing 3D scans hosted on Sketchfab under Creative Commons licensing schemes: body scans, rocks, sticks, mushrooms, but also fruits and vegetables scanned for the Horn of Plenty scanning challenge, and animal skulls from the great collection of UVic Libraries.

In this post I’ll therefore detail the workflow and tools I use to create such collections of assets – using Python in Blender!

Why I think Sketchfab is great 🙂

Sketchfab is a great place to admire amazing models of creature and character sculpts, ancient artifacts, geology sites, technology designs, and wonderful 3D stuff in general… And I guess we all get that 😉

But it is also a great platform to get access to free and high (sometimes very high) quality 3D assets and props that many of us – the community – like to download and re-use for various purposes. Some of us are simply collectors by nature, while others are architects, video game designers, scientists, or simply 3D-geeks looking for nice models to fiddle with (and I definitely fall into this category!)…

And Sketchfab offers all of us a great resource: plenty of freely downloadable models that you can actually inspect before downloading, and re-use under the terms of Creative-Commons licenses, the most common being the CC-attribution. In a few words: give credit and enjoy!

And there you go: plenty of high quality 3D models to play with, whatever your application is!

What realistic “lowpoly” assets are, and why I think they’re great

I guess that everyone here knows – or has their own definition of – what lowpoly means. LOW-POLY. A model made from a low number of polygons.

A polygon has a straightforward meaning in 3D graphics, and comes down to triangles and quantifiable arrays of coordinates and indices stored on your GPU or processor. But what is a “low” number of such polygons? That is of course totally dependent on your usage: while in mobile games a few years ago you’d try to stick to a few hundred polygons, a mesh made of millions of triangles can still be too coarse in specific scientific applications…

Although to me lowpoly mostly characterizes an art-style – think bright colors and sharp edges, like on the wonderful model above from tzeshi – I’ll define a lowpoly asset in the context of 3D scanning as a realistic 3D model made from 100 to 5,000 triangles. That satisfies most of my needs, my criteria being based on performance as I use such assets as particles in Blender, and as “game assets” in Unreal Engine, Unity 3D or in WebGL apps.

Those models are often associated with at least an albedo map and a normal map, the latter being fundamental, as it can give a potato-shaped blob of 200 triangles the appearance of a highly realistic carved rock!

Needless to say, using this normal map trick to reduce the number of polygons from a few million – which is common for raw 3D scan results – down to a few hundred will make your computer happy, and give you back some processing power:

  • In the context of video games this means more FPS, a more complex physics system, more advanced interaction mechanisms…
  • Concerning web technologies, your app will have a lower bandwidth usage and loading time (who wants to wait 15 minutes for an environment asset to load?), will run on less sophisticated computers and more easily on mobile, and you can afford greater complexity for the same “performance budget”.
  • And the advantages carry over to offline renderings in your favorite 3D software, especially for animations, where decreasing your polycount can drastically cut down rendering times (and your electricity bill)!

But you all know that of course ;), so let’s move to the workflow I use!

2 – Workflow (Blender)

The workflow I use is fairly classic in this context, and can be segmented into the following parts:

Import the model

  1. After checking the model’s license on Sketchfab (to make sure that it is under CC Attribution or another adequate license), I download it and unzip it to a specified directory.
  2. I then import the model in Blender and assign its albedo and – if it exists – its normal map to a new material.

Preprocess it

  1. I usually center the model in my 3D scene, make sure that no rogue elements (disconnected vertices and faces, inside and invisible parts…) persist, and rescale it so that its longest dimension is equal to one Blender unit.
  2. Optionally, in order to smooth out artifacts and obtain a clean surface – which raw scan results rarely offer – I often remesh the model to a high poly version (around 500k triangles) with mmgs, which you could easily replace on Windows with Meshlab and its command-line backend, meshlabserver, for instance. A rough script covering both preprocessing steps is sketched after the note below.
A note on mmgs:
Mmgs is a neat little command line tool from the MMG Platform, a suite of open source software and libraries for Linux and macOS (sorry, Windows users!) used for mesh generation, adaptation, optimization and level-set discretization.
It relies on the mathematical concept of the Hausdorff distance: instead of fixing a target number of faces, you basically specify the maximal distance you wish to have between the initial model and the remeshed one, and mmgs will do its best to give you a nice surface approximation, either decimating or refining your mesh. On this model for instance, I used a lower Hausdorff distance for the details than for the rest of the horse’s body.
If you wish to give it a go, you can install the software for Linux or Mac from the MMGTools GitHub repository, and install the BakeMyScan add-on to interface it with Blender (as the software normally runs on the command line and uses the uncommon MEDIT .mesh file format).
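For the scripting-minded, here is a minimal sketch of what this preprocessing could look like through the Blender 2.79 Python API. The object name “Scan”, the file paths and the Hausdorff value are placeholders, the mmgs binary name and the -hausd flag come from the MMG documentation (double-check your installed version), and the actual add-on code differs:

import bpy
import subprocess

# "Scan" is a hypothetical object name: replace it with your own
obj = bpy.data.objects["Scan"]
bpy.context.scene.objects.active = obj
obj.select = True

# Center the object on the world origin
bpy.ops.object.origin_set(type='ORIGIN_GEOMETRY')
obj.location = (0.0, 0.0, 0.0)

# Rescale so that the longest dimension equals one Blender unit
obj.scale /= max(obj.dimensions)
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)

# Optional high poly remesh with mmgs, run as an external command.
# The model must first be exported to the MEDIT .mesh format,
# which the add-on takes care of.
subprocess.call([
    "mmgs_O3",
    "-in", "/tmp/scan.mesh",
    "-out", "/tmp/scan_remeshed.mesh",
    "-hausd", "0.002",
])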

Remesh it

The most critical – and interesting – part of the workflow is the remeshing part: transforming a copy of the initial model to a similar-looking geometry, except with far fewer polygons:

  1. I first estimate the number of faces I wish my final model to have: a round rock can look convincing at 500 triangles, while a more complex model will require more: for the animal skulls embedded above, for instance, I chose to go for 2,500 triangles to keep some of the thin details.
  2. I then remesh the initial model to a “medium-poly” model (20k to 100k triangles depending on the complexity) with mmgs.
  3. To get to the final low number of faces, I then iteratively apply a sequence of useful Blender modifiers (for those not familiar with Blender, these act like filters on your object), which has the effect of keeping more geometry in the model’s high curvature zones (highly concave and convex areas) than in the flatter regions:
    1. Decimate -> Planar: Merges faces separated by an angle under a certain threshold; I usually keep the default value of 5°.
    2. Triangulate: Retriangulates the model resulting from the planar decimation, which tends to contain strange n-gons (faces with more than 4 edges).
    3. Smooth: At this stage the resulting geometry is often stretched, and a few steps of laplacian smoothing help to keep a decent topology and avoid intersecting faces!
    4. Decimate -> Ratio: Automatically merges faces according to a ratio, which I often leave at the default value of 0.8.
    5. Shrinkwrap: This has the effect of making the current geometry “stick” to the original one, making sure that the previous decimation and smoothing operations did not alter the geometry too much.
  4. Each iteration of the above sequence decimates the mesh by a factor that varies from one pass to the next. I therefore repeat these steps until an iteration brings the number of faces under the threshold I set (a scripted version of one iteration is sketched after this list).
  5. I then cancel the last iteration, and let Blender do a final decimation with the Decimate -> Ratio modifier, in order to obtain the “nearly correct” number of faces.
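For reference, here is a rough sketch of one such iteration written against the Blender 2.79 Python API; the object names and the target face count are placeholders, and the real BakeMyScan implementation differs in its details:

import bpy
import math

def decimation_iteration(lowpoly, highpoly, planar_angle=5.0, ratio=0.8):
    # Make sure the object we modify is the active one
    bpy.context.scene.objects.active = lowpoly

    # 1. Planar decimation: dissolve faces within a small angle threshold
    mod = lowpoly.modifiers.new("planar", type="DECIMATE")
    mod.decimate_type = "DISSOLVE"
    mod.angle_limit = math.radians(planar_angle)

    # 2. Retriangulate the n-gons left by the planar dissolve
    lowpoly.modifiers.new("triangulate", type="TRIANGULATE")

    # 3. A few steps of laplacian smoothing to keep a decent topology
    mod = lowpoly.modifiers.new("smooth", type="LAPLACIANSMOOTH")
    mod.iterations = 3

    # 4. Collapse decimation by a fixed ratio
    mod = lowpoly.modifiers.new("collapse", type="DECIMATE")
    mod.decimate_type = "COLLAPSE"
    mod.ratio = ratio

    # 5. Shrinkwrap the result back onto the original geometry
    mod = lowpoly.modifiers.new("shrinkwrap", type="SHRINKWRAP")
    mod.target = highpoly

    # Apply the whole modifier stack
    for name in [m.name for m in lowpoly.modifiers]:
        bpy.ops.object.modifier_apply(modifier=name)

    return len(lowpoly.data.polygons)

# Hypothetical object names; loop until we drop under the target,
# keeping in mind that the real workflow cancels the overshooting
# iteration and finishes with a single Decimate -> Ratio instead.
lowpoly = bpy.data.objects["scan_lowpoly"]
highpoly = bpy.data.objects["scan_highpoly"]
while len(lowpoly.data.polygons) > 2500:
    decimation_iteration(lowpoly, highpoly)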

Here is a gif of the iterative process, transforming a 2.5M-triangle scan of a hand from weareprintlab into a 1.5k (0.06% ratio) lowpoly model (visible on Sketchfab here; try turning on the wireframe view in the model inspector):

Unwrap, bake and export

  1. Now that my target number of faces is reached, I unwrap the final lowpoly model with Smart UV Project, Blender’s automatic (and dirty) UV unwrapper (a scripted version of these last steps is sketched after this list).
  2. I finally bake the normals from the original model – or the high resolution model previously created by mmgs – to the newly created model, as well as the albedo map.
  3. Time to export directly to Sketchfab (Blender has an add-on for this) or, as I usually do, to export the model as a .fbx or .obj file (understood by game engines and most 3D software).
  4. I usually export the textures to JPEG format, which offers nice compression and a lower file size, in exchange for a slight loss of quality, especially visible on normal maps.
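As a rough idea of how these last steps translate into the Blender 2.79 Python API, here is a minimal sketch; the object name and export path are placeholders, and the baking setup itself is only summarized in a comment:

import bpy

lowpoly = bpy.data.objects["scan_lowpoly"]   # hypothetical object name

# Smart UV Project runs in edit mode on the selected faces
bpy.context.scene.objects.active = lowpoly
lowpoly.select = True
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(angle_limit=66.0, island_margin=0.02)
bpy.ops.object.mode_set(mode='OBJECT')

# Baking goes through bpy.ops.object.bake(type='NORMAL',
# use_selected_to_active=True, ...) in Cycles, with the high poly
# selected, the lowpoly active, and an image texture node active in
# the lowpoly material to receive the result (the add-on sets this up).

# Export the selection as .fbx for game engines and other software
bpy.ops.export_scene.fbx(filepath="/tmp/scan_lowpoly.fbx", use_selection=True)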

3 – Automation

Many parts of this section are quite “advanced” in terms of command-line usage; sorry in advance for the technical bits!

Also, the following scripts are still in an unstable “Work in Progress” state. Their behavior is not consistent and although you might get lucky on the majority of the models you try to play with, plenty of bugs and unexpected errors may – and will – happen.

The workflow exposed above is very classic: apart from the iterative process trick (of which I’m quite proud, as it works well!), which you could replace with Meshlab filters such as “Quadric Edge Collapse Decimation”, and my use of a specific remeshing tool – mmgs – nothing about it is really special. But…

Here comes the fun: Blender scripting

Blender has a Python API available, which basically allows automation of most, if not all, of the tasks we manually do by selecting objects, dragging the mouse, clicking the mouse, pressing keys… Who still does that in 2018!?

In a few words, when you execute an action manually in Blender, you’ll see a line of code appear in the “Info” area, which you’ll find by dragging the main menu toolbar down or selecting “Info” in the area selector (like when switching between 3D view and Node Editor or UV/Image editor).

  • Copy and paste this line into the python console and you’ll see the action you executed repeated.
  • Paste it into a new file of the text editor and you’ll be creating a script.
  • Paste other lines in, tweak them, and you’ll be a programmer!
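For instance, grabbing the default cube and moving it along the X axis logs a line similar to this one (the exact values and extra options will vary):

bpy.ops.transform.translate(value=(1.0, 0.0, 0.0))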

You’ll find all the information required to start scripting in this Blender manual section, which will provide you with a nice quickstart as well as the full API documentation!

BakeMyScan add-on

If you wish to reproduce my workflow, you’ll find on github a Blender add-on I wrote to automate most of the process described above, and which I humbly call the

BakeMyScan Add-on 😉

This add-on should work correctly with Blender 2.79 (provided you follow the installation and usage instructions). It might also work with older versions, but those are untested and you know the deal: use at your own risk!

If you try it and find a bug or a missing (and somewhat “basic”) feature, please create a new issue on github; and if you have tweaked and improved it, of course feel free to fork the repository and open a pull request so that I can merge your modifications!

The fun goes on: using the command line

Writing this add-on came with yet another advantage: each button available in the Blender UI corresponds to a function I wrote, which can be used like any other function from the Blender Python API. And since this API can be used without even opening Blender (by running Blender in background mode from the terminal), I wrote a higher-level “wrapper” script built on my add-on’s functions, available here, which can be executed by Blender.

For instance, executing the following command line in a terminal would remesh the model “scan.obj” to an object saved in “scan01.blend” and made of 500 triangles, with the initial textures “model_albedo.jpg” and “model_normals.jpg” baked to 2048px textures:

blender --background --python bakeOne.py -- -i scan.obj -o scan01.blend -t 500 -r 2048 -a model_albedo.jpg -n model_normals.jpg
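If you are curious about how such a wrapper reads its own options, the trick is that Blender forwards everything placed after the standalone “--” to the script, which can then parse it like any command-line program. Here is a minimal sketch, with flag names simply mirroring the command above (the real bakeOne.py may differ):

import sys
import argparse

# Blender keeps its own arguments; everything after "--" belongs to the script
argv = sys.argv[sys.argv.index("--") + 1:] if "--" in sys.argv else []

parser = argparse.ArgumentParser(description="Remesh and bake a 3D scan")
parser.add_argument("-i", "--input", required=True, help="input model (.obj, .ply...)")
parser.add_argument("-o", "--output", required=True, help="output .blend file")
parser.add_argument("-t", "--target", type=int, default=500, help="target triangle count")
parser.add_argument("-r", "--resolution", type=int, default=2048, help="baked texture size")
parser.add_argument("-a", "--albedo", help="albedo texture to rebake")
parser.add_argument("-n", "--normals", help="normal map to rebake")
args = parser.parse_args(argv)

# ... import args.input, remesh to args.target triangles, bake, then save:
# bpy.ops.wm.save_as_mainfile(filepath=args.output)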

Finally: multiple models!

Now that I am equipped with an “automatic” and DIY way to create a lowpoly asset from a high resolution scan, without even opening Blender, the last step consists of processing multiple models in one shot.

For Sketchfab models, I simply wrote a python program (available here) which takes as input a directory containing sub-directories, each of them holding a 3D model and – if they exist – its associated albedo and normal texture maps. This script runs the previous Blender script on all available models, and exports everything as .fbx models and JPEG images to a common location on my computer.
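As an illustration, a batch driver of this kind could be sketched as follows; the directory layout, texture naming conventions and bakeOne.py flags are assumptions based on the command shown earlier, not the actual script:

import os
import subprocess

INPUT_DIR = "/path/to/downloaded/models"   # one sub-directory per model
OUTPUT_DIR = "/path/to/lowpoly/assets"

for name in sorted(os.listdir(INPUT_DIR)):
    directory = os.path.join(INPUT_DIR, name)
    if not os.path.isdir(directory):
        continue

    # Find the model and its textures inside the sub-directory
    files = [os.path.join(directory, f) for f in os.listdir(directory)]
    models = [f for f in files if f.lower().endswith((".obj", ".ply", ".stl"))]
    albedos = [f for f in files if "albedo" in f.lower() or "diffuse" in f.lower()]
    normals = [f for f in files if "normal" in f.lower()]
    if not models:
        continue

    # Run the Blender wrapper script in background mode on this model
    cmd = ["blender", "--background", "--python", "bakeOne.py", "--",
           "-i", models[0],
           "-o", os.path.join(OUTPUT_DIR, name + ".blend"),
           "-t", "1500", "-r", "2048"]
    if albedos:
        cmd += ["-a", albedos[0]]
    if normals:
        cmd += ["-n", normals[0]]
    subprocess.call(cmd)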

The only task left is to import everything in an orderly fashion, and export to Sketchfab. And there you have it. A nice pack of lowpoly assets made from 3D scans!

All the documentation associated with those python scripts is available on this page.

Bonus: Directly download a Sketchfab model or collection of models

In order to further ease the automation of the tasks, I tried my hand at writing yet another script which takes as input the URL of a Sketchfab model or collection, and proceeds to automatically download the corresponding model(s).

This script uses the Sketchfab API to retrieve models’ information from a collection and store them in a credits.md file (easing the “attribution” part in “CC-attribution”).

It also uses Python bindings for a browser-automation tool called Selenium (used by web developers to automatically test their websites), which takes care of opening a web browser, navigating to the model page and hitting the download button (useful for collections).
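To give you an idea, the Selenium part boils down to something like the sketch below; the CSS selector is purely illustrative (Sketchfab’s markup changes over time), the URL is a placeholder, and you also need to be logged in to the site for the download button to work:

import time
from selenium import webdriver

MODEL_URL = "https://sketchfab.com/models/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

driver = webdriver.Firefox()  # needs geckodriver in your PATH
driver.get(MODEL_URL)
time.sleep(5)  # crude wait for the page to finish loading

# Hypothetical selector for the download button: inspect the page
# and adapt it, as Sketchfab's markup is not guaranteed to be stable.
button = driver.find_element_by_css_selector(".download-button")
button.click()

time.sleep(30)  # let the browser finish the download
driver.quit()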

You’ll find the documentation for this script here.

4 – Conclusion

Concerning the add-on:

  1. I can’t guarantee that everything will work fine for you, but I can try to help you if you create an issue on github.
  2. The latest version and documentation of the add-on can always be found on its github repository, where I might add features or modify scripts, therefore making (minor) parts of this post obsolete…

So here you go, this is how I create some high quality lowpoly assets from great models found on Sketchfab!

I still have to experiment with the photogrammetry process for various types of objects, but so far I’ve found that mixing colmap and openmvs (as explained by Dr. Peter L. Falkingham in this Sketchfab blog post) gives great results! The next step for me would be to create a cheap automated turntable based on Arduino and Raspberry Pi hardware, but I guess that will be a story for another time!

Finally, thank you to the Sketchfab team for creating and maintaining such a badass 3D website, with a special shout-out to Thomas Flynn, who invited me to write this post, as well as to Abby Crawford for her patience and corrections!

But more importantly, a huge THANK YOU to all of the Sketchfab users out there who upload amazing models, and make them available for anyone to re-use. You guys rock!

PS: I’m not much of a social networks guy, but do not hesitate to follow me on Sketchfab or on github!

 

About the author

Loïc Norgeot

Lots-of-things enthusiast


Comments

  • Kevin Gidusko says:

    Loïc, I wanted to take a moment to thank you for an outstanding blog post here. Really useful and explained so well. You are one of the reasons that I love this community of folks so much; everyone is just trying to help everyone get better so that we can push the boundaries of what we can do. Thank you!

    • Loïc Norgeot says:

      Wow, thank you very much for your kind words Kevin, I’m really glad you appreciated it 🙂
      One of the main reasons I tried to explain this process in detail is precisely that I appreciate the spirit of Sketchfab’s community: as you said, a bunch of people eager to help and share tricks… What more could you ask for!
      Thanks for your comment!

  • Rovo says:

    How do you bake a proper albedo map from high-poly to low-poly?

    • Loïc Norgeot says:

      Hi Rovo.
      If you ONLY want to bake the albedo map, I can see three cases you might be in:
      * You don’t care about using blender: I guess that every 3D editor has its own baking tools, and it appears that Substance Painter, Marmoset and xNormal do a great job at baking maps. Note that the first two are not free, and you’ll have to consult tutorials to use them. Sorry I can’t help you more on this.
      * If you’re using blender and your albedo map is a texture: no need to use the add-on I wrote. Just use the “Blender Render” render engine instead of “Cycles”, and follow the instructions on this page of blender’s documentation (pay extra attention to the “workflow” part at the end of the page): https://docs.blender.org/manual/ja/dev/render/blender_render/bake.html
      * Finally, if the albedo map you want to bake from the high poly model was created procedurally using nodes in Cycles (if you have no clue what this sentence means, it is most probably not your case 😉), then you can use this add-on. You’ll have to make sure that the high poly object’s material uses a “Principled BSDF” node as a shader (and not a “Diffuse” one). Then in the 3D view just select your high poly object first, then the lowpoly one, and hit the button called “Bake textures” in the BakeMyScan add-on panel. Select a directory to export your texture to in the file selector, as well as the texture resolution in the bottom left part of the screen. And only check the “albedo” option.
      Hope I could help you!

  • Oke says:

    Fantastic! Shrinkwrap was the step I was missing in my attempts. I have a growing number of photogrammetry objects I plan to convert to low poly. You used MMG for your initial remesh step. What is the advantage of MMG over Blender’s remesh modifier?

    Also I’ve been checking for a clean manifold using the 3D-Printing addon. Is that overkill?

    • Loïc Norgeot says:

      Hi Oke, and thanks for your interest!
      Shrinkwrap is indeed the “magic step” in this process 😉

      MMG is not mandatory to use. I got used to it in a professional context linked to numerical simulation, for which it is a really good tool, and realized that I could also use it for “hobbyist CG”. So I am a little biased about it…
      And honestly, in the process I described in this post, using MMG as an initial remesh has no particular advantage over the remesh modifier, as the geometry is altered a lot in the following steps.
      However, I find it very useful as a “one-step” triangular remesher: the input the program takes is the Hausdorff distance, which is directly linked to the quality of the surface representation (instead of just a ratio as in the decimate modifier or a voxel resolution as in the remesh modifier). Plus you can specify “advanced” options when using it from the command line (which is its main purpose), such as the ratio between different edges, the minimal and maximal edge sizes, or a scalar map to specify the size of edges on your object. Let me know if you’d like more info on this matter; I could write an add-on (and guarantee it to work fine this time) for this sole purpose if some people seem interested in using it for CG.

      And aiming for “manifoldness” really depends on your usage scenario; most of the time you don’t necessarily need a manifold mesh for CG, as opposed to most numerical simulation codes for instance.
      Also, the 3D print add-on “make manifold” operator linked with modifiers such as shrinkwrap and remesh can sometimes produce ugly meshes if your input object is “too far from manifold”, and takes a really long time to do so.
      My advice would be to only go for a manifold model if you are 100% sure you need a watertight and non-intersecting model.
      Otherwise, I indeed think that it is overkill.

      Let me know if you need help or manage to use the scripts; I’d be glad to see some results!
      Cheers!

  • Hello everyone!

    I’m back for an update concerning BakeMyScan. The addon is much more stable than it was previously (although the python part is still messed up), and I have moved all the documentation to a website I had fun creating for the occasion: http://bakemyscan.org

    Most of the links in the article won’t work anymore, I’m sorry for the inconvenience…

    If you have tried to use it, make sure to let me know what you think, I’d love to have some feedback as I’m convinced that this little addon can get bigger and better!
