Analysis of Google Draco
Google Draco is a library that aims to do for mesh data what MP3 and OGG did for music. It does not reduce memory usage once a mesh is loaded, but it could reduce file sizes and improve download times. Although mesh data does not tend to use much disk space, I am always interested in optimization. Furthermore, some of the NASA models I work with are very high-poly, and do take up significant disk space. Google offers a very compelling chart showing a compression ratio of about 95%:
However, there is not much information given about the original mesh. Is it an ASCII .obj file? Of course that would be much bigger than binary data. I wanted to get a clear look at what kind of compression ratios I could expect within the context of glTF files. I found a fairly high-poly model on SketchFab here to work with.
This model has 2.1 million triangles and 1 million vertices. That should be plenty to test with.
Now, glTF is actually three different file formats. Normal glTF files store JSON data and come with an extra .bin file for binary data. This stores things like vertex positions and animation data, stuff you probably won't want to edit by hand. The .glb version of the format combines the JSON and binary data into a single file, which can be viewed but not edited in a text editor. Finally, there is also base64 glTF, which stores the JSON together with base64-encoded binary data in a single file. The base64 data looks like gibberish, but the file can be opened in a text editor, modified, and resaved without destroying the binary data.
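To make the embedded flavor concrete, here is a minimal C++ sketch that base64-encodes binary data into the data URI form an embedded glTF buffer uses. The file name model.bin is just a placeholder for whatever side-car binary file the exporter produced:

```cpp
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Encode raw bytes as base64 (RFC 4648), the encoding used by embedded glTF buffers.
std::string Base64Encode(const std::vector<uint8_t>& data)
{
    static const char table[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    out.reserve((data.size() + 2) / 3 * 4);
    for (size_t i = 0; i < data.size(); i += 3)
    {
        // Pack up to three bytes into one 24-bit block, then emit four 6-bit digits.
        uint32_t block = data[i] << 16;
        if (i + 1 < data.size()) block |= data[i + 1] << 8;
        if (i + 2 < data.size()) block |= data[i + 2];
        out += table[(block >> 18) & 63];
        out += table[(block >> 12) & 63];
        out += (i + 1 < data.size()) ? table[(block >> 6) & 63] : '=';
        out += (i + 2 < data.size()) ? table[block & 63] : '=';
    }
    return out;
}

int main()
{
    // Read the binary payload that would normally live in the side-car .bin file.
    std::ifstream file("model.bin", std::ios::binary);
    std::vector<uint8_t> bin((std::istreambuf_iterator<char>(file)),
                             std::istreambuf_iterator<char>());

    // An embedded glTF buffer stores the same bytes as a base64 data URI:
    std::cout << "\"buffers\": [{ \"byteLength\": " << bin.size()
              << ", \"uri\": \"data:application/octet-stream;base64,"
              << Base64Encode(bin) << "\" }]" << std::endl;
    return 0;
}
```

Since base64 stores every 3 bytes of data in 4 characters, the embedded form inflates the binary payload by about 33%, which is one reason .glb is usually the better choice when you don't need to edit the file by hand.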
I was very curious to see what advantage Google Draco mesh compression would offer. Would it make glTF files significantly smaller, so that your games take up less space and have faster download times?
To answer this question, I imported the model into Blender and exported several versions, keeping only the position, normal, and texture coordinate data. I also loaded the uncompressed .glb file in Ultra Engine and resaved it with simple mesh quantization.
As you can see, mesh quantization (using one byte for each normal component, plus one byte for padding, and two bytes for each texture coordinate component) combined with regular old ZIP compression comes in significantly smaller than Draco compression at the maximum compression level. It's not in the chart, but I also tried ZIP-compressing the smallest Draco file, and at 28.8 MB it was still bigger.
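For reference, here is a minimal sketch of the vertex layout that quantization scheme implies. The struct and helper names are my own illustration, not Ultra Engine's actual API, and the texture coordinate mapping assumes UVs normalized to the [0, 1] range:

```cpp
#include <cstdint>
#include <cmath>

// A quantized vertex layout matching the description above: full-precision
// position, signed bytes for the normal plus one byte of padding for
// alignment, and 16-bit normalized texture coordinates.
struct QuantizedVertex
{
    float position[3];    // 12 bytes, left at full precision
    int8_t normal[3];     //  3 bytes, each component mapped to [-127, 127]
    int8_t padding;       //  1 byte, keeps the struct 4-byte aligned
    uint16_t texcoord[2]; //  4 bytes, UVs mapped to [0, 65535]
};                        // 20 bytes total vs. 32 bytes at full float precision

// Map a normal component from [-1, 1] to a signed byte.
inline int8_t QuantizeNormal(float n)
{
    return (int8_t)std::lround(n * 127.0f);
}

// Map a texture coordinate from [0, 1] to an unsigned 16-bit value.
inline uint16_t QuantizeTexcoord(float t)
{
    return (uint16_t)std::lround(t * 65535.0f);
}
```

Just repacking the attributes this way drops a position/normal/UV vertex from 32 bytes to 20, before any general-purpose compression is even applied.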
You can look at the models yourself here:
Based on this test, it appears that Google Draco is only marginally smaller than an uncompressed quantized mesh, and still slightly bigger when ZIP compression is applied to both. Unless someone can show me otherwise, it does not appear that Google Draco mesh compression offers the 95% reduction in file sizes they seem to promise.
Correction:
This model was made up of several sub-objects. I collapsed the model and resaved it, and Draco now produces compression more like what I was expecting to see:
Presumably this means whatever data structure Draco uses (probably some kind of n-dimensional tree) takes up a fixed amount of space per mesh, so having fewer of these structures is more optimal.
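If that guess is right, the fix is simply to merge sub-meshes before encoding so the per-mesh overhead is paid once. Here is a minimal sketch of encoding a single mesh with Draco's C++ API; the quantization bit counts are typical values, not necessarily what the glTF exporter in this test used:

```cpp
#include <vector>
#include "draco/compression/encode.h"
#include "draco/mesh/mesh.h"

// Encode one mesh with Draco at maximum compression. Every call produces a
// self-contained compressed stream with its own header and connectivity /
// attribute structures, so a model split into N sub-objects pays that fixed
// overhead N times, while a merged model pays it once.
std::vector<char> EncodeWithDraco(const draco::Mesh& mesh)
{
    draco::Encoder encoder;
    // Speed 0/0 trades encode and decode time for the smallest output.
    encoder.SetSpeedOptions(0, 0);
    // Quantization bits per attribute; illustrative values.
    encoder.SetAttributeQuantization(draco::GeometryAttribute::POSITION, 14);
    encoder.SetAttributeQuantization(draco::GeometryAttribute::NORMAL, 10);
    encoder.SetAttributeQuantization(draco::GeometryAttribute::TEX_COORD, 12);

    draco::EncoderBuffer buffer;
    const draco::Status status = encoder.EncodeMeshToBuffer(mesh, &buffer);
    if (!status.ok()) return {};
    return std::vector<char>(buffer.data(), buffer.data() + buffer.size());
}
```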
Here is the corrected comparison chart. This is good. Draco shrank this model to 7% of the size of the uncompressed .glb export:
This will be very useful for 3D scans and CAD models, as long as they don't contain a lot of articulated sub-objects. The original model is on the left, the Draco-compressed model is on the right: