Unsupervised Conversion of 3D models for Interactive Metaverses
Published in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), July 2012.
Abstract
A virtual-world environment becomes a truly engaging platform when users have the ability to insert 3D content into the world. However, arbitrary 3D content is often not optimized for real-time rendering, limiting the ability of clients to display large scenes consisting of hundreds or thousands of objects. We present the design and implementation of an automatic, unsupervised conversion process that transforms 3D content into a format suitable for real-time rendering while minimizing loss of quality. The resulting progressive format includes a base mesh, allowing clients to quickly display the model, and a progressive portion for streaming additional detail as desired. Sirikata, an open virtual world platform, has processed over 700 models using this method.
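To make the idea of the progressive format concrete, here is a minimal conceptual sketch (not the paper's or Sirikata's actual implementation; all class and function names are illustrative) of a client that renders the base mesh immediately and then applies streamed refinement records as they arrive:

```python
# Conceptual sketch only: a client renders the coarse base mesh right away,
# then applies streamed refinement records to add detail incrementally.
# Names (Mesh, Refinement, render, ...) are illustrative, not from the paper.

from dataclasses import dataclass, field
from typing import Iterable, List, Tuple


@dataclass
class Mesh:
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)
    triangles: List[Tuple[int, int, int]] = field(default_factory=list)


@dataclass
class Refinement:
    """One streamed detail record: a new vertex plus the triangles it adds."""
    new_vertex: Tuple[float, float, float]
    new_triangles: List[Tuple[int, int, int]]


def apply_refinement(mesh: Mesh, r: Refinement) -> None:
    """Add one increment of detail to the in-memory mesh."""
    mesh.vertices.append(r.new_vertex)
    mesh.triangles.extend(r.new_triangles)


def display_progressively(base: Mesh, stream: Iterable[Refinement]) -> Mesh:
    """Show a usable picture from the base mesh, then refine as data arrives."""
    render(base)                      # quick display after a minimal download
    for refinement in stream:
        apply_refinement(base, refinement)
        render(base)                  # re-render with the additional detail
    return base


def render(mesh: Mesh) -> None:
    # Placeholder for a real renderer; here we just report mesh complexity.
    print(f"rendering {len(mesh.vertices)} vertices, "
          f"{len(mesh.triangles)} triangles")
```

The key property illustrated is that the base mesh alone is renderable, so the client never blocks on the full model; detail arrives only as needed.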
BibTeX entry
@inproceedings{icme12terrace,
  author    = {Jeff Terrace and Ewen Cheslack-Postava and Philip Levis and Michael Freedman},
  title     = {{Unsupervised Conversion of 3D models for Interactive Metaverses}},
  booktitle = {{Proceedings of the IEEE International Conference on Multimedia and Expo (ICME)}},
  year      = {2012},
  month     = {July}
}