I recently received a support request from a customer who was having trouble getting an avatar's overall head shape correct. In this blog post I share my advice for handling the issue, expanded a bit now that I've had time to think it over.
What format do lip-sync animations use, and how are they connected to avatars generated by your system?
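While the full answer depends on the target engine, a simple way to picture a lip-sync animation is as a list of timed viseme keyframes that drive a rigged avatar's mouth blend shapes. Here is a minimal Python sketch; the field names, viseme labels, and the avatar_id link below are illustrative assumptions, not our actual file format:

# Hypothetical illustration of a lip-sync track: timed viseme keyframes
# tied to a generated avatar by an assumed avatar_id field.
lip_sync_track = {
    "avatar_id": "avatar-1234",        # assumed identifier linking the track to an avatar
    "audio": "hello_world.wav",        # the speech clip the track was generated from
    "keyframes": [                     # (time in seconds, viseme name, weight 0..1)
        (0.00, "sil", 1.0),
        (0.12, "AA", 0.8),
        (0.25, "M", 0.9),
        (0.40, "OO", 0.7),
    ],
}

def active_viseme(track, t):
    """Return the most recent keyframe at time t (simple step sampling)."""
    current = track["keyframes"][0]
    for frame in track["keyframes"]:
        if frame[0] <= t:
            current = frame
        else:
            break
    return current

print(active_viseme(lip_sync_track, 0.30))  # -> (0.25, 'M', 0.9)

At playback time, a client samples the track against the audio clock and blends the named viseme shape on the avatar's face rig.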
Three full-body Fuse3DAvatars and one Auto3DAvatar
At the upper left is the source photo used to create the bald avatar next to it.
Update 12-29-2014: available now!
Here are some quick screenshots showing the quality we're getting from our new full-body Fuse3DAvatars:
We have an active Kickstarter campaign running through the month of October 2014.
We are trying to raise financing for more consumer-level interfaces here at the 3D Avatar Store. Our professional-level technologies are already in active use: secure WebAPI access, remote 3D Avatar creation, editing, accessorizing, automated lip-sync generation, and quite a bit more support for professionals.
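To give a flavor of what remote avatar creation and automated lip-sync generation look like from a client's point of view, here is a minimal Python sketch using the requests library. The base URL, endpoint paths, authentication scheme, and response fields are illustrative assumptions, not our documented WebAPI:

# Hypothetical client sketch; every endpoint and field name here is assumed.
import requests

API_BASE = "https://api.example.com/v1"   # placeholder base URL
API_KEY = "your-api-key"                  # assumed bearer-token authentication

def create_avatar(photo_path):
    """Upload a photo and request remote avatar creation (illustrative)."""
    with open(photo_path, "rb") as f:
        resp = requests.post(
            f"{API_BASE}/avatars",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"photo": f},
        )
    resp.raise_for_status()
    return resp.json()["avatar_id"]       # assumed response field

def request_lip_sync(avatar_id, audio_path):
    """Request automated lip-sync generation for an existing avatar (illustrative)."""
    with open(audio_path, "rb") as f:
        resp = requests.post(
            f"{API_BASE}/avatars/{avatar_id}/lipsync",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": f},
        )
    resp.raise_for_status()
    return resp.json()

avatar_id = create_avatar("customer_photo.jpg")
track = request_lip_sync(avatar_id, "greeting.wav")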
The 3D Avatar Store's technology is a key component of creating Personalized Media, a new form of media that has never been possible before: media with the end-user's identity incorporated into its message.
We've been working on an interesting project for the last few months: a collaboration between companies to promote a new science fiction novel written by one of the companies' owners.
We are proud to announce the public release of our new Facial Feature Finder! This is the software that, given a photo of someone, locates and outlines their eyes, nose, mouth, and face outline. The new face finder is a significant improvement over our previous version: where the previous finder's neural net was trained on photos and 3D scan data from 70,000 real people, the new finder's training set covers 300,000 real people. Check out these results, created with no facial feature editing.
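If you'd like to experiment with the same kind of facial feature finding yourself, here is a minimal sketch using dlib's open-source 68-point landmark model as an analogy. This is not our production Facial Feature Finder, just a publicly available pipeline that locates the same feature groups:

# Open-source analogy using dlib, not our proprietary finder.
# Requires dlib's shape_predictor_68_face_landmarks.dat model file.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Standard index ranges in the 68-point annotation scheme.
FEATURE_GROUPS = {
    "face_outline": range(0, 17),
    "nose": range(27, 36),
    "eyes": range(36, 48),
    "mouth": range(48, 68),
}

def find_features(image_path):
    """Return {feature_name: [(x, y), ...]} for the first face found."""
    img = dlib.load_rgb_image(image_path)
    faces = detector(img, 1)              # upsample once to catch smaller faces
    if not faces:
        return {}
    shape = predictor(img, faces[0])
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    return {name: [points[i] for i in idx] for name, idx in FEATURE_GROUPS.items()}

outlines = find_features("photo.jpg")

The returned point groups can be drawn over the photo to reproduce the kind of eye, nose, mouth, and face outlines shown in the results above.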