Autodesk’s Maya 2018
Autodesk used to release its M&E products at the same time, giving you a burst of new features in Maya, Max, MotionBuilder, etc. However, now it appears that with the migration to a subscription-based model, Autodesk is pushing out releases and updates throughout the year without all the fanfare that existed in the past. For instance, Maya 2018 had its debut at this year’s SIGGRAPH, and within two months, 2018.1 is already out and available for subscribers. It is a tiny, almost utilitarian point release in comparison to 3ds Max 2018 (which we reviewed last month), but it is indicative of a development approach that puts forth the idea that some features or fixes don’t need to sit around for a year waiting for other, farther-reaching features to be ready. You just quietly send out new stuff to the subscriber base.
So what about this subscriber model anyway? I’ve heard pros and cons for the shift ever since Adobe really went big with it, and I can see both sides. You have those who would prefer to buy a car and own it rather than renew a lease every two years and continually be driving the latest model. But from a developer standpoint, it seems like everyone is moving toward the subscription model — and enough users don’t mind that it isn’t going to change anytime soon. Autodesk is sweetening the deal beyond the consistent updates with Cloud Rights, which gives you the ability to launch UI-less licenses of Max or Maya in the cloud in order to expand your computing power. The license (or licenses, if you have a multi-user subscription) covers renders, simulations or caching.
But enough with the boring stuff. What’s new in Maya 2018? Well, it seems like everyone is excited about a new UV workflow with tools that make laying out UVs faster and more efficient. And if you’ve read my past articles you know how much I’m not a fan of UVs. Anything to speed along the process is a godsend.
The love is spread out through the process. XGen keeps getting more robust with clumping in interactive grooming. Animation and rigging have some UI updates to make the process less cluttered, with rig controllers that turn on and off based on cursor proximity. And mographers continue to gain ground with tools like advanced text tools, direct dynamics through MASH and, most significantly in my book, a Live Link to After Effects — which, until now, has really been the domain of Cinema 4D.
Of course, I won’t be able to fit everything in this space, and now with more updates coming more frequently, I’m not going to be able to stay on top of everything. But I promise to do my best!
Price: $1,470 for annual subscription; $185 monthly
Faceware, Xsens & Reallusion iClone
There is a great synergistic relationship between Reallusion’s iClone real-time animation system, Faceware’s headcams and software, and the Xsens MVN motion-capture system, which together deliver real-time performance capture of both body and face and apply it directly to CG characters with real-time feedback. Recently, Katie Jo Turk from Faceware, Chris Adamson from Xsens and John Martin from Reallusion met up with me to squeeze me into a tight Lycra suit with a helmet and camera strapped to my head.
In the past, I have written about the flexibility of iClone for character creation and manipulation, motion and motion-capture application, animation tools, and a vast library of materials and assets you can choose from. But a couple of months ago, Faceware got in the mix to open up a whole new world to indie animators and studios.
Faceware Technologies, as its name implies, specializes in markerless facial tracking, using computer vision and machine learning to obtain highly detailed data of a performer’s facial movements. Faceware offers headgear hardware: the high-end Pro HD Mark III system, with a tiny HD camera (to prevent occluding the performer’s eye-line), onboard lights for consistent lighting on the face, a belt for extra battery power and the video transmitter, and a USB to HDMI converter that can feed any number of recording devices. Or you can opt for a less expensive model that uses a GoPro instead of the fancier camera, with the benefit of being able to control the camera from an iPhone app. But really, any video device that captures your performance will work with Faceware because the magic is in the code, which now has a plugin that interfaces directly with iClone. This provides a multitude of controls and sliders for finessing or exaggerating your captured performance and remapping it onto any number of characters.
So, now we have a robust animation system with a world of character assets, and a way to capture facial performance and map it onto those characters. If you are using headgear, then the performer can move around — because physical motion does affect the performance — without the camera losing the facial features. We now need to capture the body motion!
This is where the Xsens MVN suit comes in. For my demonstration, we had the latest Awinda suit, which consists of a shirt and a bunch of straps that you loop around your limbs at key points. Each strap holds a matchbook-sized wireless tracker packed with technology that measures acceleration, rotation and magnetic fields to assess where it is in space. All that data is fed back to a receiver that interfaces with iClone! The higher-end MVN Link suit is a complete Lycra outfit with the trackers wired together, which can sample at a higher rate for more fidelity. But we aren’t assessing pro athlete dynamics or planning for the best prosthetic to replace a leg — I’m just running and jumping around my house like a fool, so the Awinda is perfectly fine. No matter where I ran around, back at my HP Mobile Workstation, we were recording both my antics and my face, and applying it to the 3D character on the desktop.
The recorded data can then be post-processed — either with sliders, which tweak the performance on the fly as you watch it play back, or, if you want to dig in deep, at the keyframe level. We can now refine the data within iClone itself and get some cool stuff going — especially since the latest iClone is getting into the Physically Based Rendering game and integrating real camera data. But what if we have a broader pipeline that uses Maya or Max, or we are using Unreal or Unity? Reallusion has 3DXchange for importing assets and data, and exporting them to a plethora of other 3D packages.
The overall setup time for the whole system was maybe a half hour, which included making sure that all the software on my Mobile Workstation was running the latest and greatest builds, finding the right-sized headgear for my big head, calibrating the cameras, and then getting the suit situated. (I totally recommend getting a friend to help out with this. Putting the suit on alone is a bit frustrating. Maybe not as frustrating as, say, tying a bow tie, but it’s way easier with someone to help out.) The headgear fits in a small Pelican case, including the cables, batteries, adapters and Teradek video transmitter. The Xsens fits in a backpack, and iClone fits inside your laptop. You are quite literally a walking motion-capture studio.
My mind is swirling with applications, from previz and pre-production on films, to planning shoots in VR (blocking out actors in a space before even getting to the space), to even non-film applications (God forbid) like dance analysis or martial arts schools — any place where feedback on physical performance is necessary to bring people to the next level. And the portability of the whole system makes it convenient and accessible.
All of these tools fitting together gets me excited about how this can benefit independent artists who may not have access to a large motion-capture volume, or animation teams that just want to quickly block out beats in a performance and then add on top of it. We are living in the future, people!
[This demonstration was accomplished using an HP ZBook 15 G3 running Windows 7.]