Based on my discoveries & decisions last weekend about how to go about including pose animation support in XSI, I finished ripping out the old animation detection code in the XSI exporter and replacing it with the new, entirely mixer-based version. I also took the opportunity to ruthlessly excise a whole bunch of code which used to export skeletal animation directly by reading the animation fcurves on the various deformers, in favour of the alternative IK sampling routine I'd added later. The direct export approach wasn't watertight because of the sheer number of derived relationships & constraints (on top of the IK itself) you can set up in XSI; without decoding and recalculating all of them it was impossible to generate the correct animation. Sampling the results and then getting OGRE to optimise them (removing redundant tracks & keyframes) was much more reliable, so now that's the only option for skeletal animation.
The other advantage of basing everything in the mixer is that you will be able to tell it to export animations composed of compound animation clips inside XSI. An animator can therefore keep their animations separate in XSI but export a combined version which is optimal for real time; that makes the workflow a bit more flexible and matches XSI's preferred way of working (non-destructive, non-linear etc). This applies even if the animations are of different types (skeletal and pose/shape animation) - in this case it will actually become 2 animations after export, one in the skeleton and one in the mesh, but it only requires a single animation state (using the shared name) on the entity to control both. That's the theory, anyway - I have yet to test it 😉 I'm determined to get the XSI pose animation finished by the end of this weekend.