Yes, pose animation has now been put on steroids. A single keyframe in a pose track can now reference multiple poses, and they will be blended together using an individual influence per pose, interpolated along the track, and scaled overall by the animation weight. The result is that you can define reference facial poses (expressions, mouth shapes) and script how they are combined over the life of an animation, for example to create the sequence of a character making a particular speech, with all the right mouth shapes matching up with the words, yet without taking very much in the way of storage.
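In software mode, the blend at any point on the track boils down to adding each pose's per-vertex offsets to the base positions, scaled by that pose's interpolated influence and by the overall animation weight. Here's a minimal sketch of that maths using plain C++ types (these are illustrative, not Ogre's actual classes):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// A pose stores per-vertex offsets from the base mesh positions.
struct Pose { std::vector<Vec3> offsets; };

// Blend several poses into the base positions: each pose contributes
// its offsets scaled by its own influence, with the whole contribution
// scaled by the overall animation weight.
std::vector<Vec3> blendPoses(const std::vector<Vec3>& base,
                             const std::vector<const Pose*>& poses,
                             const std::vector<float>& influences,
                             float animWeight)
{
    std::vector<Vec3> out = base;
    for (std::size_t p = 0; p < poses.size(); ++p) {
        const float w = influences[p] * animWeight;
        for (std::size_t v = 0; v < out.size(); ++v) {
            out[v].x += poses[p]->offsets[v].x * w;
            out[v].y += poses[p]->offsets[v].y * w;
            out[v].z += poses[p]->offsets[v].z * w;
        }
    }
    return out;
}
```

So a "smile" pose at influence 0.5 and a "raise brow" pose at influence 0.25, with the animation weighted at 0.8, each end up contributing 0.4 and 0.2 of their full offsets respectively.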
This all works in both software mode (CPU interpolation) and hardware mode (vertex shader interpolation), and can be combined with skeletal animation too if you want. The only limitation in hardware mode is the number of poses you can interpolate at once, since each pose needs a separate vertex stream. Still, you can blend 3-4 poses quite comfortably, and the benefit of not having to upload that vertex data every frame is worth it. I hope to be allowed to export this XSI example and use it for a demo, since my own modelling skills are terrible and my existing tests are so much programmer art 😕