Well, I managed to hit my self-imposed deadline - the XSI pose / facial animation exporter is working! It works beautifully, and here’s a small video to show it off.
Yes, I know his smiling is a little extreme at the end - that's how it was in XSI. They were doing it in a rather exaggerated fashion to show how you would combine 'emotion' poses with lip sync poses, and that's exactly how OGRE is playing it back. The mesh has 2 main overarching expression poses used in this animation, 'mad' (used at the start) and 'happy' (used at the end). There are also a number of mouth shape poses which form the words, specifically shapes for A, U, O, E, I, C, W, M, L, F, T, P, R, S and TH - these only affect the mouth area, whilst the emotion poses affect the mouth and the eyes / cheeks.
Keyframes then take these poses and blend them at various weights to give you a combination of lip syncing and different emotional expressions. These are combined completely on the fly and totally flexibly - there is no cheating going on here, such as pre-baking that particular sequence. With exactly the same pose data present in this mesh you could make this head say any sentence at all, and change expression at any point in parallel with that, between happy, sad, mad etc.
Note - I’m aware he hasn’t got a tongue 😉 That’s because the tongue is a NURBS object in the original which I haven’t gotten around to converting to a polymesh yet.