Well, well, an interesting article on Wired this week:
"Musicians, Real Performances: How Artificial Intelligence Will Change Music"
In short, it's about using music analysis to capture the musical style of a certain performer, putting that into a model, and using the model to apply that style to another work. People have been working on this for some years now (I remember seeing demos of the work by Roberto Bresin from KTH in Sweden).
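To make the idea concrete, here is a minimal sketch of what "applying a performer's style to another work" could look like in code. This is purely illustrative and not Zenph's (or KTH's) actual system; the `StyleModel` class, its parameters, and the deviation rules (tempo stretch, downbeat accents) are hypothetical assumptions standing in for a real learned model.

```python
# Illustrative sketch only: a performance "style" modeled as simple
# expressive deviations applied to a flat, deadpan score.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Note:
    pitch: int       # MIDI note number
    onset: float     # onset time in seconds
    velocity: int    # MIDI velocity, 0-127

@dataclass
class StyleModel:
    """Hypothetical performer model: in a real system these parameters
    would be learned from recordings of the performer."""
    tempo_scale: float   # > 1.0 slows the performance down
    accent_boost: int    # extra velocity added on downbeats
    beat_length: float   # seconds per beat in the input score

    def apply(self, notes):
        styled = []
        for n in notes:
            on_downbeat = abs(n.onset % self.beat_length) < 1e-9
            vel = min(127, n.velocity + (self.accent_boost if on_downbeat else 0))
            styled.append(replace(n, onset=n.onset * self.tempo_scale, velocity=vel))
        return styled

# A flat three-note score, then the same score "performed" in a style.
score = [Note(60, 0.0, 64), Note(62, 0.25, 64), Note(64, 0.5, 64)]
romantic = StyleModel(tempo_scale=1.1, accent_boost=16, beat_length=0.5)
performance = romantic.apply(score)
```

The point of the sketch is the separation it shows: the score stays fixed while the style lives entirely in the model object, which is what would make licensing a "personality" as a product conceivable at all.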
Now, apparently, some people are seeing a big business in this: as the article says, "Venture capital firm Intersouth Partners led a $10.7-million round of Series A funding in the company" (the company being Zenph Sound Innovations).
Oh, and of course licensing issues come into play again: new models for licensing the "personality" of a performer are being considered. Want your new composition to be played by Keith Moon? OK, just buy a license to use his playing style...
Read the full article here.