Topic: Feature Request: Clone Piano from Audio Recording

I think this would be a killer feature if the Modartt team is up for a little machine learning.

What if a user could estimate Pianoteq parameters from an audio recording? Not only the intrinsic piano parameters: the model could even make a guess at the effects chain.

This is definitely not a trivial task, but DDSP (differentiable digital signal processing) has been an area of active research for quite a while; see, for example, the paper "Searching For Music Mixing Graphs: A Pruning Approach".

Re: Feature Request: Clone Piano from Audio Recording

It's a really cool idea, I agree.

There is an independent piece of software, created by one of the forum users, that takes a repository of recordings and translates it into a tone mapping for the note edit window. I believe it's an FFT-based script, though I'm not certain of the requirements, so this isn't the first time something like this has been tried. I don't know how readily it could be developed and integrated into PTQ, since so many variables are involved just in how different individual recordings are. You'd need a lot of front-end machine learning just to identify which notes are which and isolate them into a training set that could then be reverse-engineered into parameter instructions. So I doubt there are many turn-key solutions yet, but again, it's definitely an interesting idea.
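Just to illustrate the kind of FFT analysis step that would be involved, here's a minimal sketch (my own toy example, not from the actual script; all names are hypothetical): given one already-isolated note, estimate its fundamental and the relative amplitudes of its harmonics, which is roughly the raw material a tone mapping could be built from.

```python
import numpy as np

def harmonic_profile(samples, sample_rate, n_harmonics=8):
    """Estimate (f0, relative harmonic amplitudes) from one isolated note."""
    window = np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # crude fundamental estimate: strongest bin above 20 Hz
    valid = freqs > 20.0
    f0 = freqs[valid][np.argmax(spectrum[valid])]
    # read the magnitude at the bin nearest each harmonic of f0
    amps = []
    for k in range(1, n_harmonics + 1):
        bin_idx = np.argmin(np.abs(freqs - k * f0))
        amps.append(spectrum[bin_idx])
    amps = np.array(amps)
    return f0, amps / amps.max()  # normalize to the loudest partial

# usage: a synthetic 440 Hz tone with a half-amplitude 2nd harmonic
sr = 44100
t = np.arange(sr) / sr
note = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
f0, amps = harmonic_profile(note, sr)
```

The hard part the post describes (segmenting a real recording into isolated notes in the first place) is exactly what this sketch assumes has already been done.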

On a side note, I wish that more of the AI buzz went into things with more practical research and development uses than making cheating on school assignments easier...  Better AI related to music and music-data analysis (or weather forecasting, biomedical research, etc., etc., etc.) seems to be a much nicer use for the principles behind the tech than investing so heavily in telling people to put glue on pizza just to try a new shortcut towards a deeper share in the advertising market for the entire planet...

Spotify: https://open.spotify.com/artist/2xHiPcCsm29R12HX4eXd4J
Pianoteq Studio & Organteq
Casio GP300 & Custom organ console

Re: Feature Request: Clone Piano from Audio Recording

I like this idea too.

Occasionally I use Melodyne in stand-alone mode: import an audio file (a piano or other instrument), copy its spectral data, then import another file and paste the spectral data into it. The output file is then the performance of one instrument but with largely the sound of the desired instrument. It's not perfect, but you can play around with the level of the effect, and sometimes even a recording of a hummed melody can sound 'interesting' when given the sound of a wind instrument, and so on.

But I love the idea of clashing spectrum profiles around inside Pianoteq, closer to a real-time thing. AI may not even need to be involved if users could, say, drag and drop a source recording into the spectrum panel (as a basic use case). I have posted before about my wish for a spectral mix utility (though that was limited to just the spectrum data), so this is definitely of particular interest to me, especially going beyond that. I just don't know how difficult such a thing would be for Modartt to implement to the satisfaction of users who like doing this kind of thing. But there would likely be extra-mile value, especially with some AI (possibly linked to the existing morph tools somehow) to manage the other internal controls alongside the spectral data. Cool idea!
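To make the spectral-mix idea concrete, here's a toy sketch (my own illustration, not anything actually in Pianoteq or Melodyne) of one classic way to do it: interpolate the FFT magnitudes of two sounds while keeping the first signal's phase, so the performance comes from one source and the tone increasingly from the other.

```python
import numpy as np

def morph_spectra(performance, tone, mix):
    """Blend spectra: mix=0 keeps the performance, mix=1 takes the tone's magnitudes."""
    P = np.fft.rfft(performance)
    T = np.fft.rfft(tone)
    # geometric interpolation of magnitudes (avoids a washed-out linear average)
    mag = np.abs(P) ** (1.0 - mix) * np.abs(T) ** mix
    # keep the performance's phase so timing and articulation are preserved
    return np.fft.irfft(mag * np.exp(1j * np.angle(P)), n=len(performance))
```

In practice you'd apply this per STFT frame rather than over a whole file, but the per-frame operation is the same; a real-time "drag a recording onto the spectrum panel" feature would presumably precompute the tone's average magnitude profile once.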

Pianoteq Studio Bundle (Pro plus all instruments)  - Kawai MP11 digital piano - Yamaha HS8 monitors

Re: Feature Request: Clone Piano from Audio Recording

Qexl wrote:

Occasionally I use Melodyne in stand-alone mode: import an audio file (a piano or other instrument), copy its spectral data, then import another file and paste the spectral data into it. The output file is then the performance of one instrument but with largely the sound of the desired instrument.

Sounds interesting. I think this is sometimes called timbre transfer (or, more broadly, style transfer). Google's Magenta team has put out a lot of related research and created DDSP VST plugins that transform an input signal into one of a preset list of instrument models.

If you want to get into the weeds, you can try training a model on your own instruments with the DDSP Timbre Transfer Colab. I have tried it myself with middling results; it needs a lot of training data. Of course, this line of inquiry is very similar to voice cloning, which has gone from impossible to utterly ubiquitous in the past four years.