It's a really cool idea, I agree.
There is an independent piece of software, created by one of the users of the forums, that takes a repository of recordings and translates it into a tone mapping for the note edit window. I believe it's an FFT-based script, though I'm not certain of the requirements. So this isn't the first time it's been tried. I don't know how readily something like this could be developed and integrated into PTQ, since there are so many variables involved just from how different individual recordings are. You'd need a lot of front-end machine learning just to identify which notes are which and isolate them into a training set, which would then have to be reverse engineered into parameter instructions. So I doubt there are many turn-key solutions yet, but again, it's definitely an interesting idea.
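Just to give a feel for what's involved, here is a rough sketch in Python of the kind of per-note FFT analysis such a tool would need as a first step, before anything could be mapped to note edit parameters. This is purely hypothetical on my part (I don't know what the actual script does internally), and the file name, note pitch, and function are just made up for illustration:

```
# Hypothetical sketch: read an isolated-note recording, take an FFT, and
# estimate the relative levels of the first few partials. The file name,
# note frequency, and any mapping to note-edit parameters are assumptions.
import numpy as np
from scipy.io import wavfile

def partial_levels(path, f0, n_partials=8):
    """Return the level in dB (relative to partial 1) of the first n partials."""
    rate, data = wavfile.read(path)        # e.g. "C4_sample.wav" (hypothetical)
    if data.ndim > 1:
        data = data.mean(axis=1)           # fold stereo down to mono
    window = np.hanning(len(data))
    spectrum = np.abs(np.fft.rfft(data * window))
    freqs = np.fft.rfftfreq(len(data), 1.0 / rate)

    levels = []
    for k in range(1, n_partials + 1):
        # look in a narrow band around each expected partial frequency
        band = (freqs > k * f0 * 0.97) & (freqs < k * f0 * 1.03)
        levels.append(spectrum[band].max() if band.any() else 0.0)

    ref = levels[0] or 1.0
    return [20 * np.log10(max(lvl, 1e-12) / ref) for lvl in levels]

# Example (hypothetical file and pitch): middle C at ~261.63 Hz
# print(partial_levels("C4_sample.wav", 261.63))
```

And that's just one note from one recording; the hard part would be doing this reliably across a whole repository of very different recordings and turning the results into parameter instructions.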
On a side note, I wish more of the AI buzz went into things with practical research and development uses rather than making it easier to cheat on school assignments... Better AI for music and music-data analysis (or weather forecasting, biomedical research, etc., etc., etc.) seems like a much nicer use of the principles behind the tech than investing so heavily in telling people to put glue on pizza, all as a shortcut towards a deeper share of the advertising market for the entire planet...
Spotify:
https://open.spotify.com/artist/2xHiPcCsm29R12HX4eXd4J
Pianoteq Studio & Organteq
Casio GP300 & Custom organ console