Olivier W wrote:Hi everyone,
Yamaha hybrid pianos such as the N1X use Polyphonic Aftertouch to control the damper duration, based on the exact key-release position and speed.
In other words, the way the damper comes back is captured in real time and translated into detailed MIDI messages.
Here is a short video showing the relation between different release speeds/positions and the MIDI messages generated:
https://youtu.be/f8BN2OpVljc
Currently, Pianoteq already has a Note Off Velocity curve, which is great.
It would be fantastic if Pianoteq 9 could also offer an option to drive this curve with Polyphonic Aftertouch instead of Note Off Velocity, for users with hybrid instruments that send this data.
This would make it possible to fully exploit the Yamaha hybrid MIDI implementation, resulting in much greater realism and precision in damper release behavior.
Regards,
Olivier F.
I've watched this carefully and looked at the Logic script you wrote, and it still appears to me that there are insufficient MIDI data to construct a full range of note-off velocities. Aftertouch signals seem to be sent only in the slow-to-very-slow release range, so releases in the moderate-to-fast range have no data from which to derive a velocity (all you get is the fixed '64' placeholder the instrument sends regardless of speed, with no aftertouch data to compensate). You can use the slow-release data to handle very long release times, but you still have nothing from which to derive varying levels of articulation (for example, different styles of staccato from gentle to sharp).
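To make the limitation concrete, here is a minimal sketch of the kind of mapping a Logic-style script could do. The message layout, the fixed '64' placeholder, and the inversion (higher aftertouch value = slower release = lower release velocity) are all assumptions for illustration, not the N1X's documented behavior:

```python
# Hypothetical sketch: derive a note-off velocity from the polyphonic
# aftertouch a hybrid piano sends on slow key releases. The byte layout
# and the 127 - x mapping are assumptions, not Yamaha's documented scheme.

POLY_AT = 0xA0      # polyphonic aftertouch status byte, channel 1
NOTE_OFF = 0x80     # note-off status byte, channel 1

def release_velocity(messages, note):
    """Scan a raw (status, data1, data2) MIDI stream for `note`.

    If poly-aftertouch for the note precedes its note-off (slow release),
    map the last aftertouch value to a release velocity. Otherwise keep
    the note-off velocity as received: for moderate-to-fast releases the
    instrument only sends the fixed placeholder, so there is nothing to
    refine it with.
    """
    last_at = None
    for status, data1, data2 in messages:
        if status == POLY_AT and data1 == note:
            last_at = data2                  # key-release position/speed
        elif status == NOTE_OFF and data1 == note:
            if last_at is not None:
                return 127 - last_at         # assumed inverse mapping
            return data2                     # placeholder passes through
    return None                              # note never released
```

Run on a "slow release" stream (aftertouch then note-off) this yields a graded velocity; on a "fast release" stream (note-off only) it can only return the placeholder, which is exactly the gap described above.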
Which brings me back to my earlier conclusion (originally reached on technically incorrect grounds regarding aftertouch) that the necessary data just aren't broadcast. The moderate-to-fast release range and all its nuances are just as important as very slow releases, and in my use case at least, having one without the other would be of no use. In fact, at least in classical playing, I'd argue that the moderate-to-fast range is the more critical in terms of nuance and accurate playability.
It's even audible in your video: you demonstrate audible differences in articulation with the inbuilt processor that have no corresponding MIDI data to distinguish them. So the question is: what is being used internally that isn't being broadcast? And if it is being broadcast, which bits are being used to represent it?
Last edited by thesloth (05-09-2025 18:57)