Topic: Usage scenarios for embedded/headless Pianoteq
The ARM developments for P6 really excite me (as I posted back in 2012: https://www.forum-pianoteq.com/viewtopic.php?id=2592).
I have looked over the experiments people are running, and the discussion of various small single-board systems. Zynthian is really exciting.
All this got me thinking: What are the actual usage scenarios for embedded/headless Pianoteq? Understanding this, I think, will help direct and focus efforts. From my reading of the forum posts, here are the usage scenarios I can gather:
1. Performance: A small, quiet form factor that sounds good and is reliable for live performance sound generation. In this scenario the system must generate tones good enough for live sound and player monitoring; sample rate and polyphony matter less than reliability.
This suggests a three- or four-mic setup: two mics at the keyboard for the musician, and one or two mics to capture the directional perspective for the audience and/or bandmates (in the case of a grand piano), routed separately. Bandmates and the audience will likely hear even an acoustic instrument as a mono sound source; it is really only stereo for the player. So that would be a stereo route for player monitoring, plus one or two separate mono routes for bandmates/audience. I think only a really picky bandmate would want their own directional mic simulation.
Presence and EQ that sit well in the mix matter more here than ambience, sample rate, and polyphony.
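The stereo-for-player, mono-for-everyone-else routing above boils down to a simple downmix on the audience/bandmate send. A minimal sketch, assuming plain sample lists (in practice this would live in the host's mixer or JACK routing, not application code):

```python
def mono_downmix(left, right, gain=1.0):
    """Sum a stereo pair down to a single mono feed.

    A plain (L+R)/2 average keeps the level comparable to either
    channel; the gain trim leaves headroom for the front-of-house mix.
    """
    return [gain * (l + r) / 2.0 for l, r in zip(left, right)]

# Player monitoring keeps the full stereo route; the audience/bandmate
# send is just the summed signal.
left = [0.5, -0.25, 1.0]
right = [0.5, 0.25, 0.0]
print(mono_downmix(left, right))  # → [0.5, 0.0, 0.5]
```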
2. Solo Practice/Recording: An "instant on" way to pick up an instrument and practice, without the added distractions of a full computer/desktop environment. This should mimic the workflow of an acoustic instrument as much as possible, "ready and waiting" for the player. Ambience and polyphony may matter more for sound quality here than presence, EQ, and sample rate. For group practice, the "Performance" scenario above applies more directly.
For solo recording, scenario 3 (below, MIDI performance capture) handles any sound quality issues.
3. MIDI performance capture: A means of capturing good performances from either scenario above as MIDI, for later rendering under ideal, non-real-time conditions into an optimal audio file, probably on a full computer (export the MIDI, import it to a full-size computer running Pianoteq, render the MIDI file to WAV/FLAC with optimal recording settings). Even if the initial performance was played at low quality settings, it can still be rendered at high quality to withstand repeated listening, with ample headroom and depth for any post-processing.
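The offline render step can be scripted against Pianoteq's command-line interface. The flag names below (`--headless`, `--midi`, `--flac`, `--preset`, `--rate`) are my recollection of that CLI and should be checked against the output of the binary's `--help` on your installation; treat this as a sketch:

```python
import subprocess

def render_command(midi_path, out_path, preset, rate=48000):
    """Build the offline-render invocation: MIDI in, FLAC out.

    Flag names are assumptions about the Pianoteq CLI; verify them
    with `--help` before relying on this.
    """
    return [
        "pianoteq", "--headless",
        "--midi", midi_path,    # performance captured live on the embedded box
        "--flac", out_path,     # rendered at leisure, not in real time
        "--preset", preset,
        "--rate", str(rate),    # full quality, unlike the live settings
    ]

cmd = render_command("gig.mid", "gig.flac", "Steinway D Classical")
# subprocess.run(cmd, check=True)  # uncomment on a machine with Pianoteq installed
```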
I don't think these three cases are necessarily mutually exclusive; they probably point to a single setup/optimization path for Pianoteq on small form factors. They could be the basis for default settings in Pianoteq Stage, since Stage's settings are less tweakable. The Standard (which I have) and Pro versions would also benefit from such scenario-based defaults, even if only as starting points for further modification.
Sound quality: Past a fairly moderate quality setting, no one will hear the difference "in the moment." What is this threshold? I suspect roughly a 25-30 kHz sample rate and 16-24 note polyphony (maybe a little more, say 30 notes, for complex solo practice). The third scenario, MIDI capture, completely mitigates any real-time processing shortcomings of the first two, as long as the embedded option allows MIDI data capture and later rendering to audio. So the first two scenarios simply need to sound and feel "good enough," like an acoustic/analog instrument, to both the musician in the moment (practice and performance) and the audience (performance). A lot of this has more to do with speaker placement than internal sound quality. (For example: https://www.forum-pianoteq.com/viewtopic.php?id=4625)
Live recording: Even in a live band recording, the mix of instruments and live sound will likely mean the lower audio quality of the embedded Pianoteq will not be noticeable. Where piano is the focus (e.g., a piano recital or trio), an acoustic instrument is likely in use. Even if not, properly rendered Pianoteq audio based on the MIDI performance could still be mixed in after the fact in a multi-track recording. The piano sound recorded in real time does not need to be muted from the mix entirely, only "turned down." This works in any multi-track recording process where individual instruments are mic'd. Where instruments share mics, or in a simple stereo recording, audio quality already suffers enough in the recording chain that substituting higher-quality Pianoteq renders may not make sense anyway (they would stick out in the mix).
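The "turned down, not muted" idea is just a weighted sum per track. A toy sketch with plain sample lists (a real mix would run over full audio buffers in the DAW, not application code):

```python
def blend(live_bleed, rendered, bleed_gain=0.2):
    """Mix the high-quality offline render with the attenuated
    live-recorded piano track, sample by sample.

    The live track stays at low level so the room "glue" survives
    but its lower audio quality doesn't dominate the mix.
    """
    return [bleed_gain * a + b for a, b in zip(live_bleed, rendered)]

print(blend([1.0, -1.0], [0.5, 0.5], bleed_gain=0.5))  # → [1.0, 0.0]
```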
4. Parameter access: The last thing I can think of is the question of how to access and change Pianoteq software parameters as needed in a headless setup.
I think there is also a fantastic opportunity to develop Android/iOS app front-ends for modifying headless Pianoteq settings, perhaps over a Bluetooth or wifi connection. But networking and live sound production have never mixed well in my book, so maybe this is not practical.
Whatever the interface (Android/iOS app, embedded touchscreen, ssh, etc.), the parameters most important to each usage scenario should be the first and easiest to modify.
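For the remote-control idea, newer Pianoteq builds expose a JSON-RPC server that an app front-end could talk to. The method name (`setParameters`) and payload shape below are assumptions to verify against the Pianoteq JSON-RPC documentation; the port and URL are placeholders:

```python
import json
import urllib.request

def set_param_request(name, value, url="http://localhost:8081/jsonrpc"):
    """Build (but do not send) a JSON-RPC 2.0 request that changes one
    Pianoteq parameter. Method and field names are assumptions about
    the Pianoteq API — check its JSON-RPC docs.
    """
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "setParameters",  # assumed method name
        "params": {"list": [{"id": name, "text": str(value)}]},
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

# An app front-end would send this with urllib.request.urlopen(req)
# once a Pianoteq instance is actually serving on that port.
req, payload = set_param_request("Volume", -6)
print(payload["method"])  # → setParameters
```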