Topic: Finite difference-based sound synthesis using GPU

Any inspiration for Modartt, or just nothing new under the sun?

http://queue.acm.org/detail.cfm?id=2484010
http://unixlab.sfsu.edu/~whsu/FDGPU/

Re: Finite difference-based sound synthesis using GPU

Umm. Can you boil this down to a sentence or two?

Last edited by Jake Johnson (11-05-2013 18:31)

Re: Finite difference-based sound synthesis using GPU

Not me. I have no clue about these things. It was just a question out of curiosity rather than a suggestion. I hope the Modartt folk can tell us whether this is in any way related or relevant to their work, or whether there is any interesting new idea in it... If nothing else, maybe just the idea of harnessing the GPU, as discussed here some time ago?

Re: Finite difference-based sound synthesis using GPU

The MP3s linked at the bottom of your second reference ( http://unixlab.sfsu.edu/~whsu/FDGPU/ ) are good, particularly the multiple-strike ones, where someone hits different parts of the controller.

I see that this same page offers a download of the source code of the program that they are using, but it is for Macs only. Alas for me. Looking forward to the reports.

Last edited by Jake Johnson (11-05-2013 18:50)

Re: Finite difference-based sound synthesis using GPU

The NVIDIA CUDA system used to be used (huh, wotta pleonasm! make that used to be usable) on HP laptops and Windows systems for non-immediate graphics crunching, such as fast format conversion like Blu-ray-to-MP4, before HP was forced by economics to switch away from NVIDIA to that lamer Canadian company I won't name. The conversion speedups were on the order of 10x, so in those days the added cost of the GPU was very worthwhile to me, though I now do big conversions much less often.

But to the topic. The point is that Windows systems that haven't dropped NVIDIA the way HP has, Sony VAIO perhaps, may still use NVIDIA cards, so this tech would be available to them, though the present developers seem unaware of it. The Linux link here points to SourceForge, and a bit of Googling with the key terms plus that name might turn up more Windows availability than this article suggests.

Though that first article looks like a pretty good method primer, regardless of OS.

Re: Finite difference-based sound synthesis using GPU

Yes, the recordings do sound good.

Just btw, there's a competitor to NVIDIA's Cuda called OpenCL:
http://www.theinquirer.net/inquirer/new...-and-linux

Greg.

Re: Finite difference-based sound synthesis using GPU

So, essentially, the idea is to use the processor(s) on the video card along with the computer's normal CPU for parallel processing, just as in games with intensive animation? The worry for a developer, then, would be that potential users would have to own a video card that follows the NVIDIA standards the program was designed for? (My video card follows that standard, and I know that many, many others do, but I don't know what percentage of video cards do.)

Or could this be an option for a program, much as using conventional multiple processors is an option: Use the NVIDIA card if it is available?
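For what it's worth, the "use the NVIDIA card if it is available" idea is usually just a runtime check with a CPU fallback. Here's a hypothetical sketch assuming the PyCUDA Python binding; OpenCL or any other GPGPU API would work the same way, and the function names below are made up for illustration:

```python
# Sketch of optional GPU acceleration: probe for a CUDA-capable card at
# startup and fall back to the CPU if none is found.  PyCUDA is just one
# possible binding; this is an illustration, not how any particular
# synthesizer actually does it.
try:
    import pycuda.driver as cuda
    cuda.init()
    HAVE_CUDA = cuda.Device.count() > 0
except Exception:          # PyCUDA missing, or no NVIDIA driver/card
    HAVE_CUDA = False

def render(num_samples):
    """Return num_samples of audio, using whichever path is available."""
    if HAVE_CUDA:
        # ... launch the finite-difference kernel on the GPU here ...
        raise NotImplementedError("GPU path left as a sketch")
    # CPU fallback: silence, standing in for the scalar implementation.
    return [0.0] * num_samples

print("GPU available:", HAVE_CUDA)
```

So yes, in principle a program can treat the GPU the same way it treats extra CPU cores: an optional speedup, not a requirement.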

(I hate to admit to using it, but this wikipedia explanation and history of "GPU" helped me to understand what was being discussed: http://en.wikipedia.org/wiki/Graphics_processing_unit )

Last edited by Jake Johnson (12-05-2013 13:22)

Re: Finite difference-based sound synthesis using GPU

http://en.wikipedia.org/wiki/GPGPU

Hard work and guts!

Re: Finite difference-based sound synthesis using GPU

We cross-posted. I just added a link to my post. Note that these are different articles; yours is more detailed.

Re: Finite difference-based sound synthesis using GPU

Modelling the motion of the body of an instrument with finite elements and time-steps really is the ultimate in physical modelling, but it is very computationally expensive, hence the need for GPGPU. The parameters you need are the instrument geometry and the physical properties of the materials.

On the other hand, "modal" synthesis, where the vibrational modes of the instrument are known along with their characteristic decay and coupling, is cheaper and more practical. But you need to know lots of parameters to feed into the model.

A nice combination would be to use non-realtime finite elements and differences to model your chosen instrument and extract the modal parameters as a function of strike position, intensity, etc. It would be a kind of "rendering" of the instrument into parameterized form, which can then be played more cheaply in realtime. It still might take an overnight job using GPGPU to extract everything you want! Of course, you could also render the thing out as a load of traditional samples and fill a few hard disks.
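To make the two halves of that concrete, here is a toy sketch: first a finite-difference time-step of the 1D lossless wave equation (a plucked string), then a modal resynthesis from a few decaying sinusoids. Everything here, grid size, Courant number, mode frequencies and decays, is made up for illustration and has nothing to do with the linked papers' actual schemes or parameters:

```python
import math

# --- 1. Finite-difference "rendering" (the expensive, parallelizable part) ---
# Standard explicit scheme for the 1D wave equation:
#   u[i, n+1] = 2u[i, n] - u[i, n-1] + lam^2 (u[i+1, n] - 2u[i, n] + u[i-1, n])
# where lam = c*dt/dx must be <= 1 for stability.  On a GPU, each grid
# point i would be updated by its own thread; here it's a plain loop.

N = 64            # spatial points along the string (fixed ends)
lam = 1.0         # Courant number, at the stability limit

curr = [0.0] * N
peak = N // 4
for i in range(N):  # "pluck": triangular initial displacement
    curr[i] = i / peak if i <= peak else (N - 1 - i) / (N - 1 - peak)
prev = curr[:]      # start from rest (zero initial velocity)

pickup = N // 8     # audio output = displacement at a fixed pickup point
output = []
for step in range(200):
    nxt = [0.0] * N                     # endpoints stay clamped at 0
    for i in range(1, N - 1):
        nxt[i] = (2 * curr[i] - prev[i]
                  + lam * lam * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
    prev, curr = curr, nxt
    output.append(curr[pickup])

# --- 2. Modal resynthesis (the cheap realtime part) ---
# A handful of exponentially decaying sinusoids.  In the "render then
# play" idea these (freq, decay, amplitude) triples would be extracted
# from the simulation above; here they are simply invented.
SR = 44100
modes = [(220.0, 3.0, 1.0), (440.0, 2.0, 0.5), (661.5, 1.5, 0.3)]
tone = []
for n in range(1000):
    t = n / SR
    tone.append(sum(a * math.exp(-t / tau) * math.sin(2 * math.pi * f * t)
                    for f, tau, a in modes))
```

The contrast in cost is the whole point: the FD loop touches every grid point every sample (and a 3D body model has vastly more points), while the modal loop is just a handful of oscillators per note.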

Re: Finite difference-based sound synthesis using GPU

So, is the "modal synthesis" what Pianoteq is using now?

Re: Finite difference-based sound synthesis using GPU

I think Pianoteq is cleverly using a bit of everything.

Re: Finite difference-based sound synthesis using GPU

wanthalf wrote:

So, is the "modal synthesis" what Pianoteq is using now?

I don't think so.

Hard work and guts!