FAQ
"Will you release this?"
Initially this was the plan, but I've taken on a lot of client work, and the bills do need to get paid. I simply don't have the time or means to develop this into a commercial product and maintain it. The project is currently pretty much on hold, but I'm considering doing a walk-through of the systems and how it all works at some point.
"How does this differ from Steam Audio?"
Steam Audio is a much more extensive library, and includes HRTFs. So, if you would like to see a plug-and-play extension, I'd suggest contributing to
Stechyo's godot-steam-audio repository. A Steam Audio extension is the most straightforward way to bring realistic audio spatialisation to Godot, and Stechyo has done some excellent work to make that a reality!
"Where can I find updates on this project?"
When/if I find more time to make progress on this, my
Bluesky is probably the first place I'd post about it.
I'd love to do another showcase on my
YouTube page at some point, but these showcases are a big time investment to create.
"Why don't you just make this open source?"
Mostly because it's programmed really shoddily, and it's not in a state where implementing it into any project would be a good idea. This is why I'm considering just doing a walk-through of the systems instead.
"How does this work?"
This is a difficult question to answer because the approach I'm using is constantly changing.
In order to calculate acoustic properties, an audio source first needs to figure out what the surrounding geometry is like. For this, the audio source does a series of spread-out ray-casts. These ray-casts sample the geometry: when they hit a surface, an estimated surface area and volume are calculated, and acoustic properties are gathered based on the acoustic material of the hit surface. These ray-casts represent a very simplified version of audio waves. By simulating specular bounces, we can gather more data about a space, which gives more accurate and stable results.
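To make the sampling step concrete, here is a minimal sketch of the idea in Python. It is not the project's actual code (which would use Godot's physics ray-casts against arbitrary level geometry); it assumes a simple axis-aligned box room, a hypothetical per-material absorption table, and uniformly distributed ray directions, then averages hit distance and absorption over all samples:

```python
import math
import random

# Hypothetical absorption coefficients per material (illustrative values only).
WALL_ABSORPTION = {"concrete": 0.02, "carpet": 0.30, "curtain": 0.45}

def random_unit_vector(rng):
    """Uniformly distributed direction on the unit sphere."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def ray_box_hit(origin, direction, box_min, box_max):
    """Return (distance, wall_index) for the first wall a ray hits from
    inside an axis-aligned box. wall_index: 0..5 = -x, +x, -y, +y, -z, +z."""
    t_hit, wall = math.inf, -1
    for i in range(3):
        d = direction[i]
        if d > 1e-9:
            t = (box_max[i] - origin[i]) / d
            if t < t_hit:
                t_hit, wall = t, 2 * i + 1
        elif d < -1e-9:
            t = (box_min[i] - origin[i]) / d
            if t < t_hit:
                t_hit, wall = t, 2 * i
    return t_hit, wall

def sample_room(origin, box_min, box_max, wall_materials, n_rays=256, seed=1):
    """Spread ray-casts out from the source and accumulate per-hit data.
    wall_materials is a list of 6 material names, one per box wall."""
    rng = random.Random(seed)
    total_dist = 0.0
    total_abs = 0.0
    for _ in range(n_rays):
        dist, wall = ray_box_hit(origin, random_unit_vector(rng), box_min, box_max)
        total_dist += dist
        # Gather acoustic properties from the material of the hit surface.
        total_abs += WALL_ABSORPTION[wall_materials[wall]]
    return {
        "mean_free_path": total_dist / n_rays,   # proxy for the size of the space
        "mean_absorption": total_abs / n_rays,   # proxy for how "dead" the space is
    }
```

A real implementation would also trace specular bounces (reflecting each ray off the hit surface and continuing) to sample parts of the space not directly visible from the source, which is where the extra accuracy and stability mentioned above comes from.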
All the data gathered from these samples is used to compute acoustic properties such as reverb length, early reflections, and, down the line, more accurate attenuation/occlusion. To finally auralize this dataset, a combination of these acoustic properties and other relevant data (such as the listener's position relative to the sound source) drives a set of audio filters.
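One plausible mapping from sampled geometry to filter parameters (not necessarily the one this project uses) relies on two standard acoustics results: Sabine's reverberation formula for reverb length, and the inverse-distance law for attenuation. The function names below are hypothetical:

```python
import math

def sabine_rt60(volume_m3, surface_m2, mean_absorption):
    """Sabine's formula: RT60 = 0.161 * V / (S * a_mean), the time in
    seconds for sound in the room to decay by 60 dB. A larger, more
    reflective room yields a longer reverb tail."""
    absorption_area = surface_m2 * mean_absorption
    return 0.161 * volume_m3 / absorption_area

def distance_gain_db(distance_m, ref_distance_m=1.0):
    """Inverse-distance law: level drops roughly 6 dB per doubling of the
    listener's distance from the source, relative to a reference distance."""
    return -20.0 * math.log10(max(distance_m, ref_distance_m) / ref_distance_m)
```

For example, a 10 m cube room (V = 1000 m³, S = 600 m²) with a mean absorption of 0.3 gives an RT60 of roughly 0.9 s, which could be fed straight into a reverb effect's room-size/decay parameters, while `distance_gain_db` would drive the source's volume fader.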
"Does this implement HRTF?"
Nope, HRTFs are way out of my wheelhouse.