Hi Alex,
I would direct you to this link as an example of 3D depth mapping with a visual to sound conversion.
https://www.seeingwithsound.com/binocular.htm
I reviewed your page, and I think your method is interesting because of its high speed. I wonder whether you will be able to achieve that on SBCs. What were the specs of the system you used? Have you tried it with actual dual-camera input? What kind of device are you using to generate depth information?
As the efforts here are all open source, just adhere to the licence requirements and you can take what you like.
If you come up with something great and want to share it, we would love to add it to the growing stack.
Since our goal is to create the most economical device, I do foresee one problem: generating depth information generally requires two cameras. Cameras are an expensive component, and adding one would add significant cost.
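To illustrate why the second camera matters, here is a minimal sketch of the kind of stereo block matching a two-camera setup enables. The function name, patch size, baseline, and focal length are all illustrative assumptions, not part of our stack:

```python
import numpy as np

def disparity_1d(left, right, patch=5, max_disp=16):
    """For each pixel on a scanline, find how far its patch has
    shifted between the left and right images (the disparity),
    using sum-of-absolute-differences block matching."""
    half = patch // 2
    disp = np.zeros(len(left), dtype=int)
    for x in range(half, len(left) - half):
        ref = left[x - half:x + half + 1]
        best, best_cost = 0, np.inf
        for d in range(min(max_disp, x - half) + 1):
            cand = right[x - d - half:x - d + half + 1]
            cost = np.abs(ref - cand).sum()
            if cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp

# Synthetic scanlines: the "right" view is the left shifted by
# 4 pixels, which is what a nearby object looks like to a stereo pair.
rng = np.random.default_rng(0)
left = rng.random(64)
right = np.roll(left, -4)
d = disparity_1d(left, right)

# Depth follows from disparity: depth = focal_length * baseline / d.
# Focal length (pixels) and baseline (metres) here are made-up values.
focal_px, baseline_m = 700.0, 0.06
depth_m = np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1), np.inf)
```

The point is that disparity, hence depth, is only recoverable by comparing two viewpoints; a single camera gives you no baseline to triangulate against.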
The impression I've gotten from the speed of the Raspberry Pis is that, as technology improves, it will make efforts like yours achievable. In fact, the speed boost going from the Pi 2 to the Pi 3 has completely bridged the gap for the system we've made: the lag seen on the earlier hardware was essentially eliminated, with the exception of some startup lag.
Unfortunately, the Pi 3 has utterly destroyed battery life. The power/performance tradeoffs are never fun.
At this time, I would actually prefer to produce with the Pi 2 until things mature more with the Pi 3.
We have certainly made the Raspberry Pi a wearable device, and would be happy to share.
For more detailed information, I would refer you to the Git repo, which has everything from the software to the hardware designs, including custom electronics and 3D-printable case designs.
https://github.com/aftersight/After-Sight-Model-1
As a final note, I have already looked into cheap 3D depth-map-generating devices, and found that it boils down to used Kinect sensors. The old version uses structured light, and the new version uses time of flight. Both have outlandish power requirements (i.e., the user would need to wear literally pounds of batteries to get a useful lifetime).
I have seen some compact, low-power devices starting to come out, but the price tag appears to be >$500 USD for the sensor alone, without any other components, and the power draw is still high (about 500 mA at 5 V, i.e., 2.5 W).
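For scale, here is a back-of-envelope runtime estimate using that quoted draw. The battery pack (two 3.7 V, 3000 mAh Li-ion cells) is an assumed example, not a real design:

```python
# The 500 mA at 5 V figure is from the sensor spec quoted above;
# the battery pack below is an illustrative assumption.
sensor_power_w = 0.5 * 5.0    # 500 mA at 5 V -> 2.5 W
battery_wh = 2 * 3.7 * 3.0    # two 3.7 V, 3000 mAh cells ~= 22.2 Wh
runtime_h = battery_wh / sensor_power_w
print(f"{runtime_h:.1f} hours for the sensor alone")
```

And that is before the SBC, camera, and audio output draw anything, so the wearable's real runtime would be considerably shorter.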
If cheap depth-generating devices become ubiquitous, or if people can find economical dual-camera glasses or ultracompact 3D cameras, my interest level will rise substantially.
Those are my thoughts. I like what you are doing, and if you can achieve the ultrafast refresh rates you demonstrate, I think you have a winning idea.