Pranav,
running the raspistill process to generate a single frame in a loop introduces a ~0.8 second startup delay on top of the processing and sound playback. This is why I use the timelapse function: it incurs that penalty only once, when the raspistill process starts. After that, frames can be grabbed every ~100-200 ms for extended periods of time.
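To make the timelapse setup concrete, here is a minimal Python sketch of how the capture process could be launched. The exact flag values are assumptions for this setup: -t sets the total run time in ms (an hour here), -tl the frame interval, -n suppresses the preview, and -cfx 128:128 is the colour-effect trick that makes raspistill emit grayscale (see item 1 below); the output pattern uses the %04d sequential-numbering convention.

```python
# Sketch: build (and optionally launch) a raspistill timelapse command.
# Paths, run time, and the grayscale colour-effect are assumptions,
# not tested values from the project.
import subprocess

def raspistill_command(out_pattern="/dev/shm/picture%04d.jpg",
                       interval_ms=300, grayscale=True):
    cmd = ["raspistill", "-n",          # no preview window
           "-t", "3600000",             # total run time: one hour
           "-tl", str(interval_ms),     # one frame every interval_ms
           "-o", out_pattern]           # %04d -> sequential filenames
    if grayscale:
        cmd += ["-cfx", "128:128"]      # U=V=128 -> black and white
    return cmd

# On the Pi, the long-running capture process would be started with:
# proc = subprocess.Popen(raspistill_command())
```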
I am unfamiliar with gdb or, really, with using C in general. I find the language finicky, and everything I've ever tried to do in it has not worked.
As for syncing up the processes, it won't happen easily, because raspistill is out of my control; however, it is still the single most efficient way to generate image captures without bogging down the CPU.
If you use raspistill and capture to a single filename, the file is actually unavailable for large portions of the time that the C routine is looking for it.
However, raspistill will capture to sequentially numbered filenames if you put a %04d format specifier in the output filename.
What I need to do is take the most recently generated complete file and have the C code process it.
My plan to achieve this is to have the Python loop I made watch /dev/shm and rename the most recently completed file to /dev/shm/picture0001.jpg.
The C process will look for /dev/shm/picture0001.jpg. After loading the file, but before generating the soundscape, the C process will drop down and delete all files in /dev/shm. This lets raspistill keep populating the directory with new images (picture0002.jpg through picture9999.jpg) while the soundscape is being played back. As long as the capture frequency is faster than the sound playback, this should be golden, so I am setting the timelapse to 300 ms for testing purposes.
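The Python-side rename step could look something like this sketch. One assumption worth flagging: since the newest numbered file may still be mid-write by raspistill, the sketch promotes the second-newest file instead, which is guaranteed complete. The directory and filenames match the plan above but are otherwise placeholders.

```python
# Sketch: promote the newest *completed* capture in /dev/shm to the
# fixed name the C process polls for. Filenames and directory are
# assumptions matching the plan described above.
import glob
import os

def promote_latest(shm_dir="/dev/shm", target="picture0001.jpg"):
    # Zero-padded names sort correctly as plain strings.
    caps = sorted(glob.glob(os.path.join(shm_dir, "picture[0-9]*.jpg")))
    caps = [c for c in caps if os.path.basename(c) != target]
    if len(caps) < 2:
        return None                 # nothing safely complete yet
    latest_done = caps[-2]          # newest file may still be mid-write
    dest = os.path.join(shm_dir, target)
    os.replace(latest_done, dest)   # atomic on the same filesystem
    return dest
```

os.replace is atomic within one filesystem (and /dev/shm is a single tmpfs), so the C process will only ever see either the old picture0001.jpg or the new one, never a half-renamed file.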
This will make the python script, raspistill, and the C process play nicely.
Now, to address the C process.
You indicated that the majority of the time is spent on the calculations that prepare the sound file.
I've identified some areas for improvement already. I don't know how to implement all of these yet, but some I do know how to do.
1. raspistill can make the captures in grayscale using GPU functions, so the C process can stop converting to grayscale. The GPU will be faster, and this will free up more CPU time for sound generation. I will implement this.
2. The sound generation currently reports that it is making a 16-bit file. While I understand that 16-bit sound would provide advantages to the listener over 8-bit, I have also found, in forum after forum, that the rPi's audio output is effectively limited to 48 kHz at about 11 bits. If the sound routines can be made partially 'lo-fi' to match the rPi, the CPU will spend less time creating the sound file. I don't know how to implement this yet, but it looks like the nearest-term way to reduce CPU load and thereby increase playback frequency.
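As an illustration of the 'lo-fi' idea in point 2, here is a sketch of requantizing 16-bit signed samples down to 8-bit unsigned (the WAV convention for 8-bit audio). This is the principle only, not the project's actual sound routine, and whether the savings show up depends on where the C code spends its time.

```python
# Sketch: requantize 16-bit signed PCM to 8-bit unsigned PCM,
# trading dynamic range for less data to compute and write.
# Illustrative only; not the project's sound-generation code.
def to_8bit(samples16):
    # 16-bit signed (-32768..32767) -> 8-bit unsigned (0..255):
    # shift to unsigned range, then keep the top 8 bits.
    return bytes(((s + 32768) >> 8) for s in samples16)
```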
There are other things about the C process that I want to look at. I want it to be headless (i.e. no keyboard interaction and no screen output or interference), and I want it to run constantly in the background.
After these optimizations have been made, I want to see how things round out. Can we get the playback interval down from every 5-7 seconds to once every 3 seconds?
I would be over the moon if we got there, and it would be a huge milestone on the way to a once/sec frequency.
Last but not least:
http://raspberrypi.stackexchange.com/qu ... nt-support
I need to confirm that the hard-float ABI is engaged, and that the math being done in the C process is taking advantage of it. If it isn't, then this is also key to making things sing.
Here is how to check:
http://www.raspberrypi.org/phpBB3/viewt ... 33&t=25350
but I won't know for a while due to obligations.