pmaersk wrote:The camera component claims to support RGBA and BGRA (among others), but only RGBA seems to come out of it, so what a component claims does not always hold true.
It should hold true.
The A from the camera will always be 0, but it is a 32bit packing. I'm fairly certain that I checked that through V4L2.
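One way to check what the camera actually advertises is to enumerate its capture formats through V4L2. A minimal sketch, assuming the camera appears as /dev/video0 on a Linux system:

```c
/* Hedged sketch: list the pixel formats a camera advertises via V4L2.
   Assumes /dev/video0 is the camera device node. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open /dev/video0"); return 1; }

    struct v4l2_fmtdesc fmt;
    memset(&fmt, 0, sizeof fmt);
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    /* VIDIOC_ENUM_FMT walks the driver's format list by index. */
    while (ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0) {
        /* The pixelformat is a fourcc, e.g. 'RGBA', 'BGRA', 'YU12'. */
        printf("%c%c%c%c: %s\n",
               fmt.pixelformat & 0xff,
               (fmt.pixelformat >> 8) & 0xff,
               (fmt.pixelformat >> 16) & 0xff,
               (fmt.pixelformat >> 24) & 0xff,
               fmt.description);
        fmt.index++;
    }

    close(fd);
    return 0;
}
```

If BGRA is genuinely supported it should show up in that list; if only RGBA appears, that matches what you're seeing out of the component.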
pmaersk wrote:So image_decode can only decode compressed images. What about the resize component (ports 60 and 61)? I read somewhere that that module can convert image formats. I hope to be able to use it to convert camera RGBA frames to BGRA, but I haven't figured out how yet.
resize can convert between some formats, but it's not a great solution as it will also be trying to resize at the same time.
video_splitter may be a better bet as it can do some conversions - I couldn't give you a list off the top of my head though.
None of these help you much though, as they'll all still deliver images that you'll try to render through video_render, and that can't cope with per-pixel alpha. I recall looking into it for a PiCamera bug and finding a couple of places that always set a global alpha, which overrode the per-pixel channel.
6by9 wrote:EGL is an option, but not one I know enough detail on what is possible.
It should be possible to create a display and bind the image as a texture to display it, but that requires dispmanx. It should be straightforward, although I'm not sure it's the most lightweight option. I'm also not sure that glBindTexture() will support BGRA on a Pi; it should in general, but maybe not on this platform.
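For what it's worth, if the GLES driver exposes the GL_EXT_texture_format_BGRA8888 extension (worth checking in glGetString(GL_EXTENSIONS) first), a BGRA buffer can be uploaded directly. A hedged fragment, with the EGL context setup omitted:

```c
/* Hedged fragment: upload BGRA pixels as a GLES2 texture, assuming the
   driver advertises GL_EXT_texture_format_BGRA8888. An EGL context must
   already be current; that setup is not shown here. */
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

GLuint upload_bgra(const void *pixels, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* With this extension, GLES2 requires both internalformat and format
       to be GL_BGRA_EXT. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA_EXT, width, height, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```

If the extension isn't there, you'd have to swizzle to RGBA on the CPU (or in the fragment shader) instead.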
As I say, I just don't know.
Lightest weight option is to talk to dispmanx directly. It's not the easiest API, but there are a number of internet posts that probably take the guesswork out of it for you. There's also an example in the userland repo at https://github.com/raspberrypi/userland ... o_dispmanx
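To give a flavour of the dispmanx route: a minimal sketch of pushing a BGRA buffer straight to the screen, assuming a Pi with the legacy firmware graphics stack (headers and libs under /opt/vc, link with -lbcm_host). Note that VC_IMAGE_ARGB8888 is 32bpp with alpha in the top byte, which on a little-endian system is B,G,R,A byte order in memory, i.e. Cairo's ARGB32 layout:

```c
/* Hedged sketch: display a BGRA (Cairo ARGB32) buffer via dispmanx.
   Assumes a Raspberry Pi with the VideoCore firmware stack; the buffer
   contents here are just a static placeholder. */
#include <bcm_host.h>

int main(void)
{
    const int width = 640, height = 480;
    static unsigned char pixels[640 * 480 * 4]; /* B,G,R,A per pixel */

    bcm_host_init();

    /* Open the default display (display 0 = LCD/HDMI). */
    DISPMANX_DISPLAY_HANDLE_T display = vc_dispmanx_display_open(0);

    /* VC_IMAGE_ARGB8888: alpha in the top byte, so B,G,R,A in memory
       on little-endian, matching Cairo's internal format. */
    uint32_t native_handle;
    DISPMANX_RESOURCE_HANDLE_T resource =
        vc_dispmanx_resource_create(VC_IMAGE_ARGB8888, width, height,
                                    &native_handle);

    VC_RECT_T rect;
    vc_dispmanx_rect_set(&rect, 0, 0, width, height);
    vc_dispmanx_resource_write_data(resource, VC_IMAGE_ARGB8888,
                                    width * 4 /* pitch in bytes */,
                                    pixels, &rect);

    /* Source rect is in 16.16 fixed point; destination is in pixels. */
    VC_RECT_T src_rect, dst_rect;
    vc_dispmanx_rect_set(&src_rect, 0, 0, width << 16, height << 16);
    vc_dispmanx_rect_set(&dst_rect, 0, 0, width, height);

    /* Ask the compositor to honour the per-pixel alpha channel. */
    VC_DISPMANX_ALPHA_T alpha =
        { DISPMANX_FLAGS_ALPHA_FROM_SOURCE, 255, 0 };

    DISPMANX_UPDATE_HANDLE_T update = vc_dispmanx_update_start(0);
    vc_dispmanx_element_add(update, display, 2000 /* layer */,
                            &dst_rect, resource, &src_rect,
                            DISPMANX_PROTECTION_NONE, &alpha,
                            NULL, DISPMANX_NO_ROTATE);
    vc_dispmanx_update_submit_sync(update);

    /* A real app would loop: write new frames into the resource and
       submit further updates, then tear everything down on exit. */
    return 0;
}
```

For a continuous video feed you'd typically double-buffer two resources and swap which one the element shows on each update.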
pmaersk wrote:That example seems straightforward. However, I wonder whether the dispmanx solution (without EGL) can take a whole BGRA image rather than going pixel by pixel ... Do you know?
The reason for using BGRA is that I am using it in Snowmix (an OSS video mixer), which heavily uses Cairo Graphics, and Cairo Graphics uses BGRA internally.
Dispmanx is the lowest level API down onto the composition hardware. video_render and EGL will both use that for getting images onto the video output. The Linux frame buffer is also just another dispmanx layer. It should take almost any ordering of R, G, B, and A, although there are some combinations that won't have an enum brought out because they're too unlikely.
Or am I misreading you, and you don't really care about per-pixel alpha but just need 32bit/pixel packing? In that case I thought video_render could do it.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.