rcasiodu wrote:Hi 6by9, what's the main difference between the two methods you mentioned?
The main ones:
- Licence. A kernel driver pretty much has to be GPLv2 and therefore open source, so check with your sensor supplier that they are happy for the register set to be released. Copying raspiraw means the register set can be hidden in your userspace app, which can be under any licence you fancy (including closed). It's all a little silly, as it is trivial to put an I2C analyser on the lines to the camera module, run your app, and capture the relevant commands.
- Framework. V4L2 is the Linux standard APIs and is therefore portable to other platforms. MMAL is specific to the Pi.
- Support. I'm less inclined to fix any issues discovered in rawcam than in V4L2. V4L2 is seen as the way forward.
rcasiodu wrote:In the first method, can you give me an example of drive file? Could I just modify the ov5647.c(/drivers/media/i2c/ov5647.c) to other sensor?
/drivers/media/i2c/ov5647.c is a relatively basic driver, but would work. imx258.c is a little more comprehensive, but lacks the dt/fwnode configuration and regulator control (not essential if you set the power control GPIO some other way). I believe ov5640.c is a reasonable example of that.
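To give a feel for the overall shape, here's a very rough skeleton of an I2C sensor subdev driver. The "mysensor" names are placeholders, not a real driver; the drivers mentioned above fill in the chip ID check, register tables, controls, media pad and format ops:

#include <linux/i2c.h>
#include <linux/module.h>
#include <media/v4l2-async.h>
#include <media/v4l2-common.h>
#include <media/v4l2-subdev.h>

struct mysensor {
        struct v4l2_subdev sd;
};

static int mysensor_s_stream(struct v4l2_subdev *sd, int enable)
{
        /* Write the sensor's streaming on/off register sequence over I2C. */
        return 0;
}

static const struct v4l2_subdev_video_ops mysensor_video_ops = {
        .s_stream = mysensor_s_stream,
};

static const struct v4l2_subdev_ops mysensor_subdev_ops = {
        .video = &mysensor_video_ops,
};

static int mysensor_probe(struct i2c_client *client,
                          const struct i2c_device_id *id)
{
        struct mysensor *sensor;

        sensor = devm_kzalloc(&client->dev, sizeof(*sensor), GFP_KERNEL);
        if (!sensor)
                return -ENOMEM;

        v4l2_i2c_subdev_init(&sensor->sd, client, &mysensor_subdev_ops);

        /* A real driver verifies the chip ID and registers its controls
         * and media pad here before announcing itself. */
        return v4l2_async_register_subdev(&sensor->sd);
}

static int mysensor_remove(struct i2c_client *client)
{
        struct v4l2_subdev *sd = i2c_get_clientdata(client);

        v4l2_async_unregister_subdev(sd);
        return 0;
}

static const struct i2c_device_id mysensor_id[] = {
        { "mysensor", 0 },
        { }
};
MODULE_DEVICE_TABLE(i2c, mysensor_id);

static struct i2c_driver mysensor_driver = {
        .driver = { .name = "mysensor" },
        .probe = mysensor_probe,
        .remove = mysensor_remove,
        .id_table = mysensor_id,
};
module_i2c_driver(mysensor_driver);

MODULE_DESCRIPTION("Skeleton sensor driver");
MODULE_LICENSE("GPL v2");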
rcasiodu wrote:How to use the v4l2 driver to get raw data from sensor? Is it possible to transfer raw10/raw12 to yuv data?(use arm core?)
You use the V4L2 API to get raw frames out.
"v4l2-ctl --stream--map=3 --stream-to=foo.raw --stream-count=100" would be a simple existing tool.
https://linuxtv.org/downloads/v4l-dvb-a ... ure.c.html is the standard example for grabbing frames. Do what you want in process_image(). You want to be using the MMAP method, not read() (userptr isn't supported).
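For reference, the MMAP flow from that example condenses to something like this (error handling stripped; /dev/video0, four buffers and 100 frames are just assumptions, and it relies on whatever format the driver currently has set - add a VIDIOC_S_FMT call to choose one):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

#define NUM_BUFFERS 4

static void process_image(const void *data, size_t len)
{
        /* The raw Bayer frame - write it out, or process it here. */
        fwrite(data, 1, len, stdout);
}

int main(void)
{
        struct v4l2_requestbuffers req = {0};
        struct v4l2_buffer buf;
        void *mem[NUM_BUFFERS];
        size_t length[NUM_BUFFERS];
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        int fd = open("/dev/video0", O_RDWR);
        unsigned int i;

        /* Ask the driver for kernel-allocated buffers we can mmap. */
        req.count = NUM_BUFFERS;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_REQBUFS, &req);

        for (i = 0; i < req.count; i++) {
                memset(&buf, 0, sizeof(buf));
                buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                buf.memory = V4L2_MEMORY_MMAP;
                buf.index = i;
                ioctl(fd, VIDIOC_QUERYBUF, &buf);
                length[i] = buf.length;
                mem[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, buf.m.offset);
                /* Queue each buffer so the driver can fill it. */
                ioctl(fd, VIDIOC_QBUF, &buf);
        }

        ioctl(fd, VIDIOC_STREAMON, &type);

        for (i = 0; i < 100; i++) {
                memset(&buf, 0, sizeof(buf));
                buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                buf.memory = V4L2_MEMORY_MMAP;
                ioctl(fd, VIDIOC_DQBUF, &buf);  /* blocks until a frame arrives */
                process_image(mem[buf.index], buf.bytesused);
                ioctl(fd, VIDIOC_QBUF, &buf);   /* hand the buffer back */
        }

        ioctl(fd, VIDIOC_STREAMOFF, &type);
        for (i = 0; i < req.count; i++)
                munmap(mem[i], length[i]);
        close(fd);
        return 0;
}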
The driver delivers the raw data, so that is whatever the sensor produces. You could try converting Bayer to YUV on the ARM, but you get a LOT of data, and the image processing required is not exactly lightweight.
Raw10/12 are a pain to unpack. There is an option to expand the raw10/12 to 16bpp (lowest bits used), but it's not exposed at the moment. unicam_set_packing_config does the relevant config, but currently mbus_depth will always equal v4l2_depth.
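For reference, the packed raw10 layout is four pixels in five bytes: the four high bytes, then one byte carrying the four 2-bit remainders. Unpacking on the ARM looks roughly like this (a sketch; it ignores the per-line stride padding from bytesperline, which you do need to account for):

#include <stddef.h>
#include <stdint.h>

/* Unpack one line of packed raw10 (e.g. V4L2_PIX_FMT_SRGGB10P) into
 * 16bpp samples with the value in the low 10 bits. "pixels" must be a
 * multiple of 4; call this per line to skip the stride padding. */
static void unpack_raw10(const uint8_t *src, uint16_t *dst, size_t pixels)
{
        size_t i;

        for (i = 0; i < pixels; i += 4, src += 5) {
                uint8_t low = src[4];

                dst[i + 0] = (src[0] << 2) | ((low >> 0) & 3);
                dst[i + 1] = (src[1] << 2) | ((low >> 2) & 3);
                dst[i + 2] = (src[2] << 2) | ((low >> 4) & 3);
                dst[i + 3] = (src[3] << 2) | ((low >> 6) & 3);
        }
}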
You can link V4L2 with MMAL. My fork of yavta (Yet Another V4L2 Test Application - https://github.com/6by9/yavta) does that, and uses dmabufs so that no copying of the data between the two subsystems is required.
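The V4L2 half of that is straightforward: each MMAP buffer can be exported as a dmabuf file descriptor with VIDIOC_EXPBUF, and that fd is what gets imported on the MMAL side (via the VideoCore shared memory service) instead of copying the frame. A minimal sketch of the export step:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Export buffer "index" of an already-REQBUFS'd capture queue as a
 * dmabuf fd; returns the fd, or -1 on error. The importing side wraps
 * this fd, so the frame data itself is never copied. */
static int export_dmabuf(int fd, unsigned int index)
{
        struct v4l2_exportbuffer expbuf;

        memset(&expbuf, 0, sizeof(expbuf));
        expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        expbuf.index = index;

        if (ioctl(fd, VIDIOC_EXPBUF, &expbuf) < 0)
                return -1;

        return expbuf.fd;
}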
Similarly, in the 4.19 kernel there is now a V4L2 wrapper around the ISP, which recent versions of GStreamer should talk to as v4l2videoconvert.
There is some work going on with http://libcamera.org/ to produce a standardised Linux framework for complex camera subsystems too.
rcasiodu wrote:Does "vc.ril.isp" use the same ISP pipeline in bcm283x as the officially supported cameras (ov5647 & imx219)? Or does it just use the GPU core of bcm283x through MMAL?
Yes, it is the same hardware block, but with none of the control loops (AE/AGC/AWB/lens shading) running on the data.
On my list of tasks to support the libcamera stuff is to expose the statistics from the ISP so that those algorithms can be run more easily from userspace.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.