jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 24971
Joined: Sat Jul 30, 2011 7:41 pm

Re: Official V4L2 driver

Tue Jan 07, 2014 2:33 pm

Note that if the encoder is running at 1080p30 there is no spare processing for decoding (they use the same HW in some places), so transcoding 1080p30 in real time isn't really possible. I think 720p30 transcode is possible since it's less than half the requirement of 1080p.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“I own the world’s worst thesaurus. Not only is it awful, it’s awful.”

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7906
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Official V4L2 driver

Tue Jan 07, 2014 3:08 pm

laurent wrote:Thanks for your answers (and your impressive work).

The solution I was thinking about was to have a first stream at full resolution on a V4L2 device, and another virtual device (as Alex111 said) using this first device, but transcoding its stream with the GPU (via the OpenMAX API?).
The goal is to offer standard V4L2 interfaces for software that manipulates video streams, with minimal CPU usage.

I read somewhere here that transcoding is not an easy thing, and the MJPG encoder is not operational. Adding the difficulties of exposing a V4L2 interface and the internal chipset limitations, I know that I'm expecting a lot.


(and, sorry if I'm hard to understand, but English is not my native language :oops: )
So this V4L2 driver and the OpenMAX code are actually using the same blocks of GPU code. Sadly we can't connect them together efficiently via the ARM as they have differing APIs, plus V4L2 is kernel space vs OpenMAX being userspace.

The V4L2 driver is already connecting our GPU "building blocks" together to provide JPEG, MJPEG and H264 encoding. We can take that further and feed data off at various different points and in various different formats. V4L2 also supports the concept of multiple subdevices under one device, so loading bcm2835-v4l2 could present /dev/video0, /dev/video1, and /dev/video2, and each could select a different encoding.
The issue is finding the time to do this work :( It is on the to-do list, but it's a fairly large task.

MJPEG encode is now working - see the posts from the last 3 weeks on this thread.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

Alex111
Posts: 32
Joined: Sun Oct 07, 2012 6:56 am

Re: Official V4L2 driver

Tue Jan 07, 2014 3:23 pm

Thanks for your answer and the progress with MJPEG.

Do you think the following behaviour is doable with reasonable effort:

Instead of having only one device (/dev/video0), we have a second virtual one (/dev/video1). The second one just delivers raw RGB data (maybe JPEG would be interesting for other people in this forum). Resizing of the image data on the GPU would be fine too, if possible! For the FPS rate we just deliver every xth frame (e.g. every 15th frame -> 2 FPS).

That way we could have H264 FullHD video on device one (for recording/streaming etc.) and another "virtual" output for custom image recognition software...

laurent
Posts: 328
Joined: Thu Jul 26, 2012 11:24 am

Re: Official V4L2 driver

Tue Jan 07, 2014 3:56 pm

jamesh wrote:Note that if the encoder is running at 1080p30 there is no spare processing for decoding (they use the same HW in some places), so transcoding 1080p30 in real time isn't really possible. I think 720p30 transcode is possible since it's less than half the requirement of 1080p.
6by9 wrote:
laurent wrote:Thanks for your answers (and your impressive work).

The solution I was thinking about was to have a first stream at full resolution on a V4L2 device, and another virtual device (as Alex111 said) using this first device, but transcoding its stream with the GPU (via the OpenMAX API?).
The goal is to offer standard V4L2 interfaces for software that manipulates video streams, with minimal CPU usage.

I read somewhere here that transcoding is not an easy thing, and the MJPG encoder is not operational. Adding the difficulties of exposing a V4L2 interface and the internal chipset limitations, I know that I'm expecting a lot.


(and, sorry if I'm hard to understand, but English is not my native language :oops: )
So this V4L2 driver and the OpenMAX code are actually using the same blocks of GPU code. Sadly we can't connect them together efficiently via the ARM as they have differing APIs, plus V4L2 is kernel space vs OpenMAX being userspace.

The V4L2 driver is already connecting our GPU "building blocks" together to provide JPEG, MJPEG and H264 encoding. We can take that further and feed data off at various different points and in various different formats. V4L2 also supports the concept of multiple subdevices under one device, so loading bcm2835-v4l2 could present /dev/video0, /dev/video1, and /dev/video2, and each could select a different encoding.
The issue is finding the time to do this work :( It is on the to-do list, but it's a fairly large task.

MJPEG encode is now working - see the posts from the last 3 weeks on this thread.
Ok, thank you for your explanations. :)

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7906
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Official V4L2 driver

Tue Jan 07, 2014 4:32 pm

Alex111 wrote:Thanks for your answer and the progress with MJPEG.

Do you think the following behaviour is doable with reasonable effort:

Instead of having only one device (/dev/video0), we have a second virtual one (/dev/video1). The second one just delivers raw RGB data (maybe JPEG would be interesting for other people in this forum). Resizing of the image data on the GPU would be fine too, if possible! For the FPS rate we just deliver every xth frame (e.g. every 15th frame -> 2 FPS).

That way we could have H264 FullHD video on device one (for recording/streaming etc.) and another "virtual" output for custom image recognition software...
The route we are intending to go when time permits is having 3 V4L2 subdevices within the one kernel module (ie /dev/video[0|1|2] by calling video_register_device() multiple times in the module load).
- video0 would be for YUV/RGB preview images (up to 1080P) and the overlay. Up to 30fps.
- video1 would be for H264 or MJPEG encoded data at the same resolution as video0. Up to 30fps.
- video2 would be for JPEG or high resolution YUV (or possibly RGB) frames. Probably in the range 5-8fps.
As they are separate V4L2 devices, you can tell them to stream at different times to one another as you require.

Adding support for differing framerates on video 0 and 1 would be possible if one is an integer multiple of the other (ie we can just discard frames).
We are NOT looking at supporting differing resolutions between the two (arbitrary resize is expensive in processing terms), although there may be a potential for a 2x2 subsample on video0 cf video1 (eg 1080P on video1 and 960x540 on video0). video2 can be any resolution independent of video0 & 1, although you'll probably want to keep the aspect ratio the same on all of them to avoid FOV changes.
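The integer-multiple frame discard described above can be sketched as a tiny decision function. This is illustrative only; `keep_frame` is a hypothetical helper, not part of the real bcm2835-v4l2 driver. The idea is that the sensor runs at the higher rate and the slower device simply keeps every Nth frame.

```c
/* Illustrative sketch of the "integer multiple" frame-rate split:
 * keep_frame() decides whether a given sensor frame should be passed
 * through to a device running at a lower, integer-divisor rate.
 * Hypothetical helper, not real driver code. */
int keep_frame(unsigned frame_index, unsigned sensor_fps, unsigned device_fps)
{
    /* Only exact integer ratios are supported - anything else would
     * need arbitrary frame-rate conversion, which the post rules out. */
    if (device_fps == 0 || sensor_fps % device_fps != 0)
        return -1; /* unsupported ratio */

    /* Keep frames 0, N, 2N, ... where N = sensor_fps / device_fps. */
    return frame_index % (sensor_fps / device_fps) == 0;
}
```

With a 30fps sensor and a 3fps device, frames 0, 10, 20, ... are kept and the rest discarded.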

All of this is just building on the functionality we have in the existing MMAL camera component, as used by RaspiStill/Vid/YUV. We are not intending on rewriting large amounts of GPU code if we can avoid it. Asking for massive extra functionality is unlikely to happen.

If I were allowed to work on this full time, I'd be looking at a good couple of weeks of effort, and there aren't any simple interim steps. Unfortunately company time for this is very restricted due to other work, so it won't be happening for a few months or so - sorry, but that's the way it is. I may try to rope jamesh in, but he's never looked at the V4L2 side before, although he does know most of the MMAL stuff.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 24971
Joined: Sat Jul 30, 2011 7:41 pm

Re: Official V4L2 driver

Tue Jan 07, 2014 5:43 pm

V4L2 == "Here be dragons"

OpenMAX == "Here be dragons. Who will eat you, poop you out, then stamp on the bits. Then run those through a shredder. Then eat you again"
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“I own the world’s worst thesaurus. Not only is it awful, it’s awful.”

jbeale
Posts: 3578
Joined: Tue Nov 22, 2011 11:51 pm

Re: Official V4L2 driver

Tue Jan 07, 2014 6:50 pm

6by9 wrote: The route we are intending to go when time permits is having 3 V4L2 subdevices within the one kernel module (ie /dev/video[0|1|2] by calling video_register_device() multiple times in the module load).
- video0 would be for YUV/RGB preview images (up to 1080P) and the overlay. Up to 30fps.
- video1 would be for H264 or MJPEG encoded data at the same resolution as video0. Up to 30fps.
- video2 would be for JPEG or high resolution YUV (or possibly RGB) frames. Probably in the range 5-8fps.
As they are separate V4L2 devices, you can tell them to stream at different times to one another as you require.
This would be really excellent for several different uses, security-camera type applications being one example. While we wait for this to happen, I'd like to point out something that is working already: the RPi Cam Web Interface from silvanmelchior as described at http://www.raspberrypi.org/phpBB3/viewt ... 43&t=63276
This is sending MJPEG to a browser as well as into the 'motion' application, which in turn triggers full-HD H.264 recording whenever motion is detected, and stops it when the motion is gone. His application 'raspimjpeg' is (as far as I know) internally connecting the mmal JPEG encoder to the video output, and not using v4l2 or the just-released MJPEG codec functionality. I'm sure performance could be better with more GPU help, but at any rate this is already quite useful.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7906
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Official V4L2 driver

Tue Jan 07, 2014 9:15 pm

jbeale wrote:This would be really excellent for several different uses, security-camera type applications being one example. While we wait for this to happen, I'd like to point out something that is working already: the RPi Cam Web Interface from silvanmelchior as described at http://www.raspberrypi.org/phpBB3/viewt ... 43&t=63276
This is sending MJPEG to a browser as well as into the 'motion' application, which in turn triggers full-HD H.264 recording whenever motion is detected, and stops it when the motion is gone. His application 'raspimjpeg' is (as far as I know) internally connecting the mmal JPEG encoder to the video output, and not using v4l2 or the just-released MJPEG codec functionality. I'm sure performance could be better with more GPU help, but at any rate this is already quite useful.
That's a very nicely done app, and almost the same arrangement as I was intending for V4L2 (I wasn't intending to use the JPEG encoder on output[0]). The more awkward bit is fitting it in with V4L2 and all the options that leaves open to the app.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

dom
Raspberry Pi Engineer & Forum Moderator
Posts: 5398
Joined: Wed Aug 17, 2011 7:41 pm
Location: Cambridge

Re: Official V4L2 driver

Tue Jan 07, 2014 10:49 pm

6by9 wrote:
Alex111 wrote:I updated my Pi to the latest firmware. THanks for improving camera more and more!

Streaming in MJPEG mode seems to be stable now, but now there is another new issue. When closing the /dev/video0 device (e.g. crash of Video application or manually with stream.close() ) then the Pi completly hangs. Have to remove power to get it alive again.
This was not in the former firmware... Any suggestions?
Sorry, forgot to push that change to the V4L2 driver itself. The codec flush now takes longer than before and the V4L2 driver gives up too soon. When the reply does arrive, things have gone away and hence we get a kernel panic.
This fix is now in latest rpi-update.

Wulf0123
Posts: 19
Joined: Fri May 24, 2013 3:46 am

Re: Official V4L2 driver

Wed Jan 08, 2014 3:09 am

Hi,
Sorry if this is a stupid question, but does this work for Pidora, or what OS should I use in order to install the driver for V4L2? I'm trying to capture images from my webcam, and so far the most promising tutorial uses V4L2.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7906
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Official V4L2 driver

Wed Jan 08, 2014 7:53 am

Wulf0123 wrote:Hi,
Sorry if this is a stupid question, but does this work for Pidora, or what OS should I use in order to install the driver for V4L2? I'm trying to capture images from my webcam, and so far the most promising tutorial uses V4L2.
This driver is solely for the RPi camera module and not for a generic webcam. V4L2 is the standard Linux API for talking to cameras, so you may well find that your webcam already has a V4L2 driver (do you get a device /dev/video0 when you plug your webcam in?).

I've not looked at Pidora in detail, but if it is pulling from the official RaspberryPi linux repository on github, then it should be getting all the updates to this driver (although probably slower than Raspbian as I suspect there is a manual step involved). We're developing it against Raspbian, so if you're using the RPi camera module and want this driver, then that's the OS I'd suggest you use too.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

shuckle
Posts: 565
Joined: Sun Aug 26, 2012 11:49 am
Location: Finland

Re: Official V4L2 driver

Wed Jan 08, 2014 8:47 am

Tested with the newest versions:

Code:

Linux raspberry2 3.10.25+ #624 PREEMPT Tue Jan 7 20:10:18 GMT 2014 armv6l GNU/Linux
Jan  7 2014 22:04:19 
Copyright (c) 2012 Broadcom
version 971028d3d656de279fcef4f502bc867b8a91cf22 (clean) (release)
and indeed the mjpeg hard hang is gone and motion can now be restarted.

But it is strange that the new GPU mjpeg actually takes more ARM CPU than the YU12 palette 8, where motion converts those images to mjpeg itself. Running with palette 8 at 1280x720 and 3 fps, I have about 25% CPU idle on the ARM, but when running with palette 2 mjpeg, I have less than 5%.

mjpg_streamer still works very well.

(good thing is that I have not seen any SD card corruptions for a long time even with these very regular emergency boots :D )

mcornils
Posts: 2
Joined: Thu Dec 19, 2013 10:53 am

Re: Official V4L2 driver

Wed Jan 08, 2014 11:22 am

Hello Dave,
6by9 wrote: Mainly green normally means the buffer is all 0, although your green is a little more vibrant than I'd expect from YUV(0,0,0), but that may just be the conversion to PNG.

I'm a little confused by your log, as part of it says "Setting video size 352x288", and then later it says "Size of webcam delivered pictures is 176x144". I suspect that 176x144 is what you're actually asking for, in which case we have a small issue at the moment that our hardware requires the buffer to be a multiple of 32 horizontally, and 16 vertically. 176 isn't a multiple of 32, so you'll actually get 192x144. It shouldn't all be green though. If you could try 352x288 then that may give a different result.

If we get the chance to develop this driver more, then we are intending to provide multiple subdevices (see the messages between jbeale and I), and we should be able to provide one stream of YUV buffers for the local display, and a ready encoded H264 version for you to send over to far end. Sadly that's going to be a little while away, but may be something to look forward to.

Dave
Sorry it took a while to get back to you; the Christmas break is to blame for that. You were quite right with your initial diagnosis. It was hard, however, to make linphone accept a resolution different from 176x144 for H.263 - it is the "preferred codec resolution", and only hardcoding the video size to 352x288 in mediastreamer2's msv4l2.c/msv4l2_configure method made linphone use the "unwanted" resolution.

On 352x288, everything works fine, with the caveat that it *is* rather slow. We'd still very much prefer the lower 176x144 resolution, and linphone would then work out of the box. Is an implementation of cropping in the cards? I know you are doing this in your free time, so we won't expect any miracles...

By the way, yes, hooking up the H.264 engine to linphone makes a lot of sense, but that is probably out of our league. Alternatively, gstreamer could get a working SIP module. If you have any further ideas, feel free to share!

Best regards,
-Malte

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7906
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Official V4L2 driver

Wed Jan 08, 2014 11:44 am

shuckle wrote:Tested with the newest versions:

Code:

Linux raspberry2 3.10.25+ #624 PREEMPT Tue Jan 7 20:10:18 GMT 2014 armv6l GNU/Linux
Jan  7 2014 22:04:19 
Copyright (c) 2012 Broadcom
version 971028d3d656de279fcef4f502bc867b8a91cf22 (clean) (release)
and indeed the mjpeg hard hang is gone and motion can now be restarted.

But it is strange that the new GPU mjpeg actually takes more ARM CPU than the YU12 palette 8, where motion converts those images to mjpeg itself. Running with palette 8 at 1280x720 and 3 fps, I have about 25% CPU idle on the ARM, but when running with palette 2 mjpeg, I have less than 5%.

mjpg_streamer still works very well.

(good thing is that I have not seen any SD card corruptions for a long time even with these very regular emergency boots :D )
Can I point you at the Motion code? File video2.c, function v4l2_next, around line 852:

Code:

    {
        netcam_buff *the_buffer = &s->buffers[s->buf.index];

        switch (s->fmt.fmt.pix.pixelformat) {
...
        case V4L2_PIX_FMT_YUV420:
            memcpy(map, the_buffer->ptr, viddev->v4l_bufsize);
            return 0;
...
        case V4L2_PIX_FMT_JPEG:
        case V4L2_PIX_FMT_MJPEG:
            return mjpegtoyuv420p(map, (unsigned char *) the_buffer->ptr, width, height,
                                   s->buffers[s->buf.index].content_length);
So whilst V4L2 may be spitting out MJPEG, and the web interface is spitting out MJPEG, it is always going via yuv420p in between. If you select MJPEG you are therefore doing both a decode and an encode on the ARM.

Also read the comment in video2.c's v4l2_set_pix_format:
/*
* Note that this array MUST exactly match the config file list.
* A higher index means better chance to be used
*/
The higher the index/palette value the better, as the ARM has to do less work with it.
Also remember that this is being done on every frame, so this is at the V4L2 framerate of 30fps, not the detection framerate of 3fps.

I had raised my doubts over MJPEG giving any benefit back in http://www.raspberrypi.org/phpBB3/viewt ... 60#p479760, but no one seemed to follow it up from that.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7906
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Official V4L2 driver

Wed Jan 08, 2014 12:45 pm

mcornils wrote:Sorry it took a while to get back to you; the Christmas break is to blame for that. You were quite right with your initial diagnosis. It was hard, however, to make linphone accept a resolution different from 176x144 for H.263 - it is the "preferred codec resolution", and only hardcoding the video size to 352x288 in mediastreamer2's msv4l2.c/msv4l2_configure method made linphone use the "unwanted" resolution.

On 352x288, everything works fine, with the caveat that it *is* rather slow. We'd still very much prefer the lower 176x144 resolution, and linphone would then work out of the box. Is an implementation of cropping in the cards? I know you are doing this in your free time, so we won't expect any miracles...
The implementation for cropping will be based around http://hverkuil.home.xs4all.nl/spec/med ... ection-api. This relies on the client/app setting things appropriately. We will always have the restriction that the buffer must be sized as a multiple of 32 horizontally and 16 vertically (ie 192x144 in your case). What the extension allows you to do is specify that the active part within that buffer is 176x144.
Requiring Linphone as the app to be aware of the limitations and make the right calls to set everything up in the right way is a pain, and even then the codec may be expecting no padding and misinterpret the image data. So that may not work as a solution.
There is an option where we could strip the extra padding on the GPU. We don't like doing it as it requires memcpying the entire image line by line. It is still likely to be more efficient than doing it on the ARM though, so I may look into it. (There would be a slight oddity that the buffer size would still be big enough for the padded image, but the active data won't be padded and just won't quite fill the buffer. Doing that means we can remove the padding effectively in place rather than needing another buffer for it.)
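The 32/16 alignment and the line-by-line padding strip described above can be sketched as follows. These are hypothetical helpers for illustration, not code from the actual driver.

```c
#include <string.h>

/* Round x up to a multiple of a power-of-two alignment a
 * (32 horizontally, 16 vertically in the post's example).
 * Hypothetical helper. */
unsigned align_up(unsigned x, unsigned a)
{
    return (x + a - 1) & ~(a - 1);
}

/* Strip stride padding by copying each line of a padded image into a
 * tightly packed buffer - the per-line memcpy cost mentioned above.
 * Widths and strides are in bytes for simplicity. Hypothetical helper. */
void strip_padding(unsigned char *dst, const unsigned char *src,
                   unsigned width, unsigned height, unsigned stride)
{
    for (unsigned y = 0; y < height; y++)
        memcpy(dst + (size_t)y * width, src + (size_t)y * stride, width);
}
```

`align_up(176, 32)` gives 192, matching the 176x144 -> 192x144 example in the post.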
mcornils wrote:By the way, yes, hooking up the H.264 engine to linphone makes a lot of sense, but that is probably out of our league. Alternatively, gstreamer could get a working SIP module. If you have any further ideas, feel free to share!
If you could point me at a set of noddy instructions to get something set up, then I'm happy to spend an hour looking at what can be done simply. I don't have a suitable SIP server either, so a recommendation would be good. I know I could trawl through the web for something, but life is too short for that, and you're the one asking for support ;)
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

shuckle
Posts: 565
Joined: Sun Aug 26, 2012 11:49 am
Location: Finland

Re: Official V4L2 driver

Wed Jan 08, 2014 12:59 pm

Indeed. Now I understand. Palette 8 (V4L2_PIX_FMT_YUV420) is as good as it can be for motion.
Even if motion detection is disabled, it still does those conversions. Yes, MJPEG was wasted time for motion.

But mjpeg code was still well worth the effort as mjpg_streamer now works brilliantly.

Thank you very much.

prb
Posts: 1
Joined: Sun Jan 05, 2014 2:58 pm
Location: Toronto, Canada

Re: Official V4L2 driver

Wed Jan 08, 2014 8:00 pm

I've been playing with mjpg-streamer and motion and the v4l2
drivers and run into a bit of a problem.

I'm using the latest firmware and drivers:

Code:

# uname -a
Linux roy 3.10.25+ #624 PREEMPT Tue Jan 7 20:10:18 GMT 2014 armv6l GNU/Linux
# cat /boot/.firmware_revision 
b42b4d8a038b2d3f13c3c7b4dc9e9cb9307b78ed
Motion itself is running on a bigger machine and configured to capture
1280x1024 at 3 fps from mjpg-streamer running on the Pi. Both
machines are on a wired network. The camera is pointed outside.
After the latest driver update it's working a treat. (Previously it'd
hang or otherwise fail if I tried going higher than 1024x768.)

Currently, the camera system seems to be unable to handle the
transitions from day to night and from night to day, i.e. if I start
it during the day, by early evening the image will underexpose and be
black. Restarting the camera will set the appropriate exposure times.
It also seems that running the following acts as a bit of a workaround:

Code:

/usr/bin/v4l2-ctl --set-ctrl=exposure_time_absolute=10000 --set-ctrl=auto_exposure=1
/usr/bin/v4l2-ctl --set-ctrl=exposure_time_absolute=10000 --set-ctrl=auto_exposure=0
which will also trick the camera into setting more appropriate exposure
times for the lighting available and produces reasonable images all night.

Come dawn, the image will go to the other extreme and overexpose.
Restarting the camera will fix it again. However using the above
trick does not work to bring the exposure time back down again.

I'm starting mjpg-streamer with:

Code:

#!/bin/bash
cd /home/pi/mjpg-streamer
sudo modprobe bcm2835-v4l2

v4l2-ctl -p 3
v4l2-ctl --set-ctrl=power_line_frequency=2
v4l2-ctl --set-ctrl=sharpness=0
v4l2-ctl --set-ctrl=exposure_time_absolute=10000 --set-ctrl=auto_exposure=0

export LD_LIBRARY_PATH=.
./mjpg_streamer -i "input_uvc.so -d /dev/video0 -r SXGA -f 3 -n" -o "output_http.so -w ./www"
I've fiddled with v4l2-ctl control settings to see if I could find another
workaround, but I've not stumbled across anything reliable. Restarting
mjpg_streamer at dusk and dawn doesn't seem to be viable at the moment
either, as after a couple of restarts my Pi locked up and needed to be
rebooted.

Is this operator error, a limitation or a bug?

jbeale
Posts: 3578
Joined: Tue Nov 22, 2011 11:51 pm

Re: Official V4L2 driver

Wed Jan 08, 2014 8:54 pm

6by9 wrote:
jbeale wrote:This would be really excellent for several different uses, security-camera type applications being one example. While we wait for this to happen, I'd like to point out something that is working already: the RPi Cam Web Interface from silvanmelchior as described at http://www.raspberrypi.org/phpBB3/viewt ... 43&t=63276
This is sending MJPEG to a browser as well as into the 'motion' application, which in turn triggers full-HD H.264 recording whenever motion is detected, and stops it when the motion is gone. His application 'raspimjpeg' is (as far as I know) internally connecting the mmal JPEG encoder to the video output, and not using v4l2 or the just-released MJPEG codec functionality. I'm sure performance could be better with more GPU help, but at any rate this is already quite useful.
That's a very nicely done app, and almost the same arrangement as I was intending for V4L2 (I wasn't intending to use the JPEG encoder on output[0]). The more awkward bit is fitting it in with V4L2 and all the options that leaves open to the app.
I'm going to have to backtrack on my enthusiasm for this, for now. It turns out there is a bug somewhere between the raspimjpeg and motion applications, which frequently (not always) causes what looks like a not-quite-complete JPEG file (a variable-sized plain grey block of pixels, always in the lower right corner) to trigger the motion-sensing and thus give a host of false alarms when there is no actual motion. However, this missing image data never shows up in the browser MJPEG display or in the recorded H264 file, so it seems to be something about 'motion' reading the JPEG data in particular. See for example http://www.raspberrypi.org/forum/viewto ... 00#p482928 and http://www.raspberrypi.org/forum/viewto ... 00#p483294. I know this isn't quite on-topic, as these programs are not using V4L2; just mentioning it in case anyone has any ideas what could cause this.

By the way, I just wanted to respond to the comment below. I don't doubt H.264 is technically superior for bandwidth, and uncompressed YUV/RGB for saving CPU time decoding, but in my experience MJPEG is better for simple low-latency links over flaky WiFi connections.
I had raised my doubts over MJPEG giving any benefit back in http://www.raspberrypi.org/phpBB3/viewt ... 60#p479760, but no one seemed to follow it up from that.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7906
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Official V4L2 driver

Thu Jan 09, 2014 1:21 pm

prb wrote:I've been playing with mjpg-streamer and motion and the v4l2
drivers and run into a bit of a problem.
...
Currently, the camera system seems to be unable to handle the
transitions from day to night and from night to day, i.e. if I start
it during the day, by early evening the image will underexpose and be
black. Restarting the camera will set the appropriate exposure times.
It also seems that running the following acts as a bit of a workaround:
...
I've fiddled with v4l2-ctl control settings to see if I could find another
workaround, but I've not stumbled across anything reliable. Restarting
mjpg_streamer at dusk and dawn doesn't seem to be viable at the moment
either, as after a couple of restarts my Pi locked up and needed to be
rebooted.

Is this operator error, a limitation or a bug?
Certainly not a limitation of the overall system, but there may be something odd or misconfigured.
The physical limitation would be that the exposure time can never exceed the frametime of each frame, ie can never exceed 33ms if running at 30fps. Seeing as you're running at 3fps, that would only limit you to 330ms exposure times.
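That frame-time ceiling is simple arithmetic, sketched below as a hypothetical helper. It uses integer milliseconds, so 3fps comes out as 333ms rather than the rounded 330ms quoted above.

```c
/* Exposure can never exceed the frame time: 1000/fps milliseconds.
 * Hypothetical helper for illustration; integer division truncates
 * (30fps -> 33ms rather than 33.3ms). */
unsigned max_exposure_ms(unsigned fps)
{
    return 1000 / fps;
}
```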

The complication is that within the sensor tuning parameters we have a load of configuration modes for how each exposure mode operates. It has ranges within which the exposure time and analogue gain (and in theory aperture if the sensor supported it) can vary. Because of the way the default mode is set up the max exposure time is 33ms :(
The simplest way around this is to select one of the other modes (eg night), but they're currently blocked in the driver (will be fixed).

Even having fixed that, the exposure time seems to stick at 10ms in night mode when it should go up to 250ms :? I'm getting out of my depth on the intricacies of this algorithm, but I will ask one of the experts to have a quick look. I'm guessing that your camera is also sticking at the boundary between two of these sets of parameters and hence ends up under- or over-exposed.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 24971
Joined: Sat Jul 30, 2011 7:41 pm

Re: Official V4L2 driver

Thu Jan 09, 2014 1:32 pm

I'm looking at night mode at the moment. I want to bump max exposure to 1s (the max the sensor does at the moment), but it's currently limited to 250ms. I'll also increase the max analog gain and main gain product, which should pull a bit more detail out in very dark scenes.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“I own the world’s worst thesaurus. Not only is it awful, it’s awful.”

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7906
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Official V4L2 driver

Thu Jan 09, 2014 1:37 pm

jbeale wrote:I'm going to have to backtrack on my enthusiasm for this, for now. It turns out there is a bug somewhere between the raspimjpeg and motion applications, which frequently (not always) causes what looks like a not-quite-complete JPEG file (a variable-sized plain grey block of pixels, always in the lower right corner) to trigger the motion-sensing and thus give a host of false alarms when there is no actual motion. However, this missing image data never shows up in the browser MJPEG display or in the recorded H264 file, so it seems to be something about 'motion' reading the JPEG data in particular. See for example http://www.raspberrypi.org/forum/viewto ... 00#p482928 and http://www.raspberrypi.org/forum/viewto ... 00#p483294. I know this isn't quite on-topic, as these programs are not using V4L2; just mentioning it in case anyone has any ideas what could cause this.
It is off-topic, but I'll respond. That does sound like the decoder is aborting early in the frame and just not decoding a set of macroblocks. I had a quick read through your other thread, and there is no guarantee that the JPEG will fit within a single buffer. However, seeing as Dom has done a hack so the buffer size is max(width*height, 80kB) to get around a different issue, I would be slightly horrified if it ever exceeded a single buffer, even with Q of 100 (8bpp for a JPEG!). The obvious sign of this would be that the decoder doesn't see the end-of-image marker in the JPEG (FF D9), so it should be quite easy to get the decoder to make a lot of noise should that happen.
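Checking for that marker is straightforward if anyone wants to verify their captured frames. A minimal sketch in C (the SOI/EOI marker values are from the JPEG spec; tolerating trailing zero padding is my assumption, since capture buffers are often larger than the compressed frame):

```c
#include <stddef.h>

/* Return 1 if the buffer looks like a complete JPEG: it must start with
 * the SOI marker (FF D8) and end with the EOI marker (FF D9).
 * Trailing zero padding after EOI is tolerated. */
static int jpeg_is_complete(const unsigned char *buf, size_t len)
{
    /* strip any trailing zero padding */
    while (len > 2 && buf[len - 1] == 0x00)
        len--;
    if (len < 4)
        return 0;
    if (buf[0] != 0xFF || buf[1] != 0xD8)                 /* SOI */
        return 0;
    return buf[len - 2] == 0xFF && buf[len - 1] == 0xD9;  /* EOI */
}
```

A frame failing this check before being handed to motion would confirm the truncation theory.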
jbeale wrote:By the way, I just wanted to respond to the comment below. I don't doubt H.264 is technically superior for bandwidth, and uncompressed YUV/RGB for saving CPU time decoding, but in my experience MJPEG is better for simple low-latency links over flaky wifi connections.
I had raised my doubts over MJPEG giving any benefit back in http://www.raspberrypi.org/phpBB3/viewt ... 60#p479760, but no one seemed to follow up on it.
I was actually referring to MJPEG specifically with Motion, where I had a suspicion that it wasn't going to make a difference, and lo and behold it didn't.

Generally speaking, yes, MJPEG is more robust to errors than H264, although you can tweak H264 to send I-frames more frequently to reduce the effect, or even split each frame into multiple slices to reduce the proportion of the picture that an error can affect.
There should be minimal difference in latency between MJPEG and H264 if you aren't using B-frames in the H264 encoder.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Official V4L2 driver

Thu Jan 09, 2014 2:34 pm

jamesh wrote:I'm looking at night mode at the moment. I want to bump the max exposure to 1s (the max the sensor supports at the moment), but it's currently limited to 250ms. I'll also increase the max analog gain and the max total gain (analog × digital) product, which should pull a bit more detail out of very dark scenes.
Kul, thanks.

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Official V4L2 driver

Thu Jan 09, 2014 2:38 pm

6by9 wrote:split each frame into multiple slices to reduce the proportion of the picture that an error can affect.
There should be minimal difference in latency between MJPEG and H264 if you aren't using B-frames in the H264 encoder.
There's always lookahead latency for H264. Intra-refresh is mentioned in the headers somewhere. If this were possible I'm sure encoding latencies would be lower.

jamesh
Raspberry Pi Engineer & Forum Moderator
Raspberry Pi Engineer & Forum Moderator
Posts: 24971
Joined: Sat Jul 30, 2011 7:41 pm

Re: Official V4L2 driver

Thu Jan 09, 2014 2:54 pm

It's surprising how low the latencies can go with H264 - I've seen stuff that appears instantaneous, and that was encoded, transmitted, and decoded on a different device (both VC4s). That was pretty sneaky stuff though - not an entirely standard H264 stream, although using the same compression.

You only need look-ahead with B-frames, which we don't use. We just have I-frames (complete frames) and P-frames, which are based on data from a number of previous frames. B-frames look *ahead* to future frames for information, which is why there is more latency at the encode end.

I've submitted a change to night mode that increases the max exposure to 0.97s, increases the max analog gain to 8, and increases the max total gain (analog × digital) to 16 (so max digital gain = 2). Not sure when the next firmware drop will be though. It certainly seems to make a difference in very dark conditions.
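The arithmetic behind those caps can be sketched as a helper that splits a requested total gain into analog and digital parts (the caps of 8× and 16× are from the post above; the strategy of maxing out analog gain first is my assumption, on the usual grounds that analog gain adds less quantisation noise than digital):

```c
/* Split a requested total gain under the night-mode caps described
 * above: analog <= 8, analog * digital <= 16 (so once analog is maxed,
 * digital tops out at 2). Analog gain is applied first. */
struct gain_split { double analog; double digital; };

static struct gain_split split_gain(double total)
{
    const double max_analog = 8.0, max_total = 16.0;
    struct gain_split g;

    if (total > max_total)
        total = max_total;             /* clamp to the 16x product cap */
    g.analog  = total < max_analog ? total : max_analog;
    g.digital = total / g.analog;      /* remainder handled digitally */
    return g;
}
```

So a scene needing 16× total gain would get 8× analog and 2× digital, while anything beyond 16× is simply clamped.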
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“I own the world’s worst thesaurus. Not only is it awful, it’s awful."

lingon
Posts: 128
Joined: Fri Aug 26, 2011 7:31 am

Re: Official V4L2 driver

Sat Jan 11, 2014 8:33 pm

jamesh wrote: I've submitted a change to night mode that increases the max exposure to 0.97s, increases the max analog gain to 8, and increases the max total gain (analog × digital) to 16 (so max digital gain = 2). Not sure when the next firmware drop will be though. It certainly seems to make a difference in very dark conditions.
Thanks! This is a great improvement for dark conditions! What is the expected maximum ISO setting for the night mode?
Currently it seems to go to 500 ASA, but not higher on my camera.
Resolution : 960 x 720
Flash used : No
Focal length : 3.6mm
Exposure time: 1.000 s
Aperture : f/2.9
ISO equiv. : 500
Whitebalance : Auto
Metering Mode: center weight
Exposure : aperture priority (semi-auto)
I upgraded the firmware last night and then the exposure did not go to 1 s, but to 1/8173 s. After a reboot in the morning in lighter conditions the exposure has been fine, even after sunset. Making the camera exposure work reliably in the dark would be another great improvement.