gasguru
Posts: 10
Joined: Sat Nov 23, 2013 1:16 am

Re: Minimizing latency with streaming encoded h264?

Tue Nov 26, 2013 9:20 pm

jamesh wrote:
gasguru wrote:
gasguru wrote: ..edit: I hit the "You may embed only 3 quotes within each other." ;o)
..is there a way to reset the camera without rebooting the rPi?
..yes or no?
No.
..come on, I've got a grey beard. ;o)
Are you saying the camera runs in kernel space and with the camera modules compiled in statically, or somesuch???
jamesh wrote:
gasguru wrote:
jamesh wrote:If you go above 1080p then you switch to a stills camera mode, which has a max frame rate of 15fps. I've never tested above 1080p, so all bets are off.
..ok, so is this mode switch "a hardware switch", or is it triggered by software?
In the latter case, can it be overridden?
The camera is incapable of going to full resolution mode at anything more than 15fps.
..nope, it is _spec'ed_ to 15fps to keep them out of court; if you push it beyond spec and burn it, you get to keep the ash and the judge's footprint on your ass as you fly out his door.

..now, tell us how 1920x960@45fps is easier to do than full frame flames at 15fps. ;o)
jamesh wrote: There may be some inbetween modes that can do between 15 and 30, but that would require all the numbers to be worked out and the new modes added to the GPU software, for very little benefit, so I'll not be doing that.
..hum, let's let e.g. the fpv market decide that one. ;o)
jamesh wrote: The switch is done in software - when you request a resolution higher than 1080p, it uses the full frame mode - that's the only mode higher than 1080p.
..ok, is it kicked into a "still mode", or is "full frame" a video mode? What do you call it, "1944p"?
Should I then still run the camera with raspivid, or with e.g. "raspistill -tl 66" for 15fps?
If it can do full frame video at 15fps, we should be able to stream it.
jamesh wrote:
gasguru wrote:
jamesh wrote:-ex extralong may still cause problems (maybe not the ones you are seeing) so I would advise not using it.
..well, if we want it fixed, some of us will have to play guinea pigs, and source pointers won't hurt either.

..you mentioned a fix coming some time this week, ETA hints?
This stuff is on the GPU, so not fixable by those without access. I released it to Dom on Monday, so it should be in the next firmware release, but I don't know when that will be.
..hum, a better way might be to pack it in .debs etc. and put it in your Raspbian etc. repositories; I ran out of disk space on my last rpi-update, it should check and warn us, and clean up properly.

blowingraspberry
Posts: 2
Joined: Sun Sep 22, 2013 1:24 pm

Re: Minimizing latency with streaming encoded h264?

Wed Nov 27, 2013 9:21 am

** Update, just seen jamesh's post from Monday. Will try that again **

I've tried both wifi and LAN and can't seem to get less than 250ms.

It would appear the hardware is more than capable of encoding/decoding with negligible latency (as evidenced by the preview capability of raspistill), and I am only testing with a bit rate of 8Mbit/s, which leaves plenty of headroom in terms of network bandwidth.

My network latency is consistently below 10ms, so any jitter from the network should be negligible too (I would hope the gstreamer jitter buffer, with a 20ms buffer, would easily cope with this for a 30fps stream; my desire is to use RTP).
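For anyone reproducing this setup, here is a minimal sketch of the sender/receiver pair being described (assumptions: GStreamer 1.0 on both ends, and 192.0.2.1 standing in for the receiver's address; the pipelines are printed rather than executed, since actually running them needs a camera board):

```shell
# A sketch of the RTP path (standard GStreamer 1.0 element names;
# 192.0.2.1 is a placeholder receiver address). The sender pipes
# raspivid's raw H.264 into an RTP payloader; the receiver uses a 20ms
# rtpjitterbuffer, and sync=false so the sink adds no clock-driven wait.
sender='raspivid -t 0 -w 1920 -h 1080 -fps 30 -b 8000000 -o - | gst-launch-1.0 fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.0.2.1 port=5000'
receiver='gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! rtpjitterbuffer latency=20 ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink sync=false'
printf '%s\n\n%s\n' "$sender" "$receiver"
```

The 20ms `latency` on rtpjitterbuffer matches the buffer size mentioned above; raising it trades latency for tolerance to network jitter.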

Certainly appreciate any further thoughts and suggestions people may have.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23631
Joined: Sat Jul 30, 2011 7:41 pm

Re: Minimizing latency with streaming encoded h264?

Wed Nov 27, 2013 10:20 am

The camera runs on the GPU, and is controlled (at a distance) by the ARM-side code (raspistill/vid). The ARM code is OSS; the GPU code is not.

The camera is spec'd to 15fps at full resolution - you can believe something else if you like but that's the spec.

We call full frame a stills mode and 1080p a video mode, and they are handled differently in the GPU, but to the camera itself they are just different modes - there is a limit to how fast you can get the data off the sensor, which depends on frame size (ignoring exposure time). So larger frames = lower fps. It's as simple as that. At the moment we have defined two modes, full frame and a 1080p30 mode. We need to do some more (full-FOV 1080p - which will actually be a scaled-up binned mode - and 60, 90 and 120fps modes at lower resolutions), but that is done on the GPU so cannot be done by third parties.
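The readout-rate argument can be sanity-checked with rough pixel-throughput arithmetic (illustrative only; it ignores blanking intervals and exposure time, so these are lower bounds on what the sensor interface must sustain):

```shell
# Pixels per second the sensor must read out in each defined mode.
full_frame=$((2592 * 1944 * 15))   # full-frame stills mode at 15fps
video_1080=$((1920 * 1080 * 30))   # 1080p30 video mode
echo "full frame @15fps: ${full_frame} px/s"   # 75582720
echo "1080p30:           ${video_1080} px/s"   # 62208000
```

So the 15fps full-frame mode already moves more pixels per second than 1080p30, which is why raising the full-frame rate is not just a matter of asking for it.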
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Minimizing latency with streaming encoded h264?

Wed Nov 27, 2013 11:44 am

Please also consider a video mode that captures full frame (2592x1944), bins this 2x2 to 1296x972, and then center-crops to yield 1280x720.

This would give maximum quality wide 720p.

Still keeping my fingers crossed that this will be possible.
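For reference, the arithmetic behind the proposed mode works out as follows (a quick sketch; the row crop is just the 4:3-to-16:9 aspect change, while the 16 columns are the part under discussion):

```shell
# 2x2 binning halves each dimension; a central crop then reaches 1280x720.
binned_w=$((2592 / 2)); binned_h=$((1944 / 2))
echo "binned: ${binned_w}x${binned_h}"          # 1296x972
echo "columns cropped: $((binned_w - 1280))"    # 16
echo "rows cropped:    $((binned_h - 720))"     # 252
```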

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23631
Joined: Sat Jul 30, 2011 7:41 pm

Re: Minimizing latency with streaming encoded h264?

Wed Nov 27, 2013 3:00 pm

towolf wrote:Please also consider a video mode that captures full frame (2592x1944), bins this 2x2 to 1296x972, and then center-crops to yield 1280x720.

This would give maximum quality wide 720p.

Still keeping my fingers crossed that this will be possible.
That's an interesting one. Why the crop? We could scale the binned mode down to 720p (like we will need to scale it up to 1080p), which gives a slightly larger FOV, and I would only need to produce one mode that could be used for both 1080p and 720p (I think). There would be no performance difference doing a scale rather than a crop - the scaler can easily handle it.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Minimizing latency with streaming encoded h264?

Wed Nov 27, 2013 3:55 pm

Because scaling reduces sharpness. It's such a minor crop, image quality should take precedence.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23631
Joined: Sat Jul 30, 2011 7:41 pm

Re: Minimizing latency with streaming encoded h264?

Wed Nov 27, 2013 5:00 pm

towolf wrote:Because scaling reduces sharpness. It's such a minor crop, image quality should take precedence.
I think you'd be hard pushed to see any degradation when scaling down like this. We've certainly done something similar for production devices, and the customer's notoriously difficult-to-pass quality tests came back fine.

Although it's certainly possible, it's another mode that would need defining, and modes take time to get right, but I'll put it on the list. It may be that this is what Omnivision recommend for 720p; I can find out if that is the case.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."

gasguru
Posts: 10
Joined: Sat Nov 23, 2013 1:16 am

Re: Minimizing latency with streaming encoded h264?

Wed Nov 27, 2013 5:31 pm

jamesh wrote: The camera runs on the GPU, and is controlled (at a distance) by the Arm side code (raspistill/vid). The ARM code is OSS, the GPU code is not.
..uhuh, long term you guys face competition from the GPL GPU people; just tell the bossy ones to watch how e.g. the nouveau driver's performance _develops_ over time compared to the commercial drivers, and then tell them what kind of ethics drives this and other teams, up and down, going forward. ;o)
jamesh wrote: The camera is spec'd to 15fps at full resolution - you can believe something else if you like but that's the spec.
..I believe _anything_ I myself can see, touch, test, hear and smell. ;o)
jamesh wrote: We call full frame a stills mode and 1080p a video mode and are handled differently in the GPU,
..why? I see the point on the human interface side - we in meat space need conveniences to help us remember H&WTF to do when we e.g. wanna shoot video - but I don't see why the GPU or the camera or the camera board hw should know the difference between "raspistill -n -tl 66" and "raspivid -n -h 1944 -w 2592 -fps 15"; it's the same job and the same data, no?
jamesh wrote: but to the camera itself they are just different modes -
..does the camera need to know the difference? Or does it "just" feed data into the GPU?
jamesh wrote: there is a limit to how fast you can get the data off the sensor, which depends on frame size (ignoring exposure time). So larger frames = less fps. It's as simple as that.
..understood, longer exposures pull down fps too; are there low-end lighting limits in these modes? I'll have my next sunrise in mid February - is this why I have no hangs with -night or -verylong???
jamesh wrote: At the moment we have defined two modes, full frame and a 1080p30 mode. We need to do some more (full frame 1080p - which will actually be a scaled up binned mode, and 60, 90 and 120fps modes at lower resolutions) but that is done on the GPU so cannot be done by third parties.
..tell us about the 960p45 mode - how wide is it??? No mention of it in the specs, only a wee "960p at 45 fps" in some sales sheet; when I try it with -w 1920, I go beyond the "2592x1944@15" data pump job in the specs. ;o)

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23631
Joined: Sat Jul 30, 2011 7:41 pm

Re: Minimizing latency with streaming encoded h264?

Wed Nov 27, 2013 10:42 pm

gasguru wrote:
jamesh wrote: The camera runs on the GPU, and is controlled (at a distance) by the Arm side code (raspistill/vid). The ARM code is OSS, the GPU code is not.
..uhuh, long term you guys face competition from the GPL GPU people; just tell the bossy ones to watch how e.g. the nouveau driver's performance _develops_ over time compared to the commercial drivers, and then tell them what kind of ethics drives this and other teams, up and down, going forward. ;o)
So long term in fact that it's not worth worrying about. There's about a thousand man-years of work in the GPU code on the Raspi, by the people who designed the chip. Anyway, this is an old, old story. If you want to get things opened up, the people you need to talk to are Henry Samueli and Scott McGregor.
gasguru wrote:
jamesh wrote: The camera is spec'd to 15fps at full resolution - you can believe something else if you like but that's the spec.
..I believe _anything_ I myself can see, touch, test, hear and smell. ;o)
jamesh wrote: We call full frame a stills mode and 1080p a video mode and are handled differently in the GPU,
..why? I see the point on the human interface side - we in meat space need conveniences to help us remember H&WTF to do when we e.g. wanna shoot video - but I don't see why the GPU or the camera or the camera board hw should know the difference between "raspistill -n -tl 66" and "raspivid -n -h 1944 -w 2592 -fps 15"; it's the same job and the same data, no?
Yes, lots of things change when going between video and stills mode - camera tuning, exposure etc.
gasguru wrote:
jamesh wrote: but to the camera itself they are just different modes -
..does the camera need to know the difference? Or does it "just" feed data into the GPU?
The camera needs to be set up to output different line lengths, frame heights, crops etc. when moving between modes. So yes, it does need to know the difference.
gasguru wrote:
jamesh wrote: there is a limit to how fast you can get the data off the sensor, which depends on frame size (ignoring exposure time). So larger frames = less fps. It's as simple as that.
..understood, longer exposures pull down fps too; are there low-end lighting limits in these modes? I'll have my next sunrise in mid February - is this why I have no hangs with -night or -verylong???
The current limit in stills is about a 2s exposure. Any more than that requires changes to the driver that I have been unable to get working, due to the hackiness of getting these longer exposures - the higher levels don't understand what is going on. Some of the numbers start to exceed 32 bits and there may be overflows, or changing line lengths and frame lengths causes the camera interface to get confused. But I don't have any more time to investigate.
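For anyone probing that ceiling: raspistill's -ss flag takes the shutter speed in microseconds, so the ~2s limit corresponds to roughly the value computed below (the invocation printed here is an illustrative sketch, not a command from this thread):

```shell
# raspistill's -ss is in microseconds; ~2s is the stated ceiling in
# stills mode. Printed rather than run, since it needs a camera board.
shutter_us=$((2 * 1000000))
echo "raspistill -n -ISO 800 -ss ${shutter_us} -o long_exposure.jpg"
```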
gasguru wrote:
jamesh wrote: At the moment we have defined two modes, full frame and a 1080p30 mode. We need to do some more (full frame 1080p - which will actually be a scaled up binned mode, and 60, 90 and 120fps modes at lower resolutions) but that is done on the GPU so cannot be done by third parties.
..tell us about the 960p45 mode - how wide is it??? No mention of it in the specs, only a wee "960p at 45 fps" in some sales sheet; when I try it with -w 1920, I go beyond the "2592x1944@15" data pump job in the specs. ;o)
No idea. Never looked at anything around 45fps - only 60 and 90, neither of which I got working. I think they were VGA.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Minimizing latency with streaming encoded h264?

Fri Nov 29, 2013 12:58 am

jamesh wrote:
towolf wrote:Because scaling reduces sharpness. It's such a minor crop, image quality should take precedence.
I think you'd be hard pushed to see any degradation when scaling down like this. We've certainly done something similar for production devices, and the customer's notoriously difficult-to-pass quality tests came back fine.
I’m not building a mobile phone. Now that I have a mobile phone with a camera, I must say I’m severely underwhelmed. Dunno what those elevated quality demands are. In the end it all gets jumbled up in Instagram with tacky photo filters anyway.

Fact is, even better scaling kernels like Mitchell or Lanczos lose sharpness on downscaling even if they avoid moiré. I don’t assume the GPU does anything better than bilinear. I’d rather lose 16 columns than have moiré or aliasing.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23631
Joined: Sat Jul 30, 2011 7:41 pm

Re: Minimizing latency with streaming encoded h264?

Fri Nov 29, 2013 8:44 am

towolf wrote:
jamesh wrote:
towolf wrote:Because scaling reduces sharpness. It's such a minor crop, image quality should take precedence.
I think you'd be hard pushed to see any degradation when scaling down like this. We've certainly done something similar for production devices, and the customer's notoriously difficult-to-pass quality tests came back fine.
I’m not building a mobile phone. Now that I have a mobile phone with a camera, I must say I’m severely underwhelmed. Dunno what those elevated quality demands are. In the end it all gets jumbled up in Instagram with tacky photo filters anyway.

Fact is, even better scaling kernels like Mitchell or Lanczos lose sharpness on downscaling even if they avoid moiré. I don’t assume the GPU does anything better than bilinear. I’d rather lose 16 columns than have moiré or aliasing.
You are correct that scaling down is bilinear; up is trapezoidal. I think that with a difference of only 16 columns you would still be hard pressed to see any difference between the cropped and the scaled version.

I think you would be extremely surprised by the level of quality required by the camera labs of certain phone manufacturers. Don't judge all phones against yours. Samsung and Nokia are extremely picky on their higher-end devices - others much, much less so. The Broadcom ISP is one of only a very few qualified for one of those companies, and the competition is fierce. What phone do you have that is underwhelming? Unless it's an S3/4 or related, or a Nokia N8/808 or Lumia 1020, I won't be surprised.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."

matheweis
Posts: 1
Joined: Wed Feb 19, 2014 6:48 pm

Re: Minimizing latency with streaming encoded h264?

Wed Feb 19, 2014 6:57 pm

Just checking to see if there is any update on getting the Periodic/Cyclic Intra Refresh into the RPI firmware? Also, on a related topic, has there been any progress with bumping the camera up to 60fps?

I was hoping to use the RPI for an ultra low latency H.264 camera, but have come up against a wall at about 180ms +/- 20ms ("glass to glass" timing)
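While waiting on firmware-side intra refresh, the encoder flags raspivid already exposes can shave some of that latency. A hedged sketch (flag names as in raspivid's usage text; the exact numbers are illustrative, and the command is printed rather than run since it needs the camera):

```shell
# Encoder-side knobs that usually matter for glass-to-glass latency:
#   -g 30        short intra period, so stream recovery is quick
#   -ih          repeat SPS/PPS inline, so a decoder can join mid-stream
#   -pf baseline baseline profile has no B-frames, so no reordering delay
cmd="raspivid -t 0 -w 1280 -h 720 -fps 30 -g 30 -ih -pf baseline -b 2000000 -o -"
echo "$cmd"
```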


jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23631
Joined: Sat Jul 30, 2011 7:41 pm

Re: Minimizing latency with streaming encoded h264?

Fri Feb 21, 2014 4:19 pm

matheweis wrote:Just checking to see if there is any update on getting the Periodic/Cyclic Intra Refresh into the RPI firmware? Also, on a related topic, has there been any progress with bumping the camera up to 60fps?

I was hoping to use the RPI for an ultra low latency H.264 camera, but have come up against a wall at about 180ms +/- 20ms ("glass to glass" timing)
We are actively looking at the higher frame rates, and the full FOV for video. Hopefully next week. Been very busy on other stuff. As stated in another thread, we've got 58fps out of it.

Don't think we are looking at P/CIR at all.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."
