User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Nov 05, 2013 12:46 pm

selectnone wrote:Here's a bug I noticed - if I set the camera's ISO setting to >800, it throws an error stating that the range is 0 to 1600.

The docs suggest it might not actually do anything, but the error ought to at least be consistent ;)
Oops - yup, that's another one to squish!
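For anyone else who hits it, the repro is just this (a minimal sketch - attribute spelling from memory, so double-check it against the docs):

Code: Select all

import picamera

# Minimal repro sketch: setting ISO above 800 raises an error whose message
# claims the valid range is 0 to 1600
with picamera.PiCamera() as camera:
    camera.ISO = 1000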

SnowLeopard
Posts: 106
Joined: Sun Aug 18, 2013 6:10 am

Re: Pure Python camera interface

Tue Nov 05, 2013 3:25 pm

Regarding ISO, raspistill has a problem with it too.
If you use the -iso option, the maximum value it will accept is 800, but when using -ex sports you get ISO values up to 1600 (as per the EXIF tags).

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Nov 05, 2013 9:16 pm

jbeale wrote:
waveform80 wrote:
When using video-port based capture only the preview area is captured; in some cases this may be desirable (see the discussion under Preview vs Still resolution).
I really ought to expand on this and add some bits to the API docs for it, so consider this a test run to see if I can explain it better! Basically when use_video_port is True, you *are* capturing video (at least as far as the camera is concerned). The only difference is that instead of strapping an H.264 encoder to the video port, we strap a JPEG encoder onto it instead. The upshot is that instead of receiving a stream of H.264 frames we receive a stream of JPEG frames (which we then write out to different files/streams as opposed to sticking them all in a single file/stream).
In that case, I am confused about why I am getting the full field of view for the full-resolution 2592x1944 case, even though I am in video mode?
Ah, setting the resolution to the full field of view forces the camera (in both preview and video mode) to use the full field of view, but automatically lowers the framerate to 15fps to permit it to do so. That's what the bit at the end of the "Preview vs Still Resolution" section is going on about. Interestingly, previewing full-frame at 15 fps works, but recording video to H.264 in full-frame doesn't (results in MMAL returning "out of resources (other than memory)"). However, when a JPEG encoder is attached to the video port in full-frame it *does* work (although the bandwidth required is naturally enormous).

Again, I'm sure the documentation needs some improvement in this area - there's a lot that isn't explained terribly well (partially because I'm not entirely clear on it myself - I've no idea why the JPEG encoder works with the above setup and the H.264 one doesn't, or where the cut-off to 15fps actually is - I've implemented it at anything >1920x1080 but I'm not at all sure that's right!).

Anyway, please do point out discrepancies like this - it'll only help improve the docs in the long-run - and I'm definitely interested in experiments people do with these settings (I'd never tried full-frame captures with use_video_port=True before now!)
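For anyone wanting to repeat the experiment, it's roughly as simple as the following (just a sketch - exact behaviour will depend on your firmware and GPU memory split):

Code: Select all

import io
import picamera

# Full-frame capture through the video port: a JPEG encoder is attached to the
# video port instead of the H.264 encoder
with picamera.PiCamera() as camera:
    camera.resolution = (2592, 1944)   # full field of view; framerate drops to 15fps
    stream = io.BytesIO()
    camera.capture(stream, format='jpeg', use_video_port=True)
    print('Captured %d bytes' % stream.tell())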

Cheers,

Dave.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Nov 05, 2013 9:26 pm

selectnone wrote:If I remember right, I wasn't actually able to get the example given in the docs to work, as given
<snip>
Unfortunately, it turns out you're using an old version of the docs there. If you skip to a later version (0.6 or latest) you should find that example got fixed. I should probably remove some of the older versions of the docs at some point (I've already deleted 0.2 and 0.3) but I'm not entirely sure what versions people are still using so I've left 0.4 onwards so far.


Edit:
Argh, no it didn't - it just got moved to capture_continuous... bah! Right, time to fix after all...


Cheers,

Dave.

selectnone
Posts: 55
Joined: Fri Jun 22, 2012 10:16 pm

Re: Pure Python camera interface

Wed Nov 06, 2013 12:06 am

Ahah! I think I was just googling for 'picamera' and relevant keywords, once I knew that the docs existed - I didn't realise there were old versions too :D

Edit: Yeah, I'd failed to get the current-version example to work - the old-version was just the one I found when writing up the report

User avatar
jbeale
Posts: 3571
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Wed Nov 06, 2013 1:30 am

waveform80 wrote:Ah, setting the resolution to the full field of view forces the camera (in both preview and video mode) to use the full field of view, but automatically lowers the framerate to 15fps to permit it to do so. That's what the bit at the end of the "Preview vs Still Resolution" section is going on about. Interestingly, previewing full-frame at 15 fps works, but recording video to H.264 in full-frame doesn't (results in MMAL returning "out of resources (other than memory)"). However, when a JPEG encoder is attached to the video port in full-frame it *does* work (although the bandwidth required is naturally enormous).
Thanks for that clarification. In an ideal world it would be nice to have the option (even in video mode) to downscale the full field of view within the GPU, rather than as a post-scaling process on the CPU (if I understood that correctly). I guess that will need to await firmware to implement that functionality.

Presumably the VideoCore-IV architecture was designed only to handle 1920x1080 compression to H.264 and runs out of discrete-cosine multiplier blocks or somesuch, if you try to request more pixels per frame than Full-HD.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Wed Nov 06, 2013 9:59 am

jbeale wrote:
waveform80 wrote:Ah, setting the resolution to the full field of view forces the camera (in both preview and video mode) to use the full field of view, but automatically lowers the framerate to 15fps to permit it to do so. That's what the bit at the end of the "Preview vs Still Resolution" section is going on about. Interestingly, previewing full-frame at 15 fps works, but recording video to H.264 in full-frame doesn't (results in MMAL returning "out of resources (other than memory)"). However, when a JPEG encoder is attached to the video port in full-frame it *does* work (although the bandwidth required is naturally enormous).
Thanks for that clarification. In an ideal world it would be nice to have the option (even in video mode) to downscale the full field of view within the GPU, rather than as a post-scaling process on the CPU (if I understood that correctly). I guess that will need to await firmware to implement that functionality.

Presumably the VideoCore-IV architecture was designed only to handle 1920x1080 compression to H.264 and runs out of discrete-cosine multiplier blocks or somesuch, if you try to request more pixels per frame than Full-HD.
Yup - that was basically my guess - there's not enough of <insert-processing-resource-here> to do H.264 at full-frame (which naturally requires considerably more processing than JPEG to encode). Unfortunately, as you point out, this is all firmware-side stuff so we'll have to wait and see if anything can be done to bring parity to the capture area of stills and video. I'll keep an eye on the forums and the userland github, and try and keep picamera up to date with any new developments!


Cheers,

Dave.

MacHack
Posts: 21
Joined: Sat May 19, 2012 1:00 pm

Re: Pure Python camera interface

Wed Nov 06, 2013 10:35 am

Hi,

if you're looking for a way to resize the image in the GPU between the video_output_port and the encoder_input_port, you could take a look at the code I wrote for the mjpeg-streamer: http://www.raspberrypi.org/phpBB3/viewt ... &start=100

It uses a resizer component that I borrowed from wibble82 to resize the full-resolution image in the GPU before the encoder.
It means that you can get the full field of view at resolutions lower than 2592x1944 using the video mode.
However, the downside is that the framerate is limited to 15 fps.

I hope this code will be useful.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Wed Nov 06, 2013 10:51 am

MacHack wrote:Hi,

if you're looking for a way to resize the image in the GPU between the video_output_port and the encoder_input_port, you could take a look at the code I wrote for the mjpeg-streamer: http://www.raspberrypi.org/phpBB3/viewt ... &start=100

It uses a resizer component that I borrowed from wibble82 to resize the full-resolution image in the GPU before the encoder.
It means that you can get the full field of view at resolutions lower than 2592x1944 using the video mode.
However, the downside is that the framerate is limited to 15 fps.

I hope this code will be useful.

Most interesting! Thanks for posting that - I'm just skimming through the code at the moment, but the "vc.ril.resize" component looks extremely useful. Now comes the hard part of figuring out a nice simple interface for picamera to present this... (I'm all ears for suggestions!)
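Just thinking out loud, one possibility might be something along these lines (entirely hypothetical at this point - no such parameter exists in picamera yet):

Code: Select all

import time
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (2592, 1944)       # full field of view from the sensor
    # "resize" is the hypothetical bit: ask the GPU resizer to downscale the
    # frames before they reach the encoder
    camera.start_recording('video.h264', resize=(1024, 768))
    time.sleep(10)
    camera.stop_recording()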


Cheers,

Dave.

Alan Johnstone
Posts: 45
Joined: Tue Jan 08, 2013 4:35 pm

Re: Pure Python camera interface

Thu Nov 07, 2013 12:27 pm

I am using your example code to try to get an OpenCV image.
I get the following error:

TypeError: expected string or Unicode object, _io.BytesIO found.

What am I doing wrong?

Thanks

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Thu Nov 07, 2013 5:25 pm

Alan Johnstone wrote:I am using your example code to try to get an OpenCV image.
I get the following error:

TypeError: expected string or Unicode object, _io.BytesIO found.

What am I doing wrong?

Thanks
Sorry about that! You're not doing anything wrong - I just forgot to actually test that example (I vaguely recall something about an 8 hour compile time and just going "oh commit it, it'll be fine..."). Having read the docs, it seems imread() doesn't deal with anything but filenames. However, imdecode() will read an image from a data structure (specifically a numpy array). Now that I've actually got a copy of OpenCV in my test environment I've managed to come up with an example that actually works (and I've actually tested it this time!)

You should find it at http://picamera.readthedocs.org/en/late ... ncv-object in a few minutes (I've just pushed it to GitHub, so ReadTheDocs may take a minute or two to notice and rebuild stuff). If the docs look a bit different there, that's because they're the latest version (for the upcoming release) and ReadTheDocs seems to have upgraded their default theme since the last release.


Cheers,

Dave.

Alan Johnstone
Posts: 45
Joined: Tue Jan 08, 2013 4:35 pm

Re: Pure Python camera interface

Thu Nov 07, 2013 6:02 pm

Thanks, that works, which is a great start as I have not found any other solution that allows me to use the Python IDE with the RPi camera.
I am not sure how much extra time the conversion is going to take.

selectnone
Posts: 55
Joined: Fri Jun 22, 2012 10:16 pm

Re: Pure Python camera interface

Fri Nov 08, 2013 10:42 am

FYI, here's what I've done with the PiCamera library:

http://www.youtube.com/watch?v=-8S4kEq3Drk

(I show the camera feature around the 5:10 mark)

Edit: I've since more than doubled the camera framerate after I managed to get capture_continuous working, using waveform80's suggestion to let PyGame convert a stream-copy instead of the stream itself:

Code: Select all

stream_copy = io.BytesIO(stream.getvalue())
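In context it looks roughly like this (simplified from my actual code, so treat it as a sketch rather than something to copy verbatim):

Code: Select all

import io
import pygame
import picamera

# capture_continuous keeps reusing one BytesIO; pygame loads each frame from an
# independent copy of its contents
pygame.init()
screen = pygame.display.set_mode((640, 480))
stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    for _ in camera.capture_continuous(stream, format='jpeg', use_video_port=True):
        stream_copy = io.BytesIO(stream.getvalue())
        frame = pygame.image.load(stream_copy, 'frame.jpg')  # the hint tells pygame it's a JPEG
        screen.blit(frame, (0, 0))
        pygame.display.flip()
        # rewind and truncate the original stream ready for the next capture
        stream.seek(0)
        stream.truncate()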

Alan Johnstone
Posts: 45
Joined: Tue Jan 08, 2013 4:35 pm

Re: Pure Python camera interface

Sat Nov 09, 2013 12:35 pm

I think I was a little hasty saying the new version of getting an OpenCV image worked. When I tried it first the camera was pointing at the ceiling and I did not look closely at the result.

First I think you are asking for a grayscale image by using cv2.imdecode(data, 0). I think the 0 must be a 1.

Then I think the result has its dimensions swapped and the RGB becomes BGR.

It is a little difficult for me to be sure because I am running the RPi headless using TightVNC Viewer,
so I cannot see the result of cv2.imshow (nor can I see camera.preview). I have to convert it into a SimpleCV image and then view it.
Thus there is just the possibility that the problem is in that conversion.

I have tried to understand the numpy manual as to how to transpose width and height in a 3D array and swap the R with the B but so far I have been unsuccessful

Incidentally, I am not getting anything like the frames per second that you are suggesting.
Using your code to take just the photo at 640x480 resolution, without any conversion, display, preview or saving to disc, I only get 1.7fps. When I use the PIL image example to get to a SimpleCV image and just output it without any analysis I get 0.6fps.
This seems to be slower than taking the picture using raspistill then reading the result and displaying it.


Alan

Alan Johnstone
Posts: 45
Joined: Tue Jan 08, 2013 4:35 pm

Re: Pure Python camera interface

Sat Nov 09, 2013 4:16 pm

This works, though I am sure there are numpy experts who can do better:

Code: Select all

import io
import time
import picamera
import cv2
import numpy as np

# Create the in-memory stream
stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture(stream, format='jpeg')
# Construct a numpy array from the stream
data = np.fromstring(stream.getvalue(), dtype=np.uint8)
# "Decode" the image from the array
image = cv2.imdecode(data, 0)
# transpose width and height
trans=image.transpose(1,0,2)
# swap R and B channels
final=trans.copy()
B=trans[:,:,0]
R=trans[:,:,2]
final[:,:,0]=R
final[:,:,2]=B
The code transposes the width and height and then swaps the R and B colours.
I cannot think of a way of doing it without copying at least some of the array.

In terms of speed this is slower than using the PIL method - about 0.4fps compared with 0.6fps.

Alan Johnstone
Posts: 45
Joined: Tue Jan 08, 2013 4:35 pm

Re: Pure Python camera interface

Sat Nov 09, 2013 4:19 pm

CORRECTION: in the above code, change
image = cv2.imdecode(data, 0)

to
image = cv2.imdecode(data, 1)

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sat Nov 09, 2013 8:39 pm

Alan Johnstone wrote:I think I was a little hasty saying the new version of getting an OpenCV image worked. When I tried it first the camera was pointing at the ceiling and I did not look closely at the result.

First I think you are asking for a grayscale image by using cv2.imdecode(data, 0). I think the 0 must be a 1.
Argh - you are indeed correct, the last parameter should be 1 (or anything non-zero really) to have imdecode leave the image as color. Might be worth experimenting to see if there's a performance difference between 1 (which apparently strips alpha channels) and -1 (which doesn't). Obviously there's no alpha channel to strip in data from the camera, but if -1 lets it skip some pointless processing, so much the better (although the difference may turn out to be nothing).
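If anyone fancies trying that, a quick-and-dirty timing loop along these lines should do it (a sketch only - the 20 iterations and flag values are just for illustration):

Code: Select all

import io
import time
import picamera
import cv2
import numpy as np

# Grab one JPEG from the camera, then time imdecode with each flag value
stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.capture(stream, format='jpeg')
data = np.fromstring(stream.getvalue(), dtype=np.uint8)
for flag in (1, -1):
    start = time.time()
    for i in range(20):
        cv2.imdecode(data, flag)
    print('flag=%2d: %.3fs for 20 decodes' % (flag, time.time() - start))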
Alan Johnstone wrote: Then I think the result has its dimensions swapped and the RGB becomes BGR.

It is a little difficult for me to be sure because I am running the RPi headless using TightVNC Viewer,
so I cannot see the result of cv2.imshow (nor can I see camera.preview). I have to convert it into a SimpleCV image and then view it.
Thus there is just the possibility that the problem is in that conversion.

I have tried to understand the numpy manual as to how to transpose width and height in a 3D array and swap the R with the B but so far I have been unsuccessful
Ah, not quite. I realize it seems odd that the height comes first in the resulting array, but actually that's correct and the right way round (if you transpose it you'll wind up with a sideways image). Have a look at the raw capture recipes for some examples of this.

That said, the reversal of the RGB order is indeed odd. I assume there's some good CV-related reason for the OpenCV library doing this but I've no idea what it is off the top of my head. The simplest way I can think of to reverse the order of the color bits is as follows (assuming "img" is a numpy array containing image data structured [height, width, color-bits]):

Code: Select all

img = img[:, :, ::-1]
This rather obscure snippet of slicing code basically does the following:
  • The first ":" means "leave the height dimension as it is"
  • The second ":" means "leave the width dimension as it is"
  • The "::-1" means "take all elements of the color-bits-dimension and return them in reverse order"
You can do something similar with any sliceable sequence, e.g. strings. For example:

Code: Select all

>>> s = 'foo bar baz'
>>> print s[::-1]
zab rab oof
Alan Johnstone wrote: Incidentally, I am not getting anything like the frames per second that you are suggesting.
Using your code to take just the photo at 640x480 resolution, without any conversion, display, preview or saving to disc, I only get 1.7fps. When I use the PIL image example to get to a SimpleCV image and just output it without any analysis I get 0.6fps.
This seems to be slower than taking the picture using raspistill then reading the result and displaying it.
About the only light I can shed on this is that my Pi is overclocked to 900MHz (which shouldn't make much difference - it's not that much faster than the base clock), and I'm using a non-Kingston class 10 SD card for storage (not that classes are particularly meaningful for SD cards; I only mention the manufacturer because of some concerns I've read about Kingston cards).

For what it's worth, here's a little snippet I just whipped together to make sure I wasn't imagining my earlier figures:

Code: Select all

#!/usr/bin/env python

import picamera
import time

FRAMES = 30

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    start = time.time()
    for index, name in enumerate(camera.capture_continuous('image{counter:02d}.jpg')):
        print('Captured %s' % name)
        time.sleep(0.1)
        if index == FRAMES - 1:
            break
    finish = time.time()
    print('Captured %d frames in %.2fs at %.2ffps' % (
        FRAMES,
        finish - start,
        FRAMES / (finish - start),
        ))    
Here's the output on my pi with picamera 0.6:

Code: Select all

...
Captured image28.jpg
Captured image29.jpg
Captured image30.jpg
Captured 30 frames in 9.20s at 3.26fps
Incidentally, that code (if run for long enough) suffers from issue #22 - images eventually fade to black because the preview isn't running. In the course of fixing this for 0.7, I've re-run the script above against the current revision of the code, and I get significantly slower captures - 1.67fps, awfully close to your 1.7fps measurement. It seems that using a null-sink causes as much slow down as using an actual preview, so after 0.7 is out I'm afraid people won't be able to get much faster captures than that anyway without resorting to the use_video_port trick.

Speaking of which, though, if I modify the code above to use the video port (with all the attendant issues that can raise):

Code: Select all

#!/usr/bin/env python

import picamera
import time

FRAMES = 30

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    start = time.time()
    for index, name in enumerate(camera.capture_continuous('image{counter:02d}.jpg', use_video_port=True)):
        print('Captured %s' % name)
        if index == FRAMES - 1:
            break
    finish = time.time()
    print('Captured %d frames in %.2fs at %.2ffps' % (
        FRAMES,
        finish - start,
        FRAMES / (finish - start),
        ))
Now I get the following:

Code: Select all

...
Captured image28.jpg
Captured image29.jpg
Captured image30.jpg
Captured 30 frames in 1.22s at 24.50fps
Of course, it can't sustain that sort of performance for long - soon the disk cache fills up, and the process stalls for a long write, but I've run something similar over a network interface for a few hundred images without it stalling out (although previously that also suffered from issue #22 - hopefully the null-sink will fix video-port captures too but I've yet to test that).

Anyway, thanks for your persistence on the OpenCV example - I'll get the corrected example uploaded to the docs tonight!
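For reference, here's roughly what the corrected example will look like (pulling together the fixes from this thread - do double-check it against the docs once they've rebuilt):

Code: Select all

import io
import time
import picamera
import cv2
import numpy as np

# Capture a JPEG into an in-memory stream
stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture(stream, format='jpeg')
# Construct a numpy array from the stream and let OpenCV decode it
data = np.fromstring(stream.getvalue(), dtype=np.uint8)
image = cv2.imdecode(data, 1)    # 1 = decode as colour; OpenCV returns BGR order
# Reverse the colour channels if you need RGB (no transpose required - the
# array is already indexed [height, width, channel])
image = image[:, :, ::-1]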


Cheers,

Dave.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sat Nov 09, 2013 11:07 pm

selectnone wrote:FYI, here's what I've done with the PiCamera library:

http://www.youtube.com/watch?v=-8S4kEq3Drk

(I show the camera feature around the 5:10 mark)
That is ... well, let's just say I've lost track of the number of hours I've burned on Fallout (Steam hasn't, but I don't dare look at that counter!). Incidentally, if you wanted a near real-time view of the camera in your V.A.T.S. emulation, have you considered just using the preview, making it semi-transparent (with preview_alpha), positioning it (with preview_window), and desaturating it (either with saturation or color_effects=(0,0) - which when translucent should allow your overlay to colour it to some extent)? However, I've no idea how you'd actually tie OpenCV stuff in there (e.g. for identification of body parts ;)
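Something along these lines, for example (an untested sketch - the alpha, window and desaturation values are just for illustration):

Code: Select all

import time
import picamera

# Semi-transparent, desaturated preview positioned over part of the screen, so
# an overlay drawn on the framebuffer shows through it
with picamera.PiCamera() as camera:
    camera.saturation = -100                  # fully desaturate the camera output
    camera.start_preview()
    camera.preview_alpha = 128                # roughly 50% transparent
    camera.preview_window = (0, 0, 640, 480)  # x, y, width, height on the display
    time.sleep(30)
    camera.stop_preview()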


Cheers,

Dave.

Alan Johnstone
Posts: 45
Joined: Tue Jan 08, 2013 4:35 pm

Re: Pure Python camera interface

Sun Nov 10, 2013 5:42 pm

img[:,:,::-1] worked and I am sure it is much faster than my method.
I knew that numpy stored images as height x width x BGR, but the SimpleCV image command
img=Image(cvsimage=True) is supposed to sort that out.

It doesn't.

I tried eliminating the BytesIO object and the decode step by simply importing an image into OpenCV with cv2.imread("image.jpg") and then trying to convert it into a SimpleCV image as before, with the same result.
Looking at the Image class, it seems that it should use the exact same numpy methods as I am doing explicitly,
i.e. img[:,:,::-1].transpose(1,0,2),
but for some reason it is either not getting to that part of the program or it is just not working. (My expertise in Python is limited, but the Windows and RPi code paths are different, and one route through the program on the Windows version seems to do the transformation twice!)

So until some SimpleCV developer stumbles across this discussion, you need to do the conversion yourself. (There seems to be little activity on the SimpleCV website, so maybe the project is dormant.)

However, going via the PIL image route is quicker (0.99fps versus 0.74fps) and simpler to understand.

I used the test program as you did and got a comparable 2.49fps from my standard RPi.
I was not sure why you had the 0.1 second delay; when that and the print statements are removed it gets to 3.51fps.
Removing the display of the image, which I will only need while debugging, gets the speed up to 6fps.
So the continuous mode, when used properly (which I was not doing before), is a bit faster than the normal capture method.

So finally, unless you can tell me how to see the display of the OpenCV imshow command (or otherwise) with a headless RPi,
we are done.

Thanks for all your help.

Alan

selectnone
Posts: 55
Joined: Fri Jun 22, 2012 10:16 pm

Re: Pure Python camera interface

Mon Nov 11, 2013 1:31 pm

waveform80 wrote:
selectnone wrote:FYI, here's what I've done with the PiCamera library:

http://www.youtube.com/watch?v=-8S4kEq3Drk

(I show the camera feature around the 5:10 mark)
That is ... well, let's just say I've lost track of the number of hours I've burned on Fallout (Steam hasn't, but I don't dare look at that counter!). Incidentally, if you wanted a near real-time view of the camera in your V.A.T.S. emulation, have you considered just using the preview, making it semi-transparent (with preview_alpha), positioning it (with preview_window), and desaturating it (either with saturation or color_effects=(0,0) - which when translucent should allow your overlay to colour it to some extent)? However, I've no idea how you'd actually tie OpenCV stuff in there (e.g. for identification of body parts ;)


Cheers,

Dave.
I'd rather keep the video-layering in Python if possible, to keep all my hackiness consistent.
I got the video streaming a lot faster since I made that video, so I'm not too worried about speed now.

I suspect doing a "real" VATS view with body-parts would probably be beyond the Pi's abilities, but I've not experimented with OpenCV yet - I guess it doesn't need to be in realtime, as the Fallout VATS-camera does pause the game...

My intent is that this mode takes photos when a button is pressed - it'll take a high-res image and show it in full colour on-screen; I suppose I'd better run a shape-detection method over it at that point for authenticity :)

randallsussex
Posts: 13
Joined: Thu Oct 10, 2013 5:13 pm

Re: Pure Python camera interface

Thu Nov 14, 2013 8:34 pm

Folks,

To fill in the end of this problem:

Traceback (most recent call last):
  File "testUseCamera.py", line 31, in <module>
    useCamera.takeSinglePicture("test", 1)
  File "./actions/useCamera.py", line 80, in takeSinglePicture
    camera = picamera.PiCamera()
  File "/usr/local/lib/python2.7/dist-packages/picamera-0.6-py2.7.egg/picamera/camera.py", line 206, in __init__
    GPIO.setup(5, GPIO.OUT, initial=GPIO.LOW)
RPIO.Exceptions.InvalidChannelException: The channel sent is invalid on a Raspberry Pi (not a valid gpio)

I had installed the RPIO library (I wanted it because of its support for DMA-driven PWM - it's cool), but it is NOT compatible with picamera. I uninstalled RPIO, put the latest RPi.GPIO library back in, and used its software-based PWM. While it is jittery (because of the multi-tasking Linux on the Pi), I'm using a servo with a pot on it, so I can see where the servo is and hit it again until I get what I want.

However, I had already taken out the picamera library, so I am not SURE that this fixed the problem. Next project.

Randall

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Thu Nov 14, 2013 9:13 pm

Well, release 0.7 is out. It's not quite the release I'd hoped for - I'm going to need a bit more time to perfect the resizer stuff - it works, but I'm not sure I've got the API quite right yet. In the meantime, there have been plenty of bug fixes (crop, fade-to-black), some new features (video quantization and inline headers), and some doc improvements, so it's worth doing a release anyway.

Hopefully next time there'll be some rather more impressive features - specifically, the bits I'm working on are getting full-frame video capture (via the resizer), equivalent preview/capture area (again via the resizer), capture of stills while recording without frame drop (via the video-splitter) and if I can find the time, overlays via the new OpenGL stuff that's just been added to the latest version of raspistill (though this may involve a certain amount of learning OpenGL on my part!).

Anyway, as always enjoy and do feel free to report bugs / request feature enhancements, etc.


Cheers,

Dave.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Thu Nov 14, 2013 11:56 pm

randallsussex wrote:...

I uninstalled the RPIO library (I wanted the RPIO library because of their support of DMA driven PWM - it's cool) but it is NOT compatible with picamera.
...
Sadly correct - in 0.7 I've removed the RPIO bits because, as you correctly point out, they just don't work (and I should again point out, this is entirely my fault for not properly testing the led property with RPIO). That said, I finally pulled my finger out and submitted a pull request upstream which should permit RPIO to control the camera LED in future (assuming it's accepted which, given it's a bit of a dirty hack, it may not be - then again, it's based on the same method as RPi.GPIO's control of the camera LED ... I guess we'll see). Anyway, if the pull request makes it in, I'll revert the RPIO removal at the next opportune point so that both libraries will be supported.


Cheers,

Dave.

User avatar
jbeale
Posts: 3571
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Fri Nov 15, 2013 12:20 am

waveform80 wrote:...the bits I'm working on are getting full-frame video capture (via the resizer), equivalent preview/capture area (again via the resizer), capture of stills while recording without frame drop (via the video-splitter) and if I can find the time, overlays via the new OpenGL stuff that's just been added to the latest version of raspistill (though this may involve a certain amount of learning OpenGL on my part!)
If you get any of those things working, that would be very cool, especially the first one (resizer to capture the full-frame field of view).

Redsandro
Posts: 27
Joined: Mon Nov 25, 2013 7:19 pm
Location: The Netherlands
Contact: Website

Re: Pure Python camera interface

Tue Nov 26, 2013 7:16 am

Wow, this is great. A pure Python interface for capturing video and images would be very welcome.

I see you plan to be able to capture images while capturing video using a video splitter. This will be valuable. You can record on motion events, and still send a low-fi stills-stream over the web for remote viewing.

Both my Canon HV20 and Canon EOS D60 can take stills while filming, and the stills are bigger and different aspect ratio. This makes me wonder if the hardware CMOS itself can't just spit out two images. I mean, the EOS has a powerful pixel processor, but the HV20 kinda sucks and is still able to do it. :mrgreen:
