User avatar
Marcos.Scholl
Posts: 31
Joined: Wed Feb 05, 2014 7:12 pm
Location: Brasil

Re: Pure Python camera interface

Thu Feb 13, 2014 3:18 pm

jbeale wrote:omxplayer and hello-video use the GPU hardware acceleration for playback. I don't think any other program does. Software decode on the ARMv6 is slow.
As I understand it, you mean that omxplayer is best suited for working with GPU acceleration, correct?
Could you explain how omxplayer works? Does it work with streaming? Sorry for my English.
Marcos.

User avatar
CopterRichie
Posts: 131
Joined: Tue Mar 26, 2013 3:14 am
Location: Los Angeles CA.

Re: Pure Python camera interface

Thu Feb 13, 2014 3:21 pm

jbeale wrote:omxplayer and hello-video use the GPU hardware acceleration for playback. I don't think any other program does. Software decode on the ARMv6 is slow.
Please, I have tried to get omxplayer to decode the stream from a second Pi but was unable to get it to work reading from stdin. Would you have command-line instructions for getting this to work? It would surely be appreciated.

Thank you.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Thu Feb 13, 2014 3:52 pm

Hi Marcos,

Indeed - as jbeale points out, only omxplayer on the Pi does GPU-decoding of video so I'm going to have to figure out a way of getting that working with the network streaming code (it's not as friendly as VLC and mplayer when it comes to unorthodox inputs like this). Unfortunately I won't have any time to work on this until the weekend.


Dave.

Ketta
Posts: 9
Joined: Mon Feb 10, 2014 2:18 pm

Re: Pure Python camera interface

Fri Feb 14, 2014 3:03 pm

Dave,

I seem to have an issue where any time I use the camera's .capture() method I get an image that is much darker than the preview, or than what I get when I enable the video-port capture setting. Do you know why that may be? I should mention I am using the camera model without the infrared filter, with an additional lens filter on it to block visible light, but my infrared sources show up well in the videos and in smaller-resolution images. As far as settings go, I have turned white balance off, set the shutter speed to 60ms for my project's purposes, and set the colour effects to a monochrome setting, also per the project specifications. Otherwise, nothing has been changed.

User avatar
jbeale
Posts: 3517
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Fri Feb 14, 2014 5:09 pm

When you use raspistill to take an image immediately, without a pause to let the AGC (auto gain control) work, the image is often underexposed. This may be happening here as well. At least with raspistill, you can avoid this by running the camera with the preview for a few seconds before capturing the image, during which time the AGC sets the exposure properly.

I'm not aware of any camera-phone device where you can turn the camera on from a cold start and instantly grab a photo; I believe it's for this same reason.

There was a separate problem with raspistill in time-lapse mode, with the exposure tending to become darker over time when the camera remained on but was not running the preview; however, I thought that had been fixed some months ago. Perhaps some version of it lives on in the Python interface; I don't know.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Fri Feb 14, 2014 6:07 pm

jbeale wrote:When you use raspistill to take an image immediately, without a pause to let the AGC (auto gain control) work, the image is often underexposed. This may be happening here as well. At least with raspistill, you can avoid this by running the camera with the preview for a few seconds before capturing the image, during which time the AGC sets the exposure properly.

I'm not aware of any camera-phone device where you can turn the camera on from a cold start and instantly grab a photo; I believe it's for this same reason.
This would be my guess as well. Usually I give the camera a 2-second pause after init before doing anything, just to give the AWB and exposure bits time to work (as mentioned in other threads on this forum, fixing AWB doesn't *really* fix it ... yet, at least).
jbeale wrote:There was a separate problem with raspistill in time-lapse mode, with the exposure tending to become darker over time when the camera remained on but was not running the preview; however, I thought that had been fixed some months ago. Perhaps some version of it lives on in the Python interface; I don't know.
That should be fixed in both raspistill and picamera these days. Nowadays, start_preview() in picamera is actually a bit of a lie. The preview is started as soon as the PiCamera instance is constructed, but by default it's piped into a null.sink component. When start_preview() is called it just switches the camera's preview port from the null.sink component to a renderer component. Still, as mentioned above, the camera needs a second or two's warm-up time after init to let things settle.
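A minimal sketch of that warm-up pattern (the capture_with_warmup helper is just for illustration; only the sleep-then-capture pattern comes from this thread):

Code: Select all

```python
import time

WARMUP_SECONDS = 2  # give the AGC/AWB time to settle after camera init

def capture_with_warmup(camera, output, warmup=WARMUP_SECONDS):
    """Pause after camera initialisation so auto-exposure and white
    balance can settle, then capture a still to the given output."""
    time.sleep(warmup)
    camera.capture(output)

# On a Pi (requires the picamera package):
# import picamera
# with picamera.PiCamera() as camera:
#     capture_with_warmup(camera, 'settled.jpg')
```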


Dave.

User avatar
Marcos.Scholl
Posts: 31
Joined: Wed Feb 05, 2014 7:12 pm
Location: Brasil

Re: Pure Python camera interface

Sat Feb 22, 2014 1:59 pm

waveform80 wrote:Hi Marcos,
Indeed - as jbeale points out, only omxplayer on the Pi does GPU-decoding of video so I'm going to have to figure out a way of getting that working with the network streaming code (it's not as friendly as VLC and mplayer when it comes to unorthodox inputs like this). Unfortunately I won't have any time to work on this until the weekend.
Dave.
I couldn't get anything to work properly. I even disabled the graphical interface, but there was not much improvement.

dren
Posts: 6
Joined: Mon Mar 17, 2014 8:54 pm

Re: Pure Python camera interface

Tue Mar 18, 2014 1:42 am

Hello, I tried setting the shutter speed to 1 second (which I've seen listed as the upper limit on shutter speed, and can be accomplished with raspistill) but it didn't work. When I checked the shutter speed with camera.shutter_speed the reported shutter speed was 33158 (0.033s). I was able to get the shutter speed up to 66428 (0.066s) by setting the resolution on my camera to (2592, 1944) but that's as high as I could get the shutter.

I tried putting in a sleep after the construction of the camera object but it made no difference. Here is my app:

Code: Select all

import logging
import argparse
import picamera
import time

logging.getLogger().setLevel(logging.INFO)

def takepic(camera, filename, shutter=None, iso=None, awb=None):
    logging.info('%s %s' % (camera.shutter_speed, camera.ISO))
    logging.info(camera.resolution)
    if shutter is not None:
        camera.shutter_speed = shutter
    if iso is not None:
        camera.ISO = iso
    if awb in picamera.PiCamera.AWB_MODES:
        camera.awb_mode = awb
    logging.info('%s %s' % (camera.shutter_speed, camera.ISO))
    camera.capture(filename, use_video_port=False)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('filename', help='filename to write to')
    args = parser.parse_args()
    
    with picamera.PiCamera() as camera:
        time.sleep(5)
        camera.resolution = (2592, 1944)
        takepic(camera, args.filename, shutter=10**6, iso=800)

if __name__ == '__main__':
    main()

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Mar 18, 2014 8:18 am

dren wrote:Hello, I tried setting the shutter speed to 1 second (which I've seen listed as the upper limit on shutter speed, and can be accomplished with raspistill) but it didn't work. When I checked the shutter speed with camera.shutter_speed the reported shutter speed was 33158 (0.033s). I was able to get the shutter speed up to 66428 (0.066s) by setting the resolution on my camera to (2592, 1944) but that's as high as I could get the shutter.

I tried putting in a sleep after the construction of the camera object but it made no difference. Here is my app:

Code: Select all

import logging
import argparse
import picamera
import time

logging.getLogger().setLevel(logging.INFO)

def takepic(camera, filename, shutter=None, iso=None, awb=None):
    logging.info('%s %s' % (camera.shutter_speed, camera.ISO))
    logging.info(camera.resolution)
    if shutter is not None:
        camera.shutter_speed = shutter
    if iso is not None:
        camera.ISO = iso
    if awb in picamera.PiCamera.AWB_MODES:
        camera.awb_mode = awb
    logging.info('%s %s' % (camera.shutter_speed, camera.ISO))
    camera.capture(filename, use_video_port=False)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('filename', help='filename to write to')
    args = parser.parse_args()
    
    with picamera.PiCamera() as camera:
        time.sleep(5)
        camera.resolution = (2592, 1944)
        takepic(camera, args.filename, shutter=10**6, iso=800)

if __name__ == '__main__':
    main()
Hi - are you by any chance running a recent firmware? (i.e. have you done "sudo rpi-update" recently?). The reason I ask is that I've been experimenting with the new firmware for the new modes, and one of the tests that's failing under it is to do with shutter speed. It seems that somewhere around revision 648 the firmware changed the implementation of shutter speed so that it's now clamped by the current framerate.

So, if you want to set shutter speed to something more than 33ms (which is the limit for 30fps) you first need to set framerate to some appropriately low value (e.g. 1fps or lower - currently lower framerates require the use of a (num, denom) tuple but the next release will permit fractions and floats to be specified too). I'm updating the docs in the next release to reflect the firmware change.
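To make the clamp concrete, here's a back-of-the-envelope sketch (max_shutter_us is a made-up helper for illustration, not a picamera function):

Code: Select all

```python
def max_shutter_us(framerate):
    """Longest shutter speed (in microseconds) the firmware will allow
    at a given framerate: the shutter can't stay open longer than one
    frame period."""
    return int(1000000 / framerate)

# At the default 30fps the shutter is clamped to roughly 33ms, which
# matches the ~33158us readback reported above; at 1fps a full second
# becomes possible.
print(max_shutter_us(30))  # 33333
print(max_shutter_us(1))   # 1000000

# On a Pi, set framerate *before* shutter_speed (assumed picamera usage):
# camera.framerate = (1, 1)       # 1 fps
# camera.shutter_speed = 1000000  # 1 second, in microseconds
```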


Dave.

dren
Posts: 6
Joined: Mon Mar 17, 2014 8:54 pm

Re: Pure Python camera interface

Tue Mar 18, 2014 11:44 am

Thanks Dave,

I am running a recent firmware, updated it yesterday. Setting

Code: Select all

camera.framerate = (1, 1)
cleared up the problem for me. I can now capture with shutter speeds of up to 999777. Where did you hear about the firmware change?

I see some curious behavior that is probably not related to picamera. I have the camera pointed outside and now that it's morning if I try a 1s exposure at ISO 100 (which overexposes terribly since the sun is up and there is snow out) I get a black frame instead of an all white frame. Had to go down to a quarter of a second before I got a nearly all white, mostly clipped frame, as I expected.

Thanks again, if it wasn't for you I'd be spending tonight poring over the differences between my implementation with picamera and the raspistill implementation.

PS - Any idea what lag there is, if any between sequential calls to camera.capture? Any thoughts on how to minimize it?

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Mar 18, 2014 11:54 am

dren wrote:Thanks Dave,

I am running a recent firmware, updated it yesterday. Setting

Code: Select all

camera.framerate = (1, 1)
cleared up the problem for me. I can now capture with shutter speeds of up to 999777. Where did you hear about the firmware change?

I see some curious behavior that is probably not related to picamera. I have the camera pointed outside and now that it's morning if I try a 1s exposure at ISO 100 (which overexposes terribly since the sun is up and there is snow out) I get a black frame instead of an all white frame. Had to go down to a quarter of a second before I got a nearly all white, mostly clipped frame, as I expected.

Thanks again, if it wasn't for you I'd be spending tonight poring over the differences between my implementation with picamera and the raspistill implementation.
Sounds like there's some kind of overflow there, though I'm not sure exactly what - I've not managed to reproduce it myself, but then there's not an over-abundance of light here at the moment (more a typical Mancunian grey drizzle!).

Here's a link to the post about the firmware update - it doesn't explicitly mention the change in the shutter speed implementation, just the new modes and framerates supported (90fps - woohoo!), but I assume the two are related, as I only noticed the shutter speed tests in the picamera test-suite failing after the firmware upgrade. (By "failing" I just mean that the test-suite was cycling through setting shutter speeds, expecting to get a similar value back, without first setting the framerate - I've now committed a change to fix this, and to test that the framerate clamp is working too.)


Dave.

dren
Posts: 6
Joined: Mon Mar 17, 2014 8:54 pm

Re: Pure Python camera interface

Tue Mar 18, 2014 12:20 pm

Awesome, thank you. I will follow the driver thread. Good to hear you have tests that caught the problem.

Did you see my PS from the previous post?

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Mar 18, 2014 1:00 pm

dren wrote:PS - Any idea what lag there is, if any between sequential calls to camera.capture? Any thoughts on how to minimize it?
dren wrote:Awesome, thank you. I will follow the driver thread. Good to hear you have tests that caught the problem.

Did you see my PS from the previous post?
Oops - missed that! There's no lag between sequential calls to capture as it's a synchronous method (it doesn't return until the output has been completely written and, in the case of a file opened by picamera with a filename, flushed to disk).

However, capturing full quality stills (i.e. not from the video port) is quite a slow process as it involves a mode change on the camera (see under the hood for the gory details) so the maximum capture rate (at lower resolutions) winds up being 1fps or thereabouts. At full resolution (which your script is using) the amount of data involved will probably mean you won't get close to 1fps due to the Pi's I/O limits.
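If you want to see what rate you're actually getting, a rough timing sketch (seconds_per_capture is a hypothetical helper, not part of picamera):

Code: Select all

```python
import time

def seconds_per_capture(capture_fn, n=5):
    """Rough measure of sequential capture rate: call the given
    capture function n times and return the mean seconds per call."""
    start = time.time()
    for _ in range(n):
        capture_fn()
    return (time.time() - start) / n

# On a Pi (assumed picamera usage; capturing to an in-memory stream
# keeps disk I/O from skewing the measurement):
# import io, picamera
# with picamera.PiCamera() as camera:
#     stream = io.BytesIO()
#     print(seconds_per_capture(
#         lambda: camera.capture(stream, format='jpeg')))
```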


Dave.

dren
Posts: 6
Joined: Mon Mar 17, 2014 8:54 pm

Re: Pure Python camera interface

Tue Mar 18, 2014 1:57 pm

Ah, I don't think I was quite clear about my meaning. My aim is to circumvent the 1s exposure time limit by taking multiple captures and combining the data. My question was really about if sequential calls to capture will leave gaps in the combined image, and if so, how big and can they be minimized somehow?

NeoMopp
Posts: 7
Joined: Fri Mar 08, 2013 10:25 am

Re: Pure Python camera interface

Wed Mar 19, 2014 10:40 am

Dave,

Firstly, thank you ever so much for this excellent API for Python; it seems like it will be very helpful for a project I'm working on. I was wondering if you know of a way to place the camera's current image inside a window like those provided by OpenCV. I'm currently using the following code:

Code: Select all

import io
import time
import picamera
import cv2
import numpy as np

while True:
    stream = io.BytesIO()
    with picamera.PiCamera() as camera:
        #camera.start_preview()
        time.sleep(2)
        camera.capture(stream, format='jpeg')

    data = np.fromstring(stream.getvalue(), dtype=np.uint8)
    cv2.imshow("video", data)
I commented out "start_preview" as the image was taking up the full screen, which was not what I wanted. But when I run this code all I get is an empty window. Is there a way to get the image into the window, or is that not an available feature?

Many thanks

Mopp

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Fri Mar 21, 2014 12:14 am

dren wrote:Ah, I don't think I was quite clear about my meaning. My aim is to circumvent the 1s exposure time limit by taking multiple captures and combining the data. My question was really about if sequential calls to capture will leave gaps in the combined image, and if so, how big and can they be minimized somehow?
Hmm, not sure I'm understanding what the issue is here. Consecutive calls to capture will capture separate frames from the camera so there may well be differences between those frames if movement is involved. The only way to minimize this is to capture as rapidly as possible. Is that what you mean by "gaps" in the combined image?

As I understand it, the Pi's camera is a "continuous" camera (which is apparently typical of mobile cameras). Once initialized it continually captures a sequence of frames, albeit to a null-sink by default (or to a preview renderer if you've called start_preview).

When you call capture (for example), a JPEG encoder is constructed and the next frame to be captured is fed to that encoder, and the output written to whatever destination you've provided. It's important to note that the call to capture didn't cause the camera to capture a frame - it was already capturing frames anyway - all it did was provide an encoder and a destination for the next frame that happened to be captured.
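In other words, a burst is just a matter of supplying destinations for frames the camera is producing anyway; a hedged sketch using capture_sequence (the filename helper is made up for illustration):

Code: Select all

```python
def sequence_filenames(prefix, count):
    """Generate numbered filenames for a burst of captures."""
    return ['%s%02d.jpg' % (prefix, i) for i in range(count)]

print(sequence_filenames('frame', 3))
# ['frame00.jpg', 'frame01.jpg', 'frame02.jpg']

# On a Pi (assumed picamera usage):
# import picamera
# with picamera.PiCamera() as camera:
#     camera.capture_sequence(sequence_filenames('frame', 3))
```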

Does that answer the question? Sorry - I'm probably not understanding something here!


Dave.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Fri Mar 21, 2014 12:30 am

NeoMopp wrote:Dave,

Firstly, thank you ever so much for this excellent API for Python; it seems like it will be very helpful for a project I'm working on. I was wondering if you know of a way to place the camera's current image inside a window like those provided by OpenCV. I'm currently using the following code:

Code: Select all

import io
import time
import picamera
import cv2
import numpy as np

while True:
    stream = io.BytesIO()
    with picamera.PiCamera() as camera:
        #camera.start_preview()
        time.sleep(2)
        camera.capture(stream, format='jpeg')

    data = np.fromstring(stream.getvalue(), dtype=np.uint8)
    cv2.imshow("video", data)
I commented out "start_preview" as the image was taking up the full screen, which was not what I wanted. But when I run this code all I get is an empty window. Is there a way to get the image into the window, or is that not an available feature?

Many thanks

Mopp
The camera's preview system is fairly crude. It doesn't output to a window - in fact it doesn't interact with X-windows at all. Instead the camera's preview system effectively tells the GPU "overlay my output on whatever's being displayed". This is why the preview works even from the command line (outside X).

That said, picamera does provide various properties that allow you to configure the preview area. The preview_window property allows you to specify the region of the screen that the preview should occupy. If you can react to a window's movement and resizing events you could "fake" the preview appearing in that window by manoeuvring it within the window's borders each time they moved. You can also use preview_alpha to make the preview partially transparent (very handy to ensure you can see what you're typing if you're working directly on the Pi!).
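For illustration, plausible values for those properties (the specific numbers are assumptions, not defaults):

Code: Select all

```python
# Illustrative values only; on a Pi these would be assigned to the
# camera's preview_window and preview_alpha properties.
preview_window = (100, 100, 320, 240)  # x, y, width, height in screen pixels
preview_alpha = 128                    # 0 = invisible ... 255 = fully opaque

assert len(preview_window) == 4
assert 0 <= preview_alpha <= 255

# with picamera.PiCamera() as camera:
#     camera.preview_window = preview_window
#     camera.preview_alpha = preview_alpha
#     camera.start_preview()
```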

Still, back to why your original code doesn't display anything. I suspect the reason is that OpenCV expects raw image data, whereas what you're capturing is a JPEG, which requires decoding first. Try something like the following:

Code: Select all

import io
import time
import picamera
import cv2
import numpy as np

while True:
    stream = io.BytesIO()
    with picamera.PiCamera() as camera:
        #camera.start_preview()
        time.sleep(2)
        camera.capture(stream, format='jpeg')

    # Retrieve the JPEG data
    data = np.fromstring(stream.getvalue(), dtype=np.uint8)
    # Decode the JPEG data into raw image data (cv2.imdecode returns
    # the image in BGR channel order, which is what cv2.imshow expects)
    data = cv2.imdecode(data, 1)
    # Display the resulting image
    cv2.imshow("image", data)
    cv2.waitKey(0)

Dave.

dren
Posts: 6
Joined: Mon Mar 17, 2014 8:54 pm

Re: Pure Python camera interface

Fri Mar 21, 2014 1:41 am

waveform80 wrote:
dren wrote:Ah, I don't think I was quite clear about my meaning. My aim is to circumvent the 1s exposure time limit by taking multiple captures and combining the data. My question was really about if sequential calls to capture will leave gaps in the combined image, and if so, how big and can they be minimized somehow?
Hmm, not sure I'm understanding what the issue is here. Consecutive calls to capture will capture separate frames from the camera so there may well be differences between those frames if movement is involved. The only way to minimize this is to capture as rapidly as possible. Is that what you mean by "gaps" in the combined image?

As I understand it, the Pi's camera is a "continuous" camera (which is apparently typical of mobile cameras). Once initialized it continually captures a sequence of frames, albeit to a null-sink by default (or to a preview renderer if you've called start_preview).

When you call capture (for example), a JPEG encoder is constructed and the next frame to be captured is fed to that encoder, and the output written to whatever destination you've provided. It's important to note that the call to capture didn't cause the camera to capture a frame - it was already capturing frames anyway - all it did was provide an encoder and a destination for the next frame that happened to be captured.

Does that answer the question? Sorry - I'm probably not understanding something here!


Dave.
Thanks Dave, you got my meaning about gaps. I'm trying to take exposures longer than 1 second as a stepping stone for creating a timelapse program for the pi that can do holy grail timelapses. I think the pi could be perfect for the task because it is programmable and allows such fine control over the shutter speed (which would allow a very linear day/night transition without flickering). Solutions for DSLRs like the Promote Controller tend to be very expensive (as is all camera stuff) and sort of clunky.

Anyway, the camera API won't allow long exposures, the limit is 1 second as detailed by jamesh in this thread, but I believe it's possible by taking consecutive pictures then combining the data in post. The people in the RAW Output Information thread have come up with code for converting the rpi camera raw data to an Adobe DNG. Using their stuff I should be able to sum the raw data from consecutive captures, convert to DNG, then process the DNGs in lightroom, adobe camera raw, or ufraw to get jpgs.
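The summing step itself is straightforward once you have the raw arrays; a hedged numpy sketch (stack_exposures is a hypothetical helper, and real sensor data would need black-level handling this ignores):

Code: Select all

```python
import numpy as np

def stack_exposures(frames):
    """Sum several short raw exposures into one synthetic long
    exposure. frames: list of same-shaped uint16 sensor arrays;
    a wider accumulator avoids overflow while summing."""
    acc = np.zeros_like(frames[0], dtype=np.uint32)
    for f in frames:
        acc += f
    return acc

# Three 1s-equivalent frames combine into a 3s-equivalent exposure:
frames = [np.full((4, 4), 100, dtype=np.uint16) for _ in range(3)]
combined = stack_exposures(frames)
assert combined.max() == 300
```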

Another thing related to the pi camera that I am interested in is this paper: http://www.cs.ubc.ca/labs/imager/tr/201 ... nsImaging/

The paper is essentially about how dramatic improvements in sharpness can be achieved for fixed-focal-length, fixed-focus lenses by measuring the lens defects and applying transforms to the photos taken with the lens. I'm curious whether this technique could be applied to the pi camera to dramatically increase its image quality. This idea kind of takes a backseat to the timelapse idea for me, but maybe someone else will find it interesting.

edit: then again, that idea might not work because to compute the correction they have to stop down and get a sharp image. The pi camera is fixed aperture.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Fri Mar 21, 2014 11:45 am

dren wrote:
waveform80 wrote:
dren wrote:Ah, I don't think I was quite clear about my meaning. My aim is to circumvent the 1s exposure time limit by taking multiple captures and combining the data. My question was really about if sequential calls to capture will leave gaps in the combined image, and if so, how big and can they be minimized somehow?
Hmm, not sure I'm understanding what the issue is here. Consecutive calls to capture will capture separate frames from the camera so there may well be differences between those frames if movement is involved. The only way to minimize this is to capture as rapidly as possible. Is that what you mean by "gaps" in the combined image?

As I understand it, the Pi's camera is a "continuous" camera (which is apparently typical of mobile cameras). Once initialized it continually captures a sequence of frames, albeit to a null-sink by default (or to a preview renderer if you've called start_preview).

When you call capture (for example), a JPEG encoder is constructed and the next frame to be captured is fed to that encoder, and the output written to whatever destination you've provided. It's important to note that the call to capture didn't cause the camera to capture a frame - it was already capturing frames anyway - all it did was provide an encoder and a destination for the next frame that happened to be captured.

Does that answer the question? Sorry - I'm probably not understanding something here!


Dave.
Thanks Dave, you got my meaning about gaps. I'm trying to take exposures longer than 1 second as a stepping stone for creating a timelapse program for the pi that can do holy grail timelapses. I think the pi could be perfect for the task because it is programmable and allows such fine control over the shutter speed (which would allow a very linear day/night transition without flickering). Solutions for DSLRs like the Promote Controller tend to be very expensive (as is all camera stuff) and sort of clunky.

Anyway, the camera API won't allow long exposures, the limit is 1 second as detailed by jamesh in this thread, but I believe it's possible by taking consecutive pictures then combining the data in post. The people in the RAW Output Information thread have come up with code for converting the rpi camera raw data to an Adobe DNG. Using their stuff I should be able to sum the raw data from consecutive captures, convert to DNG, then process the DNGs in lightroom, adobe camera raw, or ufraw to get jpgs.

Another thing related to the pi camera that I am interested in is this paper: http://www.cs.ubc.ca/labs/imager/tr/201 ... nsImaging/

The paper is essentially about how dramatic improvements in sharpness can be achieved for fixed-focal-length, fixed-focus lenses by measuring the lens defects and applying transforms to the photos taken with the lens. I'm curious whether this technique could be applied to the pi camera to dramatically increase its image quality. This idea kind of takes a backseat to the timelapse idea for me, but maybe someone else will find it interesting.

edit: then again, that idea might not work because to compute the correction they have to stop down and get a sharp image. The pi camera is fixed aperture.
Fascinating stuff! I should note that the current version of picamera doesn't support capturing the raw bayer data from the sensor (picamera's "raw" captures are just unencoded (YUV/RGB/whatever) post-GPU captures). The next release will be adding a "bayer" parameter to the capture methods to permit this but the output will basically be equivalent to raspistill - i.e. the bayer data will be embedded in the JPEG metadata. Issue 52 and the related issue 59 are tracking progress on this and they're the last to close before I release (hopefully this weekend).


Dave.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Mon Mar 24, 2014 10:56 am

It's that time again: picamera 1.3 is now out. Highlights this time are:
  • A fix to ensure Exif metadata is fully recorded in still captures
  • Raw captures now work with capture_continuous and capture_sequence (this got broken somewhere around 1.0)
  • A new splitter_port parameter to permit multiple simultaneous video recordings with different parameters (most likely resize)
  • A new bayer parameter to permit JPEGs to include raw (pre-GPU) sensor data (same as raspistill's --raw option)
  • And most important of all: limits on framerate and resolution removed to permit access to the new modes introduced in the recent firmware upgrade (rather astonishingly, Python's callbacks turn out to be quick enough to deal with 90fps, provided the resolution isn't silly!)
There's one known issue in the release which is an issue with picamera itself (although it's influenced by the upstream firmware change): raw (YUV/RGB/etc.) captures at full resolution (2592x1944) from the still port fail with an out of memory error. If you revert to an older firmware, they'll work again but you'll obviously lose the new high-fps modes and so forth.

I might be able to work around this in picamera, but it would involve backwards incompatible changes so I'm keen to find out if anyone is actually using this combination of settings before ploughing ahead.

Anyway, have fun - as usual, bug reports, suggestions, patches all welcome!


Dave.

User avatar
leol
Posts: 147
Joined: Fri Jan 13, 2012 4:27 pm
Location: Haute-Vienne, France

Re: Pure Python camera interface

Mon Mar 24, 2014 1:20 pm

Thanks for this, Dave

Leo
waveform80 wrote:It's that time again: picamera 1.3 is now out.
Dave.

kelevraxx
Posts: 11
Joined: Wed Mar 26, 2014 12:58 am
Location: Brazil

Re: Pure Python camera interface

Thu Mar 27, 2014 7:32 pm

waveform80 wrote:
electronmage wrote:I haven't tried downloading, installing and running this yet, but thank you for all your work. All done in a day!! I'll let you know how it goes!
I'd be interested to hear any experiences, good or bad!
electronmage wrote:Oh, the homepage link is broken. Just fyi. I would like to read about the picroscopy project.
At the moment there is no homepage for the picroscopy project because I haven't made an official release of it (I probably shouldn't have bunged a homepage link in the README - planning too far ahead). That said, picamera's now at the point where I can start using it in picroscopy; I'm hoping to find enough time this weekend to get a first release of picroscopy out (which will also entail building some sort of crude website for it). The application itself is reasonably functional at the moment, but integrating picamera into it will add a few features (most notably full resolution preview).

The other bit I need to do some work on is actually mounting a camera on the microscopes in question. However, another user here posted a fascinating thread the other day and it seems he's been working on much the same thing but from the other direction (get the mount working, then figure out the software). He's very kindly agreed to share some designs with me so my hope is that in the relatively near future we might have something quite slick for microscopy with a pi.

I'm also dabbling with some rapid continuous shooting stuff as a) it seems a hot topic and b) while supping on a pint of ale at the pub a few hours ago it occurred to me that I can probably make picamera do something similar to raspistill's timelapse mode with very little tweaking (and a rather nice pythonic interface suggested itself too!)

Anyway, have fun and let me know how things go!

Dave.

Hi Dave,
I am from Brazil, so sorry, my English is bad!
I'd love to test the picroscopy project. I'm trying to install it following the book, but easy_install picroscopy says that nothing was found. Do I need to install picamera first?

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Thu Mar 27, 2014 11:32 pm

kelevraxx wrote:...

Hi Dave,
I am from Brazil, so sorry, my English is bad!
I'd love to test the picroscopy project. I'm trying to install it following the book, but easy_install picroscopy says that nothing was found. Do I need to install picamera first?
I'm afraid I still haven't found the time to do a proper release of picroscopy, and with all the demands on my time at the moment (not least from picamera) I'm unlikely to for the next few weeks. A quick guide for the technically confident (sorry I don't have time to expand this into something more user-friendly!):

Code:

# install the necessary tools
$ sudo apt-get install build-essential exuberant-ctags python-virtualenv python-picamera python-pip git
# clone the repo
$ git clone https://github.com/waveform80/picroscopy.git
# construct and activate a virtualenv for development
$ virtualenv -p python3 picroscopyenv
$ source picroscopyenv/bin/activate
# use the develop target in the makefile to install a development copy in the virtualenv
$ cd picroscopy
$ make develop
# tweak the config as required (you'll probably need to uncomment and change the listening port for something non-privileged, e.g. 8000)
$ vim picroscopy.ini
# run the app
$ picroscopy -c picroscopy.ini
# you should see the preview start on the pi's monitor
# visit http://your-pi-address:port/ in a web-browser to try out the app
What the app really needs is for me to get on with the OpenGL/ES overlay stuff in picamera so that I can overlay things like app status and scale bars on the preview (that'll also mean finishing the interface for configuring the microscope's lenses and adding controls for selecting the current lens). Unfortunately I'm not going to have any time to do much on this for several weeks.


Dave.

kelevraxx
Posts: 11
Joined: Wed Mar 26, 2014 12:58 am
Location: Brazil

Re: Pure Python camera interface

Fri Mar 28, 2014 11:41 am

Thanks for the answer, Dave!
I'll try to do what you told me, and do my best to modify it as I need.
Looking forward to seeing your work finished!
Thanks a lot!

kelevraxx
Posts: 11
Joined: Wed Mar 26, 2014 12:58 am
Location: Brazil

Re: Pure Python camera interface

Sun Mar 30, 2014 10:50 pm

waveform80 wrote:
...


Dave.
Hello Dave, today I tested the things you told me to test.
I was able to set it up as you described; the only thing I got stuck on:
When I ran picroscopy -c picroscopy.ini I got an error saying the thumbs and images folders did not exist. I went into /tmp and created the folders just as shown in picroscopy.ini; it started to run but soon returned a permission-denied error. I don't know much about that side of Linux permissions, so I created a folder in /home/pi with the thumbs and images folders inside it, modified the .ini file, and it started running. From another computer I accessed the web interface, but when I took a photo it gave an error: the file was created but with 0 bytes.
Could you shed some light on this?

Thank you, Caio
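For anyone hitting the same missing-directory problem, a minimal shell sketch for pre-creating the image and thumbnail directories before starting the app (the base path here is an assumption — use whatever directories your own picroscopy.ini actually names):

```shell
#!/bin/sh
# Create the images/thumbs directories picroscopy expects and make them
# writable by the user running the app. The base path is illustrative --
# substitute the paths from your own picroscopy.ini (e.g. under /home/pi).
BASE="${BASE:-$HOME/picroscopy-data}"
mkdir -p "$BASE/images" "$BASE/thumbs"
chmod -R u+rwX "$BASE"
```

Running the app as the same user that owns these directories avoids the permission-denied error; creating them under /tmp is risky since /tmp is cleared on reboot.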

Return to “Camera board”