ethanol100
Posts: 587
Joined: Wed Oct 02, 2013 12:28 pm

Re: CME -x postponed?

Fri May 23, 2014 4:59 pm

The encoder just takes the frames one by one; it does not know when the images are from. So in principle there should be no problem with the calculation of the motion vectors. But it would be wise to fix the exposure and white balance, otherwise there could be big differences between pictures caused purely by those settings.
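For example, something along these lines should pin the white balance and the shutter/gain (the concrete values are only placeholders and depend on the scene, and the options need a reasonably recent userland build):

raspivid -w 1280 -h 720 -t 0 -awb off -awbg 1.5,1.2 -ss 20000 -ISO 100 -o timelapse.h264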

But yes, it is only a workaround; you would need to edit the source. You will find help here in the forum if you have any problems with it. If I find some time at the weekend I will have a go at making a clean solution. I would be interested in monitoring sand dunes or similarly slow processes, too.

For once a minute, you can change the 1.86 to 59.86; then you would get approximately one frame per minute.

peepo
Posts: 305
Joined: Sun Oct 21, 2012 9:36 am

Re: CME -x postponed?

Fri May 23, 2014 7:50 pm

please do not fork threads.

this thread is already quite hard to follow.

~:"

Yggdrasil
Posts: 138
Joined: Sun Aug 26, 2012 8:45 pm

Re: CME -x postponed?

Sun May 25, 2014 7:09 pm

Hello Ethanol100,
ethanol100 wrote: Edit: I have pushed my changes to fix the exposure mode to: https://github.com/ethanol100/userland/tree/expOff
A new parameter "--waitAndFix 5000" will run the preview for 5 s, then fix the exposure and continue as usual.
Thanks for this patch. :) It works for me, and I will patch it into my raspivid version (only RaspiVid with OpenGL output, no RaspiStill), too.

Hove
Posts: 1205
Joined: Sun Oct 21, 2012 6:55 pm
Location: Cotswolds, UK

Re: CME -x postponed?

Sat Jun 07, 2014 11:03 am

Sorry to drag this thread down to a simpleton's high-level question:

Is it possible to reduce the number of macroblocks to process for motion by reducing the frame size to, say, 320 x 240 (20 x 15 macroblocks) at a frame rate as low as, say, 10 Hz?

I ask because I'm considering using this method for motion detection of the whole (low-res) frame compared to the previous one, i.e. it's the camera that is moving over a static landscape, as opposed to objects moving within that landscape. Hence (I hope) all the macroblocks should produce very similar vectors, which could be averaged to give a reasonable approximation of the motion of the camera itself.

I realize this is an oversimplification, but confirmation that the above is viable will provide sufficient motivation for me to try.
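
For what it's worth, this is the sort of averaging I have in mind. It's only a minimal, untested sketch, assuming the vector layout that has been described for the -x output (one int8_t x, int8_t y and int16_t sad record per macroblock, plus one extra "phantom" column per row) and assuming the stream can be piped to stdin:

/* Average the per-frame motion vectors from raspivid's -x output
   to estimate global camera motion over a static landscape.
   Assumed layout: (width/16 + 1) x (height/16) records per frame,
   i.e. 21 x 15 records of 4 bytes each for 320 x 240. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    int8_t  x;    /* horizontal motion component, pixels */
    int8_t  y;    /* vertical motion component, pixels */
    int16_t sad;  /* sum of absolute differences for the block */
} mv_t;

#define COLS (320 / 16 + 1)   /* 21, including the phantom column */
#define ROWS (240 / 16)       /* 15 */

int main(void)
{
    mv_t frame[ROWS * COLS];
    int r, c;

    while (fread(frame, sizeof(mv_t), ROWS * COLS, stdin) == ROWS * COLS) {
        long sx = 0, sy = 0;
        int n = 0;
        for (r = 0; r < ROWS; r++)
            for (c = 0; c < COLS - 1; c++) {   /* skip phantom column */
                sx += frame[r * COLS + c].x;
                sy += frame[r * COLS + c].y;
                n++;
            }
        printf("camera motion ~ (%.2f, %.2f) px/frame\n",
               (double)sx / n, (double)sy / n);
    }
    return 0;
}

fed with something like raspivid -w 320 -h 240 -fps 10 -t 0 -o /dev/null -x - | ./avgmv (assuming -x accepts - for stdout the way -o does).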

Thanks
www.pistuffing.co.uk - Raspberry Pi and other stuffing!

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23891
Joined: Sat Jul 30, 2011 7:41 pm

Re: CME -x postponed?

Sat Jun 07, 2014 2:31 pm

That sounds feasible.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“I think it’s wrong that only one company makes the game Monopoly.” – Steven Wright

Hove
Posts: 1205
Joined: Sun Oct 21, 2012 6:55 pm
Location: Cotswolds, UK

Re: CME -x postponed?

Sat Jun 07, 2014 4:48 pm

Thanks, perfect! All I needed was someone in the know not to say my idea will sink like a brick!
www.pistuffing.co.uk - Raspberry Pi and other stuffing!

jonesymalone
Posts: 2
Joined: Tue Jun 10, 2014 12:20 am

Re: CME -x postponed?

Tue Jun 10, 2014 12:31 am

All,

I am interested in using the motion vector output in a similar way to what Hove described. However, the intent is to provide a sensory interface for waypoint following on a moving platform. The goal is to set a "waypoint", effectively a screen capture at T=0, and measure the accumulation of error. Specifically, my plan is to obtain a field of motion vectors and project them down to a single axis (i.e. left or right magnitude), accumulate all the motion vectors, and observe the accumulated "error" relative to the initial snapshot (which is periodically updated). This accumulated error is fed back to the motive control loop of the robot to produce a course adjustment, so that the platform keeps moving towards the initial (or periodically updated) snapshot.

I would like to write a custom program which polls the camera system for motion vectors, performs the projection and accumulation, and produces a course correction. My question, then, is:

Is it possible to import/access a library or utility which I can use to pull motion vectors directly into my program's working memory? In essence, this:

1) allocate a motion vector struct array
2) initialize the array and accumulators
3) loop: getMotionVectors(mvArray)
   // do the x-axis projection
   // accumulate error over frames
   // if the accumulated error exceeds a threshold, feed a correction back to position control
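
Fleshed out, the loop might look like the sketch below. Since I don't yet know of a library call, I am assuming the vectors arrive on stdin from raspivid's -x output in the 4-bytes-per-macroblock layout discussed earlier in the thread; the fread() stands in for my hypothetical getMotionVectors(), and the 640 x 480 geometry and threshold are placeholders:

/* Accumulate x-axis drift from raspivid's -x vector stream and
   report a course correction when it exceeds a threshold.
   Assumed layout: (width/16 + 1) x (height/16) records per frame. */
#include <stdio.h>
#include <stdint.h>

typedef struct { int8_t x, y; int16_t sad; } mv_t;

#define COLS (640 / 16 + 1)        /* includes the phantom column */
#define ROWS (480 / 16)
#define ERR_THRESHOLD 500.0        /* placeholder; tune on the platform */

int main(void)
{
    static mv_t frame[ROWS * COLS];
    double err = 0.0;              /* accumulated drift since snapshot */
    int r, c;

    /* stands in for getMotionVectors(): one frame's records per read */
    while (fread(frame, sizeof(mv_t), ROWS * COLS, stdin) == ROWS * COLS) {
        long sx = 0;
        for (r = 0; r < ROWS; r++)
            for (c = 0; c < COLS - 1; c++)   /* skip phantom column */
                sx += frame[r * COLS + c].x;

        /* x-axis projection: mean horizontal motion this frame */
        err += (double)sx / (ROWS * (COLS - 1));

        if (err > ERR_THRESHOLD || err < -ERR_THRESHOLD) {
            printf("course correction: %+.1f\n", -err); /* to control loop */
            err = 0.0;                       /* treat snapshot as updated */
        }
    }
    return 0;
}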

I am still learning the camera interface, and would appreciate pointers to any documentation that would best allow me to integrate continual capture and evaluation of motion vectors into a homebrewed application.

Thanks in advance,
Jonesy
