hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Running two OMX instances

Mon Jul 04, 2016 2:55 pm

Hi all,

I am running a custom OMX program with the image_encode component. However, when I run raspivid first, my custom OMX program stops working.

The problem is that when another OMX instance (i.e. raspivid) is running, the callbacks of the second OMX instance (i.e. my custom program with image_encode) are never executed. After calling "OMX_EmptyThisBuffer" and "OMX_FillThisBuffer", the program just hangs with no callback. However, when I kill raspivid, my second OMX instance continues to run.

Is there any way to let two programs run together?

Thanks.

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Tue Jul 05, 2016 9:58 am

Anyone?
More info: I am using raspivid with MJPEG output; my custom OMX program uses image_encode on ports 340 and 341, with BGR888 input and JPEG output.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Tue Jul 05, 2016 10:19 am

There's nothing stopping you running multiple OMX applications beyond the limit on calls to bcm_host_init (max 8 now IIRC - it was increased in about February 2016). Try multiple omxplayers with different window properties and you should see it work.

The issue you may be hitting is that the JPEG hardware block cannot save its entire context, so once a component has claimed it, that component has to complete the operation, and will stall other image_encode/decode components attempting to use the block.

raspivid actually uses MMAL instead of IL, so if, as you say, raspivid/raspistill stalls it, then it isn't OMX per se. raspivid with MJPEG output will grab the JPEG block for the majority of (if not all) the time, so that would fit with stalling an IL image_encode (or decode) component. I'm guessing the MJPEG codec doesn't play nicely and release the hardware block properly between frames.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Tue Jul 05, 2016 10:35 am

6by9 wrote:There's nothing stopping you running multiple OMX applications beyond the limit on calls to bcm_host_init (max 8 now IIRC - it was increased in about February 2016). Try multiple omxplayers with different window properties and you should see it work.

The issue you may be hitting is that the JPEG hardware block cannot save its entire context, so once a component has claimed it, that component has to complete the operation, and will stall other image_encode/decode components attempting to use the block.

raspivid actually uses MMAL instead of IL, so if, as you say, raspivid/raspistill stalls it, then it isn't OMX per se. raspivid with MJPEG output will grab the JPEG block for the majority of (if not all) the time, so that would fit with stalling an IL image_encode (or decode) component. I'm guessing the MJPEG codec doesn't play nicely and release the hardware block properly between frames.
So do you mean I can't use image_encode while I am using raspivid with MJPEG output?

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Tue Jul 05, 2016 10:48 am

hopkinskong wrote:So do you mean I can't use image_encode while I am using raspivid with MJPEG output?
Currently yes.
I'll look at making a firmware mod to make the MJPEG codec play nicely with other potential users of the JPEG block. It should only be a couple of lines to fix it. Just be aware that if you start encoding or decoding a JPEG from your app, complete the process or you'll stall the MJPEG codec, and potentially in a way that may not be easily interruptible.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Tue Jul 05, 2016 11:47 am

6by9 wrote:
hopkinskong wrote:So do you mean I can't use image_encode while I am using raspivid with MJPEG output?
Currently yes.
I'll look at making a firmware mod to make the MJPEG codec play nicely with other potential users of the JPEG block. It should only be a couple of lines to fix it. Just be aware that if you start encoding or decoding a JPEG from your app, complete the process or you'll stall the MJPEG codec, and potentially in a way that may not be easily interruptible.
Thanks. Looking forward to the fix.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Wed Jul 06, 2016 10:38 pm

I've just sent a patch to Pi Towers to be reviewed and released.
As I wrote earlier, it will be your responsibility to ensure that any other users of the JPEG block don't stall the MJPEG encoder for longer than you're happy with.

Keep an eye on https://github.com/Hexxeh/rpi-firmware for the patch being released, and then use "sudo rpi-update" to get it (you may need to "sudo apt-get install rpi-update" first).
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Thu Jul 07, 2016 12:03 am

6by9 wrote:I've just sent a patch to Pi Towers to be reviewed and released.
As I wrote earlier, it will be your responsibility to ensure that any other users of the JPEG block don't stall the MJPEG encoder for longer than you're happy with.

Keep an eye on https://github.com/Hexxeh/rpi-firmware for the patch being released, and then use "sudo rpi-update" to get it (you may need to "sudo apt-get install rpi-update" first).
Thanks for your help. Does that mean when you filling OMX buffer with one program, the other program will wait the first program to finish its filling before it can fill the buffer?

BTW, is the patch against closed-source code? Is it visible on GitHub?

Thanks.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Thu Jul 07, 2016 7:21 am

hopkinskong wrote:Thanks for your help. Does that mean that when one program is filling an OMX buffer, the other program will wait for the first program to finish filling before it can fill its own buffer?
image_encode will acquire the JPEG block when it is presented with the first buffer of image data and will not release it until it has had the last line. If nSliceHeight < nFrameHeight, that puts the responsibility on the app to complete the frame in a timely manner to release the block.
The MJPEG encoder always works on full frames, and now acquires the JPEG block when it has a frame to encode, and releases it as soon as it is done, so there is no way it can hold the block for a significant amount of time.
hopkinskong wrote:BTW, is the patch against closed-source code? Is it visible on GitHub?
Yes, it's in the start_x.elf binary blob. The commit text describes the fixes, and normally includes a link to the forum thread or issue that gave rise to the change.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Fri Jul 08, 2016 2:38 pm

Firmware update done - https://github.com/Hexxeh/rpi-firmware/ ... d0df7c8d38
"sudo rpi-update" to get it (you may need "sudo apt-get install rpi-update" first).
Hopefully that'll allow you to interleave image_encode or image_decode with MJPEG encoding. I didn't have a suitable application to test with, but it was a fairly low-risk change to make, somewhat speculatively. MJPEG encoding was certainly still working afterwards.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Sun Jul 24, 2016 12:28 am

6by9 wrote:
hopkinskong wrote:Thanks for your help. Does that mean that when one program is filling an OMX buffer, the other program will wait for the first program to finish filling before it can fill its own buffer?
image_encode will acquire the JPEG block when it is presented with the first buffer of image data and will not release it until it has had the last line. If nSliceHeight < nFrameHeight, that puts the responsibility on the app to complete the frame in a timely manner to release the block.
The MJPEG encoder always works on full frames, and now acquires the JPEG block when it has a frame to encode, and releases it as soon as it is done, so there is no way it can hold the block for a significant amount of time.
hopkinskong wrote:BTW, is the patch against closed-source code? Is it visible on GitHub?
Yes, it's in the start_x.elf binary blob. The commit text describes the fixes, and normally includes a link to the forum thread or issue that gave rise to the change.
I am now able to fill the encoder input buffer while running raspivid. However, my application freezes the output of raspivid while it is encoding JPEG frames (I observed this by running with -o -; no data comes out to stdout). I think this behavior is because my application has acquired the JPEG block and has not released it back to raspivid. After one frame is encoded (with the flag "OMX_BUFFERFLAG_ENDOFFRAME" set in nFlags), raspivid still produces no output. My application needs to be terminated (or to exit) before raspivid spits out data again.

I have tried adding a delay (a significant one, i.e. 2 seconds for debugging) after every frame is encoded in my application, but raspivid still has no output until I kill my application. So, what is the proper way to "release" the JPEG block from my application back to raspivid?

Thanks.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Sun Jul 24, 2016 12:17 pm

hopkinskong wrote: I am now able to fill the encoder input buffer while running raspivid. However, my application freezes the output of raspivid while it is encoding JPEG frames (I observed this by running with -o -; no data comes out to stdout). I think this behavior is because my application has acquired the JPEG block and has not released it back to raspivid. After one frame is encoded (with the flag "OMX_BUFFERFLAG_ENDOFFRAME" set in nFlags), raspivid still produces no output. My application needs to be terminated (or to exit) before raspivid spits out data again.

I have tried adding a delay (a significant one, i.e. 2 seconds for debugging) after every frame is encoded in my application, but raspivid still has no output until I kill my application. So, what is the proper way to "release" the JPEG block from my application back to raspivid?
Send in ((nFrameHeight+nSliceHeight-1) / nSliceHeight) buffers.

Rightly or wrongly, the image_encode component ignores OMX_BUFFERFLAG_ENDOFFRAME and only counts the number of lines presented in the buffers. What would the correct interpretation be if it received that buffer flag without having had the correct number of buffers? A corrupt stream error? It can't complete the encode successfully, as the JPEG headers won't match the encoded data.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Sun Jul 24, 2016 12:54 pm

6by9 wrote:
hopkinskong wrote: I am now able to fill the encoder input buffer while running raspivid. However, my application freezes the output of raspivid while it is encoding JPEG frames (I observed this by running with -o -; no data comes out to stdout). I think this behavior is because my application has acquired the JPEG block and has not released it back to raspivid. After one frame is encoded (with the flag "OMX_BUFFERFLAG_ENDOFFRAME" set in nFlags), raspivid still produces no output. My application needs to be terminated (or to exit) before raspivid spits out data again.

I have tried adding a delay (a significant one, i.e. 2 seconds for debugging) after every frame is encoded in my application, but raspivid still has no output until I kill my application. So, what is the proper way to "release" the JPEG block from my application back to raspivid?
Send in ((nFrameHeight+nSliceHeight-1) / nSliceHeight) buffers.

Rightly or wrongly, the image_encode component ignores OMX_BUFFERFLAG_ENDOFFRAME and only counts the number of lines presented in the buffers. What would the correct interpretation be if it received that buffer flag without having had the correct number of buffers? A corrupt stream error? It can't complete the encode successfully, as the JPEG headers won't match the encoded data.
My current frame width and height are 1280px and 720px respectively. I have set nSliceHeight to 720px.
So I need to send in (720+720-1)/720 = 1.998611111111111 buffers. Is that 2 buffers?
What do you mean by "2 buffers"? Is it 2 full frames? Or 1 full frame with blank data at the end?

Thanks.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Sun Jul 24, 2016 1:13 pm

hopkinskong wrote:My current frame width and height are 1280px and 720px respectively. I have set nSliceHeight to 720px.
So I need to send in (720+720-1)/720 = 1.998611111111111 buffers. Is that 2 buffers?
What do you mean by "2 buffers"? Is it 2 full frames? Or 1 full frame with blank data at the end?
Sorry, I'm always working with integer arithmetic.
(720+720-1)/720 = 1 buffer.

If dealing with nFrameHeight=1080 and nSliceHeight=16, 1080 is not a multiple of 16, and you'd need (1080+16-1)/16 = 68 slices, with the last one being 8 active lines and 8 lines of padding.
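As a sketch of the integer arithmetic above (the helper names here are illustrative, not IL calls):

```c
#include <assert.h>

/* Buffers needed per frame: integer ceiling division, matching the
   (nFrameHeight + nSliceHeight - 1) / nSliceHeight formula above. */
static unsigned slices_per_frame(unsigned frame_height, unsigned slice_height)
{
    return (frame_height + slice_height - 1) / slice_height;
}

/* Active lines in the final slice; the rest of that slice is padding. */
static unsigned last_slice_active_lines(unsigned frame_height, unsigned slice_height)
{
    unsigned rem = frame_height % slice_height;
    return rem ? rem : slice_height;
}
```

For 720/720 this gives 1 buffer; for 1080/16 it gives 68 slices, with 8 active lines in the last slice.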
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Sun Jul 24, 2016 1:38 pm

6by9 wrote:
hopkinskong wrote:My current frame width and height are 1280px and 720px respectively. I have set nSliceHeight to 720px.
So I need to send in (720+720-1)/720 = 1.998611111111111 buffers. Is that 2 buffers?
What do you mean by "2 buffers"? Is it 2 full frames? Or 1 full frame with blank data at the end?
Sorry, I'm always working with integer arithmetic.
(720+720-1)/720 = 1 buffer.

If dealing with nFrameHeight=1080 and nSliceHeight=16, 1080 is not a multiple of 16, and you'd need (1080+16-1)/16 = 68 slices, with the last one being 8 active lines and 8 lines of padding.
Sorry, I don't quite understand the maths here. Is there a formula to determine the correct nSliceHeight? Taking 1080 as nFrameHeight and 16 as nSliceHeight, did you mean I need to fill the buffer with 68*frameWidth*3 (for BGR888) bytes each time? How did you come up with the last slice being 8 active lines and 8 lines of padding?

Thanks.

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Sun Jul 24, 2016 4:35 pm

With my nSliceHeight at 720px, I only need to fill the buffer once. However, the JPEG block is still not released after OMX_EmptyThisBuffer and OMX_FillThisBuffer.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Sun Jul 24, 2016 5:36 pm

I gave the generic formula for how many buffers were required, as you had not quoted any information about your port format. IL supports passing buffers in slices for pipeline efficiency purposes.

Correct: with nFrameHeight=720 and nSliceHeight=720, you should only need one buffer, and you should get one sequence of buffers out the other side, ending with one which has OMX_BUFFERFLAG_ENDOFFRAME set.
How many buffers and of what size have you passed in to the output port? The encode isn't complete until all of the output has been produced and it is not necessarily a one-in, one-out buffering scheme. The default output buffer size is only 80kB so that output file writing can also be pipelined via multiple buffers.
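A rough sketch of the one-in, many-out pattern described above. The types and fill_next_buffer() are stand-ins for the real OMX buffer header and the OMX_FillThisBuffer/FillBufferDone pair, and the 200kB frame size is a made-up example; only the 80kB default chunk size comes from the post:

```c
#include <assert.h>

/* Value of OMX_BUFFERFLAG_ENDOFFRAME in OMX_Core.h. */
#define ENDOFFRAME 0x10

struct out_buf { int flags; long filled_len; };

/* Stand-in for OMX_FillThisBuffer plus its FillBufferDone callback:
   delivers a hypothetical 200kB encode in 80kB chunks (the default
   output buffer size mentioned above). */
static long remaining = 200 * 1024;
static void fill_next_buffer(struct out_buf *b)
{
    long chunk = remaining > 80 * 1024 ? 80 * 1024 : remaining;
    remaining -= chunk;
    b->filled_len = chunk;
    b->flags = (remaining == 0) ? ENDOFFRAME : 0;
}

/* Keep resubmitting the output buffer until ENDOFFRAME arrives; real code
   would write() each chunk and wait on the callback between iterations. */
static int drain_frame(long *total_out)
{
    struct out_buf b;
    int buffers = 0;
    *total_out = 0;
    do {
        fill_next_buffer(&b);
        *total_out += b.filled_len;
        buffers++;
    } while (!(b.flags & ENDOFFRAME));
    return buffers;
}
```

The point is that one input frame here takes three output buffers, so stopping after the first returned buffer would leave the encode incomplete.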
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Mon Jul 25, 2016 12:07 pm

6by9 wrote:I gave the generic formula for how many buffers were required, as you had not quoted any information about your port format. IL supports passing buffers in slices for pipeline efficiency purposes.

Correct: with nFrameHeight=720 and nSliceHeight=720, you should only need one buffer, and you should get one sequence of buffers out the other side, ending with one which has OMX_BUFFERFLAG_ENDOFFRAME set.
How many buffers and of what size have you passed in to the output port? The encode isn't complete until all of the output has been produced and it is not necessarily a one-in, one-out buffering scheme. The default output buffer size is only 80kB so that output file writing can also be pipelined via multiple buffers.
My input port is configured as:

Code: Select all

nFrameWidth=IMAGE_WIDTH;
nFrameHeight=IMAGE_HEIGHT;
nSliceHeight=IMAGE_HEIGHT;
nStride=IMAGE_WIDTH*IMAGE_STRIDES;
bFlagErrorConcealment=OMX_FALSE;
eColorFormat=OMX_COLOR_Format24bitBGR888;
eCompressionFormat=OMX_IMAGE_CodingUnused;
nBufferSize=IMAGE_WIDTH*IMAGE_HEIGHT*IMAGE_CHANNELS*IMAGE_STRIDES;
Where:
IMAGE_WIDTH=1280
IMAGE_HEIGHT=720
IMAGE_STRIDES=1
IMAGE_CHANNELS=3
(So the buffer size is exactly 1280*720*3=1 full BGR888 frame)
My output port is configured as:

Code: Select all

nFrameWidth=IMAGE_WIDTH;
nFrameHeight=IMAGE_HEIGHT;
bFlagErrorConcealment=OMX_FALSE;
eColorFormat=OMX_COLOR_FormatYUV420PackedPlanar;
eCompressionFormat=OMX_IMAGE_CodingJPEG;
My buffer I/O code is as follows (for 1 frame):

Code: Select all

ctx.encoder_input_buffer_needed=1;
ctx.encoder_output_buffer_available=1;
while(true) {
	if(ctx.encoder_input_buffer_needed) {
		// Encoder needs something to feed into its input buffer
		memcpy(ctx.encoder_ppBuffer_in->pBuffer, BGR888Data, IMAGE_WIDTH*IMAGE_HEIGHT*IMAGE_CHANNELS);

		ctx.encoder_ppBuffer_in->nOffset = 0;
		ctx.encoder_ppBuffer_in->nFilledLen = IMAGE_WIDTH*IMAGE_HEIGHT*IMAGE_CHANNELS;

		ctx.encoder_input_buffer_needed=0; // Will be set to 1 in the event handler
		if((r = OMX_EmptyThisBuffer(ctx.encoder, ctx.encoder_ppBuffer_in)) != OMX_ErrorNone) {
			omx_die(r, "Failed to request emptying of the input buffer on encoder input port 340");
		}
	}

	if(ctx.encoder_output_buffer_available) {
		// Encoder needs to clear its output buffer
		write(STDOUT_FILENO, ctx.encoder_ppBuffer_out->pBuffer+ctx.encoder_ppBuffer_out->nOffset, ctx.encoder_ppBuffer_out->nFilledLen);
		ctx.encoder_output_buffer_available=0; // Will be set to 1 in the event handler
		if((ctx.encoder_ppBuffer_out->nFlags&OMX_BUFFERFLAG_ENDOFFRAME)) {
			// 1 frame completed, bail out of the loop
			break;
		}

		if((r = OMX_FillThisBuffer(ctx.encoder, ctx.encoder_ppBuffer_out)) != OMX_ErrorNone) {
			omx_die(r, "Failed to request filling of the output buffer on encoder output port 341");
		}
	}
}
For the next frame, it just repeats the above code.

I have added a delay at the end of the code (sleep(2)), so after every frame is encoded there is time (2s) for raspivid to continue encoding MJPEG frames. However, it seems that the JPEG block is still not released back to raspivid.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Mon Jul 25, 2016 12:22 pm

So nStride on the input port will be wrong (I think this is the first time you've mentioned using BGR888 - I'd assumed YUV420PackedPlanar). BGR/RGB888 is 24bpp, so nStride will need to be at least ((nFrameWidth+15) & 0xFFF0) * 3 = 3840 in your case. You should be able to leave it as 0 and let the component set it appropriately. The same should be true of nBufferSize.
Are you checking the return value from your call setting the Port Definition? I'm surprised it isn't rejecting it based on an invalid format.

nFilledLen on the buffer needs to be set to nBufferSize. In your case I think the numbers may all just fall out anyway.

The codec will do a conversion from BGR888 to YUV420 before encoding. That shouldn't make a difference.

What buffers do you get back, and with what nFilledLen and nFlags for each?
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Mon Jul 25, 2016 12:29 pm

6by9 wrote:So nStride on the input port will be wrong (I think this is the first time you've mentioned using BGR888 - I'd assumed YUV420PackedPlanar). BGR/RGB888 is 24bpp, so nStride will need to be at least ((nFrameWidth+15) & 0xFFF0) * 3 = 3840 in your case. You should be able to leave it as 0 and let the component set it appropriately. The same should be true of nBufferSize.
Are you checking the return value from your call setting the Port Definition? I'm surprised it isn't rejecting it based on an invalid format.

nFilledLen on the buffer needs to be set to nBufferSize. In your case I think the numbers may all just fall out anyway.

The codec will do a conversion from BGR888 to YUV420 before encoding. That shouldn't make a difference.

What buffers do you get back, and with what nFilledLen and nFlags for each?
I was setting nStride to 3840 before, but the nBufferSize seems not multiplying by 3. nBufferSize is reported as 2764800 regardless of whether nStride is 1280 or 3840 (the image encoded correctly in both cases, too). My BGR888 data is exactly 2764800 bytes as well.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Mon Jul 25, 2016 12:41 pm

hopkinskong wrote:I was setting nStride to 3840 before, but the nBufferSize seems not multiplying by 3. nBufferSize is reported as 2764800 regardless of whether nStride is 1280 or 3840 (the image encoded correctly in both cases, too). My BGR888 data is exactly 2764800 bytes as well.
In which case I suspect that the component has replaced your incorrect nStride of 1280 automatically and you haven't noticed.
I'll quote the spec.
4.2.2 Minimum Buffer Payload Size for Uncompressed Data
Uncompressed image and video data have a minimum buffer payload size. The minimum buffer payload size is determined by the nSliceHeight and nStride fields of the port definition structure. nStride indicates the width of a span in bytes; when negative, it indicates the data is bottom-up instead of top-down. nSliceHeight indicates the number of spans in a slice.
The minimum buffer payload size can be easily calculated as the absolute value of (nSliceHeight * nStride).
2764800 is the correct size for 1280x720 BGR888. I don't get your "but the nBufferSize seems not multiplying by 3" if the stride is correct at 3840 - the stride has already compensated for the 24bpp aspect, so no multiplication needed.
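The spec's formula, as a sketch (min_payload() is an illustrative helper, not an IL function):

```c
#include <assert.h>

/* Minimum buffer payload per the IL spec: |nSliceHeight * nStride|.
   A negative nStride denotes a bottom-up image, hence the absolute value. */
static long min_payload(long slice_height, long stride)
{
    long v = slice_height * stride;
    return v < 0 ? -v : v;
}
```

720 * 3840 = 2764800, which matches the reported nBufferSize exactly, with no further factor of 3.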

What buffers and flags do you get back?
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Mon Jul 25, 2016 12:57 pm

6by9 wrote:
hopkinskong wrote:I was setting nStride to 3840 before, but the nBufferSize seems not multiplying by 3. nBufferSize is reported as 2764800 regardless of whether nStride is 1280 or 3840 (the image encoded correctly in both cases, too). My BGR888 data is exactly 2764800 bytes as well.
In which case I suspect that the component has replaced your incorrect nStride of 1280 automatically and you haven't noticed.
I'll quote the spec.
4.2.2 Minimum Buffer Payload Size for Uncompressed Data
Uncompressed image and video data have a minimum buffer payload size. The minimum buffer payload size is determined by the nSliceHeight and nStride fields of the port definition structure. nStride indicates the width of a span in bytes; when negative, it indicates the data is bottom-up instead of top-down. nSliceHeight indicates the number of spans in a slice.
The minimum buffer payload size can be easily calculated as the absolute value of (nSliceHeight * nStride).
2764800 is the correct size for 1280x720 BGR888. I don't get your "but the nBufferSize seems not multiplying by 3" if the stride is correct at 3840 - the stride has already compensated for the 24bpp aspect, so no multiplication needed.

What buffers and flags do you get back?
I thought I needed to allocate some extra space for the stride, i.e. 1280(w)*720(h)*3(channels)*3(stride), because I need to do this when I am playing with the H.264 encoder or I get incorrect frames. I had set nBufferSize=8294400, but the reported nBufferSize is still 2764800; that's why I said "nBufferSize seems not multiplying by 3". Maybe JPEG encoding does not require multiplying nBufferSize by 3 for the stride.

My input buffer nFilledLen=2764800 (1280*720*3)
Output buffer nFilledLen=73576
Output buffer nFlags=0x11 (=0b10001=OMX_BUFFERFLAG_ENDOFFRAME and OMX_BUFFERFLAG_EOS )

hopkinskong
Posts: 22
Joined: Sun Jul 03, 2016 9:35 am
Location: Hong Kong

Re: Running two OMX instances

Mon Jul 25, 2016 1:18 pm

I'm sorry, it seems that there is more than one iteration of the encoding loop:
  • (First iteration)
    1. Wrote 2764800 bytes to input buffer
    2. Wrote 0 bytes to output buffer, flags = 0x0
    (Second iteration)
    3. Wrote 2764800 bytes to input buffer
    4. Wrote 73576 bytes to output buffer, flags = 0x11 (EOS, ENDOFFRAME)
It seems the input buffer is 1 frame ahead of the iteration. Maybe I need to set the initial state of encoder_output_buffer_available to 0.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Mon Jul 25, 2016 1:19 pm

YUV420 is a planar format, so nStride defines the stride of the first of those planes. The luma plane is 8bpp, so nStride = ALIGN_UP(nFrameWidth, 32). Stride values for the chroma planes are assumed to be half that of the luma.
RGB is an interleaved format, so there is only one plane, and nStride = (ALIGN_UP(nFrameWidth,16) *3)
(Alignment gives significant performance advantages due to the way the vector processing and memory access work).
There should be no significant differences between video_encode for H264 and image_encode for JPEG in this regard.
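The stride rules above can be sketched like this; the ALIGN_UP macro and helper names are illustrative, while the alignment values (32 for the luma plane, 16 pixels for interleaved RGB) are the ones stated above:

```c
#include <assert.h>

/* Round x up to the next multiple of a (a must be a power of two). */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Planar YUV420: the luma plane is 8bpp with its stride aligned to 32;
   chroma strides are assumed to be half the luma stride. */
static int stride_yuv420_luma(int frame_width)
{
    return ALIGN_UP(frame_width, 32);
}

/* Interleaved BGR/RGB888: a single 24bpp plane, width aligned to 16 pixels. */
static int stride_bgr888(int frame_width)
{
    return ALIGN_UP(frame_width, 16) * 3;
}
```

For frame_width=1280 this gives the 3840-byte BGR888 stride discussed earlier in the thread.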

I'd be very surprised if an output nFilledLen of 73576 bytes was a fully encoded 1280x720 JPEG, although nFlags of 0x11 (OMX_BUFFERFLAG_EOS and OMX_BUFFERFLAG_ENDOFFRAME) would say it is. Is that file decodeable?
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7026
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Running two OMX instances

Mon Jul 25, 2016 1:24 pm

hopkinskong wrote:I'm sorry, it seems that there is more than one iteration of the encoding loop:
  • (First iteration)
    1. Wrote 2764800 bytes to input buffer
    2. Wrote 0 bytes to output buffer, flags = 0x0
    (Second iteration)
    3. Wrote 2764800 bytes to input buffer
    4. Wrote 73576 bytes to output buffer, flags = 0x11 (EOS, ENDOFFRAME)
It seems the input buffer is 1 frame ahead of the iteration. Maybe I need to set the initial state of encoder_output_buffer_available to 0.
Most definitely!
IL is an asynchronous system. You can stuff input buffers in and they'll all wait patiently until there are sufficient buffers to accept that data. You should be able to match input and output using timestamps, or by deliberately waiting on ENDOFFRAME flags where supported.

Also be VERY careful about not submitting buffers that haven't been returned to you as yet. There are pointers inside the headers that are used to create linked lists of buffer headers. Submit the same buffer twice and you get a loop in the linked list.
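One defensive pattern for the warning above, sketched with a hypothetical in_flight flag and wrapper struct rather than anything from IL itself:

```c
#include <assert.h>

/* Wraps a buffer header with an ownership flag so the same header is never
   submitted twice while the component still owns it (which would create a
   loop in the component's internal linked list of buffer headers). */
struct tracked_buf {
    int in_flight;  /* 1 while owned by the component */
    /* OMX_BUFFERHEADERTYPE *hdr; would live here in real code */
};

static int try_submit(struct tracked_buf *b)
{
    if (b->in_flight)
        return -1;        /* refuse: the component still owns this header */
    b->in_flight = 1;     /* then call OMX_EmptyThisBuffer/OMX_FillThisBuffer */
    return 0;
}

/* Call this from the EmptyBufferDone/FillBufferDone callback. */
static void on_buffer_done(struct tracked_buf *b)
{
    b->in_flight = 0;
}
```

A second try_submit() on the same buffer fails until the done callback has run, which is exactly the property the warning asks for.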

What is setting ctx.encoder_input_buffer_needed? If you've been round the loop once then it should be cleared, but your logging says you've pushed another buffer in.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.
