piotrek
Posts: 5
Joined: Tue Jul 19, 2016 2:15 pm

Getting garbage out of OMX.broadcom.image_encode

Tue Jul 19, 2016 2:48 pm

Hello, I have been struggling for quite some time to get GPU-accelerated JPEG encoding working. Basically, what I want to do is feed the OMX.broadcom.image_encode component YUYV data (obtained from a v4l2 camera) and get a JPEG-compressed image out of it. My program is heavily based on the example found at https://github.com/gagle/raspberrypi-openmax-jpeg, the difference being that instead of tunneling data from the camera component I provide raw YUYV buffers myself.

So far I have been able to successfully change the state to executing, supply full frame data and receive a 'valid' JPEG image out of it. The problem is that the JPEG image is complete garbage, like the example below (https://www.dropbox.com/s/tzoph2otsyq3efv/still.jpg). (The frame should be all black.)


I am configuring the jpeg encoding like this:

Code: Select all

#define JPEG_QUALITY 100 //1 .. 100
#define JPEG_EXIF_DISABLE OMX_TRUE
#define JPEG_IJG_ENABLE OMX_TRUE
#define JPEG_THUMBNAIL_ENABLE OMX_FALSE
#define JPEG_THUMBNAIL_WIDTH 64 //0 .. 1024
#define JPEG_THUMBNAIL_HEIGHT 48 //0 .. 1024
#define JPEG_PREVIEW OMX_FALSE

void set_jpeg_settings(component_t* encoder) 
{
	printf("configuring '%s' settings\n", encoder->name);

	OMX_ERRORTYPE error;
  
	//Quality
	OMX_IMAGE_PARAM_QFACTORTYPE quality;
	OMX_INIT_STRUCTURE(quality);
	quality.nPortIndex = 341;
	quality.nQFactor = JPEG_QUALITY;
	if ((error = OMX_SetParameter(encoder->handle, OMX_IndexParamQFactor, &quality)))
	{
		fprintf(stderr, "error: OMX_SetParameter: %s\n", dump_OMX_ERRORTYPE(error));
		exit(1);
	}
  
	//Disable EXIF tags
	OMX_CONFIG_BOOLEANTYPE exif;
	OMX_INIT_STRUCTURE(exif);
	exif.bEnabled = JPEG_EXIF_DISABLE;
	if ((error = OMX_SetParameter(encoder->handle, OMX_IndexParamBrcmDisableEXIF, &exif))) 
	{
		fprintf(stderr, "error: OMX_SetParameter: %s\n", dump_OMX_ERRORTYPE(error));
		exit(1);
	}
  
	//Enable IJG table
	OMX_PARAM_IJGSCALINGTYPE ijg;
	OMX_INIT_STRUCTURE(ijg);
	ijg.nPortIndex = 341;
	ijg.bEnabled = JPEG_IJG_ENABLE;
	if ((error = OMX_SetParameter(encoder->handle, OMX_IndexParamBrcmEnableIJGTableScaling, &ijg))) 
	{
		fprintf(stderr, "error: OMX_SetParameter: %s\n", dump_OMX_ERRORTYPE(error));
		exit(1);
	}
  
	//Thumbnail
	OMX_PARAM_BRCMTHUMBNAILTYPE thumbnail;
	OMX_INIT_STRUCTURE(thumbnail);
	thumbnail.bEnable = JPEG_THUMBNAIL_ENABLE;
	thumbnail.bUsePreview = JPEG_PREVIEW;
	thumbnail.nWidth = JPEG_THUMBNAIL_WIDTH;
	thumbnail.nHeight = JPEG_THUMBNAIL_HEIGHT;
	if ((error = OMX_SetParameter(encoder->handle, OMX_IndexParamBrcmThumbnail, &thumbnail))) 
	{
		fprintf(stderr, "error: OMX_SetParameter: %s\n", dump_OMX_ERRORTYPE(error));
		exit(1);
	}
}
And here is the format configuration:

Code: Select all

#define CAM_WIDTH 720
#define CAM_HEIGHT 480

printf("configuring '%s' output port definition\n", encoder.name);
	
	OMX_PARAM_PORTDEFINITIONTYPE port_def;
	OMX_INIT_STRUCTURE(port_def);
	port_def.nPortIndex = 341;
	if ((error = OMX_GetParameter(encoder.handle, OMX_IndexParamPortDefinition, &port_def))) 
	{
		fprintf(stderr, "error: OMX_GetParameter: %s\n", dump_OMX_ERRORTYPE(error));
		exit(1);
	}
	
	port_def.format.image.nFrameWidth = ALIGN(CAM_WIDTH, 32);
	port_def.format.image.nFrameHeight = ALIGN(CAM_HEIGHT, 16);
	port_def.format.image.eCompressionFormat = OMX_IMAGE_CodingJPEG;
	port_def.format.image.eColorFormat = OMX_COLOR_FormatUnused;
	
	if ((error = OMX_SetParameter(encoder.handle, OMX_IndexParamPortDefinition, &port_def))) 
	{
		fprintf(stderr, "error: OMX_SetParameter: %s\n", dump_OMX_ERRORTYPE(error));
		exit(1);
	}
	
	printf("configuring '%s' input port definition\n", encoder.name);
	
	OMX_INIT_STRUCTURE(port_def);
	port_def.nPortIndex = 340;
	if ((error = OMX_GetParameter(encoder.handle, OMX_IndexParamPortDefinition, &port_def))) 
	{
		fprintf(stderr, "error: OMX_GetParameter: %s\n", dump_OMX_ERRORTYPE(error));
		exit(1);
	}
	
	port_def.format.image.nFrameWidth = ALIGN(CAM_WIDTH, 32);
	port_def.format.image.nFrameHeight = ALIGN(CAM_HEIGHT, 16);
	port_def.format.image.eCompressionFormat = OMX_IMAGE_CodingUnused;
	port_def.format.image.eColorFormat = OMX_COLOR_FormatYUV422PackedPlanar;
	port_def.nBufferSize = 0;
	port_def.format.image.nStride = port_def.format.image.nFrameWidth * 2; //1 pixel = 2 bytes
	
	if ((error = OMX_SetParameter(encoder.handle, OMX_IndexParamPortDefinition, &port_def))) 
	{
		fprintf(stderr, "error: OMX_SetParameter: %s\n", dump_OMX_ERRORTYPE(error));
		exit(1);
	}
The rest of the code is just standard buffer filling and emptying. The only interesting part is where I calculate the number of bytes I need to provide:

Code: Select all

const int testBufferSize = port_def.format.image.nStride * port_def.format.image.nFrameHeight;
int testBufferLeftToWrite = testBufferSize;
Then I supply image_encode with exactly 'testBufferSize' bytes and after that I set 'OMX_BUFFERFLAG_EOS' to mark the end of the frame.

Code: Select all

if (testBufferLeftToWrite <= 0)
{
	fprintf(stdout, "info: done writing to omx buffer\n");
	encoder_input_buffer->nFlags |= OMX_BUFFERFLAG_EOS;
}
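For reference, the surrounding fill/empty logic looks roughly like this (a sketch only: names such as encoder_output_buffer, wait_for_fill_done() and output_file stand in for the event handling and file writing in my real program, which follows the gagle example):

Code: Select all

// Hand the filled input buffer (nFilledLen set, and OMX_BUFFERFLAG_EOS on the
// last chunk, as shown above) over to the component.
if ((error = OMX_EmptyThisBuffer(encoder.handle, encoder_input_buffer)))
{
	fprintf(stderr, "error: OMX_EmptyThisBuffer: %s\n", dump_OMX_ERRORTYPE(error));
	exit(1);
}

// Request compressed output; once the FillBufferDone callback fires,
// nFilledLen bytes of JPEG data are available in the output buffer.
if ((error = OMX_FillThisBuffer(encoder.handle, encoder_output_buffer)))
{
	fprintf(stderr, "error: OMX_FillThisBuffer: %s\n", dump_OMX_ERRORTYPE(error));
	exit(1);
}
wait_for_fill_done(); // placeholder: block until FillBufferDone has been signalled
fwrite(encoder_output_buffer->pBuffer + encoder_output_buffer->nOffset, 1,
       encoder_output_buffer->nFilledLen, output_file); // output_file: placeholder FILE*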
Everything runs fine without any errors, warnings or unexpected state changes, so I'm clueless about what might be wrong.

EDIT:
I would like to add that I tested these color formats without any success:
port_def.format.image.eColorFormat = OMX_COLOR_FormatYUV422PackedPlanar;
port_def.format.image.eColorFormat = OMX_COLOR_FormatYCbYCr;
port_def.format.image.eColorFormat = OMX_COLOR_Format16bitRGB565;
Last edited by piotrek on Tue Jul 19, 2016 3:12 pm, edited 1 time in total.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23052
Joined: Sat Jul 30, 2011 7:41 pm

Re: Getting garbage out of OMX.broadcom.image_encode

Tue Jul 19, 2016 2:53 pm

I think the stride is wrong judging by the results, or the data isn't aligned to 16/32 byte boundaries, or the width and height you are trying to encode don't match the width/height of the incoming data.
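
For a 720x480 YUYV frame the numbers need to line up roughly like this (illustrative values only):

Code: Select all

#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

int width  = 720;                   /* what the camera is actually delivering */
int height = 480;
int stride = ALIGN(width, 32) * 2;  /* 1472: 2 bytes per pixel, row padded to a 32-pixel boundary */
int slice  = ALIGN(height, 16);     /* 480: already a multiple of 16 */
int frame_bytes = stride * slice;   /* bytes the input port expects for one whole frame */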
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."

piotrek
Posts: 5
Joined: Tue Jul 19, 2016 2:15 pm

Re: Getting garbage out of OMX.broadcom.image_encode

Tue Jul 19, 2016 2:59 pm

I experimented with multiple stride settings, different multiples of the width, and they all give similar results. Right now I'm calculating the stride like this, because of the color format I specify:

Code: Select all

port_def.format.image.nStride = port_def.format.image.nFrameWidth * 2; //1 pixel = 2 bytes
For the example image I filled the buffers with all zeros, so the image should be all black I guess, or at least the same color everywhere.

I'm filling the buffer like this:

Code: Select all

int buffSize = encoder_input_buffer->nAllocLen;
int toSend = (testBufferLeftToWrite < buffSize ? testBufferLeftToWrite : buffSize);
encoder_input_buffer->nFilledLen = toSend;
			
for (int n = 0; n < toSend; ++n)
{
	encoder_input_buffer->pBuffer[n] = 0;
}

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 23052
Joined: Sat Jul 30, 2011 7:41 pm

Re: Getting garbage out of OMX.broadcom.image_encode

Tue Jul 19, 2016 3:14 pm

Ah, you are using YUV422PackedPlanar in the port format. That isn't YUYV. It's each plane separate, stride = width. YUYV is interleaved.

http://www.fourcc.org/yuv.php
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
"My grief counseller just died, luckily, he was so good, I didn't care."

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 6995
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Getting garbage out of OMX.broadcom.image_encode

Tue Jul 19, 2016 3:25 pm

Correct.
OMX_COLOR_FormatYUV422PackedPlanar is a complete Y plane, followed by a U plane, followed by a V plane, per slice. Think of it as I420 but with only horizontal chroma subsampling.

The format you're describing would be https://github.com/raspberrypi/userland ... mmon.h#L79
* YCbYCr : Organized as 16bit YUYV (i.e. YCbYCr)
* YCrYCb : Organized as 16bit YVYU (i.e. YCrYCb)
* CbYCrY : Organized as 16bit UYVY (i.e. CbYCrY)
* CrYCbY : Organized as 16bit VYUY (i.e. CrYCbY)
OMX_COLOR_FormatYCbYCr,
OMX_COLOR_FormatYCrYCb,
OMX_COLOR_FormatCbYCrY,
OMX_COLOR_FormatCrYCbY,
and those are not supported by image_encode. (I do have a patch that started to add support for it, but never completed it - the GPU has to deinterleave the data as the JPEG hardware block can't consume it when interleaved).
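
If you need to feed YUYV into image_encode today, the deinterleave has to be done on the ARM side first. A rough sketch (my illustration, not firmware code) of unpacking one YUYV frame into the planes that OMX_COLOR_FormatYUV422PackedPlanar expects:

Code: Select all

#include <stdint.h>

// Rough sketch: split a YUYV (YCbYCr) frame into a full Y plane followed by
// U and V planes at half the horizontal resolution, i.e. the layout that
// OMX_COLOR_FormatYUV422PackedPlanar expects. Stride padding is ignored here;
// if the port uses a padded stride, each output row needs padding too.
static void yuyv_to_yuv422_planar(const uint8_t *src, int width, int height,
                                  uint8_t *y, uint8_t *u, uint8_t *v)
{
	for (int row = 0; row < height; row++)
	{
		const uint8_t *s = src + row * width * 2; // 2 bytes per pixel in YUYV
		for (int col = 0; col < width; col += 2)
		{
			*y++ = s[0]; // Y0
			*u++ = s[1]; // Cb shared by the pixel pair
			*y++ = s[2]; // Y1
			*v++ = s[3]; // Cr shared by the pixel pair
			s += 4;
		}
	}
}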
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

piotrek
Posts: 5
Joined: Tue Jul 19, 2016 2:15 pm

Re: Getting garbage out of OMX.broadcom.image_encode

Tue Jul 19, 2016 4:14 pm

Thanks a lot for your help! I wasn't aware that image_encode doesn't support those formats, so I will stick to YUV422PackedPlanar. After a bit of experimenting I was able to make some progress. If I do not change the image width, height or stride and leave them at the defaults (96x80, stride 192), I can successfully generate a monochrome JPEG image (filling the input buffer with just a single value), like these:

https://www.dropbox.com/s/dp3wae9jewmbs ... 3.jpg?dl=0
https://www.dropbox.com/s/2p0zw527aawrm ... 4.jpg?dl=0


However, as soon as I change width or height (in this case I just changed height to 480) the image is broken :/

https://www.dropbox.com/s/hjun9iqv9qipc ... 9.jpg?dl=0

This is how I calculate how many bytes of data I need to supply to image_encode:

Code: Select all

const int testBufferSize = port_def.format.image.nStride * port_def.format.image.nFrameHeight * 1.5;
//but I also tried
const int testBufferSize = port_def.format.image.nStride * port_def.format.image.nFrameHeight / 2;
const int testBufferSize = port_def.format.image.nStride * port_def.format.image.nFrameHeight * 2;
const int testBufferSize = port_def.format.image.nStride * port_def.format.image.nFrameHeight * 3;
const int testBufferSize = port_def.format.image.nStride * port_def.format.image.nFrameHeight * 4;

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 6995
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Getting garbage out of OMX.broadcom.image_encode

Tue Jul 19, 2016 4:30 pm

Hmm, my patch got pushed. Apologies, I thought I'd held it back.
I'd need to check how far that patch did get - the plumbing is the easy bit, but I'm not sure where the image conversion routines got to.

One thing to try: some codecs support encoding to a different colourspace from the input format, e.g. pass in YUV422 (of some flavour) but encode as YUV420. That would be the bit I suspect is lacking, as I only implemented YUYV (of some flavour) to YUV422, not to YUV420. If you set image.eColorFormat on the OUTPUT port to YUV422, that may solve the problem, e.g.:

Code: Select all

   port_def.format.image.nFrameWidth = ALIGN(CAM_WIDTH, 32);
   port_def.format.image.nFrameHeight = ALIGN(CAM_HEIGHT, 16);
   port_def.format.image.eCompressionFormat = OMX_IMAGE_CodingJPEG;
   port_def.format.image.eColorFormat = OMX_COLOR_FormatYUV422PackedPlanar;
For YCbYCr I would expect:

Code: Select all

   port_def.format.image.nFrameWidth = CAM_WIDTH;
   port_def.format.image.nFrameHeight = CAM_HEIGHT;
   port_def.format.image.nSliceHeight = ALIGN(CAM_HEIGHT, 16);
   port_def.format.image.nStride = ALIGN(CAM_WIDTH, 32) << 1;
   port_def.format.image.eCompressionFormat = OMX_IMAGE_CodingUnused;
   port_def.format.image.eColorFormat = OMX_COLOR_FormatYCbYCr;
   port_def.nBufferSize = 0;
   // or
   port_def.nBufferSize = port_def.format.image.nSliceHeight * port_def.format.image.nStride; 
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 6995
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Getting garbage out of OMX.broadcom.image_encode

Tue Jul 19, 2016 4:46 pm

Just double-checking the firmware side for image_encode: nSliceHeight will be your issue.
It'll accept values of either 16, or the height aligned up to a multiple of 16. If neither of those is set (e.g. your 0), then it chooses 16.

You've then computed your own buffer size instead of reading the value back off the port, but the component will only look at the part of the buffer it is expecting. The EOS then makes it flush the codec, as it is the end of stream, so you get only a partial encoding.
The default format is nFrameWidth=96, nFrameHeight=80, nSliceHeight=80, nStride=(as defined by the format I think). So it falls into the nSliceHeight=ALIGN_UP(nFrameHeight,16).
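
So the safer approach is to set nSliceHeight explicitly and then read the buffer size back off the port rather than computing it yourself, roughly like this (sketch only, following the structure of your code above):

Code: Select all

	port_def.format.image.nSliceHeight = ALIGN(CAM_HEIGHT, 16); // 16, or the height aligned up to a multiple of 16
	if ((error = OMX_SetParameter(encoder.handle, OMX_IndexParamPortDefinition, &port_def)))
	{
		fprintf(stderr, "error: OMX_SetParameter: %s\n", dump_OMX_ERRORTYPE(error));
		exit(1);
	}

	// Read the definition back: nBufferSize now holds what the component
	// actually expects for one input buffer of this format.
	if ((error = OMX_GetParameter(encoder.handle, OMX_IndexParamPortDefinition, &port_def)))
	{
		fprintf(stderr, "error: OMX_GetParameter: %s\n", dump_OMX_ERRORTYPE(error));
		exit(1);
	}
	const int testBufferSize = port_def.nBufferSize;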

Oh I do love IL ... not!
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

piotrek
Posts: 5
Joined: Tue Jul 19, 2016 2:15 pm

Re: Getting garbage out of OMX.broadcom.image_encode

Tue Jul 19, 2016 5:50 pm

Thanks, I got it working now. As you said, the main issue was an incorrect slice height value. It looks like the OMX_COLOR_FormatYCbYCr => OMX_COLOR_FormatYUV422PackedPlanar conversion doesn't work yet (I'm getting some weird results out of it), but OMX_COLOR_FormatYUV422PackedPlanar => OMX_COLOR_FormatYUV422PackedPlanar works just fine. In another thread I found information that the resize component can do hardware accelerated color conversion, so I'm going to try to get it working together with image_encode.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 6995
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Getting garbage out of OMX.broadcom.image_encode

Tue Jul 19, 2016 7:29 pm

piotrek wrote:Thanks, I got it working now. As you said, the main issue was an incorrect slice height value. It looks like the OMX_COLOR_FormatYCbYCr => OMX_COLOR_FormatYUV422PackedPlanar conversion doesn't work yet (I'm getting some weird results out of it), but OMX_COLOR_FormatYUV422PackedPlanar => OMX_COLOR_FormatYUV422PackedPlanar works just fine.
OK, I'll check it out. It's easy enough to make raspistill pass YUYV from camera to image_encode to exercise it - whilst that is using MMAL, it's almost exactly the same code paths through image_encode.
piotrek wrote:In another thread I found information that the resize component can do hardware accelerated color conversion, so I'm going to try to get it working together with image_encode.
Sorry, you've been misinformed there. resize uses the VPU, not dedicated hardware. It doesn't handle YUYV either - very few of the IL components do because it's a pain to process.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 6995
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Getting garbage out of OMX.broadcom.image_encode

Tue Jul 26, 2016 10:19 pm

It looks like YUYV works fine from the camera through image_encode.
Using the following diffs:

Code: Select all

diff --git a/host_applications/linux/apps/raspicam/RaspiStill.c b/host_applications/linux/apps/raspicam/RaspiStill.c
index 46e78e4..9d1f32f 100644
--- a/host_applications/linux/apps/raspicam/RaspiStill.c
+++ b/host_applications/linux/apps/raspicam/RaspiStill.c
@@ -1035,7 +1035,7 @@ static MMAL_STATUS_T create_camera_component(RASPISTILL_STATE *state)
          { MMAL_PARAMETER_CAMERA_CONFIG, sizeof(cam_config) },
          .max_stills_w = state->width,
          .max_stills_h = state->height,
-         .stills_yuv422 = 0,
+         .stills_yuv422 = 1,
          .one_shot_stills = 1,
          .max_preview_video_w = state->preview_parameters.previewWindow.width,
          .max_preview_video_h = state->preview_parameters.previewWindow.height,
@@ -1136,7 +1136,7 @@ static MMAL_STATUS_T create_camera_component(RASPISTILL_STATE *state)
         mmal_port_parameter_set(still_port, &fps_range.hdr);
    }
    // Set our stills format on the stills (for encoder) port
-   format->encoding = MMAL_ENCODING_OPAQUE;
+   format->encoding = MMAL_ENCODING_YUYV;
    format->es->video.width = VCOS_ALIGN_UP(state->width, 32);
    format->es->video.height = VCOS_ALIGN_UP(state->height, 16);
    format->es->video.crop.x = 0;
and raspistill will generate YUYV images for encoding. image_encode is processing those perfectly. The command line I was using is
raspistill -v -o image.jpg -w 3264 -h 2448
(I'm using a V2.1 camera)
I've just done a "sudo rpi-update" to get the absolute latest firmware, and have checked that there are no worrying asserts logged if "start_debug=1" is added to config.txt.

Please post your app or a runnable sample to github or similar if you want me to do further debugging, although I do hate OpenMAX these days.
I'd also suggest you try using something like Vooya to check that the images you are trying to encode really are as you think they are.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

piotrek
Posts: 5
Joined: Tue Jul 19, 2016 2:15 pm

Re: Getting garbage out of OMX.broadcom.image_encode

Tue Aug 02, 2016 1:23 pm

I'm sorry for not replying earlier. Thanks a lot for the patch! Everything is working just fine and the encoder performance is great. I'm sure this thread will be very useful for others with the same issues.
