OtherCrashOverride
Posts: 582
Joined: Sat Feb 02, 2013 3:25 am

Proposal for OMX.broadcom.read_media

Wed Feb 06, 2013 9:10 am

I have seen lots of posts regarding the read_media component and propose the following solution:

Instead of having vcfiled, why not just allow pumping the raw data to the component through OpenMAX buffers on an input port, where the component's sole responsibility is to demux? This allows a clean separation of OS and GPU and lets the data come from anywhere, be it file, network, or otherwise. More importantly, it would eliminate an additional data copy from support libraries such as libav to the decoders, saving memory bandwidth. This solution also allows those who wish to use an ARM-side library to continue doing so without impact.
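To make the idea concrete, here is a minimal sketch of the client side, assuming a hypothetical demux-only component whose handle and input buffer have already been set up (none of this is an existing Broadcom interface):

```c
#include <stdio.h>
#include <OMX_Core.h>
#include <OMX_Component.h>

/* Sketch of the proposal: the application reads raw container bytes
 * from anywhere and hands them to a demux-only component through a
 * plain OpenMAX input port.  The "demux-only" component itself is the
 * hypothetical part; the buffer mechanics below are standard IL. */
static OMX_ERRORTYPE pump_one_buffer(OMX_HANDLETYPE hDemux, FILE *src,
                                     OMX_BUFFERHEADERTYPE *pBuf)
{
    size_t n = fread(pBuf->pBuffer, 1, pBuf->nAllocLen, src);
    if (n == 0)
        return OMX_ErrorNoMore;                /* nothing left to feed */

    pBuf->nOffset    = 0;
    pBuf->nFilledLen = (OMX_U32)n;
    pBuf->nFlags     = feof(src) ? OMX_BUFFERFLAG_EOS : 0;

    /* The demuxer parses the container and emits elementary streams on
     * its output ports; where the bytes came from is irrelevant to it. */
    return OMX_EmptyThisBuffer(hDemux, pBuf);
}
```

Because the component only ever sees buffers, the same code path works whether the data is read from a file, a socket, or generated in memory.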

saintdev
Posts: 39
Joined: Mon Jun 18, 2012 10:56 pm

Re: Proposal for OMX.broadcom.read_media

Wed Feb 06, 2013 7:09 pm

If I'm reading this correctly, this sounds like OpenMAX content pipes (have a look at the spec if you want more info). I haven't actually checked, but I would suspect these are not implemented.
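For anyone curious, here is roughly what the content-pipe path from the IL 1.1 spec looks like. OMX_GetContentPipe and CP_PIPETYPE come from the standard Khronos headers; whether the Broadcom core actually implements them is exactly the part I haven't checked, and a zero CPresult meaning success is an assumption:

```c
#include <OMX_Core.h>
#include <OMX_ContentPipe.h>

/* Content pipes invert the proposal above: the core hands back a
 * CP_PIPETYPE vtable for a URI and the *component* pulls bytes through
 * it, so I/O happens on the component's side of the fence. */
static int probe_content_pipe(OMX_STRING uri)
{
    CP_PIPETYPE *pipe = NULL;
    CPhandle handle = NULL;
    CPbyte buf[4096];

    if (OMX_GetContentPipe((OMX_HANDLETYPE *)&pipe, uri) != OMX_ErrorNone
            || pipe == NULL)
        return 0;                    /* content pipes not implemented */

    if (pipe->Open(&handle, uri, CP_AccessRead) != 0)
        return 0;
    pipe->Read(handle, buf, sizeof buf);  /* reads via the pipe, not libc */
    pipe->Close(handle);
    return 1;
}
```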

dom
Raspberry Pi Engineer & Forum Moderator
Posts: 5287
Joined: Wed Aug 17, 2011 7:41 pm
Location: Cambridge

Re: Proposal for OMX.broadcom.read_media

Wed Feb 06, 2013 7:46 pm

The other problem is that the read_media/write_media components only have limited container support (e.g. no .ts files).
Broadcom has taken the decision that this is better handled through open-source ARM-side code, so there will be no support for new features or bug fixes. And since the code runs on the GPU, it is not open source.

I think this is better handled on the ARM through libav/ffmpeg.

OtherCrashOverride
Posts: 582
Joined: Sat Feb 02, 2013 3:25 am

Re: Proposal for OMX.broadcom.read_media

Thu Feb 07, 2013 2:39 am

dom wrote:
I think this is better handled on the ARM through libav/ffmpeg.
"Why libav/ffmpeg Is Not The Answer"

Issues with libav/ffmpeg:

* Results in duplication of media data, which hurts both memory bandwidth and application memory footprint (both matter on embedded devices with limited resources such as a Raspberry Pi): media is read into memory, demuxed, written to memory, then *copied to memory again* to fill an OMX buffer, so 1 GB of media turns into 3 GB of traffic. Ideally the OMX buffers would point at the original data, eliminating any copying, but the buffer alignment requirements of codecs make that impractical in real-world scenarios. (A sketch of the extra copy follows this list.)

* Introduces (lots of!) unnecessary complexity to a project, raising the barrier to entry: using libav/ffmpeg is not the goal, using media is. Compiling it (see the later point on specialized builds), binding libav/ffmpeg to higher-level languages, introducing a threaded data pump to process the data, and learning another API would all be eliminated by providing an OMX demux component.

* API instability: libav/ffmpeg's API is a moving target. Code written today may not work against the next version of the library, and the internet is littered with libavcodec samples/tutorials that no longer build. OpenMAX code is more future-proof.

* License: GPL/LGPL (not everything is open source or runs on Linux). This matters enough that the Raspberry Pi source is dual-licensed.

* Patent/licensing issues with unused components: getting "just what you need" requires specialized builds.

* It is better to solve the problem than to make it someone else's problem: an OMX demuxer solves the issue once for everyone instead of making everyone solve it themselves.
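
To make the first bullet concrete, here is roughly what the threaded data pump looks like today with libav; get_free_input_buffer() and hVideoDecode are hypothetical application-side names, not library calls:

```c
#include <string.h>
#include <libavformat/avformat.h>
#include <OMX_Core.h>
#include <OMX_Component.h>

/* The extra copy complained about above: libav has already read and
 * demuxed the stream into pkt.data, and the same bytes must then be
 * memcpy'd into an OMX buffer to reach the GPU decoder. */
extern OMX_BUFFERHEADERTYPE *get_free_input_buffer(void); /* app-defined */
extern OMX_HANDLETYPE hVideoDecode;                       /* app-defined */

static void pump_thread(AVFormatContext *fmt, int video_index)
{
    AVPacket pkt;
    while (av_read_frame(fmt, &pkt) >= 0) {        /* copies 1 and 2 */
        if (pkt.stream_index == video_index) {
            OMX_BUFFERHEADERTYPE *buf = get_free_input_buffer();
            /* assumes pkt.size <= buf->nAllocLen for brevity */
            memcpy(buf->pBuffer, pkt.data, pkt.size);    /* copy 3 */
            buf->nOffset = 0;
            buf->nFilledLen = pkt.size;
            OMX_EmptyThisBuffer(hVideoDecode, buf);
        }
        av_free_packet(&pkt);
    }
}
```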


Future:
* Possible Google Summer of Code project, "OMX.raspberrypi.demux" (education: learn OpenMAX and component authoring, and provide a reference for others wanting to learn). There is no technical issue preventing this from residing entirely on the ARM side, in a different library from the vendor-supplied components; the GPU/SIMD does not offer a substantial benefit to demuxing, which is mainly memory manipulation.

* It doesn't have to be done all at once: OMX.raspberrypi.demux.mp4, OMX.raspberrypi.demux.mkv (webm), OMX.raspberrypi.demux.avi, OMX.raspberrypi.demux.ts (a wiring sketch follows this list)

* Will provide the same advantages for RaspberryPI.Next (not a one-time-use scenario)

* Establishes the groundwork for future *audio* codecs, which are also being moved ARM-side: OMX.raspberrypi.decode.mp3.
(As a side note: while it's understandable that the OMX vendor wishes to move audio decoding ARM-side for their product line, the single-core ARMv6 CPU used in the RPi does not offer a NEON SIMD unit like the current multicore ARMv7 offerings. Is GPU-accelerated OpenMAX *DL* (not a typo: *D*L) available for the SoC in the RPi?)
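
As a rough wiring sketch for the above: the demux component name and its port number are hypothetical, while OMX.broadcom.video_decode and its input port 130 exist on the Pi today (OMX_Init() and state handling omitted):

```c
#include <OMX_Core.h>

extern OMX_CALLBACKTYPE app_callbacks;  /* app-defined EventHandler etc. */

static OMX_ERRORTYPE wire_demux_to_decoder(void)
{
    OMX_HANDLETYPE hDemux = NULL, hDecode = NULL;
    OMX_ERRORTYPE err;

    /* Hypothetical ARM-side demuxer proposed in this thread. */
    err = OMX_GetHandle(&hDemux, (OMX_STRING)"OMX.raspberrypi.demux.mp4",
                        NULL, &app_callbacks);
    if (err != OMX_ErrorNone)
        return err;

    /* The real GPU decoder shipped on the Raspberry Pi. */
    err = OMX_GetHandle(&hDecode, (OMX_STRING)"OMX.broadcom.video_decode",
                        NULL, &app_callbacks);
    if (err != OMX_ErrorNone)
        return err;

    /* Tunnel the demuxer's video elementary-stream output (port 1,
     * assumed) straight into the decoder's input port 130. */
    return OMX_SetupTunnel(hDemux, 1, hDecode, 130);
}
```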


In conclusion, this proposal not only brings performance benefits but also aligns with the Raspberry Pi Foundation's goal of education. Additionally, it provides a pathway for the future of OpenMAX on the Raspberry Pi, on both current and future hardware, and is therefore deserving of consideration.


I would like to hear comments and suggestions on this, especially from anyone interested in collaborating on a proof of concept or similar. Perhaps someone can recommend alternatives to libav/ffmpeg that would be readily adaptable to writing directly into OMX buffers.
