tjsheffler
Posts: 8
Joined: Sun Aug 25, 2013 4:51 pm

Simple h.264 HLS Server, audio too

Tue Sep 03, 2013 11:26 pm

I wanted to do some experiments with serving up HLS segments using ffmpeg and the camera capture board. I also wanted to try capturing audio using a USB microphone. I expected a lag between the audio and video; it turned out to be pretty big, and it seems to increase over time. Here is some of what I learned.

Prerequisites:
- build and install psips (https://github.com/AndyA/psips)
- build and install ffmpeg from source (here are my suggestions on how to do that: http://www.raspberrypi.org/phpBB3/viewt ... 28#p411628)

I've seen a number of posts that show how to use nginx to serve up the segments and manifest file. nginx is great, but it may be overkill for simple experiments. It turns out that Python comes with a tiny webserver built in that is more than capable of serving files to a few users.

Code: Select all

python -m SimpleHTTPServer <port>
This command starts a Python process that serves the files in the current directory. The server uses file suffixes to guess each file's content-type. For the file types we will be using (.html, .m3u8, and .ts) this little webserver is enough.
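On Python 3 the same built-in server is spelled `python3 -m http.server 8000` (SimpleHTTPServer was removed). Since the default handler doesn't know the HLS suffixes, a small subclass can register them; this is a sketch, and the MIME strings used are the conventional HLS ones, not something the stock module ships with:

```python
from http.server import SimpleHTTPRequestHandler

class HLSRequestHandler(SimpleHTTPRequestHandler):
    # copy the stock suffix-to-type map and add the two HLS types,
    # so .m3u8 and .ts are served with sensible Content-Type headers
    extensions_map = dict(SimpleHTTPRequestHandler.extensions_map)
    extensions_map[".m3u8"] = "application/x-mpegURL"
    extensions_map[".ts"] = "video/MP2T"

# to serve the current directory on port 8000:
#   from http.server import HTTPServer
#   HTTPServer(("", 8000), HLSRequestHandler).serve_forever()
```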

Andy's example shell script shows the proper way to make a silent audio track. Here, just to be different, I show how to grab audio from an external USB microphone. On my system, I can list the audio devices using

Code: Select all

arecord -l
My external USB microphone shows up as "card 1" with "subdevice 0." In the invocation of ffmpeg below, I use

Code: Select all

-f alsa -ac 1 -i hw:1,0
to grab the audio. This tells ffmpeg to use the ALSA input, configured for one audio channel; the input is card 1, subdevice 0 of that card.
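The mapping from the "arecord -l" line to the hw:&lt;card&gt;,&lt;device&gt; string can be sketched like this (the sample line below is an assumption; your device will report its own name):

```python
import re

# one line of "arecord -l" output (sample text, assumed for illustration)
line = "card 1: Device [USB PnP Sound Device], device 0: USB Audio [USB Audio]"

# the numbers after "card" and "device" are exactly what ffmpeg's
# ALSA input expects in its hw:<card>,<device> argument
m = re.search(r"card (\d+):.*device (\d+):", line)
hw = "hw:%s,%s" % (m.group(1), m.group(2))
print(hw)  # hw:1,0
```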

Here is the bash script I wrote to begin serving HLS video. It's based on Andy's, but starts up the Python Webserver in the current directory. The script generates the video.html file just so I don't have to remember to create one.

For ffmpeg, I included the segment_wrap option so that segment files are overwritten in a circular fashion. This eliminates the need to remove segment files.

Code: Select all

#!/bin/bash
#
# Capture raspivid video and ALSA audio, segment it via ffmpeg, and serve it via Python.
#

# echo commands as they are executed
set -x

# kill subprocesses upon ^C
function handle_sigint()
{
    for proc in $(jobs -p)
    do
        kill "$proc"
    done
}

# register the signal handler
trap handle_sigint SIGINT

# create a clean FIFO for our H264 feed
rm -f live.h264
mkfifo live.h264

# create an HTML5 <video> frame for our HLS index
rm -f video.html
cat <<EOF > video.html
<html>
HLS Video:
<br>
<video src="index.m3u8" width="640" height="480" controls>
</video>
</html>
EOF

# Start a Python HTTP Server.  By default it listens on port 8000.
# Start raspivid, pipe its output through psips into the FIFO.
# Use ffmpeg to combine video and audio and segment the result.

python -m SimpleHTTPServer &
raspivid -w 640 -h 480 -fps 25 -g 25 -hf -t 10000000 -b 1800000 -o - | psips > live.h264 &
ffmpeg -i live.h264 \
-itsoffset 8 -f alsa -ac 1 -i hw:1,0 \
-vcodec copy -acodec libfaac \
-map 0:0 -map 1:0 \
-f segment -segment_list_flags live -segment_wrap 20 -segment_time 5.0 -segment_format mpegts -segment_list_size 10 -segment_list index.m3u8 "segment%04d.ts" 
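The circular overwrite from -segment_wrap 20 amounts to taking the segment number modulo 20; a toy sketch of the resulting file names (not ffmpeg's actual code):

```python
def segment_name(n, wrap=20):
    # with -segment_wrap 20, segment numbers cycle modulo 20,
    # so the 21st segment overwrites segment0000.ts instead of
    # letting files accumulate on disk
    return "segment%04d.ts" % (n % wrap)

print(segment_name(0))   # segment0000.ts
print(segment_name(19))  # segment0019.ts
print(segment_name(20))  # segment0000.ts  (oldest file is reused)
```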
In Safari, you can view the HLS feed by pointing the browser to

Code: Select all

http://raspberrypi:8000/video.html
I've had this configuration running for a few hours at a time, and it has been stable. As with all HLS configurations, viewing latency is related to segment length. Here it is about 20 seconds (4 segments).

I expected some audio/video skew since the video frames are going through the fifo. In my tests, at the start of a broadcast, the video trailed the audio by approximately 5 seconds. This is what I expected, since the video was FIFO'd. I wanted to see if the audio/video offset could be compensated by a fixed offset. The "-itsoffset 8" flag introduces an extra lag on the audio stream to try to align it with the video stream.

As the broadcast went on, the video and audio grew increasingly out of sync. The CPU is kept busy, with ffmpeg using about 70-90% of it. (That's a lot, and it may be a problem.) That's as far as I've gotten with this test, and it's probably as far as I'll go. It was an interesting experiment. I think that improving the audio/video sync will require a solution that doesn't involve a FIFO.

Mychele
Posts: 10
Joined: Wed Sep 18, 2013 4:40 pm

Re: Simple h.264 HLS Server, audio too

Mon Sep 15, 2014 4:37 pm

Thank you for your guide, it works.
Just a question: how can I send the segments to another server (i.e. not localhost) in order to serve them with more bandwidth?
Thanks,
Michele

deivid
Posts: 46
Joined: Thu Oct 23, 2014 7:08 am

Re: Simple h.264 HLS Server, audio too

Wed Oct 29, 2014 6:43 am

I'm doing something similar to this. Have you gotten anything else working?

I'm streaming to an nginx server with the rtmp module, because HLS is not widely supported for desktops ATM.

Maybe your CPU usage is from the FIFO? I'm using this command:

Code: Select all

ffmpeg \
    -probesize 50 -analyzeduration 1000 -r 20.3 -i - \
    -f alsa -ar $ar -ac 1 -itsoffset 1 -probesize 50 -analyzeduration 1000 -i btheadset \
    -c:v copy -r $fps -vsync 1 \
    -c:a libfdk_aac -afterburner 0 -async 2 -ar 48000 -ac 1 -b:a 48k \
    -tune zerolatency -bufsize $((bw*2)) \
    -rtmp_buffer 100 -rtmp_live live \
    -flags +global_header+low_delay \
    -f flv rtmp://*****************
And CPU usage is 19-20%. If I only encode audio, CPU usage is 13%. Using libfdk_aac with afterburner disabled is the most efficient way I found to encode audio (I've also tried plain aac and mp3).

Also, the drift seems to be about 1.5% (I've been testing for 20 minutes now and it seems correct), so consider multiplying your fps by 1.015, like I did (20 -> 20.3).
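The fps correction above is just the observed drift applied as a multiplier; a quick check of the arithmetic:

```python
drift = 0.015   # ~1.5% audio/video drift observed over a 20-minute test
fps = 20
corrected = round(fps * (1 + drift), 2)
print(corrected)  # 20.3 -- the 20 -> 20.3 change used in the command above
```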

webbhlin
Posts: 1
Joined: Tue Mar 24, 2015 11:16 am

Re: Simple h.264 HLS Server, audio too

Tue Mar 24, 2015 11:21 am

hi Sir,

I am pretty new to ffmpeg. I currently don't have a microphone; which parts should I remove to run the HLS server?

I tried removing the -ac and AAC options, but it doesn't work.

python -m SimpleHTTPServer &
raspivid -w 640 -h 480 -fps 25 -g 25 -hf -t 10000000 -b 1800000 -o - | psips > live.h264 &
LD_LIBRARY_PATH=/usr/local/lib ffmpeg -i live.h264 \
-itsoffset 8 -f alsa -ac 1 -i hw:1,0 \
-vcodec copy -acodec libfaac \
-map 0:0 -map 1:0 \
-f segment -segment_list_flags live -segment_wrap 20 -segment_time 5.0 -segment_format mpegts -segment_list_size 10 -segment_list index.m3u8 "segment%04d.ts"


Thanks a lot for your assistance.

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Simple h.264 HLS Server, audio too

Wed Mar 25, 2015 1:28 pm

Remove "-map 1:0". Since you have no microphone, also drop the audio input line ("-itsoffset 8 -f alsa -ac 1 -i hw:1,0") and "-acodec libfaac".

You also don’t need "psips" anymore. Just add --inline to the raspivid command.

Fyn90
Posts: 1
Joined: Sun Sep 27, 2015 8:42 am

Re: Simple h.264 HLS Server, audio too

Mon Sep 28, 2015 10:09 am

Can't quite seem to get the stream pulling through to the player in index.html

When I run the script, the screen plays out the live stream and I can see the .ts files being created and then pulled into the live.m3u8 playlist.

Code: Select all

#!/bin/bash

base="/usr/share/nginx/www"

set -x

rm -rf live live.h264 "$base/live"
mkdir -p live
ln -s "$PWD/live" "$base/live"

# fifos seem to work more reliably than pipes - and the fact that the
# fifo can be named helps ffmpeg guess the format correctly.
mkfifo live.h264
raspivid -w 1280 -h 720 -fps 25 -hf -t 86400000 -b 1800000 --inline -o - > live.h264 &

# Letting the buffer fill a little seems to help ffmpeg to id the stream
sleep 2

# Need ffmpeg around 1.0.5 or later. The stock Debian ffmpeg won't work.
# I'm not aware of options apart from building it from source. I have
# Raspbian packages built from Debian Multimedia sources. Available on
# request but I don't want to post them publicly because I haven't cross
# compiled all of Debian Multimedia and conflicts can occur.
ffmpeg -y \
  -i live.h264 \
  -f s16le -i /dev/zero -r:a 48000 -ac 2 \
  -c:v copy \
  -c:a aac -strict -2  -b:a 128k \
  -map 0:0 -map 1:0 \
  -f segment \
  -segment_wrap 20 \
  -segment_time 8 \
  -segment_format mpegts \
  -segment_list "$base/live.m3u8" \
  -segment_list_size 720 \
  -segment_list_flags live \
  -segment_list_type m3u8 \
  "live/%08d.ts" < /dev/null

# vim:ts=2:sw=2:sts=2:et:ft=sh
Would someone kindly explain what I'm doing wrong?
Please excuse my total lack of knowledge, I'm still a newbie!

Thanks
Attachment: Script run terminal.PNG

jamie
Posts: 1
Joined: Tue Apr 19, 2016 7:06 am

Re: Simple h.264 HLS Server, audio too

Tue Apr 19, 2016 7:17 am

Fyn90 wrote:Can't quite seem to get the stream pulling through to the player in index.html [...] Would someone kindly explain what I'm doing wrong?
Sorry, this is a couple of months late, but I figured I should answer in case anyone else comes across this problem.

The reason you see the .ts files being generated when you curl the .m3u8, but no stream in Safari or on iOS, is that you are serving the .m3u8 as a static file without serving the stream properly: the .m3u8 needs to live in a static directory with the .ts files as sibling files. The playlist tells the client about the availability of each new .ts file, but if the client can't then open that .ts file you aren't really streaming anything.

If you are already serving a static directory, try also setting the MIME type of the .m3u8 to application/x-mpegURL (by changing the Content-Type response header). Finally, if none of that works, make sure the directory your segments are written into has permission 755.

I hope that helps. I got this working with a Node Express server rather than nginx, but it should all work the same.
