This example demonstrates how to use the tool set. The scripts and config files provided should be adapted to your own requirements.
If you want to reproduce the steps explained here, you will need some YUV video source files, some video codecs, configuration files and a set of shell scripts. All of this can be found here and on the EvalVid homepage.
All scripts and config files mentioned below are provided in an archive.
I used the Akiyo, Hall and Mobile sequences in CIF resolution.
In this example I used two H.263, two MPEG-4 and two H.264 codecs, namely:
Here is a shell script that encodes all videos with all codecs. Parameters such as frame rate and bit rate can be configured in this script.
For the JM codec extra config files are needed: akiyo_cif.cfg, hall_cif.cfg and mobile_cif.cfg.
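The encoding loop can be sketched roughly as follows. Everything concrete in this sketch is an assumption: ffmpeg stands in for whichever codec binaries you actually use, and the sequence names, frame rate and bit-rates are example values. The leading `echo` prints the commands instead of running them; remove it to encode for real.

```shell
#!/bin/sh
# Sketch of an encoding loop over all sequences and target bit-rates.
# The ffmpeg invocation is only a placeholder for your real codecs.
SEQUENCES="akiyo hall mobile"
BITRATES="128 256 512"          # kbit/s, example values
count=0
for seq in $SEQUENCES; do
  for br in $BITRATES; do
    echo ffmpeg -s cif -r 30 -i ${seq}_cif.yuv \
         -vcodec mpeg4 -b:v ${br}k ${seq}_${br}.m4v
    count=$((count+1))
  done
done
```

One encoder call per sequence/bit-rate pair; extending the loop to several codecs just adds a third level.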
This script encapsulates the encoded videos in MP4 containers and hints these files for RTP transmission.
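The encapsulation and hinting step can be sketched with GPAC's MP4Box as follows; the file names, frame rate and MTU are example values, and the leading `echo` only prints the commands.

```shell
#!/bin/sh
# Pack each encoded bitstream into an MP4 container and add an RTP
# hint track. File names are examples; drop the "echo" to run for real.
hinted=0
for f in akiyo_cif.264 hall_cif.264 mobile_cif.264; do
  base=${f%.264}
  echo MP4Box -add "$f" -fps 30 "${base}.mp4"   # encapsulate
  echo MP4Box -hint -mtu 1024 "${base}.mp4"     # hint for RTP
  hinted=$((hinted+1))
done
```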
The picture shows the bit-rate of the encoded videos. As you can see, not all codecs stick to their bit-rate presets. This is due to the high demands of the Mobile video clip. It cannot be encoded at a low bit-rate and a high frame-rate in acceptable quality.
In order to send the files over a network, or a simulation thereof, we need trace files containing packet sizes, send times, frame types and so on. They are generated by sending the MP4 files.
You can use this script.
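Sender traces are produced with EvalVid's mp4trace tool, roughly like this; the destination address, port and file names are examples, and the leading `echo` just prints the commands instead of streaming anything.

```shell
#!/bin/sh
# Generate a sender trace (st_*) for every hinted MP4 file. The
# multicast address and port are placeholders; remove "echo" to send.
sent=0
for f in akiyo_cif.mp4 hall_cif.mp4 mobile_cif.mp4; do
  echo "mp4trace -f -s 224.1.2.3 12346 $f > st_${f%.mp4}"
  sent=$((sent+1))
done
```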
You can feed the generated sender trace files into a network simulator, e.g., ns-2. The simulation should produce a receiver trace file containing packet IDs, sizes and arrival times. Lost packets are either marked as lost or simply omitted. Have a look at the eg tool from EvalVid, which demonstrates how to read the sender traces and how to generate the corresponding receiver traces.
Script to generate receiver trace files.
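As an illustration, the sketch below turns a toy simulator log into a receiver trace of packet ID, size and arrival time. The input format is an assumption invented for this example (real ns-2 traces have more columns); adapt the awk pattern to whatever your simulator writes. Lost packets simply do not appear in the output.

```shell
#!/bin/sh
# Toy simulator log: "<event> <time> <packet-id> <size>", where event
# is "r" (received) or "d" (dropped). This layout is made up here.
cat > sim.log <<'EOF'
r 0.010 1 1024
r 0.025 2 1024
d 0.031 3 1024
r 0.048 4 512
EOF
# Keep only received packets as "id size time"; lost packets are
# omitted from the receiver trace.
awk '$1 == "r" { print $3, $4, $2 }' sim.log > rd_example
cat rd_example
```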
First we calculate the reference PSNR for all encoded videos, that is, the video quality before the transmission.
Script to calculate reference PSNR.
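For reference, the quantity being computed per frame is the usual log-scaled mean squared error for 8-bit samples. A minimal awk sketch of the formula itself (this is the standard PSNR definition, not the EvalVid psnr tool):

```shell
#!/bin/sh
# PSNR in dB for 8-bit samples: 10 * log10(255^2 / MSE).
psnr_db() {
  awk -v mse="$1" 'BEGIN { printf "%.2f\n", 10 * log(255*255 / mse) / log(10) }'
}
psnr_db 1        # a nearly perfect reconstruction
psnr_db 100      # a visibly degraded one
```

The lower the MSE between reference and decoded frame, the higher the PSNR; identical frames would give an infinite value, which tools typically clamp.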
Next, the transmissions are evaluated with respect to loss rate and end-to-end delay, and the video files at the receiver are reconstructed from the original video files and the information about lost or late packets/frames. In this example, I use different methods to generate the damaged videos.
The evaluation script.
The decoding script.
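The reconstruction is done with EvalVid's etmp4 tool; a sketch of driving it for every transmission follows. Treat the flag combination and the sd_/rd_/st_ file naming as assumptions and check etmp4's own usage output for the repair strategies it offers. Again, the leading `echo` only prints the commands.

```shell
#!/bin/sh
# Reconstruct the received video for each transmission from sender
# dump (sd_*), receiver dump (rd_*), sender trace (st_*) and the
# original MP4. The flags are an example choice.
runs=0
for v in akiyo_cif hall_cif mobile_cif; do
  echo "etmp4 -f -0 sd_$v rd_$v st_$v $v.mp4 ${v}_recv"
  runs=$((runs+1))
done
```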
The last three scripts accumulate the calculated loss, delay and PSNR data and create files that can be used to produce pictures.
loss.sh creates loss_fx.txt, loss_f0.txt and loss_p0.txt. fx and f0 should have the same content. The files contain one column with the name of the transmission, three columns with the percentage of lost I-, P-, B-frames/packets and one column with the percentage of overall lost frames/packets.
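As an illustration of such an accumulation, the awk sketch below computes per-frame-type and overall loss percentages from a toy table; the two-column layout is invented for this example and is not the actual loss.sh input format.

```shell
#!/bin/sh
# Toy per-frame table: "<frame-type> <lost>", lost = 1 if the frame
# was lost. This layout is an assumption for the example.
cat > frames.txt <<'EOF'
I 0
P 1
B 0
P 0
B 1
I 0
B 0
P 0
EOF
awk '{ n[$1]++; total++; if ($2 == 1) { lost[$1]++; losttotal++ } }
     END {
       for (t in n)
         printf "%s %.1f%%\n", t, 100 * lost[t] / n[t]
       printf "all %.1f%%\n", 100 * losttotal / total
     }' frames.txt > loss_example.txt
cat loss_example.txt
```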
delay.sh creates delay_fx.txt, delay_f0.txt and delay_p0.txt. Again, fx and f0 should be the same. The script extracts the end-to-end delay, but it can be modified to extract the inter-frame jitter at the sender or receiver, or the cumulative jitter. It calculates the PDF (probability density function) and CDF (cumulative distribution function) of the delay. Lost packets/frames get a delay of 0; thus, the starting point of the CDF curves equals the percentage of lost frames/packets.
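The CDF computation can be sketched as follows; the input format (one delay per line, "-" marking a lost packet) is an assumption for this example, and losses are counted as delay 0 as described.

```shell
#!/bin/sh
# Toy delay list: one end-to-end delay (seconds) per packet, "-" for
# a lost packet. This format is invented for the example.
cat > delays.txt <<'EOF'
0.020
0.035
-
0.035
0.050
EOF
# Replace losses by 0, sort numerically, and emit "delay cdf" pairs.
sed 's/^-$/0/' delays.txt | sort -n |
awk '{ d[NR] = $1 }
     END { for (i = 1; i <= NR; i++) printf "%s %.2f\n", d[i], i / NR }' \
  > cdf_example.txt
cat cdf_example.txt
```

With one lost packet out of five, the CDF starts at 0.20, matching the statement above that the start of the CDF curve is the loss percentage.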
mos.sh calculates the Mean Opinion Score (MOS) of all video transmissions. In mos.txt the first column is the name of the transmission, the next five columns contain the percentage of frames with a MOS of 1, 2, 3, 4 and 5. The last column is the average MOS of the video transmission. For comparison, the average MOS before transmission is also shown for each video.
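The per-frame MOS grades rest on a mapping from frame PSNR to a 1-5 score; a sketch of the heuristic table commonly cited with EvalVid follows (treat the exact thresholds as an assumption and check the version your mos.sh uses).

```shell
#!/bin/sh
# Heuristic PSNR (dB) to MOS mapping; thresholds follow the table
# commonly cited with EvalVid: >37:5, 31-37:4, 25-31:3, 20-25:2, <20:1.
psnr_to_mos() {
  awk -v p="$1" 'BEGIN {
    if      (p > 37) m = 5
    else if (p > 31) m = 4
    else if (p > 25) m = 3
    else if (p > 20) m = 2
    else             m = 1
    print m }'
}
psnr_to_mos 39.1
psnr_to_mos 28.7
```

Averaging these per-frame grades over a transmission gives the last column of mos.txt.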