How to evaluate H.263/H.264/MPEG4 video transmission using the NS2 simulator?

 

        Before using my work, please read Jirka’s work and my previous article. All the codecs can be found here. The code needed for NS2 and an example can be found here.

 

[The steps to install my module into NS2]

1.      Add a frametype_ field and a sendtime_ field to the hdr_cmn header. The frametype_ field indicates which frame type a packet belongs to: I frames are marked 1, P frames 2, and B frames 3. The sendtime_ field records the time a packet is sent and can be used to measure end-to-end delay.

 

Modify the file packet.h in the common folder

 

struct hdr_cmn {

  enum dir_t { DOWN= -1, NONE= 0, UP= 1 };

  packet_t ptype_;      // packet type (see above)

  int     size_;                // simulated packet size

  int     uid_;         // unique id

  int     error_;              // error flag

  int     errbitcnt_;     // # of corrupted bits jahn

  int     fecsize_;

  double      ts_;           // timestamp: for q-delay measurement

  int     iface_;              // receiving interface (label)

  dir_t direction_;        // direction: 0=none, 1=up, -1=down

  // source routing

        char src_rt_valid;

  double ts_arr_; // Required by Marker of JOBS

 //add the following three lines

  int frametype_;               // frame type for MPEG video transmission (Henry)

  double sendtime_;  // send time (Henry)

  unsigned long int frame_pkt_id_;

 

2.      Modify the file agent.h in the common folder

class Agent : public Connector {

 public:

  Agent(packet_t pktType);

  virtual ~Agent();

  void recv(Packet*, Handler*);

 

......

inline packet_t get_pkttype() { return type_; }

// add the following two lines
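// (these setters let the video traffic generator tag outgoing packets with a frame type and priority)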

  inline void set_frametype(int type) { frametype_ = type; } // (Henry)

  inline void set_prio(int prio) { prio_ = prio; }  // (Henry)

 

 protected:

  int command(int argc, const char*const* argv);

 

......

  int defttl_;                 // default ttl for outgoing pkts

 // add the following line

  int frametype_;                       // frame type for MPEG video transmission

 

......

 private:

  void flushAVar(TracedVar *v);

};

 

3.      Modify the file agent.cc in the common folder

Agent::Agent(packet_t pkttype) :

  size_(0), type_(pkttype), frametype_(0),

  channel_(0), traceName_(NULL),

  oldValueList_(NULL), app_(0), et_(0)

{

}

 

......

void Agent::initpkt(Packet* p) const

{

  hdr_cmn* ch = hdr_cmn::access(p);

  ch->uid() = uidcnt_++;

  ch->ptype() = type_;

  ch->size() = size_;

  ch->timestamp() = Scheduler::instance().clock();

  ch->iface() = UNKN_IFACE.value(); // from packet.h (agent is local)

  ch->direction() = hdr_cmn::NONE;

 

  ch->error() = 0;        /* pkt not corrupt to start with */

 // add the following line
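 // (copies the frame type set by the application into the common header of each packet the agent creates)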

  ch->frametype_ = frametype_;

......

4.      Copy the myevalvid folder (containing myevalvid.cc, myudp.cc, myudp.h, myevalvid_sink.cc, and myevalvid_sink.h) into the NS2 source directory.

For example, “ns-allinone-2.28/ns-2.28/myevalvid”

 

5.      Modify the ns-allinone-2.28/ns-2.28/tcl/lib/ns-default.tcl

Add the following two lines

Agent/myUDP set packetSize_ 1000

Tracefile set debug_ 0

 

6.      Modify the ns-allinone-2.28/ns-2.28/Makefile.in

Add myevalvid/myudp.o, myevalvid/myevalvid_sink.o, and myevalvid/myevalvid.o to the OBJ_CC list

 

7.      Recompile NS2

./configure ; make clean ; make

 

[Example]

This example runs in a Cygwin environment. If you want to run it in a different environment or with another codec, please download the needed software.

(The steps are almost the same.)

1.      Encode a YUV sequence into the MPEG-4 format. This creates a compressed raw video at 30 frames per second, with a GOP length of 30 frames and no B-frames. The bitrate control of XviD does not work, so it is omitted here.

 
$./xvid_encraw -i akiyo_cif.yuv -w 352 -h 288 -framerate 30 -max_key_interval 30 -o a01.m4v

 

2.      The following command creates an ISO MP4 file containing the video samples (frames) and a hint track that describes how to packetize the frames for transport over RTP.

 

       $./MP4Box -hint -mtu 1024 -fps 30 -add a01.m4v a01.mp4

 

3.      The mp4trace tool from EvalVid sends a hinted MP4 file via RTP/UDP to a specified destination host. The output of mp4trace will be needed later, so it should be redirected to a file.

$./mp4trace -f -s 192.168.0.2 12346 a01.mp4 > st_a01

4.      First, I use a simple topology to test MPEG-4 video delivery over a best-effort network. The simulation topology is shown below. S1 uses the data in the file st_a01 to transmit packets to D1. The complete script is be_a01.tcl.

S1  ----  R1 --- R2  ---- D1

     $ns2 be_a01.tcl

 

set ns [new Simulator]

 

set nd [open out.tr w]

$ns trace-all $nd

 

set max_fragmented_size   1024

 

# add UDP header (8 bytes) and IP header (20 bytes)

set packetSize  1052

 

set s1 [$ns node]

set r1 [$ns node]

set r2 [$ns node]

set d1 [$ns node]

 

$ns duplex-link  $s1 $r1  10Mb   1ms DropTail

$ns simplex-link $r1 $r2  640kb  1ms DropTail

$ns simplex-link $r2 $r1  640Mb  1ms DropTail

$ns duplex-link  $r2 $d1  10Mb   1ms DropTail
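
# the 640kb link from r1 to r2 is the bottleneck; its queue is limited to 50 packets below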

 

set qr1r2 [[$ns link $r1 $r2] queue]

$qr1r2 set limit_ 50
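
# the myUDP sender on s1 logs each packet's send time to sd_a01, and the
# myEvalvid_Sink receiver on d1 logs each packet's arrival time to rd_a01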

 

set udp1 [new Agent/myUDP]

$ns attach-agent $s1 $udp1

$udp1 set packetSize_ $packetSize

$udp1 set_filename sd_a01

set null1 [new Agent/myEvalvid_Sink]

$ns attach-agent $d1 $null1

$ns connect $udp1 $null1

$null1 set_filename rd_a01

 

set original_file_name st_a01

set trace_file_name video1.dat

set original_file_id [open $original_file_name r]

set trace_file_id [open $trace_file_name w]

 

set pre_time 0

 

while {[eof $original_file_id] == 0} {
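    # each line of st_a01 (mp4trace output) is assumed to have the form:
    # <frame id> <frame type H/I/P/B> <frame size in bytes> <fragments> <timestamp in seconds>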

    gets $original_file_id current_line

    

    scan $current_line "%d%s%d%d%f" no_ frametype_ length_ tmp1_ tmp2_

    set time [expr int(($tmp2_ - $pre_time)*1000000.0)]

         

    if { $frametype_ == "I" } {

        set type_v 1

        set prio_p 0

    } 

 

    if { $frametype_ == "P" } {

        set type_v 2

        set prio_p 0

    } 

 

    if { $frametype_ == "B" } {

        set type_v 3

        set prio_p 0

    } 

   

    if { $frametype_ == "H" } {

        set type_v 1

        set prio_p 0

    }
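
    # write one tracefile record for myEvalvid: inter-frame gap (microseconds),
    # frame size (bytes), frame type, priority, and maximum fragment size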

 

    puts  $trace_file_id "$time $length_ $type_v $prio_p $max_fragmented_size"

    set pre_time $tmp2_

}

 

close $original_file_id

close $trace_file_id
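
# the timestamp of the last frame becomes the time at which the video source stops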

set end_sim_time $tmp2_

puts "$end_sim_time"

 

set trace_file [new Tracefile]

$trace_file filename $trace_file_name

set video1 [new Application/Traffic/myEvalvid]

$video1 attach-agent $udp1

$video1 attach-tracefile $trace_file

 

proc finish {} {

        global ns nd

        $ns flush-trace

        close $nd

        exit 0

}

 

$ns at 0.0 "$video1 start"

$ns at $end_sim_time "$video1 stop"

$ns at [expr $end_sim_time + 1.0] "$null1 closefile"

$ns at [expr $end_sim_time + 1.0] "finish"

 

$ns run

 

5.      After the simulation, NS2 creates two files, sd_a01 and rd_a01. The file sd_a01 records the sending time of each packet, while rd_a01 records the receiving time of each packet.
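
If you want a quick look at the per-packet end-to-end delay and loss before running etmp4 (step 6), the two files can be matched by packet id. The script below is only a minimal sketch: it assumes a simplified line layout of "<time> <id> <size>" in both sd_a01 and rd_a01, so check the format actually written by myudp.cc and myevalvid_sink.cc and adjust the scan pattern if it differs.

# compute_delay.tcl - minimal sketch; assumes each line is "<time> <id> <size>"
set sd [open sd_a01 r]
set rd [open rd_a01 r]

# remember the send time of every packet id
while {[gets $sd line] >= 0} {
    if {[scan $line "%f %d %d" t id size] == 3} {
        set sendtime($id) $t
    }
}
close $sd

# for every received packet, print its id and end-to-end delay
set received 0
while {[gets $rd line] >= 0} {
    if {[scan $line "%f %d %d" t id size] == 3 && [info exists sendtime($id)]} {
        puts "$id [expr {$t - $sendtime($id)}]"
        incr received
    }
}
close $rd

puts "received $received of [array size sendtime] packets"

Save it as, say, compute_delay.tcl and run it with tclsh compute_delay.tcl in the directory that contains the two trace files.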

 

6.      The next step is the reconstruction of the transmitted video as it is seen by the receiver. For this, the video and trace files are processed by etmp4 (Evaluate Traces of MP4-file transmission):

 
$./etmp4 sd_a01 rd_a01 st_a01 a01.mp4 a01e 
This generates a (possibly corrupted) video file, where all frames that got lost or were corrupted are deleted from the original video track.
 

7.      Decode the received video to YUV format. (Please use ffmpeg to decode the compressed file; it decodes damaged files without errors in most cases, whereas other decoders usually fail.)

 

$./ffmpeg -i a01e.mp4 a01e.yuv  

 

8.      Compute the PSNR.

$./psnr.exe 352 288 420 akiyo_cif.yuv a01e.yuv
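
The three numbers are the frame width (352), the frame height (288), and the YUV sampling format (4:2:0), followed by the original and the received sequences. The PSNR of each frame is normally computed on the luminance (Y) component as

PSNR = 20 * log10( 255 / sqrt(MSE) )

where MSE is the mean squared error between the corresponding Y frames of akiyo_cif.yuv and a01e.yuv; higher values mean the received frame is closer to the original.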

 

Last modified: 2005/12/22

 

Author: Chih-Heng Ke

Website: http://140.116.72.80/~smallko

Email: smallko@ee.ncku.edu.tw

PhD candidate, EE Department, NCKU, Taiwan