Table of Contents

1. Introduction
2. Detailed Description
2.1. The Octopus Node
2.2. OctopusAgent - Structure and Main Features
2.2.1. Octopus Agent Class Diagram
2.2.2. Main Type Definitions Used in OctopusAgent
2.2.3. NS2 Packets
2.2.4. The Octopus Packet Header
2.2.5. Main Tcl Commands Bound to Octopus Agent
2.2.6. OctopusAgent::recv Method
2.2.7. OctopusAgent::forward_pkt Method
2.2.8. OctopusAgent Modules and Components
2.2.8.1. Octopus Core Module
2.2.8.1.1. Core Neighbor Tables Management
2.2.8.1.2. Core One-Hop Update
2.2.8.1.3. Core Strip Update
2.2.8.1.4. Core End Node
2.2.8.1.5. Core Bypassing (Optimization)
2.2.8.1.6. Core Queue (Optimization)
2.2.8.1.7. Core Two-Hop Neighbor Table (Optimization)
2.2.8.1.8. Core Neighbor Tables Validation (Optimization)
2.2.8.1.9. Low Energy (Unstable) Nodes (Experiment)
2.2.8.2. Octopus Find Location Module
2.2.8.2.1. FL Seek Target
2.2.8.2.2. FL Reply (Optimization)
2.2.8.2.3. FL Cache (Optimization)
2.2.8.2.4. FL Queue (Optimization)
2.2.8.2.5. FL Step Queue (Optimization)
2.2.8.2.6. FL Bypass (Optimization)
2.2.8.2.7. FL Estimated Location (Optimization)
2.2.8.2.8. FL Forward to Target (Optimization)
2.2.8.2.9. Multiple Sending Directions (Optimization)
2.2.8.3. The OctopusDB Class
3. Octopus Agent Installation and Usage
3.3. Prerequisites
3.3.1. Development Platform
3.3.2. NS2
3.3.3. Octopus Agent Integration into NS2
3.3.3.1. C++ Source Code Installation
3.3.3.2. OTcl Source Code Installation
3.3.3.3. Building NS2 with Octopus Agent
3.4. Octopus Agent Parameters
3.4.1. Tcl Parameters
3.4.2. C++ Parameters
3.5. Running Octopus Simulations
3.5.1. Octopus Shell Scripts
3.5.2. Running a Single Octopus Simulation
3.5.2.1. single_test.csh CShell Script
3.5.2.2. Responsibility
3.5.2.3. Execution
3.5.2.4. Input Parameters
3.5.2.5. Output
3.5.2.6. file.tcl Tcl Script
3.5.2.6.1. Responsibility
3.5.2.6.2. Execution
3.5.2.6.3. Input Parameters
3.5.2.6.4. Output
3.5.3. Running Multiple Octopus Simulations
3.5.3.1. test_all.csh CShell Script
3.5.3.1.1. Responsibility
3.5.3.1.2. Execution
3.5.3.1.3. Input Parameters
3.5.3.1.4. Output
3.5.4. Editing Octopus Simulations
3.5.5. Analyzing Simulation Results
4. Octopus Agent Terms


1. Introduction

Imagine that an Octopus network is spread over the following area:

Figure 1

The Octopus routing protocol divides the area into horizontal and vertical "strips", as follows:

Figure 2

The strip width is defined by the STRIPE_RESOLUTION C++ parameter and can be easily changed (see the Octopus C++ Parameters chapter for more information regarding this parameter). Several experiments have been performed to evaluate the strip width that provides maximum effectiveness.

The nodes (shown as laptops on the map) are randomly spread over the grid. Nodes located in the same vertical column are said to belong to the same vertical strip; nodes located in the same horizontal row are said to belong to the same horizontal strip.
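The mapping from a node's coordinates to its strips is a simple division by the strip width. The following is a minimal sketch of that idea (the helper names are hypothetical and do not come from the Octopus sources; STRIPE_RESOLUTION stands for the C++ parameter described later):

// Hypothetical helpers illustrating how a node's location maps onto strips,
// assuming STRIPE_RESOLUTION is the strip width in meters.
int verticalStripIndex(double x)   { return (int)(x / STRIPE_RESOLUTION); }
int horizontalStripIndex(double y) { return (int)(y / STRIPE_RESOLUTION); }

// Two nodes belong to the same vertical strip iff their X coordinates
// fall into the same column:
bool sameVerticalStrip(double x1, double x2)
{
    return verticalStripIndex(x1) == verticalStripIndex(x2);
}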

2. Detailed Description

2.1. The Octopus Node

Each wireless node in NS2 is represented by an instance of a MobileNode object. Among other things, the MobileNode object holds information on the node's current geographic location, the routing protocol used to communicate, the node's velocity, etc. The routing protocols themselves are represented in NS2 by so-called routing Agents*, one Agent for each protocol. Each MobileNode object holds its own single instance of the routing Agent. The routing Agent to be used in a given experiment is defined by the input parameters of the simulation and cannot be altered during NS2 execution. For a more detailed description of the NS2 MobileNode structure, see the NS Manual.

* DSR Agent and DSDV Agent (as well as many others) were already implemented in NS2

The Octopus routing Agent is represented by the OctopusAgent class (octopus.h/cc). As stated before, each MobileNode holds one instance of an OctopusAgent object, and the routing itself is done by the Octopus Agents of different nodes communicating with one another.

The following information is retrieved by OctopusAgent from its parent MobileNode in order to perform the routing:

§ Current geographical location (in terms of X and Y)

§ Speed

§ Energy level

§ Direction of movement (in terms of start and destination locations)

Implementation Aspect

Each octopus agent handles, among others, a reference to its parent instance of MobileNode:

MobileNode *node_;

For example, to retrieve the current geographical location, the following statements can be used:

double x = octAgent_->node_->X();
double y = octAgent_->node_->Y();

2.2. OctopusAgent - Structure and Main Features

2.2.1. Octopus Agent Class Diagram

2.2.2. Main Type Definitions Used in OctopusAgent

§ OctPktType – defines the module and component that an Octopus packet belongs to. OCT_DEFAULT_TYPE is used to identify new, uninitialized packets. For a more detailed description of each value, see the relevant module's description.

enum octpkttypes {
    OCT_DEFAULT_TYPE = -1,
    OCT_CORE_HOP_UPDATE = 0,
    OCT_CORE_STRIPE_UPDATE_TO_NORTH,
    OCT_CORE_STRIPE_UPDATE_TO_EAST,
    OCT_CORE_STRIPE_UPDATE_TO_SOUTH,
    OCT_CORE_STRIPE_UPDATE_TO_WEST,
    OCT_FIND_LOCATION,
    OCT_GF,
    OCT_END_STRIPE_UPDATE,
    OCT_FIND_LOCATION_REPLY,
    OCT_BYPASSED,
    OCT_RETURN_BYPASSED
};

typedef octpkttypes OctPktType;

§ OctRouteEntry – represents an Octopus Route Entry, the unit of data Octopus Agents use to store information on other nodes.

typedef struct oct_route_entry {
    int    id_;                    // ID of the node
    double xLoc_;                  // Latest X location
    double yLoc_;                  // Latest Y location
    double xLocPrev_;              // Previous X location
    double yLocPrev_;              // Previous Y location
    double timetag_loc_prev_;      // Timestamp of the previous location
    double velocity_;              // Node's speed
    double lastUpdateTime_;        // Timestamp of the last update of this entry
    SquareDirection sqrDirection_; // The direction in which the node is located
} OctRouteEntry;

2.2.3. NS2 Packets

Communication between nodes in NS2 is simulated by sending and receiving packets of data (packets are represented by the Packet class). Each packet holds several headers, one for each communication layer. Headers are used to enable data transfer between the same communication layers of different nodes. For further details regarding packet structure in NS2, see the NS Manual.
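For example, a specific header inside a received Packet is reached through the standard NS2 accessor pattern, shown below for NS2's common header hdr_cmn (the Octopus header described in the next section is accessed the same way):

// Sketch: reading fields of the common header of a received packet "pkt".
hdr_cmn* ch = hdr_cmn::access(pkt);   // per-header static accessor
int bytes = ch->size();               // e.g., packet size in bytes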

2.2.4. The Octopus Packet Header

The Octopus Header (struct oct_header) was added to NS2 packets in order to enable communication between the Octopus Agents of different nodes. The following list describes the Octopus Header fields that are used by all Octopus modules and components (for a detailed description of module-specific Octopus header fields, see the relevant module's description):

double send_time;       // Simulation time at current packet sending
OctPktType octType;     // The module and component that the packet is intended for
double YoctLocation;    // Current sending node's y location
double XoctLocation;    // Current sending node's x location
int myaddr_;            // Address of the currently transmitting node

2.2.5. Main Tcl Commands Bound to Octopus Agent

NS2 supports controlling routing agents from the Tcl input file. The method responsible for receiving such commands on the C++ end is <agent-class>::command. For further details regarding the handling of Tcl commands in NS2, see the NS Manual.

The OctopusAgent supports the following Tcl commands (a dispatch sketch follows the list):

§ start-octopus – starts the Octopus Agent (triggers proactive data collection).

§ fl_debug seek_loc <target id> – initiates a Find Location query to locate the <target id> node.
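How these commands reach the agent can be sketched via the standard NS2 <agent-class>::command dispatch. This is an illustration only, not the actual Octopus source; the startOctopus() and flSeekLocation() helpers are hypothetical stand-ins for the handlers described in the following sections:

#include <string.h>
#include <stdlib.h>

int OctopusAgent::command(int argc, const char* const* argv)
{
    if (argc == 2 && strcmp(argv[1], "start-octopus") == 0) {
        startOctopus();                 // hypothetical: start proactive data collection
        return TCL_OK;
    }
    if (argc == 4 && strcmp(argv[1], "fl_debug") == 0
                  && strcmp(argv[2], "seek_loc") == 0) {
        int targetId = atoi(argv[3]);   // the <target id> argument
        flSeekLocation(targetId);       // hypothetical: initiate a Find Location query
        return TCL_OK;
    }
    return Agent::command(argc, argv);  // delegate unrecognized commands to the base class
}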

2.2.6. OctopusAgent::recv Method

Each Agent in NS2 has an <agent-name>::recv method. This method is the entry point of all packets into the routing agent.

The OctopusAgent::recv method handles the following tasks (a sketch follows the list):

§ Adds all valid information to the Find Location Cache (for more information, see the Find Location Module detailed description).

§ Checks whether the packet was intended to be received by this node (if not, the packet is ignored).

§ Checks the Octopus Type of the packet and invokes the relevant module and component (in the case of OCT_DEFAULT_TYPE, the packet is assumed to have been received from the application layer of this node, so the invoked component is Find Location Seek Target).
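A minimal sketch of this dispatch is shown below. It is an illustration only; the cache, address check, and handler names are assumptions (the OCT_CORE_HOP_UPDATE case from the actual source appears in the Core One-Hop Update section):

void OctopusAgent::recv(Packet* pkt, Handler*)
{
    hdr_octopus* hdr = hdr_octopus::access(pkt);  // assumed accessor, standard NS2 pattern

    flCache_->add(hdr);               // hypothetical: harvest valid info into the FL cache

    if (!intendedForMe(hdr)) {        // hypothetical address check
        Packet::free(pkt);            // ignore packets addressed elsewhere
        return;
    }

    switch (hdr->octType) {
    case OCT_DEFAULT_TYPE:            // new packet from this node's application layer
        flSeekTarget(pkt);            // hypothetical FL Seek Target entry point
        break;
    // ... one case per OctPktType value, dispatching to the
    // relevant module and component ...
    default:
        break;
    }
}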

2.2.7. OctopusAgent::forward_pkt Method

The method handles sending an Octopus packet (including updating the general fields of the Octopus header with the relevant data).
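A minimal sketch of what this amounts to, with assumed names (updateHeaderSenderDetailes is the real helper shown in the Core One-Hop Update section; the send itself is the standard NS2 hand-off to the agent's downstream target):

void OctopusAgent::forward_pkt(Packet* pkt)
{
    hdr_octopus* hdr = hdr_octopus::access(pkt);  // assumed accessor

    // Stamp the general Octopus header fields (sender id, location, timestamps).
    updateHeaderSenderDetailes(hdr);

    // Hand the packet down the stack; target_ is the Agent's downstream NsObject.
    Scheduler::instance().schedule(target_, pkt, 0.0);
}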

2.2.8. OctopusAgent Modules and Components

2.2.8.1. Octopus Core Module

§ The Octopus Core Module represents the proactive part of the Octopus protocol.

§ The module is responsible for managing the neighbor tables of the Octopus Agent.

The following is a list of terms used in the Octopus Core Module:

§ End Node – (used by Core Strip Update) a node that has no neighbors in one (or more) of the four geographical directions within the current strip. End Nodes are the ones that initiate Strip Updates.

§ Grid – the area covered by the Octopus network and divided into Octopus Strips.

§ Node's Database – all of a node's Neighbor Tables (One-Hop and Strips).

§ Radio Range – currently defined to be 250 m; the range within which a wireless transmission of one node may be received by another node.

2.2.8.1.1. Core Neighbor Tables Management

Each node manages five neighbor tables:

§ One-Hop Neighbors Table – consists of nodes located within the Radio Range (see the Octopus Terms chapter). For example, in Figure 2 there are three such areas:

° Node E's one-hop area contains the following nodes: T and I.

° Node M's one-hop area contains no nodes.

° Node V's one-hop area contains node U.

In this case, the one-hop neighbor table of node E will contain nodes T and I, the one-hop table of M will be empty, and the one-hop table of V will contain node U.

§ West Strip Neighbors Table – consists of nodes located to the left of the current node's vertical strip.

§ East Strip Neighbors Table – consists of nodes located to the right of the current node's vertical strip.

§ North Strip Neighbors Table – consists of nodes located above the current node's horizontal strip.

§ South Strip Neighbors Table – consists of nodes located below the current node's horizontal strip.

For example, in Figure 2, the strip tables of node V will be as follows:

West Strip Table: C, T, I, E

East Strip Table: M

South Strip Table: R, K

North Strip Table: J

Note: Nodes that appear in the one-hop neighbors table will not appear in the strip tables. Thus, node U is not contained in node V's strip tables.

Implementation Aspect

The tables maintained by the Core module are instances of the OctopusDB class (see OctopusDB Class chapter).

Each node initializes the following data structures before it becomes active:

OctopusDB * hopTable = new OctopusDB(HOP_TABLE);
OctopusDB * northStripeTable = new OctopusDB(NORTH_STRIPE_TABLE);
OctopusDB * southStripeTable = new OctopusDB(SOUTH_STRIPE_TABLE);
OctopusDB * westStripeTable = new OctopusDB(WEST_STRIPE_TABLE);
OctopusDB * eastStripeTable = new OctopusDB(EAST_STRIPE_TABLE);

2.2.8.1.2. Core One-Hop Update

Every OCT_BROADCAST_INTERVAL seconds (see the Tcl Parameters chapter), each node broadcasts its ID and location. Each node within its Radio Range (see the Octopus Terms chapter) receives this message and updates its one-hop neighbors table accordingly. Experiments have been performed in order to evaluate the optimal OCT_BROADCAST_INTERVAL: on one hand, a short interval will create congestion and packets will be lost; on the other hand, a longer interval will decrease the reliability of the data in the neighbor tables.

Implementation Aspect of the Sending Side

When created, the Octopus Core module schedules a location broadcast event OCT_BROADCAST_INTERVAL seconds into the future (see the NS Manual for information regarding event scheduling in NS2). Scheduled Octopus events are handled by the OctopusPeriodicHandler::handle method, which initiates the location broadcast and schedules the next broadcast event:

void OctopusPeriodicHandler::handle(Event *e)
{
    octAgent_->broadcastMyLocation();
    Scheduler::instance().schedule(this, e,
        octAgent_->OCT_BROADCAST_INTERVAL + jitter(0.3, 1));
}

Note: The jitter(0.3,1) function produces a random float that is added to the time of the next broadcast event. This helps avoid all nodes loading the network at the same instant every OCT_BROADCAST_INTERVAL (see the Tcl Parameters chapter).
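The exact semantics of jitter's two arguments are not documented here. One plausible implementation, shown purely as an illustration (an assumption, not the Octopus source), draws a uniform offset with NS2's RNG:

#include "random.h"

// Hypothetical jitter helper: returns a uniform random offset in [min, max),
// used to de-synchronize the periodic broadcasts of different nodes.
double jitter(double min, double max)
{
    return min + Random::uniform(max - min);  // Random::uniform(x) returns a value in [0, x)
}

Under this reading, jitter(0.3, 1) delays each broadcast by an extra 0.3 to 1.0 seconds beyond OCT_BROADCAST_INTERVAL.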

The broadcastMyLocation() method creates a new packet and initializes its Octopus header in the following way:

void OctopusAgent::updateHeaderSenderDetailes(hdr_octopus* hdr)
{
    hdr->send_time = CURRENT_TIME;
    hdr->myaddr_ = myaddr_;                    /* sender id */
    hdr->XoctLocation = node_->X();            /* sender X location */
    hdr->YoctLocation = node_->Y();            /* sender Y location */
    hdr->velocity_ = node_->speed();           /* sender velocity */
    hdr->XoctLocationPrev = myPrevX_;          /* sender X location in the previous update */
    hdr->YoctLocationPrev = myPrevY_;          /* sender Y location in the previous update */
    hdr->send_timePrev = my_prev_loc_timetag_; /* timestamp of the previous update */
    hdr->send_timeq = CURRENT_TIME;            /* timestamp of the current update */
    return;
}

Implementation Aspect of the Receiving Side

In the case of a "Hello" packet, the packet type will be set to OCT_CORE_HOP_UPDATE. The Octopus agent will perform the following tasks:

§ Check whether "Hello" packets from the sending node have already been received in the past.

§ If the sending node's ID already appears in the table, all relevant information, including the sending node's location and the last update time*, will be updated in the appropriate entry of the table.

§ Otherwise, a new entry will be created in the table. The entry will hold all the relevant information.

The appropriate case in the recv(Packet* pkt) method looks as follows:

case OCT_CORE_HOP_UPDATE:
    entry = hopTable->findEntry(hdr->myaddr_);
    octopusCore->handleCoreHopUpdate(hdr, entry.xLoc_, entry.yLoc_,
                                     entry.lastUpdateTime_);
    // additional code here handles the GF/CORE/FL queues
    break;

Here hdr points to the Octopus header of the received packet. If the entry is not found in the table, the findEntry method returns the default entry, so handleCoreHopUpdate will add a new entry to the table.

*The Last Update Time is saved for each entry in the table in order to support the validation mechanism. It may occur that after node B has successfully received a "Hello" message from node A, node A moves away from node B and the two nodes are no longer within each other's Radio Range (see the Octopus Terms chapter). In such a case, B's table entry for A should be invalidated after a certain period of time (otherwise B will forever "think" that A is its one-hop neighbor). The timeout for invalidating such entries in the one-hop table is defined by HOP_NEIGHBOR_OUTDATE_INTERVAL (see the Tcl Parameters chapter).
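The check implied by this timeout can be sketched as follows (an illustration with an assumed helper name, using the OctRouteEntry fields defined earlier):

// An entry whose last update is older than HOP_NEIGHBOR_OUTDATE_INTERVAL
// is considered stale and should no longer count as a one-hop neighbor.
bool isHopEntryFresh(const OctRouteEntry& e)
{
    return (CURRENT_TIME - e.lastUpdateTime_) < HOP_NEIGHBOR_OUTDATE_INTERVAL;
}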

2.2.8.1.3. Core Strip Update

Every STRIPE_BROADCAST_INTERVAL seconds (see the Tcl Parameters chapter), strip updates are initiated, but only End Nodes (see the Octopus Terms chapter) are able to initiate them. Therefore, every STRIPE_BROADCAST_INTERVAL seconds, each node checks whether it is currently an End Node in any of the four geographical directions; if it is, it initiates a strip update.

Experiments have been performed to evaluate the optimal STRIPE_BROADCAST_INTERVAL: on one hand, a short interval will create congestion and packets will be lost; on the other hand, a longer interval will decrease the reliability of the neighbor tables' data.

Implementation Aspect of the Sending Side

If the node happens to be an End Node (see the Octopus Terms chapter), it initiates a strip update. In that case, a new neighbor table is created in the Octopus header; it holds information on the nodes that belong to the strip for which the update is initiated. A packet with this header is sent across the relevant strip, from the End Node to the opposite end of the strip, so that all nodes in between receive the list of their strip neighbors.

Consider the example of node R in Figure 2. Node R is an End Node of its vertical strip from the south, so it initiates an update in the north direction. R fills the Octopus header neighbor table with all of its one-hop neighbors that belong to its vertical strip.

As opposed to the "Hello" messages, which are broadcast across the Grid (see the Octopus Terms chapter), strip update packets are unicast, i.e., they have a specific destination.
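A minimal sketch of the initiation step just described, for the northbound case. All helper names here are assumptions used for illustration; only OCT_CORE_STRIPE_UPDATE_TO_NORTH and the general flow come from the text above:

void OctopusCore::initiateStripUpdateToNorth()
{
    Packet* pkt = octAgent_->allocpkt();          // standard NS2 Agent packet allocation
    hdr_octopus* hdr = hdr_octopus::access(pkt);  // assumed accessor

    hdr->octType = OCT_CORE_STRIPE_UPDATE_TO_NORTH;

    // Copy the one-hop neighbors that share this node's vertical strip
    // into the neighbor table carried by the Octopus header (assumed helper).
    copyStripNeighborsToHeader(hdr, octAgent_->hopTable);

    // Unicast the packet to the nearest one-hop neighbor to the north;
    // it is forwarded node by node until the opposite end of the strip.
    int nextHop = nearestNeighborToNorth();       // assumed helper
    octAgent_->sendUnicast(pkt, nextHop);         // assumed unicast send helper
}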