Micro Aerial Vehicle Optical Navigation Software Subsystem
System Design Specification
Jacob Schreiver, Justin Clark, Adrian Fletcher, and Nathan Armentrout
Sponsored by: Dr. Adrian Lauf
11/20/2011
Revision 1
This is the System Design Specification for the Micro Aerial Vehicle Optical Navigation Project to be developed for Dr. Adrian Lauf.
Table of Contents
1.0 System Description
1.1 System Interfaces
1.1.1 Camera
1.1.2 Graphical User Interface
1.2 Major Components
1.2.1 Object Discovery
1.2.2 Object Tracking
1.2.3 Object Recognition
1.2.4 Egomotion Estimation
1.2.5 Three-Dimensional (3D) Reconstruction
1.2.6 Path Planning
1.2.7 Graphical User Interface
2.0 System Setup
2.1 Personal Computer
2.2 Camera Calibration
2.3 Camera Rig
3.0 Principles of Operation
3.1 Software
3.1.1 Object Discovery
3.1.2 Object Tracking
3.1.2.1 Long Term Tracking
3.1.2.2 Short Term Tracking
3.1.3 Object Recognition
3.1.4 Egomotion Estimation
3.1.5 Three-Dimensional Reconstruction
3.1.6 Path Planning
3.1.7 Graphical User Interface
3.1.8 Camera Calibration
3.2 Hardware
3.2.1 Camera Rig
3.2.2 Personal Computer
4.0 Test Procedures
5.0 Requirements Traceability
6.0 List of References
1.0 System Description
Figure 1: High-Level Algorithm Diagram
The system consists of the software modules illustrated as boxes in Figure 1. The optical correction module is designed to produce a stable, clear image and is furnished by the sponsor. The object discovery, object tracking & recognition, and egomotion estimation modules execute in parallel to distill the corrected video feed into meaningful information. The three-dimensional (3D) reconstruction module uses the provided information to create and revise a virtual model of the real-world environment depicted by the corrected video feed. The virtual model produced is used by the path planning module to calculate a path from the current location to a user-selected destination. The graphical user interface (GUI) module displays the raw video feed, a visualization of the 3D model, and suggested navigation output to the user.
The software system is designed to be highly modular, allowing future teams to iteratively improve the system over time.
1.1 System Interfaces
This section recaps the external interfaces to the system that were documented in the System Requirements Specification (SyRS).
1.1.1 Camera
The camera will observe the closed, static environment and serve as the sole input to the optical navigation software subsystem.
1.1.2 Graphical User Interface
The graphical user interface will display the current state of the optical navigation software subsystem and allow the user to select a travel destination. The user will be able to see the camera video feed, a representation of the 3D map, and the suggested navigation output produced by the software.
1.2 Major Components
This section describes the interfaces, both internal and external, and the allocated functional requirements for each major component.
1.2.1 Object Discovery
The object discovery module must locate previously unseen objects in the corrected video feed so that the system can learn about them. Each newly found object is passed along as a bounding box for tracking.
Functional requirement: find new objects for the system to learn about
Input: optically corrected video feed
Output: bounding box encapsulating a newly found object
1.2.2 Object Tracking
The object tracking module must follow objects over both the short and long term to learn about their spacing and depth. Its results are supplied to the 3D reconstruction module.
Functional requirement: track objects in the short and long term to learn about their spacing and depth
Input: bounding box with an item to track
Output: updated location and depth information for each tracked object
1.2.3 Object Recognition
The object recognition module must discern whether an on-screen object has been studied before so that additional knowledge can be attributed to it.
Functional requirement: discern if an object has been studied before so more knowledge can be attributed to it
Input: optically corrected video feed
Output: bounding box encapsulating a known, on-screen object
1.2.4 Egomotion Estimation
The egomotion estimation module must accurately describe the translational and rotational movement of the camera. This information will be communicated to the 3D reconstruction module to locate the camera in the virtual environment and to provide a sense of depth perception. The egomotion estimation module will receive a video feed from the optical correction module.
Functional requirement: determine the camera motion to map current location and help rectify images for 3D reconstruction
Input: optically corrected video feed
Output: translation and rotation matrices describing movement
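For reference, the rigid motion of the camera between two frames is conventionally expressed as x' = R·x + t, where R is a 3×3 rotation matrix, t is a 3×1 translation vector, and x and x' are the coordinates of a world point relative to the camera in the previous and current frames, respectively.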
1.2.5 Three-Dimensional (3D) Reconstruction
The 3D reconstruction module must create a virtual 3D model of the current environment. The model must describe the size and location of detected objects, the current location of the camera, and the target destination if selected. The 3D reconstruction module will communicate with the object discovery, object tracking & recognition, and egomotion estimation modules to obtain information describing the environment. The 3D reconstruction module will also communicate with the path planning and graphical user interface modules to provide the most updated 3D environment model.
Functional requirement: construct a virtual 3D model to use as a map for navigation
Input: results from object tracking & egomotion estimation
Output: dynamic, virtual 3D model of the environment
1.2.6 Path Planning
The path planning module must calculate a path from the current location to the user-selected target destination. The criteria for the path are dependent on the algorithm encapsulated by the path planning module. The current location, target destination, and, inherently, the path will be based on the 3D environment model provided by the 3D reconstruction module. The path planning module will communicate to the graphical user interface the current heading the camera should follow.
Functional requirement: calculate a path from current location to user-defined target
Input: 3D model including current location, target destination
Output: a path from current location to target destination
1.2.7 Graphical User Interface
The graphical user interface must communicate system data to the user and accept target destination selection input from the user. The system data output will consist of the video feed, a 3D environment model visualization, and suggested navigation output. The video will be obtained from the camera directly. The 3D environment model visualization will be created by the GUI based on the 3D map provided by the 3D reconstruction module. The suggested navigation output will be visually interpreted by the GUI from data provided by the path planning module.
Functional requirement: display visual interpretations of data, accept user input
Input: video feed, 3D model, navigation information/path
Output: video feed, 3D model visualization, navigation output
2.0 System Setup
This section describes how to set up the development environment and run the project. First, install the Java Development Kit and the NetBeans IDE. Next, install OpenCV and make sure its native libraries are available on the system path. Then add the JavaCV jars to the NetBeans project libraries so the Java code can call into OpenCV. Finally, open the project in NetBeans and run its main class to start the software. A short test that can be used to verify the camera and libraries are working together is sketched under Section 2.3 below.
2.1 Personal Computer
2.2 Camera Calibration
2.3 Camera Rig
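To verify that the camera rig and the OpenCV/JavaCV installation are working together, a short grab-and-display test can be run. The following is a minimal sketch assuming the pre-1.0 JavaCV API (package com.googlecode.javacv); class and package names may differ in other versions.

import com.googlecode.javacv.CanvasFrame;
import com.googlecode.javacv.FrameGrabber;
import com.googlecode.javacv.OpenCVFrameGrabber;
import com.googlecode.javacv.cpp.opencv_core.IplImage;

public class CameraSmokeTest {
    public static void main(String[] args) throws FrameGrabber.Exception {
        // Open the default camera (device 0) through OpenCV.
        OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
        grabber.start();
        CanvasFrame canvas = new CanvasFrame("Camera Rig Test");
        // Display frames until the window is closed.
        while (canvas.isVisible()) {
            IplImage frame = grabber.grab();
            if (frame == null) {
                break; // the camera stopped producing frames
            }
            canvas.showImage(frame);
        }
        grabber.stop();
        canvas.dispose();
    }
}

If a window appears showing live video, the camera rig, OpenCV, and JavaCV are all installed correctly.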
3.0 Principles of Operation
The most important design principle of the MAV optical navigation software subsystem is modularity. Much like the concept of interchangeable parts, each software module in the high level algorithm shown in Figure 1 can be replaced with another module that accepts the same inputs and produces the same outputs. It is envisioned that future modules based on new algorithms will be developed by others and can easily be integrated into the software subsystem to incrementally improve the overall system over time. In addition to establishing modularity, many preliminary modules have been developed and are discussed in detail below.
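As an illustration of this interchangeable-parts principle, each module can be written against a small, fixed contract. The sketch below is hypothetical; the type and method names are illustrative and are not the project's actual API.

import java.util.List;

// Hypothetical contracts; names are illustrative, not the project's actual API.
interface EnvironmentModel { /* the virtual 3D model of the environment */ }
interface Airspace { /* one unit of navigable space within the model */ }

interface PathPlanner {
    // Any algorithm honoring this contract (for example, a Dijkstra-based
    // planner) can replace another without changes to the rest of the system.
    List<Airspace> planPath(EnvironmentModel model, Airspace start, Airspace goal);
}

Swapping path planning algorithms then amounts to constructing a different PathPlanner implementation; no other module needs to change.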
3.1 Software
3.1.1 Object Discovery
3.1.2 Object Tracking
3.1.2.1 Long Term Tracking
3.1.2.2 Short Term Tracking
3.1.3 Object Recognition
3.1.4 Egomotion Estimation
3.1.5 Three-Dimensional Reconstruction
The 3D environment model is represented as a collection of airspaces, with occupied regions marked as no-fly zones that the path planning module must avoid.
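One simple representation consistent with this description is a three-dimensional grid of airspaces in which occupied cells are flagged as no-fly zones. The following is a hypothetical sketch, not the project's final data structure.

// Hypothetical grid model; the names and layout are illustrative only.
public class AirspaceGrid {
    private final boolean[][][] noFly; // true marks a no-fly airspace

    public AirspaceGrid(int xCells, int yCells, int zCells) {
        noFly = new boolean[xCells][yCells][zCells];
    }

    // Mark the airspace at (x, y, z) as occupied by a detected object.
    public void markNoFly(int x, int y, int z) {
        noFly[x][y][z] = true;
    }

    // An airspace is navigable when it has not been marked as a no-fly zone.
    public boolean isNavigable(int x, int y, int z) {
        return !noFly[x][y][z];
    }
}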
3.1.6 Path Planning
The path planning module is responsible for finding a path in the 3D environment model between the current location and the user-selected destination and for suggesting navigational output to the user through the graphical user interface. The criteria for the path are based on the algorithm encapsulated by the currently executing module. Two modules have been developed to demonstrate this approach.
One path planning module developed is the Dijkstra module. This module is based on Dijkstra’s algorithm, an algorithm used to find the shortest path between two nodes in a graph.
Dijkstra's algorithm maintains a tentative distance for every node, initially zero for the start node and infinity for all others. At each step it settles the unsettled node with the smallest tentative distance, then relaxes each of that node's neighbors: if reaching a neighbor through the settled node is cheaper than the neighbor's current tentative distance, the neighbor's distance and predecessor are updated. When the destination node is settled, following the recorded predecessors backwards yields the shortest path.
The algorithm is applied to 3D navigation as follows. The current location and user-selected destination, defined as airspaces in the 3D environment model, are treated as the start and end nodes for the algorithm. Between each pair of neighboring airspaces is a potential edge; an edge is included only when the neighboring airspace is navigable, so computed paths never pass through no-fly zones.
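A compact version of the algorithm, written against a generic adjacency map rather than the project's actual airspace types, is sketched below for reference.

import java.util.*;

// Minimal Dijkstra sketch over a graph of numbered airspaces (illustrative only).
public class DijkstraSketch {

    // adjacency.get(u).get(v) is the cost of moving from airspace u to neighbor v.
    public static List<Integer> shortestPath(
            Map<Integer, Map<Integer, Double>> adjacency, int start, int goal) {
        final Map<Integer, Double> dist = new HashMap<Integer, Double>();
        Map<Integer, Integer> previous = new HashMap<Integer, Integer>();
        PriorityQueue<Integer> frontier = new PriorityQueue<Integer>(11,
                new Comparator<Integer>() {
                    public int compare(Integer a, Integer b) {
                        return Double.compare(dist.get(a), dist.get(b));
                    }
                });
        dist.put(start, 0.0);
        frontier.add(start);
        while (!frontier.isEmpty()) {
            int u = frontier.poll();
            if (u == goal) {
                break; // the destination has been settled
            }
            Map<Integer, Double> edges = adjacency.get(u);
            if (edges == null) {
                continue; // no outgoing edges from this airspace
            }
            for (Map.Entry<Integer, Double> edge : edges.entrySet()) {
                double alt = dist.get(u) + edge.getValue();
                Double known = dist.get(edge.getKey());
                if (known == null || alt < known) {
                    frontier.remove(edge.getKey()); // re-queue with new priority
                    dist.put(edge.getKey(), alt);
                    previous.put(edge.getKey(), u);
                    frontier.add(edge.getKey());
                }
            }
        }
        // Walk the recorded predecessors backwards to recover the path.
        List<Integer> path = new LinkedList<Integer>();
        for (Integer step = goal; step != null; step = previous.get(step)) {
            path.add(0, step);
        }
        return (path.isEmpty() || path.get(0) != start)
                ? Collections.<Integer>emptyList() : path;
    }
}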
3.1.7 Graphical User Interface
The graphical user interface presents the camera video feed, a visualization of the 3D environment model, and the suggested navigation output, and accepts the user's selection of a target destination.
3.1.8 Camera Calibration
3.2 Hardware
3.2.1 Camera Rig
3.2.2 Personal Computer
4.0 Test Procedures
Each major component will be verified against the requirements documented in the System Requirements Specification (SyRS) at the beginning of the semester. A checklist of those requirements will be compiled, and each component test and the overall system test will demonstrate that the capabilities of the implemented design satisfy every item on the checklist.
5.0 Requirements Traceability
Requirement Number | Requirement | Test | Pass/Fail
6.0 List of References