ECE 599/CECS 596
ECE Team 3 – CECS Team 1

University of Louisville
Micro Aerial Vehicle Optical Navigation Software Subsystem
System Requirements Specification
Jacob Schreiver, Justin Clark, Adrian Fletcher, and Nathan Armentrout
Sponsored by: Dr. Adrian Lauf
9/25/2011

Revision 2

This is the System Requirements Specification for the Micro Aerial Vehicle Vision Project.

Table of Contents

1. Purpose of the System

2. Background Information

3. Operational Concept

4. System Description

4.1. Functional Requirements

4.2. Major Components

4.3. External Interfaces

4.4. Internal Interfaces

4.5. Design Constraints

5. Standards/References

6. Appendix

1. Purpose of the System

The purpose of the Micro Aerial Vehicle (MAV) Optical Navigation Software Subsystem is to enable a flapping-wing MAV to autonomously navigate to a predetermined target location in a closed, static environment (e.g., a room) using purely optical sensor input.

2. Background Information

MAVs are a subset of the widely recognized unmanned aerial vehicles (UAVs). MAVs are distinguished by their small size and agility relative to other UAVs. Within the MAV subset there are three dominant mechanical designs: fixed-wing, rotary-wing, and flapping-wing. Flapping-wing MAVs are designed for higher maneuverability than rotary-wing designs; however, they are currently less developed than the other two designs.

MAVs cannot generate a large amount of lift and are therefore designed to be lightweight. This severely limits the electronics and sensors a MAV can carry. Typical sensors include accelerometers, gyroscopes, cameras, GPS receivers, and range finders. The MAV particular to this project is equipped with large-scale and small-scale 3-axis accelerometers, a gyroscope, and a camera-transceiver unit. Because of the real-time and accuracy requirements of the MAV's navigation system, a purely optical navigation approach is desired.

In general, this technology lends itself to many future applications. One application is mapping the interior of an unknown structure, which is useful in two very distinct situations. The first is intelligence gathering: an adversary's compound can be investigated internally, with low risk to human life, using the 3D mapping capability. The second is search and rescue: an MAV equipped with the vision subsystem can be sent into unstable structures to look for disaster survivors while providing more information than current alternatives.

3. Operational Concept

The user will open the graphical user interface on a server that has access to the live video feed. The user will first use a video camera to record and register video of the target to be recognized. Once enough data has been collected on the target, the user will end the target training session. The user will then place the camera at the desired start location for the navigation process and set up the optical navigation software subsystem for autonomous operation. The user can then manually move the camera to mimic the flight pattern the MAV would follow while learning about its environment. The subsystem will simultaneously display and record the video feed, track objects within the video feed, and generate a 3D map of the environment. Once the subsystem locates the target, it will calculate the optimal route to reach it and begin to provide the user with navigational output.

In an eventual deployment, the vision software subsystem would be integrated with an MAV; for this project, the camera is moved by hand to emulate the MAV's flight, as described above.
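The workflow above can be summarized as a small set of operating modes. The sketch below (in Python) is illustrative only; the mode names and transition conditions are assumptions made for this example and are not mandated by this specification.

    # Illustrative sketch of the operating modes implied by the operational
    # concept. Mode names and transition conditions are assumptions.
    from enum import Enum, auto

    class Mode(Enum):
        TARGET_TRAINING = auto()   # user records and registers video of the target
        MAPPING = auto()           # user moves the camera; subsystem builds the 3D map
        NAVIGATING = auto()        # target located; subsystem outputs directions

    def next_mode(mode, target_trained, target_located):
        # Advance the subsystem through its operating modes.
        if mode is Mode.TARGET_TRAINING and target_trained:
            return Mode.MAPPING
        if mode is Mode.MAPPING and target_located:
            return Mode.NAVIGATING
        return mode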

4. System Description

Figure 1 - System Block Diagram

4.1. Functional Requirements

The optical navigation software subsystem has two primary functions. The first function is to produce a 3D map of the environment. The 3D map must have a high enough resolution to be useful for navigation and user interpretation. The map is an asset to both the software and the user: the software may use the completed map to calculate a path to the target destination, while the user may use it to verify the visual data and gain assurance that the subsystem interprets its environment correctly.

The second function is to provide real-time navigational output. Using its awareness of the camera's location within the mapped environment, the subsystem must generate control directions that guide the camera along the calculated path to the target.
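To illustrate how the completed map could support path calculation, the sketch below plans a route over a two-dimensional occupancy-grid slice of the map using breadth-first search. The grid representation, start, and target are assumptions made for this example; the actual map format and planning algorithm are left to the design.

    # Illustrative only: plan a path on a 2D occupancy-grid slice of the map
    # using breadth-first search. The grid format is an assumption.
    from collections import deque

    def plan_path(grid, start, target):
        # grid[r][c] is True where space is free; returns a list of cells or None.
        rows, cols = len(grid), len(grid[0])
        frontier = deque([start])
        came_from = {start: None}
        while frontier:
            cell = frontier.popleft()
            if cell == target:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and nxt not in came_from:
                    came_from[nxt] = cell
                    frontier.append(nxt)
        return None  # no free path from start to target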

4.2. Major Components

There are two major physical components to the optical navigation software subsystem: a camera and a supporting computer. The software subsystem will execute on the supporting computer and will use many existing and custom software components.

Camera

The camera will be a typical commercial off-the-shelf (COTS) web camera. It will interface with the supporting computer to provide optical input for the optical navigation software subsystem. Additionally, the image quality of a web camera more closely resembles the performance of the MAV-mounted camera than a higher-grade camera would.
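As a sketch of how the web camera might be read on the supporting computer, the example below uses the OpenCV library; the device index and resolution shown are placeholder assumptions, not specified values.

    # Minimal sketch of reading frames from a COTS web camera with OpenCV.
    # Device index and resolution are placeholder assumptions.
    import cv2

    def frames(device_index=0, width=640, height=480):
        cap = cv2.VideoCapture(device_index)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
        if not cap.isOpened():
            raise RuntimeError("web camera could not be opened")
        try:
            while True:
                ok, frame = cap.read()   # one BGR image from the camera
                if not ok:
                    break
                yield frame
        finally:
            cap.release()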

Supporting Computer

The supporting computer will run the optical navigation software subsystem. It must execute the software subsystem quickly enough to produce real-time control information through a visual display.

Software

The optical navigation software will synthesize the incoming camera data into a 3D map and provide control output, in a form understandable to the user, indicating the direction the camera should travel.
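One way the software could relate successive frames when estimating camera motion is sketched below using OpenCV corner detection and Lucas-Kanade optical flow; the specific functions and parameter values are assumptions for illustration and do not prescribe the design.

    # Illustrative sketch: track corner features between consecutive grayscale
    # frames with Lucas-Kanade optical flow. Parameter values are assumptions.
    import cv2
    import numpy as np

    def track_features(prev_gray, curr_gray):
        # Returns matched (previous, current) feature positions between two frames.
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                           qualityLevel=0.01, minDistance=7)
        if prev_pts is None:
            return np.empty((0, 2)), np.empty((0, 2))
        curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                          prev_pts, None)
        good = status.ravel() == 1
        return prev_pts[good].reshape(-1, 2), curr_pts[good].reshape(-1, 2)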

4.3. External Interfaces

The optical navigation software subsystem has two external interfaces: the environment and the user.

Camera to Environment

The camera is the subsystem's window into the environment: it converts the visual scene in front of it into digital video frames, which are the only sensory input available to the subsystem.

Display to User

The subsystem presents the live video feed, the 3D map, and the navigational output to the user through the graphical user interface.

4.4. Internal Interfaces

Internally, the subsystem connects the camera input to the image-processing and 3D mapping software, and connects the mapping, target-recognition, and path-planning results to the graphical user interface output.

4.5. Design Constraints

The optical navigation software subsystem must conform to several design constraints. One constraint is that only one camera can be used to gather visual data. The primary reason for this constraint is to model the constraints that the MAV will have.

Three additional reasons for this constraint are weight, cost, and power consumption. The MAV particular to this project can only support a payload weighing ___ grams; adding another camera would nearly exceed the payload restriction and needlessly jeopardize flight performance. Second, the camera specified for the MAV costs thousands of dollars, so adding another camera would needlessly increase the cost of the MAV by ___%. Last, power is at a premium on the MAV; an additional camera would needlessly increase power consumption, lowering the mean flight time between recharges.

Another constraint is limited computing power. Image processing must be completed quickly enough to provide real-time feedback.
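One simple way to express this real-time constraint is as a per-frame processing budget, sketched below; the 15 frames-per-second target is an assumption for illustration, not a specified requirement.

    # Illustrative check of the real-time constraint: each frame's processing
    # must finish within a per-frame budget. The 15 fps target is an assumption.
    import time

    TARGET_FPS = 15
    FRAME_BUDGET_S = 1.0 / TARGET_FPS   # roughly 66 ms per frame

    def process_with_budget(process_frame, frame):
        start = time.perf_counter()
        result = process_frame(frame)
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET_S:
            print("warning: frame took %.1f ms, budget is %.1f ms"
                  % (elapsed * 1000, FRAME_BUDGET_S * 1000))
        return result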

Yet another constraint is that the subsystem must be room agnostic: it cannot rely on prior knowledge of a specific room and must operate in any closed, static environment.

5. Standards/References

Research articles and books referenced by this project will be listed here.

6. Appendix
