Mission-Oriented Sensor Array for On-board Automatic Generation of Thematic Maps in UAVs

Nina Machado Figueira*, Onofre Trindade Júnior*, Ana Carolina de Sousa Silva**, Eduardo do Valle Simões*

*Instituto de Ciências Matemáticas e Computação (ICMC), Universidade de São Paulo (USP) – Campus São Carlos.

** Laboratório de Física Aplicada e Computacional – LAFAC, Universidade de São Paulo (USP) – Campus Pirassununga.

Abstract. This paper addresses the automatic generation of thematic maps by a system embedded in an Unmanned Aerial Vehicle. This system follows the Mission-Oriented Sensor Array (MOSA) architecture, presented and discussed in this paper. We present a MOSA application for the automatic mapping and characterization of ground-detected sound sources. These sources may originate from internal combustion engines or artillery activity. The MOSA modeled for this application integrates data from a thermal camera, RGB imaging sensors and an on-the-ground Sound Sensor Network (SSN). The integration of positional information provided by these two data sources into a single cartographic basis is one of the key aspects addressed in this work. As its main contribution, this paper introduces the development process of new remote sensing applications for UAVs following the architecture of MOSA systems.

Keywords: UAVs, MOSA, Thematic Mapping, Embedded Processing, Multi-Sensor Data Fusion.

1. Introduction

This work focuses on remote sensing. It addresses the definition of a model and reference architecture for the development of smart sensors oriented to specific missions. The main objective of these missions is the automatic generation of thematic maps. Automatic generation of thematic maps from data acquired by sensor arrays requires results from different areas of knowledge, particularly Computer Systems, Electronics Engineering and Cartographic Engineering. In this paper, we integrate this knowledge into a multi-sensor system architecture (Figueira et al. 2013).

The use of Unmanned Aerial Vehicles (UAVs) has become increasingly common, not just in the military context but also in civilian applications. In the military scenario, the use of UAVs has focused on the accomplishment of specific tasks in two broad categories: remote sensing and transport of military materiel (Trindade et al. 2010). Military UAV missions are normally based on image sensors (RGB, thermal, radar and others), generating many gigabytes of images per flight hour. Sensors are mounted on inertial platforms, relying on GPS receivers and Inertial Measurement Units (IMUs) for correct positioning.

A wide variety of UAVs is currently available for civilian applications, ranging from low-cost electric-powered units to large, high-endurance units. To be worth using in civilian applications, UAVs must provide useful work at a good cost/benefit ratio. Applications normally refer to a geographic region, an activity and specific user needs, leading to a great diversity of sensor arrays and processing facilities.

Nowadays, trained personnel using supervised and unsupervised processing algorithms are usually responsible for data processing on the ground, in a Ground Control Station (GCS) (Trindade et al. 2010). In some cases, the data are inspected manually, but this does not always comply with the temporal resolution requirements of the application. A good example is data processing for agriculture management, where the detection of pests and diseases must generate thematic maps within a specific time slot; otherwise, the crop can be compromised before the appropriate countermeasures are applied. Automatic data processing sounds promising in this scenario.

The MOSA (Mission-Oriented Sensor Array) architecture and sensors proposed in this paper have the potential to provide, in real time, ready-to-use information produced by embedded data processing engines. Furthermore, they reduce or eliminate the need for the high-bandwidth communication channels to ground facilities normally used to carry real-time data, such as high-resolution images. In the military scenario, ready-to-use thematic maps can be loaded in real time into military systems, such as automatic aiming and decision support systems.

The remainder of this text is structured as follows: Section 2 presents a review of related work; Section 3 introduces the MOSA concepts and architecture; Section 4 describes the design of a MOSA payload for environmental monitoring; and Section 5 presents conclusions and future work.

2. Related Work

Multi-sensor systems have been employed extensively both for environmental monitoring (Abielmona et al. 2010) and for human activity monitoring (Ugolotti et al. 2011). Polychronopoulos et al. (2006) presented a paper on multi-sensor data fusion in platforms for wide-area perception. Molina et al. (2012a; 2012b) discussed the technical and operational challenges of the combined use of infrared and visible images for the localization of lost people. Based on an analysis of the position quality provided by the geodesic European Geostationary Navigation Overlay Service (EGNOS), it was possible to obtain a realistic and accurate position for the targets. Klausner et al. (2006) demonstrated the feasibility of embedded computation for real-time multi-sensor fusion.

Yi Lin et al. (2013) proposed the development of a new system, Air-Ground Remote Sensing (AGRS), aimed at the acquisition of scenes of interest. The work associates AGRS images with a mobile mapping system (MMS) aboard a UAV. Nagai et al. (2009) developed a three-dimensional mapping system based on the integration of multiple sensors embedded in UAVs. Kealy et al. (2013) presented a study on cooperative (or collaborative) navigation using platforms with positioning and location sensors of different accuracies. A positioning accuracy of a few meters was achieved in preliminary field experiments.

Zhang & Kovacs (2012) conducted a review of studies involving the use of small UAVs in precision agriculture. The results of these studies indicate that, to provide farmers with a reliable final product, advances are necessary in the design of acquisition platforms, in the analysis of technical production details, and in the standardization of georeferencing and image mosaicking. Choi & Lee (2013) developed a sequential aerial triangulation algorithm for direct georeferencing of real-time image sequences acquired by an airborne multi-sensor system. This algorithm can be used in applications that require real-time image georeferencing, such as disaster monitoring and image-based navigation.

Li-Chee-Ming & Armenakis (2012) introduced a Mobile Stereo Mapping System (MSMS) using UAVs, designed for quick navigation and the collection of three-dimensional spatial data through direct georeferencing and the integration of a multi-sensor array. Hruska et al. (2005) presented a workflow and architecture for the acquisition of high-resolution geotagged images using small UAVs, including mission planning, selection and integration of sensors, and acquisition, processing and analysis of the images.

3. Mission-Oriented Sensor Arrays – A Proposal

MOSA systems include a set of embedded sensors that provide raw data for specific applications. In addition to the hardware, a MOSA system also includes the software able to carry out a mission, communicate with all sensors, and exchange data with the aircraft (Pires et al. 2012). On-board processing reduces raw data to ready-to-use information. This approach leads to modern aerial systems that can accomplish complex missions, presenting decision-making capabilities and optimizing the air-to-ground, real-time dataflow within the limits of the communication channels.

During a mission, a MOSA payload can dynamically adapt to mission demands, choosing the best sensor arrangement according to the situation.

Although hardware costs are not a limitation in complex systems, such as medium and large UAVs, the use of MOSA can provide great versatility and flexibility in the development of sensor systems for new applications. Different sensors and processing units can be integrated into the best cost/benefit sensor arrangement for a specific usage scenario.

The main feature of the MOSA architecture is the division of the system into two distinct modules: the aircraft module (the flight-safety-critical part of the UAV, including the autopilot and flight sensors) and the MOSA module (the non-critical part of the UAV). To communicate with the aircraft, MOSA uses a standard interface, called SSP/SSI (Smart Sensor Protocol/Smart Sensor Interface) (Pires et al. 2012). SSP is the communication protocol, while SSI is the interface that allows the MOSA system to use various services provided by the aircraft, particularly the air transportation service and communication with the GCS. Figure 1 shows this organization as a simplified functional diagram of the MOSA architecture and the interconnections among system components. Modules with dashed edges are optional. The diagram can change in complexity and number of components according to a particular application.

MOSA makes possible the integration of many devices, such as Global Positioning System (GPS) receivers, Inertial Navigation System (INS) units, infrared (IR) and thermal image sensors, photographic cameras, video cameras, laser scanners, radars and acoustic sensors, among others. MOSA systems can be used in different UAVs that have been adapted to communicate over the SSI/SSP interface. The communication protocol uses a plug-and-play mechanism to check whether the aircraft is able to perform a specific mission; in some cases, a very long range or fast maneuvers may be required, among other limiting factors. According to these limitations, MOSA systems must be able to accomplish a planned mission completely or partially.
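The plug-and-play capability check described above can be sketched as follows. This is a minimal illustration only: the actual SSP message format is defined in Pires et al. (2012), and the capability fields used here (range, speed, payload mass) are assumptions for the sake of the example.

```python
# Hypothetical sketch of the SSP plug-and-play capability check: the MOSA
# module compares mission requirements against the capabilities the aircraft
# advertises, and decides between complete, partial or no accomplishment.
# Field names are illustrative, not taken from the SSP specification.
from dataclasses import dataclass

@dataclass
class AircraftCapabilities:
    max_range_km: float   # maximum flight range offered by the aircraft
    max_speed_ms: float   # maximum cruise speed in m/s
    payload_kg: float     # available payload mass

@dataclass
class MissionRequirements:
    range_km: float
    speed_ms: float
    payload_kg: float

def check_mission(aircraft: AircraftCapabilities,
                  mission: MissionRequirements) -> str:
    """Decide whether the aircraft can fly the mission fully, partially, or not at all."""
    if (aircraft.max_range_km >= mission.range_km
            and aircraft.max_speed_ms >= mission.speed_ms
            and aircraft.payload_kg >= mission.payload_kg):
        return "complete"
    if aircraft.payload_kg >= mission.payload_kg:
        return "partial"  # sensors fit, but range/speed limits restrict coverage
    return "abort"

print(check_mission(AircraftCapabilities(120, 30, 5),
                    MissionRequirements(100, 25, 4)))  # complete
```

A real implementation would exchange these records over the SSI at mission upload time, before take-off.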

Figure 1: MOSA architecture.

4. Design of a MOSA Payload for Environmental Monitoring

In this section, we present the specification and design of a MOSA payload intended for environmental monitoring in regions of difficult access (with little or no communication infrastructure, such as wilderness regions or enemy territory).

a) Problem Description

The importance of environmental monitoring and preservation is constantly emphasized nowadays. On November 24, 2014, the newspaper Folha de São Paulo published an article entitled "Indians will use mobile phones in trees to monitor forests in Amazon" (Garcia 2014), describing the importance of environmental monitoring for an Indian reservation that is constantly being invaded by loggers. This is just one of many cases where continuous environmental monitoring is necessary.

Environmental monitoring consists of measurements and/or observations of indicators and parameters in order to verify whether certain environmental impacts are occurring, their magnitudes, and the efficiency of any preventive or corrective measures (Bitar & Ortega 1998). According to Machado (1995), the recording of monitoring activities is extremely important to evaluate the situation and to aid decision making in various spheres of the public and private sectors.

According to Machado (1995), the main objectives of environmental monitoring are:

• Check that certain environmental impacts are occurring;

• Compute their magnitude;

• Assess whether mitigation measures are being effective or not;

• Propose, when necessary, the adoption of additional mitigation measures.

Many environmental monitoring activities require differentiated solutions. Among them, we can mention fauna monitoring and environmental surveillance, encompassing the detection and location of illegal activities (hunting and poaching). Updated maps of the areas of concern are also important.

The monitored variables are specific to each type of application. For example, when monitoring water resources, the amount of fecal coliform bacteria can be an indicator of pollution. Monitoring an ecosystem brings out many other variables, such as: air temperature, humidity, wind speed, rainfall, dew point, air pollution, soil salinity, pH, water vapor pressure, among others.

In the context of this work, we address the following issues:

• Map update to reflect the cartographic reality of the area under monitoring;

• Automatic detection of the presence of large animals and humans, characterized by sound and thermal emission;

• Animal movement and hunting activity, characterized by animal sounds and thermal images, and firearm activity, characterized by sounds, the thermal emission of fires and related images.

b) Types of data, detection methods and sensors

The described scenario usually involves a poorly mapped area of difficult access, where there may be poaching, environmental crimes (such as illegal logging and the silting of riverbeds), and even endangered species that need to be monitored frequently.

To plan a surveillance mission it is important to know:

(1) The types of data describing the phenomena/elements under study;

(2) The detection methods for the phenomena/elements, which enable the selection of the sensors;

(3) The selected sensors (Sá 2002).

There are biomes in Brazil with different compositions ranging from dense vegetation (rain forest) to sparse vegetation (savanna, cerrado, pampas). Mission planning requires different approaches to overcome difficulties, taking into account the particularities of each scenario.

When a certain area needs to be monitored or supervised, two important issues must be addressed: poor geographical knowledge of the location and access difficulties to the region (Sá 2002). Geographic knowledge of the area under investigation is essential for planning and accomplishment of the surveillance mission. The absence of updated cartographic documents makes it very difficult to locate the targets.

In very wide areas of difficult access, it is often impractical to implement a continuous monitoring system, as there are cost and safety issues associated with the monitoring activities. A possible solution for these cases can be the employment of aerial-based monitoring. Aerial photography based on conventional aircraft is an expensive and time-consuming process when compared to the flexibility and versatility of recently available UAV platforms.

Images are important sources of thematic information, and there are numerous sensors that can be embedded into a UAV to generate this type of data (radars, photographic cameras, and RGB and thermal video cameras). The generated images are processed by vision algorithms for the automatic identification of elements.

In the context of environmental monitoring, the acquisition, processing and analysis of sound data are also important, since they may increase the perception of the phenomena that occur in a given area. Inspired by the soundscape concept, the study of the sounds of a specific scenario (Pijanowski et al. 2011), embedded audio recorders can be used in multiple sensor stations to register the sounds occurring in the monitored area. These stations can be connected wirelessly to form a Sound Sensor Network (SSN). In the proposed solution, the SSN collects environmental sounds, pre-processes them and sends them (via a modem) to the UAV overflying the area.

In the SSN, sound data are converted into spectrograms by the Fast Fourier Transform (FFT) before being sent to the UAV. This process reduces the volume of data over the limited-bandwidth channel between the SSN and the UAV. The sound information, images and GPS coordinates are processed on board by the MOSA system.
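The spectrogram conversion above can be sketched with a short-time FFT. This is a minimal illustration of the data-reduction idea; the window and hop sizes are assumptions, not parameters taken from the paper.

```python
# Minimal sketch of the on-the-ground spectrogram computation: a short-time
# FFT reduces raw audio to frequency/amplitude time series before uplink.
# Window and hop sizes (256/128 samples) are illustrative assumptions.
import numpy as np

def spectrogram(signal: np.ndarray, window: int = 256, hop: int = 128) -> np.ndarray:
    """Return a magnitude spectrogram: one row per time frame, one column per frequency bin."""
    hann = np.hanning(window)          # taper each frame to reduce spectral leakage
    frames = []
    for start in range(0, len(signal) - window + 1, hop):
        frame = signal[start:start + window] * hann
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

fs = 8000                              # sample rate in Hz
t = np.arange(fs) / fs                 # one second of audio
tone = np.sin(2 * np.pi * 440 * t)     # 440 Hz test tone standing in for a recording
spec = spectrogram(tone)
print(spec.shape)                      # (61, 129): 61 frames, 129 frequency bins
```

One second of 16-bit audio at 8 kHz is 16 kB, while the label extracted from its spectrogram can be a few bytes, which is what makes the limited SSN-to-UAV channel workable.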

c) The proposed system

The system is composed of the following elements, illustrated in Figure 2:

1) An SSN deployed in the geographical area of interest;

2) Continuous processing of the raw sound data by on-the-ground processors, resulting in time series of sound frequency and amplitude (Fast Fourier Transform);

3) UAV flights over the SSN area, collecting the data processed by the sound processors;

4) Processing of the sound data by the MOSA payload to classify sounds based on a pre-existing sound-signature library; it is also possible to use algorithms to determine the angle of incidence of the sound and the motion of the sound source;

5) On-board processing of aerial thermal imaging for the detection of the presence of large animals (including humans) in the area;

6) Merging of the thematic information from the sound sensors with the thematic information obtained from the thermal sensor to extract the following information:

a. Presence of animals and humans;

b. Detection of poaching activity;

c. Detection of routine animal activity.

7) Acquisition of aerial photographs for later georeferencing and orthorectification, supporting the mapping of the area of interest.

It should be taken into account that communication between the SSN and the MOSA payload is not always possible, since the UAV will not always be flying over the SSN. For example, ground sensors can record and store chainsaw sound signatures over a whole week, and these data will be sent to MOSA for analysis only when the UAV flies over the SSN.
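This store-and-forward behavior can be sketched as a bounded local queue on each ground node, flushed only when the UAV is in range. The class and method names below are hypothetical, chosen only to illustrate the delay-tolerant hand-off described above.

```python
# Illustrative store-and-forward buffer for a ground sound-sensor node:
# pre-processed detections are queued locally and handed off to the MOSA
# payload only when the UAV overflies the node. All names are hypothetical.
from collections import deque
from typing import Deque, List, Tuple

class SoundNodeBuffer:
    def __init__(self, capacity: int = 1000):
        # bounded queue: when full, the oldest detections are dropped first
        self.queue: Deque[Tuple[float, str]] = deque(maxlen=capacity)

    def record(self, timestamp: float, signature: str) -> None:
        """Store one pre-processed detection (e.g. a chainsaw spectrogram label)."""
        self.queue.append((timestamp, signature))

    def flush_to_uav(self) -> List[Tuple[float, str]]:
        """Called when the UAV is in radio range: hand off everything, clear the buffer."""
        batch = list(self.queue)
        self.queue.clear()
        return batch

node = SoundNodeBuffer()
node.record(3600.0, "chainsaw")   # detections accumulated between overflights
node.record(7200.0, "gunshot")
print(node.flush_to_uav())        # [(3600.0, 'chainsaw'), (7200.0, 'gunshot')]
```

The bounded capacity reflects the constraint mentioned above: a node may accumulate a week of detections before the next overflight, so old data must eventually give way to new.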

Figure 2: Elements of the proposed system.

The diagram in Figure 3 presents a model of the proposed MOSA payload.

Figure 3: Data-flow diagram of the proposed MOSA architecture.

The processes in this data-flow diagram are:

P1: FRAME SELECTION: a process that receives a video stream at N frames per second and selects periodic frames from the sequence, since there is a huge image overlap between adjacent frames in the time sequence;

P2: HOT SPOTS DETECTION: process that uses a search window to find, in thermal images, clusters of pixels that represent elements with temperatures above a given threshold;

P3: THERMAL IMAGE GEOREFERENCING: process that correlates elements in the thermal images with coordinates from different sources (GPS, IMU and documents in the geographic database);

P4: BINARIZATION: process that converts an image into another image with two groups of pixels: clusters of hot spots and the rest of the image;

P5: IMAGE FEATURES EXTRACTION: process that analyzes the binary image produced by P4 and extracts the contour of the cluster of high-temperature pixels;

P6: THERMAL IMAGE CLASSIFICATION: process that compares the silhouette of the element contained in the binary image with silhouettes of hot spots contained in the thermal signature library;

P7: SOUND FEATURES EXTRACTION: process that searches spectrograms for characteristic sound patterns (acoustic signatures) related to the sought targets;

P8: SOUND CLASSIFICATION: process that compares the features found in P7 with an acoustic library to identify targets;

P9: COMPARISON OF SOUND CLASSIFICATION WITH IMAGE CLASSIFICATION: process that checks the consistency of the results from processes P6 and P8.
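A condensed sketch of some of these processes is given below, using only NumPy. It covers hot-spot detection and binarization (P2/P4), a simplified feature extraction (P5, a bounding box standing in for a full contour), and the consistency check (P9). The temperature threshold and the agreement rule are illustrative assumptions; the paper's signature libraries and classifiers (P6–P8) are not modeled here.

```python
# Condensed sketch of processes P2, P4, P5 and P9. The threshold value and
# the consistency rule are illustrative assumptions, not values from the paper.
import numpy as np

def detect_hot_spots(thermal: np.ndarray, threshold: float) -> np.ndarray:
    """P2 + P4: binarize the thermal image into hot (1) and background (0) pixels."""
    return (thermal > threshold).astype(np.uint8)

def extract_bounding_box(binary: np.ndarray):
    """P5 (simplified): bounding box of the hot-pixel cluster instead of a full contour."""
    rows, cols = np.nonzero(binary)
    if rows.size == 0:
        return None  # no hot spot in this frame
    return (int(rows.min()), int(cols.min()), int(rows.max()), int(cols.max()))

def consistent(sound_label: str, image_label: str) -> bool:
    """P9: accept a detection only when the sound and image classifiers agree."""
    return sound_label == image_label

thermal = np.zeros((8, 8))
thermal[2:5, 3:6] = 40.0              # a warm 3x3 blob, e.g. a large animal
binary = detect_hot_spots(thermal, threshold=30.0)
print(extract_bounding_box(binary))   # (2, 3, 4, 5)
print(consistent("animal", "animal")) # True
```

In the full pipeline, the bounding box would be handed to P3 for georeferencing and the silhouette to P6 for matching against the thermal signature library.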