Colorado Space Grant Symposium

Three-Corner Satellite Imaging

Presented by:

Ryan Olds

Matt Culbreth

Matt Gadda

Abstract:

The imaging system of the Three-Corner Satellite was designed to fulfill the project's primary mission: obtaining stereo images of the Earth, and in particular image data of clouds from which cloud height can be determined. A stack of three satellites flying in formation will capture images of the same object from different perspectives, and this data will be used to calculate the object's dimensions. Off-the-shelf digital cameras will be installed in each satellite. To accomplish this goal, software was written to interface with the cameras, process and compress the image data, and rank each image according to its usefulness to the mission. The image data will be transmitted down to Earth and analyzed using stereo imaging software developed at the University of Colorado. The results of this analysis could provide data useful to meteorologists in determining cloud height.
THE THREE-CORNER SATELLITE PROJECT:

The Three-Corner Satellite project, known as 3CS, is a collaborative effort between the University of Colorado, Arizona State University, and New Mexico State University to send a formation of three satellites into Earth orbit. The main mission objectives include taking stereoscopic images of the Earth, implementing inter-satellite communication, and fully automating the operation of the satellites. The satellites are expected to launch sometime in August of 2002. The three satellites will be put into a stack and launched into orbit by the space shuttle. Once in orbit they will detach and form a triangular formation, flying together to test the concept of satellite formation flying. All three satellites will be equipped with a camera system that will take stereo images of the Earth.

Other experimental concepts will also be tested on this project, such as the Free Molecule Micro-Resistojet (FMMR) and the Continuous Activity Scheduling Planning Execution and Replanning software known as CASPER. The FMMR, developed by Joyce Wong at Arizona State University, is an experimental propulsion device, but it will not actually be used as a source of propulsion during this mission; 3CS will simply test and operate FMMR equipment on two of the three satellites. The CASPER software, developed by the Jet Propulsion Laboratory in Pasadena, California, will run continuously on all three satellites for the duration of the mission. It is capable of functioning and making decisions on its own based on data collected by the satellite, allowing the satellites to be fully automated. If it works correctly, NASA could use this software on future satellites. With all of these new technologies to test, 3CS will be a very interesting mission. This paper discusses the collaborative stereo imaging system that the three satellites will use.

1.0)IMAGING SYSTEMS:

The Three-Corner Satellite project faces the difficult task of building an imaging system that will allow a constellation of three satellites to take collaborative stereo images of the Earth. The main objective of this mission is to obtain several images of clouds and other objects on the Earth from different angles. This visual data could help determine the dimensions of objects on the Earth from space. The heights of certain objects, such as clouds, would prove very useful to meteorologists in predicting weather patterns or even routing airline flights more efficiently around storms or turbulent weather. To provide this data, the satellites must be equipped with capable camera equipment as well as collaborative software that allows them to take coordinated pictures of the same spots on the Earth.

1.1)Camera Equipment:

The camera equipment chosen for this mission is surprisingly simple: JamCam 2.0 digital cameras will be the only cameras used. They were chosen for several reasons. The main reason is that they are extremely easy to obtain. Because the equipment on the satellites had to be built from scratch and interfaced to fit inside the satellite, a great deal of experimentation was required to get it working, which meant that cameras could be damaged or destroyed in the process (discussed later). Having a large stockpile of cameras was therefore essential to building and testing the equipment. These cameras were also chosen because the manufacturer was extremely helpful in providing the information that allowed project team members to interface with the cameras and connect them to the satellite flight computers.

In order to talk to the cameras through the satellite flight computers, the cameras first had to be taken apart. Each camera was examined to determine where power was provided and where power needed to be applied to take a picture. The manufacturer of the cameras (KB Gear) provided the project team with guidance on how this should be done. Once the camera components were identified, they were hardwired to a serial line that attaches to the flight computers through an interface board developed by students and faculty at the University of Colorado. A simple diagram of the cameras is shown below.

Using this serial line, commands provided by the manufacturer could be sent to the cameras, telling them to turn on, take a picture, and download that picture to the flight computers. Once the cameras were functioning correctly, software had to be written to manage them efficiently and store the images in the flight computer's memory.
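As a rough illustration, the exchange over the serial line might look like the sketch below. The device path and the single-byte command codes are hypothetical placeholders; the real command set came from the manufacturer's documentation and is not reproduced in this paper.

```c
/* Minimal sketch of commanding a camera over its serial line with
 * POSIX calls. CMD_PING and CMD_SNAP are hypothetical command bytes,
 * and /dev/ttyS0 is an assumed port. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define CMD_PING 0x41   /* hypothetical "are you alive?" command */
#define CMD_SNAP 0x42   /* hypothetical "take a picture" command */

int main(void)
{
    unsigned char cmd, reply;

    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   /* assumed port */
    if (fd < 0) { perror("open"); return 1; }

    cmd = CMD_PING;
    write(fd, &cmd, 1);                /* ping the camera */
    if (read(fd, &reply, 1) == 1)      /* any byte back = camera is up */
        printf("camera responded: 0x%02x\n", reply);

    cmd = CMD_SNAP;
    write(fd, &cmd, 1);                /* trigger an exposure */

    close(fd);
    return 0;
}
```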

1.2)Camera Software:

The software for the cameras on the Three-Corner Satellite project had to synchronize the cameras on all three satellites in the hope that they would all capture an image of the same object on the Earth at the same time from different angles. Such groups of pictures allow the dimensions of the object in them to be determined; the method for this analysis is discussed later. The software utilizes the onboard memory of the cameras: when a camera takes a picture, the image is first stored in the camera's own memory before being transferred to the flight computer. To get an image from the orbiting satellites to its destination on Earth, the following steps must be carried out.

Step 1: The first step in taking a picture on one of the satellites is simply to turn on a camera and make sure it is responding to commands and requests. The software does this by routing power to a specific camera through the imaging board and then pinging the camera to verify that it is in fact on and responding to serial commands.

Step 2: Next the camera is told to clear its memory bank, ensuring that the picture about to be taken is not confused with older pictures that are of no interest.

Step 3: During this step the camera takes a picture. Since the satellites will be constantly moving, the camera's shutter speed is made as fast as possible to avoid blurry images. Exposure parameters provided by the manufacturer allow the 3CS team to customize the camera's exposure for images taken several hundred kilometers above the Earth.

Step 4: Once an image has been taken, it is stored in the camera's onboard memory. In order to process it and eventually downlink it to the ground over the satellite's radios, it must first be downloaded from the camera to the memory on the flight computer. The images taken by the camera are approximately 100 kilobytes, and transferring one from the camera to the flight computer takes approximately one minute.

Step 5: After an image has been successfully taken by the camera and transferred to the satellite flight computer, the next step is to process it. This is discussed in more detail later, but a brief summary is given here. The images taken by the camera are approximately 100 KB in 8-bit color (the color scheme is covered in later sections). In order to view the picture in full color and detail, it is first transformed to 24-bit color by an imaging software algorithm. This triples its size, so the image grows to approximately 300 KB, which is far too large to economically store many such images on the flight computer. Because of this, the image is then compressed into a JPEG file. The greater the compression factor, the more detail is lost from the picture, so a compromise between file size and picture detail was chosen and applied to all of the images taken on the satellite.

Step 6: Now that the image has been processed, it is ready to be sent to the ground. During a pass over the ground station on Earth, the radio equipment on the satellite is scheduled to transmit a specified picture to the ground. Since the main focus of the 3CS project is to obtain stereo images, the imaging software was designed to organize all of the pictures taken by the three satellites so that when the formation takes a stereo image, all of its component images are transmitted back home together.

Step 7: After a picture has been taken and processed, all of the imaging equipment is powered down as soon as possible to conserve power on the satellites.
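Taken together, the seven steps amount to a simple linear sequence in the flight software. The sketch below shows one way that sequence might be expressed; every function name and stub body here is hypothetical and only mirrors the steps above, not the actual 3CS code.

```c
/* Hypothetical sketch of the seven-step capture sequence. The stubs
 * just announce their step; the real flight software is not shown in
 * this paper. */
#include <stdio.h>

static int step(const char *what) { printf("%s\n", what); return 0; }

static int camera_power_on(int cam)    { return step("1: power on + ping camera"); }
static int camera_clear_mem(int cam)   { return step("2: clear camera memory"); }
static int camera_snap(int cam)        { return step("3: fast-shutter exposure"); }
static int download_image(int cam)     { return step("4: ~100 KB over serial, ~1 min"); }
static int process_image(int cam)      { return step("5: 8- to 24-bit, then JPEG"); }
static int queue_for_downlink(int cam) { return step("6: group with its stereo set"); }
static int camera_power_off(int cam)   { return step("7: power down ASAP"); }

int main(void)
{
    int cam = 0;                     /* one of the four cameras */
    if (camera_power_on(cam) != 0)   /* abort if the camera is unresponsive */
        return 1;
    camera_clear_mem(cam);
    camera_snap(cam);
    download_image(cam);
    process_image(cam);
    queue_for_downlink(cam);
    camera_power_off(cam);
    return 0;
}
```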

Each satellite will be equipped with four cameras to increase the odds that at any given time at least one camera is pointed toward the Earth. This precaution was taken because the satellites will have no attitude control system on board, which makes it much more challenging to acquire three images that all show the same thing from different angles. To work around this difficulty, a ranking system was devised, using an algorithm written by Brian Egaas, a faculty member at the University of Colorado, that ranks every picture by how useful it could be in forming a stereo image; the highest-ranked images are given priority to be transmitted to Earth first. All of the camera equipment, including the imaging board used to route power to the cameras, will be packed tightly into an aluminum casing to protect it from damage during launch or in space. Probably the most important aspect of the camera software is processing, compressing, and ranking all of the images taken by the cameras; this software is discussed in the next section.
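The ranking algorithm itself is not described here, but once each image carries a rank, the downlink ordering reduces to a sort. The sketch below shows only that ordering step; the image_record structure and its rank field are hypothetical.

```c
/* Hypothetical downlink prioritization: sort images by usefulness rank
 * (descending) so the best stereo candidates reach the ground first. */
#include <stdlib.h>

struct image_record {
    int   id;     /* which stored image */
    float rank;   /* higher = more useful for stereo analysis */
};

static int by_rank_desc(const void *a, const void *b)
{
    float ra = ((const struct image_record *)a)->rank;
    float rb = ((const struct image_record *)b)->rank;
    return (rb > ra) - (rb < ra);   /* descending order */
}

void order_downlink_queue(struct image_record *imgs, size_t n)
{
    qsort(imgs, n, sizeof *imgs, by_rank_desc);
}
```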

2.0)IMAGE PROCESSING:

The image processing routines in the 3CS imaging software are vital to the mission because the main mission objective depends on analyzing the images very accurately. To make 3CS a successful project, the images taken by the satellites must be as good as possible, and it is an added challenge that mission control will have no control over the attitude and orientation of the satellites. To work around this problem, several ways of obtaining stereo images were devised to give the project as much redundancy as possible. Stereo images can be taken in two ways. One way is to have two or more satellites take a picture at the same time, possibly capturing an image of the same thing on the ground that can be analyzed. The second way is for one satellite to take a picture of the Earth and then take another a couple of minutes later; since the cameras have a field of view of about 30 degrees, a single satellite can possibly capture two different angles of the same object by itself. However these stereo images are obtained, the imaging software must determine whether the acquired images are good candidates for a stereo image of an object or junk that is of no benefit to the mission. The imaging software performs the following analyses for this task.
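For a sense of scale, a 30-degree field of view implies a ground footprint on the order of 200 km from low Earth orbit. The similar-triangles arithmetic is sketched below; the altitude is an assumed typical value, not a figure from the paper.

```c
/* Rough ground-footprint estimate from the camera field of view.
 * The altitude is an assumption for illustration. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI      = 3.14159265358979;
    double       alt_km  = 380.0;   /* assumed low-Earth-orbit altitude */
    double       fov_deg = 30.0;    /* camera field of view (from the paper) */

    /* half-angle similar triangle: width = 2 * h * tan(fov / 2) */
    double footprint = 2.0 * alt_km * tan(fov_deg * PI / 360.0);
    printf("nadir ground footprint: ~%.0f km wide\n", footprint);  /* ~204 km */
    return 0;
}
```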

2.1)RGB Color Interpolation:

As briefly stated before, the images that the digital cameras take are in 8-bit color and around 100 KB in size. The 8-bit format contains information for only one color value, red, green, or blue, per pixel. Images in this format do not look very clear and lack the kind of detail necessary to examine an object in the picture, so it is necessary to transform each image into 24-bit color, in which every pixel has its own red, green, and blue values. The two images shown below demonstrate the difference between a picture in 8-bit color and one in 24-bit color.

The 24-bit image is clearly sharper and far better colored. To obtain this 24-bit format, an interpolation technique is used to transform the 8-bit image. The 8-bit pixel arrangement produced by the camera is the same for every picture; the color pattern of pixels produced by the JamCam 2.0 cameras was determined experimentally and is shown below to the left. Notice how every pixel has only one color value of red, green, or blue. Since this pattern is the same for every picture, it is possible to interpolate the image so that every pixel gets a red, a green, and a blue value.

To give every pixel all three color values, an average of the colors in the surrounding pixels is used. For example, a red pixel in the 8-bit image needs green and blue values to become 24-bit color. To get them, the average value of the surrounding blue pixels is used for blue, and the average value of the surrounding green pixels is used for green. With values for all three colors, the pixel is in 24-bit color.
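This translates into a straightforward neighbor-averaging loop. The sketch below assumes a standard Bayer-style RGGB mosaic purely for illustration; the JamCam's actual pixel pattern was determined experimentally and may differ.

```c
/* Neighbor-averaging interpolation sketch, assuming an RGGB mosaic.
 * raw holds w*h single-channel samples; rgb receives w*h*3 bytes. */
#include <stdint.h>

/* Which color the assumed mosaic provides at (x, y): 0 = R, 1 = G, 2 = B. */
static int mosaic_color(int x, int y)
{
    if ((y & 1) == 0) return (x & 1) ? 1 : 0;   /* even rows: R G R G */
    return (x & 1) ? 2 : 1;                     /* odd rows:  G B G B */
}

void demosaic(const uint8_t *raw, uint8_t *rgb, int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            for (int c = 0; c < 3; c++) {
                int sum = 0, n = 0;
                /* average every pixel in the 3x3 neighborhood that
                 * natively carries channel c */
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy < 0 || yy >= h || xx < 0 || xx >= w)
                            continue;
                        if (mosaic_color(xx, yy) == c) {
                            sum += raw[yy * w + xx];
                            n++;
                        }
                    }
                rgb[(y * w + x) * 3 + c] = n ? (uint8_t)(sum / n) : 0;
            }
}
```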

This process also triples the size of the picture file, since it triples the amount of color information in the picture. Because the satellites have a very limited amount of memory, and transmitting files to Earth takes a long time, it is necessary to compress these pictures into JPEG files.

2.2)JPEG Compression:

The JPEG image compression standard was chosen to compress the stereo images for several reasons. First, JPEG can compress an image at a ratio between 10:1 and 20:1 with relatively small degradation in image quality. The compression ratio is especially important for this project because of the limited communications bandwidth available for transferring images; images produced by the cameras are typically around 40 KB when compressed. Second, JPEG is an industry standard and the most common form of image compression, with a wide range of software available to view and edit JPEG files, which eliminates the need to write custom viewing software. Another benefit is that the Independent JPEG Group maintains a free software library with full functionality for compressing JPEG images. This library was modified to work on the flight hardware and was interfaced with the imaging system with relative ease, saving a great deal of time that would otherwise have been required to write compression software from scratch.
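As a sketch of how the IJG library might be used, the function below compresses one 24-bit RGB frame to a JPEG file. The quality setting and file-based output are assumptions for illustration; the flight code would presumably write to the computer's memory rather than to a file.

```c
/* Compress a 24-bit RGB frame with the Independent JPEG Group's libjpeg.
 * The quality value approximating the ~10:1 ratio is an assumption. */
#include <stdio.h>
#include <jpeglib.h>

void write_jpeg(const char *path, unsigned char *rgb, int w, int h)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE *out = fopen(path, "wb");
    if (!out) return;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, out);

    cinfo.image_width      = w;
    cinfo.image_height     = h;
    cinfo.input_components = 3;          /* 24-bit RGB in */
    cinfo.in_color_space   = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 75, TRUE);  /* assumed setting, roughly 10:1 */

    jpeg_start_compress(&cinfo, TRUE);
    while (cinfo.next_scanline < cinfo.image_height) {
        JSAMPROW row = &rgb[cinfo.next_scanline * w * 3];
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    fclose(out);
}
```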

There are several disadvantages to JPEG that stem mainly from artifacts introduced by the compression process. JPEG compression is a "lossy" process: some data is lost during compression, causing a degradation in quality that is directly proportional to the compression ratio. The degradation manifests itself primarily in two forms: visual artifacts, and inaccurate, poorly blended colors. The visual artifacts typically show up as "blockiness" in the image, which comes from the way JPEG processes the image in sections of 8x8 pixels: when the image is decompressed, adjacent blocks can look different because each was compressed independently. The poor color representation comes from the general loss of high-order color data during compression and from the transformation and reduction of colorspace data.

JPEG compression also does not handle sharp contrasts or hard lines in images well. It was designed with natural images in mind, which usually contain gradually blended colors; sharply contrasting sections of an image will exhibit higher than normal occurrences of the aforementioned artifacts and will also produce a larger compressed size.

Each of the above problems can be alleviated to a large extent by staying within a 10:1 to 20:1 compression ratio and by avoiding highly contrasting images. The 3CS mission uses a compression ratio of around 10:1, and the images taken are of clouds, which exhibit nicely blended colors, so the loss of image data has been found to be insignificant.

2.3)Determining Height from Stereo Images:

Using only a pair of stereo images, a few predetermined distances, and an equation derived from the geometry of similar triangles, the height of a cloud can be readily determined.
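As an illustration of the kind of relation involved (the exact 3CS derivation is not reproduced here), one standard similar-triangles result from stereo photogrammetry gives the height of a cloud top from the extra parallax it shows relative to the ground:

```latex
% H        = satellite altitude above the ground
% b        = parallax of a ground reference point (the photo base)
% \Delta p = additional (differential) parallax shown by the cloud top
h = \frac{H \,\Delta p}{b + \Delta p}
```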