
VISTA Data Flow System

Document Title: / Further Analysis of July 2007 VIRCAM Linearity Data
Document Number: / VIS-TRE-IOA-20000-0019
Issue: / 1.0
Date: / 2008-01-16


Authors: / Jim Lewis, Peter Bunclark (CASU) / Signature and date:
Approved by: / Mike Irwin (CASU Manager) / Signature and date:

Change Record

Issue / Date / Sections Affected / Description of Change / Change Request Reference / Remarks
1.0 / 2008-01-16 / All / New document

Contents

Change Record

List of Figures

1 Introduction

1.1 Goals

1.2 Applicable Documents

1.3 Reference Documents

1.4 Abbreviations and Acronyms

2 Overview of the data

2.1 Revision of Analysis Algorithm

2.2 Linearity Sequence Analysis

2.3 Monitor Sequence Analysis

3 Further Lessons Learned

4 Summary

List of Figures

Figure 2-1 Flux (ADU) versus time (sec) for detectors 1-8

Figure 2-2 Flux (ADU) versus time (sec) for detectors 9-16

Figure 2-3 Linearity curve detector 1

Figure 2-4 Linearity curve detector 2

Figure 2-5 Linearity curve detector 3

Figure 2-6 Linearity curve detector 4

Figure 2-7 Linearity curve detector 5

Figure 2-8 Linearity curve detector 6

Figure 2-9 Linearity curve detector 7

Figure 2-10 Linearity curve detector 8

Figure 2-11 Linearity curve detector 9

Figure 2-12 Linearity curve detector 10

Figure 2-13 Linearity curve detector 11

Figure 2-14 Linearity curve detector 12

Figure 2-15 Linearity curve detector 13

Figure 2-16 Linearity curve detector 14

Figure 2-17 Linearity curve detector 15

Figure 2-18 Linearity curve detector 16

Figure 2-19 Median flux (ADU) of monitor exposures versus time (fraction of day) for 2007-07-15 (detectors 1-8)

Figure 2-20 Median flux (ADU) of monitor exposures versus time (fraction of day) for 2007-07-12 (detectors 9-16)

1 Introduction

During July 2007, an engineering team from RAL, ATC and QMUL obtained some 2,300 VIRCAM frames during EMC testing and other miscellaneous laboratory tests. A report from CASU [RD1] describes the subsequent analysis of those data in Cambridge. In that report, the non-linearity of the data was dealt with only in a cursory way. In this report we present a more detailed analysis of the linearity data.

1.1 Goals

The analysis has the following goals:

  1. Obtain a measure of non-linearity for each detector.
  2. Use the monitor sequences to correct for light-source drift during the observations.
  3. Determine the point in the linearity curves at which the detectors begin to saturate.
  4. Work out a good-practice strategy for observing linearity sequences.

1.2 Applicable Documents

1.3 Reference Documents

[RD1] Analysis of July 2007 VIRCAM Engineering Data, VIS-TRE-IOA-20000-0018

[RD2] VISTA Data Reduction Library Design, VIS-TRE-IOA-20000-0010, v1.9, 2007-10-26

1.4 Abbreviations and Acronyms

ADU / Analogue-to-Digital Unit
ATC / UK Astronomy Technology Centre
CASU / Cambridge Astronomical Survey Unit
EMC / Electromagnetic Compatibility
QMUL / Queen Mary, University of London
RAL / Rutherford Appleton Laboratory
VDFS / VISTA Data Flow System
VIRCAM / VISTA Infrared Camera
VISTA / Visible and Infrared Survey Telescope for Astronomy


2 Overview of the data

Section 2 of [RD1] gives a complete description of all of the data that were taken during the July 2007 engineering run. During that run, data were taken using both the standard correlated double sampling read mode and an experimental reset-read mode. As a result of the analysis in that report, it was decided not to offer the latter read mode for general observing with VIRCAM. Hence, in this report we restrict our analysis to the linearity sequences taken with the correlated double sampling read mode.

2.1 Revision of Analysis Algorithm

Section 2 of [RD2] presents a full mathematical treatment for the analysis of data taken on an instrument without a shutter and which are reset corrected. This analysis depends critically on the timing model of the reset and read stages of the exposure and is essentially a general treatment. The VIRCAM detectors are run in a mode where the reset time equals the read time, which is a far simpler timing model than for many detectors. This allows the analysis to be simplified and leads to a more robust solution. A full explanation of how the method has been modified will be included in the next release of [RD2].

A further modification to the algorithm is to monitor the constancy of the ambient light source during the observation of the linearity sequence. This is done by taking a series of exposures with a single exposure time, interspersed with the exposures of the linearity sequence. If the mean flux of the light source varies during the linearity sequence, this shows up as changes in the median flux of the monitor sequence, and any deduced change can be applied as a correction factor to the exposures of the linearity sequence. These monitor sequences were in fact taken during the July 2007 run, but were not analysed in [RD1]. All of the results that follow in this report were computed from statistics that have been corrected for any background change seen in the monitor sequences.
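The drift correction described above can be sketched as follows. This is a minimal illustration, not the CASU pipeline code; the function name, argument names, and the choice of linear interpolation between monitor exposures are all assumptions.

```python
import numpy as np

def drift_correction(monitor_times, monitor_fluxes, linearity_times):
    """Estimate light-source drift from the monitor exposures and return
    multiplicative correction factors for the linearity exposures.

    Illustrative sketch: interface and interpolation scheme are assumed.
    """
    monitor_fluxes = np.asarray(monitor_fluxes, dtype=float)
    # Normalise the monitor fluxes to their median: a perfectly constant
    # source gives a drift curve of 1.0 throughout.
    norm = monitor_fluxes / np.median(monitor_fluxes)
    # Interpolate the normalised drift curve onto the times of the
    # linearity exposures.
    drift = np.interp(linearity_times, monitor_times, norm)
    # Multiplying each linearity flux by its factor removes the
    # perceived background change.
    return 1.0 / drift
```

For a constant source the factors are all unity, so the linearity statistics are unchanged; for a drifting source each exposure is rescaled to the median source level.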

2.2 Linearity Sequence Analysis

The first thing to note about the linearity results is that, although each data channel is analysed separately, the results are very similar for all channels of an individual detector. The raw flux-versus-time curves for all sixteen detectors are shown in Figure 2-1 and Figure 2-2; although there are eight curves on each graph, the curve for each detector is clearly marked. Table 2-1 below gives the calculated average non-linearity for each detector, defined at a nominal input flux of 10000 counts. This was computed by first adopting a saturation level of 40000 counts for each detector and fitting a 4th-order polynomial. The table shows that none of the detectors came very close to the nominal saturation level, and hence it was impossible to define a true saturation level for most of the detectors. The exceptions are detectors 1 and 5 (starred values in the data-range column), which showed quite strong saturation at roughly 33000 and 24000 counts respectively.

During the course of this analysis it became clear that the nominal non-linearity estimated in this way can vary at the 1-2% level depending on the order of the polynomial fitted, even though the formal error of the fit is quite small. We believe this is mainly because the number of points at low flux levels (where one expects nearly linear behaviour) is small, so the linear term of the fit is poorly constrained. The non-linearity estimates in Table 2-1 should therefore be regarded only as a first guess.
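One common way to turn such a polynomial fit into a single non-linearity number is sketched below. This is an illustration, not the CASU definition: we assume the non-linearity at a nominal flux is the fractional deviation of the fitted curve from its own linear term, evaluated at the time where the linear term reaches that flux.

```python
import numpy as np

def nonlinearity_at(times, fluxes, nominal=10000.0, order=4):
    """Fit flux versus time with a polynomial and report the fractional
    non-linearity at a nominal input flux.

    Illustrative definition (assumed, not from [RD2]): the deviation of
    the fit from its own linear term, evaluated where the linear term
    equals `nominal` counts.
    """
    coeffs = np.polyfit(times, fluxes, order)       # highest power first
    slope = coeffs[-2]                              # linear term of the fit
    t_nom = nominal / slope                         # time at nominal linear flux
    measured = np.polyval(coeffs, t_nom) - coeffs[-1]  # drop the constant term
    return (measured - nominal) / nominal
```

The sensitivity described in the text can be reproduced by varying `order`: with few low-flux points the recovered linear term, and hence the quoted non-linearity, shifts at the percent level from one order to the next.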

To give a rough idea of the run of non-linearity with flux level, we fitted a straight line to the first few points of each flux-versus-time curve and then subtracted this linear estimate from each point on the curve. The resulting plots of residual versus input flux are shown in Figure 2-3 to Figure 2-18. The curves for detectors 1 and 5 clearly show the effect of saturation at the bright end.
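The residual computation described above amounts to the following sketch (names and the choice of how many points define the linear baseline are assumptions for illustration):

```python
import numpy as np

def linearity_residuals(times, fluxes, n_linear=4):
    """Fit a straight line to the first few (low-flux) points of a
    flux-versus-time curve and return the residual of every point from
    that linear extrapolation.

    `n_linear` (how many points define the baseline) is an assumed choice.
    """
    # Linear fit to the low-flux end, where the detector response should
    # be close to linear.
    slope, intercept = np.polyfit(times[:n_linear], fluxes[:n_linear], 1)
    linear_estimate = slope * np.asarray(times, dtype=float) + intercept
    # Negative residuals at high flux indicate the onset of saturation.
    return np.asarray(fluxes, dtype=float) - linear_estimate
```

Plotting these residuals against the measured flux gives curves of the kind shown for the sixteen detectors, with saturating detectors turning sharply downward at the bright end.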

The final item to emerge from the linearity analysis is the percentage of bad pixels on each detector, presented in the final column of Table 2-1.