Long-term Monitoring of Molonglo Calibrators

B. M. Gaensler & R. W. Hunstead, PASA, 17 (1), 72.


Observations and Data Analysis


SCAN Measurements

The MOST is an east-west synthesis telescope, consisting of two cylindrical paraboloids of dimensions 778 m $\times$ 12 m. Radio waves are received by a line feed system of 7744 circular dipoles. The telescope is steered by mechanical rotation of the cylindrical paraboloids about their long axis, and by phasing the feed elements along the arms. In a single 12-hour synthesis, the MOST can produce an image at a spatial resolution of $43''\times43''\,{\rm cosec}(\vert\delta\vert)$ and at a sensitivity of $\sim$1 mJy beam$^{-1}$ (where 1 jansky [Jy] $=10^{-26}$ W m$^{-2}$ Hz$^{-1}$).

Before and after each 12-hour synthesis, the MOST typically observes $\sim5$ calibration sources in fan-beam ``SCAN'' mode in order to determine the gain and pointing corrections for the telescope. These sources are chosen from a list of 55 calibrators, 45 of which were chosen from the Molonglo Reference Catalogue (MRC) at 408 MHz (Large et al 1981), using as selection criteria that they have declination $\delta < -30^{\circ}$, Galactic latitude $\vert b\vert > 10^{\circ}$, angular sizes $<10''$ and flux densities $S_{\rm {408\,MHz}}>4$ Jy and $S_{\rm {843\,MHz}}>2.5$ Jy; further discussion is given by Hunstead (1991). This list was later supplemented by seven flat-spectrum ($S_{\rm {408\,MHz}}<4$ Jy) sources from the work of Tzioumis (1987), plus three compact sources for which $\delta > -30^{\circ}$. The full list of calibrators is given in Paper I.

For each SCAN observation the calibrator source is tracked for two minutes, after which the mean antenna response is compared with the theoretical fan-beam response to a point source. From 1994 to 1996, over 58 000 such measurements were made. In each case, parameters such as the goodness-of-fit of the response and the pointing offset from the calibrator position are recorded, along with an amplitude which is the product of the instantaneous values of the source flux density, the intrinsic telescope gain and local sensitivity factors. These sensitivity factors are strong but well-determined functions of meridian distance (MD) and of ambient temperature (which ranges from $-10^{\circ}$C to $+40^{\circ}$C during the year); the variation of sensitivity with MD is shown in Figure 1. After applying corrections for these two factors, the telescope gain for each SCAN is derived by comparing the corrected amplitude with the tabulated flux density of the corresponding source (see Table 1 of Paper I). The residual scatter in the gain determined from steady sources (defined in Section 2.3) is typically 2% RMS; this is the fundamental limit to the uncertainty of measurements made using the SCAN database.
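The gain derivation described above can be sketched as follows. This is a minimal illustration, not the telescope's actual software: the MD and temperature sensitivity factors are treated as already-evaluated numbers, whereas in practice they come from the empirically determined curves mentioned in the text.

```python
# Sketch of the SCAN gain derivation (hypothetical function and
# parameter names; the real MD and temperature sensitivity curves are
# determined empirically at the telescope).

def corrected_amplitude(raw_amplitude, md_factor, temp_factor):
    """Remove the meridian-distance and temperature sensitivity factors."""
    return raw_amplitude / (md_factor * temp_factor)

def telescope_gain(raw_amplitude, md_factor, temp_factor, tabulated_flux_jy):
    """Gain = corrected amplitude / tabulated flux density of the calibrator."""
    return corrected_amplitude(raw_amplitude, md_factor, temp_factor) / tabulated_flux_jy

# Example: a SCAN of a calibrator with a tabulated flux density of 5 Jy.
g = telescope_gain(raw_amplitude=4.6, md_factor=0.95,
                   temp_factor=1.02, tabulated_flux_jy=5.0)
```

A gain near unity indicates the telescope is performing at its nominal sensitivity; the 2% RMS residual scatter quoted above sets the precision of any single such determination.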


Selection Criteria

Various selection criteria are applied to the SCAN database before accepting measurements for further analysis:

  • The uncertainties in the MD gain curve increase towards large MD, and observations made outside the MD range $\pm50^{\circ}$ are excluded;

  • Observations made during routine performance testing (characterised by a large number of successive SCANs of the same source) are discounted, except where the standard deviation in gain was less than 5%. In such cases the group is treated as a single measurement with a gain equal to the average of the group;

  • A poor fit to the antenna response can often indicate a confusing source or a telescope malfunction, and such data are excluded;
  • Extreme values of the relative gain (below 0.5 or above 1.5) are assumed to be discrepant and are discarded.
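The cuts listed above can be sketched as a simple filter over SCAN records. The record field names here are hypothetical; the performance-testing de-duplication step is omitted for brevity.

```python
# A minimal sketch of the SCAN selection cuts (hypothetical field names).

def accept_scan(scan):
    """Return True if a SCAN record passes the selection criteria."""
    if abs(scan["md_deg"]) > 50.0:            # outside the reliable MD gain curve
        return False
    if scan["poor_fit"]:                      # confusing source or malfunction
        return False
    if not (0.5 <= scan["rel_gain"] <= 1.5):  # discrepant relative gain
        return False
    return True

scans = [
    {"md_deg": 12.0, "poor_fit": False, "rel_gain": 0.98},  # accepted
    {"md_deg": 61.0, "poor_fit": False, "rel_gain": 1.00},  # fails MD cut
    {"md_deg": 5.0,  "poor_fit": True,  "rel_gain": 1.01},  # fails fit cut
    {"md_deg": -30., "poor_fit": False, "rel_gain": 1.70},  # extreme gain
]
accepted = [s for s in scans if accept_scan(s)]
```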

Because calibration observations are made just before and after each synthesis, the database is typically clustered into SCANs closely spaced in time. We define a ``block'' as a group of at least three valid observations made within the space of an hour. We initially exclude observations of 15 of the 55 calibrators (see Table 1 of Paper I), because of: (i) a flat spectrum ($\alpha > -0.5$, where $S \propto \nu^{\alpha}$), (ii) suspected variability or (iii) the presence of a confusing source. By averaging the gains determined from each SCAN within a block, a representative gain for the telescope at that particular epoch can be determined. This is then applied to each individual observation within the block to obtain a measurement of flux density for that source.
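The block-averaging step above can be sketched as follows, assuming each SCAN in the block has a gain measurement and a corrected amplitude (an illustration only; names are hypothetical).

```python
# Sketch of block formation: a "block" is >= 3 valid SCANs within an
# hour, and the block-average gain converts each SCAN's corrected
# amplitude into a flux density.

def block_gain(gains):
    """Representative telescope gain for one block of SCANs."""
    if len(gains) < 3:
        raise ValueError("a block needs at least three valid observations")
    return sum(gains) / len(gains)

def flux_densities(corrected_amplitudes, gains):
    """Divide each SCAN's corrected amplitude by the block gain."""
    g = block_gain(gains)
    return [a / g for a in corrected_amplitudes]

# Three SCANs in one block; the block gain is the mean of 0.95, 0.97, 0.96.
fluxes = flux_densities([4.8, 2.4, 9.6], [0.95, 0.97, 0.96])
```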

Some of the resultant light curves have thousands of data points, generally sampled at highly irregular intervals. Some light curves have significant scatter; it is not clear whether this scatter is due to unrecognised systematic errors in our flux density determination, to true variability on time-scales shorter than the typical sampling interval, or to the presence of confusing sources in the field. In any case, we chose to bin each light-curve at 30 day intervals; the mean of all flux densities within a given bin becomes a single point on a smoothed light curve, and the standard deviation of the measurements in that bin becomes the error bar associated with this measurement. While binning the data filters out any genuine variability on time-scales less than a month, the irregular sampling intervals of the observations and the inherent uncertainty in a single SCAN's flux density make the MOST database less than ideal for studying such short-term behaviour.
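The 30-day binning procedure can be sketched in a few lines (a pure-Python illustration, not the actual reduction code; times are taken to be in days):

```python
# Sketch of the 30-day light-curve binning: the mean flux per bin becomes
# one point on the smoothed light curve, and the standard deviation of
# the measurements in the bin becomes its error bar.
import statistics
from collections import defaultdict

def bin_light_curve(times_days, fluxes, bin_days=30.0):
    bins = defaultdict(list)
    for t, s in zip(times_days, fluxes):
        bins[int(t // bin_days)].append(s)
    binned = []
    for k in sorted(bins):
        vals = bins[k]
        mean = statistics.mean(vals)
        err = statistics.stdev(vals) if len(vals) > 1 else 0.0
        binned.append(((k + 0.5) * bin_days, mean, err))  # (bin centre, S, sigma)
    return binned

# Four irregularly spaced measurements collapse into two 30-day bins.
points = bin_light_curve([1.0, 10.0, 40.0, 45.0], [5.0, 5.2, 4.8, 5.0])
```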


Analysis of Variability

In order to quantify which sources are variable and which are steady, we calculate the $\chi^2$ probability that the flux has remained constant for a given source (e.g. Kesteven et al 1976). We first calculate the quantity

\begin{displaymath} x^2 = \sum_{i=1}^{n} (S_i - \tilde{S})^2/\sigma_i^2 \end{displaymath} (1)

where $\tilde{S}$ is the weighted mean, given by

\begin{displaymath} \tilde{S} = \frac{\sum_{i=1}^{n} (S_i/\sigma_i^2)}{\sum_{i=1}^{n} (1/\sigma_i^2)}, \end{displaymath} (2)

$S_i$ is the $i$th measurement of the flux density for a particular source, $\sigma_i^2$ is the variance associated with each 30-day estimate of $S_i$, and $n$ is the number of binned data points for that source. For normally-distributed random errors, we expect $x^2$ to be distributed as $\chi^2$ with $n-1$ degrees of freedom. For each source, we can then calculate the probability, $P$, of exceeding $x^2$ by chance for a random distribution.
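Equations (1) and (2) can be written directly in code; the probability $P$ would then follow from the $\chi^2$ survival function with $n-1$ degrees of freedom (e.g. `scipy.stats.chi2.sf`), which is omitted here to keep the sketch dependency-free.

```python
# Sketch of the chi^2 variability statistic of Eqs (1)-(2).

def weighted_mean(s, sigma):
    """Eq. (2): inverse-variance weighted mean of the flux densities."""
    w = [1.0 / e**2 for e in sigma]
    return sum(wi * si for wi, si in zip(w, s)) / sum(w)

def x_squared(s, sigma):
    """Eq. (1): sum of squared residuals about the weighted mean."""
    m = weighted_mean(s, sigma)
    return sum((si - m) ** 2 / ei**2 for si, ei in zip(s, sigma))

# With equal errors the weighted mean reduces to the ordinary mean (5.0),
# so x^2 = (0.0 + 0.04 + 0.04) / 0.01 = 8.0 for n = 3 points.
x2 = x_squared([5.0, 5.2, 4.8], [0.1, 0.1, 0.1])
```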

A high value of $P$ indicates that a source has a steady flux density over the available time period; we classify a source as steady (S) if $P>0.01$, and undetermined (U) if $0.001 < P < 0.01$. However, the $\chi^2$ test cannot distinguish between sources which are genuinely variable and those which simply have a large scatter in their light curve; both light curves result in a low value of $P$. We distinguish between these possibilities by computing the structure function (e.g. Hughes et al 1992; Kaspi & Stinebring 1992) for each source for which $P<0.001$. The mean is subtracted from the binned time series $S_t$, and these data are then normalised by dividing by the standard deviation. This yields a new time series $F_t$, from which the structure function

\begin{displaymath} \Sigma_\tau = \langle [ F_{t + \tau} - F_t]^2 \rangle \end{displaymath} (3)

can be calculated, where $\tau$ is a parameter known as the lag. If a light curve contains scatter but no true variability, then the structure function will have the value $\Sigma_\tau \approx 2$ for all values of $\tau$. But when a source is truly varying, we expect the resulting structure function to consist of three regimes:

  • Noise regime: at small lags, $\Sigma_\tau$ is more or less constant.

  • Structure regime: as $\tau$ increases, $\Sigma_\tau$ increases linearly (on a log-log plot).

  • Saturation regime: at high lags, the structure function turns over and oscillates around $\Sigma _\tau = 2$ (for our normalisation). If there is a second, longer time-scale in the data, the structure function can enter another linear regime at longer lags before again saturating.

If a source has P<0.001 but shows no clear structure in its structure function, we classify it as undetermined (U). Only sources which have both P < 0.001 and show structure are classified as variable (V). In these cases, the structure function can also be used to obtain a characteristic time scale, $\tau _V$, for variability; we define $\tau _V$ to be equal to twice the lag at which the structure function saturates. We expect a structure function to be sensitive only to time scales longer than about 100 days (i.e. a few multiples of the sampling interval of the binned data). Furthermore, caution should be applied when interpreting structure at large values of $\tau$, as only a few points make a contribution to $\Sigma_\tau$ at these long lags (e.g. Hughes et al 1992).
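The normalised structure function of Eq. (3) can be sketched as follows. This illustration assumes a regularly sampled binned series (integer lags in units of the bin width); the real binned light curves are only approximately regular.

```python
# Sketch of the normalised structure function of Eq. (3): the series is
# mean-subtracted and divided by its standard deviation, so that pure
# uncorrelated noise gives Sigma_tau ~= 2 at all lags.
import statistics

def structure_function(series, lag):
    """Sigma_tau = < [F(t+tau) - F(t)]^2 > for one integer lag."""
    m = statistics.mean(series)
    sd = statistics.pstdev(series)
    f = [(s - m) / sd for s in series]
    diffs = [(f[i + lag] - f[i]) ** 2 for i in range(len(f) - lag)]
    return sum(diffs) / len(diffs)

# A strictly alternating (period-2) light curve: after normalisation the
# structure function is large at odd lags and zero at even lags.
series = [1.0, -1.0] * 10
```

The few-points caveat at large $\tau$ is visible in the code: at lag $\tau$ only $n-\tau$ pairs contribute to the average, so the longest lags are estimated from only a handful of differences.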


© Copyright Astronomical Society of Australia 1997