# Introduction

Because synthesis arrays sample the u-v plane only at discrete locations, our knowledge of the Fourier transform of the source intensity distribution is incomplete. The measured visibility data can be thought of as the true visibility distribution, $V(u,v)$, in the u-v plane multiplied by some sampling function, $S(u,v)$. The convolution theorem states that the Fourier transform of the sampled distribution (the dirty image, $I^D$) is equal to the convolution of the Fourier transform of the true source visibility distribution (the true image, $I$) with the Fourier transform of the sampling function (the dirty beam, $B$):

$$I^D = \mathcal{F}(S\,V) = \mathcal{F}(S) \otimes \mathcal{F}(V) = B \otimes I$$

where $\otimes$ indicates convolution, and $\mathcal{F}$ indicates the Fourier transform. Deconvolution algorithms attempt to account for the unsampled regions of the u-v plane. If the u-v plane were fully sampled, there would be no sidelobes, since the sampling function would be a constant, and the Fourier transform of a constant is a delta function: a perfect beam. Thus, deconvolution tries to remove the sidelobes of the dirty beam that are present in the image. It is important to realize that, in doing so, the algorithm is guessing at what the visibilities are in the unsampled parts of the u-v plane. The solution to the convolution equation is not unique, and the problem of image reconstruction is reduced to that of choosing a plausible image from the set of possible solutions.
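The sampling relation can be illustrated numerically. The following sketch (plain NumPy, not Miriad code; the image size, source positions and the 30% sampling fraction are arbitrary choices for the demonstration) builds a toy true image, samples its Fourier transform, and verifies that the resulting dirty image equals the true image convolved with the dirty beam:

```python
import numpy as np

n = 64
rng = np.random.default_rng(0)

# A toy "true image": a couple of point sources.
I = np.zeros((n, n))
I[20, 20] = 1.0
I[40, 30] = 0.5

# True visibilities: the Fourier transform of the true image.
V = np.fft.fft2(I)

# A sparse sampling function S (1 where the u-v plane was measured).
S = (rng.random((n, n)) < 0.3).astype(float)

# Dirty image: inverse transform of the sampled visibilities.
I_dirty = np.fft.ifft2(S * V).real

# Dirty beam: inverse transform of the sampling function itself.
B = np.fft.ifft2(S).real

# The same dirty image, obtained instead by (circularly) convolving
# the true image with the dirty beam via FFTs.
I_conv = np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(B)).real

assert np.allclose(I_dirty, I_conv)
```

The dirty image differs from the true image precisely because of the sidelobes introduced by the incomplete sampling; deconvolution attempts to undo that convolution.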

You should be extremely cautious when deconvolving images formed from a small number of snapshots. In these cases, there will be large areas of the u-v plane that are unsampled because of the poor instantaneous u-v coverage of the ATCA. If the source is complicated, the deconvolution algorithm may go badly wrong in its guess of what the source really looks like in the gaps. The best way to make a decent image of an object is to observe it, not to allow a deconvolution algorithm to guess what it looks like.

If you are using multi-frequency synthesis (MFS) in a situation where spectral index effects are significant (i.e. when the fractional spread in frequencies is appreciable and/or images with dynamic ranges better than a few hundred are required), then the simple convolution relationship no longer applies; see Section 14.4.

There are two techniques commonly used in radio astronomy: CLEAN and maximum entropy (MEM). CLEAN is rarely used outside of radio astronomy, whereas MEM has found far wider application. For a detailed discussion of the pros and cons of these algorithms, see the NRAO imaging workshops and references therein. Much blood has been spilt over their relative merits in the last decade or so.

It is probably fair to say that, in general, CLEAN is easier to drive than MEM, although using MEM can result in reduced processing times for large problems. All dirty images produced by invert (continuum, line, MFS, any Stokes parameter) can be deconvolved with either clean (CLEAN) or maxen (MEM). However, to deconvolve an MFS image where spectral index effects are important, a special version of CLEAN, mfclean, is used.
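As an illustration, a typical deconvolution sequence might look like the following. The dataset and image names here are placeholders, and only a few of each task's parameters are shown; run `inp clean` (etc.) within Miriad to see the full parameter list for each task.

```
# Deconvolve the dirty map with CLEAN (substitute maxen for MEM)
clean map=src.map beam=src.beam out=src.model niters=1000

# For an MFS image where spectral effects matter, use mfclean instead
mfclean map=src.map beam=src.beam out=src.model niters=1000

# Restore: convolve the model with the CLEAN beam, add the residuals
restor map=src.map beam=src.beam model=src.model out=src.restor
```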

After the deconvolution is finished, you have a model of the source. CLEAN produces a CLEAN component image (a collection of delta functions), whereas MEM produces a smoother model. The output images of both CLEAN and MEM are in units of Jy/pixel, and are super-resolved (i.e. they contain spatial frequencies beyond those measured). Looking at the CLEAN component image is a sobering experience. While the MEM output is more visually appealing, it will generally contain a positive bias. To improve the qualitative appearance of the models produced by the deconvolution tasks, and to suppress what is essentially unmeasured high spatial frequency structure, it is common (essentially universal with CLEAN) to follow the deconvolution with a `restore' step. This step involves convolving the model sky with a Gaussian, and then adding the residual image to the result. The Gaussian is chosen to match the main lobe of the dirty beam, and it is generally called the CLEAN beam (regardless of whether CLEAN or MEM was used) or the restoring beam.
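The restore step can be sketched as follows (again plain NumPy rather than Miriad's actual implementation; the image size, beam FWHM and source flux are made-up values, and a circular Gaussian is assumed where a real restoring beam is generally elliptical):

```python
import numpy as np

def gaussian_beam(n, fwhm_pix):
    """Circular Gaussian restoring beam, peak-normalised to 1."""
    y, x = np.indices((n, n)) - n // 2
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2))

n = 64
model = np.zeros((n, n))        # CLEAN component image (delta functions, Jy/pixel)
model[32, 32] = 2.0             # a 2 Jy point source
residual = np.zeros((n, n))     # residual image left over by the deconvolution

beam = gaussian_beam(n, fwhm_pix=4.0)

# Convolve the model with the CLEAN beam via FFTs (the beam is centred,
# so shift its peak to the origin first), then add the residuals back.
restored = np.fft.ifft2(np.fft.fft2(model) *
                        np.fft.fft2(np.fft.ifftshift(beam))).real + residual

# The restored point source now peaks at its flux density, in Jy/beam.
assert np.isclose(restored.max(), 2.0)
```

Note that the restored image is in Jy/beam rather than Jy/pixel: each delta-function component has been replaced by a Gaussian of the same integrated peak, which is why the units change across the restore step.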