difx:amplitudescaling




===== Amplitude scaling in General =====


The level of correction done online by DiFX is controlled by the TSYS entry in the DATASTREAM table entries of the .input file. If TSYS <= 0, only the first two steps are done. If TSYS = 1.0, all steps except the last two are done. If TSYS = (some nominal tsys value), then all steps except the last are performed. This was originally the default (and indeed, in early versions, the only) behaviour in DiFX, since it was what the old LBA hardware correlator did. It has the advantage of allowing quick-look processing of the data without any further calibration. However, it also means that the a priori values of system temperature must be un-applied when measured system temperatures are later applied, and hence this mode is not widely used outside the LBA.
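The TSYS-dependent behaviour can be summarised in a minimal sketch. The function name and the total step count are assumptions made for illustration; this is not code from DiFX itself.

```python
# Sketch of which amplitude-scaling steps run for a given TSYS entry.
# n_steps is an assumed total; only the selection logic is taken from the text.
def steps_performed(tsys, n_steps=6):
    """Return the scaling-step numbers applied for a given TSYS setting."""
    steps = list(range(1, n_steps + 1))
    if tsys <= 0:
        return steps[:2]    # only the first two steps are done
    if tsys == 1.0:
        return steps[:-2]   # all steps except the last two
    return steps[:-1]       # nominal tsys: all steps except the last

print(steps_performed(0.0))   # first two steps only
print(steps_performed(1.0))   # everything but the last two
print(steps_performed(75.0))  # everything but the last
```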

Each step will now be considered in more detail.

==== 1. Forming unnormalised correlation counts ====


==== 2. Scaling the unnormalised correlation to correct for the length of integration ====

At the end of one accumulation period the correlator calculates the number of valid samples accumulated and divides the visibilities by this number. Thus, the mean value of an autocorrelation at this stage should be the expectation value of a sample squared. In the scheme outlined above, this is equal to 0.17*(-3.336)*(-3.336) + 0.33*(-1)*(-1) + 0.33*1*1 + 0.17*3.336*3.336 = 4.444. Thus, after this stage, the mean values of the autocorrelations will be 4.444, if the sampler level occupation was exactly as planned. The cross correlations will have some value less than 4.444: they have been attenuated by the system temperature noise and (in the low correlation regime, as is typical) by the attenuation due to quantisation. At this stage, when going the DiFX format output -> difx2fits -> FITS-IDI route, the correlator is done, and writes out a DiFX format record.
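The expectation value quoted above can be checked directly from the standard two-bit level occupations (0.17/0.33/0.33/0.17) and the unpack values of +/-1 and +/-3.336:

```python
# Worked check of the mean sample-squared value for 2-bit sampling with the
# standard level occupations and unpack values used above.
probs  = [0.17, 0.33, 0.33, 0.17]
levels = [-3.336, -1.0, 1.0, 3.336]

mean_sample_sq = sum(p * v * v for p, v in zip(probs, levels))
print(round(mean_sample_sq, 3))  # -> 4.444
```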

This step is skipped in DiFX when TSYS > 0; as we shall see, the division by autocorrelations later on makes it redundant, since the same correction is applied to both the autocorrelations and the cross correlations.

==== 3. Correcting for the nominal occupation of sampler levels, and for the unpack values chosen ====

This is where things start to get muddled. In the existing VLBA infrastructure, FITLD applies part of this correction when DIGICOR is turned on, and the rest is applied by a fudge factor in difx2fits. This mirrors what happened for hardware correlator FITS-IDI files. Now, we could get rid of the scaling in both difx2fits and FITLD by changing the unpack values to 0.474 and 1.583, which would make the expectation value of the power of a single sample equal to 1, removing the need for later scaling. Alternatively, the raw visibilities could be divided by (the number of samples * 4.444), which would achieve the same thing. The trouble is, this requires an update to FITLD, so it may not be practical in the near term. Also, at some level this is dependent on the assumptions about sampler statistics, and while 0.17/0.33/0.33/0.17 is pretty standard, it is still just an assumption. Although, as we will see later, it will all come out in the wash when the "actual" autocorrelation corrections are applied.
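The proposed unpack values are just the original ones divided by sqrt(4.444) (0.474 is approximately 1/sqrt(4.444) and 1.583 is approximately 3.336/sqrt(4.444)), which is easy to verify:

```python
# Check that rescaled unpack values of +/-0.474 and +/-1.583 give a mean
# sample-squared value of 1, under the same 0.17/0.33/0.33/0.17 occupations.
probs  = [0.17, 0.33, 0.33, 0.17]
levels = [-1.583, -0.474, 0.474, 1.583]

power = sum(p * v * v for p, v in zip(probs, levels))
print(round(power, 2))  # -> 1.0
```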

difx/amplitudescaling.1290102466.txt.gz · Last modified: 2010/11/19 04:47 by adamdeller