Hogg's playground for understanding interferometry
A playground to explore the probabilistic and inference properties of radio interferometry arrays. The hope is that it will evolve into a system for computing likelihoods or posterior probabilities over scenes that could explain a set of radio interferometry data.
Copyright 2012 the author. All rights reserved.
If you are interested in using or re-using any of this content, get in touch with Hogg.
You have a faint blob in your cleaned image. Is it significant? Surely this is a question that should or could be answered in the space of visibilities.
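As a sketch of what the visibility-space answer might look like (assuming iid complex Gaussian noise of known rms and one particular Fourier sign convention; every name below is made up for illustration): the maximum-likelihood flux of a hypothetical point source at the blob position, divided by its uncertainty, is a matched-filter signal-to-noise.

    import numpy as np

    def point_source_snr(vis_resid, u, v, sigma, l, m):
        """
        Matched-filter S/N for a point source at direction-cosine offset
        (l, m), given residual visibilities (data minus current best-fit
        model), baseline coordinates u, v in wavelengths, and per-visibility
        noise rms sigma (on the real and imaginary parts separately).
        """
        # rotate each visibility to the putative source position
        phase = np.exp(2.j * np.pi * (u * l + v * m))
        w = 1. / sigma ** 2
        # maximum-likelihood flux and its uncertainty under iid Gaussian noise
        flux = np.sum(w * (vis_resid * phase).real) / np.sum(w)
        flux_err = 1. / np.sqrt(np.sum(w))
        return flux / flux_err

Of course this presumes the noise model is trustworthy, which is the next question.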
Can we write down a good noise model for the visibilities; that is, can we take what is known about the antennas and the correlator and predict the magnitudes of the visibility residuals away from the best-fit "true scene" model (convolved with the dirty beam)?
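One standard starting point, offered here as a hedged sketch rather than a claim about any particular array: the radiometer equation predicts the per-visibility noise rms from the antenna SEFDs, the channel bandwidth, and the integration time, and then iid complex Gaussian noise gives a likelihood. The function names, the efficiency factor, and its default value are all assumptions for illustration.

    import numpy as np

    def visibility_sigma(sefd_i, sefd_j, bandwidth, t_int, eta=0.88):
        """
        Radiometer-equation prediction for the noise rms (Jy) on the real
        and imaginary parts of one visibility, for a baseline between
        antennas with system-equivalent flux densities sefd_i, sefd_j (Jy),
        channel bandwidth (Hz), integration time (s), and correlator
        efficiency eta (value assumed here, not measured).
        """
        return np.sqrt(sefd_i * sefd_j) / (eta * np.sqrt(2. * bandwidth * t_int))

    def ln_likelihood(vis_data, vis_model, sigma):
        """
        Log-likelihood under iid complex Gaussian noise: real and imaginary
        parts independent, each with variance sigma**2.
        """
        resid = vis_data - vis_model
        return np.sum(-(resid.real ** 2 + resid.imag ** 2) / (2. * sigma ** 2)
                      - np.log(2. * np.pi * sigma ** 2))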
Related to the above, is it possible to project the scene inferred by CLEAN back into the visibilities and test the noise model?
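Something like the following sketch would do that projection: direct-Fourier-transform the CLEAN delta-function components onto the sampled (u, v) points and check the chi-squared of the residuals against the noise model, which should come out near 2 per complex visibility if the model is right. The sign convention and all names here are assumptions.

    import numpy as np

    def components_to_visibilities(fluxes, ls, ms, u, v):
        """
        DFT a set of CLEAN delta-function components (fluxes in Jy at
        direction cosines ls, ms) onto the sampled (u, v) points in
        wavelengths.  The sign convention is an assumption; flip it if
        your data disagree.
        """
        # (n_vis, n_components) phase matrix
        phases = np.exp(-2.j * np.pi * (np.outer(u, ls) + np.outer(v, ms)))
        return phases @ fluxes

    def chi_squared(vis_data, vis_model, sigma):
        """Chi-squared of the visibility residuals; expect ~2 per visibility."""
        resid = vis_data - vis_model
        return np.sum((resid.real ** 2 + resid.imag ** 2) / sigma ** 2)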
There are N radio telescopes and hence N (N - 1) / 2 ~ N^2 / 2 baselines, but only N - 1 independent antenna-based phase errors (the overall phase is unobservable); does this mean that you can infer the phase delays you usually get from your phase calibrator from the science data themselves? That is, can you always self-calibrate when N is large? (A least-squares version of this counting argument is sketched below.)
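The counting argument in code, as a minimal sketch: given residual baseline phases phi_ij ~ theta_i - theta_j, the system is over-determined by a factor of roughly N / 2, so the per-antenna phases fall out of a least-squares solve. This assumes small (unwrapped) phases and a connected array; the function name is made up.

    import numpy as np

    def solve_antenna_phases(phi_ij, ant_i, ant_j, n_ant):
        """
        Least-squares solve for per-antenna phase errors theta (radians)
        from measured baseline residual phases phi_ij ~ theta_i - theta_j.
        Antenna 0 is held fixed as the phase reference, because only
        phase differences are observable.
        """
        n_bl = len(phi_ij)
        A = np.zeros((n_bl, n_ant))
        A[np.arange(n_bl), ant_i] = 1.
        A[np.arange(n_bl), ant_j] = -1.
        A = A[:, 1:]  # fix theta_0 = 0 to break the overall-phase degeneracy
        theta, *_ = np.linalg.lstsq(A, phi_ij, rcond=None)
        return np.concatenate([[0.], theta])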
Related to the above, shouldn't we be marginalizing over the phase calibration information, since (a) it is noisy and (b) it requires interpolation between calibration measurements?
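As a toy illustration of what marginalization buys, deliberately over-simplified (one global phase offset stands in for the full per-antenna, time-variable calibration, which is not how real errors work; all names and the grid quadrature are assumptions):

    import numpy as np
    from scipy.special import logsumexp

    def ln_marginal_likelihood(vis_data, vis_model, sigma,
                               phi_cal, sigma_cal, n_grid=256):
        """
        Marginalize the visibility likelihood over a single unknown phase
        offset phi, with a Gaussian prior N(phi_cal, sigma_cal**2) encoding
        the (noisy, interpolated) calibrator measurement, by brute-force
        quadrature on a grid.  Likelihood normalization constants are
        dropped (they are the same at every phi).
        """
        phis = np.linspace(phi_cal - 5. * sigma_cal,
                           phi_cal + 5. * sigma_cal, n_grid)
        dphi = phis[1] - phis[0]
        ln_post = np.empty(n_grid)
        for k, phi in enumerate(phis):
            resid = vis_data - vis_model * np.exp(1.j * phi)
            ln_like = -0.5 * np.sum((resid.real ** 2 + resid.imag ** 2)
                                    / sigma ** 2)
            ln_prior = (-0.5 * ((phi - phi_cal) / sigma_cal) ** 2
                        - 0.5 * np.log(2. * np.pi * sigma_cal ** 2))
            ln_post[k] = ln_like + ln_prior
        return logsumexp(ln_post) + np.log(dphi)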
Bandwidth smearing needs to be a part of any real generative model. That is, rather than treating it as a distortion to be avoided, we might be able to account for it directly in a proper forward model.
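A minimal sketch of the forward-model version for a single point source, assuming the (u, v) coordinates scale linearly with frequency across the channel and approximating the channel average with a sum over sub-channels (names and the sub-channel count are made up):

    import numpy as np

    def smeared_visibility(flux, l, m, u_ref, v_ref, nu_ref,
                           nu_center, delta_nu, n_sub=32):
        """
        Toy bandwidth-smearing model: a channel of width delta_nu (Hz)
        centered at nu_center is the average of monochromatic point-source
        visibilities, with the baseline (u, v) scaling with frequency from
        its value (u_ref, v_ref) at the reference frequency nu_ref.
        """
        # sub-channel center frequencies spanning the channel
        nus = nu_center + delta_nu * ((np.arange(n_sub) + 0.5) / n_sub - 0.5)
        scale = nus / nu_ref
        phase = np.exp(-2.j * np.pi * (u_ref * l + v_ref * m) * scale)
        return flux * np.mean(phase)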
Can we go "fully probilistic"? Is it possible to return a posterior PDF over scenes that could have created the data?
Should we be looking for and using interesting and astronomy-informed prior information on the scene we infer?
Would a generative model help usefully with RFI identification and elimination?
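One hedged idea, using a standard outlier mixture model rather than anything specific to this project: treat each visibility as drawn either from the "good" noise model or from a much broader "bad" (RFI) distribution; the per-visibility posterior odds of badness then act as soft RFI flags. The hyperparameters below are made up and would themselves want to be inferred.

    import numpy as np

    def ln_likelihood_mixture(vis_data, vis_model, sigma,
                              p_bad=0.01, sigma_bad=100.):
        """
        Per-visibility mixture likelihood: each visibility is 'good'
        (complex Gaussian around the model, width sigma) with probability
        1 - p_bad, or RFI-corrupted ('bad': a much broader Gaussian of
        width sigma_bad) with probability p_bad.
        """
        resid2 = np.abs(vis_data - vis_model) ** 2
        ln_good = (np.log(1. - p_bad) - resid2 / (2. * sigma ** 2)
                   - np.log(2. * np.pi * sigma ** 2))
        ln_bad = (np.log(p_bad) - resid2 / (2. * sigma_bad ** 2)
                  - np.log(2. * np.pi * sigma_bad ** 2))
        return np.sum(np.logaddexp(ln_good, ln_bad))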