Abstract.
The slowdown in Moore’s Law requires that we find ways to use hundreds or thousands of compute nodes to process a single SKA data set. This is a well-trodden path, but the new elements are the very large data volume and the requirement to run in quasi-real time. The SDP Consortium has been investigating graph processing for this purpose. My part has been to develop a reference library containing all the major algorithms (ARL) and to test a Python-based graph processing system, Dask. I will report on what we have learned so far.
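To illustrate the graph-processing approach, the sketch below builds a small task graph with Dask's `delayed` interface: function calls are recorded lazily as graph nodes and only executed when `compute()` is invoked, letting a scheduler spread the work across many nodes. The `grid` and `invert` functions here are hypothetical stand-ins for real imaging steps, not ARL code.

```python
from dask import delayed

@delayed
def grid(vis):
    # Placeholder for gridding one chunk of visibility data.
    return vis * 2

@delayed
def invert(gridded):
    # Placeholder for combining gridded chunks into an image.
    return sum(gridded)

# Building the graph is cheap: nothing runs yet.
graph = invert([grid(v) for v in range(4)])

# Execution is deferred until compute(), when a scheduler
# (threads, processes, or a distributed cluster) runs the graph.
result = graph.compute()
print(result)
```

The same graph definition can run unchanged on a laptop or, with `dask.distributed`, on a cluster, which is what makes this model attractive for scaling SKA-sized pipelines.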