Abstract:
Projects in astronomy such as the Square Kilometre Array (SKA) or the
European Extremely Large Telescope are among the world's largest Big
Data projects and the largest international computing collaborations,
with unique computing challenges in Signal Processing and Machine
Learning (SPML) still to be solved. The challenge, in terms of
computing, data transport, and storage capacity, is to design a
processing chain that spans from the acquisition of raw sensor data
to the production and analysis of multidimensional images of the sky
through worldwide distributed computation. In this context, a
new generation of low-power high-performance computing systems has to
replace general-purpose High-Performance Computing (HPC) systems to
meet the challenge of climate change, including the reuse and
upgrading of already operational systems in a recycling approach.
State-of-the-art programming models and their development frameworks
lag behind in supporting efficient use of resources, high service
availability and quality, and cost competitiveness. This presentation
will discuss how dataflow models
combined with platform- and component-based designs can help tame
complexity during the design and operating phases of large astronomy
projects, by assessing both the time and energy performance of a
complex scientific workflow on a computing infrastructure that does
not yet exist.