Computers have been used for scientific research since they were invented. Until recently, two computational infrastructure paradigms have been largely dominant: workstations and PCs for individual researchers and small groups, and batch-queuing systems shared across departments for larger-scale computation. With the recent availability of large data sets, especially in disciplines that were not traditionally data-intensive, new tools are needed that facilitate the analysis of massive data sets and fast validation of research ideas. Flexibility of the infrastructure is a key enabler here: we should adapt IT infrastructure to the needs of data analysis, rather than adapt our methods to the existing computational infrastructure. Based on experience supporting research IT at the University of Zurich, I would like to present a few scientific computing cases and show how they prompted us to evolve our computing infrastructure.

Readings for this lecture
There are no additional required readings for this lecture.
Riccardo Murri currently works at the Services and Support for Science IT unit (S3IT) at the University of Zurich, helping research groups at UZH and across Swiss academia make the best use of computational infrastructures. He holds a PhD in Mathematics from SNS Pisa and is a co-author of ElastiCluster and other software.