LCPC 2015

The 28th International Workshop on
Languages and Compilers for Parallel Computing

September 9-11, 2015    ·    Raleigh, NC, USA

Colocated with CnC-2015

Keynote by Paul H J Kelly

Synthesis versus Analysis: What Do We Actually Gain from Domain-Specificity?

Paul H J Kelly

Imperial College London


Abstract:

Domain-specific performance optimisations (DSOs) can prove extremely profitable. My group at Imperial has worked on six or seven different DSO projects, mostly in computational science applications. This talk aims to reflect on our experiences. One aspect, of course, is whether we have a stand-alone domain-specific language (DSL), a DSL embedded in a general host language, or an “active library” whose implementation delivers DSOs, perhaps across sequences of calls. A key question, though, is just what enables us to deliver a DSO. Is it some special semantic property deriving from the domain? Is it because the DSL abstracts from implementation details – enabling the compiler to make choices that would otherwise be committed in lower-level code? Is it that the DSL captures large-scale dataflows that are obscured when coded in a conventional general-purpose language? Is it simply that we know that particular optimisations are good for a particular context? The talk will explore this question with reference to our DSO projects in finite-element methods, unstructured meshes, linear algebra and Fourier interpolation. This is joint work with many collaborators.


Bio:

Paul H J Kelly graduated in Computer Science from University College London in 1983, and moved to Westfield College, University of London, for his PhD. He came to Imperial College, London, in 1986, working on fault-tolerant wafer-scale multicore architectures, and parallel functional programming. He was appointed as Lecturer in 1989, and Professor of Software Technology in 2009. He leads Imperial's Software Performance Optimisation research group, and he is also co-Director of Imperial's Centre for Computational Methods in Science and Engineering. He has worked on single-address-space operating systems, scalable cache coherency protocols, bounds checking, pointer analysis, graph algorithms, performance profiling, and custom floating-point arithmetic. His major current projects include Firedrake, PyOP2 and SLAMBench.
