PhD Student in Distributed Systems Group
Current big data processing platforms such as Hadoop and Spark abstract away the architecture and low-level properties of the machines they run on by using the Java virtual machine as their execution environment. In large-scale applications this abstraction can become a performance liability, and it rules out many system- and network-level optimisations. The aim of this PhD project is to build a middleware system that can replace such platforms by producing highly optimised code even in heterogeneous distributed environments, and that offers more predictable performance by retaining a greater degree of control over system resources.
This job comes from a partnership with Science Magazine and