Trust & Technology Initiative


The Physical Computation Laboratory research group in the Department of Engineering at the University of Cambridge investigates new ways to exploit information about the physical world to build more efficient computing systems that interact with nature. Our research applies this idea to:

    New hardware architectures for processing noisy/uncertain data
    New methods for learning models from physical sensor data
    New approaches to differential privacy that exploit knowledge of physics
    New methods for synthesizing state estimators (e.g., Kalman filters) and sensor fusion algorithms from physical system descriptions

Among our ongoing activities, we are investigating new processor hardware architectures (with associated programming-language and systems-software support) to track uncertainty through every stage, from signal acquisition in sensors to final control decisions at actuators.

Existing computing systems largely treat environmental sensor measurements as though they were error-free. As a result, systems that consume sensor data to implement algorithms such as obstacle avoidance may perform billions of calculations per second on values that are far removed from the physical quantities they are supposed to represent. When these algorithms control safety-critical systems, unquantified measurement uncertainty can inadvertently cause subsystems such as object detection or collision avoidance to fail. Such failures can in turn lead to injury or fatalities, and both the prospect and the evidence of them reduce trust in autonomous systems. With the ever more pervasive use of sensors to drive computation and actuation, as in autonomous vehicles and robots that interact with humans, there is a growing need for computing hardware and systems software that can track information about uncertainty or noise throughout the signal processing chain.
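To make the idea concrete, here is a minimal illustrative sketch (not the group's actual hardware or software) of what tracking uncertainty through a computation can look like. It models each measurement as a Gaussian (mean and variance), propagates uncertainty through simple linear operations, and fuses two independent readings with an inverse-variance weighted average; the sensor values and variances are invented for illustration.

```python
from dataclasses import dataclass
import math


@dataclass
class Uncertain:
    """A measurement modelled as a Gaussian: a mean and a variance."""
    mean: float
    var: float

    def __add__(self, other: "Uncertain") -> "Uncertain":
        # Sum of independent Gaussians: means and variances both add.
        return Uncertain(self.mean + other.mean, self.var + other.var)

    def scale(self, k: float) -> "Uncertain":
        # Scaling by a constant k scales the variance by k**2.
        return Uncertain(k * self.mean, k * k * self.var)

    def stddev(self) -> float:
        return math.sqrt(self.var)


# Two independent range readings of the same quantity
# (hypothetical values, e.g. from two different sensors):
a = Uncertain(10.0, 0.04)
b = Uncertain(10.4, 0.09)

# Fused estimate: inverse-variance weighted average. The fused
# variance is smaller than either input's, quantifying the gain
# in confidence from combining the two measurements.
w_a = 1.0 / a.var
w_b = 1.0 / b.var
fused = Uncertain((w_a * a.mean + w_b * b.mean) / (w_a + w_b),
                  1.0 / (w_a + w_b))

print(f"fused: {fused.mean:.3f} ± {fused.stddev():.3f}")
```

A downstream decision (e.g., "is there an obstacle within 10 m?") can then be made against `fused.mean` together with `fused.stddev()`, rather than against a single point value whose error is unknown.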

The new techniques we are investigating could enable a fundamental shift in the acceptability and trustworthiness of future autonomous systems, and they have already led to a spinout, Signaloid. Our research could enable, for example, safer and more trustworthy control systems in autonomous vehicles, by allowing their underlying signal processing to track the uncertainty of sensor measurements (e.g., from LIDAR) and hence to ascribe degrees of uncertainty to computational results (e.g., whether or not there is a human in the vehicle's path). These fundamental new capabilities could have impact both in other research disciplines and in commercial applications.
