
Robotics and reasoning: Allen School professor Dieter Fox receives IJCAI 2023 John McCarthy Award for pioneering work in building intelligent systems

Dieter Fox, wearing glasses and a blue shirt, smiles in front of a blurred background of trees and a red roofed building.

Allen School professor Dieter Fox will be honored at the 32nd International Joint Conference on Artificial Intelligence (IJCAI) with the 2023 John McCarthy Award. The award is named for John McCarthy, widely regarded as one of the founders of the field of artificial intelligence (AI), and recognizes established researchers who have built up a distinguished track record of research excellence in AI. Fox will receive his award this week and give a presentation on his work at the conference, held in Macao, S.A.R.

“Receiving the John McCarthy Award is an incredible honor, and I’m very grateful for the truly outstanding students and collaborators I had the pleasure to work with throughout my career,” Fox said. “I also see this award as a recognition of the importance the AI community places on building intelligent systems that operate in the real world.” 

Fox has made a number of key contributions to the fields of AI and robotics, developing powerful machine learning techniques for perception and reasoning, as well as pioneering Bayesian state estimation and the use of depth cameras for robotics and activity recognition. 

His research focuses on systems that can interact with their environment in an intelligent manner. Most robots today lack the intelligence to perceive and understand environments that change over time; they move objects in a fixed, pre-programmed way. In a factory, where conditions are tightly controlled, that is a strength. Everywhere else, it’s a problem.

During his time as a Ph.D. student at the University of Bonn, Fox’s work on Markov localization tackled a fundamental problem in robotics and is now considered a watershed moment for the field. Before then, researchers had concentrated on the problem of tracking, in which a robot is given a map and its initial location. But these robots lacked true autonomy: they could not estimate their location from scratch or recover from mistakes out in the field, abilities a human pathfinder displays as a matter of course.

Fox and his collaborators developed grid-based and sampling-based Bayes filters to estimate a robot’s position and orientation in a metric model of the environment. Their work produced the first approach that allowed a robot to reorient itself and recover from failure in complex and changing conditions. Fox’s pioneering work in robotics touches virtually every successful robot navigation system, be it indoors, outdoors, in the air or on streets. 
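To give a rough sense of how a sampling-based Bayes filter estimates a robot’s pose, here is a minimal, illustrative particle filter sketch in Python. It is not Fox’s implementation; the landmark map, motion noise and range-sensor model are simplified assumptions chosen for clarity.

```python
# Minimal, illustrative particle filter (sampling-based Bayes filter) for 2D
# robot localization. The landmark map, motion model and sensor model are
# simplified assumptions, not the original Markov localization system.
import numpy as np

rng = np.random.default_rng(0)

LANDMARKS = np.array([[2.0, 3.0], [8.0, 1.0], [5.0, 9.0]])  # assumed known map features
N = 1000                                                     # number of particles

# Each particle is a pose hypothesis (x, y, heading); start with global uncertainty.
particles = np.column_stack([
    rng.uniform(0, 10, N),          # x
    rng.uniform(0, 10, N),          # y
    rng.uniform(-np.pi, np.pi, N),  # heading
])
weights = np.full(N, 1.0 / N)

def predict(particles, v, w, dt=1.0):
    """Propagate particles through a noisy unicycle motion model."""
    noisy_v = v + rng.normal(0, 0.1, len(particles))
    noisy_w = w + rng.normal(0, 0.05, len(particles))
    particles[:, 2] += noisy_w * dt
    particles[:, 0] += noisy_v * dt * np.cos(particles[:, 2])
    particles[:, 1] += noisy_v * dt * np.sin(particles[:, 2])
    return particles

def update(particles, weights, ranges, sigma=0.3):
    """Reweight particles by the likelihood of measured ranges to landmarks."""
    for lm, z in zip(LANDMARKS, ranges):
        expected = np.linalg.norm(particles[:, :2] - lm, axis=1)
        weights *= np.exp(-0.5 * ((z - expected) / sigma) ** 2)
    weights += 1e-300                      # guard against an all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    """Draw a new particle set in proportion to the weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx].copy(), np.full(len(particles), 1.0 / len(particles))

# One filter step: move, incorporate a (simulated) range scan, resample,
# then read off the weighted-mean pose estimate.
particles = predict(particles, v=1.0, w=0.1)
weights = update(particles, weights, ranges=[4.0, 6.5, 7.2])
particles, weights = resample(particles, weights)
print("estimated pose:", np.average(particles, axis=0, weights=weights))
```

Because the particles are spread over the whole map at the start, the same loop handles both global localization and recovery after the robot is “kidnapped,” which is the key advantage of this family of filters over pure tracking.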

Fox’s contributions go beyond core robotics. Using a variety of data sources, including GPS, Wi-Fi signal strength, accelerometers, RFID and geospatial map information, Fox developed and evaluated hierarchical Bayesian state estimation techniques to solve human activity recognition problems from wearable sensors. With his collaborators, he demonstrated that a person’s daily transportation routines could be gleaned from a history of GPS sensor data. The work was motivated by the goal of helping people with cognitive disabilities safely navigate their community without getting lost. Trained on GPS data, the wearable system assists users who get off track by helping them find public transportation to reach their intended destination. This influential work earned an Association for the Advancement of Artificial Intelligence 2004 (AAAI-04) Outstanding Paper Award, a 2012 Artificial Intelligence Journal (AIJ) Prominent Paper Award and a Ubicomp 2013 10-Year Impact Award.

In 2009, Fox began a two-year tenure as director of Intel Labs Seattle. There, he and his collaborators developed some of the very first algorithms for depth camera-based 3D mapping, object recognition and detection. Back at the University of Washington, Fox and his colleagues set an additional precedent with a separate study on fine-grained 3D modeling. Called DynamicFusion, the approach was the first to demonstrate how depth cameras could reconstruct moving scenes and objects, such as a person’s head or hands, with impressive resolution in real time. The work won a Best Paper Award from the Conference on Computer Vision and Pattern Recognition (CVPR) in 2015.

For Fox, the McCarthy Award represents another milestone in a journey that began in his youth. As a high school student in Germany, he stumbled upon the book “Gödel, Escher, Bach: An Eternal Golden Braid” by Douglas Hofstadter. The pages, he found, flew by. When he finally closed its cover, he was spellbound. 

“From the book, I was fascinated by the ideas behind logic, formal reasoning and AI,” Fox said. “I learned that by studying computer science, I’d be able to continue to have fun investigating these ideas.”

Fox currently splits his time between the UW and NVIDIA, which he joined in 2017. He directs the UW Robotics and State Estimation Laboratory and is senior director of robotics research at NVIDIA. His work at NVIDIA stands at the cutting edge of deep learning for robot manipulation and sim-to-real transfer, bringing us ever closer to the dream of smart robots that are useful in real-world settings such as factories, health care and our homes.

Among his many honors, he is the recipient of the 2020 RAS Pioneer Award presented by the IEEE Robotics & Automation Society, as well as multiple best paper awards at AI, robotics and computer vision conferences. Fox, who joined the UW faculty in 2000, was also named a Fellow of the Association for Computing Machinery (ACM) in 2020, of IEEE in 2014 and of AAAI in 2011.