
Luis Ceze named Fellow of the Association for Computing Machinery for advancing new paradigms in computer architecture and programming systems


Since he first arrived at the University of Washington in 2007, Allen School professor Luis Ceze has worn many hats: teacher, mentor, researcher, entrepreneur, venture investor. As of this week, he can add Fellow of the Association for Computing Machinery to that list after the organization bestowed upon him its most prestigious level of membership for “contributions to developing new architectures and programming systems for emerging applications and computing technologies.”

A computer architect by training, Ceze has been at the forefront of an expanding vision of the future of computation — and challenging the computer architecture community to rethink what a computer even is, thanks in part to some nifty research at the intersection of information technology and biology. His work has also extended to reimagining the hardware/software stack and embracing the emerging capabilities of machine learning.

“I’m motivated by the question of how we can build new programming models with and for future technologies and applications,” said Ceze, the inaugural holder of the Edward D. Lazowska Endowed Professorship at the Allen School. “There is so much untapped potential in drastically improving efficiency, enabling new types of applications, and making use of new hardware and device technology. From machine learning to automated hardware/software to molecular programming, we are in the midst of a new computing revolution.”

Ceze has played a significant role in enabling that revolution, having broken new ground with his work on DNA-based data storage and computing. As co-director of the Molecular Information Systems Lab (MISL), Ceze has teamed up with Allen School colleagues, Microsoft researchers and synthetic DNA supplier Twist Bioscience on an ambitious series of projects demonstrating synthetic DNA’s potential as a data storage medium. Together, they developed a process for converting digital 0s and 1s into the As, Ts, Cs and Gs of DNA (and then, crucially, back again) that combined advances in biotechnology with computational techniques such as error-correcting encoding schemes.

“Life has produced this fantastic molecule called DNA that efficiently stores all kinds of information about your genes and how a living system works — it’s very, very compact and very durable,” Ceze explained in a UW News release in 2016. “This is an example where we’re borrowing something from nature — DNA — to store information. But we’re using something we know from computers — how to correct memory errors — and applying that back to nature’s ‘device technology.’”
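The core idea described above can be sketched in a few lines: pack two bits into each DNA base, and attach a redundancy check so corruption can be detected on readout. This is a deliberately minimal illustration, not the MISL team's actual pipeline; their real system used far more sophisticated error-correcting codes, and all names here are hypothetical.

```python
# Illustrative sketch only: two bits per base, plus a single parity base
# standing in for real error-correcting codes.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Convert bytes to a DNA strand, appending a toy parity base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    strand = "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))
    parity = "A" if bits.count("1") % 2 == 0 else "T"
    return strand + parity

def decode(strand: str) -> bytes:
    """Recover the original bytes, verifying the trailing parity base."""
    payload, parity = strand[:-1], strand[-1]
    bits = "".join(BASE_TO_BITS[base] for base in payload)
    expected = "A" if bits.count("1") % 2 == 0 else "T"
    if parity != expected:
        raise ValueError("parity mismatch: strand corrupted")
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

assert decode(encode(b"hi")) == b"hi"
```

The round trip at the bottom is the whole point of the research program: digital data out must exactly match digital data in, even though the physical medium in between is biochemical.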

Since their initial paper appeared at the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), Ceze and his MISL collaborators have set a new record for the amount of data stored in DNA, demonstrated the ability to perform random access to selectively retrieve stored files and convert them back to digital format, and developed a method for performing content-based similarity search of digital image files stored in DNA — moving past an initial focus on DNA’s prospects as an archival storage medium to, as Ceze observed at the time, “pave the way for hybrid molecular-electronic computer systems.” The team also built a prototype of an automated, end-to-end system for encoding data in DNA. 

Ceze subsequently initiated a collaboration with Seattle-based artist Kate Thompson to produce a portrait of pioneering British scientist Rosalind Franklin — the first person to have captured an image of the DNA double-helix — using paint infused with synthetic DNA in which the lab had encoded photos of memories collected from people around the world. Since then, Ceze and his fellow MISL researchers have branched out to develop a new platform for digital microfluidics automation — also known as “lab on a chip” — as well as a portable molecular tagging system and the capability for living cells to interface with computers.

“Our initial work on DNA data storage helped motivate and inform U.S. government research investment in this space, and then it expanded to other directions,” Ceze said. “And it was brought about by a collaborative team involving computer system architects, molecular biologists, machine learning engineers, and others. What we have in common is a curiosity and an excitement about what computing can learn from biology, and vice versa. Not many computer science schools have their own wet lab!”

Ceze didn’t need a wet lab for his other innovation: TVM, short for Tensor Virtual Machine, a flexible, efficient, end-to-end optimization framework for deploying machine learning applications across a variety of hardware platforms. Developed by a team that combined expertise in computer architecture, systems and machine learning, TVM bridged the gap between deep learning systems optimized for productivity and the various hardware platforms on which they run, each of which comes with its own programming, performance and efficiency constraints. TVM would allow researchers and practitioners to rapidly deploy deep learning applications on a range of systems, from mobile phones to embedded devices to specialized chips, without having to sacrifice battery power or speed.
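The concept behind that kind of retargetable compilation can be illustrated with a toy example: one operator, several hardware-specific implementation strategies ("schedules"), and a compile step that picks one per target. This is a conceptual sketch only; TVM's real stack is vastly richer, and the registry and function names below are hypothetical.

```python
# Toy illustration of TVM-style retargetable compilation: the same matmul,
# two loop strategies, and a per-target choice between them.
from typing import Callable, Dict, List

Matrix = List[List[float]]

def matmul_naive(a: Matrix, b: Matrix) -> Matrix:
    """Straightforward triple loop: smallest, simplest code."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i][j] += a[i][p] * b[p][j]
    return out

def matmul_reordered(a: Matrix, b: Matrix) -> Matrix:
    """Same math, loops reordered for better cache locality on CPUs."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for p in range(k):
            aip = a[i][p]
            for j in range(m):
                out[i][j] += aip * b[p][j]
    return out

# Hypothetical registry mapping each hardware target to its preferred schedule.
SCHEDULES: Dict[str, Callable[[Matrix, Matrix], Matrix]] = {
    "embedded": matmul_naive,     # minimal code footprint
    "cpu": matmul_reordered,      # cache-friendly ordering
}

def compile_for(target: str) -> Callable[[Matrix, Matrix], Matrix]:
    """Pick the implementation registered for this target."""
    return SCHEDULES[target]

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
# Every schedule must compute the same result; only performance differs.
assert compile_for("cpu")(a, b) == compile_for("embedded")(a, b)
```

The design point this mirrors is the separation of *what* is computed from *how* it is executed on a given device, which is what lets one model definition reach phones, embedded devices and accelerators alike.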

“Efficient deep learning needs specialized hardware,” Ceze noted at the time. “Being able to quickly prototype systems using FPGAs and new experimental ASICs is of extreme value.”

Ceze and his collaborators later teamed up with Amazon Web Services to build upon the TVM stack with the NNVM — short for Network Virtual Machine — compiler for deploying deep learning frameworks across a variety of platforms and devices. A year after TVM’s initial release, the team introduced the Versatile Tensor Accelerator, or VTA, an open-source, customizable deep-learning accelerator for hardware-software co-design that lets researchers rapidly explore novel network architectures and data representations that would otherwise require specialized hardware support.

The team eventually handed off TVM to the non-profit Apache Software Foundation as an incubator project. Ceze subsequently co-founded a company, OctoML, that builds upon and uses the Apache TVM framework to help companies deploy machine learning applications on any hardware, reducing effort and operational costs. To date, the UW spinout — for which Ceze serves as CEO — has raised $132 million from investors and currently employs more than 130 people, with the majority in Seattle and the rest spread across the U.S. and abroad. 

Before delving into deep learning accelerators and DNA synthesizers, Ceze made his mark in approximate computing. Combining aspects of programming languages, compilers, processor and accelerator architectures, machine learning, storage technologies, and wireless communication, Ceze and his colleagues developed a principled approach for identifying permissible tradeoffs between the correctness and efficiency of certain applications, such as those for search and video, to achieve significant energy savings in exchange for minimal sacrifices in output quality. 

Their initial contributions revolved around EnerJ, dubbed “the language of good-enough computing”: a Java extension that enables developers to designate which program components may yield approximate rather than precise results to achieve performance savings, then check the quality of the output and recompute or reduce the approximation as warranted. The team also developed a pair of hardware innovations: an instruction set architecture (ISA) extension that provided for approximate operations and storage, and a dual-voltage microarchitecture, called Truffle, that enabled approximate and precise computation to be controlled at a fine grain by the compiler. Ceze and his colleagues subsequently proposed a new technique for accelerating approximate programs using low-power neural processing units, along with dual mechanisms for approximate data storage that improve performance and density while extending the usable life of solid-state storage technologies such as Flash.
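The check-and-recompute loop described above can be sketched in miniature: run a cheap approximate computation first, apply an inexpensive quality check, and fall back to the precise version only when the result misses the bound. This is an illustrative sketch under stated assumptions, not EnerJ itself, which is a Java type-system extension rather than a runtime library; the function names here are hypothetical.

```python
# Illustrative sketch of quality-checked approximation: try a cheap
# approximate inverse square root, verify it cheaply, recompute precisely
# only if the quality bound is missed.
import math

def inv_sqrt_precise(x: float) -> float:
    """Exact (to float precision) reference computation."""
    return 1.0 / math.sqrt(x)

def inv_sqrt_approx(x: float, iters: int = 1) -> float:
    """Newton's method from a rough guess: cheaper, less accurate."""
    y = 1.0 / x if x > 1 else 1.0
    for _ in range(iters):
        y = y * (1.5 - 0.5 * x * y * y)
    return y

def run_with_fallback(x: float, tol: float = 1e-3) -> float:
    y = inv_sqrt_approx(x)
    # Cheap quality check: if y ~ 1/sqrt(x), then x*y*y ~ 1. No precise
    # recomputation is needed just to *check* the result.
    if abs(x * y * y - 1.0) > tol:
        y = inv_sqrt_precise(x)  # quality bound missed: recompute precisely
    return y
```

The energy savings in the real systems come from the fact that most inputs take the cheap path; the precise fallback bounds the damage on the inputs that don't.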

In addition to his roles at the Allen School and OctoML, in his “free time” Ceze is also a venture partner at Madrona Venture Group and chairs its technical advisory board. Madrona funded OctoML as well as his first startup, Corensic, which was spun out of the UW in 2008. Before his elevation to ACM Fellow, Ceze shared the ACM SIGARCH Maurice Wilkes Award from the ACM Special Interest Group on Computer Architecture with MISL co-director and Allen School affiliate professor Karin Strauss, senior principal research manager at Microsoft. He is the co-author of multiple Best Papers and IEEE Micro Top Picks and holds a total of 29 patents based on his research. To date, he has guided 23 Ph.D. students as they earned their degrees on their way to launching careers in academia or industry.

“Computing is an extremely rich field of intellectual pursuit, and it is especially exciting now with the convergence of abundant computing resources, new AI techniques, and the ability to interact with natural systems from the molecular level all the way to the cognitive level,” said Ceze. “I’m honored by this recognition and am extremely grateful to all my Ph.D. advisees and collaborators for contributing so much to the work and to my career!”

Read the ACM announcement here.

Congratulations, Luis!