
New Virtual Reality Systems course turns students into makers

Students were provided with a kit to build their own head-mounted display, including an LCD, an HDMI driver board, an inertial measurement unit (IMU), lenses, an enclosure, and all cabling.

Over a video conference presentation, Eugene Jahn showed viewers an augmented reality program he created to help aspiring Michael Jordans shoot the perfect basket, displaying the best path and angle to become a better shooter. The Allen School sophomore is a student in the Virtual Reality Systems course (CSE 490V), taught by affiliate instructor Douglas Lanman, Director of Display Systems Research at Facebook Reality Labs.

The students — five graduate students, one sophomore, two juniors and 23 seniors, all studying computer science or computer engineering — spent the winter quarter learning about virtual reality systems, building software and hardware for a complete VR headset in 10 weeks. The hands-on course introduced students to the cross-disciplinary skills needed to create their own VR systems. Jahn and his classmates presented their final projects over Zoom as part of a virtual demo day, instead of in person as originally planned, after the University of Washington moved classes online due to COVID-19. 

“I gave the students the option of not doing the final project because they didn’t have as much time in the lab to build their system, but all of them chose to do it — they all have a ‘maker’ mentality,” Lanman said during his introduction to the online demo day. “Although the students planned to have more time in the maker space in the Reality Lab, they made the most of finishing up their projects and presenting them over a conference call.”

Lanman said he had long dreamed of teaching a course at UW. He saw his chance when he moved to Seattle to join Facebook Reality Labs, reaching out to Allen School faculty shortly after his arrival to discuss possible courses. He said the moment he had been waiting for came when Facebook co-funded the launch of the UW Reality Lab.

“The creation of this center positioned UW, and the greater Seattle region, at the center of the augmented and virtual reality revolution,” Lanman said. “When I read about the lab, I knew it was the right time to train a new generation of students to join the growing AR/VR industry being built in our backyard.”

Lanman emailed his idea to Brian Curless and Steve Seitz, who, along with Ira Kemelmacher-Shlizerman, comprise the Reality Lab’s leadership team. They agreed to the course and Lanman began preparing the materials.

Putting the course together, Lanman said, was a major undertaking. 

“The class goes through almost everything you need to know about the process of building VR headsets, the research that goes into it and actually building one over the course of the quarter,” said Andrew Wei, a graduate student who completed the course. “Then you get to go a little further and you get to explore a bit and do something in VR that you’re interested in, so it made it a great experience.”

As you might expect, the students needed to become very comfortable with computer graphics to set up the rendering engine, Lanman explained. In order to track the headsets, they also had to master a good bit of linear algebra, signal processing, and computer vision methods. Over six assignments the students programmed projects in JavaScript, WebGL, OpenCV, and Arduino frameworks. On top of all this programming, they also had to physically assemble their headsets. 
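As a rough illustration of the signal-processing side (this is not the course's actual code, and the rates and weights below are hypothetical), head orientation from an IMU is commonly estimated by fusing gyroscope and accelerometer readings with a complementary filter:

```python
import math

def complementary_filter(pitch, gyro_rate, accel, dt, alpha=0.98):
    """Fuse gyro and accelerometer readings into a pitch estimate.

    pitch:     previous pitch estimate in radians
    gyro_rate: angular velocity about the pitch axis (rad/s)
    accel:     (ax, ay, az) accelerometer reading in units of g
    dt:        timestep in seconds
    alpha:     trust in gyro integration vs. the accelerometer reference
    """
    ax, ay, az = accel
    # The accelerometer gives an absolute (but noisy) pitch reference
    # from the direction of gravity.
    accel_pitch = math.atan2(ax, math.sqrt(ay * ay + az * az))
    # Gyro integration is smooth but drifts; blend the two sources.
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

# With the headset at rest and gravity along -z, any initial error in
# the pitch estimate decays toward zero.
pitch = 0.5
for _ in range(1000):
    pitch = complementary_filter(pitch, gyro_rate=0.0,
                                 accel=(0.0, 0.0, -1.0), dt=0.01)
```

The blending weight trades gyro drift against accelerometer noise; a full headset tracker would estimate all three orientation axes, typically with quaternions.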

“Ear Hands” is a game that showcases the unique potential of VR to depict the position and direction of sounds correctly.

“For many, this was their first time building a piece of hardware, since most are CS students,” he said. “Learning to accept that you’ll break some, or many, parts is an important, albeit stressful, lesson. Thanks to funding from Facebook, it wasn’t an expensive lesson for the students! I’m happy to report that more than 30 students went through this course and are now the fearless hardware-software hackers I dreamed of inspiring.”

From sourcing hardware components for the headset kits to preparing comprehensive lecture notes, the course was custom-built from the ground up. It did, however, have a foundation to build on: Gordon Wetzstein, an electrical engineering professor at Stanford, had developed materials for a similar course over the previous five years. He and Lanman have published numerous academic papers together and share an enthusiasm for augmented reality and VR display technology.

“My first call after getting the green light to bring this course to UW was to Gordon,” Lanman said. Wetzstein had referred to it as an ‘experimental course’ when he first started offering it at Stanford, and he said he was excited to see it ‘graduate’ to other schools.

Building off of Wetzstein’s course, Lanman added his own spin, focusing both on computer graphics and computer vision. One unique addition was to implement a positional tracking system for the headset that used the students’ webcams. This eliminated the need for any custom hardware and meant that every student would be able to demo a fully working modern VR system to their friends and family, starting from a box of fairly simple parts. Lanman said they also spent quite a lot more time talking about the optics and display technologies behind modern AR/VR systems, “which happens to be what I work on in my day job.”
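To sketch the geometry behind webcam-based positional tracking (the course's actual tracker is not detailed here, and all numbers below are hypothetical), the pinhole-camera model relates a marker's known physical size to its apparent size in the image:

```python
def marker_distance(focal_px, marker_width_m, width_px):
    """Pinhole-camera depth estimate: a marker of known physical size
    appears smaller in the image the farther it is from the webcam.

    focal_px:       camera focal length in pixels (from calibration)
    marker_width_m: true width of the tracked marker in meters
    width_px:       measured width of the marker in the image, in pixels
    """
    return focal_px * marker_width_m / width_px

# A 10 cm marker imaged 100 px wide by a webcam with a 600 px focal
# length sits 0.6 m from the camera.
z = marker_distance(600.0, 0.10, 100.0)
```

A full tracker would recover all six degrees of freedom, for instance with OpenCV's pose-estimation routines, but the depth relation above is the core of why no custom hardware is needed.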

Kirit Narain was an undergraduate teaching assistant for the course. Last summer, out of an interest in VR and a desire to build his own headset, Narain decided to work through Wetzstein’s course on his own. He said the two courses have some similarities and many differences, one of them being Lanman himself.

“490V leans slightly more into Doug’s expertise in displays, graphics and optics, and is more focused on understanding the cutting edge in each of these fields, including solutions still in the research stage,” Narain said. “There is also a bigger focus on augmented reality displays, which in my opinion makes it a more rounded class, gives students amazing insight into exactly how each of these systems works, and prepares them to take on their challenges.”

Lanman also added field trips and guest lectures into the course. After all, Lanman explained, Seattle is the growing center of the AR/VR universe, so how hard could it be to give students an opportunity to see the latest, greatest technologies — and meet the people behind them — at our local companies? 

“As with the UW faculty, researchers and engineers at Microsoft, Valve, and Facebook Reality Labs were eager to be part of this course. This is the part of the course I’m most proud of: providing an opportunity to unite individuals passionate about AR/VR across the region and try to make something truly unique for the students,” Lanman said. “It was very exciting to see members of these companies, and others, offer guest lectures and sit in on some of the classes.”

Lanman said that building state-of-the-art AR/VR display systems is not a theoretical or pure software exercise: his students had to get their hands dirty and, yes, occasionally break things.

“I hope that this course put hardware projects on students’ radars. This course was designed for students receptive to this path: if you are scared of building a VR headset from scratch, you probably didn’t register. By the end of this course, I can see the students have really sharpened their maker skills,” Lanman said. “I told them at the beginning of the class that research is not as mysterious or difficult as it’s depicted in movies. I’m not sure they believed me. By the end, I am happy to see many students adopt the mentality of a graduate student: if you’re diligent in understanding what’s been done before, it’s not too difficult to figure out the path forward, even if it seems like walking into the unknown.”

For Narain, a first-time TA, the experience was “unbeatable.” He was particularly keen to play a role in helping students build a strong conceptual foundation and understanding of the numbers and algorithms, in addition to answering questions and helping them debug their code. According to Narain, the course material was not easy, and having that maker attitude was crucial to student success. There also was a lot more nitty-gritty mathematics involved than in most CSE classes, and working directly with hardware feels unfamiliar to some students.

“Having conquered these challenges over the last 10 weeks, the students showed a clear boost in ambition and confidence with their innovative final projects,” Narain said. “The novelty of AR/VR means there is no standard set of tools or methodologies for interaction yet. Watching the students experiment with different approaches and discover what works and what doesn’t is very interesting for me, too.”

Neil Sorens, a senior majoring in computer science, said he appreciated the way the class pulled together knowledge from many different disciplines: math, physics, and various areas within computer science such as graphics and computer vision.

“Having an incredibly passionate and fun instructor was really motivating because the class was highly challenging,” Sorens said. “It was a good mix of practical and theoretical work, and we had a lot of freedom in choosing our final projects.”

During the virtual demo day, Sorens and his fellow students in the VR Systems class presented their final projects to Lanman, their TAs and each other. Allen School faculty and industry experts were also invited to drop into the Zoom meeting and see the final projects. Projects spanned hardware, body and hand tracking, eye tracking, rendering, audio, training and education, and other applications. In pairs or on their own, students presented a total of 19 projects as a culmination of what they learned over the quarter.

In addition to Jahn’s basketball training project, other presentations included:

Wide-FOV VR headsets using Fresnel lenses (Andrew Wei and Daoyi Zhu): A prototype headset with a wide field of view that preserves the wearer’s peripheral vision.

360-degree vision using FOV compression (Neil Sorens): A headset with a 110-degree field of view, equipped with a 360-degree camera for specialized real-world applications that give users “eyes in the back of their head.”

Finger tracking using magnetometers (Alex Mastrangelo and Paul Yoo): A glove with magnetometers built into the fingertips, offering high accuracy and low power consumption for gaming.

Inverse kinematics and full-body tracking (Terrell Strong): A headset and controllers with full tracking points and animated arms and a torso to better understand a user’s sense of presence in VR experiences.

Exploring two-handed interactions (Andrew Rudasics): A task in which users manipulate an object with both hands, used to determine the most comfortable and accurate way to model two-handed interactions.

VR Wings (Rory Soiffer and Everett Cheng): A game that allows players to fly like a bird through a virtual world by flapping their arms, using custom wing-like controllers.

Eye tracking for VR gaming (Alex Zhang): A VR shooting game controlled entirely through eye tracking.

Real-time foveated ray tracing (Frank Qin and Anny Kong): Implemented foveated ray tracing to concentrate rendering detail where the user is looking, making images appear more realistic.

VR volume rendering (Xiao Liang, Jeffrey Tian and Nguyen Duc Duong): A 3D volume-rendering experience for viewing medical scans in a more realistic manner.

Spatial audio for VR gaming (Thomas Hsu and Christie Zhao): A game that makes the player focus on spatial audio to navigate through the VR world.

VR batting cage (Dylan Hayre): A game that allows users to practice hitting baseballs and adjust their swing to build their skills.

3D drawing in VR (Daniel Lyu and Lily Zhao): A VR app that supports letter animation to enhance the learning of vocabulary. 

VR galaxy tour (Natalia Abrosimova and Wenqing Lan): An experience that lets users tour the galaxy through a VR headset.

Crime scene investigation (Zhu Lu and Weihan Lan): A 3D crime scene that gives users a 360-degree view of the room to explore for clues.

Sketching in AR with 3D model support (Anthony Lu and Jacky Mooc): An application that allows users to import avatars and modify them by sketching directly on their AR device.

VR dueling (Robin Schmit): A multiplayer game that syncs real-time information across two or more headsets so that multiple participants can play together.

March 31, 2020

“Hey, check out this 450-pound dog!” Allen School researchers explore how users interact with bogus social media posts

Is that a superstorm over Sydney, or fake news?

We’ve all seen the images scrolling through our social media feeds — the improbably large pet that dwarfs the human sitting beside it; the monstrous stormcloud ominously bearing down on a city full of people; the elected official who says or does something outrageous (and outrageously out of character). We might stop mid-scroll and do a double-take, occasionally hit “like” or “share,” or dismiss the content as fake news. But how do we as consumers of information determine what is real and what is fake?

Freakishly large Fido may be fake news — sorry! — but this isn’t: A team of researchers led by professor Franziska Roesner, co-director of the Allen School’s Security and Privacy Research Laboratory, conducted a study examining how and why users investigate and act on fake content shared on their social media feeds. The project, which involved semi-structured interviews with more than two dozen users ranging in age from 18 to 74, aimed to better understand what tools would be most useful to people trying to determine which posts are trustworthy and which are bogus.

In a “think aloud” study in the lab, the researchers asked users to provide a running commentary on their reactions to various posts as they scrolled through their social feeds. Their observations provided the team with insights into the thought process that goes into a user’s decision to dismiss, share, or otherwise engage with fake content they encounter online. Unbeknownst to the participants, the researchers deployed a browser extension they had built that randomly layered previously debunked misinformation posts over legitimate posts shared by participants’ Facebook friends and accounts they follow on Twitter.

The artificial posts that populated users’ feeds ranged from the sublime (the aforementioned giant dog), to the ridiculous (“A photograph shows Bernie Sanders being arrested for throwing eggs at civil rights protesters”), to the downright hilarious (“A church sign reads ‘Adultery is a sin. You can’t have your Kate and Edith too’”). As the participants scrolled through the mixture of legitimate and fake posts, Allen School Ph.D. student Christine Geeng and her colleagues would ask them why they chose to engage with or ignore various content. At the end of the experiment, the researchers pointed out the fake posts and informed participants that their friends and contacts had not really shared them. Geeng and her colleagues also noted that participants could not actually like or share the fake content on their real feeds.

“Our goal was not to trick participants or to make them feel exposed,” explained Geeng, lead author of the paper describing the study. “We wanted to normalize the difficulty of determining what’s fake and what’s not.”

Participants employed a variety of strategies in dealing with the misinformation posts as they scrolled through. Many posts were simply ignored at first sight, whether because they were political in nature, required too much time and effort to investigate, or because the viewer was simply uninterested in the topic. If a post caught their attention, some users investigated further by looking at the name on the account that appeared to have posted it, or read through comments from others before making up their own minds. Others clicked through to the full article to check whether the claim was bogus, as in the case of the Bernie Sanders photo, which was intentionally miscaptioned in the fake post. Participants also self-reported that, outside of a laboratory setting, they might consult a fact-checking website, see if trusted news sources were reporting on the same topic, or seek out the opinions of family members or others in their social circle.

The researchers found that users were more likely to employ such ad hoc strategies over purpose-built tools provided by the platforms themselves. For example, none of the study participants used Facebook’s “i” button to investigate fake content; in fact, most said they were unaware of the button’s existence. Whether a matter of functionality or design (or both), the team’s findings suggest there is room for improvement when it comes to offering truly useful tools for people who are trying to separate fact from fiction.

“There are a lot of people who are trying to be good consumers of information and they’re struggling,” said Roesner. “If we can understand what these people are doing, we might be able to design tools that can help them.”

In addition to Roesner and Geeng, Savanna Yee, a fifth-year master’s student in the Allen School, contributed to the project. The team will present its findings at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI 2020) next month.

Learn more in the UW News release here, and read the research paper here.

March 19, 2020

Computer Science student and Sophomore Medalist Louis Patsawee Maliyam dances to his own beat

Mark D. Stone/University of Washington

Allen School undergraduate Louis Patsawee Maliyam balances a long-standing love of computing with a passion for the arts. It’s a combination that has served him well at the University of Washington, where he believes his major in computer science and minor in dance has widened his world and prepared him for the future. It has already propelled him to the top of his class, earning him the Sophomore Medalist award from the UW President’s Office for the 2018-2019 academic year in recognition of his academic achievement and rigorous coursework.

Maliyam’s interest in computer science began when he was about 10 years old, helping his parents at the internet cafe they owned in Thailand. He would assemble computer parts, install drivers and programs, and troubleshoot whenever there were issues, using Google as his guide.

“I started learning to piece information together like playing with a jigsaw puzzle and solving the problems as best I could,” Maliyam said. “Sometimes it failed but that didn’t stop me from trying. I think the trait of yearning to solve problems paved my way to pursue a computer science degree.” 

From his experience in the family business, Maliyam qualified to attend an Olympiad training camp for computing, which helped him secure a Royal Thai Scholarship. Awarded to the top students in Thailand, the scholarship sends 50 to 70 students to the United States for their college education. He chose to study at the Allen School.

“I know some UW alumni and I’ve heard about the great faculty and resources in the Allen School,” Maliyam said. “And I was so thrilled to come here to continue my passions and connect with so many people. Plus, I was fascinated by the beauty of the campus, the diversity in Washington state, and the CSE program in general.”

UW Dance’s production of Pink Matter Volume 2: What Is Love? Choreographed by Dani Tirrell and “Majinn” Mike O’Neal, Warren Woo/University of Washington

With his background and interests, it was natural for Maliyam to pick computer science as his major. But he wanted even more versatility and creativity in his studies, so he added a dance minor.

“It connects me to people and teaches me to be vulnerable and strong,” he said. “The dance community provides me a safe space where I can be exposed and supported in the learning process. Dance is not just dance anymore, since it widens how I see the world and helps me redefine myself.”

Computer science, on the other hand, challenges the way he thinks and evaluates daily situations.

“I would say that the CS program has challenged me to grow and become an impactful teacher and engineer, while the dance program has helped me to become a considerate human being and has taught me how to love myself and others,” he explained.

Although he enjoyed tinkering with computers as a child, Maliyam said he developed a true passion for computer science after enrolling in a C/C++ programming course in high school. That passion has solidified during his time in the Allen School.

“I feel inspired by faculty members in the school and also by the engineers I work with,” he said. “I know that if I keep improving on myself, I will be like them one day.”

Recognizing that early exposure to programming isn’t an opportunity everyone enjoys in high school, Maliyam sees being a teaching assistant as an opportunity to give back by sharing his love of computing with others.

“Being a TA is my favorite part of being in the Allen School. It goes beyond teaching for me because it connects me to people, enabling me to become a part of the community and to build up that community,” he explained. “As a TA, I have the opportunity to create a safe learning space for students, help them on their journeys and support them throughout the course.”

As a TA and a student, Maliyam said he wants to work to cultivate a culture of caring in the school and in the broader technology industry.

“I feel like we have been greatly trained in our technical skills; however, we still need more training on our interpersonal and ‘soft’ skills,” he said. “As an international student, I value everyone’s voice. I believe people want to be seen and heard and not feel isolated. Caring for people gives them a sense of belonging.”

Maliyam experienced that first-hand during his software development internship last summer. He found the engineers he worked with to be caring and kind, and said they gave him a sense of belonging. Maliyam aims to spread that same culture wherever he goes, including in his work as a mentor in the university’s International Student Mentorship Program.

2020 Dance Majors Concert: Do we have home? Choreographed by Sojung “Esther” Lim, Warren Woo/University of Washington

“ISMP is a strong community that I have found uplifting since I began studying at UW, and it’s helped prepare me to be a better leader overall,” he said.

When he is not at his computer or mentoring fellow students, Maliyam enjoys performing under the lights in productions staged by the UW dance department. The rehearsals remind him how lucky he is to be surrounded by wonderfully talented people. 

This summer, Maliyam has an internship with DocuSign, and will apply to the combined B.S./M.S. program. He continues to perform in various dance concerts and is preparing for upcoming auditions.

Congratulations on your medal, Louis — and thank you for cultivating a culture of caring in our school and everywhere you go!  

March 9, 2020

Remembering Paul Young (1936 – 2019)

Former chairs (left to right) Jerre Noe, Paul Young, Jean-Loup Baer, Ed Lazowska

The Allen School community was sad to learn recently that former chair and professor emeritus Paul Young passed away in December. Young was a gifted computer scientist who spent five years as chair of what was then known as the University of Washington Department of Computer Science. During his tenure, Young advanced UW’s reputation as a national leader in computer science education and research, advocated for more resources to bring the best and brightest faculty to Seattle, and initiated conversations around the creation of a permanent, purpose-built home for the program. 

Young, who earned his Ph.D. from MIT following undergraduate studies at Antioch College, joined the UW faculty in 1983 from Purdue University along with his colleague Larry Snyder. The dual recruitment was a major coup for UW, with Young assuming leadership of the CS department at a time of rapidly increasing demand for the major, and Snyder taking the reins of the UW/Northwest VLSI Consortium focused on advancing our leadership in very large-scale integrated circuit design.

Young was a talented educator and researcher with interests that spanned theoretical computer science, including computational complexity, algorithmic theory, formal language theory, and connections with mathematical logic. His leadership and professional activities on and off campus helped to raise the profile of UW Computer Science. After his five years as chair came to a close, Young remained on the UW faculty for another decade, serving for three of those years as Associate Dean of Research, Facilities & External Affairs in the College of Engineering. 

Paul Young (right) with Punkin the “pocket rocket”

In 1994, Young took a leave of absence from the university to serve as Assistant Director of the National Science Foundation for its Directorate for Computing and Information Science and Engineering (NSF CISE). He also was active in the Computing Research Association (CRA) and served on the organization’s board from 1983 to 1991 — the last three years as board chair. Under his leadership, the computing research community ramped up its involvement in science and technology policy. CRA recognized his contributions with its Distinguished Service Award in 1996.

Following his retirement from the UW in 1998, Young joined his wife, Deborah Joseph, in Wisconsin, where she was a member of the computer science faculty at the University of Wisconsin, Madison. They settled into Lime Creek Farm in the southwest corner of the state, where they restored more than 40 acres of prairie habitat and renovated a 100-year-old farm house. Lime Creek also served as a breeding and training ground for the couple’s performance Labrador Retrievers. These included Punkin — the runt of the farm’s first litter of puppies — who earned the nickname “Paul’s Pocket Rocket” due to her combination of intense speed and drive coupled with her diminutive size. Under Young’s tutelage, Punkin earned titles in retrieving, pointing, tracking, obedience, and agility, and she held the distinction of being Wisconsin’s first Grand Master Pointing Retriever.

We will remember Paul for his many contributions to our program and to our field, and we send our condolences to his family, friends, and colleagues.

March 5, 2020

Allen School and AI2 researchers earn Outstanding Paper Award at AAAI for advancing new techniques for testing natural language understanding

The team onstage at AAAI 2020 (from left): conference program co-chair Vincent Conitzer, Ronan Le Bras, Yejin Choi, Chandra Bhagavatula, Keisuke Sakaguchi, and conference program co-chair Fei Sha

Allen School professor Yejin Choi and her colleagues Keisuke Sakaguchi, Ronan Le Bras and Chandra Bhagavatula at the Allen Institute for Artificial Intelligence (AI2) recently took home the Outstanding Paper Award from the 34th Conference of the Association for the Advancement of Artificial Intelligence (AAAI–20). The winning paper, “WinoGrande: An Adversarial Winograd Schema Challenge at Scale,” introduces new techniques for systematic bias reduction in machine learning datasets to more accurately assess the capabilities of state-of-the-art neural models.

Datasets like the Winograd Schema Challenge (WSC) are used to measure neural models’ ability to exercise common-sense reasoning. They do this by testing whether a model can correctly discern the meaning of pronouns used in sentences describing social or physical relationships between entities or objects, based on contextual clues. These clues tend to be easy for humans to comprehend but pose a challenge for machines. The models are fed pairs of nearly identical sentences that differ primarily by a “trigger” word, which flips the meaning of the sentence by changing the noun to which the pronoun refers. A high score on the test suggests that a model has achieved a level of natural language understanding that goes beyond mere recognition of statistical patterns to a more human-like grasp of semantics.

But the WSC, which consists of 273 problems hand-written by experts, is susceptible to built-in biases that paint an inaccurate picture of a model’s performance. Because individuals have a natural tendency to repeat their problem-crafting strategies, they also tend to introduce annotation artifacts — unintentional patterns in the data — that reveal information about the target label and can skew the results of the test.

For example, if a pair of sentences asks the model to determine whether a pronoun is referring to a lion or a zebra based on the use of the trigger words “predator” or “meaty,” the model will note that the word “predator” is often associated with the word “lion.” In a similar fashion, a reference to a tree falling on a roof will lead the model to correctly associate the trigger word “repair” with “roof,” because while there are very few instances of trees being repaired, it is quite common to repair a roof. By choosing the correct answers to these questions, the model is not indicating an ability to reason about each pair of sentences. Rather, the model is making its selections based on a pattern of word associations it has detected across the dataset that just happens to correspond with the right answers.
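A toy sketch of that shortcut (the co-occurrence counts and helper function below are entirely made up for illustration) shows how dataset-level word statistics alone can "answer" such a pair without any reasoning:

```python
from collections import Counter

# Hypothetical word co-occurrence counts a model might absorb from its
# training data (numbers invented for this illustration).
cooccurrence = Counter({
    ("predator", "lion"): 950, ("predator", "zebra"): 40,
    ("meaty", "lion"): 30,     ("meaty", "zebra"): 20,
})

def guess_referent(trigger, candidates):
    """Pick the candidate most associated with the trigger word.
    No actual reasoning about the sentence takes place."""
    return max(candidates, key=lambda c: cooccurrence[(trigger, c)])

# The association shortcut "solves" the lion/zebra question correctly
# without reading either sentence.
answer = guess_referent("predator", ["lion", "zebra"])
```

Note that the same shortcut picks "lion" for the trigger "meaty" as well, even though the prey animal is the intended referent: the pattern happens to work for one half of the pair and fails for the other, which is exactly the behavior that inflates benchmark scores without reflecting understanding.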

“Today’s neural models are adept at exploiting patterns of language and other unintentional biases that can creep into these datasets. This enables them to give the correct answer to a problem, but for incorrect reasons,” explained Choi, who splits her time between the Allen School’s Natural Language Processing group and AI2. “This compromises the usefulness of the test, because the results are not an accurate reflection of the model’s ability. To more accurately assess the state of natural language understanding, we came up with a new solution for systematically reducing these biases.”

That solution enabled the team to produce WinoGrande, a dataset comprising 44,000 sentence pairs that follow a similar format to that of the original WSC. One of the shortcomings of the WSC was its relatively small size owing to the need to hand-write the questions. Choi and her colleagues got around that difficulty by crowdsourcing question material using Amazon Mechanical Turk, following a carefully designed procedure to ensure that problem sets avoided ambiguity or word association and covered a variety of topics. To eliminate any unintentional biases embedded in the dataset at scale, the researchers developed a new algorithm, dubbed AFLite, that employs state-of-the-art contextual representation of words to identify and eliminate annotation artifacts. AFLite is modeled on an existing adversarial filtering (AF) algorithm but is more lightweight, thus requiring fewer computational resources.
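A simplified sketch of the adversarial-filtering idea (not AI2's actual AFLite implementation, which operates on learned contextual word representations with stronger linear probes) might look like the following, with toy 2-D feature tuples standing in for real embeddings:

```python
import random

def easy_score(instance, train, trials=50):
    """Estimate how reliably a trivial nearest-centroid probe can label
    `instance` from surface features alone.  Instances are (features,
    label) pairs with 2-D feature tuples and labels 0/1 (hypothetical)."""
    correct = done = 0
    for _ in range(trials):
        sample = random.sample(train, k=max(2, len(train) // 2))
        centroids = {}
        for label in (0, 1):
            pts = [f for f, lab in sample if lab == label]
            if pts:
                centroids[label] = tuple(sum(c) / len(pts)
                                         for c in zip(*pts))
        if len(centroids) < 2:
            continue  # split is missing a class; skip this trial
        feats, label = instance
        pred = min(centroids, key=lambda lab: sum(
            (a - b) ** 2 for a, b in zip(feats, centroids[lab])))
        correct += pred == label
        done += 1
    return correct / done if done else 0.0

def adversarial_filter(data, threshold=0.75):
    """Discard instances the weak probe solves too reliably -- those
    are the ones whose labels leak through annotation artifacts."""
    return [inst for inst in data
            if easy_score(inst, [d for d in data if d is not inst])
            < threshold]

# Four artifact-ridden instances whose labels a probe can read straight
# off the features: all of them get filtered out.
random.seed(0)
toy = [((0.0, 0.0), 0), ((0.2, 0.0), 0),
       ((10.0, 10.0), 1), ((10.0, 10.2), 1)]
debiased = adversarial_filter(toy)
```

The real algorithm repeats this train-probe-discard loop at a much larger scale, which is what keeps it lightweight relative to full adversarial filtering.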

The team hoped that their new, improved benchmark would provide a clearer picture of just how far machine understanding has progressed. As it turns out, the answer is “not as far as we thought.”

“Human performance on problem sets like the WSC and WinoGrande surpasses 90% correctness,” noted Choi. “State-of-the-art models were found to be approaching human-like levels of accuracy on the WSC. But when we tested those same models using WinoGrande, their performance dropped to between 59% and 79%.

“Our results suggest that common sense is not yet common when it comes to machine understanding,” she continued. “Our hope is that this sparks a conversation about how we approach this core research problem and how we design the benchmarks for assessing future progress in AI.”

Read the research paper here, and a related article in MIT Technology Review here.

Congratulations to Yejin and the team at AI2!

March 3, 2020

Building a satellite has given Allen School undergraduate Nathan Wacker out-of-this-world experiences

In the latest Allen School undergrad spotlight, Nathan Wacker can proudly say he’s helped build something that is truly out of this world. The third-year Allen School student from Seattle worked on HuskySat-1, a 3U CubeSat that was launched into space on November 2, 2019 and deployed from Northrop Grumman’s Cygnus cargo spacecraft on January 31.

Allen School: What interested you in working on HuskySat-1, and what was your job in the Husky Satellite Lab?

Nathan Wacker: I was interested in joining because I wanted to work on something that was going to space. It still feels surreal to say that I have done it. Working at the lab seemed more tangible than most of the programming I had done up to that point. 

Since I joined the team in the fall of 2017, I have worked on the flight software for the power distribution system, plasma thruster and various other systems on the spacecraft. I also worked quite extensively on our ground station command and control software. Since we are now in space, I have been maintaining the ground station and leading satellite operations. 

Allen School: What was it like watching the cargo craft launch from NASA’s Wallops Flight Facility last November, knowing HuskySat-1 was on it? 

NW: Folks from the lab that weren’t able to attend the launch in person, myself included, gathered in a basement classroom at 6:30 a.m. Saturday, November 2 to watch the NASA stream live. There was a lot of excitement in the room as the rocket disappeared into the clouds, but it was hard not to think about the failure scenarios and how we wouldn’t know until months later whether our satellite had survived. 

Allen School: What kind of information have you learned since the satellite’s deployment? 

NW: Deployment from Cygnus occurred on January 31 and we made first contact several hours later. Since then, we have determined that the power system is healthy, the primary communications system works, and that we are sitting at a cozy temperature — for space. We have also commissioned the camera and taken some low-resolution images of Earth.

Allen School: What are some unique lessons you learned while on the satellite team that you might not have experienced anywhere else? 

NW: A few lessons I have learned are: Don’t prematurely optimize, don’t over-engineer, document early and often, test everything. Also, the ability and drive to learn is more valuable than knowledge. These lessons all came from working on a long-term and large-scale project, which is difficult to teach in a classroom.

Allen School: Several academic areas would allow you to work on a project like HuskySat-1; why did you choose to major in computer science? 

NW: Computer programming has always been appealing because smaller-scale projects can be put together quickly and iterated on at no cost other than the computer. Growing up, that was a much quicker path to gratification than woodworking or electronics projects, for instance, but still scratched an itch of mine to build something. 

Allen School: What do you like most about being in the Allen School?

NW: The education is excellent. As a student, it is extremely rewarding to take classes from people who have significantly advanced their field and still care about student success. Upper-division CSE classes are the most interesting classes I have taken at UW. 

Allen School: What are some of your favorite activities or experiences here at the UW?

NW: The Husky Satellite Lab has by far been my most fulfilling experience here at UW. Not only has it been a great engineering experience that has informed my career interests, but I have also had the pleasure of getting to know the talented individuals I work with in a professional and social context. 

Read more about HuskySat-1 and follow the live orbital tracker to stay up-to-date on the satellite’s mission. We are proud to have Nathan as a member of the Allen School community! 

February 25, 2020

#MemoriesInDNA portrait project blends DNA technology and art to memorialize pioneering scientist Rosalind Franklin

Portrait of Rosalind Franklin against background of tiny images
Portrait of Rosalind Franklin by Seattle artist Kate Thompson. Dennis Wise/University of Washington

British scientist Rosalind Franklin, who spent the early 1950s researching the structure of DNA at King’s College London, should have won the Nobel Prize. She very well may have, except that her untimely death from ovarian cancer at the age of 37 meant that the Nobel Committee, which does not award posthumously, did not even consider her. For it was Franklin, not the famous scientific duo Watson and Crick, who captured the first image proving the shape of deoxyribonucleic acid — better known as DNA, the building block of all life.

At the time, Franklin was applying her expertise in x-ray crystallography to determine the structure of DNA in collaboration with King’s College Ph.D. student Raymond Gosling. The researchers captured an image of moistened DNA fibers using x-ray diffraction techniques and equipment refined by Franklin herself. The so-called Photo 51, which revealed a helical shape consisting of two strands, and other unpublished data from Franklin’s lab would find their way into the hands of fellow scientists James Watson and Francis Crick at Franklin’s alma mater, the University of Cambridge. The material confirmed the three-dimensional structure of DNA as a double helix — a structure the pair would race to describe in a paper published in April 1953 in the journal Nature.

Although Franklin would publish her own paper co-authored with Gosling on the double helix in the same issue, it was Watson and Crick, along with Franklin’s King’s College colleague Maurice Wilkins, who went on to share the 1962 Nobel Prize in Physiology or Medicine and thus ensure their places in the pantheon of scientific achievement. The contributions of Franklin, who had passed away four years before the Nobel announcement, would only begin to be more widely appreciated decades later. In any event, the Nobel Prize can only be shared by up to three individuals, so we will never know whether Franklin, had she lived long enough, would have received her due. Given the limitation on sharing the prize and the attitudes toward women in science at that time — and her own colleagues’ attitudes toward Franklin, in particular — it seems unlikely.

More than 60 years after Watson and Crick laid eyes on Photo 51, Allen School professor Luis Ceze met Seattle-based multimedia artist Kate Thompson in a bar not far from the University of Washington campus. Ceze co-directs the Molecular Information Systems Laboratory, a partnership between UW and Microsoft that is exploring synthetic DNA’s potential as a long-term storage solution and computational platform for digital data. He had already crossed paths with Thompson on campus, where she was doing an artist’s residency in the Nemhauser Lab focused on evolutionary biology. Ceze was intrigued by what she told him about her work, which is focused on making science visual.

Action shot from above of artist Kate Thompson painting Rosalind Franklin's portrait over backdrop of images
Kate Thompson at work in her studio. Mary Bruno

For more than a year, he and his colleagues had been collecting images submitted by people around the world as part of the #MemoriesInDNA project under the tagline “What do you want to remember forever?” After meeting Thompson, Ceze became interested in exploring a way to ensure that the world would remember Franklin and her contributions forever — and he had a particularly fitting medium in mind.

“Rosalind Franklin was largely responsible for uncovering the structure of DNA, nature’s own perfected storage medium. Her work opened up a whole new avenue of scientific research and discovery for which, to this day, she does not really get the credit that she deserves,” Ceze said. “We had this massive collection of image files signifying what people want to preserve for posterity, and this new storage method. So we thought, why not use them to demonstrate the science we’ve been working on while paying tribute to the scientist who started it all?”

The idea he and Thompson hashed out over glasses of wine was elegantly simple. The lab would work with Thompson to create a piece of art to commemorate Franklin that incorporates the very medium that she revealed to Watson, Crick, and the world: DNA. To be precise, the medium would be synthetic DNA containing thousands of copies of images people had voluntarily contributed to advance a new wave of molecular systems research.

“I was fascinated by the idea of honoring this brilliant but mostly forgotten woman using the same material that should have made her famous in her own lifetime,” said Thompson, who took on the role of artist-in-residence at MISL after meeting with Ceze. “Until recently, her legacy to science and the world went largely unnoticed. I hope this project helps to ensure it will not be ignored.”

Closeup of a portion of Rosalind Franklin's face painted over tiny images
Dennis Wise/University of Washington

The result of Thompson’s collaboration with the MISL is now on display in the Bill & Melinda Gates Center for Computer Science & Engineering on the UW Seattle campus. The work, which measures 40 inches high by 30 inches wide and was created using acrylic ink on archival paper, took nearly eight months from conception to completion. It is one of three original copies Thompson produced at the behest of the lab.

Viewed at a distance, the work is clearly a portrait of Franklin. Thompson reproduced her likeness from an old black and white photograph, combining soft, dark brush strokes with a mosaic of nearly 2,000 meticulously arranged images — the majority of which measure just ⅞ inch square — from the #MemoriesInDNA collection. The artist arranged the latter with the help of a macro she wrote for the purpose, which sifted through the images and sorted them by tonal values.
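The article doesn't describe Thompson's macro beyond sorting by tonal value, but the general photomosaic idea — score each tile by its average luminance, then match each cell of the portrait to the tile closest to the brightness that cell calls for — can be sketched like this. The function names and the Rec. 601 luma weights are my choices for illustration, not details of her actual macro.

```python
import numpy as np

def tonal_value(img):
    """Mean perceived brightness of an RGB image array with values in
    [0, 1], using the common Rec. 601 luma weights."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))

def arrange_by_tone(images, targets):
    """Greedy photomosaic placement: sort tiles by tonal value, then
    give each mosaic cell the tile closest to its desired brightness."""
    order = sorted(range(len(images)), key=lambda i: tonal_value(images[i]))
    tones = [tonal_value(images[i]) for i in order]
    placement = []
    for t in targets:  # targets: desired brightness per mosaic cell
        j = int(np.argmin([abs(t - v) for v in tones]))
        placement.append(order[j])
    return placement
```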

The images that make up Franklin’s face are contained within a larger collection that serves as a colorful backdrop for the subject of the painting. Up close, the images themselves come into sharper relief; if they linger long enough, viewers can make out the contents of the individual photos in detail. Each image depicts a person, place or object someone wants remembered in perpetuity.

But Thompson didn’t just paint on the images; she painted with them, infusing Franklin’s likeness with the actual data files ensconced in their microscopic storage medium. Using roughly half of the trove of more than 10,000 photos submitted as part of #MemoriesInDNA, MISL researchers first converted the digital image files — around 325 megabytes of data — into the As, Ts, Cs and Gs of DNA and encoded them into synthetic DNA ordered from Twist Bioscience. Thompson then took the vial of DNA furnished by the lab, which consisted of a mere 1.5 milliliters of liquid, and mixed the contents with black acrylic ink along with a binding substance that would help it adhere to the paper. She combined yet more of the image-laden DNA with a clear acrylic hardener, which she used to coat the finished piece.

“I painted several practice portraits using plain ink first,” the artist recalled with a laugh. “I didn’t want to mess up when it came time to use the real thing. Ink is relatively inexpensive but specially encoded DNA from a lab, not so much!” 

The process the lab and artist followed to turn digital photos into a DNA-infused painting. Kate Thompson

Before Thompson could pick up her brush, lab members David Ward, Bichlien Nguyen, Xiaomeng Liu, and Jeff Nivala conducted experiments to ensure that, in the latter’s words, “the science was as rigorous as the art.” First and foremost, they needed to establish that no chemical reaction would occur between the synthetic DNA and acrylic medium when Thompson mixed the two in her studio. The team also wanted to be confident that, once mixed and applied to paper, the DNA could be subsequently retrieved from the material. The outcome is both a work of art and an artifact of science.

“Not that I’m suggesting you do this — in fact, please don’t! — but if you were to scrape a little bit of the portrait off, with the right equipment you could retrieve the data and convert it back from DNA molecules to digital 0s and 1s,” explained MISL co-director Karin Strauss, principal research manager at Microsoft Research and affiliate professor at the Allen School. “This portrait is not only preserving Franklin’s memory but preserving the data as well, in a form that will be accessible to future generations.”

To increase the likelihood that the data contained in the work will, indeed, be accessible for generations to come, the team built in a high degree of redundancy. Nguyen estimates it took 30 minutes to amplify copies of each image file using polymerase chain reaction, or PCR.

“Because DNA as a storage medium is so dense, we were able to provide Kate with around a trillion copies of each image to mix into the paint,” explained Nguyen, a senior researcher at Microsoft Research who oversaw the DNA storage process. “That way, we can be certain that we will be able to retrieve all of the data — even if a portion of one set of images is somehow lost or damaged, we still have many back-ups.”
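The trillion-copy figure is consistent with how PCR scales: each thermal cycle roughly doubles the amount of target DNA, so the number of cycles needed grows only logarithmically with the copy count. A back-of-the-envelope sketch (generic PCR arithmetic, not the lab's actual protocol):

```python
import math

def cycles_for_copies(copies: float) -> int:
    """PCR roughly doubles the target each thermal cycle, so n cycles
    yield about 2**n copies of each template strand."""
    return math.ceil(math.log2(copies))

cycles_for_copies(1e12)  # about 40 cycles to reach ~a trillion copies
```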

This is not the first time that the MISL team has applied its science to the arts. The lab previously partnered with Twist Bioscience to preserve significant cultural and historical artifacts in DNA as a way of demonstrating its potential for archival data storage and retrieval, including iconic musical performances at the Montreux Jazz Festival, the top 100 books of Project Gutenberg, the Universal Declaration of Human Rights in 100 languages, and the non-profit Crop Trust’s entire seed database. Along the way, the team set a new record, reported in a peer-reviewed journal, for the amount of digital data stored in and successfully retrieved from DNA, presented new techniques for random access, developed a new platform for microfluidics automation for DNA data storage at scale, and demonstrated the world’s first end-to-end automated system for encoding digital data in DNA.

Members of the team with the finished portrait, left to right: Luis Ceze, David Ward, Bichlien Nguyen, Kate Thompson, and Karin Strauss (not pictured: Xiaomeng Liu, Jeff Nivala). Dennis Wise/University of Washington

Members of the public may view the portrait of Franklin on the ground floor of the Bill & Melinda Gates Center, at the base of the Anita Borg Grand Stairway. The lab is also exploring the potential to exhibit the artwork in additional locations in the future to reach an even wider audience. 

“Our team was excited to partner with Kate on this project, which highlights an often forgotten figure who helped usher in the age of molecular storage,” Ceze said. “I hope the artwork itself, the science upon which it’s based, and the story of Rosalind Franklin will inspire people.”

Read more about Thompson’s work here, and learn more about the MISL’s work on DNA data storage here. Read the original #MemoriesInDNA announcement here. The lab members and the artist would like to express their appreciation to the thousands of people, spread out over 80 countries, who shared their personal photos as part of the #MemoriesInDNA project. The lab will continue to use the images as part of its research to advance DNA-based computation and image search.

February 24, 2020

Hannaneh Hajishirzi and Yin Tat Lee named 2020 Sloan Research Fellows

Hannaneh Hajishirzi, a professor in the Natural Language Processing group and director of the Allen School’s H2Lab, and Yin Tat Lee, a professor in the Theory of Computation group, have been named 2020 Sloan Research Fellows by the Alfred P. Sloan Foundation. The program recognizes early-career scientists in the United States and Canada who are nominated and judged by their peers based on their creativity, leadership, and achievements in research.

“I am thrilled that the Sloan Foundation has honored Hanna and Yin Tat for their outstanding work on fundamental problems that have broad relevance and potential for impact,” said professor Magdalena Balazinska, director of the Allen School. “Hanna is working at the leading edge of artificial intelligence to transform the way we conceive of and build AI systems that touch people’s everyday lives, from education and media, to financial services and scientific documents. And Yin Tat is doing groundbreaking — even audacious — work that pushes past decades-old limits of computing to create faster, better solutions to a range of modern-day problems.”

Hanna Hajishirzi

Hajishirzi, who joined the Allen School faculty in 2018 and is also an AI research fellow at the Allen Institute for Artificial Intelligence (AI2), addresses foundational problems in natural language processing, artificial intelligence, and machine learning. Her goal is to develop general-purpose algorithms that can represent, comprehend, and reason about diverse forms of data efficiently and on a large scale. Hajishirzi’s research spans multiple domains, including representation learning, question answering, knowledge graphs, and applications such as conversational dialogue and knowledge extraction from unstructured text.

“Enormous amounts of information are available online in multiple forms across diverse resources; for example, in news articles, web pages, textbooks and technical documents,” explained Hajishirzi. “An important challenge in AI is how to represent and integrate diverse resources to facilitate further comprehension and reasoning. It is the right time to address this challenge at large scale and in real-world settings, using a unified representation that combines the best features of deep neural models and symbolic formalisms.”

Hajishirzi is among the pioneers in designing novel, end-to-end neural models for question answering and reading comprehension. One of her key contributions is Bi-Directional Attention Flow for Machine Comprehension, or BiDAF, which is a deep neural model for end-to-end question-answering about text and diagrams that has been widely adopted in academia and industry. Hajishirzi and her collaborators designed the system to be both scalable and modular, thus enabling its use with multiple modalities and knowledge bases. Hajishirzi is also among the first to address the problem of understanding scientific articles and data across multiple modalities, such as diagrams, math and geometry word problems. For example, she led the development of DyGIE, a system for enabling knowledge extraction from computer science and biomedical scientific papers. She also led the GeoS project, the first automated system for solving geometry word problems that can answer SAT geometry test questions on a par with the average American 11th grade student. More recently, Hajishirzi devised a new interpretable neural model for solving math problems, MathQA, that maps word problems to operation programs, and DenSPI and DecompRC, systems for real-time and multi-hop question answering that achieve state-of-the-art results by decomposing compositional questions into simpler sub-questions.

Hajishirzi has garnered numerous accolades for her research. She received an Allen Distinguished Investigator Award in AI for her work on the Spoon Feed Learning (SPEL) framework that combined principles of child education and machine learning to enable computers to interpret diagrams. Hajishirzi later earned a Google Faculty Research Award for her efforts to develop practical, scalable methods for open-domain question answering. She has also received an Amazon Research Award, a Bloomberg Data Science Award, and a Best Paper Award from the Special Interest Group on Discourse and Dialogue (SIGDIAL). Hajishirzi regularly publishes at the top conferences in the field, including the annual meeting of the Association for Computational Linguistics (ACL), the Conference on Empirical Methods in Natural Language Processing (EMNLP), and the Conference on Computer Vision and Pattern Recognition (CVPR).

Yin Tat Lee

Lee, who joined the Allen School faculty in 2017 and is also a visiting researcher at Microsoft Research AI, combines ideas from continuous and discrete mathematics to produce state-of-the-art algorithms for solving optimization problems that underpin the theory and practice of computing. His work encompasses multiple domains, including convex optimization, convex geometry, spectral graph theory, and online algorithms.

“From machine learning and experiment design, to route planning and medical imaging, convex optimization is used everywhere,” Lee said. “My group develops new techniques and algorithms to optimize faster, with the goal to design a universal optimization algorithm without compromising performance.”

Lee has already expanded convex optimization techniques to break long-standing running time barriers for a variety of problems. For example, he and his colleagues presented a new general interior point method that yielded the first significant improvement in linear programming in more than 20 years and a new algorithm for approximately solving maximum flow problems in near-linear time. Lee also has demonstrated the applicability of optimization techniques to an even broader class of problems than previously was considered feasible, devising a faster cutting plane method that improved the running time for solving classic problems in both continuous and combinatorial optimization. More recently, Lee contributed to a pair of new algorithms that achieve optimal convergence rates for optimizing non-smooth convex functions in distributed networks. That same year, Lee contributed to a total of six papers that appeared at the Symposium on Theory of Computing (STOC 2018) — a record high for an individual researcher at the conference.

Lee’s work has earned him multiple Best Paper and Best Student Paper awards at premier conferences in the field, including the IEEE Symposium on Foundations of Computer Science (FOCS), the ACM-SIAM Symposium on Discrete Algorithms (SODA), and the Conference on Neural Information Processing Systems (NeurIPS 2018). Last year, he earned a Microsoft Research Faculty Fellowship in recognition of his efforts to advance the field of theoretical computer science for real-world applications. In 2018, the Mathematical Optimization Society awarded Lee the A.W. Tucker Prize, which recognizes the best doctoral thesis in optimization in the prior three years, for his work on faster algorithms for convex and combinatorial optimization. That same year, he received a CAREER Award from the National Science Foundation to build upon that work and overcome multiple obstacles to optimization.

“To receive a Sloan Research Fellowship is to be told by your fellow scientists that you stand out among your peers,” Adam F. Falk, president of the Alfred P. Sloan Foundation, said in a press release. “A Sloan Research Fellow is someone whose drive, creativity, and insight makes them a researcher to watch.”

Hajishirzi and Lee are among four University of Washington researchers to watch in this latest group of Fellows, which includes Kyle Armour, a professor in the School of Oceanography and Department of Atmospheric Sciences, and Jacqueline Padilla-Gamiño, a professor in the School of Aquatic and Fishery Sciences.

A total of 37 current or former faculty members at the Allen School have been recognized through the Sloan Research Fellowship program. Recent honorees include Shayan Oveis Gharan, who was recognized last year for his work on solutions to fundamental NP-hard counting and optimization problems; Maya Cakmak, for her contributions to robotics; Ali Farhadi and Jon Froehlich, for their research in artificial intelligence and human-computer interaction, respectively; and Emina Torlak, for her work in computer-aided verification and synthesis.

View the complete list of 2020 Sloan Research Fellows here and read the Sloan Foundation press release here. Read a related UW News release here.

February 12, 2020

UW Reality Lab opens incubator to foster student innovation in augmented and virtual reality 

In the UW Reality Lab incubator

The UW Reality Lab has launched a new incubator where students can develop innovative projects in augmented and virtual reality (AR/VR) with guidance and resources from lab faculty and staff. The Reality Lab, which launched two years ago, focuses on pursuing leading-edge research and educating the next generation of innovators in this growing field. The incubator gives students a space to work on AR/VR projects while fostering a community of collaboration and organic mentorship. It also supports the greater UW community in AR/VR research and allows the novel research taking place in the incubator to be shared with the whole world. 

“We select projects and teams for the incubator with the goal of having every project ultimately be released to the community in the form of research results or an application,” said John Akers, director of research and education in the lab. “Projects can either be in support of other groups in the greater UW community, such as other labs and departments, or student-motivated projects based on ideas the teams are committed to developing fully.” 

According to Ira Kemelmacher, professor of computer science and director of the UW Reality Lab, one of the biggest challenges in AR/VR adoption is the creation of content and experiences. 

“We are opening the incubator to allow undergraduate researchers to team up, come up with fresh ideas, and invent the future of AR/VR. We started by teaching a series of AR/VR capstones where teams of students came up with application ideas and implementations, in an amazing variety of fields from visualization of homelessness to cooking in AR,” she said. “In capstones they only have 10 weeks to bring their ideas to life. Due to its high popularity in the undergrad community and successful results, we decided to open an incubator that will allow more time for development. We believe undergrads have immense potential in creating breakthroughs in AR/VR technology and our incubator encourages them to do exactly that, while getting advising, state-of-the-art hardware, and full support from us and our collaborators.”

At the moment, the incubator is hosting two projects. One is led by Max Needle, a Ph.D. student in the Department of Earth and Space Sciences. Needle’s research is in how rocks bend. He flew a drone around a strip mine in Pennsylvania that features a large folded rock layer, as well as several fossils and lots of faults. He was then able to generate a high-resolution 3D model of the mine from the drone photos, with the goal of developing immersive geological adventure and educational experiences. 

Team discussion in the UW Reality Lab incubator

“The strip mine is a field-trip destination for many university geology classes, however, like many exquisite exposures of geologic structures, there are geographical and physical limiting factors,” Needle said. “To overcome obstacles related to access, my group at the Reality Lab Incubator is developing a virtual field trip through the strip mine. The geology-specific tools that we develop for VR with the Reality Lab, as well as the format and gameplay, can be put in a pipeline to enhance VR experiences of other geologic sites that have been mapped digitally in 3D.”

His work will open new doors for teaching geology and expand who has access to field geology.

“The incubator is great, so far,” said Andrew Wang, a second-year Allen School student working with Needle. “All of the necessary tools are available. Having these tools so accessible will help us debug and further develop our knowledge in VR/AR technology.”

The other active incubator project is led by an Allen School senior and explores how different forms of locomotion mechanics in virtual reality can create emergent gameplay. He is working to see if some of these modes could reduce the simulator sickness some people feel in VR.

According to Akers, the incubator hopes to take on more projects in the future as the process for how projects are accepted and teams are composed is solidified. Learn more about the incubator and how to get involved on their website.

February 10, 2020

AuraRing puts the power of electromagnetic tracking system on your finger

With continuous tracking, AuraRing can pick up handwriting — potentially for short responses to text messages

Sometimes a ring symbolizes a promise, sometimes it shows a person’s birth month or mood, and sometimes it’s a statement about their taste in jewelry. But thanks to researchers in the Allen School’s Ubicomp Lab, a ring can now do a lot more.

The latest in smart technology, AuraRing pairs a ring with a wristband to form a magnetic tracking system designed to report precise finger movement with high-fidelity input tracking. 

“We’re thinking about the next generation of computing platforms,” said Allen School alumnus and co-lead author Eric Whitmire (Ph.D. ‘19), now a research scientist at Facebook Reality Labs. “We wanted a tool that captures the fine-grain manipulation we do with our fingers — not just a gesture or where your finger is pointed, but something that can track your finger completely.”

The ability to track a finger enables freeform and subtle input for wearable platforms like smartwatches and augmented and virtual reality headsets. AuraRing enables applications like object manipulation, drawing, sliding, swipe-based text input and hand pose reconstruction because of its absolute, continuous tracking with millimeter-level accuracy. Due to a high bandwidth and data rate, AuraRing is also capable of detecting taps of various intensities, which enables new kinds of always-available ambient interfaces. 

The ring is a single transmitter coil tightly wrapped around a 3D-printed loop

The system, which is worn on the index finger, consists of a single transmitter coil tightly wrapped around a 3D-printed ring and a wristband with three embedded sensor coils that measure the resulting magnetic field. Using these measurements, the wristband tracks the absolute position and orientation of the ring in real-time, making free-form drawing, handwriting short text messages, controlling games and moving virtual objects with mixed reality headsets possible.  
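The article doesn't spell out the solver, but conceptually the transmitter coil behaves like a magnetic dipole whose field falls off with the cube of distance, and the three wrist-worn sensor coils provide enough measurements to invert for the ring's pose. A toy illustration of that inversion, using the standard dipole far-field formula and a brute-force grid search (a real system would use an iterative least-squares solver, and units and physical constants are omitted here):

```python
import numpy as np

def dipole_field(m, r):
    """Magnetic dipole field at offset r from a dipole with moment m,
    up to constant factors: B ∝ (3 r̂ (m·r̂) − m) / |r|³."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return (3.0 * rhat * np.dot(m, rhat) - m) / rn**3

def estimate_position(m, sensors, measured, grid):
    """Toy pose solver: pick the candidate ring position whose
    predicted fields best match the sensor-coil readings."""
    def err(p):
        return sum(np.linalg.norm(dipole_field(m, s - p) - b) ** 2
                   for s, b in zip(sensors, measured))
    return min(grid, key=err)
```

The cubic falloff is also why short-range tracking works so well here: at 10 to 15 centimeters the ring's weak field is still easily measurable, while distant sources of interference contribute comparatively little.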

The wristband is embedded with sensor coils that measure the ring’s magnetic field

AuraRing is also a low-power, battery-operated device that generates an oscillating magnetic field around the hand. By focusing on short-range tracking over distances of 10 to 15 centimeters, the system consumes less power and is less susceptible to environmental interference.

“To have continuous tracking in other smart rings you’d have to stream all the data using wireless communication. That part consumes a lot of power, which is why a lot of smart rings only detect gestures and send those specific commands,” said co-lead author Farshid Salemi Parizi, a Ph.D. student in electrical and computer engineering. “But AuraRing’s ring uses only 2.3 milliwatts of power to produce an oscillating magnetic field that the wristband can constantly sense. In this way, there’s no need for any communication from the ring to the wristband.” 

With these minimal, low-power electronics, AuraRing can operate for about a day on self-contained batteries and therefore has the potential to do a lot more.  

“Because AuraRing continuously monitors hand movements and not just gestures, it provides a rich set of inputs that multiple industries could take advantage of,” said professor Shwetak Patel, who holds a joint appointment in the Allen School and the Department of Electrical & Computer Engineering. “For example, AuraRing could detect the onset of Parkinson’s disease by tracking subtle hand tremors or help with stroke rehabilitation by providing feedback on hand movement exercises.” 

AuraRing was developed with support from the UW Reality Lab. The team, which has open-sourced the hardware designs and algorithms for their work, published their findings in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. To learn more, watch the group’s video and check out the UW News release and coverage by KING 5 News and VentureBeat.   

February 5, 2020
