The U.S. Bureau of Labor Statistics has just released its employment projections for the decade 2016-2026. It’s a highly detailed forecast: more than 1,000 specific job categories are included.
Computing occupations once again dominate STEM, accounting for 66% of all job growth, and 60% of all job openings (whether due to growth or to replacement).
BLS projects a growth of 546,000 computing jobs over the decade, and 3,475,000 job openings.
“Luke Zettlemoyer, a professor at the University of Washington … turned down a lucrative offer from Google, instead taking a post at the nonprofit Allen Institute for Artificial Intelligence so he could continue teaching.”
Luke and Ali Farhadi are heavily engaged in AI2, which is led by long-time Allen School professor Oren Etzioni. It offers the best of both worlds.
Professor Jennifer Mankoff, a member of the Allen School’s human-computer interaction research group, has been honored with a GVU Impact Award from the GVU Center at her alma mater, Georgia Tech. To mark its 25th anniversary, the center recognized Mankoff and 13 other current or former members who have had a significant impact on the world and contributed substantially to GVU’s reputation, influence, and community in pursuit of its mission to improve the human condition through technology.
Mankoff earned an Impact Award for her contributions to accessibility, health, and sustainability through research that combines empirical methods and deep technical innovation. She is widely known for her people-centric approach to technology, such as her novel use of 3D printing to create personalized assistive technologies for people with disabilities — work which, like all accessibility research, will ultimately benefit everyone. Mankoff also has explored the use of natural materials in computing, including embedding textiles in 3D printing and creating knitted objects programmatically, and developed tools and techniques to assist people in managing chronic illness.
“Each of the individuals featured…embodies the interdisciplinary mindset and commitment to real-world impact that is a hallmark of GVU’s identity,” Keith Edwards, director of the GVU Center, said in an announcement. “Through their leadership, service, and research excellence, they have changed the way we use computing technology, advanced the frontiers of knowledge, and strengthened the GVU community at Georgia Tech.”
Mankoff earned her Ph.D. in 2001 working with Gregory Abowd and fellow honoree Scott Hudson. She joined the Allen School faculty as the Richard E. Ladner Professor this fall after spending 13 years on the faculty of Carnegie Mellon University, where she was a professor in the HCI Institute. Before her arrival at CMU, Mankoff was a faculty member at the University of California, Berkeley.
The GVU Center formally recognized Mankoff and her fellow Impact Award winners at its 25th anniversary celebration earlier today.
This map, representing an individual’s morning commute, shows the locations where the research team was able to track the person’s movements through location-based ads.
Online ads may not only be trying to sell you something; they may be selling you out. That’s according to a team of researchers in the Allen School’s Security and Privacy Research Lab, who recently discovered how easy it is for someone with less than honorable intentions to turn online ads into a surveillance tool. They found that, for as little as $1,000, a person or organization could conceivably purchase ads that will enable them to track someone’s location and app use via their mobile phone — gaining access to potentially sensitive personal information about that individual’s dating preferences, health, religious and political affiliation, and more. The team hopes that by sharing its findings publicly, it will raise awareness among online advertisers, mobile service providers, and customers about a potential new cybersecurity threat.
This threat stems from how the existing online advertising ecosystem enables ad purchasers to precisely target consumers based on their geographic location, interests, and browsing history for marketing purposes. The problem, as researchers explained in a UW News release, is that the same infrastructure can be exploited by people and organizations other than advertisers to precisely target individuals in ways that could compromise their privacy and security. According to former Allen School Ph.D. student Paul Vines, lead author on the project, it would be easy for anyone from a foreign agent to a jealous spouse to sign up with an online advertising service and track another individual.
“If you want to make the point that advertising networks should be more concerned with privacy, the boogeyman you usually pull out is that big corporations know so much about you. But people don’t really care about that,” Vines explained in a Wired article about the project. “[T]he potential person using this information isn’t some large corporation motivated by profits and constrained by potential lawsuits. It can be a person with relatively small amounts of money and very different motives.”
As the team discovered, online advertising can deliver fairly detailed information about a person’s behavior. For example, the researchers were able to determine an individual user’s location within a distance of 8 meters based on where their ads were being served. By establishing a grid of hyperlocal ads, the team was able to discern an individual’s daily routine based on where ads were served to the user’s device at various points along the way.
The team refers to this method of information gathering as ADINT, or “advertising intelligence,” reminiscent of well-known intelligence collection tactics such as SIGINT (signals intelligence) and HUMINT (human intelligence). To test the capabilities of ADINT, Vines and his coauthors — Allen School professors Franziska Roesner and Tadayoshi Kohno — purchased a series of ads through a demand-side provider, or DSP, which is an entity that facilitates the purchase and delivery of targeted advertising. They set up their ads to target a mix of 10 actual users and 10 facsimile users with the help of each device’s unique mobile advertising identifier (MAID), which functions as a sort of “whole device” tracking cookie. The team then repurposed the tools designed to deliver relevant ads for commercial purposes to instead collect information on each user’s whereabouts and behavior.
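To make the mechanism concrete, here is a minimal sketch (in Python) of how an ad purchaser might reconstruct a target’s movements from ad-serve reports tied to a device’s MAID. It is purely illustrative: the record format, field names, and values are assumptions for this example, not the team’s actual tooling or any real DSP’s reporting interface.

```python
# Hypothetical ad-impression reports, as a DSP dashboard might expose them:
# each record notes the targeted MAID, a timestamp, and the hyperlocal grid
# cell (latitude, longitude of the geofence center) where the ad was served.
# Field names and values are illustrative only.
impressions = [
    {"maid": "ab12-target",  "time": "07:42", "cell": (47.6097, -122.3331)},
    {"maid": "ab12-target",  "time": "07:58", "cell": (47.6205, -122.3493)},
    {"maid": "ab12-target",  "time": "08:15", "cell": (47.6536, -122.3079)},
    {"maid": "other-device", "time": "08:16", "cell": (47.6062, -122.3321)},
]

def reconstruct_route(impressions, target_maid):
    """Order one device's ad-serve reports by time to approximate its route."""
    sightings = [r for r in impressions if r["maid"] == target_maid]
    return sorted(sightings, key=lambda r: r["time"])

for r in reconstruct_route(impressions, "ab12-target"):
    print(f'{r["time"]}: ad served to target near grid cell {r["cell"]}')
```

Because each hyperlocal ad is only served within a small geofence, the sequence of serve locations by itself traces the target’s daily routine.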
The ADINT research team, from left: Tadayoshi Kohno, Franziska Roesner, and Paul Vines. Photo credit: Dennis Wise/University of Washington
Movement was not the only thing they could track; it turns out that ad purchasers have the ability to learn a lot about a person by viewing what apps they use, including popular dating and fitness-tracking apps. The team’s experiments also revealed that the individual being tracked does not need to actually click on an ad in order for ADINT to work, because purchasers can see where the ad is being served regardless of whether the target interacts with it.
“To be very honest, I was shocked at how effective this was,” said Kohno, who co-directs the Allen School’s Security and Privacy Research Lab with Roesner. “There’s a fundamental tension that as advertisers become more capable of targeting and tracking people to deliver better ads, there’s also the opportunity for adversaries to begin exploiting that additional precision.”
The team surmises that ADINT attacks could be driven by a variety of motives, from criminal intent, to political ideology, to financial profit. According to Roesner, the ease with which the team was able to deploy targeted ads against individuals calls for heightened awareness and vigilance — not just within the computer security community, but on the part of the policy and regulatory communities, as well.
“We are sharing our discoveries so that advertising networks can try to detect and mitigate these types of attacks,” she explained, “and so that there can be a broad public discussion about how we as a society might try to prevent them.”
Stefan Savage, who earned his Ph.D. in Computer Science & Engineering from the University of Washington in 2002, has been named a 2017 MacArthur Foundation Fellow. The foundation selected Savage, a faculty member at the University of California, San Diego, for his groundbreaking research focused on “identifying and addressing the technological, economic, and social vulnerabilities underlying internet security challenges and cybercrime.”
The MacArthur Fellows Program — commonly referred to as MacArthur “Genius” Awards — celebrates exceptionally creative individuals whose track record of accomplishments indicates the potential to make significant advances through their future work. Savage’s track record reflects his unique approach to cybersecurity research that combines computer science and social science.
“One of the things we try to do in our work is to move beyond simply looking at computer security in terms of the technical details of, say, a vulnerability or an attack,” Savage said in his MacArthur Foundation video, “and instead focus on what is the ecosystem that makes the attackers want to do this.”
Savage applied this filter to the problem of email spam. Rather than focusing on the technical aspects of keeping unwanted email messages from piling up in people’s inboxes, he and his colleagues opted to focus on the economic motivation of the senders. The idea, Savage explained, was to block the profitability of sending the messages in the first place. By “following the money,” they identified a relatively small number of banks that were monetizing the bulk of the spam. Savage and his team worked with financial institutions and regulators to set up a system for tracking the flow of money to the spammers’ accounts. The banks were then able to shut those accounts down. He followed a similar process for targeting online drug trafficking, tracing the credit card transactions from buyers to the banks that were processing the charges.
After hitting cyber criminals where it hurts — their wallets — Savage and his team hit the road. He had become interested in the extent to which automobiles had become computerized, and therefore vulnerable to attack, with the emergence of OnStar and other systems that communicate with the internet. He teamed up with Allen School professor Tadayoshi Kohno, who knew Savage from his student days at UCSD. Together, they purchased some cars and enlisted the help of a group of student researchers — including current Allen School professor Franziska Roesner and research scientist Karl Koscher — who set about trying to hack into the vehicles’ systems. What they found was that they could effectively take over a car completely and interfere with the driver’s control of critical systems, such as the vehicle’s brakes.
“We knew nothing about automobiles, but we knew a fair amount about computer security,” Savage said. “All of the automotive companies, to varying extent, now take the cybersecurity threat seriously.”
The results of this work still reverberate today, having inspired follow-on experiments, congressional inquiries, and heightened focus on the part of manufacturers and the public to cybersecurity as more systems are connected to the internet. Savage recently has begun directing his attention to other modes of transportation, such as aircraft.
“Stefan has a track record of important, high-impact results. This track record started before I knew him, and continues to this day,” said Kohno. “There are not that many people known for being a world leader in any single area of science, but Stefan is known for being a world leader in multiple areas, including the study of botnets and worms, the criminal underground economy, and cyber-physical systems and modern automobiles.”
It took a while for Savage to find his calling as a cybercrime fighter. After earning a bachelor’s degree in applied history from Carnegie Mellon University, where he worked in then-CMU professor Brian Bershad’s computer lab, he followed Bershad to the University of Washington. Savage started off his Ph.D. focused on operating systems research, but as the internet began to take off, he switched to networking under the guidance of Bershad and fellow Allen School professor Tom Anderson. As a young faculty member at UCSD, Savage initially focused on business network security before delving into the darker side of the internet.
“What I think is most remarkable about Stefan is his creativity,” said Anderson. “You can look at almost anything he’s done and say, wow, that’s university research at its best.”
“Stefan is the most creative person working not just in network security, privacy, and reliability, but in the field of computer science as a whole,” said Allen School professor Ed Lazowska. “He has an uncanny ability to ask exactly the right question, devise exactly the right methodology to explore that question, propose exactly the right solution, and see that solution through to impact.”
To help them achieve even greater impact, each of the MacArthur Fellows receives an unrestricted grant of $625,000 to disburse as they see fit. The grant, which comes with no strings attached, is the foundation’s way of investing in its fellows’ “originality, insight, and potential.”
“It is very exciting to get to work on something that is both intellectually stimulating, and at the same time, creates an opportunity to make the world a better place for people — and for them to be safer online,” Savage said. “The fellowship will allow me to reach the people that I need to reach with my research results, and my message, that I couldn’t reach before.”
Past MacArthur Fellowship recipients with an Allen School connection include Ph.D. alumnus Christopher Ré in 2015; current Allen School and Electrical Engineering professor Shwetak Patel in 2011; and former professor Yoky Matsuoka in 2007.
The University of Washington Board of Regents today approved the naming of the Allen School’s second building as the Bill & Melinda Gates Center for Computer Science & Engineering. The naming of the building in honor of the Gateses was made possible by gifts from Microsoft and a group of local business and philanthropic leaders who are longtime friends and colleagues of the couple.
“There is wonderful symbolism in having the Bill & Melinda Gates Center for Computer Science & Engineering across the street from the Paul G. Allen Center for Computer Science & Engineering on the University of Washington campus,” said Microsoft President Brad Smith in a news release. “As teenagers, Bill and Paul roamed UW computer labs. They went on to change the face of Seattle and the world — first with Microsoft, and later with their philanthropy. I can’t think of a better way for those of us who have had the privilege of working alongside Bill and Melinda to express our gratitude and admiration than to name this building for them.”
Smith and his spouse, Kathy Surace-Smith, are among the group of longtime friends and colleagues of the couple who personally contributed to the naming gift. Smith and fellow donors Charles & Lisa Simonyi spearheaded the fundraising effort to name the building in the Gateses’ honor. Altogether, the Friends of Bill & Melinda contributed more than $30 million toward the project.
The Bill & Melinda Gates Center is scheduled for completion in December 2018, and will be ready for occupancy in early 2019.
We are tremendously grateful to the Friends of Bill & Melinda for enabling this enduring tribute to the Gateses — and exceedingly proud to have a second home bearing their name. Thank you for giving us the room to grow and deliver a world-class computer science education to more of Washington’s students!
In the run-up to the Paul G. Allen School’s annual fall recruiting fair, 15 industry volunteers reviewed more than 300 student résumés on Tuesday afternoon in the atrium. Many thanks to Amazon’s Greg Geiger and Abigail Gualberto, Whitepages’ Rachel Flanagan, Redfin’s Marissa Carr, Krystin Morgan and Kritin Vij, Microsoft’s Kelsey Saboori, Indeed’s Jason Gabriel and Robert Noble, Qumulo’s Anthony Falsetto, Google’s Zach Spann, Carolyn Balousek and Lauren Woodward, RealSelf’s Finnian Durkan, and Karat’s Aram Greenman!
A team of researchers at the Allen School and AWS has released a new open compiler for deploying deep learning frameworks across a variety of platforms and devices. The NNVM compiler simplifies the design of new front-end frameworks and back-end hardware by offering the ability to compile front-end workloads directly to hardware back-ends. The new tool is built upon the TVM stack previously developed by the same Allen School researchers to bridge the gap between deep learning systems optimized for productivity and the programming, performance, and efficiency constraints imposed by different types of hardware.
“While deep learning is becoming indispensable for a range of platforms — from mobile phones and datacenter GPUs, to the Internet of Things and specialized accelerators — considerable engineering challenges remain in the deployment of those frameworks,” noted Allen School Ph.D. student Tianqi Chen. “Our TVM framework made it possible for developers to quickly and easily deploy deep learning on a range of systems. With NNVM, we offer a solution that works across all frameworks, including MXNet and model exchange formats such as ONNX and CoreML, with significant performance improvements.”
With the help of the TVM stack, the NNVM compiler represents and optimizes common deep-learning workloads in standardized computation graphs. It then transforms these high-level graphs, optimizing the data layout while reducing memory utilization and fusing the computation patterns for different hardware back-ends. Finally, NNVM presents an end-to-end compilation pipeline, from the front-end frameworks to bare-metal hardware.
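For a sense of what that pipeline looks like to a developer, here is a minimal sketch written against the NNVM/TVM Python APIs as they were documented at release (exact signatures and module layouts may differ in later versions); the choice of model, input shape, and target string are illustrative assumptions.

```python
import numpy as np
import mxnet as mx
import tvm
import nnvm.frontend
import nnvm.compiler
from tvm.contrib import graph_runtime

# Load a pretrained front-end model (MXNet/Gluon in this example).
block = mx.gluon.model_zoo.vision.resnet18_v1(pretrained=True)

# Convert the front-end model into NNVM's standardized computation graph.
sym, params = nnvm.frontend.from_mxnet(block)

# Compile the graph for a specific hardware back-end. "llvm" targets the
# local CPU; a string such as "cuda" would target an NVIDIA GPU instead.
shape = {"data": (1, 3, 224, 224)}
graph, lib, params = nnvm.compiler.build(sym, target="llvm", shape=shape, params=params)

# Deploy with the lightweight graph runtime, which is decoupled from the
# compiler and carries none of the graph-optimization machinery.
module = graph_runtime.create(graph, lib, tvm.cpu(0))
module.set_input(**params)
module.set_input("data", np.random.uniform(size=shape["data"]).astype("float32"))
module.run()
out = module.get_output(0, tvm.nd.empty((1, 1000)))
print("predicted class:", out.asnumpy().argmax())
```

The compiled artifacts (graph, lib, params) can then be saved and shipped to the deployment device, which needs only the minimal runtime rather than the full compiler stack.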
“Existing deep learning frameworks package the graph optimization with the deployment runtime,” noted Allen School professor Carlos Guestrin. “NNVM follows the conventional wisdom of compilers, separating the optimization from the deployment runtime. Using this approach, we get substantial optimization while keeping the runtime lightweight.”
While NNVM is still under development, early indications are that the approach is a step forward compared to the current state of the art. The team benchmarked the performance of the new compiler against that of the MXNet framework for two popular hardware configurations: Nvidia GPU on AWS and ARM CPU on Raspberry Pi. On both benchmarks, the NNVM compiler achieved faster speeds; on the Raspberry Pi, the code generated by the compiler was two times faster for ResNet18 and 11 times faster for MobileNet. With the NNVM compiler, developers will be able to provide consistent results from multiple frameworks to users of a variety of platforms in less time and with significantly less engineering effort.
Like TVM, the NNVM compiler is the product of a collaboration among researchers in machine learning, systems, and computer architecture. In addition to Chen and Guestrin, Allen School Ph.D. students Thierry Moreau and Haichen Shen, and professors Luis Ceze and Arvind Krishnamurthy worked with the AWS AI team to build the new tool.
Roughly 40 Allen School students attended this week’s Grace Hopper Celebration of Women in Computing – a phenomenal event dating to 1994 that this year had 18,000 attendees!
Allen School Ph.D. student Kanit “Ham” Wongsuphasawat, who works with professor Jeffrey Heer in the Interactive Data Lab, won the Best Paper Award at the Institute of Electrical and Electronics Engineers’ Conference on Visual Analytics Science & Technology (IEEE VAST) for “Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.” Wongsuphasawat is the first author on the paper, which is based on work he did as an intern at Google Research with colleagues Daniel Smilkov, James Wexler, Jimbo Wilson, Dandelion Mané, Doug Fritz, Dilip Krishnan, Fernanda B. Viégas, and Martin Wattenberg.
Deep learning is becoming increasingly important in a variety of applications, from scientific research to consumer-facing products and services. Google’s TensorFlow open-source platform provides high-level APIs that simplify the creation of neural networks for deep learning, generating a low-level dataflow graph to support learning algorithms, distributed computation, and multiple devices. But developers still need to understand the structure of these graphs. One way to do so is through visualization; however, the dataflow graphs of such complicated models contain thousands of heterogeneous, low-level operations — some of which are high-degree nodes connected to many parts of the graph. This level of complexity yields tangled visualizations when produced using standard layout techniques.
In their award-winning paper, Wongsuphasawat and his collaborators offer a solution in the form of the TensorFlow Graph Visualizer, a tool for producing interactive visualizations of the underlying dataflow graphs of TensorFlow models. The visualizer is shipped as part of TensorBoard, TensorFlow’s official visualization and dashboard tool. The tool has enabled users of TensorFlow to understand and inspect the high-level structure of their models, with the ability to explore the complex, nested structure on demand.
The visualization takes the form of a clustered graph in which nodes are grouped according to their hierarchical namespaces as determined by the developer. To support detailed exploration, the team employed a novel use of edge bundling to enable stable and responsive expansion of the clustered flow layout. To counteract clutter, the researchers applied heuristics to extract non-critical nodes from the layout and introduced new visual encodings that decouple the extracted nodes from the rest of the graph. They also built in the ability to detect and highlight repeated structures, while overlaying the graph with quantitative information to assist developers in their inspection. Users who tried the tool found it useful for a variety of tasks, from explaining a model and its application, to highlighting changes during debugging, to illustrating tutorials and articles.
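As a small illustration of how those developer-defined namespaces drive the clustering, the TensorFlow 1.x-style sketch below wraps a toy model in name scopes and writes its graph for TensorBoard, whose Graphs tab hosts the visualizer; the scope names and log directory are arbitrary choices for this example.

```python
import tensorflow as tf  # TensorFlow 1.x-style graph-building API

# Name scopes define the hierarchical namespaces that the Graph Visualizer
# uses to cluster low-level operations into expandable groups.
with tf.name_scope("model"):
    x = tf.placeholder(tf.float32, [None, 784], name="input")
    with tf.name_scope("hidden"):
        w1 = tf.Variable(tf.truncated_normal([784, 128], stddev=0.1), name="weights")
        b1 = tf.Variable(tf.zeros([128]), name="biases")
        h = tf.nn.relu(tf.matmul(x, w1) + b1)
    with tf.name_scope("output"):
        w2 = tf.Variable(tf.truncated_normal([128, 10], stddev=0.1), name="weights")
        b2 = tf.Variable(tf.zeros([10]), name="biases")
        logits = tf.matmul(h, w2) + b2

# Write the dataflow graph to disk so TensorBoard can render it as a
# clustered, explorable diagram:  tensorboard --logdir /tmp/graph_demo
writer = tf.summary.FileWriter("/tmp/graph_demo", tf.get_default_graph())
writer.close()
```

In the rendered graph, the “model”, “hidden”, and “output” groups can be expanded on demand, which is the kind of nested, level-of-detail exploration the paper describes.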
Wongsuphasawat and his co-authors are being recognized this week at IEEE VIS in Phoenix, Arizona, where IEEE VAST is co-located with InfoVis and SciVis. Watch a video of Wongsuphasawat’s presentation of the work below.