Allen School Ph.D. student Kanit “Ham” Wongsuphasawat, who works with professor Jeffrey Heer in the Interactive Data Lab, won the Best Paper Award at the Institute of Electrical and Electronics Engineers’ Conference on Visual Analytics Science & Technology (IEEE VAST) for “Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.” Wongsuphasawat is the first author on the paper, which is based on work he did as an intern at Google Research with colleagues Daniel Smilkov, James Wexler, Jimbo Wilson, Dandelion Mané, Doug Fritz, Dilip Krishnan, Fernanda B. Viégas, and Martin Wattenberg.
Deep learning is becoming increasingly important in a variety of applications, from scientific research to consumer-facing products and services. Google’s open-source TensorFlow platform provides high-level APIs that simplify the creation of neural networks for deep learning, while generating a low-level dataflow graph that supports learning algorithms, distributed computation, and execution across multiple devices. Developers still need to understand the structure of these graphs, and one way to do so is through visualization. However, the dataflow graphs of such complex models contain thousands of heterogeneous, low-level operations, some of them high-degree nodes connected to many parts of the graph, and this complexity yields tangled visualizations when standard layout techniques are used.
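To make the scale of the problem concrete, the short sketch below (not drawn from the paper, and using the TensorFlow 1.x API of the time with arbitrary layer names and sizes) shows how just a few high-level calls expand into a dataflow graph containing many low-level operations:

import tensorflow as tf

# Three lines of high-level model code...
x = tf.placeholder(tf.float32, [None, 784], name="images")
hidden = tf.layers.dense(x, 128, activation=tf.nn.relu, name="hidden")
logits = tf.layers.dense(hidden, 10, name="logits")

# ...already produce dozens of low-level operations in the underlying graph.
graph = tf.get_default_graph()
print(len(graph.get_operations()))
print([op.name for op in graph.get_operations()][:5])

A full training pipeline, with gradients, optimizers, and device placement, multiplies this count into the thousands of operations the visualizer must contend with.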
In their award-winning paper, Wongsuphasawat and his collaborators offer a solution in the form of the TensorFlow Graph Visualizer, a tool for producing interactive visualizations of the underlying dataflow graphs of TensorFlow models. The visualizer is shipped as part of TensorBoard, TensorFlow’s official visualization and dashboard tool. It has enabled TensorFlow users to understand and inspect the high-level structure of their models and to explore their complex, nested details on demand.
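As a minimal usage sketch, assuming the TensorFlow 1.x API, a developer reaches the Graph Visualizer by serializing the graph definition to a log directory and opening TensorBoard (the directory name here is arbitrary):

import tensorflow as tf

# Build a trivial graph so there is something to visualize.
a = tf.constant(1.0, name="a")
b = tf.constant(2.0, name="b")
total = tf.add(a, b, name="total")

# Write the graph definition to a log directory for TensorBoard.
writer = tf.summary.FileWriter("logs/demo", tf.get_default_graph())
writer.close()

# From a shell:  tensorboard --logdir logs/demo
# then open the Graphs tab in the browser to explore the model interactively.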
The visualization takes the form of a clustered graph in which nodes are grouped according to their hierarchical namespaces as determined by the developer. To support detailed exploration, the team employed a novel use of edge bundling to enable stable and responsive expansion of the clustered flow layout. To counteract clutter, the researchers apply heuristics to extract non-critical nodes and introduce new visual encodings that decouple the extracted nodes from the layout. They also built in the ability to detect and highlight repeated structures, while overlaying the graph with quantitative information to assist developers in their inspection. Users who tried the tool found it useful for a variety of tasks, from explaining a model and its application, to highlighting changes during debugging, to illustrating tutorials and articles.
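The hierarchical namespaces that drive the clustering come from the names developers give their operations. The sketch below, again assuming standard TensorFlow 1.x usage rather than code from the paper, shows how name scopes produce slash-separated node names that the Graph Visualizer can roll up into expandable groups:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784], name="input")

# Operations created inside a name scope receive a "scope/name" prefix.
with tf.name_scope("layer1"):
    w1 = tf.Variable(tf.random_normal([784, 128]), name="weights")
    b1 = tf.Variable(tf.zeros([128]), name="biases")
    h1 = tf.nn.relu(tf.matmul(x, w1) + b1, name="activation")

with tf.name_scope("layer2"):
    w2 = tf.Variable(tf.random_normal([128, 10]), name="weights")
    b2 = tf.Variable(tf.zeros([10]), name="biases")
    logits = tf.add(tf.matmul(h1, w2), b2, name="logits")

print(logits.name)  # prints "layer2/logits:0"; the prefix defines the cluster

In the visualizer, "layer1" and "layer2" appear as single collapsed nodes that can be expanded on demand to reveal the weights, biases, and activation operations inside them.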
Wongsuphasawat and his co-authors are being recognized this week at IEEE VIS in Phoenix, Arizona, the umbrella conference that brings together IEEE VAST, InfoVis, and SciVis. Watch a video of Wongsuphasawat’s presentation of the work below.
Congratulations, Ham!
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow from Kanit W on Vimeo.