Plenary Keynote (Tuesday, September 27)

Parthasarathy Ranganathan

VP/Technical Fellow, Google

Make computing count: some grand opportunities for testing

Biography:  Partha Ranganathan is currently a VP and Technical Fellow at Google, where he is the area technical lead for hardware and datacenters, designing systems at scale. Prior to this, he was an HP Fellow and Chief Technologist at Hewlett Packard Labs, where he led their research on systems and data centers.

Partha has worked on several interdisciplinary systems projects with broad impact on both academia and industry, including widely used innovations in energy-aware user interfaces, heterogeneous multi-cores, power-efficient servers, accelerators, and disaggregated and data-centric data centers. He has published extensively (including co-authoring the popular “Datacenter as a Computer” textbook), is a co-inventor on more than 100 patents, and has been recognized with numerous awards. He has been named a top-15 enterprise technology rock star by Business Insider and one of the top 35 young innovators in the world by MIT Tech Review, and is a recipient of the ACM SIGARCH Maurice Wilkes Award, Rice University’s Outstanding Young Engineering Alumni Award, and the IIT Madras Distinguished Alumni Award. He is also a Fellow of the IEEE and ACM, and currently serves on the board of directors of the Open Compute Project.

Abstract:  Moore’s law is slowing down, stressing traditional assumptions of cheaper and faster systems every year. At the same time, growing volumes of data, smarter edge devices, and new, diverse workloads are causing demand for computing to grow at phenomenal rates. In this talk, we will discuss the trends shaping the future computing landscape, with a specific focus on the role of testing — for correctness, agility, and performance — and some grand challenges and opportunities for the field.


Wednesday Keynote

John Shalf

Lawrence Berkeley National Laboratory

The Future of High Performance Computing Beyond Moore’s Law

Biography:  John Shalf is Department Head for Computer Science at Lawrence Berkeley National Laboratory and was recently Deputy Director of Hardware Technology for the DOE Exascale Computing Project.

Shalf is a coauthor of over 80 publications in the field of parallel computing software and HPC technology, including three best papers and the widely cited report “The Landscape of Parallel Computing Research: A View from Berkeley” (with David Patterson and others). He also coauthored the 2008 “ExaScale Software Study: Software Challenges in Extreme Scale Systems,” which set the Defense Advanced Research Projects Agency’s (DARPA’s) information technology research investment strategy. Prior to coming to Berkeley Laboratory, John worked at the National Center for Supercomputing Applications and the Max Planck Institute for Gravitational Physics/Albert Einstein Institute (AEI), where he was co-creator of the Cactus Computational Toolkit.

Abstract: The next decade promises to be one of the most exciting yet in the further evolution of computing. A number of developments will change how we compute in 10 years: the foreseeable end of Moore’s law will lead to the exploration of new architectures and the introduction of new technologies in HPC; the rapid progress in machine learning over the last decade has led to a refocus of HPC towards large-scale data analysis and machine learning; the feasibility of quantum computing has led to the introduction of new paradigms for scientific computing; meanwhile, 30 billion IoT devices will push advances in energy-efficient computing and bring an avalanche of data. I would like to compare the situation to a Cambrian explosion: the change in computing environment has helped create a wide and complex variety of “organisms” that will compete for survival in the next decade. The HPC community will have to deal with this complexity and extreme heterogeneity, and decide which ideas and technologies will be the survivors. In this talk, I will discuss emerging strategies such as heterogeneous integration (advanced packaging) that are playing out across the industry to continue to extract performance from systems, and make predictions of where things are going for 2025-2030.


Thursday Keynote

Grady Giles, Mike Bienek, & Tim Wood

AMD

What did we learn in 120 years of DFT and test? 

Biography:  Grady Giles, Mike Bienek, and Tim Wood are all members of the DFX team at AMD, with a combined 120+ years of experience in the industry.

Grady worked at TI and Motorola prior to joining AMD; Mike worked at Convex, MegaTest, Geocast, and Neofocal before AMD; and Tim has been at AMD for his entire career.  Collectively, they have been responsible for dozens of patents, many ITC papers (including two Best Paper awards), heavy involvement with IEEE standards and the Semiconductor Research Corporation, the SWDFT and ITC conference program committees, and extensive engagements across the industry over many years.  Grady has been a champion of robust scan-based testability measures for decades, Mike architected methodologies and tools to achieve Known Good Die for complex supercomputer assemblies, and Tim led the DFT efforts for many generations of cutting-edge microprocessors.  Grady and Mike are both graduates of Texas A&M, and Tim earned his degree from Rensselaer Polytechnic Institute.

Abstract:  This interview-style discussion will feature three industry veterans whose careers have followed (and propelled) the growth in our field.  We plan to reflect on how our industry has evolved, how this conference has reflected and driven that evolution, what lessons were learned, and what we can expect (and make happen) next.  Along the way, we’ll share some anecdotes, tell some stories, brag about some accomplishments, and humbly give some advice on things we found out the hard way (so that you don’t have to).


Visionary Talk (Wednesday)

Tim Cheng

The Hong Kong University of Science and Technology

Ultra Low-Power AI Accelerators for AIoT: Compute-in-Memory, Co-Design, and Heterogeneous Integration

Biography: Tim Cheng is currently Vice-President for Research and Development at the Hong Kong University of Science and Technology (HKUST) and Chair Professor jointly in the Departments of ECE and CSE.

His current research interests include design, EDA, computer vision, and medical image analysis. In 2020, he received HK$443.9M in funding to lead the founding of the AI Chip Center for Emerging Smart Systems (ACCESS), a multidisciplinary center that aims to advance IC design and EDA to help realize ubiquitous AI applications in society.

He received his PhD from the University of California, Berkeley. Prior to joining HKUST, he was a Professor at the University of California, Santa Barbara, and spent five years at AT&T Bell Laboratories. At UCSB, Cheng served as Founding Director of the Computer Engineering Program (1999-2002), Chair of the ECE Department (2005-2008), and Associate Vice-Chancellor for Research (2013-2016). At HKUST, Cheng served as Dean of Engineering (2016-March 2022) prior to taking the VPRD role.

Cheng, an IEEE Fellow and a Fellow of the Hong Kong Academy of Engineering Sciences, has received 12 Best Paper Awards from various IEEE and ACM conferences and journals. He has also received the UCSB College of Engineering Outstanding Teaching Faculty Award and the 2020 Pan Wen Yuan Outstanding Research Award, and is a Fellow of the School of Engineering, The University of Tokyo. He served as Editor-in-Chief of IEEE Design and Test of Computers, and served on the Board of Governors of the IEEE Council on Electronic Design Automation and the IEEE Computer Society’s Publications Board.

Abstract:  We will give an overview of the objectives and some recent progress in designing ultra low-power AI accelerators to support a wide range of AIoT devices with powerful embedded intelligence. Specifically, we will discuss the role of emerging memory and compute-in-memory for data-centric computing; an application-specific co-design framework supporting lightweight deep learning, which integrates neural network (NN) search, hardware-friendly NN compression, and NN-aware architecture design for iterative co-optimization; and the critical role of 3D integration of processors and memory arrays for power, performance, and size.