09:00 – 10:00: Keynote talk 2
Dr. Vijay Janapa Reddi, Harvard University
Tiny Machine Learning: A System-level Perspective
Speaker Bio: Vijay Janapa Reddi is an Associate Professor at Harvard University, Inference Co-chair for MLPerf, and a founding member of MLCommons, a nonprofit machine learning (ML) organization aiming to accelerate ML innovation. He also serves on the MLCommons board of directors. Before joining Harvard, he was an Associate Professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin. His research interests include computer architecture and runtime systems, specifically in the context of autonomous machines and mobile and edge computing systems. Dr. Janapa Reddi is a recipient of multiple honors and awards, including the National Academy of Engineering (NAE) Gilbreth Lecturer Honor (2016), the IEEE TCCA Young Computer Architect Award (2016), the Intel Early Career Award (2013), Google Faculty Research Awards (2012, 2013, 2015, 2017, 2020), Best Paper at the 2020 Design Automation Conference (DAC), Best Paper at the 2005 International Symposium on Microarchitecture (MICRO), Best Paper at the 2009 International Symposium on High Performance Computer Architecture (HPCA), and IEEE Top Picks in Computer Architecture awards (2006, 2010, 2011, 2016, 2017). He has been inducted into the MICRO and HPCA Halls of Fame (in 2018 and 2019, respectively). He received a Ph.D. in computer science from Harvard University, an M.S. from the University of Colorado at Boulder, and a B.S. from Santa Clara University.
Talk Abstract: Tiny machine learning (TinyML) is a fast-growing discipline that blends machine learning techniques with low-cost embedded technology. TinyML allows for on-device analysis of sensor data (vision, audio, IMU, and so on) while consuming minimal power. Processing data close to the sensor enables a variety of unique, always-on ML use cases that reduce bandwidth, latency, and energy consumption while improving responsiveness and privacy. This session introduces the TinyML vision and illustrates some of the remarkable applications made possible by TinyML. Despite the excitement, we must overcome various hardware and software challenges, as well as data privacy concerns. On-device ML constraints such as limited memory and storage, communication barriers, extreme hardware heterogeneity, software fragmentation, and a lack of relevant and commercially viable large-scale TinyML datasets pose a significant barrier to realizing TinyML's full potential for a more innovative and sustainable low-power ecosystem. Furthermore, the lack of secure protocols at the lowest level of hardware raises concerning questions such as, "Are TinyML devices spying on us?" The talk discusses how many of these challenges can be addressed, ushering in a new era of TinyML-based "Machine Learning Sensors (ML Sensors)." The talk concludes with reasons why the future of machine learning is tiny and bright.
Hier-3D: A Hierarchical Physical Design Methodology for Face-to-Face Bonded 3D ICs (Best Paper)
Anthony Agnesina, Moritz Brunion, Alberto Garcia-Ortiz, Francky Catthoor, Dragomir Milojevic, Manu Komalan, Matheus Cavalcante, Samuel Riedel, Luca Benini and Sung Kyu Lim