ISLPED will be a virtual event on Zoom
3 Days of Exciting Programs on Low Power Design


Day 1: August 10th
Welcome by General and Program Co-Chairs
Keynote Talk 1: Prof. Bill Dally (Nvidia Corporation, USA) - “Low-Power Processing with Domain-Specific Architecture”
Session 1A: Energy-efficient Machine Learning Systems
Session 1B: From CMOS to Quantum Circuits for Sensing, Computation and Security
Session 1C: Smart Power Management and Computing

Day 2: August 11th
Keynote Talk 2: Prof. Marian Verhelst (KUL, Belgium) - “Enabling deep NN at the extreme edge: Co-optimization across circuits, architectures, and algorithmic scheduling”
Session 2A: Tuning the Design Flow for Low Power: From Synthesis to Pin Assignment
Poster session
Session 2B: Energy Efficient Neural Network Processors: Compression or Go for Near-sensor Analog
Session 2C: Non-ML Low-power Architecture

Day 3: August 12th
Best Paper Announcement
Design Contest
Session 3A: Memory Technology and In-memory Computing
Session 3B: Low power system and NVM
Session 3C: ML-based Low-Power Architecture


IEEE/ACM member registration is just $75 this year; click here to register now!


Preliminary Program


Day 1: August 10th

10:00 am – 10:10 am (ET)

Welcome by General and Program Co-Chairs

10:10 am – 10:55 am (ET)

Keynote Talk 1: Prof. Bill Dally (Nvidia Corporation, USA) - “Low-Power Processing with Domain-Specific Architecture”

10:55 am – 11:35 am (ET)

Session 1A: Energy-efficient Machine Learning Systems

11:35 am – 12:15 pm (ET)

Session 1B: From CMOS to Quantum Circuits for Sensing, Computation and Security

12:15 pm – 12:55 pm (ET)

Session 1C: Smart Power Management and Computing


Keynote Talk 1: Prof. Bill Dally (Nvidia Corporation, USA)

Low-Power Processing with Domain-Specific Architecture

Abstract: Domain-specific architecture is one of the most effective methods of reducing power dissipation in information processing systems. Efficiency in these systems comes from specialized functions, specialized memory systems, and reduced overhead. This talk will explore the efficiency gains possible with domain-specific architectures (DSAs), drawing examples from several accelerators.



Session 1A: Energy-efficient Machine Learning Systems

Chair: Ajay Joshi

Co-chair: Hashan Roshantha Mendis


Abstract: The design of today's portable and handheld machine learning systems is constrained by hardware resources and a limited energy supply. This session comprises four papers that present state-of-the-art software techniques for developing energy-efficient machine learning systems.



10:55 am – 11:00 am (ET)


How to Cultivate a Green Decision Tree without Loss of Accuracy? (Best Paper Candidate)
Tseng-Yi Chen, Yuan-Hao Chang, Ming-Chang Yang and Huang-Wei Chen [5-min video]

11:00 am – 11:05 am (ET)

Approximate Inference Systems (AxIS): End-to-End Approximations for Energy-Efficient Inference at the Edge
Soumendu Kumar Ghosh, Arnab Raha and Vijay Raghunathan [5-min video]

11:05 am – 11:10 am (ET)

Time-Step Interleaved Weight Reuse for LSTM Neural Network Computing
Naebeom Park, Yulhwa Kim, Daehyun Ahn, Taesu Kim and Jae-Joon Kim [5-min video]

11:10 am – 11:15 am (ET)

Sound Event Detection with Binary Neural Networks on Tightly Power-Constrained IoT Devices
Gianmarco Cerutti, Elisabetta Farella, Luca Benini, Michele Magno, Lukas Cavigelli and Renzo Andri [5-min video]

11:15 am – 11:30 am (ET)

15-minute Q&A session for all presented papers



Session 1B: From CMOS to Quantum Circuits for Sensing, Computation and Security

Chair: Kannan Sankaragomathi

Co-chair: Ravi Sankar Vunnam


Abstract: Session 1B includes four papers exploring technologies from CMOS all the way to quantum for a wide variety of applications. The session begins with a paper on crosstalk modeling and security analysis of quantum workloads on near-term quantum computers. The second paper presents an ozone pollutant sensing interface that reduces power consumption by 300X compared to the state of the art, followed by a paper describing a low-power PLL with a class-F VCO for driving 900-MHz SRD-band circuits. The last paper of the session features a 640 pW analog-to-time converter for a wake-up sensor.



11:35 am – 11:40 am (ET)


Analysis of crosstalk in NISQ devices and security implications in multi-programming regime
Abdullah Ash-Saki, Mahabubul Alam and Swaroop Ghosh [5-min video]

11:40 am – 11:45 am (ET)

An 88.6nW Ozone Pollutant Sensing Interface IC with a 159 dB Dynamic Range (Best Paper Candidate)
Rishika Agarwala, Peng Wang, Akhilesh Tanneeru, Bongmook Lee, Veena Misra and Benton Calhoun [5-min video]

11:45 am – 11:50 am (ET)

A 1.2-V, 1.8-GHz Low-Power PLL Using a Class-F VCO for Driving 900-MHz SRD Band SC-Circuits
Tim Schumacher, Markus Stadelmayer, Thomas Faseth and Harald Pretl [5-min video]

11:50 am – 11:55 am (ET)

A 640 pW 32 kHz Switched-Capacitor ILO Analog-to-Time Converter for Wake-Up Sensor Application
Nicolas Goux, Jean-Baptiste Casanova and Franck Badets [5-min video]

11:55 am – 12:10 pm (ET)

15-minute Q&A session for all presented papers



Session 1C: Smart Power Management and Computing

Chair: Amit Ranjan Trivedi, UIC

Co-chair: Minki Cho, Intel


Abstract: Smart ways of managing power consumption and performing computation are critical to overcoming the von Neumann bottleneck and enabling the next generation of low-power computing engines. This session presents recent advances in power management, ranging from energy harvesting to architectural techniques for idle-core management, for applications spanning MPSoCs, wearables, and search engines. The session also covers intelligent memories with compute capability that cut down the energy cost of data movement between processing engines and memory.



12:15 pm – 12:20 pm (ET)


Dynamic Idle Core Management and Leakage Current Recycling in MPSoC Platforms
MD Shazzad Hossain and Ioannis Savidis [5-min video]

12:20 pm – 12:25 pm (ET)

Towards Wearable Piezoelectric Energy Harvesting: Modeling and Experimental Validation (Best Paper Candidate)
Yigit Tuncel, Shiva Bandyopadhyay, Shambhavi Vikram Kulshrestha, Audrey Mendez and Umit Ogras [5-min video]

12:25 pm – 12:30 pm (ET)

RAMANN: In-SRAM Differentiable Memory Computations for Memory-Augmented Neural Networks
Mustafa F. Ali, Amogh Agrawal and Kaushik Roy [5-min video]

12:30 pm – 12:35 pm (ET)

Swan: A Two-Step Power Management for Distributed Search Engines
Liang Zhou, Laxmi Bhuyan and K. K. Ramakrishnan [5-min video]

12:35 pm – 12:50 pm (ET)

15-minute Q&A session for all presented papers


Day 2: August 11th

10:00 am – 10:45 am (ET)

Keynote Talk 2: Prof. Marian Verhelst (KUL, Belgium) - “Enabling deep NN at the extreme edge: Co-optimization across circuits, architectures, and algorithmic scheduling”

10:45 am – 11:20 am (ET)

Session 2A: Tuning the Design Flow for Low Power: From Synthesis to Pin Assignment

11:20 am – 11:50 am (ET)

Poster session

11:50 am – 12:25 pm (ET)

Session 2B: Energy Efficient Neural Network Processors: Compression or Go for Near-sensor Analog

12:25 pm – 1:00 pm (ET)

Session 2C: Non-ML Low-power Architecture


Keynote Talk 2: Prof. Marian Verhelst (KUL, Belgium)

Enabling deep NN at the extreme edge: Co-optimization across circuits, architectures, and algorithmic scheduling

Abstract: Deep neural network inference comes with significant computational complexity, which until recently made its execution feasible only on power-hungry server or GPU platforms. The recent trend toward embedded neural network processing on edge and extreme-edge devices requires thorough cross-layer optimization. The keynote will discuss how to exploit and jointly optimize NPU/TPU processor architectures, dataflow schedulers, and quantized neural network models for minimum latency and maximum energy efficiency.


Session 2A: Tuning the Design Flow for Low Power: From Synthesis to Pin Assignment

Chair: Matthew Ziegler

Co-chair: Sung Kyu Lim


Abstract: This session addresses topics related to low-power synthesis. It starts with a paper that offers a fresh look at logic synthesis, integrating deep learning techniques, approximate computing, and low-power solutions in a single framework. The second paper describes a new power-gating methodology that breaks the critical bottleneck in minimizing the total size of always-on state-retention storage. Finally, moving toward next-generation ICs, the last paper presents a pin-assignment methodology for 3D ICs in which inter-block connections use pins placed in the middle of blocks rather than at their boundaries.



10:45 am – 10:50 am (ET)


Deep-PowerX: A Deep Learning-Based Framework for Low-Power Approximate Logic Synthesis
Ghasem Pasandi, Mackenzie Peterson, Moises Herrera, Shahin Nazarian and Massoud Pedram [5-min video]

10:50 am – 10:55 am (ET)

Steady State Driven Power Gating for Lightening Always-On State Retention Storage (Best Paper Candidate)
Taehwan Kim, Gyounghwan Hyun and Taewhan Kim [5-min video]

10:55 am – 11:00 am (ET)

Pin-in-the-Middle: An Efficient Block Pin Assignment Methodology for Block-level Monolithic 3D ICs
Bon Woong Ku and Sung Kyu Lim [5-min video]

11:00 am – 11:15 am (ET)

15-minute Q&A session for all presented papers


 

11:20 am – 11:50 am (ET): Poster Session

30-minute Q&A session for all poster papers


1. BrainWave: an energy-efficient EEG monitoring system - evaluation and trade-offs
Barry de Bruin, Kamlesh Singh, Jos Huisken and Henk Corporaal [5-min video]

2. QUANOS: Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks
Priyadarshini Panda [5-min video]

3. Pre-Layout Clock Tree Estimation and Optimization Using Artificial Neural Network
Sunwha Koh, Yonghwi Kwon and Youngsoo Shin [5-min video]

4. GC-eDRAM Design using Hybrid FinFET/NC-FinFET
Ramin Rajaei, Yen-Kai Lin, Sayeef Salahuddin, Michael Niemier and Xiaobo Sharon Hu [5-min video]

5. SAOU: Safe Adaptive Overclocking and Undervolting for Energy-Efficient GPU Computing
Hadi Zamani Sabzi, Devashree Tripathy, Laxmi Bhuyan and Zhizhong Chen [5-min video]

6. SparTANN: Sparse Training Accelerator for Neural Networks with Threshold-based Sparsification
Hyeonuk Sim, Jooyeon Choi and Jongeun Lee [5-min video]

7. Bit-Sparse LSTM Inference Kernel Enabling Efficient Calcium Trace Extraction for Neurofeedback Devices
Zhe Chen, Garrett Blair, Hugh T. Blair and Jason Cong [5-min video]

8. BiasP: A DVFS based Exploit to Undermine Resource Allocation Fairness in Linux Platforms
Harshit Kumar, Nikhil Chawla and Saibal Mukhopadhyay [5-min video]

9. Resiliency Analysis and Improvement of Variational Quantum Factoring in Superconducting Qubit
Ling Qiu, Mahabubul Alam, Abdullah Ash-Saki and Swaroop Ghosh [5-min video]

10. HIPE-MAGIC: A Technology-Aware Synthesis and Mapping Flow for HIghly Parallel Execution of Memristor-Aided LoGIC
Arash Fayyazi, Amirhossein Esmaili and Massoud Pedram [5-min video]

11. SHEARER: Highly-Efficient Hyperdimensional Computing by Software-Hardware Enabled Multifold Approximation
Behnam Khaleghi, Sahand Salamat, Anthony Thomas, Fatemeh Asgarinejad, Yeseong Kim and Tajana Rosing [5-min video]

12. Implementing Binary Neural Networks in Memory with Approximate Accumulation
Saransh Gupta, Mohsen Imani, Hengyu Zhao, Fan Wu, Jishen Zhao and Tajana Rosing [5-min video]




Session 2B: Energy Efficient Neural Network Processors: Compression or Go for Near-sensor Analog

Chair: Adam Teman

Co-chair: Mohsen Imani


Abstract: This session presents novel approaches to achieving energy efficiency in neural network accelerators. The first paper proposes a compression technique that reduces DRAM accesses, the second presents a near-sensor architecture for analog neural networks, and the third shows how to reduce the I/O bandwidth of video neural network processors.



11:50 am – 11:55 am (ET)


GRLC: Grid-based Run-length Compression for Energy-efficient CNN Accelerator (Best Paper Candidate)
Yoonho Park, Yesung Kang, Sunghoon Kim, Eunji Kwon and Seokhyeong Kang [5-min video]

11:55 am – 12:00 pm (ET)

NS-KWS: Joint Optimization of Near-Sensor Processing Architecture and Low-Precision GRU for Always-On Keyword Spotting
Qin Li, Sheng Lin, Changlu Liu, Yidong Liu, Fei Qiao, Yanzhi Wang and Huazhong Yang [5-min video]

12:00 pm – 12:05 pm (ET)

Multi-Channel Precision-Sparsity-Adapted Inter-Frame Differential Data Codec for Video Neural Network Processor
Yixiong Yang, Zhe Yuan, Fang Su, Fanyang Cheng, Zhuqing Yuan and Yongpan Liu [5-min video]

12:05 pm – 12:20 pm (ET)

15-minute Q&A session for all presented papers


 


Session 2C: Non-ML Low-power Architecture

Chair: Hun Seok Kim

Co-chair: Jaehyun Park


Abstract: This session explores low-power architecture and communication for energy-efficient computing. The first paper investigates a new power-management scheme for GPU register files. The second paper presents a selective data-transmission technique that reduces the energy of wearable devices. The third paper proposes reconfigurable tiles of computing-in-memory SRAM architecture for scalable vector computing.


12:25 pm – 12:30 pm (ET)


Slumber: Static Power Management for GPGPU Register Files
Devashree Tripathy, Hadi Zamani Sabzi, Debiprasanna Sahoo, Laxmi Narayan Bhuyan and Manoranjan Satpathy [5-min video]

12:30 pm – 12:35 pm (ET)

STINT: Selective Transmission for Low-Energy Physiological Monitoring
Tao-Yi Lee, Khuong Vo, Wongi Baek, Michelle Khine and Nikil Dutt [5-min video]

12:35 pm – 12:40 pm (ET)

Reconfigurable Tiles of Computing-In-Memory SRAM Architecture for Scalable Vectorization
Roman Gauchi, Valentin Egloff, Maha Kooli, Jean-Philippe Noel, Bastien Giraud, Pascal Vivet, Subhasish Mitra and Henri-Pierre Charles [5-min video]

12:40 pm – 12:55 pm (ET)

15-minute Q&A session for all presented papers


 

Day 3: August 12th

10:00 am – 10:10 am (ET)

Best Paper Announcement

10:10 am – 10:50 am (ET)

Design Contest

10:50 am – 11:25 am (ET)

Session 3A: Memory Technology and In-memory Computing

11:25 am – 12:00 pm (ET)

Session 3B: Low power system and NVM

12:00 pm – 12:35 pm (ET)

Session 3C: ML-based Low-Power Architecture


10:10 am – 10:50 am (ET): Design Contest


1. n-hot Weight Quantization and Approximate Multiplication for Low-Power Machine Learning
Tianen Chen, Ammar Mahmood, Luciano Ricotta, John Rupel, Younghyun Kim, and Joshua San Miguel [5-min video]

2. A Dynamic Timing Enhanced DNN Accelerator with Compute-Adaptive Elastic Clock Chain Technique
Tianyu Jia, Yuhao Ju, and Jie Gu [5-min video]

3. CoCoPIE: A Framework of Compression-Compilation Co-design Towards Ultra-high Energy Efficiency and Real-Time DNN Inference on Mobile Devices
Geng Yuan, Wei Niu, Pu Zhao, Xue Lin, Bin Ren, and Yanzhi Wang [5-min video]

4. In-Hardware Learning of Multilayer Spiking Neural Networks on Intel’s Loihi Chip
Amar Shrestha, Haowen Fang, Zaidao Mei, and Qinru Qiu [5-min video]

5. A Low-Power Dual-Factor Authentication Unit for Security of Implantable Devices
Saurav Maji, Utsav Banerjee, Samuel Fuller, Phillip Nadeau, Mohamed Abdelhamid, Rabia Yazicigil, and Anantha Chandrakasan [5-min video]

6. A Low-Power Side-Channel-Secure Configurable Accelerator for Post-Quantum Lattice-Based Cryptography
Utsav Banerjee, Tenzin Ukyab, and Anantha Chandrakasan [5-min video]

7. Towards Wearable Piezoelectric Energy Harvesting: An Experimental Validation
Yigit Tuncel, Shiva Bandyopadhyay, Shambhavi Vikram Kulshrestha, Audrey Mendez and Umit Ogras [5-min video]



Session 3A: Memory Technology and In-memory Computing

Chair: Daniel Wong

Co-chair: Kshitij Bhardwaj


Abstract: Emerging memory technologies provide attractive and challenging platforms for implementing in-memory computing. Taking advantage of the diverse design characteristics of various emerging memory devices, the papers in this session aim to design energy-efficient in-memory computing for memory-intensive applications such as neural network computing. They not only offer intriguing designs that achieve significant energy and performance advantages, but also open up research directions in algorithm development as well as hardware-software co-design.


10:50 am – 10:55 am (ET)


FeFET-Based Low-Power Bitwise Logic-in-Memory with Direct Write-Back and Data-Adaptive Dynamic Sensing Interface
Mingyen Lee, Wenjun Tang, Bowen Xue, Juejian Wu, Mingyuan Ma, Yu Wang, Yongpan Liu, Deliang Fan, Vijaykrishnan Narayanan, Huazhong Yang and Xueqing Li [5-min video]

10:55 am – 11:00 am (ET)

Enabling Efficient ReRAM-based Neural Network Computing via Crossbar Structure Adaptive Optimization
Chenchen Liu, Fuxun Yu, Zhuwei Qin and Xiang Chen [5-min video]

11:00 am – 11:05 am (ET)

Embedding Error Correction into Crossbars for Reliable Matrix Vector Multiplication using Emerging Devices
Qiuwen Lou, Tianqi Gao, Patrick Faley, Michael Niemier, X. Sharon Hu and Siddharth Joshi [5-min video]

11:05 am – 11:20 am (ET)

15-minute Q&A session for all presented papers


 


Session 3B: Low power system and NVM

Chair: Marco Donato

Co-chair: Parth Malani


Abstract: This session covers cache coherence and memory interfaces in modern heterogeneous multicore architectures. The first paper presents a Bayesian-optimization approach to finding optimal coherence interfaces in accelerator-rich multicore architectures. The second paper addresses the design of energy-efficient NVM-based NoC buffers. The third paper presents cache-line compression combined with wear leveling to enhance the lifetime of NVM main memory.


11:25 am – 11:30 am (ET)


A Comprehensive Methodology to Determine Optimal Coherence Interfaces for Many-Accelerator SoCs
Kshitij Bhardwaj, Marton Havasi, Yuan Yao, David M. Brooks, Jose M. H. Lobato and Gu-Yeon Wei [5-min video]

11:30 am – 11:35 am (ET)

DidaSel: Dirty data based Selection of VC for effective utilization of NVM Buffers in On-Chip Interconnects
Khushboo Rani, Sukarn Agarwal and Hemangee Kapoor [5-min video]

 

11:35 am – 11:40 am (ET)

WELCOMF : Wear Leveling Assisted Compression using Frequent Words in Non-Volatile Main Memories
Arijit Nath and Hemangee Kapoor [5-min video]

 

11:40 am – 11:55 am (ET)

15-minute Q&A session for all presented papers




Session 3C: ML-based Low-Power Architecture

Chair: Sheldon Tan

Co-chair: Masanori Hashimoto


Abstract: This session investigates energy-efficient deep neural network hardware implementations for cognitive applications. The first paper presents a hierarchical deep neural network for low-power object counting. The second paper integrates an event-based dynamic vision sensor with hyperdimensional computing for feature extraction and representation. The third paper develops an energy-efficient Transformer accelerator on FPGA.


12:00 pm – 12:05 pm (ET)


Low-Power Object Counting with Hierarchical Neural Networks
Abhinav Goel, Caleb Tung, Sara Aghajanzadeh, Isha Ghodgaonkar, Shreya Ghosh, George Thiruvathukal and Yung-Hsiang Lu [5-min video]

12:05 pm – 12:10 pm (ET)

Integrating Event-based Dynamic Vision Sensors with Sparse Hyperdimensional Computing: A Low-power Accelerator with Online Learning Capability
Michael Hersche, Edoardo Mello Rella, Alfio Di Mauro, Luca Benini and Abbas Rahimi [5-min video]

 

12:10 pm – 12:15 pm (ET)

FTRANS: Energy-Efficient Acceleration of Transformers using FPGA
Bingbing Li, Santosh Pandey, Haowen Fang, Yanjun Lyv, Ji Li, Jieyang Chen, Mimi Xie, Lipeng Wan, Hang Liu and Caiwen Ding [5-min video]

 

12:15 pm – 12:30 pm (ET)

15-minute Q&A session for all presented papers