TUTORIAL PROGRAMME

Day - 1 (20 Feb 2021),  Tutorial-T1,  Start: 9:00,  End: 12:15, Duration (in minutes): 180

Tutorial Title: Open-Source Analog Layout Automation with Machine Learning

Tutorial Abstract

Digital design automation has enabled digital ICs with multi-billion-transistor counts; for example, AMD's Zen 2 based Epyc Rome microprocessor, built in TSMC's 7nm FinFET technology, has 39.54 billion transistors across eight chips. SPICE-like numerical simulation tools for analog designs have continued to evolve, including the development of reduced-order modeling techniques. However, tools for analog schematic synthesis and physical layout synthesis have lagged behind. The first generation of academic analog schematic and layout synthesis tools did result in industrial design automation tools. However, due to the accuracy requirements of the device models for analog circuit synthesis, the majority of these tools were primarily simulation-based device sizing tools, limiting the maximum number of devices to approximately a few hundred transistors. Analog layout automation tools have largely been realized as sets of library generators for common device-level analog circuits that use augmented placement and routing algorithms accounting for symmetries, crosstalk, and parasitic balance. The tutorial will cover the requirements for analog layout automation, discuss some background and history, and then go into the details of a specific layout engine for analog systems that can handle both bulk and FinFET layouts. ALIGN is an open-source modern layout engine that leverages machine learning. The work is the result of a joint university-industry collaboration between the University of Minnesota, Texas A&M University, and Intel.


About the speaker - Dr. Steven M. Burns, Intel Labs, USA

Steven M. Burns is a Senior Principal Engineer with Intel Laboratories, where he leads a team of researchers in Design Construction. He received the Ph.D. degree from the California Institute of Technology, Pasadena, CA, USA. His current research interests include analog layout synthesis, transformation-based design environments, advanced synthesis algorithms and methods, physical synthesis of standard cells, and CAD for future process technologies.


About the speaker - Prof. Ramesh Harjani, University of Minnesota, USA

Ramesh Harjani is the E.F. Johnson Professor in the Department of ECE at the University of Minnesota and a Fellow of the IEEE. He received his Ph.D. from Carnegie Mellon University in 1989. He has been a visiting professor at Lucent Bell Labs, PA, and the Army Research Labs, MD. In 2001 he co-founded Bermai, Inc., a startup company developing CMOS chips for wireless applications. His research interests include analog/RF circuits for wireless communication.

About the speaker - Prof. Jiang Hu, Texas A&M University, USA

Jiang Hu is a Professor of Electrical and Computer Engineering at Texas A&M University. He has been active in VLSI physical design for 20 years. He previously worked at IBM, where he received an IBM Invention Award for an in-house physical synthesis tool. He has received Best Paper awards at DAC and ICCAD, and has been General Chair and Program Chair of the ISPD and Physical Design Track Chair at DAC. He is an IEEE Fellow.


About the speaker - Prof. Sachin S. Sapatnekar, University of Minnesota, USA (coordinating presenter)

Sachin S. Sapatnekar is the Henle Chair Professor in Electrical and Computer Engineering and a Distinguished McKnight University Professor at the University of Minnesota. He has received the SRC Technical Excellence and the SIA University Research Awards. He has been General Chair for DAC and ISPD, Editor-in-Chief of IEEE Transactions on CAD, and has 10 Best Paper awards. He is a Fellow of the IEEE and ACM.


About the speaker - Dr. Arvind K. Sharma, University of Minnesota, USA

Arvind K. Sharma received the Ph.D. degree from IIT Roorkee, Roorkee, India, in 2018. He is currently a Post-Doctoral Associate at the Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN, USA. His current research interests include device physics, circuit device interaction, layout automation, and variability-aware circuit design.

About the speaker - Dr. Soner Yaldiz, Intel Labs, USA

Soner Yaldiz received the B.S. degree from Sabanci University, Istanbul, Turkey, in 2004, the M.S. degree from Koc University, Istanbul, Turkey, in 2006, and the Ph.D. degree from Carnegie Mellon University, PA, USA, in 2012. He has been with Intel Corporation since 2012. Dr. Yaldiz was a recipient of the 2011 Best of ICCAD Award. His research focuses on computer-aided design of electrical circuits and systems.



Day - 1 (20 Feb 2021),  Tutorial-T2,  Start: 13:15,  End: 14:45, Duration (in minutes): 90

Tutorial Title: Scaling the Memory Wall 2.0: What, Why, How?

Tutorial Abstract

This tutorial first intuitively establishes the preliminaries regarding memory hierarchies in modern processors and systems-on-chip (SoCs), and then discusses the current trends in manufacturing that have begun to significantly influence how we think about memory hierarchies. This is followed by a discussion of current trends at the design level, with special attention to cache prefetching and cache partitioning.


About the speaker - Preeti Ranjan Panda, Department of Computer Science and Engineering, Indian Institute of Technology Delhi

Preeti Ranjan Panda received his B.Tech. degree in Computer Science and Engineering from the Indian Institute of Technology Madras and his M.S. and Ph.D. degrees in Information and Computer Science from the University of California at Irvine. He is currently a Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology Delhi. He has previously worked at Texas Instruments, Bangalore, and the Advanced Technology Group at Synopsys Inc., Mountain View, and has been a visiting scholar at Stanford University. His research interests are: Embedded Systems Design, CAD/VLSI, Post-silicon Debug/Validation, System Specification and Synthesis, Memory Architectures and Optimisations, Hardware/Software Codesign, and Low Power Design. He is the author of two books: Memory Issues in Embedded Systems-on-Chip: Optimizations and Exploration (Kluwer Academic Publishers) and Power-efficient System Design (Springer). He is a recipient of an IBM Faculty Award, the IESA Techno Mentor Award, and a Department of Science and Technology Young Scientist Award. Research works authored by Prof. Panda and his students have received several honours, including Best Paper nominations at CODES+ISSS, DATE, ASPDAC, and the VLSI Design Conference, and a most-downloaded paper of the ACM TODAES journal. Prof. Panda currently serves as the Editor-in-Chief of IEEE Embedded Systems Letters (ESL).
He has served on the editorial boards of IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), ACM Transactions on Design Automation of Electronic Systems (TODAES), IEEE Embedded Systems Letters (ESL), IEEE Transactions on Multi-Scale Computing Systems (TMSCS), and the International Journal of Parallel Programming (IJPP), as General co-Chair of VLSI Design, and as Technical Program co-Chair of the International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS) and the International Conference on VLSI Design and Embedded Systems (VLSI Design). He has also served on the technical program committees and chaired sessions at several conferences in the areas of Embedded Systems and Design Automation, including DAC, ICCAD, DATE, CODES+ISSS, IPDPS, ASPDAC, and EMSOFT.


About the speaker - Sandeep Chandran, Department of Computer Science and Engineering, Indian Institute of Technology Palakkad.

Sandeep Chandran is an Assistant Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT) Palakkad. He received his PhD from IIT Delhi, where he received the IIT Delhi FITT award for Best Industry Relevant PhD. Prior to joining the Institute, he was a Senior Design Engineer at AMD, where he was part of the performance modelling team that designed the Zen microarchitecture. He was also a Research Intern in the Silicon Bring-up Team at Freescale Semiconductor (now NXP Semiconductors). He is currently working towards finding ways to reduce the effort involved in verifying complex processors and systems-on-chip (SoCs). He is also investigating alternate designs for high-performance processors.


About the speaker - Rajshekar Kalayappan, Department of Computer Science and Engineering, Indian Institute of Technology Dharwad.

Rajshekar Kalayappan is an Assistant Professor at the Indian Institute of Technology (IIT) Dharwad, Karnataka. He received his PhD from IIT Delhi. His research interests include computer architecture, hardware reliability, hardware security, accountability issues in heterogeneous 3PIP-containing SoCs, and microarchitectural simulation. He is one of the chief designers, developers, and maintainers of the popular open-source architectural simulator Tejas, which simulates state-of-the-art multi-core processors. It has been validated against real hardware and is on par with the best academic simulators in terms of simulation speed. Please visit the Tejas web page to learn more.


Day - 1 (20 Feb 2021),  Tutorial-T3,   Start: 15:00,  End: 16:30, Duration (in minutes): 90

Tutorial Title: Efficient Computing with Non-Volatile Memory: from Devices to System-Level Management

Tutorial Abstract

In this tutorial, three abstraction layers of the computing stack will be covered from the bottom up: (1) the device-circuit layer (speakers: Jörg Henkel and Hussam Amrouch), where we present the emerging Ferroelectric Field-Effect Transistor (FeFET), a novel CMOS-compatible memory technology with promising advantages for ultra-low-power circuits. In this part, we focus on how such new single-transistor memory devices operate and how they impact the existing trade-offs in computer architecture. (2) The architecture and operating system layers (speaker: Jian-Jia Chen), where we present memory analysis frameworks and various wear-leveling mechanisms that are inevitably needed when NVM technology is employed as the main memory. In this part, we focus on how memory tracing mechanisms perform and how the operating system can effectively optimize the lifetime of NVMs with and without special hardware. (3) The application layer (speaker: Yuan-Hao Chang), where we present NVM-friendly strategies for the training process of tree-based learning algorithms. In this part, we focus on how to exploit the concept of “reusing the sampled data” to trade the “randomness” of the sampled data for reduced data movement across different layers. Afterwards, we discuss how to combine pre-pruning and post-pruning strategies to construct decision trees without loss of accuracy by exploiting the multi-write modes of NVMs.
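To give a flavour of the OS-level wear-leveling idea mentioned above, the toy sketch below remaps logical pages so that writes spread evenly over physical NVM pages. This is not taken from the tutorial material: the class name, the swap threshold, and the hot/cold swap policy are invented purely for illustration.

```python
# A minimal table-based wear-leveling sketch of the kind an OS might use
# for NVM main memory (illustrative only; threshold and policy are assumptions).

class WearLeveler:
    def __init__(self, num_pages):
        self.map = list(range(num_pages))   # logical page -> physical page
        self.wear = [0] * num_pages         # per-physical-page write count

    def write(self, logical_page):
        phys = self.map[logical_page]
        self.wear[phys] += 1
        # If this physical page is worn far more than the coldest page,
        # swap the two mappings so future writes to the hot logical page
        # land on the cold physical frame.
        coldest = min(range(len(self.wear)), key=self.wear.__getitem__)
        if self.wear[phys] - self.wear[coldest] > 4:
            other = self.map.index(coldest)  # logical page using the cold frame
            self.map[logical_page], self.map[other] = coldest, phys
        return phys
```

Even under a pathological workload that writes a single logical page, the swap rule keeps the wear difference between the most- and least-worn physical pages bounded by the threshold plus one.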


About the speaker - Jörg Henkel, Karlsruhe Institute of Technology, Germany

Jörg Henkel is the Chair Professor for Embedded Systems at Karlsruhe Institute of Technology. His research interest is in co-design for embedded hardware/software systems with respect to power, thermal, and reliability aspects. He has received six best paper awards from major CAD and embedded conferences. Among others, he has been the General Chair of ICCAD, ES Week, and ISLPED. He is or has been on the steering committees of all major CAD conferences as well as of various journals, and has given more than ten keynotes at ES and CAD conferences. He is currently the Editor-in-Chief of the IEEE Design & Test Magazine and has been the Editor-in-Chief of ACM TECS for two consecutive terms. He is a Fellow of the IEEE.


About the speaker - Jian-Jia Chen, TU Dortmund University, Germany

Jian-Jia Chen is a Professor in the Department of Informatics at TU Dortmund University, Germany. He was a Junior Professor in the Department of Informatics at the Karlsruhe Institute of Technology (KIT), Germany, from May 2010 to March 2014. He received his Ph.D. degree from the Department of Computer Science and Information Engineering, National Taiwan University, Taiwan, in 2006, and his B.S. degree from the Department of Chemistry at National Taiwan University in 2001. Between January 2008 and April 2010, he was a postdoctoral researcher at ETH Zurich, Switzerland. His research interests include real-time systems, embedded systems, energy-efficient scheduling, power-aware designs, temperature-aware scheduling, and distributed computing. He received a European Research Council (ERC) Consolidator Grant in 2019. He has received more than 10 Best Paper and Outstanding Paper Awards and has served on the technical committees of many international conferences.


About the speaker - Hussam Amrouch, University of Stuttgart, Germany

Hussam Amrouch is a Junior Professor heading the Chair of Semiconductor Test and Reliability (STAR) within the Computer Science and Electrical Engineering Faculty at the University of Stuttgart, as well as a Research Group Leader at the Karlsruhe Institute of Technology (KIT), Germany. He received his Ph.D. degree (summa cum laude) from KIT in 2015. His main research interests are design for reliability and testing from device physics to systems, machine learning, security, approximate computing, and emerging technologies, with a special focus on ferroelectric devices. He holds seven HiPEAC Paper Awards and three best paper nominations at top EDA conferences (DAC'16, DAC'17, and DATE'17) for his work on reliability. He currently serves as an Associate Editor of Integration, the VLSI Journal. He has served on the technical program committees of many major EDA conferences such as DAC, ASP-DAC, and ICCAD, and as a reviewer for many top journals including T-ED, TCAS-I, TVLSI, TCAD, and TC. He has around 85 publications in multidisciplinary research areas across the entire computing stack, from semiconductor physics to circuit design all the way up to computer-aided design and computer architecture.


About the speaker - Yuan-Hao Chang (Johnson Chang), Academia Sinica, Taiwan

Yuan-Hao Chang received the Ph.D. degree in computer science from the Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, in 2009. He is currently a research fellow (equivalent to professor) with tenure at the Institute of Information Science, Academia Sinica, Taipei, Taiwan. His research interests include memory/storage systems, operating systems, embedded systems, and real-time systems. In these fields, he has published more than 120 articles in archival journals and peer-reviewed conferences. He serves as a member of many conference program committees and as a reviewer for various IEEE/ACM transactions and highly cited conferences. He is an associate editor of ACM Transactions on Cyber-Physical Systems (ACM TCPS), and was the program co-chair and general co-chair of the IEEE Non-Volatile Memory Systems and Applications Symposium (NVMSA) in 2017 and 2018, respectively.


Day - 2 (21 Feb 2021),  Tutorial-T4,  Start: 9:00,  End: 10:30, Duration (in minutes): 90

Tutorial Title: High-Performance IP Design Using Pulse Logic

Tutorial Abstract

This tutorial presents pulse logic, a family of self-resetting gates operating on atomic pulses, and its applications in the design of high-performance VLSI systems. Pulse-mode signaling uses pulses to encode both data and the time-of-arrival, enabling simple construction of either synchronous or asynchronous logic. It exhibits noise and timing correction properties that enable robust, high-performance systems with low overall power and footprint, even in older technology nodes. In particular, it offers some relief from PVT design dominance and enables simplified realization of otherwise full-custom blocks across a wide range of technology nodes. The first part of this tutorial will present methods for precision timing, distributed feedback clocking and communication strategies utilizing pulse logic. The second part will cover verification strategies for pulse logic circuits.


About the speaker - Forrest Brewer, Professor, Department of Electrical and Computer Engineering, University of California, Santa Barbara, Senior Member IEEE. 

Forrest Brewer has 32 years of experience as a professor and has published over 100 technical papers in the general area of VLSI. He has actively worked on pulse gates since 2004 and has been involved in the design of pulse-gate-enabled designs from 0.6µm to 22nm FDSOI nodes, including 130nm radiation-hardened designs for CERN LHC/LMS.


About the speaker - Prashansa Mukim, Postdoctoral Associate, University of Maryland/NIST

Prashansa Mukim received her PhD in Electrical and Computer Engineering from the University of California at Santa Barbara in December 2020. Her research was on the analysis and design of precision timing circuits using pulse-mode signaling. She is currently working as a Postdoctoral Associate at the University of Maryland and the National Institute of Standards and Technology (NIST), Gaithersburg.


About the speaker - David Mc Carthy, PhD Candidate, Department of Electrical and Computer Engineering, University of California, Santa Barbara, Student Member

David Mc Carthy is a PhD candidate in Electrical and Computer Engineering at the University of California at Santa Barbara. His research is on electronic design automation for asynchronous circuits. Before that, he received his B.E. and M.Eng.Sc. degrees from University College Cork in Ireland in 2013 and 2014, respectively.


Day - 2 (21 Feb 2021),  Tutorial-T5,  Start: 10:45,  End: 12:15, Duration (in minutes): 90

Tutorial Title: Advanced Delay Calculation and Variation Modelling Techniques for Lower Nodes and Voltages

Tutorial Abstract

Static Timing Analysis (STA) is a widely used technique to verify that a design can operate at the desired clock speed and under the desired operating conditions while meeting the design constraints. The accuracy of STA depends on the accuracy of the delays of the design components (gates and interconnects). However, complete transistor-level simulation of an entire design is practically impossible, so it becomes imperative to have delay calculation that matches SPICE accuracy without performing circuit simulation. Section 1 covers base delay calculation: gate delay models, NLDM (Non-Linear Delay Model), CCS (Composite Current Source) models, CCS-N (CCS-Noise) models, calculation methods, lumped/look-up-based delay calculation, the C-effective method, current-based delay calculation, delay calculation using CCS-N models, and slew propagation. Section 2 covers crosstalk delay calculation: the impact of noise on delay calculation, timing windows, attacker selection, and SI delay computation. Section 3 covers statistical on-chip variation (SOCV): an introduction to SOCV, library modeling for SOCV, STA challenges with SOCV, moment propagation in STA, and timing yield computation. Section 4 covers advanced delay calculation: waveform effects, the back-Miller effect, multi-input switching (MIS), and IR drop.
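As a small illustration of the look-up-based (NLDM-style) delay calculation mentioned above: Liberty NLDM tables characterize cell delay on a grid of (input slew, output load) points, and the timing engine interpolates between grid points. The sketch below is not any vendor's implementation; the axis values and delay entries are invented for illustration.

```python
# NLDM-style cell delay lookup via bilinear interpolation on a
# (input slew, output load) grid. All numeric values are illustrative.
import bisect

slew_axis = [0.01, 0.05, 0.20]      # ns, input transition times
load_axis = [0.001, 0.010, 0.100]   # pF, output capacitances
delay_tab = [                        # ns, delay_tab[slew_index][load_index]
    [0.020, 0.045, 0.180],
    [0.030, 0.060, 0.200],
    [0.060, 0.095, 0.250],
]

def nldm_delay(slew, load):
    """Bilinearly interpolate cell delay at (slew, load)."""
    def bracket(axis, x):
        # Index of the lower grid point, and fractional position within the cell.
        i = min(max(bisect.bisect_right(axis, x) - 1, 0), len(axis) - 2)
        t = (x - axis[i]) / (axis[i + 1] - axis[i])
        return i, t
    i, u = bracket(slew_axis, slew)
    j, v = bracket(load_axis, load)
    d00, d01 = delay_tab[i][j], delay_tab[i][j + 1]
    d10, d11 = delay_tab[i + 1][j], delay_tab[i + 1][j + 1]
    return (d00 * (1 - u) * (1 - v) + d01 * (1 - u) * v
            + d10 * u * (1 - v) + d11 * u * v)
```

At grid points the interpolation reproduces the table entry exactly; between them it blends the four surrounding entries, which is the basic lumped-capacitance lookup that C-effective and current-based methods refine.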


About the speaker - Ratnakar Goyal, Software Architect at Cadence Design Systems 

Ratnakar Goyal is currently working as a Software Architect at Cadence Design Systems, where he works on delay calculation with special emphasis on SOCV and advanced delay calculation. He joined Cadence in 2000 and has worked in the areas of timing libraries, static timing analysis, statistical analysis, clock-tree synthesis, parasitic extraction, delay calculation, and crosstalk analysis. He has published 3 papers at international conferences and has been granted 3 patents by the USPTO. He received his M.E. (Computer Science and Engineering) from the Indian Institute of Science, Bangalore, and his B.Tech. (Electronics and Communication Engineering) from NIT, Hamirpur.


Day - 2 (21 Feb 2021),  Tutorial-T6,  Start: 13:15,  End: 14:45, Duration (in minutes): 90

Tutorial Title: Artificial Intelligence - State of the Union

Tutorial Abstract

With the unprecedented success of Artificial Intelligence algorithms in computer vision, language understanding, and recommendation systems, there is ever-increasing interest and investment in this field. AI is changing the way we look at solving some really tough problems. The rapid pace at which AI technology changes is diminishing the divide between research and its deployment. This tutorial will provide an overview of the basic technology behind deep learning, a quick survey of various architectures for AI accelerators, a few state-of-the-art AI algorithms, and key problems in deploying AI in the enterprise.


About the speaker - Saurabh Tiwari, Principal Engineer and Technical Manager in Data Centre Products group (AI Division), Intel Corporation

Saurabh Tiwari is a Principal Engineer and Technical Manager in the Data Centre Products group (AI Division) at Intel Corporation. Saurabh completed his Master's in Computer Engineering from the Indian Institute of Science (IISc) and has 20 years of semiconductor industry experience with multinational companies such as Texas Instruments and Intel Corporation in the field of architecture simulation. Saurabh has one granted patent in memory architecture from the US Patent Office and has been a technical committee chair at well-known conferences such as DVCon. Saurabh received the Intel Achievement Award (IAA), one of the highest recognitions at the company level, for his technical contributions. During his career at Intel, he has contributed strongly to Intel's mainstream microprocessors for laptop computers and to compute accelerator architectures for graphics, imaging, and Artificial Intelligence.

Day - 2 (21 Feb 2021),  Tutorial-T7,  Start: 15:00,  End: 16:30, Duration (in minutes): 90

Tutorial Title: Low Power Design and Predictive Failure Analytics in Silicon in nm Era

Tutorial Abstract

Power has become the key driving force in processor as well as AI-specific accelerator designs, as frequency scale-up is reaching saturation. In order to achieve a low-power system, circuit and technology co-design is essential. This talk focuses on the relevant technology and important circuit techniques for nanoscale VLSI circuits. Achieving low power and high performance simultaneously is always difficult. Technology has seen major shifts from bulk to SOI and then to non-planar devices such as FinFETs and Tri-Gates. This talk analyzes the pros and cons of these technologies from a power perspective and presents various techniques to achieve lower power. As technology pushes towards the sub-7nm era, process variability and geometric variation in devices can cause variation in power. Reliability also plays an important role in the power-performance envelope. This talk also reviews the methodology to capture such effects and describes all the power components. All the key areas of low-power optimization, such as reduction in active power, leakage power, short-circuit power, and collision power, are covered. The use of clock gating, power gating, longer channels, multi-Vt design, stacking, header-footer device techniques, and other methods is described for logic and memory used in processors and AI. Finally, the talk summarizes key challenges in achieving low power. In addition, the tutorial gives a brief overview of predictive failure analytics used in nm technology. Since process and environmental variations impact circuit behaviour, it is important to model their effects to build robust circuits. The tutorial describes how key statistical techniques can be effectively used to analyze and build robust circuits.


About the speaker - Dr. Rajiv V. Joshi, T. J. Watson Research Center, Yorktown Heights, USA

Dr. Rajiv V. Joshi is a research staff member and key technical lead at the T. J. Watson Research Center, IBM. He received his B.Tech. from IIT Bombay, India, his M.S. from MIT, and his Dr. Eng. Sc. from Columbia University. His novel interconnect processes and structures for aluminum, tungsten, and copper technologies are widely used globally in technologies from sub-0.5μm to 5nm. He has successfully led predictive failure analytics techniques for yield prediction as well as the technology-driven SRAM effort at the IBM Server Group. His statistical techniques are tailored for machine learning and AI. He developed novel memory designs which are universally accepted, and he commercialized these techniques. He received 3 Outstanding Technical Achievement Awards (OTAs) and 3 highest Corporate Patent Portfolio awards for licensing contributions, holds 60 invention plateaus, and has over 250 US patents (over 400 including international patents). His interests are in in-memory computation, CNN and DNN accelerators, and quantum computing. He has authored and co-authored over 200 papers, and has given over 45 invited/keynote talks and several seminars, including a keynote talk at the IIT Bombay Techfest event to which Nobel Prize winners were invited. The NY IP Law Association named him "Inventor of the Year" in February 2020. He received the prestigious IEEE Daniel Noble Award for 2018 and the Best Editor Award from the IEEE TVLSI journal. He is a recipient of the 2015 BMM award. He was inducted into the New Jersey Inventors Hall of Fame in August 2014 alongside pioneer Nikola Tesla. He is a recipient of the 2013 IEEE CAS Industrial Pioneer Award and the 2013 Mehboob Khan Award from the Semiconductor Research Corporation; he received the latter award again in 2020 for AI initiatives in the BRIC program funded by SRC. He has won several best paper awards, from ISSCC 1992, ICCAD 2012, ISQED, and VMIC. He is a member of the IBM Academy of Technology and a Master Inventor. He has served as a Distinguished Lecturer for the IEEE CAS and EDS societies.
He is currently a Distinguished Lecturer for CEDA. He is an IEEE, ISQED, and World Technology Network Fellow and a distinguished alumnus of IIT Bombay. He serves on the Board of Governors of IEEE CAS as industrial liaison and as an Associate Editor of TVLSI. He has served on the committees of DAC 2019, AICAS 2019, ISCAS, ISLPED (International Symposium on Low Power Electronic Design), IEEE VLSI Design, IEEE CICC, the IEEE International SOI Conference, ISQED, and the Advanced Metallization Program. He initiated the IBM CAS/EDS symposium at IBM in 2017 and continued it into 2018 with Artificial Intelligence as the focal area. He served as general chair for IEEE ISLPED. He is an industry liaison for universities as part of the Semiconductor Research Corporation, and is also on the industry liaison committee of the IEEE CAS Society.