Keynote: "Open, Secure, Near-Sensor Analytics: A Parallel Ultra-Low Power (PULP) Approach"
Luca Benini (Architecture, LP)
Luca Benini holds the Chair of Digital Circuits and Systems at ETH Zürich and is a Full Professor at the Università di Bologna. Dr. Benini's research interests are in energy-efficient computing systems design, from embedded to high-performance. He is also active in the design of ultra-low-power VLSI circuits and smart sensing microsystems. He has published more than 900 peer-reviewed papers and five books. He is a Fellow of the IEEE and the ACM and a member of the Academia Europaea. He is the recipient of the 2016 IEEE CAS Mac Van Valkenburg Award.
Keynote: "Electronic Design Automation and Machine Learning Hardware"
Raul Camposano (Analog, Verification)
Hardware design is exciting again. Artificial intelligence, machine vision, cybersecurity, photonics, new memories, the Internet of Things, ultra-low-power wireless, 5G, smart power, medical devices, automotive, and other areas are all driving innovative chip design. State-of-the-art chips (integrated circuits) consist of up to tens of billions of transistors and are designed using electronic design automation (EDA) tools. Understanding the inner workings of EDA tools is key to using them effectively. This motivated a class at Stanford University, EE292A, taught in part by the author in spring 2018. It covered cutting-edge optimization and analysis algorithms for the design of complex digital integrated circuits, focusing on working knowledge of the key technologies in EDA and their use. The practical work consisted of designing machine learning hardware for a convolutional neural network, which was implemented on a state-of-the-art FPGA board. This talk summarizes the topics covered and the lessons learned from teaching the class.
Raúl is currently the CEO of Sage, a startup in physical design tools for semiconductors. He is also a partner at Silicon Catalyst, an incubator for semiconductor solutions. He was previously the CEO of Nimbic, a startup that was acquired by Mentor Graphics in 2014. From 1994 to 2007 he was with Synopsys, where he served as Chief Technology Officer, Senior Vice President, and General Manager. Prior to joining Synopsys, Raúl was a Director at the German National Research Center for Computer Science, a Professor of Computer Science at the University of Paderborn, and a Research Staff Member at the IBM T.J. Watson Research Center. Raúl holds a B.S. and an M.S. in Electrical Engineering from the University of Chile and a Ph.D. in Computer Science from the University of Karlsruhe. He has published over 70 technical papers and has written and/or edited three books on electronic design automation. Raúl has contributed significantly to building the design community, serving on numerous editorial, advisory, and company boards. He was also an Advisory Professor at Fudan University and the Chinese Academy of Sciences. He was elected a Fellow of the IEEE in 1999 and to the board of directors of the ESDA (also known as EDAC, the EDA Consortium) in 2012.
"Designing Application Specific Machine Learning/Deep Learning Cores"
Claudionor Coelho (Google / ML support)
A lot of attention has been given to designing programmable ML/DL cores. Inside an SoC, however, there are many opportunities to replace control-flow-intensive applications with well-crafted, dataflow-oriented applications based on ML/DL, improving design time, verification, and functionality. In this talk, we will present design considerations for Application Specific ML/DL cores. These ASML cores cannot be designed using the same techniques as programmable cores, and even the network design needs to be taken into account, as these applications usually carry area, power, and timing constraints.
Claudionor N. Coelho is a serial innovator, working on Machine Learning/Deep Learning hardware acceleration for video compression at Google. Previously, he was the VP of Software Engineering, Machine Learning and Deep Learning at NVXL Technology, where he was responsible for creating new hardware/software acceleration techniques that led to a USD 15 million investment from Alibaba. He did seminal work on AI at Synopsys Inc., was the GM for Brazil at Cadence Design Systems, and before that was the SVP of Engineering at Jasper Design Automation, leading the team that won the Red Herring award for most innovative company in the US in 2013. He has more than 80 papers and patents and was an Associate Professor of Computer Science at UFMG, Brazil. He holds a PhD in EE/CS from Stanford University, an MBA from IBMEC Business School, and an MSCS and a BSEE (summa cum laude) from UFMG, Brazil.
Keynote: "Solving Scalability Problems in EDA through Optimization"
Laleh Behjat (Professor)
Laleh Behjat is a Professor in the Department of Electrical and Computer Engineering, Schulich School of Engineering, University of Calgary, which she joined in 2002. Dr. Behjat's research focuses on developing electronic design automation (EDA) techniques for physical design and on the application of large-scale optimization in EDA. Her research team has won several awards, including 1st and 2nd places in the ISPD 2014 and ISPD 2015 High-Performance Routability-Driven Placement Contests and 3rd place in the DAC Design Perspective Challenge in 2015. She is an Associate Editor of the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems and of Springer's Optimization and Engineering. Dr. Behjat has been developing new and innovative methods to teach EDA to students. She acted as an academic advisor for the Google Technical Development Guide and has won several awards for her efforts in education, including the 2017 Killam Graduate Student Supervision and Mentorship Award. Her team, the Schulich Engineering Outreach Team, was also the recipient of the ASTech Leadership Excellence in Science and Technology Public Awareness Award in 2017.
The integrated circuits industry has seen an explosion in the number of transistors being used while, at the same time, the sizes of these transistors have been shrinking. These opposing forces have, on one hand, forced engineers to solve very large-scale problems involving billions of transistors and, on the other hand, required them to deal with the uncertainties arising from the very small scale of the transistors. In this talk, we will discuss how optimization and machine learning can be used to solve problems at extremely large or extremely small scales. In particular, we will focus on convex optimization techniques and concepts that can be used or adapted to solve the problems seen in the physical design of integrated circuits.
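As a small illustration of the kind of convex formulation used in physical design, classical quadratic placement minimizes the sum of squared wirelengths over two-pin nets with some cells fixed at pad locations; because the Hessian is a graph Laplacian, the optimum is found by solving one sparse linear system. The netlist and pad positions below are hypothetical, chosen only to make the sketch self-contained:

```python
import numpy as np

# Toy quadratic placement (one dimension): minimize
#   sum over edges (i, j) of (x_i - x_j)^2
# with cells 0 and 3 fixed at pad locations. The objective is convex,
# so setting the gradient to zero yields a linear system in the free cells.

edges = [(0, 1), (1, 2), (2, 3), (1, 3)]   # hypothetical two-pin nets
fixed = {0: 0.0, 3: 10.0}                  # hypothetical pad positions
n = 4
free = [i for i in range(n) if i not in fixed]

# Build the Laplacian of the connectivity graph.
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

# Partition into free/fixed blocks and solve L_ff x_f = -L_fc x_c.
Lff = L[np.ix_(free, free)]
Lfc = L[np.ix_(free, list(fixed))]
xc = np.array([fixed[i] for i in fixed])
xf = np.linalg.solve(Lff, -Lfc @ xc)
print(dict(zip(free, xf)))  # optimal positions of the movable cells
```

Real placers scale this idea to millions of cells with sparse solvers and add non-convex terms (density, routability), which is where the large-scale optimization machinery discussed in the talk comes in.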