Program

Keynote Speakers

Keynote: "Open, Secure, Near-Sensor Analytics: A Parallel Ultra-Low Power (PULP) Approach"

...

Luca Benini
ETH Zürich / Università di Bologna

Abstract:

Edge Artificial Intelligence is the new megatrend, as privacy concerns and network bandwidth/latency bottlenecks prevent cloud offloading of sensor analytics functions in many application domains, from autonomous driving to advanced prosthetics. Hence, we need to push data analytics and AI functionality toward sensors and actuators while complying with the ensuing low-power, low-cost requirements. In this talk I will give an overview of recent efforts in developing systems-on-chip capable of significant analytics and AI functions within the limited power budget of traditional microcontrollers. These "extreme edge analytics" platforms enable exciting research and business opportunities in many application domains.
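One reason such platforms can fit a microcontroller-class power budget is their reliance on low-precision arithmetic: an 8-bit multiply-accumulate costs far less energy than a floating-point one. Below is a minimal Python sketch of the int8 quantize/accumulate/rescale pattern common in edge inference; it is an illustration with hypothetical data, not code from the PULP project.

# Toy sketch of 8-bit quantized inference arithmetic, the kind of
# low-precision kernel that lets analytics fit a microcontroller-class
# power budget (illustrative only; hypothetical weights/activations).
import numpy as np

def quantize(x, scale):
    """Map float values to int8 with a per-tensor scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
w = rng.normal(size=64).astype(np.float32)   # hypothetical weights
a = rng.normal(size=64).astype(np.float32)   # hypothetical activations

sw, sa = np.max(np.abs(w)) / 127, np.max(np.abs(a)) / 127
qw, qa = quantize(w, sw), quantize(a, sa)

# int8 x int8 multiply-accumulate in a 32-bit accumulator, then rescale:
acc = np.sum(qw.astype(np.int32) * qa.astype(np.int32))
print(float(acc) * sw * sa, "approx.", float(np.dot(w, a)))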

Bio:

Luca Benini holds the Chair of Digital Circuits and Systems at ETH Zürich and is a Full Professor at the Università di Bologna. Dr. Benini's research interests are in energy-efficient computing systems design, from embedded to high-performance. He is also active in the design of ultra-low-power VLSI circuits and smart sensing micro-systems. He has published more than 900 peer-reviewed papers and five books. He is a Fellow of the IEEE and the ACM and a member of the Academia Europaea. He is the recipient of the 2016 IEEE CAS Mac Van Valkenburg Award.


Keynote: "Electronic Design Automation and Machine Learning Hardware"

...

Raul Camposano
CEO of Sage

Abstract:

Hardware design is exciting again. Artificial intelligence, machine vision, cybersecurity, photonics, new memories, the Internet of Things, ultra-low-power wireless, 5G, smart power, medical devices, automotive, and more are all areas driving innovative chip design. State-of-the-art chips (integrated circuits) consist of up to tens of billions of transistors and are designed using electronic design automation (EDA) tools. Understanding the inner workings of EDA tools is key to using them effectively. This motivated a class at Stanford University, EE292A, taught in part by the author in spring 2018. It covered cutting-edge optimization and analysis algorithms for the design of complex digital integrated circuits, focusing on working knowledge of the key technologies in EDA and their use. The practical work consisted of designing machine learning hardware for a convolutional neural network, which was implemented on a state-of-the-art FPGA board. This talk summarizes the topics covered and the lessons learned from teaching the class.

Bio:

Raúl is currently the CEO of Sage, a startup in physical design tools for semiconductors. He is also a partner at Silicon Catalyst, an incubator for semiconductor solutions. He was previously the CEO of Nimbic, a startup that was acquired by Mentor Graphics in 2014. From 1994 to 2007 he was with Synopsys, where he served as Chief Technology Officer, Senior Vice President, and General Manager. Prior to joining Synopsys, Raúl was a Director at the German National Research Center for Computer Science, a Professor of Computer Science at the University of Paderborn, and a Research Staff Member at the IBM T.J. Watson Research Center. Raúl holds a B.S. and M.S. in Electrical Engineering from the University of Chile and a Ph.D. in Computer Science from the University of Karlsruhe. He has published over 70 technical papers and has written and/or edited three books on electronic design automation. Raúl has contributed significantly to building the design community, serving on numerous editorial, advisory, and company boards. He was also an Advisory Professor at Fudan University and the Chinese Academy of Sciences. He was elected a Fellow of the IEEE in 1999 and to the board of directors of ESDA (aka EDAC, the EDA Consortium) in 2012.

Keynote: "Solving Scalability Problems in EDA by Using Optimization and Machine Learning"

...

Laleh Behjat
University of Calgary

Abstract:

The integrated circuits industry has seen an explosion in the number of transistors in use, while at the same time the sizes of those transistors have been shrinking. These opposing forces have, on one hand, forced engineers to solve very large-scale problems involving billions of transistors and, on the other hand, required them to deal with the uncertainties arising from the very small scale of each transistor. In this talk, we will discuss how optimization and machine learning can be used to solve problems at extremely large and extremely small scales. We will first focus on convex optimization techniques and concepts that can be used or adapted to solve problems that have a good mathematical model. Then, we will discuss the shortcomings of these optimization techniques and how machine learning can be used to address them.
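To make the convex-optimization angle concrete: classic quadratic placement minimizes total squared wirelength, a convex objective whose optimum is found by solving a linear system. The following toy Python sketch, with two movable cells and two fixed pads, is an illustrative example of that idea, not code from the talk.

# Toy quadratic (convex) placement: two movable cells on a line,
# connected to each other and to fixed I/O pads. Minimizing total
# squared wirelength reduces to solving the linear system Q x = b.
import numpy as np

pads = {-1: 0.0, -2: 10.0}          # fixed pad positions (hypothetical)
nets = [(-1, 0), (0, 1), (1, -2)]   # two-pin nets; negative ids are pads

n = 2                               # number of movable cells
Q = np.zeros((n, n))                # Laplacian-like quadratic form
b = np.zeros(n)                     # linear term from fixed pads

for i, j in nets:
    for a, other in ((i, j), (j, i)):
        if a >= 0:                  # 'a' is a movable cell
            Q[a, a] += 1.0
            if other >= 0:
                Q[a, other] -= 1.0
            else:
                b[a] += pads[other]

x = np.linalg.solve(Q, b)           # optimal positions
print(x)                            # -> [3.333..., 6.666...]

Because the objective is convex, this optimum is unique and cheap to compute; the hard parts in practice are everything the sketch omits, such as overlap removal, congestion, and timing.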

Bio:

Laleh Behjat is a Professor in the Department of Electrical and Computer Engineering, Schulich School of Engineering, University of Calgary, which she joined in 2002. Dr. Behjat's research focuses on developing electronic design automation (EDA) techniques for physical design and on the application of large-scale optimization in EDA. Her research team has won several awards, including 1st and 2nd places in the ISPD 2014 and ISPD 2015 High-Performance Routability-Driven Placement Contests and 3rd place in the 2015 DAC Design Perspective Challenge. She is an Associate Editor of the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems and of the Springer journal Optimization and Engineering. Dr. Behjat has been developing new and innovative methods to teach EDA to students. She acted as an academic advisor for the Google Technical Development Guide and has won several awards for her efforts in education, including the 2017 Killam Graduate Student Supervision and Mentorship Award. Her team, the Schulich Engineering Outreach Team, also received the ASTech Leadership Excellence in Science and Technology Public Awareness Award in 2017.

Invited Talks

"Designing Application Specific Machine Learning/Deep Learning Cores"

...

Claudionor Coelho
Sr. Research Scientist (ML/DL), Google

Abstract:

A lot of attention has been given to designing programmable ML/DL cores. Inside an SoC, however, there are many opportunities to replace control-flow-intensive applications with well-crafted, dataflow-oriented applications based on ML/DL, improving design time, verification, and functionality. In this talk, we will present considerations for designing application-specific ML/DL cores: these cores cannot be designed using the same techniques as programmable cores, and even the network design needs to be taken into account, as these applications usually carry area, power, and timing constraints.

Bio:

Claudionor N. Coelho is a serial innovator, working on Machine Learning/Deep Learning hardware acceleration for video compression at Google. Previously, he was the VP of Software Engineering, Machine Learning and Deep Learning at NVXL Technology, where he was responsible for creating new hardware/software acceleration techniques that led to a USD 15 million investment from Alibaba. He did seminal work on AI at Synopsys Inc., was the GM for Brazil at Cadence Design Systems, and before that was the SVP of Engineering at Jasper Design Automation, leading the team that was named a Red Herring most innovative company in the US in 2013. He has more than 80 papers and patents, and he was an Associate Professor of Computer Science at UFMG, Brazil. He holds a PhD in EE/CS from Stanford University, an MBA from IBMEC Business School, and an MSCS and BSEE (summa cum laude) from UFMG, Brazil.


"Dance Partner Robots and a Co-worker Robot PaDY"

...

Kazuhiro Kosuge
Professor, Tohoku University, Japan

Abstract:

A dance partner robot, PBDR (Partner Ballroom Dance Robot), developed in our laboratory, was unveiled and gave dance demonstrations at EXPO 2005 in Aichi, Japan. PBDR dances the waltz as a female dancer together with a human male dancer. One of the key research issues in its development was how to read the male dancer's lead, that is, how to estimate his intention. RoboDANTE (Robot DANce TEacher) is a dance instructor robot that reacts more actively to its partner's behavior, based on a new concept of Progressive Teaching. As a research platform for Physical Human-Robot Interaction (pHRI), the dance partner robot has given us many opportunities to reconsider issues relating to pHRI. After discussing these issues, PaDY, a co-worker robot for automotive factories, will be introduced as an example of a real-world application of pHRI. We will also discuss how recent machine learning technology makes the co-worker robot realistic and applicable in many new fields.

Bio:

Dr. Kazuhiro Kosuge is a Professor in the Department of Robotics at Tohoku University, Japan. He received the B.S., M.S., and Ph.D. in Control Engineering from the Tokyo Institute of Technology in 1978, 1980, and 1988, respectively. From 1980 through 1982, he was a Research Staff member in the Production Engineering Department of DENSO Co., Ltd. From 1982 through 1990, he was a Research Associate in the Department of Control Engineering at the Tokyo Institute of Technology. From 1990 to 1995, he was an Associate Professor at Nagoya University. Since 1995, he has been at Tohoku University. He is an IEEE Fellow, a JSME Fellow, a SICE Fellow, an RSJ Fellow, a JSAE Fellow, and a member of IEEE-HKN. He served as President of the IEEE Robotics and Automation Society for 2010-2011 and as Division X Director on the IEEE Board of Directors for 2015-2016. He is the 2019 IEEE Vice President-elect for Technical Activities. He received the Medal of Honor with Purple Ribbon from the Government of Japan in 2018.


"5G Radio Frequency Front-End: be wide-band, be low-power, be low-cost !"

...

Francois Rivet
University of Bordeaux, France

Abstract:

5G is about to arrive in our everyday lives, and the main access to the spectrum will be sub-6 GHz. RF designers face an increasingly challenging paradigm: being wide-band, low-power, and low-cost at the same time. This talk introduces a new RF front-end design methodology called "Design by Mathematics". We will focus on the example of the Riemann Pump, an integrated CMOS wide-band Arbitrary Waveform Generator (AWG) for carrier aggregation, designed to target the sub-6 GHz 5G standard. Measurements of a TSMC 65 nm CMOS implementation demonstrate the ability to generate 5G multi-carrier modulated signals with a power consumption under 1 mW and a very small die area.
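For intuition about what such a generator does: the Riemann Pump builds its output as piecewise-linear segments, selecting at each clock period one of a small set of quantized slopes, realized as programmable currents charging a capacitor. The Python sketch below mimics that slope-quantized tracking idea numerically; it is a toy with entirely hypothetical parameters, not the talk's material.

# Toy sketch of slope-quantized waveform tracking, the idea behind a
# Riemann-Pump-style AWG: at each clock tick, pick the quantized slope
# (one of a few charging currents) that best tracks the target signal.
import numpy as np

fclk = 100.0                        # "clock" rate, arbitrary units
t = np.arange(0, 1, 1 / fclk)
target = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)

slopes = np.linspace(-40, 40, 8)    # 8 quantized slopes (3 control bits)
dt = 1 / fclk

out = np.zeros_like(target)
for k in range(1, len(t)):
    # Greedy rule: choose the slope whose next piecewise-linear point
    # lands closest to the target sample.
    candidates = out[k - 1] + slopes * dt
    out[k] = candidates[np.argmin(np.abs(candidates - target[k]))]

print("max tracking error:", np.max(np.abs(out - target)))

With enough slope levels and a fast enough clock, the piecewise-linear output stays within a small bound of the target, which is the intuition behind generating wide-band signals this way.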

Bio:

Dr. François Rivet is an Associate Professor in the EE department of the Bordeaux Institute of Technology and the IMS laboratory in Bordeaux, France. His research activities concern the design of integrated circuits and systems for wireless communications. Since 2014, he has led the research team "Circuits and Systems". He has contributed to the design of disruptive communication circuits, developing a Design by Mathematics methodology. He has published 80 technical papers and holds 14 patents. He serves on several technical and/or steering committees (IEEE RFIC, IEEE ESSCIRC-ESSDERC, IEEE ICECS, IEEE SBCCI, IEEE ASICON, ...).