ReSMiQ Annual Colloquium

May 9, 2019, from 9:00 to 18:00
Concordia University,
Engineering and Visual Arts Building,
1515 Ste-Catherine St. W.,
2nd and 3rd floors

Thank you to our sponsors for their support



Program at a Glance

Room EV 3.309, 3rd floor
 9:00 Welcome address
Mounir Boukadoum, Director

ReSMiQ Seminar I
Making Sense of the Data Trove Hidden in Medical Ultrasound Signals
Hassan Rivaz, Concordia University, Montreal, Canada
9:45 Scientific poster competition - oral presentations, part 1
My project in 90·10⁶ μs
Graduate students will demonstrate their scientific expertise in 90-second oral presentations.
List of competing projects
10:15 Coffee break
10:30 Scientific poster competition - oral presentations, part 2
Room EV 2.260, 2nd floor
11:00 Scientific poster competition - poster presentations
Graduate students will demonstrate their scientific expertise in poster presentations.
Atrium, 2nd floor, Room EV 2.184
12:00 Lunch for participants
Room EV 3.309, 3rd floor

Invited Seminar I
Adaptive and Resilient Circuits for Processors
Keith Bowman, Qualcomm Technologies, Inc., USA, IEEE SSCS Distinguished Lecturer
14:30 Invited Seminar II
Energy Effective Graphene Based Computing
Sorin Cotofana, Delft University of Technology, The Netherlands, IEEE CASS Distinguished Lecturer
15:30 Coffee break
15:45 ReSMiQ Seminar II
IoT Edge Computing using Emerging Technologies
Gabriela Nicolescu, Polytechnique Montréal, Canada




Round table
Internet of Things (IoT) challenges for hardware designers.

Moderator: Mounir Boukadoum (UQAM)

Hassan Rivaz, Concordia University
Gabriela Nicolescu, Polytechnique Montréal
Keith Bowman, Qualcomm Technologies
Sorin Cotofana, Delft University of Technology

17:15 Cocktail reception and awards ceremony



Our Invited Speakers

Hassan Rivaz
Concordia University, Montreal, Canada.

“Making Sense of the Data Trove Hidden in Medical Ultrasound Signals”

Abstract - This talk focuses on developing image analysis techniques that reveal otherwise hidden information in clinical ultrasound signals. Ultrasound is one of the most commonly used imaging modalities because of its low cost and ease of use. However, it has two main drawbacks. First, raw ultrasound data is not suitable for visualization and is therefore converted to the familiar grey-scale images, a process that discards most of its information. Second, these grey-scale images are hard to interpret since they are noisy and collected at oblique angles. In this talk, we tackle these issues by developing techniques that extract clinically useful information, such as tissue elasticity, from the complex raw ultrasound signals, and register them to other modalities, such as Magnetic Resonance Imaging (MRI), to help with their interpretation.

Biography - Hassan Rivaz is an Assistant Professor in Electrical and Computer Engineering and a Concordia University Research Chair in Medical Image Analysis. He is an Associate Editor of IEEE Transactions on Medical Imaging (TMI) and IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control (TUFFC). He served as an Area Chair of the MICCAI 2017 and MICCAI 2018 conferences, co-organized the CuRIOUS MICCAI 2018 Challenge on correction of brain shift using ultrasound, and co-organized the CereVis MICCAI 2018 Workshop on Cerebral Data Visualization. He also co-organized the elastography tutorial at IEEE ISBI 2018 and will co-organize a similar tutorial at ISBI 2019. He directs the IMPACT lab (IMage Processing and Characterization of Tissue), which can be found at http://sonography.ai


Gabriela Nicolescu

Polytechnique Montréal, Montréal, Canada.

“IoT Edge Computing using Emerging Technologies”

Abstract - The current trend in the Internet of Things (IoT) is to design processing subsystems on the edge that are smarter, smaller, and more autonomous. The Internet of Things of the future is therefore expected to be not a collection of passive devices sending data to a server, but rather a distributed computing fabric in which many of the IoT devices and subsystems themselves process and analyze data and act autonomously. Typical target applications include video-processing systems for smart cars, video surveillance cameras, mobile phones, home personal assistants, drones, healthcare devices, industrial monitoring and control, etc. Moving intensive computation to the end-points promises multiple benefits for IoT applications: low latency, reduced bandwidth requirements, and improved privacy, security, and reliability. These benefits are crucial for mission-critical IoT applications, which will go beyond the current critical applications in the healthcare, industrial, and power systems industries. They are evolving to encompass a new breed of applications that are required to work as anticipated, without failure, every time, in areas such as wearables, smart cities, smart homes, and others.

The key enablers for this new paradigm will be innovative programmable devices and subsystems able to execute multiple complex algorithms under tight timing and power constraints. The current requirements are very high: up to 10 Tera Operations Per Second (TOPS) per Watt. These challenges demand innovation in the design process, through new architectures based on emerging technologies and new system-level methods that enable designers to take advantage of these architectures. In this presentation, we will discuss innovative architecture-level and system-level solutions for future intensive computation at the edge of the IoT.

Biography - Gabriela Nicolescu obtained her MSc degree from Politehnica University of Bucharest and her PhD degree, in 2002, from INPG (Institut National Polytechnique de Grenoble) in France. She has been working at École Polytechnique de Montréal (Canada) since 2003, where she is a professor in the Computer and Software Engineering Department. Dr. Nicolescu's research interests are in the field of programming, modeling, and simulation of advanced systems. She has edited six books in the field and is the author of more than 150 articles in journals, international conferences, and book chapters.


Keith Bowman

Principal Engineer and Manager at Qualcomm Technologies, Inc., USA.
IEEE SSCS Distinguished Lecturer

“Adaptive and Resilient Circuits for Processors”

Abstract - Dynamic device, circuit, and system parameter variations degrade processor performance, energy efficiency, and yield across all market segments, ranging from small embedded cores in an Internet of Things (IoT) device to large multicore servers. This lecture introduces the primary variations during the processor operational lifetime, including supply voltage drops, temperature changes, transistor and interconnect aging, radiation-induced soft errors, and workload fluctuations. This presentation then describes the negative impact of these variations on processor logic and memory across a wide range of voltage and clock frequency operating conditions. To mitigate the adverse effects of dynamic variations, this lecture presents adaptive and resilient circuits while highlighting the key design trade-offs and testing implications for product deployment.

Biography - Keith A. Bowman is a Principal Engineer and Manager in the Processor Research Team at Qualcomm Technologies, Inc. in Raleigh, NC, USA. He is responsible for researching and developing circuit technologies for enhancing the performance and energy efficiency of Qualcomm processors. He pioneered the invention, design, and test of Qualcomm's first commercially successful circuit for mitigating the adverse effects of supply voltage drops. He received the B.S. degree from North Carolina State University in 1994 and the M.S. and Ph.D. degrees from the Georgia Institute of Technology in 1995 and 2001, respectively, all in electrical engineering. From 2001 to 2013, he worked in the Technology Computer-Aided Design (CAD) Group and the Circuit Research Lab at Intel Corporation in Hillsboro, OR, USA. In 2013, he joined Qualcomm Technologies, Inc.

Dr. Bowman has published over 80 technical papers in refereed conferences and journals, authored one book chapter, received 19 patents, and presented 38 tutorials on variation-tolerant circuit designs.  He received the 2016 Qualcomm Corporate Research and Development (CRD) Distinguished Contributor Award for Technical Contributions, representing CRD’s highest recognition, for the pioneering invention of the auto-calibrating adaptive clock distribution circuit, which significantly enhances processor performance, energy efficiency, and yield and is integral to the success of the Qualcomm® Snapdragon™ 820 and future processors.  He was the Technical Program Committee (TPC) Chair and the General Conference Chair for ISQED in 2012 and 2013, respectively, and for ICICDT in 2014 and 2015, respectively.  Since 2016, he has served on the ISSCC TPC. 


Sorin Cotofana
Delft University of Technology, The Netherlands.
IEEE CASS Distinguished Lecturer

“Energy Effective Graphene Based Computing”

Abstract - In this presentation, we argue, and provide simulation evidence based on the Non-Equilibrium Green's Function Landauer formalism, that in spite of graphene's lack of a bandgap, Graphene Nanoribbons (GNRs) can support energy-effective computing. We start by demonstrating that: (i) a band gap can be opened by means of GNR topology, and (ii) a GNR's conductance can be molded to a desired functionality, i.e., 2- and 3-input AND, NAND, OR, NOR, XOR, and XNOR, via shape and electrostatic interaction. Afterward, we introduce a generic GNR-based Boolean gate structure composed of a pull-up GNR performing the gate's Boolean function and a pull-down GNR performing the inverted Boolean function, and, by properly adjusting the GNRs' dimensions and topology, we design and evaluate, by means of SPICE simulations, an inverter, a buffer, and 2-input GNR-based AND, NAND, and XOR gates. When compared with state-of-the-art graphene FET and CMOS counterparts, the GNR-based gates outperform their challengers, with, e.g., up to 6x smaller propagation delay and two orders of magnitude smaller power consumption, while requiring one to two orders of magnitude smaller active area footprint than 7 nm CMOS equivalents. Finally, to gain better insight into the practical implications of the proposed approach, we present GNR designs for a Full Adder (FA) and an SRAM cell, as these are fundamental components in the construction of any computation system. For an effective FA implementation, we introduce a 3-input MAJORITY gate, which, apart from directly computing the FA's carry-out, is an essential element in the implementation of Error-Correcting Code codecs; it outperforms a 7 nm CMOS equivalent carry-out calculation circuit by two and three orders of magnitude in delay and power consumption, respectively, while requiring two orders of magnitude less area.
The proposed FA exhibits 6x smaller delay and three orders of magnitude less power consumption, while requiring two orders of magnitude less area than a 7 nm FinFET CMOS counterpart. Moreover, because of the effective carry-out circuitry, a GNR-based n-bit Ripple Carry Adder, whose delay is linear in the carry-out path delay, will be about 10²x faster than an equivalent CMOS implementation. The GNR-based SRAM cell provides slightly better resilience to DC noise, while performance-wise it has a 3x smaller delay, consumes two orders of magnitude less power, and requires one order of magnitude less area than the CMOS equivalent. These results clearly indicate that the proposed GNR-based approach opens a promising avenue towards future competitive carbon-based nanoelectronics.

Biography - Sorin Cotofana (M'93-SM'00-F'17) received the MSc degree in Computer Science from the "Politehnica" University of Bucharest, Romania, and the PhD degree in Electrical Engineering from Delft University of Technology, The Netherlands. He is currently with the Electrical Engineering, Mathematics and Computer Science Faculty of Delft University of Technology, Delft, The Netherlands. His current research focuses on: (i) the design and implementation of dependable/reliable systems out of unpredictable/unreliable components; (ii) aging assessment/prediction and lifetime reliability-aware resource management; and (iii) unconventional computation paradigms and computation with emerging nano-devices. He has co-authored more than 250 papers in peer-reviewed international journals and conferences, and received 12 best paper awards at international conferences, e.g., the 2012 IEEE Conference on Nanotechnology, the 2012 ACM/IEEE International Symposium on Nanoscale Architectures, the 2005 IEEE Conference on Nanotechnology, and the 2001 International Conference on Computer Design. He served as Associate Editor for IEEE Transactions on Circuits and Systems I (2009-2011) and IEEE Transactions on Nanotechnology (2008-2014), as a member of the Senior Editorial Board of the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2016-2017), as a Steering Committee member for IEEE Transactions on Multi-Scale Computing Systems (2014-2018), as Chair of the Giga-Nano IEEE CASS Technical Committee (2013-2015), and as the IEEE Nano Council CASS representative (2013-2014), and has been actively involved as a reviewer, Technical Program Committee (TPC) member, and TPC (track) and general (co-)chair in the organization of numerous international conferences. He is currently Associate Editor-in-Chief and Senior Editor for IEEE Transactions on Nanotechnology and Associate Editor for IEEE Transactions on Computers. He is an IEEE Fellow (Circuits and Systems Society (CASS) and Computer Society) and a HiPEAC member.

