Research Talk

RESEARCH KEYNOTE SERIES

                                   Michael Bronstein

                                   (Professor, University of Oxford)

 
 
Bio:

Michael Bronstein is the DeepMind Professor of AI at the University of Oxford and Head of Graph Learning Research at Twitter. He was previously a professor at Imperial College London and held visiting appointments at Stanford, MIT, and Harvard, and has also been affiliated with three Institutes for Advanced Study (at TUM as a Rudolf Diesel Fellow (2017-2019), at Harvard as a Radcliffe fellow (2017-2018), and at Princeton as a short-time scholar (2020)). Michael received his PhD from the Technion in 2007. He is the recipient of the Royal Society Wolfson Research Merit Award, Royal Academy of Engineering Silver Medal, five ERC grants, two Google Faculty Research Awards, and two Amazon AWS ML Research Awards. He is a Member of the Academia Europaea, Fellow of IEEE, IAPR, BCS, and ELLIS, ACM Distinguished Speaker, and World Economic Forum Young Scientist. In addition to his academic career, Michael is a serial entrepreneur and founder of multiple startup companies, including Novafora, Invision (acquired by Intel in 2012), Videocites, and Fabula AI (acquired by Twitter in 2019).

Talk Title: Physics-inspired learning on graphs

Abstract: The message-passing paradigm has been the “battle horse” of deep learning on graphs for several years, making graph neural networks a big success in a wide range of applications, from particle physics to protein design. From a theoretical viewpoint, it established the link to the Weisfeiler-Lehman hierarchy, making it possible to analyse the expressive power of GNNs. We argue that the very “node-and-edge”-centric mindset of current graph deep learning schemes may hinder future progress in the field. As an alternative, we propose physics-inspired “continuous” learning models that open up a new trove of tools from the fields of differential geometry, algebraic topology, and differential equations, so far largely unexplored in graph ML.
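The "continuous" view treats feature propagation as a diffusion process on the graph, with a message-passing layer recovered as one explicit Euler step of the heat equation dx/dt = -Lx. The following minimal sketch illustrates that idea (the toy graph, step size, and iteration count are illustrative choices, not from the talk):

```python
import numpy as np

# Toy graph: a 4-node path 0-1-2-3, given by its adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # combinatorial graph Laplacian

x = np.array([4.0, 0.0, 0.0, 0.0])  # initial scalar feature per node
tau = 0.1                           # step size (must be < 2 / lambda_max)

# Explicit Euler discretization of the heat equation dx/dt = -L x.
# Each step is a simple message-passing layer: nodes move toward
# the values of their neighbors.
for _ in range(500):
    x = x - tau * (L @ x)

# Diffusion preserves the feature mean and smooths differences away,
# so all entries approach the mean of the initial features (1.0 here).
print(x)
```

Viewing the layer index as a continuous time variable is what lets tools from differential equations (stability, conservation laws, implicit solvers) be brought to bear on graph learning.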


 

                                          Aldo Faisal

                                   (Professor, Imperial College London)

Bio:

Professor Aldo Faisal (@FaisalLab) is the Professor of AI & Neuroscience at the Dept. of Computing and the Dept. of Bioengineering at Imperial College London (UK) and Chair of Digital Health at the University of Bayreuth (Germany). In 2021 he was awarded a prestigious 5-year UKRI Turing AI Fellowship. Since 2019, Aldo has been the founding director of the £20M UKRI Centre for Doctoral Training in AI for Healthcare, and he leads the Behaviour Analytics Lab at the Data Science Institute, London. Aldo works at the interface of Machine Learning, Medicine, and translational Biomedical Engineering to help people in disease and in health. He is currently one of the few engineers worldwide who lead their own clinical trials to validate their technology. In this space his work focuses on Digital Biomarkers and AI for medical intervention (Makin et al., Nat Biomed Eng; Komorowski et al., Nat Med, 2018; Gottesman et al., Nat Med, 2019). His work has received a number of prizes and awards, including the $50,000 Research Discovery Prize by the Toyota Foundation.

Talk Title: Ethomics - the AI-enabled way to understand human behaviour

Abstract: Here we present a novel AI-driven approach to human behaviour analytics called Ethomics (Kadirvelu and Faisal, 2023, Nature Medicine; Ricotti and Faisal, 2023, Nature Medicine) that allows unprecedented resolution in diagnosing and measuring disease progression. We apply the same AI framework to two very different degenerative diseases affecting adults (Friedreich's ataxia) and children (Duchenne muscular dystrophy). Crucially, the AI method detects imperceptible changes in human behaviour, which allows us to measure gene transcription rates from movement changes alone and to predict each individual patient's disease trajectory up to a year into the future. Our ethomics technology therefore allows us to dramatically de-risk and accelerate drug development for rare diseases, as it allows us to cut the duration of clinical trials in half and requires only a fraction of the patients needed by current "gold"-standard methods to determine whether a treatment is working.


 


                                      Faramarz Fekri

                                         (Professor, Georgia Tech)

Bio:

Faramarz Fekri received the B.Sc. and M.Sc. degrees from Sharif University of Technology, Tehran, Iran. Before coming to Georgia Tech, he was with the Telecommunication Research Laboratories (TRLabs), Calgary, Canada. In Fall 1996, he started his Ph.D. studies at Georgia Tech. After receiving his Ph.D. degree in Summer 2000, Dr. Fekri joined the faculty of the School of ECE at Georgia Tech, where he is now a full professor. Prof. Fekri is the founder of the SPC Research Lab, which has a multidisciplinary flavor, working in three intertwined areas: Sensing, Processing, and Communications. Dr. Fekri serves on the editorial board of the IEEE Transactions on Molecular, Biological, and Multi-Scale Communications, and on both the Executive and Technical Program Committees of several IEEE conferences. In the past, he served on the editorial boards of the IEEE Transactions on Communications and the Elsevier journal PHYCOM. Dr. Fekri has received several prestigious awards, including the Outstanding Junior Faculty Member Award of the School of ECE, the Southeastern Center for Electrical Engineering Education (SCEEE) Young Faculty Development Award, the National Science Foundation CAREER Award, the Cutting Edge Research Award (co-recipient) from the College of Engineering, and the Best Ph.D. Thesis Award of the Georgia Institute of Technology chapter of Sigma Xi. He is an IEEE Fellow.

Talk Title: A Machine Learning Framework for Distributed Functional Compression over Wireless Channels

Abstract: Deep learning has revolutionized machine learning and has expanded its reach into many diverse fields, from autonomous driving to augmented reality and distributed IoT devices. Not unexpectedly, this has also led to deep-learning-based design of communication systems. In particular, in all these applications, we often need to compute specific target functions that do not have any simple form, e.g., obstacle detection, object recognition, etc. However, traditional cloud-based methods that focus on transferring data to a central location, either for training or inference, place enormous strain on wireless network resources. To address this, we develop a machine learning framework for distributed functional compression over wireless channels. We advocate that our machine learning framework can, by design, compute any arbitrary function for the desired functional compression task in IoT. In particular, the raw sensory data are never transferred to a central node for training or inference, thus reducing communication overhead. For these algorithms, we provide theoretical convergence guarantees and upper bounds on communication. Our simulations show that the learned encoders and decoders for functional compression perform significantly better than traditional approaches and are robust to channel condition changes and sensor outages. Compared to the cloud-based scenario, our algorithms reduce channel use by two orders of magnitude. Finally, we turn our attention to the problem of privacy in distributed functional compression, where the node(s) seek to hide private attributes correlated with the function value. We first study the single-node-and-receiver problem. We then return to the distributed functional compression problem and devise a framework that demonstrates a state-of-the-art privacy-utility trade-off in the distributed scenario.
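The learned encoders and decoders themselves are beyond a short snippet, but the core idea (compress for the target function, not for the raw data) can be illustrated with hand-designed quantizers. In this toy sketch (function choice, bit budget, and all names are my own, not from the talk), each sensor transmits only a few bits, and the fusion node computes the function directly from the received indices:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, bits=4):
    """Sensor-side encoder: quantize readings in [0, 1) to a few bits.
    Only these small indices are transmitted, never the raw samples."""
    levels = 2 ** bits
    return np.minimum((x * levels).astype(int), levels - 1)

def decoder(indices, bits=4):
    """Fusion-side decoder: estimate the target function (here: max)
    directly from the received indices via mid-point dequantization."""
    levels = 2 ** bits
    centers = (indices + 0.5) / levels
    return centers.max()

# Two distributed sensors observe readings in [0, 1).
x = rng.random(2)
estimate = decoder(encoder(x))
error = abs(estimate - x.max())

# Quantization is monotone, so the estimate lies in the cell of the true
# maximum; the error is bounded by half a cell width, 1 / 2**(bits+1).
print(error)
```

A learned scheme replaces both hand-designed maps with neural networks trained end-to-end through a channel model, which is what allows arbitrary functions and robustness to outages, as the abstract describes.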


 

Yuejie Chi

       (Professor, Carnegie Mellon University)

Bio:

Dr. Yuejie Chi is a Professor in the Department of Electrical and Computer Engineering, and a faculty affiliate with the Machine Learning Department and CyLab at Carnegie Mellon University. She received her Ph.D. and M.A. from Princeton University, and her B.Eng. (Hon.) from Tsinghua University, all in Electrical Engineering. Her research interests lie in the theoretical and algorithmic foundations of data science, signal processing, machine learning, and inverse problems, with applications in sensing, imaging, decision making, and societal systems, broadly defined. Among others, Dr. Chi received the Presidential Early Career Award for Scientists and Engineers (PECASE) and the inaugural IEEE Signal Processing Society Early Career Technical Achievement Award for contributions to high-dimensional structured signal processing. She is an IEEE Fellow (Class of 2023) for contributions to statistical signal processing with low-dimensional structures.

Talk Title: Accelerating Ill-conditioned Low-rank Estimation via Scaled Gradient Descent
 
Abstract: Many problems encountered in science and engineering can be formulated as estimating a low-rank object from incomplete, and possibly corrupted, linear measurements; prominent examples include matrix completion and tensor completion. Through the lens of matrix and tensor factorization, one of the most popular approaches is to employ simple iterative algorithms such as gradient descent to recover the low-rank factors directly, which allow for small memory and computation footprints. However, the convergence rate of gradient descent depends linearly, and sometimes even quadratically, on the condition number of the low-rank object, and therefore, slows down painstakingly when the problem is ill-conditioned. This talk introduces a new algorithmic approach, dubbed scaled gradient descent (ScaledGD), that provably converges linearly at a constant rate independent of the condition number of the low-rank object, while maintaining the low per-iteration cost of gradient descent. A nonsmooth variant of ScaledGD provides further robustness to corruptions by optimizing the least absolute deviation loss. In addition, ScaledGD continues to admit fast global convergence, again almost independent of the condition number, from a small random initialization when the rank is over-specified. In total, ScaledGD highlights the power of appropriate preconditioning in accelerating nonconvex statistical estimation, where the iteration-varying preconditioners promote desirable invariance properties of the trajectory with respect to the symmetry in low-rank factorization.
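The update the talk describes preconditions each factor's gradient by the inverse Gram matrix of the other factor. Below is a minimal sketch for the full-observation matrix factorization case (problem size, step size, and the perturbed initialization are my own illustrative choices; the talk covers far more general settings such as matrix and tensor completion):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 50, 2

# Ground-truth rank-2 matrix with condition number 100.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
sigma = np.array([10.0, 0.1])
M = U @ np.diag(sigma) @ V.T

def scaled_gd(L, R, eta=0.5, iters=100):
    """ScaledGD for f(L, R) = 0.5 * ||L R^T - M||_F^2: gradient steps
    preconditioned by (R^T R)^{-1} and (L^T L)^{-1}, respectively."""
    for _ in range(iters):
        G = L @ R.T - M                                 # residual
        L_new = L - eta * G @ R @ np.linalg.inv(R.T @ R)
        R = R - eta * G.T @ L @ np.linalg.inv(L.T @ L)  # uses old L
        L = L_new
    return L, R

# Start near the truth: balanced factors plus a small perturbation.
L0 = U @ np.diag(np.sqrt(sigma)) + 0.001 * rng.standard_normal((n, r))
R0 = V @ np.diag(np.sqrt(sigma)) + 0.001 * rng.standard_normal((n, r))

L, R = scaled_gd(L0, R0)
rel_err = np.linalg.norm(L @ R.T - M) / np.linalg.norm(M)
print(rel_err)
```

With a fixed step size eta = 0.5, the error contracts at a constant rate per iteration despite the condition number of 100; plain gradient descent on the same instance would need a step size scaled to the largest singular value and would crawl along the weakest direction.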

      Ehsan Afshari

                                        (Professor, University of Michigan)

Bio: Prof. Afshari received his Ph.D. in EE from Caltech in 2006 and joined the ECE department of Cornell University. Ten years later, he joined the EECS department of the University of Michigan, Ann Arbor. His team is engaged in the theoretical foundations, design, and experimental validation of analog, RF, mm-wave, and THz integrated devices, circuits, and systems for a variety of applications, including communications, imaging, and sensing. His work is funded by federal agencies such as NSF, DARPA, ONR, and ARL, as well as industry partners such as Intel, TI, Raytheon, and Qualcomm. He has received several awards and honors, including a 2008 DARPA Young Faculty Award, a 2010 NSF CAREER Award, first place at the Stanford-Berkeley-Caltech Innovation Challenge in 2005, and several best paper awards at the leading conferences in his field. He has also served as a Distinguished Lecturer for the IEEE Solid-State Circuits Society. He was selected as one of the 50 most distinguished alumni of Sharif University. His doctoral students have also received several prestigious awards and fellowships, including the 2018, 2017, 2012, 2011, and 2010 Solid-State Circuits Society Predoctoral Achievement Awards; the 2011, 2013, and 2017 IEEE MTT-S Microwave Engineering Graduate Fellowships; the Cornell Best Ph.D. Thesis Award in 2011 and 2014; as well as many best paper awards. The Ph.D. graduates of his group are leaders of the field, including faculty members at MIT, UC Davis, UC Irvine, Penn State University, and the University of Minnesota, and engineers at companies such as IBM, Bell Labs, Qualcomm, and Broadcom.
 
Talk Title: Superman Vision: Fully Integrated Terahertz Imaging Radar in CMOS

Abstract: The increasing demands for compact, low-cost, and high-resolution imaging radar systems have pushed the operation frequency into the terahertz range due to the shorter wavelength and larger available bandwidth. These radars can be used in security screening, industrial quality control, and biological hydration sensing applications. In this talk, we review the basics of imaging radar systems as well as recent advances in this area.
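The pull toward terahertz follows directly from the radar range-resolution relation ΔR = c / (2B): resolution improves with sweep bandwidth, and the very large bandwidths available around THz carriers are hard to match lower in the spectrum. A quick illustration (the example bandwidths are my own, not figures from the talk):

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Radar range resolution: delta_R = c / (2 * B)."""
    return C / (2 * bandwidth_hz)

# A 4 GHz automotive-style sweep versus a 100 GHz terahertz sweep.
for b in (4e9, 100e9):
    print(f"B = {b / 1e9:5.0f} GHz -> resolution = "
          f"{range_resolution(b) * 1e3:.2f} mm")
```

A 100 GHz sweep resolves scatterers about 1.5 mm apart, which is what makes the security-screening and hydration-sensing applications above plausible with a chip-scale radar.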


Alexandre Bayen

(Professor, UC Berkeley)

Bio: Alexandre Bayen is the Associate Provost for Moffett Field Program Development at UC Berkeley, and the Liao-Cho Professor of Engineering at UC Berkeley. He is a Professor of Electrical Engineering and Computer Science(link is external), and Civil and Environmental Engineering(link is external). From 2014 - 2021, he served as the Director of the Institute of Transportation Studies(link is external) at UC Berkeley (ITS). He is also a Faculty Scientist in Mechanical Engineering, at the Lawrence Berkeley National Laboratory(link is external) (LBNL). He received the Engineering Degree in applied mathematics from the Ecole Polytechnique, France, in 1998, the M.S. and Ph.D. in aeronautics and astronautics from Stanford University in 1999 and 2004, respectively. He was a Visiting Researcher at NASA Ames Research Center from 2000 to 2003. Between January 2004 and December 2004, he worked as the Research Director of the Autonomous Navigation Laboratory at the Laboratoire de Recherches Balistiques et Aerodynamiques, (Ministere de la Defense, Vernon, France), where he holds the rank of Major. He has been on the faculty at UC Berkeley since 2005. Bayen has authored two books and over 200 articles in peer reviewed journals and conferences. He is the recipient of the Ballhaus Award from Stanford University, 2004, of the CAREER award from the National Science Foundation, 2009 and he is a NASA Top 10 Innovators on Water Sustainability, 2010. His projects Mobile Century and Mobile Millennium received the 2008 Best of ITS Award for ‘Best Innovative Practice’, at the ITS World Congress and a TRANNY Award from the California Transportation Foundation, 2009. Mobile Millennium has been featured more than 200 times in the media, including TV channels and radio stations (CBS, NBC, ABC, CNET, NPR, KGO, the BBC), and in the popular press (Wall Street Journal, Washington Post, LA Times). 
Bayen is the recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) award from the White House, 2010. He is also the recipient of the Okawa Research Grant Award, the Ruberti Prize from the IEEE, and the Huber Prize from the ASCE.

Talk Title: The MegaVanderTest
 
Abstract: This lecture will present the story of the MegaVanderTest, a test involving 103 self-driving vehicles, which ran on Nov. 18, 2022 on I24 in Nashville, TN. The MegaVanderTest is to our knowledge the test which achieved the largest concentration of self-driving vehicles collaboratively controlling traffic on a single stretch of freeway in the history of self-driving vehicles. The lecture will explain the objectives of CIRCLES, a consortium led by UC Berkeley, which conducted the MegaVanderTest. It will explain the algorithms and policies that ran in during the test. It will finally show some preliminary results, on the way to our quest: leveraging 1% to 2% of the total flow of vehicles to to improve the fuel economy of every car on that freeway on that day (not just ours), by up to 10%.

Torsten Hoefler

(Professor, ETH Zurich)

Bio: Torsten Hoefler is a Professor of Computer Science at ETH Zurich, a member of Academia Europaea, and a Fellow of the ACM and IEEE. Following a “Performance as a Science” vision, he combines mathematical models of architectures and applications to design optimized computing systems. Before joining ETH Zurich, he led the performance modeling and simulation efforts for the first sustained Petascale supercomputer, Blue Waters, at the University of Illinois at Urbana-Champaign. He is also a key contributor to the Message Passing Interface (MPI) standard where he chaired the "Collective Operations and Topologies" working group. Torsten won best paper awards at ACM/IEEE Supercomputing in 2010, 2013, 2014, 2019, 2022, and at other international conferences. He has published numerous peer-reviewed scientific articles and authored chapters of the MPI-2.2 and MPI-3.0 standards. For his work, Torsten received the IEEE CS Sidney Fernbach Memorial Award in 2022, the ACM Gordon Bell Prize in 2019, the IEEE TCSC Award of Excellence (MCR), ETH Zurich's Latsis Prize, the SIAM SIAG/Supercomputing Junior Scientist Prize, the IEEE TCSC Young Achievers in Scalable Computing Award, and the BenchCouncil Rising Star Award. Following his Ph.D., he received the 2014 Young Alumni Award and the 2022 Distinguished Alumni Award of his alma mater, Indiana University. Torsten was elected to the first steering committee of ACM's SIGHPC in 2013 and he was re-elected for every term since then. He was the first European to receive many of those honors; he also received both an ERC Starting and Consolidator grant. His research interests revolve around the central topic of performance-centric system design and include scalable networks, parallel programming techniques, and performance modeling for large-scale simulations and artificial intelligence systems. Additional information about Torsten can be found on his homepage at htor.inf.ethz.ch.

Talk Title: Distributed AI Supercomputing: From Large-Scale Transformers to Graph Neural Networks
 
Abstract: Much of the astonishing progress in modern artificial intelligence is made by scaling to larger and larger deep learning models trained with more data. For example, the GPT-3 model with 175 billion parameters, trained with more than 500 billion tokens, forms the basis for many services such as Microsoft's Copilot and ChatGPT. We explain principles for training such large-scale models efficiently and how to design AI systems for this purpose. We then extend our discussion to emerging parallel and distributed Graph Neural Networks.
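Why models at this scale must be distributed is already visible from back-of-the-envelope memory arithmetic: with mixed-precision Adam training, each parameter typically carries 16-bit weights and gradients plus 32-bit master weights and two 32-bit optimizer moments, roughly 16 bytes per parameter before counting activations. A rough sketch (the per-parameter byte accounting is the common rule of thumb, not a figure from the talk, and the GPU count is a made-up example):

```python
def training_memory_gb(params, bytes_per_param=16):
    """Approximate weight + gradient + optimizer-state memory for
    mixed-precision Adam: 2 (fp16 weights) + 2 (fp16 grads)
    + 4 (fp32 master weights) + 4 + 4 (fp32 moments) = 16 bytes/param."""
    return params * bytes_per_param / 1e9

gpt3_params = 175e9
total = training_memory_gb(gpt3_params)  # ~2800 GB of state alone
per_gpu = total / 35                     # e.g. sharded across 35 GPUs
print(f"total ≈ {total:.0f} GB, per GPU ≈ {per_gpu:.0f} GB")
```

Since no single accelerator holds terabytes of state, the model and optimizer must be partitioned across many devices, which is exactly where the data-, tensor-, and pipeline-parallel techniques the talk discusses come in.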
 
 

Important Deadlines

Full Paper Submission: 16th January 2023 (extended to 25th January 2023)
Acceptance Notification: 1st February 2023 (extended to 11th February 2023)
Final Paper Submission: 15th February 2023 (extended to 22nd February 2023)
Early Bird Registration: 15th February 2023 (extended to 21st February 2023)
Presentation Submission: 17th February 2023 (extended to 28th February 2023)
Conference: 8 - 11 March 2023

Previous Conference

IEEE CCWC 2022

Sister Conferences

IEEE UEMCON 2022

IEEE IEMCON 2022

IEEE AIIOT 2022


Announcements

•    Best Paper Award will be given for each track
•    Conference Record No.: 57344