Corporate Talks



Sami Abu-El-Haija

       (Senior Research Scientist, Google Research)


Bio: Sami is a Senior Research Scientist at Google Research, working in the Algorithms & Optimization research group. He has published several papers in top-tier venues and studied at top-tier institutions, most recently earning his PhD at the University of Southern California.

Talk Title: Initializing Deep Fully-Connected Neural Networks with Closed-Form Solutions

Abstract: Feed-forward neural networks, such as Graph Neural Networks (GNNs), achieve outstanding empirical performance on several prediction tasks on graphs (e.g., social or biological graphs), such as link prediction and node classification. However, state-of-the-art (SOTA) models require long training times (hours to days, even on expensive GPUs). On the other hand, shallow (1-layer) neural networks pose convex objective functions, and in some cases their optimal parameters can be estimated in closed form, without calculating gradients. Sami will describe his journey in developing a new kind of deep neural network, hand-crafted so that on one hyperplane in its parameter space the network is equivalent to a standard MLP with ReLU activations, while on another hyperplane it becomes linear in its parameters. Such networks can be initialized in closed form by projecting their parameters onto the linear hyperplane; afterwards, they can be fine-tuned in the usual regime. In his experiments, this training paradigm speeds up training by hundreds or thousands of times.
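The recipe in the abstract (solve a convex, linear case in closed form without gradients, then fine-tune with gradients) can be illustrated on a toy model. This is a generic sketch of that two-stage idea, not the architecture from the talk; all variable names and sizes here are illustrative:

```python
import numpy as np

# Toy two-stage training: closed-form solve of the linear (convex) case,
# followed by ordinary gradient-based fine-tuning from that initialization.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))              # 200 examples, 8 features
w_true = rng.normal(size=8)
y = X @ w_true + 0.01 * rng.normal(size=200)

# Stage 1 (the "linear hyperplane"): ordinary least squares, no gradients.
w0, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stage 2: fine-tune in the usual regime with a few gradient steps.
w = w0.copy()
lr = 0.01
for _ in range(100):
    grad = X.T @ (X @ w - y) / len(y)      # gradient of mean squared error
    w -= lr * grad

print(np.abs(w - w_true).max())            # error is tiny after both stages
```

The closed-form stage already lands near the optimum, so the fine-tuning stage has little work left to do; that gap between "start from random" and "start from a closed-form solution" is where the claimed speed-up comes from.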

Pin-Yu Chen

       (Principal Research Scientist, IBM)

Bio: Dr. Pin-Yu Chen is a principal research scientist at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. Dr. Chen's recent research focuses on adversarial machine learning and the robustness of neural networks. His long-term research vision is to build trustworthy machine learning systems. At IBM Research, he received several research accomplishment awards, including being named an IBM Master Inventor and receiving an IBM Corporate Technical Award in 2021. His research contributes to IBM open-source libraries including the Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360). He has published more than 50 papers related to trustworthy machine learning at major AI and machine learning conferences, given tutorials at NeurIPS'22, AAAI'22, IJCAI'21, CVPR('20,'21), ECCV'20, ICASSP'20, KDD'19, and Big Data'18, and organized several workshops on adversarial machine learning. He received the IEEE GLOBECOM 2010 GOLD Best Paper Award and the UAI 2022 Best Paper Runner-Up Award.

Talk Title: AI Model Inspector: Towards Holistic Adversarial Robustness for Deep Learning

Abstract: In this talk, I will share my research journey toward building an AI model inspector for evaluating, improving, and exploiting adversarial robustness for deep learning. I will start by providing an overview of research topics concerning adversarial robustness and machine learning, including attacks, defenses, verification, and novel applications. For each topic, I will summarize my key research findings, such as (i) practical optimization-based attacks and their applications to explainability and scientific discovery, (ii) Plug-and-play defenses for model repairing and patching, and (iii) attack-agnostic robustness assessment. Finally, I will conclude my talk with my vision of preparing deep learning for the real world and the research methodology of learning with an adversary.
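As a generic illustration of the "optimization-based attacks" mentioned in the abstract, the sketch below crafts an FGSM-style adversarial perturbation against a toy linear classifier. This is a textbook example, not a specific method from the talk, and all names in it are made up:

```python
import numpy as np

# FGSM-style attack on a toy linear classifier sign(w @ x):
# step in the sign of the loss gradient, within a small L-infinity budget.
rng = np.random.default_rng(2)
w = rng.normal(size=10)                    # toy classifier weights
x = rng.normal(size=10)                    # clean input
label = np.sign(w @ x)                     # the model's clean prediction

# Budget just large enough to cross the decision boundary of this model.
eps = np.abs(w @ x) / np.abs(w).sum() + 1e-6

# Loss = -label * (w @ x); its gradient w.r.t. x is -label * w.
grad = -label * w
x_adv = x + eps * np.sign(grad)            # one signed-gradient step

print(np.sign(w @ x_adv) != label)         # the prediction has flipped
```

Even this one-step attack flips a linear model's prediction with a perturbation of bounded size per coordinate; stronger attacks iterate the same signed-gradient step.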

Ben Adlam

       (Research Scientist, Google Brain)

Bio: Dr. Ben Adlam is a Research Scientist at Google Brain working to understand deep learning and apply it to basic-science problems. He joined Google in 2018 as an AI Resident; before that, he was a PhD student in applied math at Harvard, where he used techniques from probability theory and stochastic processes to study evolutionary dynamics and random matrices.

Talk Title: Kernel Regression with Infinite-Width Neural Networks on Millions of Examples

Abstract: While kernel regression remains an important practical method, its connection to neural networks as their width becomes large has initiated fresh research. These neural kernels have drastically increased performance on diverse and nonstandard data modalities but require significantly more compute, which previously limited their application to smaller datasets. In this work, we address this by massively parallelizing their computation across many GPUs. We combine this with a distributed, preconditioned conjugate gradients algorithm to enable kernel regression at a large scale (i.e., up to five million examples). Using this approach, we study scaling laws of several neural kernels across many orders of magnitude for the CIFAR-5m dataset. Using data augmentation to expand the original CIFAR-10 training dataset by a factor of 20, we obtain a test accuracy of 91.2% (SotA for a pure kernel method). Moreover, we explore neural kernels on other data modalities, obtaining results on protein and small molecule prediction tasks that are competitive with SotA methods.
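The core solver idea in the abstract can be sketched on a single machine: solve the regularized kernel system with conjugate gradients instead of a direct solve, so only matrix-vector products are needed. This is an assumed illustration, not the paper's implementation; an RBF kernel stands in for a neural kernel, and the preconditioning and multi-GPU distribution are omitted:

```python
import numpy as np

# Kernel ridge regression: solve (K + lam * I) a = y by conjugate gradients.
# CG needs only products A @ v, which is what makes the approach amenable
# to distribution across many GPUs at the scale described in the abstract.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = rng.normal(size=300)

# RBF kernel matrix; a neural kernel (e.g., NTK/NNGP) would replace this.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)
lam = 1e-3
A = K + lam * np.eye(len(y))               # symmetric positive definite

def conjugate_gradients(A, b, tol=1e-8, max_iter=2000):
    """Textbook CG for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x                          # residual
    p = r.copy()                           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

coef = conjugate_gradients(A, y)
print(np.linalg.norm(A @ coef - y))        # residual is near zero
```

Preconditioning (mentioned in the abstract) would replace the plain residual with a preconditioned one to cut the iteration count; the structure of the loop stays the same.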

Important Deadlines

Full Paper Submission: 16th January 2023 (extended to 25th January 2023)

Acceptance Notification: 1st February 2023 (extended to 11th February 2023)

Final Paper Submission: 15th February 2023 (extended to 22nd February 2023)

Early Bird Registration: 15th February 2023 (extended to 21st February 2023)

Presentation Submission: 17th February 2023 (extended to 28th February 2023)

Conference: 8–11 March 2023






•    Best Paper Award will be given for each track
•    Conference Record No.: 57344