Friday, July 22 - 2022 International Conference on Machine Learning - Baltimore, MD
Accepted Papers (Oral Presentation)
Supernet Training for Federated Image Classification Taehyeon Aaron Kim (KAIST)*; Se-Young Yun (KAIST)
[PDF]
Achieving High TinyML Accuracy through Selective Cloud Interactions Anil Kag (Boston University)*; Igor Fedorov (Arm Research); Aditya Gangrade (Boston University); Paul Whatmough (Arm Research); Venkatesh Saligrama (Boston University)
[PDF]
Triangular Dropout: Variable Network Width without Retraining Edward W Staley (JHUAPL)*; Jared Markowitz (Johns Hopkins University Applied Physics Laboratory)
[PDF]
A Theoretical View on Sparsely Activated Networks Cenk Baykal (Google Research); Nishanth Dikkala (Google Research)*; Rina Panigrahy (Google); Cyrus Rashtchian (Google); Xin Wang (Google)
[PDF]
[Poster]
Does Continual Learning Equally Forget All Parameters? Haiyan Zhao (University of Technology Sydney)*; Tianyi Zhou (University of Washington); Guodong Long (University of Technology Sydney); Jing Jiang (University of Technology Sydney); Chengqi Zhang (University of Technology Sydney)
[PDF]
Slimmable Quantum Federated Learning Won Joon Yun (Korea University); Jae Pyoung Kim (Korea University); Soyi Jung (Hallym University); Jihong Park (Deakin University); Mehdi Bennis (University of Oulu); Joongheon Kim (Korea University, School of Electrical Engineering)*
[PDF]
Sparsifying Transformer Models with Trainable Representation Pooling Michał Pietruszka (Jagiellonian University)*; Łukasz Borchmann (Applica.ai); Łukasz Garncarek (Applica.ai)
Play It Cool: Dynamic Shifting Prevents Thermal Throttling Yang Zhou (The University of Texas at Austin)*; Feng Liang (The University of Texas at Austin); Ting-Wu Chin (Carnegie Mellon University); Diana Marculescu (The University of Texas at Austin)
[PDF]
Efficient Sparsely Activated Transformers Salar Latifi (University of Michigan)*; Saurav Muralidharan (NVIDIA); Michael Garland (NVIDIA)
[PDF]
Sparse Relational Reasoning with Object-centric Representations Alex F Spies (Imperial College London)*
Accepted Papers (Poster Presentation)
The Spike Gating Flow: A Hierarchical Structure Based Spiking Neural Network for Spatiotemporal Computing Zihao Zhao (Fudan University)*; Yanhong Wang (Fudan University); Qiaosha Zou (Fudan University); Xiaoan Wang (BrainUp Research Lab); C.-J. Richard Shi (Fudan University); Junwen Luo (BrainUp Research Lab)
[PDF]
Back to the Source: Test-Time Diffusion-Driven Adaptation Jin Gao (Shanghai Jiaotong University); Jialing Zhang (Shanghai Jiaotong University); Xihui Liu (UC Berkeley); Trevor Darrell (UC Berkeley); Evan Shelhamer (DeepMind); Dequan Wang (UC Berkeley)*
Dynamic Split Computing for Efficient Deep Edge Intelligence Arian Bakhtiarnia (Aarhus University)*; Nemanja B Milosevic (UNSPMF); Qi Zhang (Aarhus University); Dragana Bajović (University of Novi Sad); Alexandros Iosifidis (Aarhus University)
[PDF]
Inductive Biases for Object-Centric Representations in the Presence of Complex Textures Samuele Papa (University of Amsterdam)*; Ole Winther (DTU and KU); Andrea Dittadi (Technical University of Denmark)
[PDF]
Noisy Heuristics NAS: A Network Morphism based Neural Architecture Search using Heuristics Suman Sapkota (NAAMII)*; Binod Bhattarai (University College London)
[PDF]
[Poster]
FLOWGEN: Fast and slow graph generation Aman Madaan (Carnegie Mellon University)*; Yiming Yang (Carnegie Mellon University)
[PDF]
[Poster]
Fault-Tolerant Collaborative Inference through the Edge-PRUNE Framework Jani Boutellier (University of Vaasa)*; Bo Tan (Tampere University); Jari Nurmi (Tampere University, Finland)
[PDF]
Vote for Nearest Neighbors Meta-Pruning of Self-Supervised Networks Haiyan Zhao (University of Technology Sydney)*; Tianyi Zhou (University of Washington); Guodong Long (University of Technology Sydney); Jing Jiang (University of Technology Sydney); Chengqi Zhang (University of Technology Sydney)
[PDF]
A Product of Experts Approach to Early-Exit Ensembles James U Allingham (University of Cambridge)*; Eric Nalisnick (University of Amsterdam)
[PDF]
Neural Architecture Search with Loss Flatness-aware Measure Joonhyun Jeong (Clova Image Vision, NAVER Corp.)*; Joonsang Yu (NAVER CLOVA); Dongyoon Han (NAVER AI Lab); Youngjoon Yoo (Clova AI Research, NAVER Corp.)
[PDF]
[Poster]
Is a Modular Architecture Enough? Sarthak Mittal (Mila)*; Yoshua Bengio (Mila); Guillaume Lajoie (Mila, Université de Montréal)
[PDF]
[Poster]
Parameter efficient dendritic-tree neurons outperform perceptrons Ziwen Han (University of Toronto)*; Evgeniya Gorobets (University of Toronto); Pan Chen (University of Toronto)
[PDF]
Simple, Practical and Fast Dynamic Truncation Kernel Multiplication Lianke Qin (UCSB)*; Somdeb Sarkhel (Adobe); Zhao Song (Adobe Research); Danyang Zhuo (Duke University)
[PDF]
[Poster]
Confident Adaptive Language Modeling Tal Schuster (Google)*; Adam Fisch (MIT); Jai Gupta (Google); Mostafa Dehghani (Google Brain); Dara Bahri (Google); Vinh Q Tran (Google); Yi Tay (Google); Donald Metzler (Google)
[PDF]
Provable Hierarchical Lifelong Learning with a Sketch-based Modular Architecture Zihao Deng (Washington University in St. Louis)*; Zee Fryer (Google Research); Brendan Juba (Washington University in St. Louis); Rina Panigrahy (Google); Xin Wang (Google)
[PDF]
SnapStar Algorithm: A New Way to Ensemble Neural Networks Sergey Zinchenko (NSU)*; Dmitry Lishudi (Higher School of Economics)
[PDF]
[Poster]
HARNAS: Neural Architecture Search Jointly Optimizing for Hardware Efficiency and Adversarial Robustness of Convolutional and Capsule Networks Alberto Marchisio (Technische Universität Wien (TU Wien))*; Vojtech Mrazek (Brno University of Technology); Andrea Massa (Politecnico di Torino); Beatrice Bussolino (Politecnico di Torino); Maurizio Martina (Politecnico di Torino); Muhammad Shafique (New York University Abu Dhabi)
[PDF]
[Poster]
Dynamic Transformer Networks Amanuel N Mersha (Addis Ababa Institute of Technology)*
[PDF]
Just-in-Time Sparsity: Learning Dynamic Sparsity Schedules Kale-ab Tessera (InstaDeep)*; Chiratidzo Matowe (InstaDeep); Arnu Pretorius (InstaDeep); Benjamin Rosman (University of the Witwatersrand); Sara Hooker (Cohere)
[PDF]
FedHeN: Federated Learning in Heterogeneous Networks Durmus Alp Emre Acar (Boston University)*; Venkatesh Saligrama (Boston University)
[PDF]
APP: Anytime Progressive Pruning Diganta Misra (Mila)*; Bharat Runwal (Indian Institute of Technology (IIT), Delhi); Tianlong Chen (University of Texas at Austin); Zhangyang Wang (University of Texas at Austin); Irina Rish (Mila/UdeM)
[PDF]
[Poster]
Deep Policy Generators Francesco Faccio (The Swiss AI Lab IDSIA); Vincent Herrmann (IDSIA)*; Aditya Ramesh (The Swiss AI Lab IDSIA); Louis Kirsch (Swiss AI Lab IDSIA); Jürgen Schmidhuber (IDSIA - Lugano)
[PDF]
Connectivity Properties of Neural Networks Under Performance-Resources Trade-off Aleksandra I Nowak (Jagiellonian University)*; Romuald Janik (Jagiellonian University)
[PDF]