EVENFLOW participated in the 12th International Conference on Learning Representations – ICLR 2024, AUT

The EVENFLOW project participated in the 12th International Conference on Learning Representations (ICLR 2024), which took place from 7 to 11 May 2024 in Vienna, Austria. More specifically, project partner Imperial College London (ICL) represented the project with the conference paper titled "Expressive Losses for Verified Robustness via Convex Combinations".

Abstract of the paper: In order to train networks for verified adversarial robustness, it is common to over-approximate the worst-case loss over perturbation regions, resulting in networks that attain verifiability at the expense of standard performance. As shown in recent work, better trade-offs between accuracy and robustness can be obtained by carefully coupling adversarial training with over-approximations. We hypothesize that the expressivity of a loss function, which we formalize as the ability to span a range of trade-offs between lower and upper bounds to the worst-case loss through a single parameter (the over-approximation coefficient), is key to attaining state-of-the-art performance. To support our hypothesis, we show that trivial expressive losses, obtained via convex combinations between adversarial attacks and IBP bounds, yield state-of-the-art results across a variety of settings in spite of their conceptual simplicity. We provide a detailed analysis of the relationship between the over-approximation coefficient and performance profiles across different expressive losses, showing that, while expressivity is essential, better approximations of the worst-case loss are not necessarily linked to superior robustness-accuracy trade-offs.
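To illustrate the core idea from the abstract, a minimal sketch of an expressive loss as a convex combination is shown below. The function name and the specific loss values are illustrative assumptions, not the paper's implementation: `adv_loss` stands for a lower bound to the worst-case loss obtained via an adversarial attack, `verified_loss` for an upper bound obtained via IBP, and `alpha` for the over-approximation coefficient.

```python
def expressive_loss(adv_loss: float, verified_loss: float, alpha: float) -> float:
    """Convex combination between a lower bound (adversarial attack loss)
    and an upper bound (IBP bound) to the worst-case loss.

    alpha is the over-approximation coefficient: varying it in [0, 1]
    spans the full range of trade-offs between the two bounds.
    """
    assert 0.0 <= alpha <= 1.0, "over-approximation coefficient must be in [0, 1]"
    return (1.0 - alpha) * adv_loss + alpha * verified_loss

# alpha = 0 recovers pure adversarial training; alpha = 1 recovers pure IBP;
# intermediate values interpolate between the two bounds.
print(expressive_loss(0.2, 1.0, 0.0))  # 0.2
print(expressive_loss(0.2, 1.0, 1.0))  # 1.0
print(expressive_loss(0.2, 1.0, 0.5))  # 0.6
```

A single scalar thus exposes the entire spectrum of robustness-accuracy trade-offs, which is the expressivity property the paper argues is key to state-of-the-art performance.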

The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning.

ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.

Participants at ICLR span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.

A non-exhaustive list of relevant topics explored at the conference includes:

  • unsupervised, semi-supervised, and supervised representation learning
  • representation learning for planning and reinforcement learning
  • representation learning for computer vision and natural language processing
  • metric learning and kernel learning
  • sparse coding and dimensionality expansion
  • hierarchical models
  • optimization for representation learning
  • learning representations of outputs or states
  • optimal transport
  • theoretical issues in deep learning
  • societal considerations of representation learning, including fairness, safety, privacy, interpretability, and explainability
  • visualization or interpretation of learned representations
  • implementation issues, parallelization, software platforms, hardware
  • climate, sustainability
  • applications in audio, speech, robotics, neuroscience, biology, or any other field
