| title | authors | abstract | url | detail_url | abs | OpenReview | Download PDF | tags |
|---|---|---|---|---|---|---|---|---|
Machine Unlearning for Random Forests
|
Jonathan Brophy, Daniel Lowd
|
Responding to user data deletion requests, removing noisy examples, or deleting corrupted training data are just a few reasons for wanting to delete instances from a machine learning (ML) model. However, efficiently removing this data from an ML model is generally difficult. In this paper, we introduce data removal-enabled (DaRE) forests, a variant of random forests that enables the removal of training data with minimal retraining. Model updates for each DaRE tree in the forest are exact, meaning that removing instances from a DaRE model yields exactly the same model as retraining from scratch on updated data. DaRE trees use randomness and caching to make data deletion efficient. The upper levels of DaRE trees use random nodes, which choose split attributes and thresholds uniformly at random. These nodes rarely require updates because they only minimally depend on the data. At the lower levels, splits are chosen to greedily optimize a split criterion such as Gini index or mutual information. DaRE trees cache statistics at each node and training data at each leaf, so that only the necessary subtrees are updated as data is removed. For numerical attributes, greedy nodes optimize over a random subset of thresholds, so that they can maintain statistics while approximating the optimal threshold. By adjusting the number of thresholds considered for greedy nodes, and the number of random nodes, DaRE trees can trade off between more accurate predictions and more efficient updates. In experiments on 13 real-world datasets and one synthetic dataset, we find DaRE forests delete data orders of magnitude faster than retraining from scratch while sacrificing little to no predictive power.
|
https://proceedings.mlr.press/v139/brophy21a.html
|
https://proceedings.mlr.press/v139/brophy21a.html
|
https://proceedings.mlr.press/v139/brophy21a.html
|
http://proceedings.mlr.press/v139/brophy21a/brophy21a.pdf
|
ICML 2021
|
|
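The DaRE forests abstract above combines random splits at the upper levels of each tree with greedy, threshold-subsampled splits below. A minimal illustrative sketch of that split-selection idea, assuming binary integer labels and hypothetical parameter names (`d_rand`, `k_thresholds`); this is not the authors' implementation:

```python
import numpy as np

def choose_split(X, y, depth, d_rand=2, k_thresholds=8, rng=np.random.default_rng(0)):
    """Pick a (feature, threshold) split for one tree node.

    Upper levels (depth < d_rand) split uniformly at random, so they rarely
    need updating when instances are deleted; lower levels greedily minimise
    Gini impurity over a small random subset of candidate thresholds, whose
    label counts could be cached to support fast deletions.
    """
    n_features = X.shape[1]
    if depth < d_rand:                                  # "random node"
        f = int(rng.integers(n_features))
        return f, float(rng.uniform(X[:, f].min(), X[:, f].max()))

    def gini(labels):
        if labels.size == 0:
            return 0.0
        p = np.bincount(labels, minlength=2) / labels.size
        return 1.0 - float(np.sum(p ** 2))

    best_f, best_t, best_score = 0, 0.0, np.inf
    for f in range(n_features):                         # "greedy node"
        thresholds = rng.choice(X[:, f], size=min(k_thresholds, len(X)), replace=False)
        for t in thresholds:
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            score = (left.size * gini(left) + right.size * gini(right)) / y.size
            if score < best_score:
                best_f, best_t, best_score = f, float(t), score
    return best_f, best_t
```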
Value Alignment Verification
|
Daniel S Brown, Jordan Schneider, Anca Dragan, Scott Niekum
|
As humans interact with autonomous agents to perform increasingly complicated, potentially risky tasks, it is important to be able to efficiently evaluate an agent’s performance and correctness. In this paper we formalize and theoretically analyze the problem of efficient value alignment verification: how to efficiently test whether the behavior of another agent is aligned with a human’s values? The goal is to construct a kind of "driver’s test" that a human can give to any agent which will verify value alignment via a minimal number of queries. We study alignment verification problems with both idealized humans that have an explicit reward function as well as problems where they have implicit values. We analyze verification of exact value alignment for rational agents, propose and test heuristics for value alignment verification in gridworlds and a continuous autonomous driving domain, and prove that there exist sufficient conditions such that we can verify epsilon-alignment in any environment via a constant-query-complexity alignment test.
|
https://proceedings.mlr.press/v139/brown21a.html
|
https://proceedings.mlr.press/v139/brown21a.html
|
https://proceedings.mlr.press/v139/brown21a.html
|
http://proceedings.mlr.press/v139/brown21a/brown21a.pdf
|
ICML 2021
|
|
Model-Free and Model-Based Policy Evaluation when Causality is Uncertain
|
David A Bruns-Smith
|
When decision-makers can directly intervene, policy evaluation algorithms give valid causal estimates. In off-policy evaluation (OPE), there may exist unobserved variables that both impact the dynamics and are used by the unknown behavior policy. These “confounders” will introduce spurious correlations and naive estimates for a new policy will be biased. We develop worst-case bounds to assess sensitivity to these unobserved confounders in finite horizons when confounders are drawn iid each period. We demonstrate that a model-based approach with robust MDPs gives sharper lower bounds by exploiting domain knowledge about the dynamics. Finally, we show that when unobserved confounders are persistent over time, OPE is far more difficult and existing techniques produce extremely conservative bounds.
|
https://proceedings.mlr.press/v139/bruns-smith21a.html
|
https://proceedings.mlr.press/v139/bruns-smith21a.html
|
https://proceedings.mlr.press/v139/bruns-smith21a.html
|
http://proceedings.mlr.press/v139/bruns-smith21a/bruns-smith21a.pdf
|
ICML 2021
|
|
Narrow Margins: Classification, Margins and Fat Tails
|
Francois Buet-Golfouse
|
It is well-known that, for separable data, the regularised two-class logistic regression or support vector machine re-normalised estimate converges to the maximal margin classifier as the regularisation hyper-parameter $\lambda$ goes to 0. The fact that different loss functions may lead to the same solution is of theoretical and practical relevance as margin maximisation allows more straightforward considerations in terms of generalisation and geometric interpretation. We investigate the case where this convergence property is not guaranteed to hold and show that it can be fully characterised by the distribution of error terms in the latent variable interpretation of linear classifiers. In particular, if errors follow a regularly varying distribution, then the regularised and re-normalised estimate does not converge to the maximal margin classifier. This shows that classification with fat tails has a qualitatively different behaviour, which should be taken into account when considering real-life data.
|
https://proceedings.mlr.press/v139/buet-golfouse21a.html
|
https://proceedings.mlr.press/v139/buet-golfouse21a.html
|
https://proceedings.mlr.press/v139/buet-golfouse21a.html
|
http://proceedings.mlr.press/v139/buet-golfouse21a/buet-golfouse21a.pdf
|
ICML 2021
|
|
Differentially Private Correlation Clustering
|
Mark Bun, Marek Elias, Janardhan Kulkarni
|
Correlation clustering is a widely used technique in unsupervised machine learning. Motivated by applications where individual privacy is a concern, we initiate the study of differentially private correlation clustering. We propose an algorithm that achieves subquadratic additive error compared to the optimal cost. In contrast, straightforward adaptations of existing non-private algorithms all lead to a trivial quadratic error. Finally, we give a lower bound showing that any pure differentially private algorithm for correlation clustering requires additive error $\Omega(n)$.
|
https://proceedings.mlr.press/v139/bun21a.html
|
https://proceedings.mlr.press/v139/bun21a.html
|
https://proceedings.mlr.press/v139/bun21a.html
|
http://proceedings.mlr.press/v139/bun21a/bun21a.pdf
|
ICML 2021
|
|
Disambiguation of Weak Supervision leading to Exponential Convergence rates
|
Vivien A Cabannnes, Francis Bach, Alessandro Rudi
|
Machine learning approached through supervised learning requires expensive annotation of data. This motivates weakly supervised learning, where data are annotated with incomplete yet discriminative information. In this paper, we focus on partial labelling, an instance of weak supervision where, from a given input, we are given a set of potential targets. We review a disambiguation principle to recover full supervision from weak supervision, and propose an empirical disambiguation algorithm. We prove exponential convergence rates of our algorithm under classical learnability assumptions, and we illustrate the usefulness of our method on practical examples.
|
https://proceedings.mlr.press/v139/cabannnes21a.html
|
https://proceedings.mlr.press/v139/cabannnes21a.html
|
https://proceedings.mlr.press/v139/cabannnes21a.html
|
http://proceedings.mlr.press/v139/cabannnes21a/cabannnes21a.pdf
|
ICML 2021
|
|
Finite mixture models do not reliably learn the number of components
|
Diana Cai, Trevor Campbell, Tamara Broderick
|
Scientists and engineers are often interested in learning the number of subpopulations (or components) present in a data set. A common suggestion is to use a finite mixture model (FMM) with a prior on the number of components. Past work has shown the resulting FMM component-count posterior is consistent; that is, the posterior concentrates on the true, generating number of components. But consistency requires the assumption that the component likelihoods are perfectly specified, which is unrealistic in practice. In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM component-count posterior diverges: the posterior probability of any particular finite number of components converges to 0 in the limit of infinite data. Contrary to intuition, posterior-density consistency is not sufficient to establish this result. We develop novel sufficient conditions that are more realistic and easily checkable than those common in the asymptotics literature. We illustrate practical consequences of our theory on simulated and real data.
|
https://proceedings.mlr.press/v139/cai21a.html
|
https://proceedings.mlr.press/v139/cai21a.html
|
https://proceedings.mlr.press/v139/cai21a.html
|
http://proceedings.mlr.press/v139/cai21a/cai21a.pdf
|
ICML 2021
|
|
A Theory of Label Propagation for Subpopulation Shift
|
Tianle Cai, Ruiqi Gao, Jason Lee, Qi Lei
|
One of the central problems in machine learning is domain adaptation. Different from past theoretical works, we consider a new model of subpopulation shift in the input or representation space. In this work, we propose a provably effective framework based on label propagation by using an input consistency loss. In our analysis we used a simple but realistic “expansion” assumption, which has been proposed in \citet{wei2021theoretical}. It turns out that based on a teacher classifier on the source domain, the learned classifier can not only propagate to the target domain but also improve upon the teacher. By leveraging existing generalization bounds, we also obtain end-to-end finite-sample guarantees on deep neural networks. In addition, we extend our theoretical framework to a more general setting of source-to-target transfer based on an additional unlabeled dataset, which can be easily applied to various learning scenarios. Inspired by our theory, we adapt consistency-based semi-supervised learning methods to domain adaptation settings and gain significant improvements.
|
https://proceedings.mlr.press/v139/cai21b.html
|
https://proceedings.mlr.press/v139/cai21b.html
|
https://proceedings.mlr.press/v139/cai21b.html
|
http://proceedings.mlr.press/v139/cai21b/cai21b.pdf
|
ICML 2021
|
|
Lenient Regret and Good-Action Identification in Gaussian Process Bandits
|
Xu Cai, Selwyn Gomes, Jonathan Scarlett
|
In this paper, we study the problem of Gaussian process (GP) bandits under relaxed optimization criteria stating that any function value above a certain threshold is “good enough”. On the theoretical side, we study various {\em lenient regret} notions in which all near-optimal actions incur zero penalty, and provide upper bounds on the lenient regret for GP-UCB and an elimination algorithm, circumventing the usual $O(\sqrt{T})$ term (with time horizon $T$) resulting from zooming extremely close towards the function maximum. In addition, we complement these upper bounds with algorithm-independent lower bounds. On the practical side, we consider the problem of finding a single “good action” according to a known pre-specified threshold, and introduce several good-action identification algorithms that exploit knowledge of the threshold. We experimentally find that such algorithms can typically find a good action faster than standard optimization-based approaches.
|
https://proceedings.mlr.press/v139/cai21c.html
|
https://proceedings.mlr.press/v139/cai21c.html
|
https://proceedings.mlr.press/v139/cai21c.html
|
http://proceedings.mlr.press/v139/cai21c/cai21c.pdf
|
ICML 2021
|
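The lenient-regret abstract above studies GP bandits in which any value above a known threshold is “good enough”. Below is a hedged sketch of a threshold-aware GP-UCB loop using scikit-learn; it is an illustrative baseline in that spirit, not one of the paper's good-action identification algorithms, and the function and parameter names are invented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def find_good_action(f, candidates, threshold, beta=2.0, budget=30, seed=0):
    """Query a GP-UCB acquisition until an observed value exceeds `threshold`."""
    rng = np.random.default_rng(seed)
    X = [candidates[rng.integers(len(candidates))]]     # random first query
    y = [f(X[0])]
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
    for _ in range(budget):
        if max(y) >= threshold:                         # a "good action" was found
            break
        gp.fit(np.array(X), np.array(y))
        mu, sd = gp.predict(candidates, return_std=True)
        x_next = candidates[int(np.argmax(mu + beta * sd))]   # UCB acquisition
        X.append(x_next)
        y.append(f(x_next))
    return X[int(np.argmax(y))], max(y)

# toy 1-D example: find any point whose value exceeds 0.8
cands = np.linspace(0, 1, 200).reshape(-1, 1)
best_x, best_val = find_good_action(lambda x: float(np.sin(6 * x[0])), cands, threshold=0.8)
```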
|
A Zeroth-Order Block Coordinate Descent Algorithm for Huge-Scale Black-Box Optimization
|
Hanqin Cai, Yuchen Lou, Daniel Mckenzie, Wotao Yin
|
We consider the zeroth-order optimization problem in the huge-scale setting, where the dimension of the problem is so large that performing even basic vector operations on the decision variables is infeasible. In this paper, we propose a novel algorithm, coined ZO-BCD, that exhibits favorable overall query complexity and has a much smaller per-iteration computational complexity. In addition, we discuss how the memory footprint of ZO-BCD can be reduced even further by the clever use of circulant measurement matrices. As an application of our new method, we propose the idea of crafting adversarial attacks on neural network based classifiers in a wavelet domain, which can result in problem dimensions of over one million. In particular, we show that crafting adversarial examples to audio classifiers in a wavelet domain can achieve the state-of-the-art attack success rate of 97.9% with significantly less distortion.
|
https://proceedings.mlr.press/v139/cai21d.html
|
https://proceedings.mlr.press/v139/cai21d.html
|
https://proceedings.mlr.press/v139/cai21d.html
|
http://proceedings.mlr.press/v139/cai21d/cai21d.pdf
|
ICML 2021
|
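The ZO-BCD abstract above updates one randomly chosen block of coordinates per iteration using only function evaluations. A minimal sketch of such a zeroth-order block step with random-direction finite differences; the names and constants are illustrative, and the paper's additional ingredients (e.g. circulant measurement matrices) are omitted:

```python
import numpy as np

def zo_bcd_step(f, x, block_size=64, num_dirs=16, mu=1e-3, lr=0.1, rng=np.random.default_rng(0)):
    """One zeroth-order block-coordinate-descent step.

    Samples a random block of coordinates, estimates the gradient restricted
    to that block from finite differences of f along random directions, and
    updates only those coordinates; no derivatives of f are required.
    """
    d = x.size
    block = rng.choice(d, size=min(block_size, d), replace=False)
    grad_block = np.zeros(len(block))
    fx = f(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(len(block))
        u /= np.linalg.norm(u)
        x_pert = x.copy()
        x_pert[block] += mu * u
        grad_block += (f(x_pert) - fx) / mu * u
    grad_block /= num_dirs
    x_new = x.copy()
    x_new[block] -= lr * grad_block
    return x_new

# toy usage on a quadratic in 10,000 dimensions
x = np.ones(10_000)
for _ in range(5):
    x = zo_bcd_step(lambda z: float(np.sum(z ** 2)), x)
```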
|
GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training
|
Tianle Cai, Shengjie Luo, Keyulu Xu, Di He, Tie-Yan Liu, Liwei Wang
|
Normalization is known to help the optimization of deep neural networks. Curiously, different architectures require specialized normalization methods. In this paper, we study what normalization is effective for Graph Neural Networks (GNNs). First, we adapt and evaluate the existing methods from other domains to GNNs. Faster convergence is achieved with InstanceNorm compared to BatchNorm and LayerNorm. We provide an explanation by showing that InstanceNorm serves as a preconditioner for GNNs, but such preconditioning effect is weaker with BatchNorm due to the heavy batch noise in graph datasets. Second, we show that the shift operation in InstanceNorm results in an expressiveness degradation of GNNs for highly regular graphs. We address this issue by proposing GraphNorm with a learnable shift. Empirically, GNNs with GraphNorm converge faster compared to GNNs using other normalization. GraphNorm also improves the generalization of GNNs, achieving better performance on graph classification benchmarks.
|
https://proceedings.mlr.press/v139/cai21e.html
|
https://proceedings.mlr.press/v139/cai21e.html
|
https://proceedings.mlr.press/v139/cai21e.html
|
http://proceedings.mlr.press/v139/cai21e/cai21e.pdf
|
ICML 2021
|
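The GraphNorm abstract above normalises node features per graph with a learnable shift. A small sketch of that normalisation as described, where `alpha`, `gamma`, `beta` stand for the usual learnable per-feature vectors; this is an illustration, not the released implementation:

```python
import numpy as np

def graph_norm(H, alpha, gamma, beta, eps=1e-5):
    """Normalise node features H (n_nodes, d) of a single graph.

    Each feature dimension is shifted by a learnable fraction alpha of the
    per-graph mean (alpha = 1 recovers InstanceNorm-style mean removal,
    alpha = 0 keeps the mean), then rescaled to unit variance and affinely
    transformed by gamma and beta.
    """
    mean = H.mean(axis=0, keepdims=True)       # per-graph statistics
    shifted = H - alpha * mean                 # learnable shift
    std = np.sqrt(shifted.var(axis=0, keepdims=True) + eps)
    return gamma * shifted / std + beta
```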
|
On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization
|
Xu Cai, Jonathan Scarlett
|
In this paper, we consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm in some Reproducing Kernel Hilbert Space (RKHS), which can be viewed as a non-Bayesian Gaussian process bandit problem. In the standard noisy setting, we provide a novel proof technique for deriving lower bounds on the regret, with benefits including simplicity, versatility, and an improved dependence on the error probability. In a robust setting in which the final point is perturbed by an adversary, we strengthen an existing lower bound that only holds for target success probabilities very close to one, by allowing for arbitrary target success probabilities in (0, 1). Furthermore, in a distinct robust setting in which every sampled point may be perturbed by a constrained adversary, we provide a novel lower bound for deterministic strategies, demonstrating an inevitable joint dependence of the cumulative regret on the corruption level and the time horizon, in contrast with existing lower bounds that only characterize the individual dependencies.
|
https://proceedings.mlr.press/v139/cai21f.html
|
https://proceedings.mlr.press/v139/cai21f.html
|
https://proceedings.mlr.press/v139/cai21f.html
|
http://proceedings.mlr.press/v139/cai21f/cai21f.pdf
|
ICML 2021
|
|
High-dimensional Experimental Design and Kernel Bandits
|
Romain Camilleri, Kevin Jamieson, Julian Katz-Samuels
|
In recent years methods from optimal linear experimental design have been leveraged to obtain state of the art results for linear bandits. A design returned from an objective such as G-optimal design is actually a probability distribution over a pool of potential measurement vectors. Consequently, one nuisance of the approach is the task of converting this continuous probability distribution into a discrete assignment of N measurements. While sophisticated rounding techniques have been proposed, in d dimensions they require N to be at least d, d log(log(d)), or d^2 based on the sub-optimality of the solution. In this paper we are interested in settings where N may be much less than d, such as in experimental design in an RKHS where d may be effectively infinite. In this work, we propose a rounding procedure that frees N of any dependence on the dimension d, while achieving nearly the same performance guarantees of existing rounding procedures. We evaluate the procedure against a baseline that projects the problem to a lower dimensional space and performs rounding there, which requires N to just be at least a notion of the effective dimension. We also leverage our new approach in a new algorithm for kernelized bandits to obtain state of the art results for regret minimization and pure exploration. An advantage of our approach over existing UCB-like approaches is that our kernel bandit algorithms are provably robust to model misspecification.
|
https://proceedings.mlr.press/v139/camilleri21a.html
|
https://proceedings.mlr.press/v139/camilleri21a.html
|
https://proceedings.mlr.press/v139/camilleri21a.html
|
http://proceedings.mlr.press/v139/camilleri21a/camilleri21a.pdf
|
ICML 2021
|
|
A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization
|
Andrew Campbell, Wenlong Chen, Vincent Stimper, Jose Miguel Hernandez-Lobato, Yichuan Zhang
|
Hamiltonian Monte Carlo (HMC) is one of the most successful sampling methods in machine learning. However, its performance is significantly affected by the choice of hyperparameter values. Existing approaches for optimizing the HMC hyperparameters either optimize a proxy for mixing speed or consider the HMC chain as an implicit variational distribution and optimize a tractable lower bound that can be very loose in practice. Instead, we propose to optimize an objective that quantifies directly the speed of convergence to the target distribution. Our objective can be easily optimized using stochastic gradient descent. We evaluate our proposed method and compare to baselines on a variety of problems including sampling from synthetic 2D distributions, reconstructing sparse signals, learning deep latent variable models and sampling molecular configurations from the Boltzmann distribution of a 22 atom molecule. We find that our method is competitive with or improves upon alternative baselines in all these experiments.
|
https://proceedings.mlr.press/v139/campbell21a.html
|
https://proceedings.mlr.press/v139/campbell21a.html
|
https://proceedings.mlr.press/v139/campbell21a.html
|
http://proceedings.mlr.press/v139/campbell21a/campbell21a.pdf
|
ICML 2021
|
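For context on what is being tuned in the HMC hyperparameter-optimisation abstract above, a standard HMC transition is sketched below; the leapfrog step size and number of leapfrog steps are exactly the hyperparameters such schemes adjust. This is textbook HMC, not the paper's gradient-based tuning objective:

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=10, rng=np.random.default_rng(0)):
    """One standard Hamiltonian Monte Carlo transition.

    step_size and n_leapfrog are the hyperparameters whose values strongly
    affect mixing speed and that tuning schemes try to optimise.
    """
    p = rng.standard_normal(x.shape)                  # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(x_new)   # leapfrog integration
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_prob(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)
    # Metropolis accept/reject keeps the target distribution invariant
    log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new) - (log_prob(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_accept else x
```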
|
Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections
|
Alexander Camuto, Xiaoyu Wang, Lingjiong Zhu, Chris Holmes, Mert Gurbuzbalaban, Umut Simsekli
|
Gaussian noise injections (GNIs) are a family of simple and widely-used regularisation methods for training neural networks, where one injects additive or multiplicative Gaussian noise to the network activations at every iteration of the optimisation algorithm, which is typically chosen as stochastic gradient descent (SGD). In this paper, we focus on the so-called ‘implicit effect’ of GNIs, which is the effect of the injected noise on the dynamics of SGD. We show that this effect induces an \emph{asymmetric heavy-tailed noise} on SGD gradient updates. In order to model this modified dynamics, we first develop a Langevin-like stochastic differential equation that is driven by a general family of \emph{asymmetric} heavy-tailed noise. Using this model we then formally prove that GNIs induce an ‘implicit bias’, which varies depending on the heaviness of the tails and the level of asymmetry. Our empirical results confirm that different types of neural networks trained with GNIs are well-modelled by the proposed dynamics and that the implicit effect of these injections induces a bias that degrades the performance of networks.
|
https://proceedings.mlr.press/v139/camuto21a.html
|
https://proceedings.mlr.press/v139/camuto21a.html
|
https://proceedings.mlr.press/v139/camuto21a.html
|
http://proceedings.mlr.press/v139/camuto21a/camuto21a.pdf
|
ICML 2021
|
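The Gaussian noise injection abstract above analyses the implicit effect of adding Gaussian noise to network activations at each SGD iteration. A minimal PyTorch-style sketch of the injection itself (the regulariser being analysed, not the paper's heavy-tailed SDE model); the module name is invented:

```python
import torch
import torch.nn as nn

class GaussianNoiseInjection(nn.Module):
    """Adds additive Gaussian noise to activations during training only."""

    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        if self.training:
            return x + self.sigma * torch.randn_like(x)
        return x

# e.g. interleave the injection with ordinary layers
net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), GaussianNoiseInjection(0.1), nn.Linear(64, 10))
```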
|
Fold2Seq: A Joint Sequence(1D)-Fold(3D) Embedding-based Generative Model for Protein Design
|
Yue Cao, Payel Das, Vijil Chenthamarakshan, Pin-Yu Chen, Igor Melnyk, Yang Shen
|
Designing novel protein sequences for a desired 3D topological fold is a fundamental yet non-trivial task in protein engineering. Challenges exist due to the complex sequence–fold relationship, as well as the difficulties to capture the diversity of the sequences (therefore structures and functions) within a fold. To overcome these challenges, we propose Fold2Seq, a novel transformer-based generative framework for designing protein sequences conditioned on a specific target fold. To model the complex sequence–structure relationship, Fold2Seq jointly learns a sequence embedding using a transformer and a fold embedding from the density of secondary structural elements in 3D voxels. On test sets with single, high-resolution and complete structure inputs for individual folds, our experiments demonstrate improved or comparable performance of Fold2Seq in terms of speed, coverage, and reliability for sequence design, when compared to existing state-of-the-art methods that include data-driven deep generative models and physics-based RosettaDesign. The unique advantages of fold-based Fold2Seq, in comparison to a structure-based deep model and RosettaDesign, become more evident on three additional real-world challenges originating from low-quality, incomplete, or ambiguous input structures. Source code and data are available at https://github.com/IBM/fold2seq.
|
https://proceedings.mlr.press/v139/cao21a.html
|
https://proceedings.mlr.press/v139/cao21a.html
|
https://proceedings.mlr.press/v139/cao21a.html
|
http://proceedings.mlr.press/v139/cao21a/cao21a.pdf
|
ICML 2021
|
|
Learning from Similarity-Confidence Data
|
Yuzhou Cao, Lei Feng, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama
|
Weakly supervised learning has drawn considerable attention recently to reduce the expensive time and labor consumption of labeling massive data. In this paper, we investigate a novel weakly supervised learning problem of learning from similarity-confidence (Sconf) data, where only unlabeled data pairs equipped with confidence that illustrates their degree of similarity (two examples are similar if they belong to the same class) are needed for training a discriminative binary classifier. We propose an unbiased estimator of the classification risk that can be calculated from only Sconf data and show that the estimation error bound achieves the optimal convergence rate. To alleviate potential overfitting when flexible models are used, we further employ a risk correction scheme on the proposed risk estimator. Experimental results demonstrate the effectiveness of the proposed methods.
|
https://proceedings.mlr.press/v139/cao21b.html
|
https://proceedings.mlr.press/v139/cao21b.html
|
https://proceedings.mlr.press/v139/cao21b.html
|
http://proceedings.mlr.press/v139/cao21b/cao21b.pdf
|
ICML 2021
|
|
Parameter-free Locally Accelerated Conditional Gradients
|
Alejandro Carderera, Jelena Diakonikolas, Cheuk Yin Lin, Sebastian Pokutta
|
Projection-free conditional gradient (CG) methods are the algorithms of choice for constrained optimization setups in which projections are often computationally prohibitive but linear optimization over the constraint set remains computationally feasible. Unlike in projection-based methods, globally accelerated convergence rates are in general unattainable for CG. However, a very recent work on Locally accelerated CG (LaCG) has demonstrated that local acceleration for CG is possible for many settings of interest. The main downside of LaCG is that it requires knowledge of the smoothness and strong convexity parameters of the objective function. We remove this limitation by introducing a novel, Parameter-Free Locally accelerated CG (PF-LaCG) algorithm, for which we provide rigorous convergence guarantees. Our theoretical results are complemented by numerical experiments, which demonstrate local acceleration and showcase the practical improvements of PF-LaCG over non-accelerated algorithms, both in terms of iteration count and wall-clock time.
|
https://proceedings.mlr.press/v139/carderera21a.html
|
https://proceedings.mlr.press/v139/carderera21a.html
|
https://proceedings.mlr.press/v139/carderera21a.html
|
http://proceedings.mlr.press/v139/carderera21a/carderera21a.pdf
|
ICML 2021
|
|
Optimizing persistent homology based functions
|
Mathieu Carriere, Frederic Chazal, Marc Glisse, Yuichi Ike, Hariprasad Kannan, Yuhei Umeda
|
Solving optimization tasks based on functions and losses with a topological flavor is a very active and growing field of research in data science and Topological Data Analysis, with applications in non-convex optimization, statistics and machine learning. However, the approaches proposed in the literature are usually anchored to a specific application and/or topological construction, and do not come with theoretical guarantees. To address this issue, we study the differentiability of a general map associated with the most common topological construction, that is, the persistence map. Building on real analytic geometry arguments, we propose a general framework that allows us to define and compute gradients for persistence-based functions in a very simple way. We also provide a simple, explicit and sufficient condition for convergence of stochastic subgradient methods for such functions. This result encompasses all the constructions and applications of topological optimization in the literature. Finally, we provide associated code, that is easy to handle and to mix with other non-topological methods and constraints, as well as some experiments showcasing the versatility of our approach.
|
https://proceedings.mlr.press/v139/carriere21a.html
|
https://proceedings.mlr.press/v139/carriere21a.html
|
https://proceedings.mlr.press/v139/carriere21a.html
|
http://proceedings.mlr.press/v139/carriere21a/carriere21a.pdf
|
ICML 2021
|
|
Online Policy Gradient for Model Free Learning of Linear Quadratic Regulators with $\sqrt{T}$ Regret
|
Asaf B Cassel, Tomer Koren
|
We consider the task of learning to control a linear dynamical system under fixed quadratic costs, known as the Linear Quadratic Regulator (LQR) problem. While model-free approaches are often favorable in practice, thus far only model-based methods, which rely on costly system identification, have been shown to achieve regret that scales with the optimal dependence on the time horizon T. We present the first model-free algorithm that achieves similar regret guarantees. Our method relies on an efficient policy gradient scheme, and a novel and tighter analysis of the cost of exploration in policy space in this setting.
|
https://proceedings.mlr.press/v139/cassel21a.html
|
https://proceedings.mlr.press/v139/cassel21a.html
|
https://proceedings.mlr.press/v139/cassel21a.html
|
http://proceedings.mlr.press/v139/cassel21a/cassel21a.pdf
|
ICML 2021
|
|
Multi-Receiver Online Bayesian Persuasion
|
Matteo Castiglioni, Alberto Marchesi, Andrea Celli, Nicola Gatti
|
Bayesian persuasion studies how an informed sender should partially disclose information to influence the behavior of a self-interested receiver. Classical models make the stringent assumption that the sender knows the receiver’s utility. This can be relaxed by considering an online learning framework in which the sender repeatedly faces a receiver of an unknown, adversarially selected type. We study, for the first time, an online Bayesian persuasion setting with multiple receivers. We focus on the case with no externalities and binary actions, as customary in offline models. Our goal is to design no-regret algorithms for the sender with polynomial per-iteration running time. First, we prove a negative result: for any $0 < \alpha \leq 1$, there is no polynomial-time no-$\alpha$-regret algorithm when the sender’s utility function is supermodular or anonymous. Then, we focus on the setting of submodular sender’s utility functions and we show that, in this case, it is possible to design a polynomial-time no-$(1-1/e)$-regret algorithm. To do so, we introduce a general online gradient descent framework to handle online learning problems with a finite number of possible loss functions. This requires the existence of an approximate projection oracle. We show that, in our setting, there exists one such projection oracle which can be implemented in polynomial time.
|
https://proceedings.mlr.press/v139/castiglioni21a.html
|
https://proceedings.mlr.press/v139/castiglioni21a.html
|
https://proceedings.mlr.press/v139/castiglioni21a.html
|
http://proceedings.mlr.press/v139/castiglioni21a/castiglioni21a.pdf
|
ICML 2021
|
|
Marginal Contribution Feature Importance - an Axiomatic Approach for Explaining Data
|
Amnon Catav, Boyang Fu, Yazeed Zoabi, Ahuva Libi Weiss Meilik, Noam Shomron, Jason Ernst, Sriram Sankararaman, Ran Gilad-Bachrach
|
In recent years, methods were proposed for assigning feature importance scores to measure the contribution of individual features. While in some cases the goal is to understand a specific model, in many cases the goal is to understand the contribution of certain properties (features) to a real-world phenomenon. Thus, a distinction has been made between feature importance scores that explain a model and scores that explain the data. When explaining the data, machine learning models are used as proxies in settings where conducting many real-world experiments is expensive or prohibited. While existing feature importance scores show great success in explaining models, we demonstrate their limitations when explaining the data, especially in the presence of correlations between features. Therefore, we develop a set of axioms to capture properties expected from a feature importance score when explaining data and prove that there exists only one score that satisfies all of them, the Marginal Contribution Feature Importance (MCI). We analyze the theoretical properties of this score function and demonstrate its merits empirically.
|
https://proceedings.mlr.press/v139/catav21a.html
|
https://proceedings.mlr.press/v139/catav21a.html
|
https://proceedings.mlr.press/v139/catav21a.html
|
http://proceedings.mlr.press/v139/catav21a/catav21a.pdf
|
ICML 2021
|
|
Disentangling syntax and semantics in the brain with deep networks
|
Charlotte Caucheteux, Alexandre Gramfort, Jean-Remi King
|
The activations of language transformers like GPT-2 have been shown to linearly map onto brain activity during speech comprehension. However, the nature of these activations remains largely unknown and presumably conflate distinct linguistic classes. Here, we propose a taxonomy to factorize the high-dimensional activations of language models into four combinatorial classes: lexical, compositional, syntactic, and semantic representations. We then introduce a statistical method to decompose, through the lens of GPT-2’s activations, the brain activity of 345 subjects recorded with functional magnetic resonance imaging (fMRI) during the listening of 4.6 hours of narrated text. The results highlight two findings. First, compositional representations recruit a more widespread cortical network than lexical ones, and encompass the bilateral temporal, parietal and prefrontal cortices. Second, contrary to previous claims, syntax and semantics are not associated with separated modules, but, instead, appear to share a common and distributed neural substrate. Overall, this study introduces a versatile framework to isolate, in the brain activity, the distributed representations of linguistic constructs.
|
https://proceedings.mlr.press/v139/caucheteux21a.html
|
https://proceedings.mlr.press/v139/caucheteux21a.html
|
https://proceedings.mlr.press/v139/caucheteux21a.html
|
http://proceedings.mlr.press/v139/caucheteux21a/caucheteux21a.pdf
|
ICML 2021
|
|
Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees
|
L. Elisa Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi
|
We present an optimization framework for learning a fair classifier in the presence of noisy perturbations in the protected attributes. Compared to prior work, our framework can be employed with a very general class of linear and linear-fractional fairness constraints, can handle multiple, non-binary protected attributes, and outputs a classifier that comes with provable guarantees on both accuracy and fairness. Empirically, we show that our framework can be used to attain either statistical rate or false positive rate fairness guarantees with a minimal loss in accuracy, even when the noise is large, in two real-world datasets.
|
https://proceedings.mlr.press/v139/celis21a.html
|
https://proceedings.mlr.press/v139/celis21a.html
|
https://proceedings.mlr.press/v139/celis21a.html
|
http://proceedings.mlr.press/v139/celis21a/celis21a.pdf
|
ICML 2021
|
|
Best Model Identification: A Rested Bandit Formulation
|
Leonardo Cella, Massimiliano Pontil, Claudio Gentile
|
We introduce and analyze a best arm identification problem in the rested bandit setting, wherein arms are themselves learning algorithms whose expected losses decrease with the number of times the arm has been played. The shape of the expected loss functions is similar across arms, and is assumed to be available up to unknown parameters that have to be learned on the fly. We define a novel notion of regret for this problem, where we compare to the policy that always plays the arm having the smallest expected loss at the end of the game. We analyze an arm elimination algorithm whose regret vanishes as the time horizon increases. The actual rate of convergence depends in a detailed way on the postulated functional form of the expected losses. We complement our analysis with lower bounds, indicating strengths and limitations of the proposed solution.
|
https://proceedings.mlr.press/v139/cella21a.html
|
https://proceedings.mlr.press/v139/cella21a.html
|
https://proceedings.mlr.press/v139/cella21a.html
|
http://proceedings.mlr.press/v139/cella21a/cella21a.pdf
|
ICML 2021
|
|
Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research
|
Johan Samir Obando Ceron, Pablo Samuel Castro
|
Since the introduction of DQN, a vast majority of reinforcement learning research has focused on reinforcement learning with deep neural networks as function approximators. New methods are typically evaluated on a set of environments that have now become standard, such as Atari 2600 games. While these benchmarks help standardize evaluation, their computational cost has the unfortunate side effect of widening the gap between those with ample access to computational resources, and those without. In this work we argue that, despite the community’s emphasis on large-scale environments, the traditional small-scale environments can still yield valuable scientific insights and can help reduce the barriers to entry for underprivileged communities. To substantiate our claims, we empirically revisit the paper which introduced the Rainbow algorithm [Hessel et al., 2018] and present some new insights into the algorithms used by Rainbow.
|
https://proceedings.mlr.press/v139/ceron21a.html
|
https://proceedings.mlr.press/v139/ceron21a.html
|
https://proceedings.mlr.press/v139/ceron21a.html
|
http://proceedings.mlr.press/v139/ceron21a/ceron21a.pdf
|
ICML 2021
|
|
Learning Routines for Effective Off-Policy Reinforcement Learning
|
Edoardo Cetin, Oya Celiktutan
|
The performance of reinforcement learning depends upon designing an appropriate action space, where the effect of each action is measurable, yet, granular enough to permit flexible behavior. So far, this process involved non-trivial user choices in terms of the available actions and their execution frequency. We propose a novel framework for reinforcement learning that effectively lifts such constraints. Within our framework, agents learn effective behavior over a routine space: a new, higher-level action space, where each routine represents a set of ’equivalent’ sequences of granular actions with arbitrary length. Our routine space is learned end-to-end to facilitate the accomplishment of underlying off-policy reinforcement learning objectives. We apply our framework to two state-of-the-art off-policy algorithms and show that the resulting agents obtain relevant performance improvements while requiring fewer interactions with the environment per episode, improving computational efficiency.
|
https://proceedings.mlr.press/v139/cetin21a.html
|
https://proceedings.mlr.press/v139/cetin21a.html
|
https://proceedings.mlr.press/v139/cetin21a.html
|
http://proceedings.mlr.press/v139/cetin21a/cetin21a.pdf
|
ICML 2021
|
|
Learning Node Representations Using Stationary Flow Prediction on Large Payment and Cash Transaction Networks
|
Ciwan Ceylan, Salla Franzén, Florian T. Pokorny
|
Banks are required to analyse large transaction datasets as a part of the fight against financial crime. Today, this analysis is either performed manually by domain experts or using expensive feature engineering. Gradient flow analysis allows for basic representation learning as node potentials can be inferred directly from network transaction data. However, the gradient model has a fundamental limitation: it cannot represent all types of network flows. Furthermore, standard methods for learning the gradient flow are not appropriate for flow signals that span multiple orders of magnitude and contain outliers, i.e. transaction data. In this work, the gradient model is extended to a gated version and we prove that it, unlike the gradient model, is a universal approximator for flows on graphs. To tackle the mentioned challenges of transaction data, we propose a multi-scale and outlier robust loss function based on the Student-t log-likelihood. Ethereum transaction data is used for evaluation and the gradient models outperform MLP models using hand-engineered and node2vec features in terms of relative error. These results extend to 60 synthetic datasets, with experiments also showing that the gated gradient model learns qualitative information about the underlying synthetic generative flow distributions.
|
https://proceedings.mlr.press/v139/ceylan21a.html
|
https://proceedings.mlr.press/v139/ceylan21a.html
|
https://proceedings.mlr.press/v139/ceylan21a.html
|
http://proceedings.mlr.press/v139/ceylan21a/ceylan21a.pdf
|
ICML 2021
|
|
GRAND: Graph Neural Diffusion
|
Ben Chamberlain, James Rowbottom, Maria I Gorinova, Michael Bronstein, Stefan Webb, Emanuele Rossi
|
We present Graph Neural Diffusion (GRAND) that approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as discretisations of an underlying PDE. In our model, the layer structure and topology correspond to the discretisation choices of temporal and spatial operators. Our approach allows a principled development of a broad new class of GNNs that are able to address the common plights of graph learning models such as depth, oversmoothing, and bottlenecks. Key to the success of our models is stability with respect to perturbations in the data, and this is addressed for both implicit and explicit discretisation schemes. We develop linear and nonlinear versions of GRAND, which achieve competitive results on many standard graph benchmarks.
|
https://proceedings.mlr.press/v139/chamberlain21a.html
|
https://proceedings.mlr.press/v139/chamberlain21a.html
|
https://proceedings.mlr.press/v139/chamberlain21a.html
|
http://proceedings.mlr.press/v139/chamberlain21a/chamberlain21a.pdf
|
ICML 2021
|
|
HoroPCA: Hyperbolic Dimensionality Reduction via Horospherical Projections
|
Ines Chami, Albert Gu, Dat P Nguyen, Christopher Re
|
This paper studies Principal Component Analysis (PCA) for data lying in hyperbolic spaces. Given directions, PCA relies on: (1) a parameterization of subspaces spanned by these directions, (2) a method of projection onto subspaces that preserves information in these directions, and (3) an objective to optimize, namely the variance explained by projections. We generalize each of these concepts to the hyperbolic space and propose HoroPCA, a method for hyperbolic dimensionality reduction. By focusing on the core problem of extracting principal directions, HoroPCA theoretically better preserves information in the original data such as distances, compared to previous generalizations of PCA. Empirically, we validate that HoroPCA outperforms existing dimensionality reduction methods, significantly reducing error in distance preservation. As a data whitening method, it improves downstream classification by up to 3.9% compared to methods that don’t use whitening. Finally, we show that HoroPCA can be used to visualize hyperbolic data in two dimensions.
|
https://proceedings.mlr.press/v139/chami21a.html
|
https://proceedings.mlr.press/v139/chami21a.html
|
https://proceedings.mlr.press/v139/chami21a.html
|
http://proceedings.mlr.press/v139/chami21a/chami21a.pdf
|
ICML 2021
|
|
Goal-Conditioned Reinforcement Learning with Imagined Subgoals
|
Elliot Chane-Sane, Cordelia Schmid, Ivan Laptev
|
Goal-conditioned reinforcement learning endows an agent with a large variety of skills, but it often struggles to solve tasks that require more temporally extended reasoning. In this work, we propose to incorporate imagined subgoals into policy learning to facilitate learning of complex tasks. Imagined subgoals are predicted by a separate high-level policy, which is trained simultaneously with the policy and its critic. This high-level policy predicts intermediate states halfway to the goal using the value function as a reachability metric. We don’t require the policy to reach these subgoals explicitly. Instead, we use them to define a prior policy, and incorporate this prior into a KL-constrained policy iteration scheme to speed up and regularize learning. Imagined subgoals are used during policy learning, but not during test time, where we only apply the learned policy. We evaluate our approach on complex robotic navigation and manipulation tasks and show that it outperforms existing methods by a large margin.
|
https://proceedings.mlr.press/v139/chane-sane21a.html
|
https://proceedings.mlr.press/v139/chane-sane21a.html
|
https://proceedings.mlr.press/v139/chane-sane21a.html
|
http://proceedings.mlr.press/v139/chane-sane21a/chane-sane21a.pdf
|
ICML 2021
|
|
Locally Private k-Means in One Round
|
Alisa Chang, Badih Ghazi, Ravi Kumar, Pasin Manurangsi
|
We provide an approximation algorithm for k-means clustering in the \emph{one-round} (aka \emph{non-interactive}) local model of differential privacy (DP). Our algorithm achieves an approximation ratio arbitrarily close to the best \emph{non private} approximation algorithm, improving upon previously known algorithms that only guarantee large (constant) approximation ratios. Furthermore, ours is the first constant-factor approximation algorithm for k-means that requires only \emph{one} round of communication in the local DP model, positively resolving an open question of Stemmer (SODA 2020). Our algorithmic framework is quite flexible; we demonstrate this by showing that it also yields a similar near-optimal approximation algorithm in the (one-round) shuffle DP model.
|
https://proceedings.mlr.press/v139/chang21a.html
|
https://proceedings.mlr.press/v139/chang21a.html
|
https://proceedings.mlr.press/v139/chang21a.html
|
http://proceedings.mlr.press/v139/chang21a/chang21a.pdf
|
ICML 2021
|
|
Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment
|
Michael Chang, Sid Kaushik, Sergey Levine, Tom Griffiths
|
Many transfer problems require re-using previously optimal decisions for solving new tasks, which suggests the need for learning algorithms that can modify the mechanisms for choosing certain actions independently of those for choosing others. However, there is currently no formalism nor theory for how to achieve this kind of modular credit assignment. To answer this question, we define modular credit assignment as a constraint on minimizing the algorithmic mutual information among feedback signals for different decisions. We introduce what we call the modularity criterion for testing whether a learning algorithm satisfies this constraint by performing causal analysis on the algorithm itself. We generalize the recently proposed societal decision-making framework as a more granular formalism than the Markov decision process to prove that for decision sequences that do not contain cycles, certain single-step temporal difference action-value methods meet this criterion while all policy-gradient methods do not. Empirical evidence suggests that such action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.
|
https://proceedings.mlr.press/v139/chang21b.html
|
https://proceedings.mlr.press/v139/chang21b.html
|
https://proceedings.mlr.press/v139/chang21b.html
|
http://proceedings.mlr.press/v139/chang21b/chang21b.pdf
|
ICML 2021
|
|
Image-Level or Object-Level? A Tale of Two Resampling Strategies for Long-Tailed Detection
|
Nadine Chang, Zhiding Yu, Yu-Xiong Wang, Animashree Anandkumar, Sanja Fidler, Jose M Alvarez
|
Training on datasets with long-tailed distributions has been challenging for major recognition tasks such as classification and detection. To deal with this challenge, image resampling is typically introduced as a simple but effective approach. However, we observe that long-tailed detection differs from classification since multiple classes may be present in one image. As a result, image resampling alone is not enough to yield a sufficiently balanced distribution at the object-level. We address object-level resampling by introducing an object-centric sampling strategy based on a dynamic, episodic memory bank. Our proposed strategy has two benefits: 1) convenient object-level resampling without significant extra computation, and 2) implicit feature-level augmentation from model updates. We show that image-level and object-level resamplings are both important, and thus unify them with a joint resampling strategy. Our method achieves state-of-the-art performance on the rare categories of LVIS, with 1.89% and 3.13% relative improvements over Forest R-CNN on detection and instance segmentation.
|
https://proceedings.mlr.press/v139/chang21c.html
|
https://proceedings.mlr.press/v139/chang21c.html
|
https://proceedings.mlr.press/v139/chang21c.html
|
http://proceedings.mlr.press/v139/chang21c/chang21c.pdf
|
ICML 2021
|
|
DeepWalking Backwards: From Embeddings Back to Graphs
|
Sudhanshu Chanpuriya, Cameron Musco, Konstantinos Sotiropoulos, Charalampos Tsourakakis
|
Low-dimensional node embeddings play a key role in analyzing graph datasets. However, little work studies exactly what information is encoded by popular embedding methods, and how this information correlates with performance in downstream learning tasks. We tackle this question by studying whether embeddings can be inverted to (approximately) recover the graph used to generate them. Focusing on a variant of the popular DeepWalk method \cite{PerozziAl-RfouSkiena:2014, QiuDongMa:2018}, we present algorithms for accurate embedding inversion – i.e., from the low-dimensional embedding of a graph $G$, we can find a graph $\tilde G$ with a very similar embedding. We perform numerous experiments on real-world networks, observing that significant information about $G$, such as specific edges and bulk properties like triangle density, is often lost in $\tilde G$. However, community structure is often preserved or even enhanced. Our findings are a step towards a more rigorous understanding of exactly what information embeddings encode about the input graph, and why this information is useful for learning tasks.
|
https://proceedings.mlr.press/v139/chanpuriya21a.html
|
https://proceedings.mlr.press/v139/chanpuriya21a.html
|
https://proceedings.mlr.press/v139/chanpuriya21a.html
|
http://proceedings.mlr.press/v139/chanpuriya21a/chanpuriya21a.pdf
|
ICML 2021
|
|
Differentiable Spatial Planning using Transformers
|
Devendra Singh Chaplot, Deepak Pathak, Jitendra Malik
|
We consider the problem of spatial path planning. In contrast to the classical solutions which optimize a new plan from scratch and assume access to the full map with ground truth obstacle locations, we learn a planner from the data in a differentiable manner that allows us to leverage statistical regularities from past data. We propose Spatial Planning Transformers (SPT), which given an obstacle map learns to generate actions by planning over long-range spatial dependencies, unlike prior data-driven planners that propagate information locally via convolutional structure in an iterative manner. In the setting where the ground truth map is not known to the agent, we leverage pre-trained SPTs in an end-to-end framework that has the structure of mapper and planner built into it which allows seamless generalization to out-of-distribution maps and goals. SPTs outperform prior state-of-the-art differentiable planners across all the setups for both manipulation and navigation tasks, leading to an absolute improvement of 7-19%.
|
https://proceedings.mlr.press/v139/chaplot21a.html
|
https://proceedings.mlr.press/v139/chaplot21a.html
|
https://proceedings.mlr.press/v139/chaplot21a.html
|
http://proceedings.mlr.press/v139/chaplot21a/chaplot21a.pdf
|
ICML 2021
|
|
Solving Challenging Dexterous Manipulation Tasks With Trajectory Optimisation and Reinforcement Learning
|
Henry J Charlesworth, Giovanni Montana
|
Training agents to autonomously control anthropomorphic robotic hands has the potential to lead to systems capable of performing a multitude of complex manipulation tasks in unstructured and uncertain environments. In this work, we first introduce a suite of challenging simulated manipulation tasks where current reinforcement learning and trajectory optimisation techniques perform poorly. These include environments where two simulated hands have to pass or throw objects between each other, as well as an environment where the agent must learn to spin a long pen between its fingers. We then introduce a simple trajectory optimisation algorithm that performs significantly better than existing methods on these environments. Finally, on the most challenging “PenSpin” task, we combine sub-optimal demonstrations generated through trajectory optimisation with off-policy reinforcement learning, obtaining performance that far exceeds either of these approaches individually. Videos of all of our results are available at: https://dexterous-manipulation.github.io
|
https://proceedings.mlr.press/v139/charlesworth21a.html
|
https://proceedings.mlr.press/v139/charlesworth21a.html
|
https://proceedings.mlr.press/v139/charlesworth21a.html
|
http://proceedings.mlr.press/v139/charlesworth21a/charlesworth21a.pdf
|
ICML 2021
|
|
Classification with Rejection Based on Cost-sensitive Classification
|
Nontawat Charoenphakdee, Zhenghang Cui, Yivan Zhang, Masashi Sugiyama
|
The goal of classification with rejection is to avoid risky misclassification in error-critical applications such as medical diagnosis and product inspection. In this paper, based on the relationship between classification with rejection and cost-sensitive classification, we propose a novel method of classification with rejection by learning an ensemble of cost-sensitive classifiers, which satisfies all the following properties: (i) it can avoid estimating class-posterior probabilities, resulting in improved classification accuracy. (ii) it allows a flexible choice of losses including non-convex ones, (iii) it does not require complicated modifications when using different losses, (iv) it is applicable to both binary and multiclass cases, and (v) it is theoretically justifiable for any classification-calibrated loss. Experimental results demonstrate the usefulness of our proposed approach in clean-labeled, noisy-labeled, and positive-unlabeled classification.
|
https://proceedings.mlr.press/v139/charoenphakdee21a.html
|
https://proceedings.mlr.press/v139/charoenphakdee21a.html
|
https://proceedings.mlr.press/v139/charoenphakdee21a.html
|
http://proceedings.mlr.press/v139/charoenphakdee21a/charoenphakdee21a.pdf
|
ICML 2021
|
|
Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills
|
Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jacob Varley, Alex Irpan, Benjamin Eysenbach, Ryan C Julian, Chelsea Finn, Sergey Levine
|
We consider the problem of learning useful robotic skills from previously collected offline data without access to manually specified rewards or additional online exploration, a setting that is becoming increasingly important for scaling robot learning by reusing past robotic data. In particular, we propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset. We employ goal-conditioned Q-learning with hindsight relabeling and develop several techniques that enable training in a particularly challenging offline setting. We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects. We also show that our method can learn to reach long-horizon goals across multiple episodes through goal chaining, and learn rich representations that can help with downstream tasks through pre-training or auxiliary objectives.
|
https://proceedings.mlr.press/v139/chebotar21a.html
|
https://proceedings.mlr.press/v139/chebotar21a.html
|
https://proceedings.mlr.press/v139/chebotar21a.html
|
http://proceedings.mlr.press/v139/chebotar21a/chebotar21a.pdf
|
ICML 2021
|
|
Unified Robust Semi-Supervised Variational Autoencoder
|
Xu Chen
|
In this paper, we propose a novel noise-robust semi-supervised deep generative model by jointly tackling noisy labels and outliers in a unified robust semi-supervised variational autoencoder (URSVAE). Typically, the uncertainty of input data is characterized by placing an uncertainty prior on the parameters of the probability density distributions in order to ensure the robustness of the variational encoder towards outliers. Subsequently, a noise transition model is integrated naturally into our model to alleviate the detrimental effects of noisy labels. Moreover, a robust divergence measure is employed to further enhance the robustness, where a novel variational lower bound is derived and optimized to infer the network parameters. By proving that the influence function on the proposed evidence lower bound is bounded, the enormous potential of the proposed model for classification in the presence of compound noise is demonstrated. The experimental results highlight the superiority of the proposed framework by evaluating on image classification tasks and comparing with state-of-the-art approaches.
|
https://proceedings.mlr.press/v139/chen21a.html
|
https://proceedings.mlr.press/v139/chen21a.html
|
https://proceedings.mlr.press/v139/chen21a.html
|
http://proceedings.mlr.press/v139/chen21a/chen21a.pdf
|
ICML 2021
|
|
Unsupervised Learning of Visual 3D Keypoints for Control
|
Boyuan Chen, Pieter Abbeel, Deepak Pathak
|
Learning sensorimotor control policies from high-dimensional images crucially relies on the quality of the underlying visual representations. Prior works show that structured latent space such as visual keypoints often outperforms unstructured representations for robotic control. However, most of these representations, whether structured or unstructured are learned in a 2D space even though the control tasks are usually performed in a 3D environment. In this work, we propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner. The input images are embedded into latent 3D keypoints via a differentiable encoder which is trained to optimize both a multi-view consistency loss and downstream task objective. These discovered 3D keypoints tend to meaningfully capture robot joints as well as object movements in a consistent manner across both time and 3D space. The proposed approach outperforms prior state-of-art methods across a variety of reinforcement learning benchmarks. Code and videos at https://buoyancy99.github.io/unsup-3d-keypoints/.
|
https://proceedings.mlr.press/v139/chen21b.html
|
https://proceedings.mlr.press/v139/chen21b.html
|
https://proceedings.mlr.press/v139/chen21b.html
|
http://proceedings.mlr.press/v139/chen21b/chen21b.pdf
|
ICML 2021
|
|
Integer Programming for Causal Structure Learning in the Presence of Latent Variables
|
Rui Chen, Sanjeeb Dash, Tian Gao
|
The problem of finding an ancestral acyclic directed mixed graph (ADMG) that represents the causal relationships between a set of variables is an important area of research on causal inference. Most existing score-based structure learning methods focus on learning directed acyclic graph (DAG) models without latent variables. A number of score-based methods have recently been proposed for ADMG learning, yet they are heuristic in nature and do not guarantee an optimal solution. We propose a novel exact score-based method that solves an integer programming (IP) formulation and returns a score-maximizing ancestral ADMG for a set of continuous variables that follow a multivariate Gaussian distribution. We generalize the state-of-the-art IP model for DAG learning problems and derive new classes of valid inequalities to formulate an IP model for ADMG learning. Empirically, our model can be solved efficiently for medium-sized problems and achieves better accuracy than state-of-the-art score-based methods as well as benchmark constraint-based methods.
|
https://proceedings.mlr.press/v139/chen21c.html
|
https://proceedings.mlr.press/v139/chen21c.html
|
https://proceedings.mlr.press/v139/chen21c.html
|
http://proceedings.mlr.press/v139/chen21c/chen21c.pdf
|
ICML 2021
|
|
Improved Corruption Robust Algorithms for Episodic Reinforcement Learning
|
Yifang Chen, Simon Du, Kevin Jamieson
|
We study episodic reinforcement learning under unknown adversarial corruptions in both the rewards and the transition probabilities of the underlying system. We propose new algorithms which, compared to the existing results in \cite{lykouris2020corruption}, achieve strictly better regret bounds in terms of total corruptions for the tabular setting. To be specific, firstly, our regret bounds depend on more precise numerical values of total rewards corruptions and transition corruptions, instead of only on the total number of corrupted episodes. Secondly, our regret bounds are the first of their kind in the reinforcement learning setting to have the number of corruptions show up additively with respect to $\min\{ \sqrt{T},\text{PolicyGapComplexity} \}$ rather than multiplicatively. Our results follow from a general algorithmic framework that combines corruption-robust policy elimination meta-algorithms, and plug-in reward-free exploration sub-algorithms. Replacing the meta-algorithm or sub-algorithm may extend the framework to address other corrupted settings with potentially more structure.
|
https://proceedings.mlr.press/v139/chen21d.html
|
https://proceedings.mlr.press/v139/chen21d.html
|
https://proceedings.mlr.press/v139/chen21d.html
|
http://proceedings.mlr.press/v139/chen21d/chen21d.pdf
|
ICML 2021
|
|
Scalable Computations of Wasserstein Barycenter via Input Convex Neural Networks
|
Jiaojiao Fan, Amirhossein Taghvaei, Yongxin Chen
|
Wasserstein Barycenter is a principled approach to represent the weighted mean of a given set of probability distributions, utilizing the geometry induced by optimal transport. In this work, we present a novel scalable algorithm to approximate Wasserstein Barycenters, aiming at high-dimensional applications in machine learning. Our proposed algorithm is based on the Kantorovich dual formulation of the Wasserstein-2 distance as well as a recent neural network architecture, the input convex neural network, that is known to parametrize convex functions. The distinguishing features of our method are: i) it only requires samples from the marginal distributions; ii) unlike the existing approaches, it represents the Barycenter with a generative model and can thus generate infinite samples from the barycenter without querying the marginal distributions; iii) it works similarly to a Generative Adversarial Model in the one-marginal case. We demonstrate the efficacy of our algorithm by comparing it with state-of-the-art methods in multiple experiments.
|
https://proceedings.mlr.press/v139/fan21d.html
|
https://proceedings.mlr.press/v139/fan21d.html
|
https://proceedings.mlr.press/v139/fan21d.html
|
http://proceedings.mlr.press/v139/fan21d/fan21d.pdf
|
ICML 2021
|
|
Neural Feature Matching in Implicit 3D Representations
|
Yunlu Chen, Basura Fernando, Hakan Bilen, Thomas Mensink, Efstratios Gavves
|
Recently, neural implicit functions have achieved impressive results for encoding 3D shapes. Conditioning on low-dimensional latent codes generalises a single implicit function to learn a shared representation space for a variety of shapes, with the advantage of smooth interpolation. While the benefits from the global latent space do not correspond to explicit points at the local level, we propose to track the continuous point trajectory by matching implicit features with the latent code interpolating between shapes, from which we corroborate the hierarchical functionality of the deep implicit functions, where early layers map the latent code to fitting the coarse shape structure, and deeper layers further refine the shape details. Furthermore, the structured representation space of implicit functions enables applying feature matching for shape deformation, with the benefit of handling topology and semantics inconsistency, such as from an armchair to a chair with no arms, without explicit flow functions or manual annotations.
|
https://proceedings.mlr.press/v139/chen21f.html
|
https://proceedings.mlr.press/v139/chen21f.html
|
https://proceedings.mlr.press/v139/chen21f.html
|
http://proceedings.mlr.press/v139/chen21f/chen21f.pdf
|
ICML 2021
|
|
Decentralized Riemannian Gradient Descent on the Stiefel Manifold
|
Shixiang Chen, Alfredo Garcia, Mingyi Hong, Shahin Shahrampour
|
We consider a distributed non-convex optimization where a network of agents aims at minimizing a global function over the Stiefel manifold. The global function is represented as a finite sum of smooth local functions, where each local function is associated with one agent and agents communicate with each other over an undirected connected graph. The problem is non-convex as local functions are possibly non-convex (but smooth) and the Stiefel manifold is a non-convex set. We present a decentralized Riemannian stochastic gradient method (DRSGD) with the convergence rate of $\mathcal{O}(1/\sqrt{K})$ to a stationary point. To have exact convergence with constant stepsize, we also propose a decentralized Riemannian gradient tracking algorithm (DRGTA) with the convergence rate of $\mathcal{O}(1/K)$ to a stationary point. We use multi-step consensus to preserve the iteration in the local (consensus) region. DRGTA is the first decentralized algorithm with exact convergence for distributed optimization on the Stiefel manifold.
|
https://proceedings.mlr.press/v139/chen21g.html
|
https://proceedings.mlr.press/v139/chen21g.html
|
https://proceedings.mlr.press/v139/chen21g.html
|
http://proceedings.mlr.press/v139/chen21g/chen21g.pdf
|
ICML 2021
|
|
Learning Self-Modulating Attention in Continuous Time Space with Applications to Sequential Recommendation
|
Chao Chen, Haoyu Geng, Nianzu Yang, Junchi Yan, Daiyue Xue, Jianping Yu, Xiaokang Yang
|
User interests are usually dynamic in the real world, which poses both theoretical and practical challenges for learning accurate preferences from rich behavior data. Among existing user behavior modeling solutions, attention networks are widely adopted for their effectiveness and relative simplicity. Despite being extensively studied, existing attentions still suffer from two limitations: i) conventional attentions mainly take into account the spatial correlation between user behaviors, regardless of the distance between those behaviors in the continuous time space; and ii) these attentions mostly provide a dense and undistinguished distribution over all past behaviors and then attentively encode them into the output latent representations. This is however not suitable in practical scenarios where a user’s future actions are relevant to a small subset of her/his historical behaviors. In this paper, we propose a novel attention network, named \textit{self-modulating attention}, that models the complex and non-linearly evolving dynamic user preferences. We empirically demonstrate the effectiveness of our method on top-N sequential recommendation tasks, and the results on three large-scale real-world datasets show that our model can achieve state-of-the-art performance.
|
https://proceedings.mlr.press/v139/chen21h.html
|
https://proceedings.mlr.press/v139/chen21h.html
|
https://proceedings.mlr.press/v139/chen21h.html
|
http://proceedings.mlr.press/v139/chen21h/chen21h.pdf
|
ICML 2021
|
|
Mandoline: Model Evaluation under Distribution Shift
|
Mayee Chen, Karan Goel, Nimit S Sohoni, Fait Poms, Kayvon Fatahalian, Christopher Re
|
Machine learning models are often deployed in different settings than they were trained and validated on, posing a challenge to practitioners who wish to predict how well the deployed model will perform on a target distribution. If an unlabeled sample from the target distribution is available, along with a labeled sample from a possibly different source distribution, standard approaches such as importance weighting can be applied to estimate performance on the target. However, importance weighting struggles when the source and target distributions have non-overlapping support or are high-dimensional. Taking inspiration from fields such as epidemiology and polling, we develop Mandoline, a new evaluation framework that mitigates these issues. Our key insight is that practitioners may have prior knowledge about the ways in which the distribution shifts, which we can use to better guide the importance weighting procedure. Specifically, users write simple "slicing functions" – noisy, potentially correlated binary functions intended to capture possible axes of distribution shift – to compute reweighted performance estimates. We further describe a density ratio estimation framework for the slices and show how its estimation error scales with slice quality and dataset size. Empirical validation on NLP and vision tasks shows that Mandoline can estimate performance on the target distribution up to 3x more accurately compared to standard baselines.
|
https://proceedings.mlr.press/v139/chen21i.html
|
https://proceedings.mlr.press/v139/chen21i.html
|
https://proceedings.mlr.press/v139/chen21i.html
|
http://proceedings.mlr.press/v139/chen21i/chen21i.pdf
|
ICML 2021
|
|
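A minimal sketch of the reweighting idea behind the Mandoline entry above, under simplifying assumptions that are mine rather than the paper's: the slicing functions are binary, the density ratio is estimated by matching the empirical frequencies of whole slice vectors, and the reweighted metric is accuracy.

```python
import numpy as np
from collections import Counter

def slice_reweighted_accuracy(src_slices, src_correct, tgt_slices):
    """Estimate target accuracy from labeled source data.

    src_slices: (n_src, k) binary matrix of slicing-function outputs on source.
    src_correct: (n_src,) binary vector, 1 if the model was correct.
    tgt_slices: (n_tgt, k) binary matrix of slicing-function outputs on target.
    """
    src_keys = [tuple(row) for row in src_slices]
    tgt_keys = [tuple(row) for row in tgt_slices]
    p_src, p_tgt = Counter(src_keys), Counter(tgt_keys)
    n_src, n_tgt = len(src_keys), len(tgt_keys)
    # Importance weight per source point: ratio of slice-vector frequencies.
    w = np.array([(p_tgt[k] / n_tgt) / (p_src[k] / n_src) for k in src_keys])
    return float(np.sum(w * src_correct) / np.sum(w))

# Toy example: one slicing function marking "hard" examples, which are
# more common in the target distribution than in the source.
rng = np.random.default_rng(0)
src_slices = rng.integers(0, 2, size=(1000, 1))
src_correct = np.where(src_slices[:, 0] == 1,
                       rng.random(1000) < 0.6,   # ~60% accuracy on hard slice
                       rng.random(1000) < 0.9)   # ~90% accuracy otherwise
tgt_slices = (rng.random((1000, 1)) < 0.8).astype(int)  # 80% hard in target
print(slice_reweighted_accuracy(src_slices, src_correct, tgt_slices))
```

With one slicing function marking "hard" examples that are more frequent in the target, the weighted estimate comes out near 0.8 * 0.6 + 0.2 * 0.9 = 0.66 rather than the unweighted source accuracy of roughly 0.75.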
Order Matters: Probabilistic Modeling of Node Sequence for Graph Generation
|
Xiaohui Chen, Xu Han, Jiajing Hu, Francisco Ruiz, Liping Liu
|
A graph generative model defines a distribution over graphs. Typically, the model consists of a sequential process that creates and adds nodes and edges. Such a sequential process defines an ordering of the nodes in the graph. The computation of the model’s likelihood requires marginalizing over the node orderings; this makes maximum likelihood estimation (MLE) challenging due to the (factorial) number of possible permutations. In this work, we provide an expression for the likelihood of a graph generative model and show that its calculation is closely related to the problem of graph automorphism. In addition, we derive a variational inference (VI) algorithm for fitting a graph generative model that is based on the maximization of a variational bound of the log-likelihood. This allows the model to be trained with node orderings from the approximate posterior instead of ad-hoc orderings. Our experiments show that our log-likelihood bound is significantly tighter than the bound of previous schemes. The models fitted with the VI algorithm are able to generate high-quality graphs that match the structures of target graphs not seen during training.
|
https://proceedings.mlr.press/v139/chen21j.html
|
https://proceedings.mlr.press/v139/chen21j.html
|
https://proceedings.mlr.press/v139/chen21j.html
|
http://proceedings.mlr.press/v139/chen21j/chen21j.pdf
|
ICML 2021
|
|
CARTL: Cooperative Adversarially-Robust Transfer Learning
|
Dian Chen, Hongxin Hu, Qian Wang, Li Yinli, Cong Wang, Chao Shen, Qi Li
|
Transfer learning eases the burden of training a well-performing model from scratch, especially when training data is scarce and computation power is limited. In deep learning, a typical strategy for transfer learning is to freeze the early layers of a pre-trained model and fine-tune the rest of its layers on the target domain. Previous work focuses on the accuracy of the transferred model but neglects the transfer of adversarial robustness. In this work, we first show that transfer learning improves the accuracy on the target domain but degrades the inherited robustness of the target model. To address such a problem, we propose a novel cooperative adversarially-robust transfer learning (CARTL) framework, which pre-trains the model via feature distance minimization and fine-tunes the pre-trained model with non-expansive fine-tuning for target domain tasks. Empirical results show that CARTL improves the inherited robustness by about 28% at most compared with the baseline with the same degree of accuracy. Furthermore, we study the relationship between batch normalization (BN) layers and robustness in the context of transfer learning, and we reveal that freezing BN layers can further boost the robustness transfer.
|
https://proceedings.mlr.press/v139/chen21k.html
|
https://proceedings.mlr.press/v139/chen21k.html
|
https://proceedings.mlr.press/v139/chen21k.html
|
http://proceedings.mlr.press/v139/chen21k/chen21k.pdf
|
ICML 2021
|
|
Finding the Stochastic Shortest Path with Low Regret: the Adversarial Cost and Unknown Transition Case
|
Liyu Chen, Haipeng Luo
|
We make significant progress toward the stochastic shortest path problem with adversarial costs and unknown transition. Specifically, we develop algorithms that achieve $O(\sqrt{S^2ADT_\star K})$ regret for the full-information setting and $O(\sqrt{S^3A^2DT_\star K})$ regret for the bandit feedback setting, where $D$ is the diameter, $T_\star$ is the expected hitting time of the optimal policy, $S$ is the number of states, $A$ is the number of actions, and $K$ is the number of episodes. Our work strictly improves (Rosenberg and Mansour, 2020) in the full information setting, extends (Chen et al., 2020) from known transition to unknown transition, and is also the first to consider the most challenging combination: bandit feedback with adversarial costs and unknown transition. To remedy the gap between our upper bounds and the current best lower bounds constructed via a stochastically oblivious adversary, we also propose algorithms with near-optimal regret for this special case.
|
https://proceedings.mlr.press/v139/chen21l.html
|
https://proceedings.mlr.press/v139/chen21l.html
|
https://proceedings.mlr.press/v139/chen21l.html
|
http://proceedings.mlr.press/v139/chen21l/chen21l.pdf
|
ICML 2021
|
|
SpreadsheetCoder: Formula Prediction from Semi-structured Context
|
Xinyun Chen, Petros Maniatis, Rishabh Singh, Charles Sutton, Hanjun Dai, Max Lin, Denny Zhou
|
Spreadsheet formula prediction has been an important program synthesis problem with many real-world applications. Previous works typically utilize input-output examples as the specification for spreadsheet formula synthesis, where each input-output pair simulates a separate row in the spreadsheet. However, this formulation does not fully capture the rich context in real-world spreadsheets. First, spreadsheet data entries are organized as tables, thus rows and columns are not necessarily independent from each other. In addition, many spreadsheet tables include headers, which provide high-level descriptions of the cell data. However, previous synthesis approaches do not consider headers as part of the specification. In this work, we present the first approach for synthesizing spreadsheet formulas from tabular context, which includes both headers and semi-structured tabular data. In particular, we propose SpreadsheetCoder, a BERT-based model architecture to represent the tabular context in both row-based and column-based formats. We train our model on a large dataset of spreadsheets, and demonstrate that SpreadsheetCoder achieves top-1 prediction accuracy of 42.51%, which is a considerable improvement over baselines that do not employ rich tabular context. Compared to the rule-based system, SpreadsheetCoder assists 82% more users in composing formulas on Google Sheets.
|
https://proceedings.mlr.press/v139/chen21m.html
|
https://proceedings.mlr.press/v139/chen21m.html
|
https://proceedings.mlr.press/v139/chen21m.html
|
http://proceedings.mlr.press/v139/chen21m/chen21m.pdf
|
ICML 2021
|
|
Large-Margin Contrastive Learning with Distance Polarization Regularizer
|
Shuo Chen, Gang Niu, Chen Gong, Jun Li, Jian Yang, Masashi Sugiyama
|
\emph{Contrastive learning} (CL) pretrains models in a pairwise manner, where given a data point, other data points are all regarded as dissimilar, including some that are \emph{semantically} similar. The issue has been addressed by properly weighting similar and dissimilar pairs as in \emph{positive-unlabeled learning}, so that the objective of CL is \emph{unbiased} and CL is \emph{consistent}. However, in this paper, we argue that this great solution is still not enough: its weighted objective \emph{hides} the issue where the semantically similar pairs are still pushed away; as CL is pretraining, this phenomenon is not our desideratum and might affect downstream tasks. To this end, we propose \emph{large-margin contrastive learning} (LMCL) with \emph{distance polarization regularizer}, motivated by the distribution characteristic of pairwise distances in \emph{metric learning}. In LMCL, we can distinguish between \emph{intra-cluster} and \emph{inter-cluster} pairs, and then only push away inter-cluster pairs, which \emph{solves} the above issue explicitly. Theoretically, we prove a tighter error bound for LMCL; empirically, the superiority of LMCL is demonstrated across multiple domains, \emph{i.e.}, image classification, sentence representation, and reinforcement learning.
|
https://proceedings.mlr.press/v139/chen21n.html
|
https://proceedings.mlr.press/v139/chen21n.html
|
https://proceedings.mlr.press/v139/chen21n.html
|
http://proceedings.mlr.press/v139/chen21n/chen21n.pdf
|
ICML 2021
|
|
Z-GCNETs: Time Zigzags at Graph Convolutional Networks for Time Series Forecasting
|
Yuzhou Chen, Ignacio Segovia, Yulia R. Gel
|
There has recently been a surge of interest in developing a new class of deep learning (DL) architectures that integrate an explicit time dimension as a fundamental building block of learning and representation mechanisms. In turn, many recent results show that topological descriptors of the observed data, encoding information on the shape of the dataset in a topological space at different scales, that is, persistent homology of the data, may contain important complementary information, improving both performance and robustness of DL. As a convergence of these two emerging ideas, we propose to enhance DL architectures with the most salient time-conditioned topological information of the data and introduce the concept of zigzag persistence into time-aware graph convolutional networks (GCNs). Zigzag persistence provides a systematic and mathematically rigorous framework to track the most important topological features of the observed data that tend to manifest themselves over time. To integrate the extracted time-conditioned topological descriptors into DL, we develop a new topological summary, the zigzag persistence image, and derive its theoretical stability guarantees. We validate the new GCNs with a time-aware zigzag topological layer (Z-GCNETs), in application to traffic forecasting and Ethereum blockchain price prediction. Our results indicate that Z-GCNETs outperform 13 state-of-the-art methods on 4 time series datasets.
|
https://proceedings.mlr.press/v139/chen21o.html
|
https://proceedings.mlr.press/v139/chen21o.html
|
https://proceedings.mlr.press/v139/chen21o.html
|
http://proceedings.mlr.press/v139/chen21o/chen21o.pdf
|
ICML 2021
|
|
A Unified Lottery Ticket Hypothesis for Graph Neural Networks
|
Tianlong Chen, Yongduo Sui, Xuxi Chen, Aston Zhang, Zhangyang Wang
|
With graphs rapidly growing in size and deeper graph neural networks (GNNs) emerging, the training and inference of GNNs become increasingly expensive. Existing network weight pruning algorithms cannot address the main space and computational bottleneck in GNNs, caused by the size and connectivity of the graph. To this end, this paper first presents a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights, for effectively accelerating GNN inference on large-scale graphs. Leveraging this new tool, we further generalize the recently popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network, which can be jointly identified from the original GNN and the full dense graph by iteratively applying UGS. Like its counterpart in convolutional neural networks, GLT can be trained in isolation to match the performance of training with the full model and graph, and can be drawn from both randomly initialized and self-supervised pre-trained GNNs. Our proposal has been experimentally verified across various GNN architectures and diverse tasks, on both small-scale graph datasets (Cora, Citeseer and PubMed), and large-scale datasets from the challenging Open Graph Benchmark (OGB). Specifically, for node classification, our found GLTs achieve the same accuracies with 20%–98% MACs saving on small graphs and 25%–85% MACs saving on large ones. For link prediction, GLTs lead to 48%–97% and 70% MACs saving on small and large graph datasets, respectively, without compromising predictive performance. Codes are at https://github.com/VITA-Group/Unified-LTH-GNN.
|
https://proceedings.mlr.press/v139/chen21p.html
|
https://proceedings.mlr.press/v139/chen21p.html
|
https://proceedings.mlr.press/v139/chen21p.html
|
http://proceedings.mlr.press/v139/chen21p/chen21p.pdf
|
ICML 2021
|
|
Network Inference and Influence Maximization from Samples
|
Wei Chen, Xiaoming Sun, Jialin Zhang, Zhijie Zhang
|
Influence maximization is the task of selecting a small number of seed nodes in a social network to maximize the spread of the influence from these seeds, and it has been widely investigated in the past two decades. In the canonical setting, the whole social network as well as its diffusion parameters is given as input. In this paper, we consider the more realistic sampling setting where the network is unknown and we only have a set of passively observed cascades that record the set of activated nodes at each diffusion step. We study the task of influence maximization from these cascade samples (IMS), and present constant approximation algorithms for this task under mild conditions on the seed set distribution. To achieve the optimization goal, we also provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from the cascade data. Comparing with prior solutions, our network inference algorithm requires weaker assumptions and does not rely on maximum-likelihood estimation and convex programming. Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn, and we do not need any assumption related to the network structure or diffusion parameters.
|
https://proceedings.mlr.press/v139/chen21q.html
|
https://proceedings.mlr.press/v139/chen21q.html
|
https://proceedings.mlr.press/v139/chen21q.html
|
http://proceedings.mlr.press/v139/chen21q/chen21q.pdf
|
ICML 2021
|
|
Data-driven Prediction of General Hamiltonian Dynamics via Learning Exactly-Symplectic Maps
|
Renyi Chen, Molei Tao
|
We consider the learning and prediction of nonlinear time series generated by a latent symplectic map. A special case is (not necessarily separable) Hamiltonian systems, whose solution flows give such symplectic maps. For this special case, both generic approaches based on learning the vector field of the latent ODE and specialized approaches based on learning the Hamiltonian that generates the vector field exist. Our method, however, is different as it does not rely on the vector field nor assume its existence; instead, it directly learns the symplectic evolution map in discrete time. Moreover, we do so by representing the symplectic map via a generating function, which we approximate by a neural network (hence the name GFNN). This way, our approximation of the evolution map is always \emph{exactly} symplectic. This additional geometric structure allows the local prediction error at each step to accumulate in a controlled fashion, and we will prove, under reasonable assumptions, that the global prediction error grows at most \emph{linearly} with long prediction time, which significantly improves an otherwise exponential growth. In addition, as a map-based and thus purely data-driven method, GFNN avoids two additional sources of inaccuracies common in vector-field based approaches, namely the error in approximating the vector field by finite difference of the data, and the error in numerical integration of the vector field for making predictions. Numerical experiments further demonstrate our claims.
|
https://proceedings.mlr.press/v139/chen21r.html
|
https://proceedings.mlr.press/v139/chen21r.html
|
https://proceedings.mlr.press/v139/chen21r.html
|
http://proceedings.mlr.press/v139/chen21r/chen21r.pdf
|
ICML 2021
|
|
Analysis of stochastic Lanczos quadrature for spectrum approximation
|
Tyler Chen, Thomas Trogdon, Shashanka Ubaru
|
The cumulative empirical spectral measure (CESM) $\Phi[\mathbf{A}] : \mathbb{R} \to [0,1]$ of an $n\times n$ symmetric matrix $\mathbf{A}$ is defined as the fraction of eigenvalues of $\mathbf{A}$ less than a given threshold, i.e., $\Phi[\mathbf{A}](x) := \sum_{i=1}^{n} \frac{1}{n} {\large\unicode{x1D7D9}}[ \lambda_i[\mathbf{A}]\leq x]$. Spectral sums $\operatorname{tr}(f[\mathbf{A}])$ can be computed as the Riemann–Stieltjes integral of $f$ against $\Phi[\mathbf{A}]$, so the task of estimating the CESM arises frequently in a number of applications, including machine learning. We present an error analysis for stochastic Lanczos quadrature (SLQ). We show that SLQ obtains an approximation to the CESM within a Wasserstein distance of $t \: | \lambda_{\text{max}}[\mathbf{A}] - \lambda_{\text{min}}[\mathbf{A}] |$ with probability at least $1-\eta$, by applying the Lanczos algorithm for $\lceil 12 t^{-1} + \frac{1}{2} \rceil$ iterations to $\lceil 4 ( n+2 )^{-1}t^{-2} \ln(2n\eta^{-1}) \rceil$ vectors sampled independently and uniformly from the unit sphere. We additionally provide (matrix-dependent) a posteriori error bounds for the Wasserstein and Kolmogorov–Smirnov distances between the output of this algorithm and the true CESM. The quality of our bounds is demonstrated using numerical experiments.
|
https://proceedings.mlr.press/v139/chen21s.html
|
https://proceedings.mlr.press/v139/chen21s.html
|
https://proceedings.mlr.press/v139/chen21s.html
|
http://proceedings.mlr.press/v139/chen21s/chen21s.pdf
|
ICML 2021
|
|
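A small numerical sketch of the objects in the stochastic Lanczos quadrature entry above: the exact CESM of a random symmetric matrix and an SLQ estimate built from a few Lanczos runs started at random vectors. The iteration count and number of probe vectors are illustrative choices, not the constants from the paper's bound.

```python
import numpy as np

def lanczos(A, v, k):
    """k steps of Lanczos with full reorthogonalization; returns (alphas, betas)."""
    n = A.shape[0]
    Q = np.zeros((n, k))
    alphas, betas = np.zeros(k), np.zeros(k - 1)
    q = v / np.linalg.norm(v)
    Q[:, 0] = q
    beta, q_prev = 0.0, np.zeros(n)
    for j in range(k):
        w = A @ q - beta * q_prev
        alphas[j] = q @ w
        w -= alphas[j] * q
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # reorthogonalize
        if j < k - 1:
            beta = np.linalg.norm(w)
            betas[j] = beta
            q_prev, q = q, w / beta
            Q[:, j + 1] = q
    return alphas, betas

def slq_cesm(A, x, n_vectors=10, k=20, seed=0):
    """Stochastic Lanczos quadrature estimate of the CESM at thresholds x."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    est = np.zeros_like(x, dtype=float)
    for _ in range(n_vectors):
        v = rng.standard_normal(n)
        alphas, betas = lanczos(A, v, k)
        T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
        nodes, vecs = np.linalg.eigh(T)
        weights = vecs[0, :] ** 2          # Gaussian quadrature weights
        est += (weights[None, :] * (nodes[None, :] <= x[:, None])).sum(axis=1)
    return est / n_vectors

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2
x = np.linspace(-30, 30, 5)
exact = np.array([(np.linalg.eigvalsh(A) <= t).mean() for t in x])
print(np.c_[x, exact, slq_cesm(A, x)])   # threshold, exact CESM, SLQ estimate
```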
Large-Scale Multi-Agent Deep FBSDEs
|
Tianrong Chen, Ziyi O Wang, Ioannis Exarchos, Evangelos Theodorou
|
In this paper we present a scalable deep learning framework for finding Markovian Nash Equilibria in multi-agent stochastic games using fictitious play. The motivation is inspired by theoretical analysis of Forward Backward Stochastic Differential Equations and their implementation in a deep learning setting, which is the source of our algorithm’s sample efficiency improvement. By taking advantage of the permutation-invariant property of agents in symmetric games, the scalability and performance is further enhanced significantly. We showcase superior performance of our framework over the state-of-the-art deep fictitious play algorithm on an inter-bank lending/borrowing problem in terms of multiple metrics. More importantly, our approach scales up to 3000 agents in simulation, a scale which, to the best of our knowledge, represents a new state-of-the-art. We also demonstrate the applicability of our framework in robotics on a belief space autonomous racing problem.
|
https://proceedings.mlr.press/v139/chen21t.html
|
https://proceedings.mlr.press/v139/chen21t.html
|
https://proceedings.mlr.press/v139/chen21t.html
|
http://proceedings.mlr.press/v139/chen21t/chen21t.pdf
|
ICML 2021
|
|
Representation Subspace Distance for Domain Adaptation Regression
|
Xinyang Chen, Sinan Wang, Jianmin Wang, Mingsheng Long
|
Regression, as a counterpart to classification, is a major paradigm with a wide range of applications. Domain adaptation regression extends it by generalizing a regressor from a labeled source domain to an unlabeled target domain. Existing domain adaptation regression methods have achieved positive results limited only to the shallow regime. A question arises: why is the benefit of learning invariant representations less pronounced in the deep regime? A key finding of this paper is that classification is robust to feature scaling but regression is not, and aligning the distributions of deep representations will alter feature scale and impede domain adaptation regression. Based on this finding, we propose to close the domain gap through orthogonal bases of the representation spaces, which are free from feature scaling. Inspired by the Riemannian geometry of the Grassmann manifold, we define a geometrical distance over representation subspaces and learn deep transferable representations by minimizing it. To avoid breaking the geometrical properties of deep representations, we further introduce bases mismatch penalization to match the ordering of orthogonal bases across representation subspaces. Our method is evaluated on three domain adaptation regression benchmarks, two of which are introduced in this paper. Our method outperforms the state-of-the-art methods significantly, forming early positive results in the deep regime.
|
https://proceedings.mlr.press/v139/chen21u.html
|
https://proceedings.mlr.press/v139/chen21u.html
|
https://proceedings.mlr.press/v139/chen21u.html
|
http://proceedings.mlr.press/v139/chen21u/chen21u.pdf
|
ICML 2021
|
|
Overcoming Catastrophic Forgetting by Bayesian Generative Regularization
|
Pei-Hung Chen, Wei Wei, Cho-Jui Hsieh, Bo Dai
|
In this paper, we propose a new method to overcome catastrophic forgetting by adding generative regularization to the Bayesian inference framework. The Bayesian method provides a general framework for continual learning. We further construct a generative regularization term for all given classification models by leveraging energy-based models and Langevin dynamics sampling to enrich the features learned in each task. By combining discriminative and generative losses together, we empirically show that the proposed method outperforms state-of-the-art methods on a variety of tasks, avoiding catastrophic forgetting in continual learning. In particular, the proposed method outperforms baseline methods by over 15% on the Fashion-MNIST dataset and 10% on the CUB dataset.
|
https://proceedings.mlr.press/v139/chen21v.html
|
https://proceedings.mlr.press/v139/chen21v.html
|
https://proceedings.mlr.press/v139/chen21v.html
|
http://proceedings.mlr.press/v139/chen21v/chen21v.pdf
|
ICML 2021
|
|
Cyclically Equivariant Neural Decoders for Cyclic Codes
|
Xiangyu Chen, Min Ye
|
Neural decoders were introduced as a generalization of the classic Belief Propagation (BP) decoding algorithms, where the Trellis graph in the BP algorithm is viewed as a neural network, and the weights in the Trellis graph are optimized by training the neural network. In this work, we propose a novel neural decoder for cyclic codes by exploiting their cyclically invariant property. More precisely, we impose a shift invariant structure on the weights of our neural decoder so that any cyclic shift of inputs results in the same cyclic shift of outputs. Extensive simulations with BCH codes and punctured Reed-Muller (RM) codes show that our new decoder consistently outperforms previous neural decoders when decoding cyclic codes. Finally, we propose a list decoding procedure that can significantly reduce the decoding error probability for BCH codes and punctured RM codes. For certain high-rate codes, the gap between our list decoder and the Maximum Likelihood decoder is less than $0.1$dB. Code available at github.com/cyclicallyneuraldecoder
|
https://proceedings.mlr.press/v139/chen21w.html
|
https://proceedings.mlr.press/v139/chen21w.html
|
https://proceedings.mlr.press/v139/chen21w.html
|
http://proceedings.mlr.press/v139/chen21w/chen21w.pdf
|
ICML 2021
|
|
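The shift-equivariance constraint described in the entry above can be illustrated with a single linear layer whose weight matrix is circulant: cyclically shifting the input produces the same cyclic shift of the output. This is a generic numpy demonstration of that property, not the paper's decoder architecture.

```python
import numpy as np

def circulant(first_row):
    """Build a circulant matrix from its first row (shared weights)."""
    n = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(n)])

rng = np.random.default_rng(0)
n = 7                       # e.g., block length of a cyclic code
w = rng.standard_normal(n)  # one row of shared weights
W = circulant(w)            # weight matrix constrained to be circulant

x = rng.standard_normal(n)  # input (e.g., channel observations)
shift = 3
x_shifted = np.roll(x, shift)

# Equivariance: shifting the input shifts the output by the same amount.
print(np.allclose(np.roll(W @ x, shift), W @ x_shifted))  # True
```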
A Receptor Skeleton for Capsule Neural Networks
|
Jintai Chen, Hongyun Yu, Chengde Qian, Danny Z Chen, Jian Wu
|
In previous Capsule Neural Networks (CapsNets), routing algorithms often performed clustering processes to assemble the child capsules’ representations into parent capsules. Such routing algorithms were typically implemented with iterative processes and incurred high computing complexity. This paper presents a new capsule structure, which contains a set of optimizable receptors and a transmitter devised on the capsule’s representation. Specifically, child capsules’ representations are sent to the parent capsules whose receptors match well the transmitters of the child capsules’ representations, avoiding computationally complex routing algorithms. To ensure that the receptors in a CapsNet work cooperatively, we build a skeleton to organize the receptors in different capsule layers of a CapsNet. The receptor skeleton assigns a share-out objective for each receptor, making the CapsNet perform as a hierarchical agglomerative clustering process. Comprehensive experiments verify that our approach facilitates efficient clustering processes, and CapsNets with our approach significantly outperform CapsNets with previous routing algorithms on image classification, affine transformation generalization, overlapped object recognition, and representation semantic decoupling.
|
https://proceedings.mlr.press/v139/chen21x.html
|
https://proceedings.mlr.press/v139/chen21x.html
|
https://proceedings.mlr.press/v139/chen21x.html
|
http://proceedings.mlr.press/v139/chen21x/chen21x.pdf
|
ICML 2021
|
|
Accelerating Gossip SGD with Periodic Global Averaging
|
Yiming Chen, Kun Yuan, Yingya Zhang, Pan Pan, Yinghui Xu, Wotao Yin
|
Communication overhead hinders the scalability of large-scale distributed training. Gossip SGD, where each node averages only with its neighbors, is more communication-efficient than the prevalent parallel SGD. However, its convergence rate is inversely proportional to the quantity $1-\beta$, which measures the network connectivity. On large and sparse networks where $1-\beta \to 0$, Gossip SGD requires more iterations to converge, which offsets its communication benefit. This paper introduces Gossip-PGA, which adds Periodic Global Averaging to accelerate Gossip SGD. Its transient stage, i.e., the iterations required to reach the asymptotic linear speedup stage, improves from $\Omega(\beta^4 n^3/(1-\beta)^4)$ to $\Omega(\beta^4 n^3 H^4)$ for non-convex problems. The influence of network topology in Gossip-PGA can be controlled by the averaging period $H$. Its transient-stage complexity is also superior to that of local SGD, which has order $\Omega(n^3 H^4)$. Empirical results of large-scale training on image classification (ResNet50) and language modeling (BERT) validate our theoretical findings.
|
https://proceedings.mlr.press/v139/chen21y.html
|
https://proceedings.mlr.press/v139/chen21y.html
|
https://proceedings.mlr.press/v139/chen21y.html
|
http://proceedings.mlr.press/v139/chen21y/chen21y.pdf
|
ICML 2021
|
|
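A toy simulation sketch of the update rule in the Gossip-PGA entry above, under my own simplifications: each node minimizes a local quadratic, mixes its iterate with neighbors through a fixed doubly stochastic matrix every step, and replaces gossip with an exact global average every H steps.

```python
import numpy as np

def gossip_pga(local_targets, W, steps=200, lr=0.1, H=10, seed=0):
    """Gossip SGD with periodic global averaging on a toy quadratic problem.

    Node i minimizes 0.5 * ||x - local_targets[i]||^2; the global optimum is
    the mean of the targets. W is a doubly stochastic mixing matrix.
    """
    rng = np.random.default_rng(seed)
    n, d = local_targets.shape
    X = rng.standard_normal((n, d))             # one iterate per node
    for t in range(1, steps + 1):
        grads = X - local_targets               # local gradients
        X = W @ (X - lr * grads)                # local step + gossip averaging
        if t % H == 0:                          # periodic global averaging
            X[:] = X.mean(axis=0)
    return X

# Ring of 8 nodes; each node mixes with itself and its two neighbors.
n = 8
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1 / 3
targets = np.random.default_rng(1).standard_normal((n, 3))
X = gossip_pga(targets, W)
print(np.abs(X - targets.mean(axis=0)).max())   # all nodes end near the mean of the targets
```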
ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training
|
Jianfei Chen, Lianmin Zheng, Zhewei Yao, Dequan Wang, Ion Stoica, Michael Mahoney, Joseph Gonzalez
|
The increasing size of neural network models has been critical for improvements in their accuracy, but device memory is not growing at the same rate. This creates fundamental challenges for training neural networks within limited memory environments. In this work, we propose ActNN, a memory-efficient training framework that stores randomly quantized activations for back propagation. We prove the convergence of ActNN for general network architectures, and we characterize the impact of quantization on the convergence via an exact expression for the gradient variance. Using our theory, we propose novel mixed-precision quantization strategies that exploit the activation’s heterogeneity across feature dimensions, samples, and layers. These techniques can be readily applied to existing dynamic graph frameworks, such as PyTorch, simply by substituting the layers. We evaluate ActNN on mainstream computer vision models for classification, detection, and segmentation tasks. On all these tasks, ActNN compresses the activation to 2 bits on average, with negligible accuracy loss. ActNN reduces the memory footprint of the activation by 12x, and it enables training with a 6.6x to 14x larger batch size.
|
https://proceedings.mlr.press/v139/chen21z.html
|
https://proceedings.mlr.press/v139/chen21z.html
|
https://proceedings.mlr.press/v139/chen21z.html
|
http://proceedings.mlr.press/v139/chen21z/chen21z.pdf
|
ICML 2021
|
|
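A standalone sketch of the kind of activation compression described in the ActNN entry above: per-row 2-bit min-max quantization of a saved activation and its dequantization for the backward pass. The quantization scheme here is a generic choice of mine; it is not ActNN's mixed-precision strategy or its PyTorch layer implementation.

```python
import numpy as np

def quantize_2bit(act):
    """Quantize each row of an activation tensor to 2 bits (4 levels)."""
    lo = act.min(axis=1, keepdims=True)
    hi = act.max(axis=1, keepdims=True)
    scale = np.where(hi > lo, (hi - lo) / 3.0, 1.0)   # levels 0..3
    codes = np.clip(np.round((act - lo) / scale), 0, 3).astype(np.uint8)
    return codes, lo, scale

def dequantize_2bit(codes, lo, scale):
    """Reconstruct an approximate activation for use in backpropagation."""
    return codes.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
act = rng.standard_normal((4, 1024)).astype(np.float32)   # saved activation
codes, lo, scale = quantize_2bit(act)
act_hat = dequantize_2bit(codes, lo, scale)
print("mean abs error:", np.abs(act - act_hat).mean())
print("storage: 2-bit codes plus one (lo, scale) pair per row instead of 32-bit floats")
```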
SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation
|
Wuxinlin Cheng, Chenhui Deng, Zhiqiang Zhao, Yaohui Cai, Zhiru Zhang, Zhuo Feng
|
A black-box spectral method is introduced for evaluating the adversarial robustness of a given machine learning (ML) model. Our approach, named SPADE, exploits bijective distance mapping between the input/output graphs constructed for approximating the manifolds corresponding to the input/output data. By leveraging the generalized Courant-Fischer theorem, we propose a SPADE score for evaluating the adversarial robustness of a given model, which is proved to be an upper bound of the best Lipschitz constant under the manifold setting. To reveal the most non-robust data samples highly vulnerable to adversarial attacks, we develop a spectral graph embedding procedure leveraging dominant generalized eigenvectors. This embedding step allows assigning each data point a robustness score that can be further harnessed for more effective adversarial training of ML models. Our experiments show promising empirical results for neural networks trained with the MNIST and CIFAR-10 data sets.
|
https://proceedings.mlr.press/v139/cheng21a.html
|
https://proceedings.mlr.press/v139/cheng21a.html
|
https://proceedings.mlr.press/v139/cheng21a.html
|
http://proceedings.mlr.press/v139/cheng21a/cheng21a.pdf
|
ICML 2021
|
|
Self-supervised and Supervised Joint Training for Resource-rich Machine Translation
|
Yong Cheng, Wei Wang, Lu Jiang, Wolfgang Macherey
|
Self-supervised pre-training of text representations has been successfully applied to low-resource Neural Machine Translation (NMT). However, it usually fails to achieve notable gains on resource-rich NMT. In this paper, we propose a joint training approach, F2-XEnDec, to combine self-supervised and supervised learning to optimize NMT models. To exploit complementary self-supervised signals for supervised learning, NMT models are trained on examples that are interbred from monolingual and parallel sentences through a new process called crossover encoder-decoder. Experiments on two resource-rich translation benchmarks, WMT’14 English-German and WMT’14 English-French, demonstrate that our approach achieves substantial improvements over several strong baseline methods and obtains a new state of the art of 46.19 BLEU on English-French when incorporating back translation. Results also show that our approach is capable of improving model robustness to input perturbations such as code-switching noise which frequently appears on social media.
|
https://proceedings.mlr.press/v139/cheng21b.html
|
https://proceedings.mlr.press/v139/cheng21b.html
|
https://proceedings.mlr.press/v139/cheng21b.html
|
http://proceedings.mlr.press/v139/cheng21b/cheng21b.pdf
|
ICML 2021
|
|
Exact Optimization of Conformal Predictors via Incremental and Decremental Learning
|
Giovanni Cherubin, Konstantinos Chatzikokolakis, Martin Jaggi
|
Conformal Predictors (CP) are wrappers around ML models, providing error guarantees under weak assumptions on the data distribution. They are suitable for a wide range of problems, from classification and regression to anomaly detection. Unfortunately, their very high computational complexity limits their applicability to large datasets. In this work, we show that it is possible to speed up a CP classifier considerably, by studying it in conjunction with the underlying ML method, and by exploiting incremental and decremental learning. For methods such as k-NN, KDE, and kernel LS-SVM, our approach reduces the running time by one order of magnitude, whilst producing exact solutions. With similar ideas, we also achieve a linear speed up for the harder case of bootstrapping. Finally, we extend these techniques to improve upon an optimization of k-NN CP for regression. We evaluate our findings empirically, and discuss when methods are suitable for CP optimization.
|
https://proceedings.mlr.press/v139/cherubin21a.html
|
https://proceedings.mlr.press/v139/cherubin21a.html
|
https://proceedings.mlr.press/v139/cherubin21a.html
|
http://proceedings.mlr.press/v139/cherubin21a/cherubin21a.pdf
|
ICML 2021
|
|
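For context on the object being accelerated in the entry above, here is a naive full conformal predictor with a 1-nearest-neighbour nonconformity score. It recomputes every score from scratch for each candidate label, which is exactly the cost the paper's incremental and decremental updates avoid; the score definition is a common textbook choice, not necessarily the one used in the paper.

```python
import numpy as np

def knn_conformal_pvalues(X, y, x_new, labels):
    """Full conformal p-values for each candidate label of x_new (1-NN score)."""
    pvals = {}
    for label in labels:
        Xa = np.vstack([X, x_new])
        ya = np.append(y, label)
        n = len(ya)
        scores = np.empty(n)
        for i in range(n):                       # naive: recompute all scores
            d = np.linalg.norm(Xa - Xa[i], axis=1)
            d[i] = np.inf
            same = d[ya == ya[i]].min()          # nearest same-label distance
            other = d[ya != ya[i]].min()         # nearest other-label distance
            scores[i] = same / other             # large score = nonconforming
        pvals[label] = np.mean(scores >= scores[-1])
    return pvals

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(knn_conformal_pvalues(X, y, np.array([3.8, 4.1]), labels=[0, 1]))
```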
Problem Dependent View on Structured Thresholding Bandit Problems
|
James Cheshire, Pierre Menard, Alexandra Carpentier
|
We investigate the \textit{problem dependent regime} in the stochastic \emph{Thresholding Bandit problem} (TBP) under several \emph{shape constraints}. In the TBP, the objective of the learner is to output, after interacting with the environment, the set of arms whose means are above a given threshold. The vanilla, unstructured, case is already well studied in the literature. Taking $K$ as the number of arms, we consider the case where (i) the sequence of arm means $(\mu_k)_{k=1}^K$ is monotonically increasing (\textit{MTBP}) and (ii) the case where $(\mu_k)_{k=1}^K$ is concave (\textit{CTBP}). We consider both cases in the \emph{problem dependent} regime and study the probability of error, i.e. the probability of misclassifying at least one arm. In the fixed budget setting, we provide nearly matching upper and lower bounds for the probability of error in both the concave and monotone settings, as well as associated algorithms. Of interest is that for both the monotone and concave cases, optimal bounds on the probability of error are of the same order as those for the two-armed bandit problem.
|
https://proceedings.mlr.press/v139/cheshire21a.html
|
https://proceedings.mlr.press/v139/cheshire21a.html
|
https://proceedings.mlr.press/v139/cheshire21a.html
|
http://proceedings.mlr.press/v139/cheshire21a/cheshire21a.pdf
|
ICML 2021
|
|
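To make the problem setup in the entry above concrete, this sketch runs the vanilla fixed-budget thresholding bandit with the simplest uniform-allocation strategy on a monotone instance. It is only a baseline illustration; the paper's algorithms for the monotone and concave cases are not reproduced here.

```python
import numpy as np

def uniform_tbp(means, threshold, budget, seed=0):
    """Uniform allocation for the thresholding bandit problem.

    Pull each of the K Gaussian arms budget // K times, then output the set
    of arms whose empirical mean exceeds the threshold.
    """
    rng = np.random.default_rng(seed)
    K = len(means)
    pulls = budget // K
    emp = np.array([rng.normal(mu, 1.0, pulls).mean() for mu in means])
    return set(np.flatnonzero(emp > threshold))

means = np.array([0.1, 0.3, 0.45, 0.55, 0.7, 0.9])   # monotone instance
threshold = 0.5
est = uniform_tbp(means, threshold, budget=6000)
true = set(np.flatnonzero(means > threshold))
print(est, true, est == true)
```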
Online Optimization in Games via Control Theory: Connecting Regret, Passivity and Poincaré Recurrence
|
Yun Kuen Cheung, Georgios Piliouras
|
We present a novel control-theoretic understanding of online optimization and learning in games, via the notion of passivity. Passivity is a fundamental concept in control theory, which abstracts energy conservation and dissipation in physical systems. It has become a standard tool in analysis of general feedback systems, to which game dynamics belong. Our starting point is to show that all continuous-time Follow-the-Regularized-Leader (FTRL) dynamics, which include the well-known Replicator Dynamic, are lossless, i.e. passive with no energy dissipation. Interestingly, we prove that passivity implies bounded regret, connecting two fundamental primitives of control theory and online optimization. The observation of energy conservation in FTRL inspires us to present a family of lossless learning dynamics, each of which has an underlying energy function with a simple gradient structure. This family is closed under convex combination; as an immediate corollary, any convex combination of FTRL dynamics is lossless and thus has bounded regret. This allows us to extend the framework of Fox & Shamma [Games 2013] to prove not just global asymptotic stability results for game dynamics, but Poincaré recurrence results as well. Intuitively, when a lossless game (e.g. graphical constant-sum game) is coupled with a lossless learning dynamic, their interconnection is also lossless, which results in a pendulum-like energy-preserving recurrent behavior, generalizing Piliouras & Shamma [SODA 2014] and Mertikopoulos et al. [SODA 2018].
|
https://proceedings.mlr.press/v139/cheung21a.html
|
https://proceedings.mlr.press/v139/cheung21a.html
|
https://proceedings.mlr.press/v139/cheung21a.html
|
http://proceedings.mlr.press/v139/cheung21a/cheung21a.pdf
|
ICML 2021
|
|
Understanding and Mitigating Accuracy Disparity in Regression
|
Jianfeng Chi, Yuan Tian, Geoffrey J. Gordon, Han Zhao
|
With the widespread deployment of large-scale prediction systems in high-stakes domains, e.g., face recognition, criminal justice, etc., disparity on prediction accuracy between different demographic subgroups has called for fundamental understanding on the source of such disparity and algorithmic intervention to mitigate it. In this paper, we study the accuracy disparity problem in regression. To begin with, we first propose an error decomposition theorem, which decomposes the accuracy disparity into the distance between marginal label distributions and the distance between conditional representations, to help explain why such accuracy disparity appears in practice. Motivated by this error decomposition and the general idea of distribution alignment with statistical distances, we then propose an algorithm to reduce this disparity, and analyze its game-theoretic optima of the proposed objective functions. To corroborate our theoretical findings, we also conduct experiments on five benchmark datasets. The experimental results suggest that our proposed algorithms can effectively mitigate accuracy disparity while maintaining the predictive power of the regression models.
|
https://proceedings.mlr.press/v139/chi21a.html
|
https://proceedings.mlr.press/v139/chi21a.html
|
https://proceedings.mlr.press/v139/chi21a.html
|
http://proceedings.mlr.press/v139/chi21a/chi21a.pdf
|
ICML 2021
|
|
Private Alternating Least Squares: Practical Private Matrix Completion with Tighter Rates
|
Steve Chien, Prateek Jain, Walid Krichene, Steffen Rendle, Shuang Song, Abhradeep Thakurta, Li Zhang
|
We study the problem of differentially private (DP) matrix completion under user-level privacy. We design a joint differentially private variant of the popular Alternating-Least-Squares (ALS) method that achieves: i) (nearly) optimal sample complexity for matrix completion (in terms of number of items, users), and ii) the best known privacy/utility trade-off both theoretically, as well as on benchmark data sets. In particular, we provide the first global convergence analysis of ALS with noise introduced to ensure DP, and show that, in comparison to the best known alternative (the Private Frank-Wolfe algorithm by Jain et al. (2018)), our error bounds scale significantly better with respect to the number of items and users, which is critical in practical problems. Extensive validation on standard benchmarks demonstrate that the algorithm, in combination with carefully designed sampling procedures, is significantly more accurate than existing techniques, thus promising to be the first practical DP embedding model.
|
https://proceedings.mlr.press/v139/chien21a.html
|
https://proceedings.mlr.press/v139/chien21a.html
|
https://proceedings.mlr.press/v139/chien21a.html
|
http://proceedings.mlr.press/v139/chien21a/chien21a.pdf
|
ICML 2021
|
|
Light RUMs
|
Flavio Chierichetti, Ravi Kumar, Andrew Tomkins
|
A Random Utility Model (RUM) is a distribution on permutations over a universe of items. For each subset of the universe, a RUM induces a natural distribution of the winner in the subset: choose a permutation according to the RUM distribution and pick the maximum item in the subset according to the chosen permutation. RUMs are widely used in the theory of discrete choice. In this paper we consider the question of the (lossy) compressibility of RUMs on a universe of size $n$, i.e., the minimum number of bits required to approximate the winning probabilities of each slate. Our main result is that RUMs can be approximated using $\tilde{O}(n^2)$ bits, an exponential improvement over the standard representation; furthermore, we show that this bound is optimal. En route, we sharpen the classical existential result of McFadden and Train (2000) by showing that the minimum size of a mixture of multinomial logits required to approximate a general RUM is $\tilde{\Theta}(n)$.
|
https://proceedings.mlr.press/v139/chierichetti21a.html
|
https://proceedings.mlr.press/v139/chierichetti21a.html
|
https://proceedings.mlr.press/v139/chierichetti21a.html
|
http://proceedings.mlr.press/v139/chierichetti21a/chierichetti21a.pdf
|
ICML 2021
|
|
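The winner distribution defined in the entry above can be computed directly from an explicit RUM, i.e., a list of permutations with probabilities. The sketch below does this exactly for a tiny universe; representing the RUM by enumeration is only feasible at this toy scale and is not the compressed representation studied in the paper.

```python
from itertools import permutations

def winner_probabilities(rum, slate):
    """Exact winner distribution of a slate under a RUM.

    rum: list of (probability, permutation) pairs; each permutation ranks
         the whole universe from most to least preferred.
    slate: subset of items.
    """
    probs = {item: 0.0 for item in slate}
    for p, perm in rum:
        # Winner = the slate item ranked highest by this permutation.
        winner = min(slate, key=perm.index)
        probs[winner] += p
    return probs

# A tiny RUM over a universe of 4 items: uniform over all 24 rankings.
universe = (0, 1, 2, 3)
rum = [(1 / 24, list(perm)) for perm in permutations(universe)]
print(winner_probabilities(rum, slate=[0, 2, 3]))  # each ~ 1/3 by symmetry
```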
Parallelizing Legendre Memory Unit Training
|
Narsimha Reddy Chilkuri, Chris Eliasmith
|
Recently, a new recurrent neural network (RNN) named the Legendre Memory Unit (LMU) was proposed and shown to achieve state-of-the-art performance on several benchmark datasets. Here we leverage the linear time-invariant (LTI) memory component of the LMU to construct a simplified variant that can be parallelized during training (and yet executed as an RNN during inference), resulting in up to 200 times faster training. We note that our efficient parallelizing scheme is general and is applicable to any deep network whose recurrent components are linear dynamical systems. We demonstrate the improved accuracy of our new architecture compared to the original LMU and a variety of published LSTM and transformer networks across seven benchmarks. For instance, our LMU sets a new state-of-the-art result on psMNIST, and uses half the parameters while outperforming DistilBERT and LSTM models on IMDB sentiment analysis.
|
https://proceedings.mlr.press/v139/chilkuri21a.html
|
https://proceedings.mlr.press/v139/chilkuri21a.html
|
https://proceedings.mlr.press/v139/chilkuri21a.html
|
http://proceedings.mlr.press/v139/chilkuri21a/chilkuri21a.pdf
|
ICML 2021
|
|
Quantifying and Reducing Bias in Maximum Likelihood Estimation of Structured Anomalies
|
Uthsav Chitra, Kimberly Ding, Jasper C.H. Lee, Benjamin J Raphael
|
Anomaly estimation, or the problem of finding a subset of a dataset that differs from the rest of the dataset, is a classic problem in machine learning and data mining. In both theoretical work and in applications, the anomaly is assumed to have a specific structure defined by membership in an anomaly family. For example, in temporal data the anomaly family may be time intervals, while in network data the anomaly family may be connected subgraphs. The most prominent approach for anomaly estimation is to compute the Maximum Likelihood Estimator (MLE) of the anomaly; however, it was recently observed that for normally distributed data, the MLE is a biased estimator for some anomaly families. In this work, we demonstrate that in the normal means setting, the bias of the MLE depends on the size of the anomaly family. We prove that if the number of sets in the anomaly family that contain the anomaly is sub-exponential, then the MLE is asymptotically unbiased. We also provide empirical evidence that the converse is true: if the number of such sets is exponential, then the MLE is asymptotically biased. Our analysis unifies a number of earlier results on the bias of the MLE for specific anomaly families. Next, we derive a new anomaly estimator using a mixture model, and we prove that our anomaly estimator is asymptotically unbiased regardless of the size of the anomaly family. We illustrate the advantages of our estimator versus the MLE on disease outbreak data and highway traffic data.
|
https://proceedings.mlr.press/v139/chitra21a.html
|
https://proceedings.mlr.press/v139/chitra21a.html
|
https://proceedings.mlr.press/v139/chitra21a.html
|
http://proceedings.mlr.press/v139/chitra21a/chitra21a.pdf
|
ICML 2021
|
|
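To make the setting of the entry above concrete, this simulation sketch uses the normal-means model with the anomaly family of intervals and the standard normalized-sum MLE, then reports the average size of the estimated interval. The estimator and constants are my illustrative choices; the paper's mixture-model estimator is not reproduced.

```python
import numpy as np

def mle_interval(x):
    """MLE of an elevated-mean interval: maximize sum(x[i:j]) / sqrt(j - i)."""
    n = len(x)
    csum = np.concatenate([[0.0], np.cumsum(x)])
    best, best_ij = -np.inf, (0, 1)
    for i in range(n):
        for j in range(i + 1, n + 1):
            score = (csum[j] - csum[i]) / np.sqrt(j - i)
            if score > best:
                best, best_ij = score, (i, j)
    return best_ij

# Simulate: the true anomaly is the interval [40, 60) with elevated mean mu.
rng = np.random.default_rng(0)
n, mu, reps = 100, 1.0, 100
sizes = []
for _ in range(reps):
    x = rng.standard_normal(n)
    x[40:60] += mu
    i, j = mle_interval(x)
    sizes.append(j - i)
# Compare the average size of the MLE interval to the true size (20).
print("true size: 20, mean estimated size:", np.mean(sizes))
```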
Robust Learning-Augmented Caching: An Experimental Study
|
Jakub Chłędowski, Adam Polak, Bartosz Szabucki, Konrad Tomasz Żołna
|
Effective caching is crucial for performance of modern-day computing systems. A key optimization problem arising in caching – which item to evict to make room for a new item – cannot be optimally solved without knowing the future. There are many classical approximation algorithms for this problem, but more recently researchers started to successfully apply machine learning to decide what to evict by discovering implicit input patterns and predicting the future. While machine learning typically does not provide any worst-case guarantees, the new field of learning-augmented algorithms proposes solutions which leverage classical online caching algorithms to make the machine-learned predictors robust. We are the first to comprehensively evaluate these learning-augmented algorithms on real-world caching datasets and state-of-the-art machine-learned predictors. We show that a straightforward method – blindly following either a predictor or a classical robust algorithm, and switching whenever one becomes worse than the other – has only a low overhead over a well-performing predictor, while competing with classical methods when the coupled predictor fails, thus providing a cheap worst-case insurance.
|
https://proceedings.mlr.press/v139/chledowski21a.html
|
https://proceedings.mlr.press/v139/chledowski21a.html
|
https://proceedings.mlr.press/v139/chledowski21a.html
|
http://proceedings.mlr.press/v139/chledowski21a/chledowski21a.pdf
|
ICML 2021
|
|
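A sketch of the "follow whichever is better so far" combiner described in the entry above, under my own simplifications: the predictor and the classical algorithm are black-box eviction policies, the combiner simulates both in the background, and each request is served by evicting according to whichever policy currently has fewer simulated misses. The policies and the noisy predictor below are illustrative stand-ins.

```python
import random

def lru_victim(cache, t, pred_next_use):
    """Classical policy: evict the least recently used item."""
    return min(cache, key=cache.get)              # cache maps item -> last access time

def predictor_victim(cache, t, pred_next_use):
    """ML-style policy: evict the item predicted to be reused furthest in the future."""
    return max(cache, key=lambda item: pred_next_use(item, t))

def follow_the_better(requests, k, pred_next_use):
    """Serve requests with whichever policy (predictor vs. LRU) has fewer
    misses so far in its own shadow simulation."""
    policies = [predictor_victim, lru_victim]
    shadows, shadow_miss = [{}, {}], [0, 0]
    cache, misses = {}, 0
    for t, item in enumerate(requests):
        for i, policy in enumerate(policies):     # update both shadow simulations
            if item not in shadows[i]:
                shadow_miss[i] += 1
                if len(shadows[i]) >= k:
                    shadows[i].pop(policy(shadows[i], t, pred_next_use))
            shadows[i][item] = t
        best = 0 if shadow_miss[0] <= shadow_miss[1] else 1
        if item not in cache:                     # serve the real request
            misses += 1
            if len(cache) >= k:
                cache.pop(policies[best](cache, t, pred_next_use))
        cache[item] = t
    return misses, shadow_miss

random.seed(0)
requests = [random.choice("abcdefgh") for _ in range(2000)]
bad_predictor = lambda item, t: random.random()   # pure noise: an unreliable predictor
print(follow_the_better(requests, k=3, pred_next_use=bad_predictor))
```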
Unifying Vision-and-Language Tasks via Text Generation
|
Jaemin Cho, Jie Lei, Hao Tan, Mohit Bansal
|
Existing methods for vision-and-language learning typically require designing task-specific architectures and objectives for each task. For example, a multi-label answer classifier for visual question answering, a region scorer for referring expression comprehension, and a language decoder for image captioning, etc. To alleviate these hassles, in this work, we propose a unified framework that learns different tasks in a single architecture with the same language modeling objective, i.e., multimodal conditional text generation, where our models learn to generate labels in text based on the visual and textual inputs. On 7 popular vision-and-language benchmarks, including visual question answering, referring expression comprehension, visual commonsense reasoning, most of which have been previously modeled as discriminative tasks, our generative approach (with a single unified architecture) reaches comparable performance to recent task-specific state-of-the-art vision-and-language models. Moreover, our generative approach shows better generalization ability on questions that have rare answers. Also, we show that our framework allows multi-task learning in a single architecture with a single set of parameters, achieving similar performance to separately optimized single-task models. Our code is publicly available at: https://github.com/j-min/VL-T5
|
https://proceedings.mlr.press/v139/cho21a.html
|
https://proceedings.mlr.press/v139/cho21a.html
|
https://proceedings.mlr.press/v139/cho21a.html
|
http://proceedings.mlr.press/v139/cho21a/cho21a.pdf
|
ICML 2021
|
|
Learning from Nested Data with Ornstein Auto-Encoders
|
Youngwon Choi, Sungdong Lee, Joong-Ho Won
|
Many real-world datasets, e.g., the VGGFace2 dataset, which is a collection of multiple portraits of individuals, come with nested structures due to grouped observation. The Ornstein auto-encoder (OAE) is an emerging framework for representation learning from nested data, based on an optimal transport distance between random processes. An attractive feature of OAE is its ability to generate new variations nested within an observational unit, whether or not the unit is known to the model. A previously proposed algorithm for OAE, termed the random-intercept OAE (RIOAE), showed an impressive performance in learning nested representations, yet lacks theoretical justification. In this work, we show that RIOAE minimizes a loose upper bound of the employed optimal transport distance. After identifying several issues with RIOAE, we present the product-space OAE (PSOAE) that minimizes a tighter upper bound of the distance and achieves orthogonality in the representation space. PSOAE alleviates the instability of RIOAE and provides more flexible representation of nested data. We demonstrate the high performance of PSOAE in the three key tasks of generative models: exemplar generation, style transfer, and new concept generation.
|
https://proceedings.mlr.press/v139/choi21a.html
|
https://proceedings.mlr.press/v139/choi21a.html
|
https://proceedings.mlr.press/v139/choi21a.html
|
http://proceedings.mlr.press/v139/choi21a/choi21a.pdf
|
ICML 2021
|
|
Variational Empowerment as Representation Learning for Goal-Conditioned Reinforcement Learning
|
Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu
|
Learning to reach goal states and learning diverse skills through mutual information maximization have been proposed as principled frameworks for unsupervised reinforcement learning, allowing agents to acquire broadly applicable multi-task policies with minimal reward engineering. In this paper, we discuss how these two approaches, goal-conditioned RL (GCRL) and MI-based RL, can be generalized into a single family of methods, interpreting mutual information maximization and variational empowerment as representation learning methods that acquire functionally aware state representations for goal reaching. Starting from the simple observation that the standard GCRL objective is encapsulated by the optimization objective of variational empowerment, we can derive novel variants of GCRL and variational empowerment under a single, unified optimization objective, such as adaptive-variance GCRL and linear-mapping GCRL, and study the characteristics of the representation learning each variant provides. Furthermore, through the lens of GCRL, we show that adapting powerful techniques from GCRL, such as goal relabeling, into the variational MI context, as well as proper regularization of the variational posterior, provides substantial gains in algorithm performance, and we propose a novel evaluation metric named latent goal reaching (LGR) as an objective measure for evaluating empowerment algorithms akin to goal-based RL. Through principled mathematical derivations and careful experimental validations, our work lays a novel foundation from which representation learning can be evaluated and analyzed in goal-based RL.
|
https://proceedings.mlr.press/v139/choi21b.html
|
https://proceedings.mlr.press/v139/choi21b.html
|
https://proceedings.mlr.press/v139/choi21b.html
|
http://proceedings.mlr.press/v139/choi21b/choi21b.pdf
|
ICML 2021
|
|
Label-Only Membership Inference Attacks
|
Christopher A. Choquette-Choo, Florian Tramer, Nicholas Carlini, Nicolas Papernot
|
Membership inference is one of the simplest privacy threats faced by machine learning models that are trained on private sensitive data. In this attack, an adversary infers whether a particular point was used to train the model, or not, by observing the model’s predictions. Whereas current attack methods all require access to the model’s predicted confidence score, we introduce a label-only attack that instead evaluates the robustness of the model’s predicted (hard) labels under perturbations of the input, to infer membership. Our label-only attack is not only as effective as attacks requiring access to confidence scores; it also demonstrates that a class of defenses against membership inference, which we call “confidence masking” because they obfuscate the confidence scores to thwart attacks, are insufficient to prevent the leakage of private information. Our experiments show that training with differential privacy or strong L2 regularization are the only current defenses that meaningfully decrease leakage of private information, even for points that are outliers of the training distribution.
|
https://proceedings.mlr.press/v139/choquette-choo21a.html
|
https://proceedings.mlr.press/v139/choquette-choo21a.html
|
https://proceedings.mlr.press/v139/choquette-choo21a.html
|
http://proceedings.mlr.press/v139/choquette-choo21a/choquette-choo21a.pdf
|
ICML 2021
|
|
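A minimal sketch of a label-only membership score in the spirit of the abstract above: perturb the input and measure how often the model's hard label stays unchanged; higher robustness suggests the point sits farther from the decision boundary, as training points tend to. The noise scales and threshold are illustrative, and `predict_label` stands in for any classifier that exposes only hard labels.

```python
import numpy as np

def label_only_score(predict_label, x, noise_scales=(0.01, 0.05, 0.1, 0.2),
                     n_samples=50, rng=None):
    """Fraction of perturbed copies of x whose hard label matches the original."""
    rng = rng or np.random.default_rng(0)
    base = predict_label(x)
    agree, total = 0, 0
    for s in noise_scales:
        noise = rng.normal(0.0, s, size=(n_samples,) + x.shape)
        labels = np.array([predict_label(x + n) for n in noise])
        agree += int((labels == base).sum())
        total += n_samples
    return agree / total            # in [0, 1]; larger = more label-robust

def infer_membership(predict_label, x, threshold=0.9):
    # predict "member" when the point is unusually robust to perturbation
    return label_only_score(predict_label, x) >= threshold

# toy classifier with a linear decision boundary
w = np.array([1.0, -1.0])
predict_label = lambda x: int(x @ w > 0)
print(label_only_score(predict_label, np.array([2.0, -2.0])))   # far from boundary
print(label_only_score(predict_label, np.array([0.05, 0.0])))   # near boundary
```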
Modeling Hierarchical Structures with Continuous Recursive Neural Networks
|
Jishnu Ray Chowdhury, Cornelia Caragea
|
Recursive Neural Networks (RvNNs), which compose sequences according to their underlying hierarchical syntactic structure, have performed well in several natural language processing tasks compared to similar models without structural biases. However, traditional RvNNs are incapable of inducing the latent structure in a plain text sequence on their own. Several extensions have been proposed to overcome this limitation. Nevertheless, these extensions tend to rely on surrogate gradients or reinforcement learning at the cost of higher bias or variance. In this work, we propose Continuous Recursive Neural Network (CRvNN) as a backpropagation-friendly alternative to address the aforementioned limitations. This is done by incorporating a continuous relaxation to the induced structure. We demonstrate that CRvNN achieves strong performance in challenging synthetic tasks such as logical inference (Bowman et al., 2015b) and ListOps (Nangia & Bowman, 2018). We also show that CRvNN performs comparably or better than prior latent structure models on real-world tasks such as sentiment analysis and natural language inference.
|
https://proceedings.mlr.press/v139/chowdhury21a.html
|
https://proceedings.mlr.press/v139/chowdhury21a.html
|
https://proceedings.mlr.press/v139/chowdhury21a.html
|
http://proceedings.mlr.press/v139/chowdhury21a/chowdhury21a.pdf
|
ICML 2021
|
|
Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing
|
Filippos Christianos, Georgios Papoudakis, Muhammad A Rahman, Stefano V Albrecht
|
Sharing parameters in multi-agent deep reinforcement learning has played an essential role in allowing algorithms to scale to a large number of agents. Parameter sharing between agents significantly decreases the number of trainable parameters, shortening training times to tractable levels, and has been linked to more efficient learning. However, having all agents share the same parameters can also have a detrimental effect on learning. We demonstrate the impact of parameter sharing methods on training speed and converged returns, establishing that when applied indiscriminately, their effectiveness is highly dependent on the environment. We propose a novel method to automatically identify agents which may benefit from sharing parameters by partitioning them based on their abilities and goals. Our approach combines the increased sample efficiency of parameter sharing with the representational capacity of multiple independent networks to reduce training time and increase final returns.
|
https://proceedings.mlr.press/v139/christianos21a.html
|
https://proceedings.mlr.press/v139/christianos21a.html
|
https://proceedings.mlr.press/v139/christianos21a.html
|
http://proceedings.mlr.press/v139/christianos21a/christianos21a.pdf
|
ICML 2021
|
|
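A minimal sketch of selective parameter sharing as described above: agents are partitioned into groups and each group shares one set of policy parameters, a middle ground between full sharing (one network for all agents) and no sharing (one network per agent). The hand-written grouping below stands in for the partitioning the paper learns from agents' abilities and goals, and the gradients are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim = 4, 3

# e.g. agents 0-2 are "attackers", agents 3-4 are "defenders"
agent_to_group = {0: "attack", 1: "attack", 2: "attack", 3: "defend", 4: "defend"}
shared_params = {g: rng.normal(size=(obs_dim, act_dim)) * 0.1
                 for g in set(agent_to_group.values())}

def policy_logits(agent_id, obs):
    # every agent in a group evaluates the same shared parameters
    return obs @ shared_params[agent_to_group[agent_id]]

def apply_gradients(grads_per_agent, lr=1e-2):
    # gradients from agents in the same group are averaged into one update,
    # which is what makes shared parameters more sample-efficient
    for g, params in shared_params.items():
        members = [a for a, grp in agent_to_group.items() if grp == g]
        grad = np.mean([grads_per_agent[a] for a in members], axis=0)
        shared_params[g] = params - lr * grad

obs = rng.normal(size=obs_dim)
print(policy_logits(0, obs))                 # uses the "attack" parameters
fake_grads = {a: rng.normal(size=(obs_dim, act_dim)) for a in agent_to_group}
apply_gradients(fake_grads)
```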
Beyond Variance Reduction: Understanding the True Impact of Baselines on Policy Optimization
|
Wesley Chung, Valentin Thomas, Marlos C. Machado, Nicolas Le Roux
|
Bandit and reinforcement learning (RL) problems can often be framed as optimization problems where the goal is to maximize average performance while having access only to stochastic estimates of the true gradient. Traditionally, stochastic optimization theory predicts that learning dynamics are governed by the curvature of the loss function and the noise of the gradient estimates. In this paper we demonstrate that the standard view is too limited for bandit and RL problems. To allow our analysis to be interpreted in light of multi-step MDPs, we focus on techniques derived from stochastic optimization principles (e.g., natural policy gradient and EXP3) and we show that some standard assumptions from optimization theory are violated in these problems. We present theoretical results showing that, at least for bandit problems, curvature and noise are not sufficient to explain the learning dynamics and that seemingly innocuous choices like the baseline can determine whether an algorithm converges. These theoretical findings match our empirical evaluation, which we extend to multi-state MDPs.
|
https://proceedings.mlr.press/v139/chung21a.html
|
https://proceedings.mlr.press/v139/chung21a.html
|
https://proceedings.mlr.press/v139/chung21a.html
|
http://proceedings.mlr.press/v139/chung21a/chung21a.pdf
|
ICML 2021
|
|
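A small worked example of the object studied above: the REINFORCE estimator $(r - b)\nabla_\theta \log \pi_\theta(a)$ for a two-armed bandit with a softmax policy. Any fixed baseline $b$ leaves the estimator unbiased but changes the distribution of the individual updates, which is the lever the paper argues can determine whether learning converges. The rewards, baselines, and step size are arbitrary toy values.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_update(theta, action, reward, baseline):
    # gradient of log softmax(theta)[action] is onehot(action) - pi
    pi = softmax(theta)
    grad_logp = -pi
    grad_logp[action] += 1.0
    return (reward - baseline) * grad_logp

rewards = np.array([1.0, 0.0])               # arm 0 is better
rng = np.random.default_rng(0)

for baseline in (0.0, 0.5, 2.0):             # try several fixed baselines
    theta = np.zeros(2)
    for _ in range(2000):
        a = rng.choice(2, p=softmax(theta))
        theta += 0.1 * reinforce_update(theta, a, rewards[a], baseline)
    print(f"baseline={baseline:>4}: final policy {softmax(theta).round(3)}")
```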
First-Order Methods for Wasserstein Distributionally Robust MDP
|
Julien Grand Clement, Christian Kroer
|
Markov decision processes (MDPs) are known to be sensitive to parameter specification. Distributionally robust MDPs alleviate this issue by allowing for ambiguity sets which give a set of possible distributions over parameter sets. The goal is to find an optimal policy with respect to the worst-case parameter distribution. We propose a framework for solving distributionally robust MDPs via first-order methods, and instantiate it for several types of Wasserstein ambiguity sets. By developing efficient proximal updates, our algorithms achieve a convergence rate of $O\left(NA^{2.5}S^{3.5}\log(S)\log(\epsilon^{-1})\epsilon^{-1.5} \right)$ for the number of kernels $N$ in the support of the nominal distribution, states $S$, and actions $A$; this rate varies slightly based on the Wasserstein setup. Our dependence on $N$, $A$, and $S$ is significantly better than existing methods, which have a complexity of $O\left(N^{3.5}A^{3.5}S^{4.5}\log^{2}(\epsilon^{-1}) \right)$. Numerical experiments show that our algorithm is significantly more scalable than state-of-the-art approaches across several domains.
|
https://proceedings.mlr.press/v139/clement21a.html
|
https://proceedings.mlr.press/v139/clement21a.html
|
https://proceedings.mlr.press/v139/clement21a.html
|
http://proceedings.mlr.press/v139/clement21a/clement21a.pdf
|
ICML 2021
|
|
Phasic Policy Gradient
|
Karl W Cobbe, Jacob Hilton, Oleg Klimov, John Schulman
|
We introduce Phasic Policy Gradient (PPG), a reinforcement learning framework which modifies traditional on-policy actor-critic methods by separating policy and value function training into distinct phases. In prior methods, one must choose between using a shared network or separate networks to represent the policy and value function. Using separate networks avoids interference between objectives, while using a shared network allows useful features to be shared. PPG is able to achieve the best of both worlds by splitting optimization into two phases, one that advances training and one that distills features. PPG also enables the value function to be more aggressively optimized with a higher level of sample reuse. Compared to PPO, we find that PPG significantly improves sample efficiency on the challenging Procgen Benchmark.
|
https://proceedings.mlr.press/v139/cobbe21a.html
|
https://proceedings.mlr.press/v139/cobbe21a.html
|
https://proceedings.mlr.press/v139/cobbe21a.html
|
http://proceedings.mlr.press/v139/cobbe21a/cobbe21a.pdf
|
ICML 2021
|
|
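A minimal skeleton (not the authors' implementation) of the phase structure described above: several policy-phase iterations of ordinary on-policy updates, followed by an auxiliary phase that reuses the stored rollouts to optimize the value function more aggressively and to keep the policy's outputs close to their old values. The "networks" are linear placeholders and the losses are trivial stand-ins for the PPO-style objectives.

```python
import numpy as np

rng = np.random.default_rng(0)
policy = {"theta": np.zeros(4)}
value_fn = {"phi": np.zeros(4)}

def collect_rollouts(policy, n=32):
    obs = rng.normal(size=(n, 4))
    returns = obs @ np.array([1.0, -1.0, 0.5, 0.0]) + rng.normal(size=n)
    return {"obs": obs, "returns": returns, "old_logits": obs @ policy["theta"]}

def policy_phase_update(policy, value_fn, batch, lr=1e-2):
    # stand-in for the clipped policy-gradient + value-regression step of PPO
    adv = batch["returns"] - batch["obs"] @ value_fn["phi"]
    policy["theta"] += lr * batch["obs"].T @ adv / len(adv)
    err = batch["obs"] @ value_fn["phi"] - batch["returns"]
    value_fn["phi"] -= lr * batch["obs"].T @ err / len(err)

def auxiliary_phase_update(policy, value_fn, replay, value_epochs=6, lr=1e-2, beta=1.0):
    for _ in range(value_epochs):            # aggressive value optimization (sample reuse)
        for batch in replay:
            err = batch["obs"] @ value_fn["phi"] - batch["returns"]
            value_fn["phi"] -= lr * batch["obs"].T @ err / len(err)
    for batch in replay:                      # keep policy outputs near their old values
        drift = batch["obs"] @ policy["theta"] - batch["old_logits"]
        policy["theta"] -= beta * lr * batch["obs"].T @ drift / len(drift)

for phase in range(4):
    replay = []
    for _ in range(8):                        # --- policy phase (N_pi iterations) ---
        batch = collect_rollouts(policy)
        policy_phase_update(policy, value_fn, batch)
        replay.append(batch)
    auxiliary_phase_update(policy, value_fn, replay)   # --- auxiliary phase ---
    print(f"phase {phase}: value params {value_fn['phi'].round(2)}")
```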
Riemannian Convex Potential Maps
|
Samuel Cohen, Brandon Amos, Yaron Lipman
|
Modeling distributions on Riemannian manifolds is a crucial component in understanding non-Euclidean data that arises, e.g., in physics and geology. The budding approaches in this space are limited by representational and computational tradeoffs. We propose and study a class of flows that uses convex potentials from Riemannian optimal transport. These are universal and can model distributions on any compact Riemannian manifold without requiring domain knowledge of the manifold to be integrated into the architecture. We demonstrate that these flows can model standard distributions on spheres and tori, on both synthetic and geological data.
|
https://proceedings.mlr.press/v139/cohen21a.html
|
https://proceedings.mlr.press/v139/cohen21a.html
|
https://proceedings.mlr.press/v139/cohen21a.html
|
http://proceedings.mlr.press/v139/cohen21a/cohen21a.pdf
|
ICML 2021
|
|
Scaling Properties of Deep Residual Networks
|
Alain-Sam Cohen, Rama Cont, Alain Rossier, Renyuan Xu
|
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation or neither of these. These findings cast doubts on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
|
https://proceedings.mlr.press/v139/cohen21b.html
|
https://proceedings.mlr.press/v139/cohen21b.html
|
https://proceedings.mlr.press/v139/cohen21b.html
|
http://proceedings.mlr.press/v139/cohen21b/cohen21b.pdf
|
ICML 2021
|
|
Differentially-Private Clustering of Easy Instances
|
Edith Cohen, Haim Kaplan, Yishay Mansour, Uri Stemmer, Eliad Tsfadia
|
Clustering is a fundamental problem in data analysis. In differentially private clustering, the goal is to identify k cluster centers without disclosing information on individual data points. Despite significant research progress, the problem had so far resisted practical solutions. In this work we aim at providing simple, implementable differentially private clustering algorithms for when the data is "easy," e.g., when there exists a significant separation between the clusters. For the easy instances we consider, we have a simple implementation based on utilizing non-private clustering algorithms, and combining them privately. We are able to get improved sample complexity bounds in some cases of Gaussian mixtures and k-means. We complement our theoretical algorithms with experiments on simulated data.
|
https://proceedings.mlr.press/v139/cohen21c.html
|
https://proceedings.mlr.press/v139/cohen21c.html
|
https://proceedings.mlr.press/v139/cohen21c.html
|
http://proceedings.mlr.press/v139/cohen21c/cohen21c.pdf
|
ICML 2021
|
|
Improving Ultrametrics Embeddings Through Coresets
|
Vincent Cohen-Addad, Rémi De Joannis De Verclos, Guillaume Lagarde
|
To tackle the curse of dimensionality in data analysis and unsupervised learning, it is critical to be able to efficiently compute “simple” faithful representations of the data that help extract information and improve understanding and visualization of the structure. When the dataset consists of $d$-dimensional vectors, simple representations of the data may consist of trees or ultrametrics, and the goal is to best preserve the distances (i.e., dissimilarity values) between data elements. To circumvent the quadratic running times of the most popular methods for fitting ultrametrics, such as average, single, or complete linkage, \citet{CKL20} recently presented a new algorithm that for any $c \ge 1$, outputs in time $n^{1+O(1/c^2)}$ an ultrametric $\Delta$ such that for any two points $u, v$, $\Delta(u, v)$ is within a multiplicative factor of $5c$ of the distance between $u$ and $v$ in the “best” ultrametric representation. We improve this result, showing how to strengthen the guarantee from $5c$ to $\sqrt{2}c + \varepsilon$ while achieving the same asymptotic running time. To complement the improved theoretical bound, we additionally show that the performance of our algorithm is significantly better on various real-world datasets.
|
https://proceedings.mlr.press/v139/cohen-addad21a.html
|
https://proceedings.mlr.press/v139/cohen-addad21a.html
|
https://proceedings.mlr.press/v139/cohen-addad21a.html
|
http://proceedings.mlr.press/v139/cohen-addad21a/cohen-addad21a.pdf
|
ICML 2021
|
|
Correlation Clustering in Constant Many Parallel Rounds
|
Vincent Cohen-Addad, Silvio Lattanzi, Slobodan Mitrović, Ashkan Norouzi-Fard, Nikos Parotsidis, Jakub Tarnawski
|
Correlation clustering is a central topic in unsupervised learning, with many applications in ML and data mining. In correlation clustering, one receives as input a signed graph and the goal is to partition it to minimize the number of disagreements. In this work we propose a massively parallel computation (MPC) algorithm for this problem that is considerably faster than prior work. In particular, our algorithm uses machines with memory sublinear in the number of nodes in the graph and returns a constant approximation while running only for a constant number of rounds. To the best of our knowledge, our algorithm is the first that can provably approximate a clustering problem using only a constant number of MPC rounds in the sublinear memory regime. We complement our analysis with an experimental scalability evaluation of our techniques.
|
https://proceedings.mlr.press/v139/cohen-addad21b.html
|
https://proceedings.mlr.press/v139/cohen-addad21b.html
|
https://proceedings.mlr.press/v139/cohen-addad21b.html
|
http://proceedings.mlr.press/v139/cohen-addad21b/cohen-addad21b.pdf
|
ICML 2021
|
|
Concentric mixtures of Mallows models for top-$k$ rankings: sampling and identifiability
|
Fabien Collas, Ekhine Irurozki
|
In this paper, we study mixtures of two Mallows models for top-$k$ rankings with equal location parameters but with different scale parameters (a mixture of concentric Mallows models). These models arise when we have a heterogeneous population of voters formed by two populations, one of which is a subpopulation of expert voters. We show the identifiability of both components and the learnability of their respective parameters. These results are based upon, first, bounding the sample complexity for the Borda algorithm with top-$k$ rankings. Second, we characterize the distances between rankings, showing that an off-the-shelf clustering algorithm separates the rankings by components with high probability, provided the scales are well-separated. As a by-product, we include an efficient sampling algorithm for Mallows top-$k$ rankings. Finally, since the rank aggregation will suffer from a large amount of noise introduced by the non-expert voters, we adapt the Borda algorithm to be able to recover the ground truth consensus ranking which is especially consistent with the expert rankings.
|
https://proceedings.mlr.press/v139/collas21a.html
|
https://proceedings.mlr.press/v139/collas21a.html
|
https://proceedings.mlr.press/v139/collas21a.html
|
http://proceedings.mlr.press/v139/collas21a/collas21a.pdf
|
ICML 2021
|
|
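A minimal sketch of the Borda primitive for top-$k$ rankings that the sample-complexity analysis above builds on: each item receives $k$ minus its position in points from every ranking that lists it, and the consensus orders items by total score. The scoring convention for unlisted items (zero points) is one simple choice, not necessarily the paper's.

```python
from collections import defaultdict

def borda_topk(rankings, k):
    """Aggregate top-k rankings into a single consensus ordering."""
    scores = defaultdict(float)
    for ranking in rankings:            # each ranking: up to k items, best first
        for pos, item in enumerate(ranking[:k]):
            scores[item] += k - pos     # best item gets k points, next k-1, ...
    return sorted(scores, key=scores.get, reverse=True)

votes = [["a", "b", "c"], ["a", "c", "d"], ["b", "a", "c"], ["d", "c", "a"]]
print(borda_topk(votes, k=3))           # consensus ranking, best item first
```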
Exploiting Shared Representations for Personalized Federated Learning
|
Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
|
Deep neural networks have shown the ability to extract universal feature representations from data such as images and text that have been useful for a variety of learning tasks. However, the fruits of representation learning have yet to be fully-realized in federated settings. Although data in federated settings is often non-i.i.d. across clients, the success of centralized deep learning suggests that data often shares a global {\em feature representation}, while the statistical heterogeneity across clients or tasks is concentrated in the {\em labels}. Based on this intuition, we propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client. Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation. We prove that this method obtains linear convergence to the ground-truth representation with near-optimal sample complexity in a linear setting, demonstrating that it can efficiently reduce the problem dimension for each client. Further, we provide extensive experimental results demonstrating the improvement of our method over alternative personalized federated learning approaches in heterogeneous settings.
|
https://proceedings.mlr.press/v139/collins21a.html
|
https://proceedings.mlr.press/v139/collins21a.html
|
https://proceedings.mlr.press/v139/collins21a.html
|
http://proceedings.mlr.press/v139/collins21a/collins21a.pdf
|
ICML 2021
|
|
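A minimal NumPy sketch of the alternating scheme in the linear setting discussed above: a shared representation $B$ is learned across clients while each client keeps its own low-dimensional head $w_i$; per round, clients run several cheap head updates and one representation update, and the server averages and re-orthonormalizes the representations. The data, step sizes, and iteration counts are synthetic and arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_clients, n_per_client = 20, 3, 10, 50

# synthetic clients sharing a ground-truth representation B_true, with own heads
B_true = np.linalg.qr(rng.normal(size=(d, k)))[0]
clients = []
for _ in range(n_clients):
    w = rng.normal(size=k)
    X = rng.normal(size=(n_per_client, d))
    y = X @ B_true @ w + 0.01 * rng.normal(size=n_per_client)
    clients.append((X, y))

B = np.linalg.qr(rng.normal(size=(d, k)))[0]      # shared representation (server)
heads = [np.zeros(k) for _ in range(n_clients)]   # local heads (clients)

for rnd in range(100):
    B_updates = []
    for i, (X, y) in enumerate(clients):
        Z = X @ B
        for _ in range(10):                       # many cheap local head updates
            heads[i] -= 0.01 * Z.T @ (Z @ heads[i] - y) / len(y)
        resid = X @ B @ heads[i] - y              # one local representation update
        grad_B = X.T @ np.outer(resid, heads[i]) / len(y)
        B_updates.append(B - 0.01 * grad_B)
    B = np.linalg.qr(np.mean(B_updates, axis=0))[0]   # server averages, re-orthonormalizes

err = np.mean([np.mean((X @ B @ w - y) ** 2) for (X, y), w in zip(clients, heads)])
print(f"final average training MSE: {err:.4f}")
```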
Differentiable Particle Filtering via Entropy-Regularized Optimal Transport
|
Adrien Corenflos, James Thornton, George Deligiannidis, Arnaud Doucet
|
Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models. Resampling is a key ingredient of PF necessary to obtain low variance likelihood and states estimates. However, traditional resampling methods result in PF-based loss functions being non-differentiable with respect to model and PF parameters. In a variational inference context, resampling also yields high variance gradient estimates of the PF-based evidence lower bound. By leveraging optimal transport ideas, we introduce a principled differentiable particle filter and provide convergence results. We demonstrate this novel method on a variety of applications.
|
https://proceedings.mlr.press/v139/corenflos21a.html
|
https://proceedings.mlr.press/v139/corenflos21a.html
|
https://proceedings.mlr.press/v139/corenflos21a.html
|
http://proceedings.mlr.press/v139/corenflos21a/corenflos21a.pdf
|
ICML 2021
|
|
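A minimal NumPy sketch of the resampling idea described above: replace non-differentiable multinomial resampling with an entropy-regularized optimal transport plan between the weighted particle cloud and a uniformly weighted one, so each resampled particle is a barycentric combination of the originals. The regularization strength and iteration count are arbitrary, and the full method embeds this step inside a particle filter.

```python
import numpy as np

def sinkhorn_plan(x, weights, eps=0.1, n_iter=200):
    """Entropy-regularized OT plan from weighted particles to uniform weights."""
    n = len(x)
    b = np.full(n, 1.0 / n)                       # uniform target weights
    C = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost
    K = np.exp(-C / eps)
    u, v = np.ones(n), np.ones(n)
    for _ in range(n_iter):                       # Sinkhorn iterations
        u = weights / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]            # transport plan P

def ot_resample(x, weights, eps=0.1):
    P = sinkhorn_plan(x, weights, eps)
    return len(x) * P.T @ x                       # barycentric combinations of particles

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 1))
w = np.array([0.5, 0.3, 0.1, 0.05, 0.03, 0.02])
print("original particles: ", x.ravel().round(2))
print("resampled particles:", ot_resample(x, w).ravel().round(2))
```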
Fairness and Bias in Online Selection
|
Jose Correa, Andres Cristi, Paul Duetting, Ashkan Norouzi-Fard
|
There is growing awareness and concern about fairness in machine learning and algorithm design. This is particularly true in online selection problems where decisions are often biased, for example, when assessing credit risks or hiring staff. We address the issues of fairness and bias in online selection by introducing multi-color versions of the classic secretary and prophet problem. Interestingly, existing algorithms for these problems are either very unfair or very inefficient, so we develop optimal fair algorithms for these new problems and provide tight bounds on their competitiveness. We validate our theoretical findings on real-world data.
|
https://proceedings.mlr.press/v139/correa21a.html
|
https://proceedings.mlr.press/v139/correa21a.html
|
https://proceedings.mlr.press/v139/correa21a.html
|
http://proceedings.mlr.press/v139/correa21a/correa21a.pdf
|
ICML 2021
|
|
Relative Deviation Margin Bounds
|
Corinna Cortes, Mehryar Mohri, Ananda Theertha Suresh
|
We present a series of new and more favorable margin-based learning guarantees that depend on the empirical margin loss of a predictor. We give two types of learning bounds, in terms of either the Rademacher complexity or the empirical $\ell_\infty$-covering number of the hypothesis set used, both distribution-dependent and valid for general families. Furthermore, using our relative deviation margin bounds, we derive distribution-dependent generalization bounds for unbounded loss functions under the assumption of a finite moment. We also briefly highlight several applications of these bounds and discuss their connection with existing results.
|
https://proceedings.mlr.press/v139/cortes21a.html
|
https://proceedings.mlr.press/v139/cortes21a.html
|
https://proceedings.mlr.press/v139/cortes21a.html
|
http://proceedings.mlr.press/v139/cortes21a/cortes21a.pdf
|
ICML 2021
|
|
A Discriminative Technique for Multiple-Source Adaptation
|
Corinna Cortes, Mehryar Mohri, Ananda Theertha Suresh, Ningshan Zhang
|
We present a new discriminative technique for the multiple-source adaptation (MSA) problem. Unlike previous work, which relies on density estimation for each source domain, our solution only requires conditional probabilities that can be straightforwardly accurately estimated from unlabeled data from the source domains. We give a detailed analysis of our new technique, including general guarantees based on Rényi divergences, and learning bounds when conditional Maxent is used for estimating conditional probabilities for a point to belong to a source domain. We show that these guarantees compare favorably to those that can be derived for the generative solution, using kernel density estimation. Our experiments with real-world applications further demonstrate that our new discriminative MSA algorithm outperforms the previous generative solution as well as other domain adaptation baselines.
|
https://proceedings.mlr.press/v139/cortes21b.html
|
https://proceedings.mlr.press/v139/cortes21b.html
|
https://proceedings.mlr.press/v139/cortes21b.html
|
http://proceedings.mlr.press/v139/cortes21b/cortes21b.pdf
|
ICML 2021
|
|
Characterizing Fairness Over the Set of Good Models Under Selective Labels
|
Amanda Coston, Ashesh Rambachan, Alexandra Chouldechova
|
Algorithmic risk assessments are used to inform decisions in a wide variety of high-stakes settings. Often multiple predictive models deliver similar overall performance but differ markedly in their predictions for individual cases, an empirical phenomenon known as the “Rashomon Effect.” These models may have different properties over various groups, and therefore have different predictive fairness properties. We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance, or “the set of good models.” Our framework addresses the empirically relevant challenge of selectively labelled data in the setting where the selection decision and outcome are unconfounded given the observed data features. Our framework can be used to 1) audit for predictive bias; or 2) replace an existing model with one that has better fairness properties. We illustrate these use cases on a recidivism prediction task and a real-world credit-scoring task.
|
https://proceedings.mlr.press/v139/coston21a.html
|
https://proceedings.mlr.press/v139/coston21a.html
|
https://proceedings.mlr.press/v139/coston21a.html
|
http://proceedings.mlr.press/v139/coston21a/coston21a.pdf
|
ICML 2021
|
|
Two-way kernel matrix puncturing: towards resource-efficient PCA and spectral clustering
|
Romain Couillet, Florent Chatelain, Nicolas Le Bihan
|
The article introduces an elementary cost and storage reduction method for spectral clustering and principal component analysis. The method consists in randomly “puncturing” both the data matrix $X\in\mathbb{C}^{p\times n}$ (or $\mathbb{R}^{p\times n}$) and its corresponding kernel (Gram) matrix $K$ through Bernoulli masks: $S\in\{0,1\}^{p\times n}$ for $X$ and $B\in\{0,1\}^{n\times n}$ for $K$. The resulting “two-way punctured” kernel is thus given by $K=\frac{1}{p}[(X\odot S)^{\mathsf{H}} (X\odot S)]\odot B$. We demonstrate that, for $X$ composed of independent columns drawn from a Gaussian mixture model, as $n,p\to\infty$ with $p/n\to c_0\in(0,\infty)$, the spectral behavior of $K$ – its limiting eigenvalue distribution, as well as its isolated eigenvalues and eigenvectors – is fully tractable and exhibits a series of counter-intuitive phenomena. We notably prove, and empirically confirm on various image databases, that it is possible to drastically puncture the data, thereby providing possibly huge computational and storage gains, for a virtually constant (clustering or PCA) performance. This preliminary study opens as such the path towards rethinking, from a large dimensional standpoint, computational and storage costs in elementary machine learning models.
|
https://proceedings.mlr.press/v139/couillet21a.html
|
https://proceedings.mlr.press/v139/couillet21a.html
|
https://proceedings.mlr.press/v139/couillet21a.html
|
http://proceedings.mlr.press/v139/couillet21a/couillet21a.pdf
|
ICML 2021
|
|
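A minimal NumPy sketch of the two-way puncturing formula quoted above, $K=\frac{1}{p}[(X\odot S)^{\mathsf{H}}(X\odot S)]\odot B$, on a toy two-class Gaussian mixture: compare how well the sign pattern of the dominant eigenvector recovers the classes with and without puncturing. The mixture parameters and keep-probabilities below are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 300
mu = np.ones(p) * 2.0 / np.sqrt(p)                        # class means at +/- mu
labels = rng.integers(0, 2, size=n)
signs = np.where(labels == 0, 1.0, -1.0)
X = (signs[:, None] * mu + rng.normal(size=(n, p))).T     # p x n, columns = samples

def punctured_kernel(X, eps_s=0.5, eps_b=0.5):
    p, n = X.shape
    S = rng.random((p, n)) < eps_s                         # data mask: keep entries w.p. eps_s
    B = rng.random((n, n)) < eps_b                         # kernel mask, symmetrized below
    B = np.triu(B, 1)
    B = B | B.T | np.eye(n, dtype=bool)
    Xs = X * S
    return (Xs.T @ Xs / p) * B

def spectral_accuracy(K, labels):
    # the sign pattern of the dominant eigenvector should roughly recover the classes
    _, vecs = np.linalg.eigh(K)
    pred = (vecs[:, -1] > np.median(vecs[:, -1])).astype(int)
    return max(np.mean(pred == labels), np.mean(pred != labels))

K_full = punctured_kernel(X, eps_s=1.0, eps_b=1.0)         # no puncturing
K_punc = punctured_kernel(X, eps_s=0.5, eps_b=0.5)         # aggressive puncturing
print("full kernel accuracy:     ", spectral_accuracy(K_full, labels))
print("punctured kernel accuracy:", spectral_accuracy(K_punc, labels))
```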
Explaining Time Series Predictions with Dynamic Masks
|
Jonathan Crabbé, Mihaela Van Der Schaar
|
How can we explain the predictions of a machine learning model? When the data is structured as a multivariate time series, this question induces additional difficulties such as the necessity for the explanation to embody the time dependency and the large number of inputs. To address these challenges, we propose dynamic masks (Dynamask). This method produces instance-wise importance scores for each feature at each time step by fitting a perturbation mask to the input sequence. In order to incorporate the time dependency of the data, Dynamask studies the effects of dynamic perturbation operators. In order to tackle the large number of inputs, we propose a scheme to make the feature selection parsimonious (to select no more features than necessary) and legible (a notion that we detail by making a parallel with information theory). With synthetic and real-world data, we demonstrate that the dynamic underpinning of Dynamask, together with its parsimony, offer a neat improvement in the identification of feature importance over time. The modularity of Dynamask makes it ideal as a plug-in to increase the transparency of a wide range of machine learning models in areas such as medicine and finance, where time series are abundant.
|
https://proceedings.mlr.press/v139/crabbe21a.html
|
https://proceedings.mlr.press/v139/crabbe21a.html
|
https://proceedings.mlr.press/v139/crabbe21a.html
|
http://proceedings.mlr.press/v139/crabbe21a/crabbe21a.pdf
|
ICML 2021
|
|
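A minimal sketch of the mask-fitting idea described above: learn a mask $m\in[0,1]^{T\times D}$ so that replacing down-weighted entries with a baseline barely changes the model's per-time-step output, while an $\ell_1$ penalty keeps the selection parsimonious. The model here is a toy linear scorer so the mask gradient is available in closed form; Dynamask itself uses dynamic perturbation operators and arbitrary differentiable models.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 30, 4
x = rng.normal(size=(T, D))
w = np.zeros((T, D)); w[10:15, 2] = 1.0          # only feature 2, steps 10-14 matter
f = lambda z: (w * z).sum(axis=1)                # per-time-step model output (toy, linear)

def fit_mask(x, lam=0.02, lr=0.05, steps=2000):
    m = np.full(x.shape, 0.5)
    baseline = np.zeros_like(x)                  # static "uninformative" perturbation
    target = f(x)
    for _ in range(steps):
        x_pert = m * x + (1 - m) * baseline      # keep an entry where its mask is high
        err = f(x_pert) - target                 # per-time-step deviation, shape (T,)
        # gradient of sum(err^2) + lam * sum(m), with m constrained to [0, 1]
        grad = 2 * err[:, None] * w * (x - baseline) + lam
        m = np.clip(m - lr * grad, 0.0, 1.0)
    return m

mask = fit_mask(x)
print("mean mask on relevant entries:  ", round(mask[10:15, 2].mean(), 3))
print("mean mask on irrelevant entries:", round((mask.sum() - mask[10:15, 2].sum())
                                                / (T * D - 5), 3))
```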
Generalised Lipschitz Regularisation Equals Distributional Robustness
|
Zac Cranko, Zhan Shi, Xinhua Zhang, Richard Nock, Simon Kornblith
|
The problem of adversarial examples has highlighted the need for a theory of regularisation that is general enough to apply to exotic function classes, such as universal approximators. In response, we have been able to significantly sharpen existing results regarding the relationship between distributional robustness and regularisation, when defined with a transportation cost uncertainty set. The theory allows us to characterise the conditions under which the distributional robustness equals a Lipschitz-regularised model, and to tightly quantify, for the first time, the slackness under very mild assumptions. As a theoretical application we show a new result explicating the connection between adversarial learning and distributional robustness. We then give new results for how to achieve Lipschitz regularisation of kernel classifiers, which are demonstrated experimentally.
|
https://proceedings.mlr.press/v139/cranko21a.html
|
https://proceedings.mlr.press/v139/cranko21a.html
|
https://proceedings.mlr.press/v139/cranko21a.html
|
http://proceedings.mlr.press/v139/cranko21a/cranko21a.pdf
|
ICML 2021
|