Search ICLR 2019

Searching papers submitted to ICLR 2019 can be painful. You might want to know which paper uses technique X, dataset D, or cites author ME. Unfortunately, search is limited to titles, abstracts, and keywords, missing the actual contents of the papers. This Frankensteinian search has returned from 2018 to help scour the papers of ICLR by ripping out their souls using pdftotext.

Good luck! Warranty's not included :)


Need random search inspiration..? Grab something from the list of all tags! ^_^
How about: imitation, spectral graph wavelet transform, training dynamics, multiple instance discriminator, policy robustness ..?


Sanity Disclaimer: As you stare at the continuous stream of ICLR and arXiv papers, don't lose confidence or feel overwhelmed. This isn't a competition, it's a search for knowledge. You and your work are valuable and help carve out the path for progress in our field :)

"multi-domain learning" has 4 results


Multi-task Learning with Gradient Communication    

tl;dr We introduce an inductive bias for multi-task learning, allowing different tasks to communicate by gradient passing.

In this paper, we describe a general framework for systematically analyzing current neural models for multi-task learning. We find that existing models are expected to disentangle features into different spaces, while the features learned in practice remain entangled in the shared space, leaving potential hazards for later training or unseen tasks. We propose to alleviate this problem by incorporating a new inductive bias into the multi-task learning process: different tasks can communicate with each other not only by passing hidden variables but also by explicitly passing gradients. We evaluate the proposed methods on three groups of tasks and two types of settings (in-task and out-of-task). Quantitative and qualitative results show their effectiveness.
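A rough sketch of the gradient-passing idea as described in the abstract. The two-task wiring, the module names, and the way the exchanged gradient is injected are all assumptions for illustration, not the paper's actual architecture:

```python
# Hypothetical sketch of "gradient communication" between two tasks that
# share an encoder. Everything here (shapes, heads, the 0.1 mixing factor)
# is an assumption, not the paper's model.
import torch
import torch.nn as nn

shared = nn.Linear(32, 64)   # shared encoder (assumed)
head_a = nn.Linear(64, 3)    # task-A head (assumed)
head_b = nn.Linear(64, 5)    # task-B head (assumed)
loss_fn = nn.CrossEntropyLoss()

x_a, y_a = torch.randn(8, 32), torch.randint(0, 3, (8,))
x_b, y_b = torch.randn(8, 32), torch.randint(0, 5, (8,))

# Each task computes the gradient of its own loss w.r.t. the shared representation.
h_a, h_b = shared(x_a), shared(x_b)
g_a = torch.autograd.grad(loss_fn(head_a(h_a), y_a), h_a, retain_graph=True)[0]
g_b = torch.autograd.grad(loss_fn(head_b(h_b), y_b), h_b, retain_graph=True)[0]

# Tasks communicate not only through shared hidden states but also by seeing
# each other's gradient signal before making their prediction.
loss_a = loss_fn(head_a(h_a - 0.1 * g_b.mean(0)), y_a)
loss_b = loss_fn(head_b(h_b - 0.1 * g_a.mean(0)), y_b)
(loss_a + loss_b).backward()   # trains the shared encoder and both heads jointly
```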


Variational Domain Adaptation    

tl;dr This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference

This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference. Unlike existing methods for domain transfer through deep generative models, such as CycleGAN (Zhu et al., 2017a) and StarGAN (Choi et al., 2017), variational domain adaptation has three advantages. Firstly, samples from the target are not required. Instead, the framework requires one known source as a prior p(x) and binary discriminators, p(D_i|x), discriminating the target domain D_i from others. Consequently, the framework regards a target as a posterior that can be explicitly formulated through Bayesian inference, p(x|D_i) ∝ p(D_i|x)p(x), as exhibited by a further proposed model, the multi-domain variational autoencoder (MD-VAE). Secondly, the framework is scalable to large-scale domains: MD-VAE embeds all the domains, together with the samples drawn from the prior, as normal distributions in the same latent space. This enables the method to be extended to uncountably infinite domains, such as continuous domains, as well as to interpolation. Thirdly, with MD-VAE there is no longer any need for hyperparameter search. While several adversarial-learning-based domain transfer methods need sophisticated automatic or manual hyperparameter search, MD-VAE converges quickly with little tuning because it has only one trainable matrix in addition to the VAE. In the experiments, we demonstrate the benefit on a multi-domain image generation task with CelebA and facial image data obtained through evaluation by 60 users: the model generates an ideal image that multiple users evaluate as good. Additionally, our experimental results show that our model outperforms several state-of-the-art models.
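A minimal illustration of the Bayes rule p(x|D_i) ∝ p(D_i|x)p(x) quoted in the abstract, via self-normalized importance sampling: draw from a prior, weight by a binary discriminator, resample. The prior sampler and discriminator below are stand-ins I made up for the example; this is not the paper's MD-VAE.

```python
# Stand-in illustration of p(x|D_i) ∝ p(D_i|x) p(x) by reweighting prior samples.
import numpy as np

rng = np.random.default_rng(0)

def sample_prior(n):                      # stand-in for the known source p(x)
    return rng.normal(0.0, 1.0, size=(n, 2))

def discriminator(x):                     # stand-in for p(D_i | x): prefers x near (2, 2)
    return 1.0 / (1.0 + np.exp(-(x @ np.array([1.0, 1.0]) - 4.0)))

x = sample_prior(10_000)
w = discriminator(x)
w = w / w.sum()                           # self-normalized weights ∝ p(D_i | x)
idx = rng.choice(len(x), size=1000, p=w)  # resample: approximate draws from p(x | D_i)
target_samples = x[idx]
print(target_samples.mean(axis=0))        # mean shifted toward the target domain
```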


Mode Normalization    

tl;dr We present a novel normalization method for deep neural networks that is robust to multi-modalities in intermediate feature distributions.

Normalization methods are a central building block in the deep learning toolbox. They accelerate and stabilize training, while decreasing the dependence on manually tuned learning rate schedules. When learning from multi-modal distributions, the effectiveness of batch normalization (BN), arguably the most prominent normalization method, is reduced. As a remedy, we propose a more flexible approach: by extending the normalization to more than a single mean and variance, we detect modes of data on-the-fly, jointly normalizing samples that share common features. We demonstrate that our method outperforms BN and other widely used normalization techniques in several experiments, including single and multi-task datasets.
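A rough sketch of what "more than a single mean and variance" could look like in code: a gating function assigns each sample soft responsibilities over K modes, per-mode statistics are computed from those responsibilities, and each sample's normalizations are blended. The gating form and all details below are assumptions for illustration, not the exact Mode Normalization layer.

```python
# Hypothetical multi-mode normalization over a (batch, features) input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModeNorm(nn.Module):
    def __init__(self, num_features, num_modes=2, eps=1e-5):
        super().__init__()
        self.gate = nn.Linear(num_features, num_modes)  # soft mode assignment (assumed form)
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))
        self.eps = eps

    def forward(self, x):                          # x: (batch, features)
        g = F.softmax(self.gate(x), dim=1)         # (batch, K) mode responsibilities
        n_k = g.sum(dim=0) + self.eps              # soft sample count per mode
        mean_k = (g.t() @ x) / n_k.unsqueeze(1)                    # (K, features)
        var_k = (g.t() @ x.pow(2)) / n_k.unsqueeze(1) - mean_k.pow(2)
        # normalize each sample under every mode, blend with its responsibilities
        x_hat = torch.zeros_like(x)
        for k in range(mean_k.size(0)):
            x_hat = x_hat + g[:, k:k+1] * (x - mean_k[k]) / torch.sqrt(var_k[k] + self.eps)
        return self.gamma * x_hat + self.beta

x = torch.randn(16, 8)
print(MultiModeNorm(8, num_modes=2)(x).shape)      # torch.Size([16, 8])
```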


Multi-Domain Adversarial Learning    

tl;dr Adversarial domain adaptation and multi-domain learning: a new loss to handle multi- and single-domain classes in the semi-supervised setting.

Multi-domain learning (MDL) aims at obtaining a model with minimal average risk across multiple domains. Our empirical motivation is automated microscopy data, where cultured cells are imaged after being exposed to known and unknown chemical perturbations, and each dataset displays significant experimental bias. This paper presents a multi-domain adversarial learning approach, MuLANN, to leverage multiple datasets with overlapping but distinct class sets, in a semi-supervised setting. Our contributions include: i) a bound on the average- and worst-domain risk in MDL, obtained using the H-divergence; ii) a new loss to accommodate semi-supervised multi-domain learning and domain adaptation; iii) the experimental validation of the approach, improving on the state of the art on two standard image benchmarks and a novel bioimage dataset, Cell.
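For intuition, here is a generic domain-adversarial training sketch (DANN-style gradient reversal, with a domain discriminator over several domains). It only illustrates the general adversarial mechanism the abstract builds on; it is not MuLANN's actual loss, and the semi-supervised handling of partially overlapping class sets is omitted.

```python
# Generic multi-domain adversarial training sketch (not MuLANN's loss).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output          # flip gradients flowing back into the encoder

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
classifier = nn.Linear(64, 10)       # class predictions (labeled samples)
domain_head = nn.Linear(64, 3)       # which of 3 domains a sample came from
ce = nn.CrossEntropyLoss()

x = torch.randn(24, 32)
y_class = torch.randint(0, 10, (24,))   # assumed: all samples labeled, for simplicity
y_domain = torch.randint(0, 3, (24,))

h = encoder(x)
class_loss = ce(classifier(h), y_class)
domain_loss = ce(domain_head(GradReverse.apply(h)), y_domain)
# the encoder minimizes the class loss while *maximizing* domain confusion
(class_loss + domain_loss).backward()
```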