Large deep neural networks are powerful, but exhibit undesirable behaviors
such as memorization and sensitivity to adversarial examples. Mixup is a
simple learning principle to alleviate these issues. In essence, mixup
trains a neural network on convex combinations of pairs of examples and
their labels. By doing so, mixup regularizes the neural network to favor
simple linear behavior in-between training examples.
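Concretely, in the standard mixup formulation, virtual training examples are constructed as

\[
\tilde{x} = \lambda x_i + (1 - \lambda)\, x_j, \qquad
\tilde{y} = \lambda y_i + (1 - \lambda)\, y_j, \qquad
\lambda \sim \mathrm{Beta}(\alpha, \alpha),
\]

where $(x_i, y_i)$ and $(x_j, y_j)$ are two examples drawn at random from the training data and the hyperparameter $\alpha > 0$ controls the interpolation strength.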
Transfer learning is a prominent topic in machine learning, with subfields such as domain adaptation and multi-domain text classification. Unsupervised domain adaptation (UDA) aims to learn a good predictive model in a target domain without any labeled data by leveraging knowledge from a label-rich source domain.
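In the standard notation (stated here only to fix terms), UDA assumes a labeled source domain $\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ and an unlabeled target domain $\mathcal{D}_t = \{x_j^t\}_{j=1}^{n_t}$ drawn from different but related distributions, and seeks a model with low expected risk under the target distribution even though labels are available only for $\mathcal{D}_s$.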
Recent advances in UDA rely on adversarial learning to disentangle the explanatory and transferable features for domain adaptation. However, existing methods have two issues. First, the discriminability of the latent space cannot be fully guaranteed without considering class-aware information in the target domain. Second, merely discriminating between source and target samples is not sufficient for domain-invariant feature extraction in most parts of the latent space. To alleviate these issues, we
propose a novel dual mixup regularized learning method (DMRL) for unsupervised domain adaptation, which not only guides the classifier to make consistent predictions in-between samples within each domain, but also enriches the intrinsic structures of the latent space. DMRL jointly conducts category and domain mixup at the pixel level to improve the effectiveness of the models.
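To make the two regularizers concrete, here is a minimal PyTorch-style sketch; it is an illustration, not the actual DMRL implementation. The module names (`feat`, `clf`, `disc`), the use of the model's own predictions as soft labels on the unlabeled target domain, and the discriminator regressing the mixing ratio in domain mixup are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def mixup(x_a, x_b, alpha=0.2):
    """Blend two batches at the pixel level with lambda ~ Beta(alpha, alpha)."""
    lam = Beta(alpha, alpha).sample().item()
    return lam * x_a + (1 - lam) * x_b, lam

def dual_mixup_losses(feat, clf, disc, x_s, y_s, x_t, alpha=0.2):
    """Category and domain mixup losses for one batch (hypothetical names).

    feat: feature extractor, clf: label classifier, disc: domain
    discriminator with a single output logit. x_s and x_t are source and
    target image batches of equal size; y_s holds source class indices.
    """
    # Category mixup on the labeled source domain: the prediction for a
    # mixed image should match the same mixture of the one-hot labels.
    idx = torch.randperm(x_s.size(0))
    xs_mix, lam = mixup(x_s, x_s[idx], alpha)
    logits_mix = clf(feat(xs_mix))
    ys = F.one_hot(y_s, logits_mix.size(1)).float()
    ys_mix = lam * ys + (1 - lam) * ys[idx]
    loss_cat_s = F.kl_div(F.log_softmax(logits_mix, dim=1), ys_mix,
                          reduction="batchmean")

    # Category mixup on the unlabeled target domain: current model
    # predictions stand in for the missing labels (an assumption here).
    idx_t = torch.randperm(x_t.size(0))
    xt_mix, lam_t = mixup(x_t, x_t[idx_t], alpha)
    with torch.no_grad():
        pt = F.softmax(clf(feat(x_t)), dim=1)
    pt_mix = lam_t * pt + (1 - lam_t) * pt[idx_t]
    loss_cat_t = F.kl_div(F.log_softmax(clf(feat(xt_mix)), dim=1), pt_mix,
                          reduction="batchmean")

    # Domain mixup: blend source and target pixels and train the
    # discriminator to recover the soft domain label, i.e. the ratio lam_d.
    xd_mix, lam_d = mixup(x_s, x_t, alpha)
    d_logit = disc(feat(xd_mix)).squeeze(1)
    loss_dom = F.binary_cross_entropy_with_logits(
        d_logit, torch.full_like(d_logit, lam_d))

    return loss_cat_s + loss_cat_t, loss_dom
```

In a full training loop these terms would be added to the usual source cross-entropy and adversarial objectives; the equal-batch-size assumption is only needed for the elementwise source/target blend.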
A series of empirical studies on four domain adaptation benchmarks demonstrates that our approach achieves state-of-the-art performance.

For multi-domain text classification, many existing approaches depend heavily on the availability of sufficiently large labeled training data. However, some datasets have abundant labeled training samples, while others have scarce or no labeled data. Here, we propose a
mixup regularized adversarial network (MRAN) to tackle the multi-domain text classification (MDTC) task. This novel model adopts the shared-private paradigm and applies mixup to conduct interpolation regularization at the category and domain levels simultaneously, helping to enforce feature alignment across different distributions.
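For context, the shared-private paradigm can be sketched as follows. This is a hypothetical illustration rather than the MRAN architecture; the layer names and sizes are placeholders, and the category- and domain-level mixup regularizers would interpolate inputs or features in the same spirit as the sketch above.

```python
import torch
import torch.nn as nn

class SharedPrivateNet(nn.Module):
    """Shared-private text classifier (illustrative names and sizes only)."""

    def __init__(self, in_dim, hid, n_domains, n_classes):
        super().__init__()
        # One shared extractor learns domain-invariant features.
        self.shared = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        # One private extractor per domain keeps domain-specific features.
        self.private = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
             for _ in range(n_domains)])
        # The classifier sees shared + private features; the domain
        # discriminator sees only shared ones, which adversarial training
        # pushes toward domain invariance.
        self.clf = nn.Linear(2 * hid, n_classes)
        self.disc = nn.Linear(hid, n_domains)

    def forward(self, x, domain):
        f_shared = self.shared(x)
        f_private = self.private[domain](x)
        class_logits = self.clf(torch.cat([f_shared, f_private], dim=1))
        domain_logits = self.disc(f_shared)
        return class_logits, domain_logits
```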
Experimental results on two MDTC benchmarks demonstrate that our method is superior to existing approaches in the literature.