Multitask Learning

In real-world applications one is frequently confronted with multiple tasks waiting to be solved. Oftentimes these tasks are not independent but correlated in some way, which implies that what is learned from one task is transferable to another, correlated task. By exploiting this transferability, each task becomes easier to solve. In machine learning, the idea of explicitly exploiting the transfer of expertise between tasks, by learning the tasks simultaneously under a unified representation, is formally referred to as "multi-task learning (MTL)". In contrast, learning each task in isolation is called "single-task learning (STL)". MTL is more a learning scenario than a single method or algorithm, and one may approach it with different formulations of the problem.

In this work, we consider regression/classification on several data sets. Any given data set may be correlated with some, but not necessarily all, of the other sets. It is assumed that the training set of each individual task is weak, i.e., it contains too few samples, so that learning each task in isolation leads to poor generalization performance. Our goal is to enhance the weak training sets of all tasks, in a mutually beneficial way, by exploiting the inter-task dependencies.
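
As a concrete illustration of this setting, the following sketch jointly fits linear regressors for several weak tasks by shrinking every task's weight vector toward a shared mean, so that tasks with few samples borrow strength from the others. It is a minimal sketch only: the function name multitask_ridge, the coupling strengths lam and gamma, and the toy data are illustrative assumptions, not the RBF-network or Dirichlet-process models developed in the papers listed at the end of this section.

import numpy as np

def multitask_ridge(task_data, lam=1.0, gamma=1.0, n_iters=20):
    """Jointly fit linear regressors for several weak tasks.

    Each task t keeps its own weight vector w_t, but all w_t are shrunk
    toward a shared mean w0, so tasks with few samples borrow strength
    from the correlated tasks.  task_data is a list of (X, y) pairs.
    """
    d = task_data[0][0].shape[1]
    W = np.zeros((len(task_data), d))
    w0 = np.zeros(d)
    for _ in range(n_iters):
        # Per-task update: ridge regression pulled toward the shared mean w0.
        for t, (X, y) in enumerate(task_data):
            A = X.T @ X + (lam + gamma) * np.eye(d)
            b = X.T @ y + gamma * w0
            W[t] = np.linalg.solve(A, b)
        # Shared update: w0 is the average of the per-task weights.
        w0 = W.mean(axis=0)
    return W, w0

# Toy usage: three correlated tasks, each with only eight training samples.
rng = np.random.default_rng(0)
base_w = rng.normal(size=5)
tasks = []
for _ in range(3):
    X = rng.normal(size=(8, 5))                    # weak training set
    y = X @ (base_w + 0.1 * rng.normal(size=5))    # task-specific perturbation
    tasks.append((X, y))
W, w0 = multitask_ridge(tasks)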

Evidently, MTL becomes superfluous when the data sets all come from the same generating distribution, for in that case we can simply take their union and perform single-task learning. At the other extreme, when all the tasks are independent, there is no correlation to exploit and MTL does not apply either.

The key issue in multitask learning is to determine which tasks are related to which. Once these relations are revealed, the tasks are partitioned into a number of disjoint subsets, such that each task is correlated with the tasks in its own subset and uncorrelated with the tasks in the other subsets.
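
The paper on Dirichlet process mixture priors listed below addresses exactly this partitioning problem by inferring the task grouping within a Bayesian model. The sketch here only illustrates the idea with a much simpler heuristic, assumed for illustration: fit each task independently, cluster the resulting weight vectors, and treat tasks that fall in the same cluster as one correlated subset.

import numpy as np
from sklearn.cluster import KMeans

def group_related_tasks(task_data, n_groups=2, lam=1.0):
    """Partition tasks into disjoint subsets of mutually related tasks.

    Heuristic proxy: fit each task independently with ridge regression,
    then cluster the resulting weight vectors; tasks whose weights land
    in the same cluster are treated as correlated.
    """
    d = task_data[0][0].shape[1]
    W = np.zeros((len(task_data), d))
    for t, (X, y) in enumerate(task_data):
        W[t] = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(W)
    # Tasks sharing a label form one subset; pooling data within a subset
    # gives each weak task a larger effective training set.
    return [np.flatnonzero(labels == g) for g in range(n_groups)]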

Usually the tasks are defined by the physics of the application, so the problem is naturally an MTL problem. In certain scenarios the problem appears as a single task, but by decomposing it into a number of simpler sub-tasks one artificially creates an MTL problem. Such an artificial MTL problem is useful when each sub-task is simpler and easier to solve than the original single-task problem. One method for solving the artificial MTL problem is the hierarchical mixture of experts (HME).
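
To make the HME idea concrete, the sketch below shows a single gating level, i.e., an ordinary mixture of experts: a gating network softly assigns each input to one of the sub-task experts, and the prediction is the gate-weighted combination of the experts' outputs. A full HME nests such gates recursively, and in practice the parameters would be learned (e.g., by EM or Bayesian inference); the parameter names gate_W and expert_W here are hypothetical placeholders, not the notation of the papers below.

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def moe_predict(X, gate_W, expert_W):
    """One-level mixture-of-experts prediction, the building block an HME stacks.

    gate_W:   (d, k) gating weights giving a soft assignment of each
              input to k experts (the artificial sub-tasks).
    expert_W: (k, d) one linear regressor per expert.
    """
    g = softmax(X @ gate_W)          # (n, k) responsibility of each expert
    preds = X @ expert_W.T           # (n, k) every expert's prediction
    return (g * preds).sum(axis=1)   # gate-weighted combination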

Xuejun Liao and Lawrence Carin, "Radial Basis Function Network for Multi-task Learning", Neural Information Processing Systems (NIPS), December 6-8, 2005
Ya Xue, Xuejun Liao, Lawrence Carin, and Balaji Krishnapuram, "Learning multiple classifiers with Dirichlet process mixture priors", NIPS Workshop on Open Problems and Challenges for Nonparametric Bayesian Methods in Machine Learning, December 10, 2005
Ya Xue, Xuejun Liao, Balaji Krishnapuram, and Lawrence Carin, "Bayesian Hierarchical Mixture of Experts for Pattern Classification", submitted to IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), October 2005