M26: Interplay Between Machine Learning and Modern Regularization Theory: Multi-Parameter Regularization

Learning theory has a long tradition of using regularization methods to construct and analyze learning algorithms. Close cooperation between researchers in learning theory and regularization theory has produced a series of very interesting and important findings in a wide range of applications. However, new challenges in data-based learning still call for the close attention of both communities. Among these challenges are learning from so-called Big Data, manifold learning, and multi-task and multiple kernel learning.

Applications in signal and image processing often deal with situations in which the signal or image of interest can be modeled as a combination of several components of different nature that one wishes to identify and separate. In this case, the “reconstruction problem” can be interpreted as an inverse problem of unmixing type. The separation, or unmixing, can for instance be achieved by solving a variational optimization problem with multiple penalty terms, each of which favors a specific component of the solution. As the number of components increases, a proper choice of the parameters affecting the solution of the optimization problem becomes a major challenge. To date, however, this issue has been studied and addressed only to a very moderate degree by researchers in the disciplines involved.
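In generic form (a sketch for orientation only; the forward operator, penalties, and norms vary with the application and are placeholders here), such a multi-penalty unmixing problem reads:

```latex
\min_{u_1,\dots,u_K}\;
\frac{1}{2}\,\Big\| A\Big(\sum_{k=1}^{K} u_k\Big) - y \Big\|^2
\;+\; \sum_{k=1}^{K} \alpha_k\, R_k(u_k)
```

Here $y$ is the observed data, $A$ the forward operator, each penalty $R_k$ (e.g. a smoothness or sparsity term) favors one component $u_k$, and the joint choice of the regularization parameters $\alpha_1,\dots,\alpha_K$ is precisely the multi-parameter selection problem described above.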

Inspired by all these challenges, potential benefits and applications, this mini-symposium aims at:

• providing an opportunity for experts and young researchers from the fields of regularization and learning to present their achievements, and to discuss and identify "hot" topics for possible future cooperation; and
• discussing recent theoretical and numerical developments and advances in multi-penalty regularization as an important tool for component separation.

Organizers:
Sergei V. Pereverzyev, Johann Radon Institute for Computational and Applied Mathematics (RICAM), Linz, Austria
Ruben D. Spies, Instituto de Matemática Aplicada del Litoral (IMAL), CONICET-UNL, Santa Fe, Argentina

Invited Speakers (in alphabetical order):
Bangti Jin, Department of Computer Science, University College London
Why is stochastic gradient descent good for inverse problems?

Timo Klock, Simula Research Laboratory, Fornebu, Norway
Multi-penalty regularization with data-driven parameter choice for unmixing problems

Johannes Maly, Technical University of Munich, Germany
Matrix sensing using combined sparsity and low-rank constraints

Sergiy Pereverzyev Jr., Applied Mathematics Group, Department of Mathematics, University of Innsbruck, Innsbruck, Austria
Regularized integral operators in two-sample problem

Lorenzo A. Rosasco, Massachusetts Institute of Technology, Cambridge, MA, USA and Università di Genova, Italy
Efficient optimal learning via regularization with projections

Ruben D. Spies, Instituto de Matemática Aplicada del Litoral (IMAL), CONICET-UNL, Santa Fe, Argentina
Mixed penalization for enhancing class separability of event-related potentials in Brain-Computer Interfaces