Optimization for data science

Convex optimization and data science are intimately related: on the one hand, convexity plays a key role in solving optimization and equilibrium problems for signal recovery in machine learning and inverse problems; on the other hand, recent advances in convex optimization algorithms have been largely motivated by applications in these areas. Mathematically, signal recovery problems are often formulated as variational problems, whose composite structure reflects the available a priori information. The optimization variable is usually high- or infinite-dimensional, and the problem depends on a large amount of noisy data.
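
As an illustration (the notation here is ours), a typical composite formulation reads

\min_{x \in \mathcal{H}} \; f(x) + g(x),

where f is a smooth data-fidelity term measuring the fit to the noisy observations and g is a possibly nonsmooth regularizer encoding the a priori information, such as sparsity or smoothness.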

In this context, first-order splitting methods are the methods of choice, thanks to their low memory requirements, their low cost per iteration, and their ability to activate each component of the optimization problem separately. The most popular one is forward-backward splitting, which alternates a forward (explicit) gradient step on the smooth component with a backward (implicit) proximal step on the nonsmooth one.
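
For the composite problem above, the forward-backward iteration reads

x_{k+1} = \mathrm{prox}_{\gamma g}\big( x_k - \gamma \nabla f(x_k) \big),

where \gamma is a step size and \mathrm{prox}_{\gamma g} denotes the proximity operator of g. As a minimal illustrative sketch, not the group's own code, here is the scheme applied to l1-regularized least squares (the problem instance, function names, and step-size choice are our assumptions):

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_lasso(A, y, lam, n_iter=500):
    # Forward-backward splitting for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2                 # step size from the Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                            # forward (explicit) gradient step on the smooth term
        x = soft_threshold(x - gamma * grad, gamma * lam)   # backward (implicit) proximal step on the l1 term
    return x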

In variational formulations of inverse and machine learning problems, modeling and computation are treated separately: once the regularization scheme has been chosen, model selection amounts to choosing the regularization parameter. Since this stage is usually computationally intensive, alternative approaches based on iterative (a.k.a. implicit) regularization have recently proved very effective. These techniques rely on the observation that many iterative schemes exhibit a self-regularizing property, in the sense that early termination of the iterative process has a regularizing effect. For this class of methods the number of iterations plays the role of the regularization parameter, and the stopping rule performs model selection. Such approaches perform surprisingly well in practice, especially for deep learning methods.
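
A schematic sketch of the early stopping idea, under our own simplifying assumptions (a linear least-squares model, a held-out validation set, and hypothetical function names):

import numpy as np

def gradient_descent_early_stopping(A, y, A_val, y_val, max_iter=1000):
    # Plain gradient descent on the least-squares fit 0.5 * ||A x - y||^2,
    # stopped as soon as the error on a held-out set starts to grow.
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2          # safe step size
    x = np.zeros(A.shape[1])
    prev_err = np.inf
    for k in range(max_iter):
        x_new = x - gamma * (A.T @ (A @ x - y))      # gradient step on the training data
        err = np.linalg.norm(A_val @ x_new - y_val)  # monitor the held-out error
        if err > prev_err:
            return x, k                              # the iteration count k acts as the regularization parameter
        x, prev_err = x_new, err
    return x, max_iter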

Our research group's interests lie in the above areas; in particular, we study the convergence and stability properties of splitting algorithms and iterative regularization schemes in the presence of (stochastic) noise in the involved quantities and under acceleration steps. To do so, we combine techniques from convex optimization with tools from machine learning, inverse problems, and functional analysis. We are also interested in extending these techniques to zeroth-order optimization, where only function evaluations are performed at each step, and in extending convergence results to nonconvex but structured settings, such as those arising in deep learning.
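
As an example of the zeroth-order setting, a common building block is a random two-point gradient estimate constructed from function values only; the sketch below is purely illustrative (the function name is ours, and normalization conventions, e.g. an extra dimension factor, vary in the literature):

import numpy as np

def two_point_gradient_estimate(f, x, h=1e-3, rng=None):
    # Derivative-free gradient estimate of f at x, using only two function
    # evaluations along a random direction u.
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)                 # random search direction
    return (f(x + h * u) - f(x - h * u)) / (2.0 * h) * u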

Funded projects:

MSCA-ITN-ETN - European Training Networks 2019: TraDE-OPT, Training Data-driven Experts in OPTimization, 2020-2024 (EUR 3,774,874)

AFOSR project ARIA-ML: Adaptive, Robust and Informed Algorithm for modern Machine Learning, 2022-2024 (EUR 250,000)

Last updated 20 July 2022