
Debiasing regularization in learning and system identification algorithms

Regularized optimization has recently shown promising results in system identification. Such methods aim to avoid overfitting at the cost of inducing bias. In this project, we will investigate debiasing techniques in the context of learning and system identification.

Keywords: regularization, learning, system identification, bias-variance tradeoff

Description

In classical system identification approaches, a particular model structure is postulated and the model parameters that best fit the data are estimated. If the model structure is chosen appropriately, this yields the best unbiased estimator.

However, selecting the model structure and complexity is very hard in practice, which leads to overfitting and high variance. Regularization methods overcome this issue by adopting a general non-parametric model. In addition to the data-adherence objective, a second objective, known as the regularizer, is introduced that encodes prior knowledge about the model (e.g., stability, smoothness, low complexity). The estimator then minimizes a weighted sum of the two objectives. Such methods have proven effective at avoiding overfitting.
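
As a rough illustration of this weighted-sum formulation, the sketch below implements arguably its simplest instance, ridge (Tikhonov) regularization for an FIR model; the function names, data sizes, and parameter values are illustrative assumptions, not part of the project description.

```python
import numpy as np
from scipy.linalg import toeplitz

def regularized_ls(Phi, y, lam):
    """Minimize ||y - Phi @ theta||^2 + lam * ||theta||^2.

    The first term is the data-adherence objective; the second is a
    simple regularizer penalizing large parameters; lam weights the two.
    """
    n = Phi.shape[1]
    # Closed-form minimizer of the weighted-sum objective
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)

# Illustrative use: estimate a 20-tap FIR impulse response from noisy data
rng = np.random.default_rng(0)
g_true = 0.8 ** np.arange(20)                  # true impulse response
u = rng.standard_normal(200)                   # input signal
Phi = toeplitz(u, np.zeros(20))                # FIR regressor matrix
y = Phi @ g_true + 0.1 * rng.standard_normal(200)
g_hat = regularized_ls(Phi, y, lam=1.0)
```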

In particular, we will investigate two common types of regularizers in linear system identification: the smoothness regularizer used in kernel-based methods and the low-complexity regularizer based on the atomic norm. These correspond to smooth function approximation and sparse feature selection in learning, respectively.
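
For the kernel-based approach, a minimal sketch is given below, assuming the commonly used Tuned/Correlated (TC) kernel and a known noise variance; the kernel choice and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def tc_kernel(n, alpha=0.8):
    """Tuned/Correlated (TC) kernel: K[i, j] = alpha ** max(i, j).

    Encodes exponential decay (stability) and smoothness of the
    impulse response; 0 < alpha < 1 sets the decay rate.
    """
    idx = np.arange(1, n + 1)
    return alpha ** np.maximum.outer(idx, idx)

def kernel_fir_estimate(Phi, y, K, sigma2):
    """Kernel-regularized FIR estimate:

    argmin_g ||y - Phi @ g||^2 + sigma2 * g @ inv(K) @ g,

    evaluated in the equivalent form that avoids inverting K.
    """
    N = len(y)
    return K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(N), y)
```

In practice, the hyperparameters (here alpha and sigma2) are tuned from data, which connects directly to the model-validation step discussed next.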

The price to pay is that regularization also induces bias. The appropriate amount of regularization is usually determined by model validation techniques to find a sweet spot along the bias-variance tradeoff curve. In this project, we will investigate advanced strategies for debiasing regularization in learning and identification algorithms:

- For the kernel-based regularizer, extrapolation techniques from numerical analysis will be explored that compensate for the bias via a linear combination of candidate regularized estimators (see the sketch after this list).
- For the atomic-norm regularizer, recent results in high-dimensional statistics (e.g., the debiased lasso, multi-sample splitting) will be applied to the sparse estimator to improve its bias properties.
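
As a rough illustration of the extrapolation idea, the sketch below combines ridge estimators computed at two regularization weights so that the leading-order bias cancels, assuming the bias admits a first-order expansion in the regularization parameter; the kernel-based and atomic-norm versions studied in the project would refine this basic scheme.

```python
import numpy as np

def extrapolated_ridge(Phi, y, lam):
    """Richardson-style debiasing of a ridge estimator (illustrative).

    For small lam the ridge bias is approximately linear in lam, so
    2 * ridge(lam / 2) - ridge(lam) cancels the first-order bias term
    while retaining part of the variance reduction.
    """
    def ridge(l):
        n = Phi.shape[1]
        return np.linalg.solve(Phi.T @ Phi + l * np.eye(n), Phi.T @ y)
    return 2.0 * ridge(lam / 2.0) - ridge(lam)
```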

**Application:** The candidate should be familiar with optimization and have an interest in doing a Master's thesis project on the application of data science to dynamical systems. A background in system identification, machine learning, statistics, or related fields is an asset.

Goal

- Understand the idea of regularization and its common paradigms in system identification and learning.
- Study the existing literature on nonparametric and sparse estimation.
- Investigate debiasing techniques in numerical analysis and regression.
- Apply the debiasing techniques to linear system identification.
- Gain experience in convex optimization and model validation.
- Implement the debiased regularization algorithms and set up numerical tests to assess their performance (a minimal test sketch follows this list).
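
To make the last item concrete, here is a minimal Monte Carlo sketch that compares the empirical bias of the plain ridge estimator with its extrapolated counterpart, reusing the helper functions sketched above; problem sizes, the noise level, and the regularization weight are arbitrary placeholders.

```python
import numpy as np
from scipy.linalg import toeplitz

# regularized_ls and extrapolated_ridge are the sketches defined earlier.
rng = np.random.default_rng(0)
n, N, lam, runs = 20, 200, 5.0, 500
g_true = 0.8 ** np.arange(n)

err_ridge = np.zeros(n)
err_extra = np.zeros(n)
for _ in range(runs):
    u = rng.standard_normal(N)
    Phi = toeplitz(u, np.zeros(n))              # FIR regressor matrix
    y = Phi @ g_true + 0.1 * rng.standard_normal(N)
    err_ridge += regularized_ls(Phi, y, lam) - g_true
    err_extra += extrapolated_ridge(Phi, y, lam) - g_true

# The averaged error approximates each estimator's bias
print("ridge bias norm:        ", np.linalg.norm(err_ridge / runs))
print("extrapolated bias norm: ", np.linalg.norm(err_extra / runs))
```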

Contact

Mingzhou Yin - myin@control.ee.ethz.ch
Mohammad Khosravi - khosravm@control.ee.ethz.ch
Dr. Andrea Iannelli - iannelli@control.ee.ethz.ch
Prof. Roy S. Smith - rsmith@control.ee.ethz.ch

Calendar

Earliest start: 2021-01-18
Latest end: No date

Location

Automatic Control Laboratory (ETHZ)

Labels

Master Thesis

Theory (IfA)

Computation (IfA)

Topics

  • Mathematical Sciences
  • Information, Computing and Communication Sciences
  • Engineering and Technology

Documents

MA_Description_debiased_regularization_IfA.pdf (87 KB)