Learn to Initialize – Deep Learning of initial solutions for the optimization of robust cost functions
The optimization of non-convex, robust cost functions is required in virtually any pipeline for 3D reconstruction, localization, pose estimation and scan registration. Similarly, it is at the core of robustly fitting deformable models, such as human hands or skeletons, to data.
For these time-critical, medium- to large-scale problems, continuous optimization techniques, in particular non-linear least squares, are the primary methods of choice. Further, to find a good optimum, common techniques include graduated non-convexity, the construction of surrogate loss functions, or employing lifted robust kernels per residual. While these techniques are related to each other, e.g. lifting can be regarded as constructing a quadratic surrogate per residual, they all require a good initial solution for the problem or, equivalently, an initial weighting of the residuals.
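To make the per-residual weighting these techniques share more concrete, here is a minimal sketch of the IRLS (iteratively reweighted least squares) weight induced by the Huber kernel. The kernel choice and the threshold `delta` are illustrative assumptions, not something specified by this project:

```python
import numpy as np

def huber_weight(r, delta=1.0):
    # IRLS weight w(r) = psi(r) / r for the Huber kernel:
    # weight 1 (quadratic regime) for |r| <= delta,
    # delta / |r| (linear regime, down-weighted) beyond.
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

# Large residuals -- likely outliers -- receive small weights.
residuals = np.array([0.1, 0.5, 3.0, -10.0])
print(huber_weight(residuals))  # -> [1.  1.  0.3333...  0.1]
```

An initial weighting of the residuals, in this view, simply amounts to choosing good starting values for these per-residual weights.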
The goal of this work is to leverage the power of deep learning to deliver this initial guess. To that end, we will build on known deep learning models that can handle a variable number of inputs [1]. This initial solution can then be refined by an optimization network, which will be the core contribution of this thesis. For this part we will implement a conventional Gauss-Newton solver and extend the algorithm to robust optimization, e.g. [2]. To facilitate the learning process, our implementation will allow backpropagation through the optimization process.
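As a rough sketch of the solver component, the snippet below runs Gauss-Newton with Huber IRLS re-weighting on a toy robust line-fitting problem. The toy problem and the kernel are our own assumptions for illustration; the actual implementation would unroll such iterations inside an autodiff framework so that gradients can flow back into the network that produces the initial guess:

```python
import numpy as np

def robust_gauss_newton(x, y, theta, delta=1.0, iters=50):
    # Fit y ~ theta[0]*x + theta[1] with Gauss-Newton, re-weighting
    # every residual with its Huber IRLS weight in each iteration.
    J = np.stack([x, np.ones_like(x)], axis=1)   # Jacobian (constant here)
    for _ in range(iters):
        r = J @ theta - y                        # per-point residuals
        a = np.abs(r)
        w = np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))
        Jw = J * w[:, None]                      # weighted Jacobian W J
        # Gauss-Newton step: solve (J^T W J) d = J^T W r
        theta = theta - np.linalg.solve(J.T @ Jw, Jw.T @ r)
    return theta

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[7] += 30.0                                     # one gross outlier
theta = robust_gauss_newton(x, y, np.zeros(2))
print(theta)  # close to the true [2, 1] despite the outlier
```

Starting from `theta = [0, 0]` works on this toy problem, but for the non-convex costs targeted here the quality of the initial `theta` (or, equivalently, of the initial weights `w`) determines which optimum such a solver reaches, which is precisely what the learned initializer is meant to address.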
[1] Qi et al., "PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space", NeurIPS 2017.
[2] Zach et al., "Descending, Lifting or Smoothing: Secrets of Robust Cost Optimization", ECCV 2018.
Develop a pipeline for robust cost function optimization that leverages deep learning for smart initialization in an end-to-end fashion.
Christoph Vogel (Christoph.vogel@microsoft.com)
Ondrej Miksik (ondrej.miksik@microsoft.com)