Model Averaging in Large Scale Learning

Abstract: This thesis studies properties of estimation procedures related to aggregation in high-dimensional sparse regression. The exponentially weighted aggregate (EWA) is well studied in the literature: it enjoys strong guarantees in both fixed and random designs, obtained through a PAC-Bayesian approach. However, little is known about the properties of the EWA with a Laplace prior. Chapter 2 analyses the statistical behaviour of the prediction loss of the EWA with Laplace prior in the fixed design setting. Sharp oracle inequalities are established which extend the properties of the Lasso to a larger family of estimators; these results also bridge the gap between the Lasso and the Bayesian Lasso. Chapter 3 introduces an adjusted Langevin Monte Carlo sampling method that approximates the EWA with Laplace prior to any targeted accuracy within an explicit, finite number of iterations. Chapter 4 explores the statistical behaviour of adjusted versions of the Lasso for transductive and semi-supervised learning tasks in the random design setting.
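
For readers unfamiliar with the estimator the abstract refers to, here is a minimal sketch of the standard formulation (the notation is illustrative, not taken from the thesis). Given observations Y = X theta* + noise with a design matrix X in R^{n x p}, the EWA with Laplace prior is the mean of the pseudo-posterior

    \hat{\pi}_{\lambda,\beta}(\theta) \propto \exp\Big( -\tfrac{1}{2\beta}\,\lVert Y - X\theta \rVert_2^2 \;-\; \lambda \lVert \theta \rVert_1 \Big),
    \qquad
    \hat{\theta}^{\mathrm{EWA}} = \int_{\mathbb{R}^p} \theta \, \hat{\pi}_{\lambda,\beta}(\theta)\, d\theta,

where beta > 0 is a temperature parameter and lambda > 0 the prior scale. The mode of this pseudo-posterior is a Lasso estimator, which is one way to see the bridge between the Lasso and the Bayesian Lasso mentioned in the abstract.

To give a concrete feel for the Langevin Monte Carlo idea of Chapter 3, the sketch below runs a plain unadjusted Langevin scheme with a smoothed l1 term; it is not the adjusted method of the thesis, and the function name, parameter values, and smoothing constant eps are all hypothetical choices for illustration.

    import numpy as np

    def ewa_laplace_ula(X, Y, lam=1.0, beta=1.0, step=1e-4,
                        n_iter=20000, burn_in=5000, eps=1e-3, seed=0):
        """Unadjusted Langevin sketch of the EWA with Laplace prior.

        Targets pi(theta) ~ exp(-||Y - X theta||^2 / (2*beta) - lam*||theta||_1),
        with |t| smoothed as sqrt(t^2 + eps^2) so the drift is defined everywhere.
        Returns the average of the post-burn-in iterates, a Monte Carlo
        estimate of the posterior mean (the EWA)."""
        rng = np.random.default_rng(seed)
        _, p = X.shape
        theta = np.zeros(p)
        mean = np.zeros(p)
        kept = 0
        for k in range(n_iter):
            # gradient of the smoothed potential U(theta)
            grad = X.T @ (X @ theta - Y) / beta \
                   + lam * theta / np.sqrt(theta**2 + eps**2)
            # Langevin step: theta <- theta - h * grad U(theta) + sqrt(2h) * noise
            theta = theta - step * grad + np.sqrt(2.0 * step) * rng.standard_normal(p)
            if k >= burn_in:
                kept += 1
                mean += (theta - mean) / kept
        return mean

    # toy usage: sparse signal, Gaussian design
    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 50))
    theta_star = np.zeros(50)
    theta_star[:3] = 2.0
    Y = X @ theta_star + 0.5 * rng.standard_normal(100)
    theta_hat = ewa_laplace_ula(X, Y, lam=2.0, beta=0.25)

Unlike this unadjusted sketch, the adjusted scheme studied in the thesis comes with explicit non-asymptotic guarantees on the number of iterations needed to reach a targeted accuracy.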
Metadata

https://pastel.archives-ouvertes.fr/tel-01735320
Contributor: Abes Star
Submitted on: Thursday, March 15, 2018 - 5:19:07 PM
Last modification on: Tuesday, April 30, 2019 - 6:00:06 PM
Long-term archiving on: Tuesday, September 11, 2018 - 12:10:21 AM

File

73105_GRAPPIN_2018_archivage.p...
Version validated by the jury (STAR)

Identifiers

  • HAL Id: tel-01735320, version 1

Citation

Edwin Grappin. Model Averaging in Large Scale Learning. Statistics [math.ST]. Université Paris-Saclay, 2018. English. ⟨NNT : 2018SACLG001⟩. ⟨tel-01735320⟩

Metrics

  • Record views: 471
  • File downloads: 279