The next Proposition states some important properties of $c(\cdot) = (c_k(\cdot))_{k \in \mathbb{N}}$ when its argument satisfies (?). In what follows, we write for brevity $c_k = c_k(\cdot)$.


List of Figures

1.1  Illustration of percolation on a square lattice of size 500 × 500. Bonds are red if open, white if blocked, and percolation paths are in green.
     The empirical spectral distribution of a matrix drawn from the 1500 × 1500 GOE.
1.4  A two-dimensional projection of a two-component Gaussian mixture with n = 100 and p = 1000.
     Product quantization of a centered Ornstein-Uhlenbeck process, starting from Y_0 = 0 (left), and a fBm (right).
2.2  Rates of convergence of the minimax risks.

List of Tables

2.1  Phase transitions in the Gaussian setting.