Sequence of targets: tempering from the prior to the posterior

Sequential Monte Carlo (SMC) approaches the problem of sampling from π by introducing a sequence of intermediate distributions π_0, ..., π_t, ..., π_T on the common measurable target spaces (R^d, B(R^d)) for all t, such that π_0 is easy to sample from, and π_T = π (Del Moral et al., 2006). We focus on tempering to construct intermediate distributions, that is π_t(x) ∝ p(x) ℓ(y|x)^{λ_t}, where the sequence of exponents λ_t is such that 0 = λ_0 < ··· < λ_t < ··· < λ_T = 1. These exponents do not need to be pre-specified: they may be automatically selected during the run of an SMC sampler. Note also that we assume throughout the article that the prior distribution p(x) is a proper probability distribution, from which samples can be drawn.
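
To make the adaptive choice of the exponents concrete, here is a minimal sketch in Python of a tempered SMC pass that picks each λ_{t+1} by bisection so that the effective sample size (ESS) of the incremental weights (λ_{t+1} - λ_t) log ℓ(y|x) stays near a fixed fraction of the particle number. The toy Gaussian prior and likelihood, the 50% ESS target, and all function names are illustrative assumptions; a full SMC sampler would also apply MCMC moves targeting π_t after each resampling step, which is omitted here for brevity.

import numpy as np


def ess(log_w):
    # Effective sample size of the normalized weights implied by log-weights.
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)


def next_exponent(log_lik, lam, n, target_frac=0.5, n_bisect=50):
    # Find lam_next in (lam, 1] such that the ESS of the incremental weights
    # (lam_next - lam) * log_lik is roughly target_frac * n, by bisection.
    if ess((1.0 - lam) * log_lik) >= target_frac * n:
        return 1.0
    lo, hi = lam, 1.0
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        if ess((mid - lam) * log_lik) > target_frac * n:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


def tempered_smc(log_lik_fn, sample_prior, n=1000, seed=0):
    # Adaptive tempering from the prior (lambda = 0) to the posterior (lambda = 1).
    rng = np.random.default_rng(seed)
    x = sample_prior(n, rng)          # particles drawn from the prior p(x)
    lam, lambdas = 0.0, [0.0]
    while lam < 1.0:
        log_lik = log_lik_fn(x)
        new_lam = next_exponent(log_lik, lam, n)
        log_w = (new_lam - lam) * log_lik           # incremental log-weights
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        x = x[rng.choice(n, size=n, p=w)]           # multinomial resampling
        # A full sampler would now apply MCMC moves invariant for pi_{new_lam}.
        lam = new_lam
        lambdas.append(lam)
    return x, lambdas


if __name__ == "__main__":
    # Toy model: prior x ~ N(0, 1), likelihood y ~ N(x, 1), observed y = 2,
    # so the posterior is N(1, 1/2) and its mean is 1.
    y = 2.0
    particles, lambdas = tempered_smc(
        log_lik_fn=lambda x: -0.5 * (y - x) ** 2,
        sample_prior=lambda n, rng: rng.standard_normal(n),
    )
    print("selected exponents:", np.round(lambdas, 3))
    print("approximate posterior mean:", particles.mean())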

S. Agapiou, O. Papaspiliopoulos, D. Sanz-Alonso, and A. M. Stuart, Importance sampling: computational complexity and intrinsic dimension, 2015.
DOI : 10.1214/17-sts611

URL : http://arxiv.org/pdf/1511.06196

P. Alquier, N. Friel, R. Everitt, and A. Boland, Noisy monte carlo: Convergence of markov chains with approximate transition kernels, Statistics and Computing, vol.26, issue.1-2, pp.29-47, 2016.
DOI : 10.1007/s11222-014-9521-x

URL : http://arxiv.org/pdf/1403.5496

P. Alquier, J. Ridgway, and N. Chopin, On the properties of variational approximations of Gibbs posteriors, The Journal of Machine Learning Research, vol.17, issue.1, pp.8374-8414, 2016.

C. Andrieu, A. Doucet, and R. Holenstein, Particle markov chain monte carlo methods, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.72, issue.3, pp.269-342, 2010.
DOI : 10.1111/j.1467-9868.2009.00736.x

URL : https://rss.onlinelibrary.wiley.com/doi/pdf/10.1111/j.1467-9868.2009.00736.x

C. Andrieu and G. O. Roberts, The pseudo-marginal approach for efficient Monte Carlo computations, Ann. Statist, vol.37, issue.2, pp.697-725, 2009.
DOI : 10.1214/07-aos574

URL : https://doi.org/10.1214/07-aos574

C. Andrieu and J. Thoms, A tutorial on adaptive MCMC, Statistics and computing, vol.18, issue.4, pp.343-373, 2008.
DOI : 10.1007/s11222-008-9110-y

I. A. Antonov and V. Saleev, An economic method of computing LPτ-sequences, USSR Computational Mathematics and Mathematical Physics, vol.19, issue.1, pp.252-256, 1979.
DOI : 10.1016/0041-5553(79)90085-5

Y. F. Atchadé and J. S. Rosenthal, On adaptive markov chain monte carlo algorithms, Bernoulli, vol.11, issue.5, pp.815-828, 2005.

R. Bardenet, A. Doucet, and C. Holmes, On markov chain monte carlo methods for tall data, The Journal of Machine Learning Research, vol.18, issue.1, pp.1515-1557, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01355287

S. Barthelmé and N. Chopin, Expectation propagation for likelihood-free inference, Journal of the American Statistical Association, vol.109, issue.505, pp.315-333, 2014.

M. A. Beaumont, Estimation of population growth or decline in genetically monitored populations, Genetics, vol.164, issue.3, pp.1139-1160, 2003.

M. A. Beaumont, J. Cornuet, J. Marin, and C. P. Robert, Adaptive approximate Bayesian computation, Biometrika, vol.96, issue.4, pp.983-990, 2009.
DOI : 10.1093/biomet/asp052

URL : https://hal.archives-ouvertes.fr/hal-00280461

E. Bernton, P. E. Jacob, M. Gerber, and C. P. Robert, Inference in generative models using the Wasserstein distance, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01517550

A. Beskos, A. Jasra, N. Kantas, and A. Thiery, On the convergence of adaptive sequential Monte Carlo methods, The Annals of Applied Probability, vol.26, issue.2, pp.1111-1146, 2016.

A. Beskos, N. Pillai, G. Roberts, J. Sanz-Serna, and A. Stuart, Optimal tuning of the hybrid Monte Carlo algorithm, Bernoulli, vol.19, issue.5A, pp.1501-1534, 2013.
DOI : 10.3150/12-bej414

URL : https://doi.org/10.3150/12-bej414

M. Betancourt, Identifying the optimal integration time in Hamiltonian Monte Carlo, 2016.

M. Betancourt, S. Byrne, and M. Girolami, Optimizing the integrator step size for Hamiltonian Monte Carlo, 2014.

J. Bierkens, P. Fearnhead, and G. Roberts, The zig-zag process and superefficient sampling for bayesian analysis of big data, 2016.

C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics), 2006.

D. M. Blei, A. Kucukelbir, and J. D. Mcauliffe, Variational inference: A review for statisticians, Journal of the American Statistical Association, vol.112, issue.518, pp.859-877, 2017.
DOI : 10.1080/01621459.2017.1285773

URL : http://arxiv.org/pdf/1601.00670

M. G. Blum, Approximate Bayesian computation: A nonparametric perspective, Journal of the American Statistical Association, vol.105, issue.491, pp.1178-1187, 2010.
DOI : 10.1198/jasa.2010.tm09448

URL : https://hal.archives-ouvertes.fr/hal-00373301

L. Bornn, N. S. Pillai, A. Smith, and D. Woodard, The use of a single pseudosample in approximate Bayesian computation, Statistics and Computing, vol.27, issue.3, pp.1-14, 2015.

L. Bottou, F. E. Curtis, and J. Nocedal, Optimization methods for large-scale machine learning, 2016.

N. Bou-Rabee and J. M. Sanz-Serna, Geometric integrators and the Hamiltonian Monte Carlo method, Acta Numerica, vol.27, pp.113-206, 2018.
DOI : 10.1017/s0962492917000101

URL : http://arxiv.org/pdf/1711.05337

A. Bouchard-Côté, S. J. Vollmer, and A. Doucet, The bouncy particle sampler: A nonreversible rejection-free Markov chain Monte Carlo method, Journal of the American Statistical Association, pp.1-13, 2018.

A. Buchholz and N. Chopin, Improving approximate Bayesian computation via quasi Monte Carlo, 2017.
DOI : 10.1080/10618600.2018.1497511

URL : http://arxiv.org/pdf/1710.01057

Y. Burda, R. Grosse, and R. Salakhutdinov, Importance weighted autoencoders, Proceedings of the International Conference on Learning Representations, 2016.

O. Cappé, A. Guillin, J. Marin, and C. P. Robert, Population Monte Carlo, Journal of Computational and Graphical Statistics, vol.13, issue.4, pp.907-929, 2004.

O. Cappé, E. Moulines, and T. Rydén, Inference in Hidden Markov Models, Springer Series in Statistics, 2005.

B. Carpenter, A. Gelman, M. Hoffman, D. Lee, B. Goodrich et al., Stan: A Probabilistic Programming Language, Journal of Statistical Software, vol.76, issue.1, pp.1-32, 2017.
DOI : 10.18637/jss.v076.i01

URL : https://www.jstatsoft.org/index.php/jss/article/view/v076i01/v76i01.pdf

S. Chatterjee and P. Diaconis, The sample size required in importance sampling, The Annals of Applied Probability, vol.28, issue.2, pp.1099-1135, 2018.

S. Chen, J. Dick, and A. B. Owen, Consistency of markov chain quasi-monte carlo on continuous state spaces, The Annals of Statistics, vol.39, issue.2, pp.673-701, 2011.

T. Chen, E. Fox, and C. Guestrin, Stochastic gradient Hamiltonian Monte Carlo, International Conference on Machine Learning, pp.1683-1691, 2014.

B. Chérief-Abdellatif and P. Alquier, Consistency of variational Bayes inference for estimation and model selection in mixtures, 2018.

N. Chopin, A sequential particle filter method for static models, Biometrika, vol.89, issue.3, pp.539-552, 2002.

N. Chopin and J. Ridgway, Leave Pima Indians alone: binary regression as a benchmark for Bayesian computation, Statistical Science, vol.32, issue.1, pp.64-87, 2017.

N. Chopin, J. Rousseau, and B. Liseo, Computational aspects of Bayesian spectral density estimation, Journal of Computational and Graphical Statistics, vol.22, issue.3, pp.533-557, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00767466

O. F. Christensen, G. O. Roberts, and J. S. Rosenthal, Scaling limits for the transient phase of local Metropolis-Hastings algorithms, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.67, issue.2, pp.253-268, 2005.

C. Dutang and P. Savicky, randtoolbox: Generating and Testing Random Numbers, 2015.

J. Cornuet, J. Marin, A. Mira, and C. P. Robert, Adaptive multiple importance sampling, Scandinavian Journal of Statistics, vol.39, issue.4, pp.798-812, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00403248

R. Cranley and T. N. Patterson, Randomization of number theoretic methods for multiple integration, SIAM Journal on Numerical Analysis, vol.13, issue.6, pp.904-914, 1976.

A. Defazio, F. Bach, and S. Lacoste-Julien, SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives, Advances in Neural Information Processing Systems, pp.1646-1654, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01016843

P. Del Moral, Feynman-Kac formulae: genealogical and interacting particle systems with applications, Feynman-Kac Formulae, pp.47-93, 2004.

P. Del Moral, A. Doucet, and A. Jasra, Sequential Monte Carlo samplers, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.68, issue.3, pp.411-436, 2006.
URL : https://hal.archives-ouvertes.fr/hal-01593880

P. Del Moral, A. Doucet, and A. Jasra, Sequential Monte Carlo for Bayesian Computation, Bayesian Statistics, issue.8, pp.1-34, 2007.
URL : https://hal.archives-ouvertes.fr/hal-00641462

P. Del Moral, A. Doucet, and A. Jasra, An adaptive sequential Monte Carlo method for approximate Bayesian computation, Statistics and Computing, vol.22, issue.5, pp.1009-1020, 2012.

L. Devroye, Non-Uniform Random Variate Generation, 1986.

J. Dick, F. Y. Kuo, and I. H. Sloan, High-dimensional integration: the quasi-Monte Carlo way, Acta Numerica, vol.22, pp.133-288, 2013.

J. Dick and F. Pillichshammer, Digital nets and sequences: discrepancy theory and quasi-Monte Carlo integration, 2010.

S. S. Drew and T. Homem-de-Mello, Quasi-Monte Carlo strategies for stochastic optimization, Proceedings of the 38th Conference on Winter Simulation, pp.774-782, Winter Simulation Conference, 2006.

C. C. Drovandi and M. Tran, Improving the Efficiency of Fully Bayesian Optimal Design of Experiments Using Randomised Quasi-Monte Carlo, Bayesian Anal, vol.13, issue.1, pp.139-162, 2018.

S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth, Hybrid Monte Carlo, Physics Letters B, vol.195, issue.2, pp.216-222, 1987.

J. Duchi, E. Hazan, and Y. Singer, Adaptive subgradient methods for online learning and stochastic optimization, Journal of Machine Learning Research, vol.12, pp.2121-2159, 2011.

P. Fearnhead and D. Prangle, Constructing summary statistics for approximate Bayesian computation: Semi-automatic approximate Bayesian computation, Journal of the Royal Statistical Society. Series B: Statistical Methodology, vol.74, issue.3, pp.419-474, 2012.

P. Fearnhead and B. M. Taylor, An adaptive sequential Monte Carlo sampler, Bayesian Analysis, vol.8, issue.2, pp.411-438, 2013.

D. T. Frazier, G. M. Martin, C. P. Robert, and J. Rousseau, Asymptotic properties of approximate bayesian computation, Biometrika, p.27, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01517556

A. Gelman and J. Hill, Data analysis using regression and multilevel/hierarchical models, 2006.

A. Gelman, A. Kiss, and J. Fagan, An analysis of the nypd's stop-and-frisk policy in the context of claims of racial bias. Columbia Public Law & Legal Theory Working Papers, p.595, 2006.

S. Geman and D. Geman, Stochastic relaxation, gibbs distributions, and the bayesian restoration of images, IEEE Transactions, issue.6, pp.721-741, 1984.

M. Gerber, On integration methods based on scrambled nets of arbitrary size, Journal of Complexity, vol.31, issue.6, pp.798-816, 2015.

M. Gerber and L. Bornn, Improving simulated annealing through derandomization, Journal of Global Optimization, vol.68, issue.1, pp.189-217, 2017.

M. Gerber and N. Chopin, Sequential quasi Monte Carlo, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.77, issue.3, pp.509-579, 2015.

M. Gerber, N. Chopin, and N. Whiteley, Negative association, ordering and convergence of resampling methods, 2017.

P. Germain, F. Bach, A. Lacoste, S. Lacoste-Julien, D. D. Lee et al., PAC-Bayesian theory meets Bayesian inference, Advances in Neural Information Processing Systems, vol.29, pp.1884-1892, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01324072

J. Geweke, Bayesian inference in econometric models using Monte Carlo integration, Econometrica: Journal of the Econometric Society, pp.1317-1339, 1989.

M. Girolami and B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.73, issue.2, pp.123-214, 2011.

P. Glasserman, Monte Carlo methods in financial engineering, vol.53, 2013.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley et al., Generative adversarial nets, Advances in neural information processing systems, pp.2672-2680, 2014.

J. Gorham and L. Mackey, Measuring sample quality with Stein's method, Advances in Neural Information Processing Systems, pp.226-234, 2015.

D. Gunawan, R. Kohn, M. Quiroz, K. Dang, and M. Tran, Subsampling Sequential Monte Carlo for Static Bayesian Models, 2018.

M. Gutmann and J. Corander, Bayesian optimization for likelihood-free inference of simulator-based statistical models, Journal of Machine Learning Research, vol.17, issue.125, pp.1-47, 2016.

E. Hairer, C. Lubich, and G. Wanner, Geometric numerical integration illustrated by the Störmer-Verlet method, Acta numerica, vol.12, pp.399-450, 2003.

E. Hairer, C. Lubich, and G. Wanner, Geometric numerical integration: structure-preserving algorithms for ordinary differential equations, vol.31, 2006.

J. H. Halton, Algorithm 247: Radical-inverse quasi-random point sequence, Communications of the ACM, vol.7, issue.12, pp.701-702, 1964.

G. H. Hardy, On double Fourier series, and especially those which represent the double zeta-function with real and incommensurable parameters, Quart. J, vol.37, pp.53-79, 1905.

W. K. Hastings, Monte carlo sampling methods using markov chains and their applications, Biometrika, vol.57, issue.1, pp.97-109, 1970.

J. Heng and P. E. Jacob, Unbiased Hamiltonian Monte Carlo with couplings, 2017.

F. J. Hickernell, Koksma-Hlawka Inequality, 2006.
DOI : 10.1002/9781118445112.stat03070

F. J. Hickernell, C. Lemieux, and A. B. Owen, Control variates for quasi-Monte Carlo, Statistical Science, vol.20, issue.1, pp.1-31, 2005.
DOI : 10.1214/088342304000000468

URL : https://doi.org/10.1214/088342304000000468

M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley, Stochastic variational inference, The Journal of Machine Learning Research, vol.14, issue.1, pp.1303-1347, 2013.

M. D. Hoffman and A. Gelman, The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo, Journal of Machine Learning Research, vol.15, issue.1, pp.1593-1623, 2014.

J. H. Huggins and D. M. Roy, Sequential Monte Carlo as Approximate Sampling: bounds, adaptive resampling via ∞-ESS, and an application to Particle Gibbs, 2015.
DOI : 10.3150/17-bej999

URL : http://arxiv.org/pdf/1503.00966

T. S. Jaakkola and M. I. Jordan, Bayesian parameter estimation via variational methods, Statistics and Computing, vol.10, issue.1, pp.25-37, 2000.

P. E. Jacob, J. O'Leary, and Y. F. Atchadé, Unbiased Markov chain Monte Carlo with couplings, 2017.

A. Jasra, D. Paulin, and A. H. Thiery, Error Bounds for Sequential Monte Carlo Samplers for Multimodal Distributions, 2015.

A. Jasra, D. A. Stephens, A. Doucet, and T. Tsagaris, Inference for Lévy-Driven Stochastic Volatility Models via Adaptive Sequential Monte Carlo, Scandinavian Journal of Statistics, vol.38, issue.1, pp.1-22, 2011.

N. L. Johnson, A. W. Kemp, and S. Kotz, Univariate discrete distributions, 2005.
DOI : 10.1002/0471715816

R. Johnson and T. Zhang, Accelerating stochastic gradient descent using predictive variance reduction, Advances in neural information processing systems, pp.315-323, 2013.

M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, An introduction to variational methods for graphical models, Machine learning, vol.37, issue.2, pp.183-233, 1999.
DOI : 10.1007/978-94-011-5014-9_5

URL : http://www.cis.upenn.edu/~mkearns/papers/barbados/jgjs-var.pdf

C. Joy, P. P. Boyle, and K. S. Tan, Quasi-monte carlo methods in numerical finance, Management Science, vol.42, issue.6, pp.926-938, 1996.
DOI : 10.1287/mnsc.42.6.926

J. Kiefer and J. Wolfowitz, Stochastic estimation of the maximum of a regression function, Ann. Math. Statist, vol.23, issue.3, pp.462-466, 1952.

D. Kingma and J. Ba, Adam: A method for stochastic optimization, Proceedings of the International Conference on Learning Representations, 2015.

D. P. Kingma and M. Welling, Auto-encoding variational bayes, Proceedings of the International Conference on Learning Representations, 2014.

A. Kong, J. S. Liu, and W. H. Wong, Sequential imputation and Bayesian missing data problems, Journal of the American statistical association, vol.89, pp.278-288, 1994.

L. Kuipers and H. Niederreiter, Uniform distribution of sequences. Courier Corporation, 2012.

P. L'Ecuyer, Randomized Quasi-Monte Carlo: An Introduction for Practitioners, 12th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, 2016.

P. L'Ecuyer, C. Lécot, and B. Tuffin, A randomized quasi-Monte Carlo simulation method for Markov chains, Operations Research, vol.56, issue.4, pp.958-975, 2008.

P. L'Ecuyer and C. Lemieux, Recent advances in randomized quasi-Monte Carlo methods, Modeling uncertainty, pp.419-474, 2005.

A. Lee, On the choice of MCMC kernels for approximate Bayesian computation with SMC samplers, Simulation Conference (WSC), Proceedings of the 2012 Winter, pp.1-12, 2012.

A. Lee and K. Łatuszyński, Variance bounding and geometric ergodicity of Markov chain Monte Carlo kernels for approximate Bayesian computation, Biometrika, vol.101, issue.3, pp.655-671, 2014.

A. Lee and N. Whiteley, Variance estimation in the particle filter, Biometrika, p.28, 2018.

B. Leimkuhler and C. Matthews, Molecular Dynamics, 2016.
URL : https://hal.archives-ouvertes.fr/hal-00854791

C. Lemieux and P. L'Ecuyer, On the use of quasi-Monte Carlo methods in computational finance, International Conference on Computational Science, pp.607-616, 2001.

G. Leobacher and F. Pillichshammer, Introduction to quasi-Monte Carlo integration and applications, 2014.

D. Levy, M. D. Hoffman, and J. Sohl-dickstein, Generalizing hamiltonian monte carlo with neural networks, International Conference on Learning Representations, 2018.

J. Lintusaari, M. U. Gutmann, R. Dutta, S. Kaski, and J. Corander, Fundamentals and recent developments in approximate Bayesian computation, Systematic Biology, vol.66, issue.1, pp.66-82, 2017.

P. L'Ecuyer, Quasi-Monte Carlo methods with applications in finance, Finance and Stochastics, vol.13, issue.3, pp.307-349, 2009.

P. L'Ecuyer and C. Sanvido, Coupling from the past with randomized quasi-Monte Carlo, The Sixth IMACS Seminar on Monte Carlo Methods Applied Scientific Computing VII. Forward Numerical Grid Generation, Approximation and Simulation, vol.81, pp.476-489, 2010.

S. Mandt, M. D. Hoffman, and D. M. Blei, Stochastic Gradient Descent as Approximate Bayesian Inference, Journal of Machine Learning Research, vol.18, issue.134, pp.1-35, 2017.

O. Mangoubi, N. S. Pillai, and A. Smith, Does Hamiltonian Monte Carlo mix faster than a random walk on multimodal densities?, 2018.

O. Mangoubi and A. Smith, Rapid mixing of Hamiltonian Monte Carlo on strongly log-concave distributions, 2017.

J. Marin, P. Pudlo, C. Robert, and R. Ryder, Approximate Bayesian computational methods, Statistics and Computing, vol.22, issue.6, pp.1167-1180, 2012.
DOI : 10.1007/s11222-011-9288-2

URL : https://hal.archives-ouvertes.fr/hal-00567240

J. Marin, L. Raynal, P. Pudlo, M. Ribatet, C. P. Robert, et al., ABC random forests for Bayesian parameter inference, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01337189

P. Marjoram, J. Molitor, V. Plagnol, and S. Tavaré, Markov chain Monte Carlo without likelihoods, Proceedings of the National Academy of Sciences of the United States of America, vol.100, pp.15324-15332, 2003.
DOI : 10.1073/pnas.0306899100

URL : http://europepmc.org/articles/pmc307566?pdf=render

N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, Equation of state calculations by fast computing machines, The journal of chemical physics, vol.21, issue.6, pp.1087-1092, 1953.

N. Metropolis and S. Ulam, The monte carlo method, Journal of the American Statistical Association, vol.44, issue.247, pp.335-341, 1949.

A. C. Miller, N. Foti, A. D'Amour, and R. P. Adams, Reducing reparameterization gradient variance, Advances in Neural Information Processing Systems, 2017.

T. P. Minka, Expectation propagation for approximate Bayesian inference, Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence, pp.362-369, 2001.

S. Mohamed, N. de Freitas, and Z. Wang, Adaptive Hamiltonian and Riemann manifold Monte Carlo samplers, 2013.

E. Moulines and F. R. Bach, Non-asymptotic analysis of stochastic approximation algorithms for machine learning, Advances in Neural Information Processing Systems, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00608041

L. M. Murray, A. Lee, and P. E. Jacob, Parallel resampling in the particle filter, Journal of Computational and Graphical Statistics, vol.25, issue.3, pp.789-805, 2016.

R. M. Neal, Bayesian learning via stochastic dynamics, Advances in neural information processing systems, pp.475-482, 1993.

R. M. Neal, Annealed importance sampling, Statistics and computing, vol.11, issue.2, pp.125-139, 2001.

R. M. Neal, MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, vol.2, 2011.

Y. Nesterov, Introductory lectures on convex optimization: A basic course, vol.87, 2013.

Y. E. Nesterov, A method for solving the convex programming problem with convergence rate O(1/k^2), Dokl. Akad. Nauk SSSR, vol.269, pp.543-547, 1983.

H. Niederreiter, Quasi-Monte Carlo methods and pseudo-random numbers, Bulletin of the American Mathematical Society, vol.84, issue.6, pp.957-1041, 1978.

H. Niederreiter, Random number generation and quasi-Monte Carlo methods, 1992.

C. Oates and M. Girolami, Control functionals for quasi-monte carlo integration, In Artificial Intelligence and Statistics, pp.56-65, 2016.

C. J. Oates, M. Girolami, C. , and N. , Control functionals for Monte Carlo integration, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.79, issue.3, pp.695-718, 2017.

M. Oh and J. O. Berger, Adaptive importance sampling in monte carlo integration, Journal of Statistical Computation and Simulation, vol.41, issue.3-4, pp.143-168, 1992.

G. Ökten, B. Tuffin, and V. Burago, A central limit theorem and improved error bounds for a hybrid-Monte Carlo sequence with applications in computational finance, Journal of Complexity, vol.22, issue.4, pp.435-458, 2006.

A. Owen, Monte Carlo extension of quasi-Monte Carlo, Winter Simulation Conference. Proceedings (Cat. No.98CH36274), vol.1, pp.571-577, 1998.

A. B. Owen, Scrambled net variance for integrals of smooth functions, The Annals of Statistics, vol.25, issue.4, pp.1541-1562, 1997.

A. B. Owen, Local antithetic sampling with scrambled nets, The Annals of Statistics, vol.36, issue.5, pp.2319-2343, 2008.

A. B. Owen and S. D. Tribble, A quasi-monte carlo metropolis algorithm, Proceedings of the National Academy of Sciences, vol.102, issue.25, pp.8844-8849, 2005.

J. Paisley, D. Blei, and M. Jordan, Variational Bayesian inference with stochastic search, International Conference on Machine Learning, 2012.

A. Pakman, D. Gilboa, D. Carlson, and L. Paninski, Stochastic bouncy particle sampler, International Conference on Machine Learning, pp.2741-2750, 2017.

G. Papamakarios and I. Murray, Fast ε-free inference of simulation models with Bayesian conditional density estimation, Advances in Neural Information Processing Systems, pp.1028-1036, 2016.

C. Pasarica and A. Gelman, Adaptively scaling the Metropolis algorithm using expected squared jumped distance, Statistica Sinica, pp.343-364, 2010.

M. K. Pitt and N. Shephard, Filtering via simulation: Auxiliary particle filters, Journal of the American statistical association, vol.94, issue.446, pp.590-599, 1999.

L. F. Price, C. C. Drovandi, A. Lee, and D. J. Nott, Bayesian synthetic likelihood, Journal of Computational and Graphical Statistics, vol.27, issue.1, pp.1-11, 2018.

P. Pudlo, J. Marin, A. Estoup, J. Cornuet, M. Gautier et al., Reliable abc model choice via random forests, Bioinformatics, vol.32, issue.6, pp.859-866, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01067925

R. Ranganath, S. Gerrish, and D. M. Blei, Black box variational inference, Proceedings of the International Conference on Artificial Intelligence and Statistics, 2014.

D. J. Rezende, S. Mohamed, and D. Wierstra, Stochastic backpropagation and approximate inference in deep generative models, Proceedings of the International Conference on Machine Learning, 2014.

J. Ridgway, Computation of Gaussian orthant probabilities in high dimension, Statistics and computing, vol.26, issue.4, pp.899-916, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01438314

H. Robbins and S. Monro, A stochastic approximation method. The annals of mathematical statistics, pp.400-407, 1951.

C. Robert, The Bayesian choice: from decision-theoretic foundations to computational implementation, 2007.

C. Robert and G. Casella, Monte Carlo Statistical Methods, 2013.

G. O. Roberts, A. Gelman, and W. R. Gilks, Weak convergence and optimal scaling of random walk Metropolis algorithms. The annals of applied probability, vol.7, pp.110-120, 1997.

G. O. Roberts and J. S. Rosenthal, Optimal scaling of discrete approximations to Langevin diffusions, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.60, issue.1, pp.255-268, 1998.

G. O. Roberts and J. S. Rosenthal, Optimal scaling for various Metropolis-Hastings algorithms, Statist. Sci, vol.16, issue.4, pp.351-367, 2001.

G. O. Roberts and R. L. Tweedie, Exponential convergence of langevin distributions and their discrete approximations, Bernoulli, vol.2, issue.4, pp.341-363, 1996.

G. Roeder, Y. Wu, and D. K. Duvenaud, Sticking the landing: Simple, lower-variance gradient estimators for variational inference, Advances in Neural Information Processing Systems, 2017.

M. Rosenblatt, Remarks on a multivariate transformation, Ann. Math. Statist, vol.23, issue.3, pp.470-472, 1952.

F. J. Ruiz, M. K. Titsias, and D. M. Blei, Overdispersed black-box variational inference, Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2016.

F. R. Ruiz, M. Titsias, and D. Blei, The generalized reparameterization gradient, Advances in Neural Information Processing Systems, 2016.

T. Ryder, A. Golightly, A. S. McGough, and D. Prangle, Black-box variational inference for stochastic differential equations, 2018.

R. Salomone, L. F. South, C. C. Drovandi, and D. P. Kroese, Unbiased and consistent nested sampling via sequential Monte Carlo, 2018.

C. Schäfer and N. Chopin, Sequential Monte Carlo on large binary sampling spaces, Statistics and Computing, pp.1-22, 2013.

T. Schwedes and B. Calderhead, Quasi markov chain monte carlo methods, 2018.

N. Schweizer, Non-asymptotic error bounds for sequential MCMC and stability of Feynman-Kac propagators, 2012.

N. Schweizer, Non-asymptotic error bounds for sequential MCMC methods in multimodal settings, 2012.

M. Sedki, P. Pudlo, J. Marin, C. P. Robert, C. et al., Efficient learning in ABC algorithms, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00741572

C. Sherlock and A. H. Thiery, A discrete bouncy particle sampler, 2017.

S. A. Sisson, Y. Fan, M. M. Tanaka, A. Rogers, Y. Huang et al., Correction for Sisson et al., Sequential Monte Carlo without likelihoods, Proceedings of the National Academy of Sciences, vol.106, issue.39, pp.16889-16889, 2009.

J. Snoek, H. Larochelle, and R. P. Adams, Practical Bayesian optimization of machine learning algorithms, Advances in neural information processing systems, pp.2951-2959, 2012.

R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction, 1998.

M. M. Tanaka, A. R. Francis, F. Luciani, and S. A. Sisson, Using approximate Bayesian computation to estimate tuberculosis transmission parameters from genotype data, Genetics, vol.173, issue.3, pp.1511-1520, 2006.

S. Tavaré, D. J. Balding, R. C. Griffiths, and P. Donnelly, Inferring coalescence times from DNA sequence data, Genetics, vol.145, issue.2, pp.505-518, 1997.

L. Tierney, Markov chains for exploring posterior distributions, Ann. Statist, vol.22, issue.4, pp.1701-1728, 1994.

T. Toni, D. Welch, N. Strelkowa, A. Ipsen, and M. P. Stumpf, Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems, Journal of The Royal Society Interface, vol.6, issue.31, pp.187-202, 2009.

D. Tran, A. Kucukelbir, A. B. Dieng, M. Rudolph, D. Liang, et al., Edward: A library for probabilistic modeling, inference, and criticism, 2016.

D. Tran, R. Ranganath, and D. Blei, Hierarchical implicit models and likelihood-free variational inference, Advances in Neural Information Processing Systems, pp.5523-5533, 2017.

M. Tran, D. J. Nott, and R. Kohn, Variational bayes with intractable likelihood, Journal of Computational and Graphical Statistics, vol.26, issue.4, pp.873-882, 2017.

P. Vanetti, A. Bouchard-Côté, G. Deligiannidis, and A. Doucet, Piecewise Deterministic Markov Chain Monte Carlo, 2017.

D. Vats, J. M. Flegal, and G. L. Jones, Multivariate output analysis for Markov chain Monte Carlo, 2015.

Y. Wang and D. M. Blei, Frequentist consistency of variational bayes, Journal of the American Statistical Association, pp.1-85, 2018.

Z. Wang, S. Mohamed, and N. de Freitas, Adaptive Hamiltonian and Riemann manifold Monte Carlo, International Conference on Machine Learning, pp.1462-1470, 2013.

M. Welling and Y. W. Teh, Bayesian learning via stochastic gradient langevin dynamics, Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp.681-688, 2011.

A. Wibisono, A. C. Wilson, J. , and M. I. , A variational perspective on accelerated methods in optimization, Proceedings of the National Academy of Sciences, vol.113, issue.47, pp.7351-7358, 2016.

R. D. Wilkinson, Accelerating ABC methods using Gaussian processes, Proceedings of the 17 th International Conference on Artificial Intelligence and Statistics (AISTATS), vol.33, pp.1015-1023, 2014.

R. J. Williams, Simple statistical gradient-following algorithms for connectionist reinforcement learning, Machine learning, vol.8, issue.3-4, pp.229-256, 1992.

J. Yang, V. Sindhwani, H. Avron, and M. Mahoney, Quasi-monte carlo feature maps for shift-invariant kernels, International Conference on Machine Learning, pp.485-493, 2014.

C. Zhang, J. Butepage, H. Kjellstrom, and S. Mandt, Advances in variational inference, 2017.

C. Zhang, H. Kjellström, and S. Mandt, Determinantal point processes for mini-batch diversification, Uncertainty in Artificial Intelligence, 2017.

Y. Zhou, A. M. Johansen, and J. A. Aston, Toward Automatic Model Comparison: An Adaptive Sequential Monte Carlo Approach, Journal of Computational and Graphical Statistics, vol.25, issue.3, pp.701-726, 2016.