Bandwidth allocation in large stochastic networks

Abstract: The purpose of this thesis is to tackle three problems inspired by large distributed systems. The tools used to do so come from probability theory and, more specifically, queueing theory. The studies carried out during this thesis make it possible to understand the behavior of the systems and algorithms under consideration, but also to prove some interesting theoretical results and to highlight several probabilistic phenomena.

In Chapter II, a bandwidth-sharing network model is analyzed. Contrary to the models usually studied in the literature, users do not rely on a congestion control mechanism; they are assumed to send data at their maximum rate and to protect their transmissions against losses with error-correcting codes. The resulting model is analyzed for two topologies: linear networks and upstream trees. Using fluid limits, the stability condition of these networks is obtained; in these fluid limits, an interesting stochastic averaging phenomenon arises. Another scaling is then used to prove that the stability condition of these networks tends to the optimal one when the maximal rate of users becomes negligible compared with the capacity of the links.

In Chapter III, CSMA/CA, a classical channel access algorithm for wireless networks, is studied. Each link consists of a sender/receiver pair, and an interference graph models potential collisions between links. The model takes link arrivals and departures into account. An approximation is made by assuming that the channel access dynamics are infinitely faster than the dynamics of link arrivals and departures. It is proved that CSMA results in an optimal use of radio resources for ad hoc networks. However, it is also established that this algorithm is not efficient for infrastructure-based networks, where a bias in favor of upstream traffic is observed. At the end of the chapter, the time-scale separation assumption is discussed.

The last two chapters of this thesis are dedicated to the study of a large distributed storage system with failures. The purpose is to estimate the decay rate of the system or the durability of a given file. In Chapter IV, the first point of view is taken and the system is considered as a whole. It consists of a large set of files, each of which can have up to two copies. Each copy is lost after a random time; a centralized back-up mechanism restores lost copies, and a file with no remaining copy is permanently lost. The system is analyzed as the number of files grows to infinity. In order to describe its behavior correctly, three different time scales are considered: the slow time scale describes the back-up mechanism, the normal time scale defines the capacity of the system, and the fast time scale is used to evaluate the decay rate. A stochastic averaging principle is also observed on the fast time scale.

In Chapter V, the point of view of a single file is taken. Connections are drawn with the classical Ehrenfest and Engset models from statistical physics and telecommunications. Exponential martingale methods are used to compute the Laplace transforms of hitting times. The asymptotic behavior of these hitting times, and in particular the durability of a file, is then estimated.
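To make the channel access dynamics of Chapter III more concrete, the following sketch simulates a simple continuous-time CSMA model on a fixed interference graph. It is only an illustration of the kind of dynamics studied, not code from the thesis: link arrivals and departures are ignored, the activation rate lam and the unit deactivation rate are arbitrary choices, and the long-run fraction of time each link is active is taken as a proxy for its throughput.

import random

# A minimal sketch (not taken from the thesis): continuous-time CSMA dynamics
# on a fixed interference graph, with link arrivals and departures ignored.
# An inactive link grabs the channel at rate lam provided that none of its
# interfering neighbours is active; an active link releases it at rate 1.
# The long-run fraction of time a link is active is a proxy for its throughput.

def simulate_csma(neighbors, lam=2.0, horizon=10_000.0, seed=0):
    rng = random.Random(seed)
    n = len(neighbors)
    active = [False] * n
    busy_time = [0.0] * n
    t = 0.0
    while t < horizon:
        # Enumerate the possible transitions and their rates in the current state.
        rates = []
        for i in range(n):
            if active[i]:
                rates.append((1.0, i, False))      # release the channel
            elif not any(active[j] for j in neighbors[i]):
                rates.append((lam, i, True))       # successful channel access
        total = sum(r for r, _, _ in rates)
        dt = rng.expovariate(total)                # time until the next event
        for i in range(n):
            if active[i]:
                busy_time[i] += min(dt, horizon - t)
        t += dt
        # Choose the next transition proportionally to its rate.
        _, i, new_state = rng.choices(rates, weights=[r for r, _, _ in rates])[0]
        active[i] = new_state
    return [b / horizon for b in busy_time]

# Three links on a line: the middle link interferes with both of its neighbours.
print(simulate_csma({0: [1], 1: [0, 2], 2: [1]}))

With this interference graph the two outer links can be active simultaneously while the middle link cannot, so the simulation exhibits the kind of spatial bias that the thesis studies in more general settings.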
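Similarly, the per-file point of view of Chapter V can be illustrated with a small birth-and-death simulation. This is a sketch under simplified assumptions, not the model of the thesis: exponential copy lifetimes with rate mu, repair of a missing copy at rate lam, and at most two copies. It estimates the durability of a file, i.e. the expected time until no copy remains, and compares it with the closed-form value for this simple chain.

import random

# A minimal sketch (not taken from the thesis): the fate of a single file that
# can hold 0, 1 or 2 copies.  Each copy is lost independently at rate mu; when
# a single copy remains, the back-up mechanism recreates the second one at
# rate lam; state 0 (no copy left) is absorbing and the file is lost.  The
# durability of the file is the expected time to reach 0 starting from 2 copies.

def lifetime(mu, lam, rng):
    copies, t = 2, 0.0
    while copies > 0:
        loss = copies * mu                  # rate at which one copy disappears
        repair = lam if copies == 1 else 0.0
        total = loss + repair
        t += rng.expovariate(total)
        if rng.uniform(0.0, total) < loss:
            copies -= 1                     # a copy is lost
        else:
            copies += 1                     # the back-up restores the copy
    return t

def estimated_durability(mu=1.0, lam=10.0, runs=100_000, seed=0):
    rng = random.Random(seed)
    return sum(lifetime(mu, lam, rng) for _ in range(runs)) / runs

# For this simple birth-and-death chain the expected hitting time of 0 starting
# from 2 copies is (3*mu + lam) / (2*mu**2); the Monte Carlo estimate should be
# close to this value.
mu, lam = 1.0, 10.0
print(estimated_durability(mu, lam), (3 * mu + lam) / (2 * mu ** 2))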
Complete list of metadata

https://pastel.archives-ouvertes.fr/pastel-00717671
Contributor: Mathieu Feuillet
Submitted on: Friday, July 13, 2012 - 1:00:43 PM
Last modification on: Friday, May 25, 2018 - 12:02:03 PM
Long-term archiving on: Sunday, October 14, 2012 - 2:45:54 AM

Identifiers

  • HAL Id: pastel-00717671, version 1

Citation

Mathieu Feuillet. Bandwidth allocation in large stochastic networks. Networking and Internet Architecture [cs.NI]. Ecole Polytechnique X, 2012. English. ⟨pastel-00717671⟩
