Data variability could be your friend
published: April 17, 2008, recorded: March 2008, views: 5455
Deterministic modeling, in the form of ordinary differential equations (ODEs), is the dominant paradigm in systems biology. This stems partly from the type of data available: input data for these models (e.g. gene expression levels, protein concentrations) are normally derived from whole cell populations. Consequently, what is modeled is the behaviour of one average cell rather than a multitude of individual cells. Variability within the data originates mainly from the measurement apparatus (technical error) or from difficult-to-control environmental conditions that precede the measurement (biological variability), and can be an impediment to clear-cut conclusions. For example, kinetic parameters cannot be known with absolute precision and have to be accompanied by confidence intervals, which are generally commensurate with the rather high variability attached to biological data. Data variability can also put obstacles in the way of decisive model selection.
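As a toy illustration of the confidence-interval point above (not taken from the lecture; all numbers and the exponential-decay form are hypothetical), one can fit a decay rate to noisy population-average data by log-linear least squares and attach a standard-error-based interval to the estimate:

```python
import math, random

random.seed(0)

# Hypothetical population-average decay: N(t) = N0 * exp(-k * t)
k_true, n0 = 0.5, 40.0
times = [0.5 * i for i in range(1, 13)]
# Simulated noisy averages (multiplicative noise stands in for real data)
data = [n0 * math.exp(-k_true * t) * math.exp(random.gauss(0, 0.05))
        for t in times]

# Log-linear least squares: log N = log N0 - k * t
xs, ys = times, [math.log(d) for d in data]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
k_hat = -slope

# Residual variance gives the standard error of the slope,
# hence an approximate 95% confidence interval for the rate k
resid = [y - (ybar + slope * (x - xbar)) for x, y in zip(xs, ys)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
ci = (k_hat - 1.96 * se, k_hat + 1.96 * se)
print(f"k_hat = {k_hat:.3f}, 95% CI approx ({ci[0]:.3f}, {ci[1]:.3f})")
```

The width of the interval scales with the residual noise, which is the sense in which confidence intervals are "commensurate with" data variability.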
Measurement techniques are, however, increasingly being applied to individual cells. It is possible to average the individual cell observations, estimate the dispersion of this synthetic measurement, and use these data within the modeling paradigm outlined above. However, inter-cell variability can be the result of intrinsic system noise; this is the case in particular when the molecular species involved exist in very low concentrations, as in signaling networks. We argue that because this variability is partly intrinsic, it can be harnessed rather than merely tolerated, providing novel insights into the mechanisms governing the system under study. This requires a paradigm shift (from deterministic to stochastic modeling), even though ODEs remain central in the latter. To illustrate this, the example system we use is DNA Double Strand Break repair dynamics in irradiated human cells.
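To make the intrinsic-noise argument concrete, here is a minimal sketch (hypothetical rate constants, not the lecture's model) using Gillespie's exact stochastic simulation algorithm for pure first-order decay. Across many simulated cells the mean matches the ODE solution, but the count also carries an intrinsic variance, which in this simple case is predicted by a binomial survival argument:

```python
import math, random

random.seed(1)

def gillespie_decay(n0, k, t_end):
    """Exact stochastic simulation of N -> N - 1 at rate k*N (first-order decay)."""
    t, n = 0.0, n0
    while n > 0:
        t += random.expovariate(k * n)   # waiting time to the next event
        if t > t_end:
            break                        # next event falls after t_end
        n -= 1
    return n

n0, k, t = 20, 0.5, 1.0                  # low copy number: noise is intrinsic
samples = [gillespie_decay(n0, k, t) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

# Each of the n0 molecules survives to time t independently with p = exp(-k*t),
# so the count is Binomial(n0, p): mean n0*p (the ODE solution), variance n0*p*(1-p)
p = math.exp(-k * t)
print(f"ODE mean {n0 * p:.2f} vs SSA mean {mean:.2f}")
print(f"Binomial variance {n0 * p * (1 - p):.2f} vs SSA variance {var:.2f}")
```

The point of the sketch: the ODE captures only the first line; the second line is extra, distribution-level information that only a stochastic treatment predicts.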
Recent assaying techniques allow the quantification of DNA Double Strand Breaks (DSBs) at the individual cell level. Repeated measurements over time form a dynamic image of the DSB decay process in cells after exposure to a pulse of ionising irradiation. Crucially, individual cell measurements allow the monitoring of distributional features of the DSB count in a population. Existing deterministic models correctly mimic global features of this system; in particular, they fit very well the different decay regimes observed when one focuses on the average DSB count in the population.
We show, however, that when these models are translated into the stochastic realm, they provide a poor fit to the data once one considers distributional features such as the variance of the DSB count. Furthermore, using simple stochastic models that are partly amenable to analytical manipulation, we show that enriching the existing models with extra feedback loops produces an outcome more in tune with observations. Three independent data sets are used. Possible biological consequences are briefly discussed.
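As a hedged sketch of why distributional features can discriminate between mechanisms (a toy example; the rate laws and parameters below are invented for illustration and are not the lecture's models or data), one can compare the variance-to-mean ratio produced by a linear first-order repair propensity with that of a saturating, enzyme-limited one. Two models tuned to similar mean behaviour can still differ in their predicted dispersion:

```python
import math, random

random.seed(2)

def simulate(prop, n0, t_end):
    """SSA for single-species decay with an arbitrary propensity prop(n)."""
    t, n = 0.0, n0
    while n > 0:
        t += random.expovariate(prop(n))
        if t > t_end:
            break
        n -= 1
    return n

def moments(prop, n0, t_end, runs=20000):
    xs = [simulate(prop, n0, t_end) for _ in range(runs)]
    m = sum(xs) / runs
    v = sum((x - m) ** 2 for x in xs) / runs
    return m, v

n0, t_end = 20, 1.0
m1, v1 = moments(lambda n: 0.5 * n, n0, t_end)              # first-order repair
m2, v2 = moments(lambda n: 12.0 * n / (4.0 + n), n0, t_end)  # saturating (Michaelis-Menten-like) repair
print(f"linear:     mean {m1:.2f}, var/mean {v1 / m1:.2f}")
print(f"saturating: mean {m2:.2f}, var/mean {v2 / m2:.2f}")
```

Fitting only the mean decay curve cannot separate such mechanisms; the variance (and higher distributional features) can, which is the sense in which data variability "could be your friend".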
Download slides: licsb08_barenco_dvf_01.pdf (836.3 KB)
Download slides: licsb08_barenco_dvf_01.ppt (1.1 MB)