A matrix product algorithm for the far-from-equilibrium evolution of dynamical processes on networks

author: Caterina De Bacco, University of Paris-Sud 11
published: March 7, 2016,   recorded: December 2015,   views: 36
Description

In recent years, statistical physicists have made increasing efforts to tackle the evolution of stochastic dynamical processes on homogeneous and heterogeneous networks. While exact solutions are limited to very simple homogeneous and low-dimensional models, such as the Glauber dynamics of the 1D Ising model and a few other examples, in general the only available tools are numerical simulations or dynamical mean-field theories of varying degrees of sophistication. If one focuses on the description of the transient, far-from-equilibrium dynamics, the problem is characterized by a computational complexity that grows exponentially with the duration of the process, which prevents tackling it in its general setting. As a consequence, its study has been limited either to irreversible dynamics or to approximate methods that fail to capture the transient part of the dynamics.

Here we propose and test a novel algorithm to address this problem. Our method combines two successful ideas developed in different contexts of statistical physics: from the classical framework, the cavity method, or message-passing algorithm, in its dynamical version; from quantum many-body theory, the idea of a matrix product approximation of a state function. This unusual combination of statistical formalisms makes it possible to effectively approximate a dynamical process on networks where variables evolve through parallel updates, reducing the computational complexity from exponential to polynomial in both system size and duration in time. A crucial ingredient is the possibility to tune the trade-off between approximation accuracy and computational complexity arbitrarily.
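The compression idea borrowed from quantum many-body theory can be illustrated with a minimal sketch: a joint object over T binary time steps (a tensor with 2^T entries) is factorized into a chain of small matrices via sequential truncated SVDs, where the kept rank (the "bond dimension") is exactly the knob that trades approximation accuracy for computational cost. This is a generic matrix-product (tensor-train) compression in NumPy, not the authors' actual message-passing algorithm; the function names and the `max_rank` parameter are illustrative.

```python
import numpy as np

def mps_compress(tensor, max_rank):
    """Factorize an array over T binary variables into matrix-product
    (tensor-train) cores via sequential truncated SVDs.
    `max_rank` is the bond dimension: the tunable accuracy/cost knob."""
    T = tensor.ndim
    cores = []
    rest = tensor.reshape(2, -1)  # split off the first variable
    rank = 1
    for _ in range(T - 1):
        # merge the next variable into the row index, then SVD
        U, S, Vt = np.linalg.svd(rest.reshape(rank * 2, -1),
                                 full_matrices=False)
        keep = min(max_rank, len(S))          # truncation step
        cores.append(U[:, :keep].reshape(rank, 2, keep))
        rest = S[:keep, None] * Vt[:keep]     # carry the remainder forward
        rank = keep
    cores.append(rest.reshape(rank, 2, 1))
    return cores

def mps_eval(cores, bits):
    """Evaluate the matrix-product representation at one configuration:
    a chain of (rank x rank) matrix products, polynomial instead of
    exponential in the number of variables."""
    v = np.ones((1,))
    for core, b in zip(cores, bits):
        v = v @ core[:, b, :]
    return v[0]
```

With `max_rank` large enough the factorization is exact; lowering it yields a controlled approximation whose storage grows linearly (not exponentially) with the number of time steps, which is the mechanism behind the polynomial complexity claimed above.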
