MA469 R-projects for 2015/6 from R.S. MacKay
1. Hubble’s law without Friedmann’s equations
Background: Hubble’s law (that redshift z is proportional to distance D) or its refinements are nowadays based on assuming not only a Friedmann universe ds^2 = -dt^2 + S^2(t)|dx|^2, where |dx| is the Euclidean (or possibly standard spherical or hyperbolic) metric on R^3 (or a 3-manifold) and S(t) is some function of t, with matter moving roughly along x = constant, but also Friedmann’s equations, which constrain the function S to satisfy two differential equations.
But one could attempt to fit the observations of redshift z and D (“corrected luminosity distance”, to be precise) directly to the equations 1+z = S(t_r)/S(t_e) and D = S(t_r) \int_{t_e}^{t_r} dt/S(t) (which follow for a Friedmann universe in the Euclidean case, with t_e the time of emission and t_r the time of reception) to determine the function S. Namely, normalising S(t_r) = 1 (no generality is lost, as S is determined only up to a constant factor), its inverse function is given by S^{-1}(1/(1+z)) = t_r - \int_0^D dD'/(1+z(D')).
Project: Find suitable data sources for (z, D) and plot the resulting function S. Take care in interpreting the data: many sources provide the luminosity distance D_L instead of the corrected luminosity distance; they are related by D_L = (1+z)D. Or they may give -5 log_10(D_L). A good reference on the variety of distance measures is Perlick.
Extrapolate your function S to determine t* such that S(t*) = 0 (the time of the big bang).
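A minimal numerical sketch of this reconstruction (Python/NumPy) is given below. The file name and column layout are hypothetical (any conversion from D_L or magnitudes to D must be done first), time is measured in units of D/c with the origin at t_r, and S(t_r) is normalised to 1 as above; it is a sketch of the idea, not a prescribed implementation.

# Sketch only: reconstruct S from (z, D) pairs via
#   t_e(D) = t_r - \int_0^D dD'/(1 + z(D')),   S(t_e) = 1/(1 + z),
# taking t_r = 0 and S(t_r) = 1. The input file and columns are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("z_D_catalogue.csv", delimiter=",", skiprows=1)  # columns: z, D
z, D = data[:, 0], data[:, 1]

order = np.argsort(D)                                   # integrate outwards in D
z, D = z[order], D[order]
z, D = np.concatenate(([0.0], z)), np.concatenate(([0.0], D))   # z = 0 at D = 0 exactly

# cumulative trapezoidal rule for \int_0^D dD'/(1 + z(D'))
integrand = 1.0 / (1.0 + z)
t_e = -np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(D))))

S = 1.0 / (1.0 + z)                                     # S at the emission times t_e

plt.plot(t_e, S, ".")
plt.xlabel("t - t_r  (units of D/c)")
plt.ylabel("S(t)")
plt.show()

# Crude extrapolation to S = 0: fit a low-order polynomial and take its most
# negative real root as an estimate of t* (the time of the big bang).
coeffs = np.polyfit(t_e, S, deg=2)
real_roots = [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9]
print("estimated t* - t_r =", min(real_roots))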
2. Hierarchical aggregation of Markov processes
Background: To compute the stationary distribution and mean first passage time for a Markov process (or semi-Markov process), one can eliminate parts of the state space successively to make smaller processes with equivalent properties, thereby speeding up the computation. This was developed by Wales in Int Rev Phys Chem 25 (2006) 237 and J Chem Phys 130 (2009) 204111. I proposed an improvement on Wales’ methods, which works by aggregating groups of states rather than eliminating them, and which I believe would be more efficient.
The idea is that for a continuous-time regular jump homogeneous Markov chain with state space S, mean waiting times T_s and transition probabilities P_st from s to t (equivalently transition rates q_st = P_st/T_s), one can aggregate the states s in A_i, for any chosen disjoint subsets A_i, into “super-states” which we call A_i again; the process then becomes one on the set of remaining (directed) edges and is no longer Markov, but mean times can be preserved. Aggregation may also result in multiple edges from a state to a super-state, or vice versa, or between super-states, but further aggregation produces nothing more complicated. Thus we take a more general starting point where the process is on a directed graph G that can have multiple edges between nodes (self-edges can be allowed if desired, but wlog we eliminate them), each edge e has a mean waiting time T_e (in the standard case this can be taken to be the waiting time at the destination node of the edge [the origin node would do if preferred]) and each edge e has a probability distribution P_ef for the next edge f, with the successive transitions being independent. The result of aggregating the edges a in A is to update the mean waiting times on incoming edges e to A to
T’_e = T_e + P_eA (I-P_AA)^{-1} T_A
and the transition probabilities from incoming edges e to outgoing edges f to
P’_ef = P_ef + P_eA (I-P_AA)^{-1} P_Af,
where P_eA is the vector of transition probabilities P_ea for edges a in A, P_AA is the matrix of probabilities for transitions between edges within A, T_A is the vector of mean waiting times T_a on edges within A, and P_Af is the vector of probabilities P_af for transitions from edges a in A to f. The resulting process (incompletely defined, because I did not specify the distribution of waiting times, but it is only mean times that I aim to extract) has the induced mean first passage times and stationary distribution. Once the process is small enough one could solve the standard system of equations
T_eB = T_e + \sum_f P_ef T_fB
for the mean first passage time from edge e to a set B of edges, with the convention T_fB = 0 for f in B (B can be a singleton).
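For concreteness, one aggregation step in this edge representation can be prototyped as a small linear-algebra computation. The sketch below (Python/NumPy) assumes the edges are indexed 0, ..., n-1, the mean waiting times are stored in a vector T and the next-edge probabilities in a matrix P with P[e,f] = P_ef; the function name and data layout are illustrative only, not part of the proposal.

# Sketch of one aggregation step: remove the edge set A, updating the surviving
# edges' waiting times and transition probabilities according to
#   T'_e  = T_e  + P_eA (I - P_AA)^{-1} T_A,
#   P'_ef = P_ef + P_eA (I - P_AA)^{-1} P_Af.
import numpy as np

def aggregate_edges(T, P, A):
    """Return (surviving edge indices, T', P') after aggregating the edges in A."""
    A = sorted(A)
    keep = [e for e in range(len(T)) if e not in set(A)]
    M = np.eye(len(A)) - P[np.ix_(A, A)]              # I - P_AA
    y = np.linalg.solve(M, T[A])                      # (I - P_AA)^{-1} T_A
    X = np.linalg.solve(M, P[np.ix_(A, keep)])        # (I - P_AA)^{-1} P_Af
    T_new = T[keep] + P[np.ix_(keep, A)] @ y
    P_new = P[np.ix_(keep, keep)] + P[np.ix_(keep, A)] @ X
    return keep, T_new, P_new

Repeating this with successively chosen sets A gives the hierarchical aggregation; which groups of edges to aggregate, and in what order, is one of the questions raised in the project below.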
Project: Check the above theory to make sure you understand it. Add details such as how to compute the stationary distribution, and how to recover quantities for edges that have at some stage been aggregated, from the hierarchical aggregation tree. Implement it in software; this will require heuristics for which groups of edges to aggregate. Test on some realistic examples (David Wales can probably provide some from physical chemistry).
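Once the process has been reduced far enough, the first-passage system above amounts to a single linear solve. A sketch in the same illustrative layout, using the convention T_fB = 0 for f in B:

# Sketch: mean first passage times T_eB to an edge set B, solving
#   T_eB = T_e + \sum_f P_ef T_fB   with   T_fB = 0 for f in B.
import numpy as np

def mean_first_passage(T, P, B):
    """Return (edges outside B, their mean first passage times to B)."""
    C = [e for e in range(len(T)) if e not in set(B)]
    M = np.eye(len(C)) - P[np.ix_(C, C)]   # transitions among edges outside B
    return C, np.linalg.solve(M, T[C])

A useful correctness check is that aggregating a set A disjoint from B (with aggregate_edges above) should leave these first passage times unchanged for the surviving edges, up to rounding error.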
3. Metrics on probability distributions for probabilistic cellular automata
Background: There are many metrics on spaces of probability distributions, but most are not suitable for probability distributions on large or infinite products, such as the state of a probabilistic cellular automaton. In my paper Robustness of Markov processes on large networks, J Difference Eqns Applns 17 (2011) 1155–67, I rejected many of them and proposed a good one, which I named after Dobrushin.
Project: The project has three related parts:
- Decide whether the Prokhorov metric is good in my sense or not (see e.g. Parthasarathy, Probability measures on metric spaces).
- Compare the Dobrushin metric with that of Föllmer & Horst, Stoch Process Appl 96 (2001) 99–121.
- Compare the Dobrushin metric with that of Steif, Convergence to equilibrium and space-time bernoullicity for spin systems in the M < ε case, Ergodic Theory and Dynamical Systems 11 (1991) 547–575.
4. Dynamic dependency graphs for evaluation of policy
Background: In evaluating proposed public or commercial policy it is important to estimate the interdependency between the dynamics of the relevant agents. For a simple class of stochastic systems (probabilistic cellular automata), which can include time-inhomogeneity, if the interdependence is weak then the statistical outcome is unique and varies smoothly with parameters. In contrast, if it is strong then the outcome may depend on the initial conditions and the realisation, and may exhibit jumps as parameters are varied. See my notes, Dependency graphs for stochastic dynamics of complex systems.
Project: Extend the above theory to continuous-time processes (known as interacting particle systems; many results already exist, so it is more a question of formulating and treating them in the same way as I did the discrete-time case) and to cases where the independent updates are to pairs of units rather than single units.
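As a purely illustrative warm-up for the discrete-time setting (a toy model, not one of the systems in the notes), one can probe the weak- versus strong-dependence distinction numerically by comparing long-run statistics started from extreme initial conditions; the model, parameter values and run lengths below are arbitrary choices.

# Toy probe (illustration only): a noisy-majority PCA on a ring of N binary
# units. Each unit independently copies the majority of its 3-site
# neighbourhood with probability 1 - eps and the minority with probability eps.
# Comparing the long-run density reached from the all-0 and the all-1 initial
# condition is one crude way to see whether the statistical outcome appears to
# depend on the initial condition at a given noise level.
import numpy as np

rng = np.random.default_rng(0)

def final_density(eps, x0, steps=2000):
    x = x0.copy()
    for _ in range(steps):
        maj = ((np.roll(x, 1) + x + np.roll(x, -1)) >= 2).astype(int)
        flip = rng.random(x.size) < eps
        x = np.where(flip, 1 - maj, maj)
    return x.mean()

N = 500
for eps in (0.4, 0.05):
    d0 = final_density(eps, np.zeros(N, dtype=int))
    d1 = final_density(eps, np.ones(N, dtype=int))
    print(f"eps={eps}: density from all-0 start {d0:.3f}, from all-1 start {d1:.3f}")

The project itself concerns the continuous-time analogue and updates to pairs of units; the toy above only fixes ideas for the discrete-time case.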