Entropic Gravity Part 2: Diffusion in non-uniform time and space (unfinished)

November 16, 2025

This post is a continuation of Entropic Gravity Part 1: Diffusion in non-uniform time and space

Classical Diffusion in Inhomogeneous Time

One of the consequences of GR is gravitational time dilation (for a refresher watch Interstellar, it's a good movie), i.e. time runs slower near massive objects. Outside of a spherical mass, the local rate of the flow of time goes as \frac{dt}{d\tau} = \left(1 - \frac{2GM}{rc^2}\right)^{-1/2}, so the rate of the flow of time depends on the radial distance. This is what I'm going to build towards, but first we should build some intuition on some simple examples.

First, let's explore what effect different rates of the flow of time have on a particle diffusing on a 1D ring. As a baseline, here's a simple Monte-Carlo simulation in MATLAB using an unbiased random walk with homogeneous time flow (20000 time steps per trial, 1000 trials); plotting the histogram of where the particle is located at each step shows the expected uniform distribution.
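The original simulation is in MATLAB; as an illustration, here is a minimal Python sketch of the same baseline (the bin count and trial count are my own assumptions, chosen smaller than the post's for speed):

```python
import numpy as np

# Unbiased random walk on a 1D ring with homogeneous time flow.
rng = np.random.default_rng(0)
n_bins, n_steps, n_trials = 100, 20000, 50

counts = np.zeros(n_bins, dtype=np.int64)
for _ in range(n_trials):
    steps = rng.choice((-1, 1), size=n_steps)                 # unbiased +/-1 steps
    path = (int(rng.integers(n_bins)) + np.cumsum(steps)) % n_bins
    counts += np.bincount(path, minlength=n_bins)             # histogram of positions

occupancy = counts / counts.sum()
# With homogeneous time flow the histogram is flat: every bin near 1/n_bins.
```

With homogeneous time flow, the occupancy comes out flat at roughly 1/n_bins per bin, as the post's MATLAB histogram shows.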

So what effect do variations in the flow of time have on diffusing particles? To make the flow of time inhomogeneous, I picked the first 10 bins on the ring and made the random walk in that region only update once for every two updates on the rest of the ring. Running the same Monte-Carlo simulation now yields a different result: The particle on average spends twice as much time in the region with slowed time.
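A Python sketch of the inhomogeneous case, under my reading of the setup (a particle inside the first 10 bins only attempts a move on every second tick; bin and trial counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_slow = 100, 10          # bins 0..9 run at half the rate of time
n_steps, n_trials = 20000, 20     # fewer trials than the post, for speed

counts = np.zeros(n_bins, dtype=np.int64)
for _ in range(n_trials):
    pos = int(rng.integers(n_bins))
    for t in range(n_steps):
        # In the slow region the walk only attempts a move on every second tick
        if pos >= n_slow or t % 2 == 0:
            pos = (pos + (1 if rng.random() < 0.5 else -1)) % n_bins
        counts[pos] += 1          # time spent per bin, counted every tick

p = counts / counts.sum()
ratio = p[:n_slow].mean() / p[n_slow:].mean()   # close to 2
```

The measured ratio of slow-region to normal-region occupancy comes out close to 2 (with small deviations at the region boundaries), matching the result described above.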

This simulation can be run with many different amounts of time dilation (or time-dilation factors), but it always yields the same result:
The probability of finding a particle in a bin with slowed time, relative to a bin outside that region, is directly proportional to the amount of time dilation. Let's try to generalize this point. We can add the effect of time dilation to the Fokker-Planck equation for diffusion either by using the chain rule to rewrite coordinate time (the local flow of time) in terms of proper time, or by incorporating the slower rate of passage of time into the diffusion constant. I will use the latter approach.

We start from the usual one-dimensional Fokker–Planck equation written in proper time (the particle’s own clock, wherever it is):

\frac{\partial p}{\partial \tau}(x,\tau) = D_0\,\frac{\partial^2 p}{\partial x^2}(x,\tau)

The outside world measures coordinate time t. The two clocks are linked by the time-dilation factor:

\alpha(x) = \frac{dt}{d\tau}(x) > 1

Or alternatively:

d\tau = \frac{dt}{\alpha(x)}

Because \tau advances more slowly where \alpha(x) is large, the effective diffusion coefficient seen in coordinate time is:

D(x) = D_0\,\frac{d\tau}{dt} = \frac{D_0}{\alpha(x)}

Replacing D_0 with D(x) gives the Fokker–Planck equation in coordinate time:

\frac{\partial p}{\partial t}(x,t) = \frac{\partial^2}{\partial x^2} \left[ D(x)\,p(x,t) \right]

In equilibrium, \frac{\partial p}{\partial t} = 0, so the equation becomes:

\frac{\partial^2}{\partial x^2} \left[ D(x)\,p_{\text{stat}}(x) \right] = 0

Integrating twice (and assuming reflecting boundaries so the net probability flux is zero) gives:

D(x)\,p_{\text{stat}}(x) = \text{const}

Substituting D(x) = \frac{D_0}{\alpha(x)} gives the final result:

\frac{p_{\text{stat}}(x)}{\alpha(x)} = \text{const}

Or in other words:

\frac{p_{\text{stat}}(x)}{\text{time-slow-down factor}} = \text{const}

This is a general statement of what the earlier MATLAB simulations showed, namely that regions with slowed time carry a higher probability of containing the particle.
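The same statement can be checked without Monte-Carlo noise by relaxing the discretized coordinate-time equation directly (the profile \alpha(x) below is an arbitrary illustrative choice):

```python
import numpy as np

# Relax the discretized equation  dp/dt = d^2/dx^2 [D(x) p]  on a ring and
# check that the stationary state satisfies D(x) p(x) = const.
n = 60
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
alpha = 1.0 + 0.8 * np.cos(x) ** 2      # assumed time-slow-down profile, >= 1
D = 1.0 / alpha                          # D(x) = D0 / alpha(x), with D0 = 1

# Discrete Laplacian on the ring, applied to the product D(x) p(x)
C = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
C[0, -1] = C[-1, 0] = 1.0                # periodic boundary conditions

p = np.zeros(n)
p[0] = 1.0                               # start far from equilibrium
dt = 0.2                                 # small enough for explicit-Euler stability
for _ in range(40000):
    p = p + dt * (C @ (D * p))

Dp = D * p                               # constant across the ring at stationarity
```

Since D(x)\,p(x) becomes constant, p ends up proportional to \alpha(x), in line with the simulations above.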

From a classical thermodynamic perspective, we could interpret this result as stemming from a potential that attracts the particle to this region. Adding this potential to a Fokker–Planck equation with homogeneous flow of time (in this form commonly called the Smoluchowski equation) gives us:

\frac{\partial p}{\partial t} = D\left( \frac{\partial^2 p}{\partial x^2} + \frac{\partial}{\partial x} \left( \frac{p}{k_B T}\,\frac{\partial U}{\partial x} \right) \right)

Setting the probability flux to zero yields \frac{dp}{dx} = -\frac{1}{k_B T}\,p\,\frac{dU}{dx} for the stationary distribution.

The potential that fulfills this condition can be found by integrating both sides:

\int \frac{1}{p}\,dp = -\frac{1}{k_B T} \int \frac{dU}{dx}\,dx

\ln p = -\frac{1}{k_B T}\,U(x) + \text{const}

Exponentiating and plugging in the partition function for the constant of integration leads to the standard form of the Boltzmann distribution in this choice of coordinates: p(x) = \frac{1}{Z}\,e^{-U(x)/k_B T}

And we get an expression for the apparent entropic potential energy as a function of \alpha(x):

U(x) = -k_B T\,\ln\!\left(\frac{dt(x)}{d\tau}\right) + C = -k_B T\,\ln\!\left(\alpha(x)\right) + C

So what looks like diffusion or a random walk in one set of coordinates (using local proper time everywhere) is consistent with an entropic force and associated potential in another set of coordinates (using the same coordinate time everywhere).

Classical Diffusion in Inhomogeneous Space

When it's space, rather than time, that is stretched or compressed, the intuition is even simpler than in the time-dilation case. For a unit of coordinate distance in stretched space, ds = \sigma(x)\,dx, the stretch factor \sigma(x) tells you how much proper distance corresponds to a unit of coordinate length. The stationary probability density in the coordinate x is then weighted by the local \sigma(x):

\frac{p_{\text{stat}}(x)}{\sigma(x)} = \text{const}

So a region where space is stretched (\sigma > 1) collects proportionally more probability mass, while a compressed region (\sigma < 1) gets proportionally less.
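A quick sketch of this in Python, under the assumption that a coordinate bin with integer stretch factor \sigma_i can be modeled as \sigma_i micro-sites of unit proper length (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_coord = 100
sigma = np.ones(n_coord, dtype=int)
sigma[:10] = 2                           # first 10 coordinate bins stretched 2x

# Each coordinate bin owns sigma_i micro-sites; walk uniformly in proper distance.
bin_of_site = np.repeat(np.arange(n_coord), sigma)   # micro-site -> coordinate bin
n_sites = bin_of_site.size                            # 110 micro-sites total

counts = np.zeros(n_coord, dtype=np.int64)
for _ in range(30):
    steps = rng.choice((-1, 1), size=20000)
    path = (int(rng.integers(n_sites)) + np.cumsum(steps)) % n_sites
    counts += np.bincount(bin_of_site[path], minlength=n_coord)

p = counts / counts.sum()
ratio = p[:10].mean() / p[10:].mean()    # close to 2: stretched bins hold more mass
```

The walk is uniform over proper distance, so each coordinate bin collects probability in proportion to its \sigma_i, giving a ratio near 2 here.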

Diffusion under a Schwarzschild Metric (unfinished)

Given that the increase in the probability of finding a particle in some region is proportional to \frac{dt}{d\tau}, we can write the arising entropic potential as U(r) = -k_B T \ln\left(\frac{dt}{d\tau}\right), and using gravitational time dilation, \frac{dt}{d\tau} = \left(1 - \frac{2GM}{rc^2}\right)^{-1/2}:

U(r) = \frac{k_B T}{2}\,\ln\!\left(1 - \frac{2GM}{rc^2}\right)

The first couple of terms of the Taylor expansion of this formula for r \gg \frac{2GM}{c^2} give us:

U(r) \approx -\frac{k_B T\,GM}{rc^2} - \frac{k_B T\,G^2 M^2}{r^2 c^4} + \dots
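A quick numerical check of this expansion, in assumed units where r_s = \frac{2GM}{c^2} = 1 and k_B T = 1 (so \frac{GM}{rc^2} = \frac{1}{2r}):

```python
import math

# Exact entropic potential vs. its first-order Taylor approximation,
# nondimensionalized with r_s = 1 and k_B T = 1.
def U_exact(r):
    return 0.5 * math.log(1.0 - 1.0 / r)     # (1/2) ln(1 - r_s/r)

def U_first_order(r):
    return -0.5 / r                          # -GM/(r c^2) in these units

rel_err = {r: abs(U_first_order(r) / U_exact(r) - 1.0) for r in (5.0, 20.0, 100.0)}
# The approximation improves rapidly with distance from the mass.
```

Far outside the Schwarzschild radius the first-order term dominates, which is where the Newtonian 1/r behavior emerges.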

Using the \frac{dt}{d\tau} from before, we can place a massive object inside the ring of our 1D-diffusing particle (off-center, to introduce inhomogeneous rates of time). Running the Monte-Carlo simulation again yields a result in agreement with the previous simulations: when the particle is closer to the mass, it experiences more time dilation, and hence a higher apparent entropy, than when it is farther away.

[image of slowed-time distribution with offset mass.]

To quantify the increased probability of finding the particle there, we can plot the time_dilation_factor (proportional to probability) vs. distance. For reference, I've included the Newtonian gravitational potential for the same system, given by gravitational_potential = -1e-4 ./ (distances).

It is important to point out that the Einsteinian gravitational potential for a static observer, \Phi(r) = \frac{c^2}{2}\ln\!\left(1 - \frac{2GM}{rc^2}\right), matches the formula for the distance-dependent probability up to a factor of \frac{k_B T}{c^2}, and the Newtonian case, \Phi(r) = -\frac{GM}{r}, can be derived from either by taking the first-order MacLaurin series in \frac{2GM}{rc^2}.

The distance-dependent apparent entropy goes as S(r) = -\frac{k_B}{2}\ln\!\left(1 - \frac{2GM}{rc^2}\right), and calculating the entropic force due to the entropy gradient gives F(r) = T\frac{dS}{dr} = -\frac{k_B T\,GM}{r^2 c^2}\left(1 - \frac{2GM}{rc^2}\right)^{-1}, or F(r) \approx -\frac{k_B T\,GM}{r^2 c^2} by taking a MacLaurin series, matching the 1/r^2 form of Newtonian gravity. Simulating a 3D random walk around the massive object shows the particle is more likely to be found near the mass, and integrating the probability by distance yields a distribution that matches the classical gravitational potential. While this is an interesting result, gravity also acts on individual particles that don't diffuse through Brownian motion or collisions with nearby particles.

Additional Comments:

Notes on Temperature in GR

While writing this post I learned something very interesting about temperature in GR: temperature actually becomes observer-dependent due to differences in the local flow of time. Let's look at two observers, A and B. A sits far away from a massive object, where proper time aligns with coordinate time (\alpha = \frac{dt}{d\tau} = 1). B is located much closer to the object, with \alpha > 1. At thermal equilibrium, the flux of blackbody radiation traveling from B to A will exactly balance the flux from A to B.

However, photon energies going from B to A are redshifted by a factor \frac{1}{\alpha}, and their emission rates (per A's coordinate time) appear decreased by \frac{1}{\alpha}. In total, the observed power at A is reduced by \frac{1}{\alpha^2} and redshifted. The reverse is true for photons going from A to B: from B's perspective, energies are blueshifted by \alpha and emission rates are increased by \alpha.

Now suppose we pin down the local temperatures in each frame. If A and B were at the same local temperature T, the ratio of the energy flux \Phi_{AB} (from A, as received at B) to \Phi_{BA} (from B, as received at A) would be:

\frac{\Phi_{AB}}{\Phi_{BA}} = \alpha^4

This means they're not in thermal equilibrium, as there is a net energy flux.
To achieve equilibrium, each region must have a local temperature such that the fluxes balance. Since locally radiated blackbody power follows the Stefan–Boltzmann law, \Phi = \sigma T^4, there must be a temperature gradient:

\Phi_{AB} = \sigma \, \alpha^4 \, T_A^4 \stackrel{!}{=} \sigma \, T_B^4 = \Phi_{BA}

This means B's absolute temperature must be higher than A's: T_B = T_A\,\alpha. What we've derived here is the Ehrenfest–Tolman law, which in general can be written in terms of the space-time metric g:

T(x) \cdot \sqrt{-g_{00}(x)} = \text{const}

or

T(x) = T_\infty \cdot \alpha(x)

T_\infty \cdot \alpha(x) is also called the redshifted temperature. So in simple words, A and B are at different temperatures to compensate for photon energy shifts and differences in emission rates due to time dilation.
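The balance is easy to verify with concrete numbers (Stefan–Boltzmann constant set to 1; \alpha and T_A are arbitrary illustrative choices):

```python
# Flux balance behind the Ehrenfest-Tolman law, in units where sigma = 1.
alpha = 1.7                       # time-dilation factor at the deeper observer B
T_A = 300.0                       # local temperature of the far-away observer A
T_B = alpha * T_A                 # Ehrenfest-Tolman: T_B = T_A * alpha

flux_A_to_B = alpha**4 * T_A**4   # A's radiation at B: blueshift + rate boost = alpha^4
flux_B_local = T_B**4             # B's own local Stefan-Boltzmann emission
# The two fluxes agree, so there is no net energy exchange at equilibrium.
```

With T_B = \alpha T_A, the \alpha^4 boost on the incoming flux exactly matches B's hotter local emission, so no net energy flows.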

In the Fokker–Planck equation from earlier we've ignored this effect. What we wrote was:

\frac{\partial p}{\partial t}(x,t) = \frac{\partial^2}{\partial x^2} \left[ D(x)\,p(x,t) \right] \quad \text{and} \quad D(x) = \frac{D_0}{\alpha(x)}

However, our formulation, which folds the time dilation into the diffusion constant, turns out to be equivalent, as we can see by relating the diffusion constant to the local temperature through the Stokes–Einstein relation:

D(x) = \frac{k_B T(x)}{6\pi \eta R} = \frac{k_B T_\infty}{6\pi \eta R} \cdot \alpha(x) = D_0 \cdot \alpha(x)

Why would Entropy be Affected by Time Dilation?

While I think it is fairly intuitive that regions of stretched space have higher entropy (more states), since we're used to thinking of entropy as an extensive property proportional to volume, why would entropy care about time? I think there are a couple of interesting perspectives on this.

One definition that is more general than the usual Gibbs entropy is the Gibbs–Shannon entropy, which counts states through an integral over the phase-space density with respect to position and momentum coordinates (with or without the quantum normalization factor, i.e. the elementary state volume):

S = -k_B \int \rho(\mathbf{x},\mathbf{p}) \,\ln\bigl[\rho(\mathbf{x},\mathbf{p})\bigr]\;\frac{d^3x\,d^3p}{(2\pi\hbar)^3}

Now for the interesting part: time dilation in curved space-time stretches proper time. A slower rate of time corresponds to a stretching in the momentum-space direction. Since phase-space volume is measured as dx\,dp, and momentum depends on proper time through p \sim m \frac{dx}{d\tau}, a slower d\tau expands the volume element in momentum.

In other words: Time dilation increases the density of available microstates in momentum space.

More microstates = more entropy = more probability weight accumulates there. From a flat-space viewpoint, this looks like a gravitational potential, but thermodynamically it’s just where entropy is highest.
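To make the momentum-space counting concrete, assume for illustration a Gaussian momentum distribution with spread s; stretching the momentum direction by \alpha then adds exactly \ln\alpha to the differential entropy:

```python
import math

# Differential entropy of a 1D Gaussian with spread s: S/k_B = (1/2) ln(2*pi*e*s^2).
# Scaling the momentum spread by alpha (slower proper time, p ~ m dx/dtau)
# shifts the entropy by ln(alpha), independent of the original spread.
def gaussian_entropy(s):
    return 0.5 * math.log(2.0 * math.pi * math.e * s * s)

alpha, s0 = 2.0, 1.3                      # arbitrary illustrative values
delta_S = gaussian_entropy(alpha * s0) - gaussian_entropy(s0)   # = ln(alpha)
```

The shift \Delta S = k_B \ln\alpha is exactly the logarithmic dependence on the time-dilation factor that appeared in the entropic potential earlier.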

The geometry of the phase space is related to the space-time metric, and the invariant phase space volume element in curved space-time can be calculated as:

d\Gamma = \frac{d^3x \, d^3p}{(2\pi\hbar)^3} \cdot \sqrt{g^{(3)}} \cdot \frac{1}{p^0} \cdot \delta(p^\mu p_\mu + m^2) \cdot \Theta(p^0)

Here, the \sqrt{g^{(3)}} term incorporates the spatial geometry, and the delta function enforces the mass-shell constraint. This formulation makes it clear that the space-time metric directly shapes the density of states.

(Note to self: Make sure I understand this in more detail. Volume element over cotangent bundle of spacetime. Some sources are: https://physics.stackexchange.com/questions/83260/lorentz-invariant-integration-measure https://arxiv.org/abs/2106.09235 https://www.icranet.org/veresh/RKT.pdf https://physics.stackexchange.com/questions/167813/proving-the-lorentz-invariance-of-the-lorentz-invariant-phase-space-element )

So the particle isn’t pulled down by gravity—it’s drawn into regions with more available phase space, i.e., where the density of states is higher. That’s what an entropic force is.

This is why I think of gravity not as a classical force from curved geodesics, but as a statistical bias toward regions of higher microscopic degeneracy. And it's why the Fokker–Planck diffusion equation with time-dilation–modulated D(x)D(x) naturally leads to the Tolman–Ehrenfest equilibrium. All the entropy accounting is built into the coordinates, and the free energy remains the same.