*started 13th October 2016*

I built a circuit which generates pulses at random times (see Phase Noise). To test how random they were, I tried autocorrelation of the times between pulses and was surprised to find the lag 1 value was always -0.5. Eventually I read in [1] ("5.5.4. The Lag 1 Autocorrelation") that this value is expected, but I could not see why and I could not find a derivation online. Hence this page.

The autocorrelation at lag ` k ` is defined as:

` rho_k = E[(z_t - mu)(z_(t+k) - mu)] / sigma_z^2 `

where ` z_t ` is the time series, ` mu ` is its mean value, ` sigma_z^2 ` is its variance, and ` E ` denotes the expected value.
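As a sketch of this definition in code (Python is my choice here, and the function name is mine; the page itself has no code):

```python
import statistics

def autocorrelation(z, k):
    """Sample autocorrelation of the series z at lag k:
    the average of (z_t - mu)(z_{t+k} - mu) divided by the variance."""
    n = len(z)
    mu = statistics.fmean(z)
    variance = sum((x - mu) ** 2 for x in z) / n
    covariance = sum((z[t] - mu) * (z[t + k] - mu) for t in range(n - k)) / n
    return covariance / variance
```

At ` k = 0 ` the numerator equals the variance, so the function returns 1.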

For a stationary process the mean and variance do not depend on time:

` E[z_t] = mu ` and ` E[(z_t - mu)^2] = sigma_z^2 `

for all `t`.

By the definition of ` sigma_z^2 `, ` rho_0 ` equals 1: at lag 0 the numerator is ` E[(z_t - mu)^2] = sigma_z^2 `.

If the ` z_t ` are independent random variables, then ` rho_k ` equals 0 for ` k >= 1 ` because the values of ` z_t ` and ` z_(t+k) ` are independent, so the expectation factorises:

` E[(z_t - mu)(z_(t+k) - mu)] = E[(z_t - mu)] E[(z_(t+k) - mu)] `

and ` E[(z_t - mu)] ` is zero.
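This can be checked numerically. The following Python sketch (my own; the sample size and distribution are arbitrary choices) estimates the lag 1 value for independent samples:

```python
import random

random.seed(1)
z = [random.random() for _ in range(100_000)]  # independent samples
n = len(z)
mu = sum(z) / n
variance = sum((x - mu) ** 2 for x in z) / n
rho1 = sum((z[t] - mu) * (z[t + 1] - mu) for t in range(n - 1)) / n / variance
# rho1 comes out near 0, as the argument above predicts
```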

If ` z_t = r_t - r_(t-1) + mu ` with the ` r_t ` independent random variables with mean 0 and variance ` sigma_r^2 `, then

` rho_1 = E[(r_t - r_(t-1))(r_(t+1) - r_t)] / sigma_z^2 `

Expanding the argument of E and removing the terms which are zero by the independence assumption:

` E[r_t r_(t+1) - r_t^2 - r_(t-1) r_(t+1) + r_(t-1) r_t] = -E[r_t^2] = -sigma_r^2 `

The variance of ` z_t ` is ` sigma_z^2 = E[(r_t - r_(t-1))^2] = E[r_t^2] + E[r_(t-1)^2] = 2 sigma_r^2 `, so

` rho_1 = -sigma_r^2 / (2 sigma_r^2) = -1/2 `
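A quick simulation of this model (a Python sketch of my own; the jitter distribution, its size, and the nominal interval are arbitrary assumptions) shows the lag 1 value settling near -0.5:

```python
import random

random.seed(2)
# r_t: independent jitter around a regular tick, mean 0
r = [random.gauss(0.0, 0.1) for _ in range(100_001)]
mu = 1.0  # nominal interval between pulses
z = [r[t] - r[t - 1] + mu for t in range(1, len(r))]  # z_t = r_t - r_{t-1} + mu
n = len(z)
zbar = sum(z) / n
variance = sum((x - zbar) ** 2 for x in z) / n
rho1 = sum((z[t] - zbar) * (z[t + 1] - zbar) for t in range(n - 1)) / n / variance
# rho1 comes out near -0.5
```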
The best intuitive way of looking at this I can think of is to imagine three data values. The first and last will, on average, sit at their mean positions, while the two differences share the middle value with opposite signs: when the middle value lies above its mean, the first difference tends to come out above the mean of the differences and the second below it, so their product pairs a value greater than the mean with a value less than the mean.

Put it another way: each difference is half a shared component (the middle value, which enters consecutive differences with opposite signs, correlation -1) and half an independent component (correlation 0), and averaging -1 and 0 gives -0.5.

Credits

References