What happens when you make measurements over different sizes of time interval?
random variation across years (see also Pielou)
give each a random starting size, check after a sampled number of years
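here's a minimal sketch of that setup in python (the growth-rate and interval numbers are made up, just to have something to look at):

```python
import numpy as np

# each series: random starting size, independent lognormal year-to-year
# variation, checked after a randomly sampled number of years
rng = np.random.default_rng(0)
n_series = 10_000

start = rng.lognormal(mean=3.0, sigma=1.0, size=n_series)   # random starting sizes
years = rng.integers(1, 20, size=n_series)                   # sampled observation intervals
# each year adds an independent normal increment on the log scale
log_growth = np.array([rng.normal(0.0, 0.2, size=k).sum() for k in years])
final = start * np.exp(log_growth)

# log change per series; its variance depends on how many years elapsed,
# which is why the pooled sample doesn't look plain normal
log_change = np.log(final / start)
```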
That’s not quite what I expected – why isn’t it flatter in the sample?
so maybe the right way to think about it is as a compound distribution. Just as the gamma-Poisson arises when you have a Poisson count but the rate parameter varies, here you have a normal value but the variance parameter varies. We just don't know which observation got which variance, and the result is a t-distribution.
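here's a quick simulation of that compounding (assuming the usual scale-mixture parameterisation: inverse gamma on the variance with shape \(\nu/2\) and scale \(\nu\sigma^2/2\)):

```python
import numpy as np
from scipy import stats

# draw a variance from an inverse gamma, then a normal value with that
# variance; marginally the draws should follow a t distribution with nu df
rng = np.random.default_rng(1)
nu, sigma = 5.0, 1.0
n = 100_000

variances = stats.invgamma(a=nu / 2, scale=nu * sigma**2 / 2).rvs(n, random_state=rng)
x = rng.normal(loc=0.0, scale=np.sqrt(variances))

# compare the compound draws against a t distribution with nu degrees of freedom
print(stats.kstest(x, "t", args=(nu,)))  # large p-value: consistent with t_nu
```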
just as in the neg-binomial case, there is another solution – fine control, with random effects for sites, species, years, etc.
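a hedged sketch of that alternative with statsmodels (the data and column names here are synthetic and purely illustrative; crossed random effects in MixedLM go in as variance components under a single group):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic data: a log change per record, with site / species / year labels
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "site": rng.integers(0, 8, n).astype(str),
    "species": rng.integers(0, 5, n).astype(str),
    "year": rng.integers(2000, 2010, n).astype(str),
})
site_eff = dict(zip(map(str, range(8)), rng.normal(0, 0.5, 8)))
species_eff = dict(zip(map(str, range(5)), rng.normal(0, 0.4, 5)))
year_eff = dict(zip(map(str, range(2000, 2010)), rng.normal(0, 0.3, 10)))
df["log_change"] = (
    df["site"].map(site_eff)
    + df["species"].map(species_eff)
    + df["year"].map(year_eff)
    + rng.normal(0, 0.2, n)
)

# random intercepts for site, species and year instead of one fat-tailed error
model = smf.mixedlm(
    "log_change ~ 1",
    df,
    groups=np.ones(n),  # single group; crossed effects enter as variance components
    vc_formula={"site": "0 + C(site)", "species": "0 + C(species)", "year": "0 + C(year)"},
)
print(model.fit().summary())
```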
but i still find it unsatisfying that the \(\nu\) parameter, the degrees of freedom, doesn't factor into this model in the same way!
is the inverse gamma a distribution of sample standard deviations?
The mode of the inverse gamma (with \(\alpha = \nu/2\) and \(\beta = \nu\sigma^2/2\)) grows with \(\nu\) like this:
\[ \frac{\beta}{\alpha + 1} = \frac{\nu\sigma^2/2}{\nu/2 + 1} = \frac{\nu\sigma^2}{\nu + 2} \]
Of course that makes sense – why would the sample standard deviation be biased in any way?
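a quick numeric check on that formula (scipy doesn't expose the mode directly, so this just maximises the density):

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

nu, sigma = 5.0, 1.0
d = stats.invgamma(a=nu / 2, scale=nu * sigma**2 / 2)

# the argmax of the density should sit at nu * sigma^2 / (nu + 2)
res = minimize_scalar(lambda v: -d.pdf(v), bounds=(1e-6, 10.0), method="bounded")
print(res.x, nu * sigma**2 / (nu + 2))  # both around 0.714 for nu = 5
```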
inspired by this post: https://www.sumsar.net/blog/2013/12/t-as-a-mixture-of-normals/
But is \(\nu\) related to sample size?
oh wow, it's just the sampling distribution of the mean according to the central limit theorem
it looks like – mayyybe this is it. definitely something to think about more!
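a sketch of that sampling-distribution idea: standardise the sample mean by the *sample* standard deviation and \(\nu\) falls out as \(n - 1\), which ties it to sample size (the sample size of 6 here is an arbitrary choice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 6, 50_000

# many repeated samples of size n from a normal
samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
# standardise each sample mean by its own sample standard deviation
t_stats = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

# the standardised means follow a t distribution with n - 1 degrees of freedom
print(stats.kstest(t_stats, "t", args=(n - 1,)))
```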