What went wrong?
Several explanations for this paradox have been put forward (3), p. 124–5, (4). In a randomised trial we expect the distribution of the baseline values to be the same in both groups. It then makes sense to compare the groups adjusting for the baseline value, and to estimate the difference between the groups given the same baseline value.
The question is whether a similar comparison is meaningful when there may be systematic differences between the groups at the start of the study. We believe that it is not, and that by using an analysis of covariance we answer a completely different research question than we do by comparing the average change in the two groups using, for example, a t-test. With an analysis of covariance, we attempt to answer the following question: How large is the difference between the groups at the time of follow-up, given that the baseline value is the same? It is probably of little interest to answer this question when we know that the baseline values differ.
We believe that some of the paradox lies in the interpretation of the result, and that the problem arises because one fails to give sufficient thought to the research question that the analysis answers. Moreover, in a randomised trial, the group allocation cannot affect the baseline value, since this has been measured before the intervention (exposure). In an observational study, the participants will often have been exposed before the start of the study as well, and this may have affected the baseline value. The baseline value is thereby a mediator, and an analysis of covariance will lead to biased estimates (4).
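A small simulation can make this concrete. The following sketch uses purely illustrative assumptions (our own, not data from any study): in an observational setting, the exposed group already differs at baseline, and the exposure has no effect on the follow-up value beyond its effect on the baseline. The two analyses then give different answers, because they answer different questions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical observational data: exposure raised the baseline value by 2,
# and follow-up depends only on baseline (no further exposure effect).
exposed = rng.integers(0, 2, n)
baseline = 10 + 2 * exposed + rng.normal(0, 1, n)
followup = 5 + 0.5 * baseline + rng.normal(0, 1, n)

# 1) Analysis of covariance: regress follow-up on group, adjusting for baseline.
X = np.column_stack([np.ones(n), exposed, baseline])
ancova_coef = np.linalg.lstsq(X, followup, rcond=None)[0][1]

# 2) Comparison of the average change in the two groups (as in a t-test).
change = followup - baseline
change_diff = change[exposed == 1].mean() - change[exposed == 0].mean()

print(f"ANCOVA group effect:       {ancova_coef:+.2f}")  # close to 0
print(f"Difference in mean change: {change_diff:+.2f}")  # close to -1
```

Here the analysis of covariance finds essentially no group difference, because at any given baseline value the two groups have the same expected follow-up, while the comparison of average change finds a clear difference, because the groups start from different baseline values. Neither number is wrong; they are answers to two different research questions.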
In any case, our simple advice is this: adjust for the baseline value in randomised trials, but not in observational studies.