I respectfully disagree. Standardized effect sizes can be extremely helpful. In the example you present, the effect size does indeed have little relevance: a month is a difference readily understood by anybody, lay person or professional. But often, as a health care professional or researcher, one reads papers in which the outcome measure is completely unfamiliar, as are its units, e.g. the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire or the Neck Disability Index. In these cases, the effect size gives a clear indication of the difference, or change, in outcome scores. It also permits, for example, a comparison of two (or more) studies looking at the same intervention but using different outcome measures, for which direct comparisons are impossible.
Effect sizes should always be reported, and reported with 95% (or 99%) confidence intervals. If the confidence interval crosses the 'line of no effect', the intervention cannot be concluded to be effective, no matter where the point estimate lies.
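To make this concrete, here is a minimal sketch of computing a standardized effect size (Cohen's d, with a pooled standard deviation) and an approximate 95% confidence interval for it. The trial-arm numbers are hypothetical, and the standard-error formula is the common large-sample normal approximation, not the only option:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    # Pooled standard deviation (assumes roughly similar variances in the two arms)
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / sp

def d_confidence_interval(d, n1, n2, z=1.96):
    # Approximate standard error of d (large-sample normal approximation)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# Hypothetical DASH scores (lower = less disability) from two trial arms
d = cohens_d(mean1=32.0, mean2=40.0, sd1=18.0, sd2=20.0, n1=60, n2=60)
lo, hi = d_confidence_interval(d, 60, 60)
print(f"d = {d:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# If the interval spans 0 (the 'line of no effect'), the data are
# compatible with no difference between the arms, whatever the point estimate.
```

Because d is unitless, the same calculation applied to a study using the Neck Disability Index would yield a directly comparable number, which is exactly what raw score differences cannot give you.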