Lactivism > Need help interpreting data (anti-BF study)

Can someone who gets statistics help me interpret the following?

*The number of febrile episodes, although "extremely low" in both groups of infants, differed significantly, with median values of 1.16 in the formula-fed group (P25–P75, 0.06 - 2.38) and 1.24 in the breast-fed group (P25–P75, 0.51 - 3.45) (P < .05, van Elteren test).*

I'm gathering that the frequency of fevers in bfed babies is higher than in non-bfed, but I'd like to be sure (and to understand what percentage higher).

The quote is from a May 2010 study, funded by Danone, that claims that bfing does not provide more protection against infection than formula-feeding in developed countries.

TIA.

Not sure I'll explain this well but I'll try... The median is the value in the middle when you sort all the numbers. So the middle value for # of fever episodes was 1.16 for formula-fed and 1.24 for breastfed infants. The difference between these two numbers was statistically significant (i.e., unlikely to have occurred by chance alone), but that doesn't mean it was clinically significant. The other numbers (P25–P75) are the 25th and 75th percentiles for each group, which show the spread of # of febrile episodes. The *p < .05* just indicates that the difference cleared the conventional cutoff for statistical significance.
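If it helps, here's a tiny Python sketch (with made-up episode counts, not the study's data) of what the median and the P25/P75 percentiles actually are:

```python
import statistics

# Made-up per-infant counts of febrile episodes (not the study's data),
# just to show what the median and the P25-P75 range mean.
episodes = [0, 0, 1, 1, 1, 2, 2, 3, 5]

median = statistics.median(episodes)      # middle value when sorted
q = statistics.quantiles(episodes, n=4)   # quartile cut points
p25, p75 = q[0], q[2]                     # 25th and 75th percentiles

print(median, p25, p75)  # half the infants fall between p25 and p75
```

So when the study reports "1.24 (P25–P75, 0.51–3.45)", that's the middle value plus the range covering the middle half of the breastfed group.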

This is the study you're looking at, right? http://www.medscape.com/viewarticle/721515

It looks like this was a poster session presented at a convention, so it wouldn't have been peer reviewed (to help point out any methodological flaws). It's hard to tell for sure from just the abstract (and the Medscape writeup), but it looks like there are major flaws in the way the study was designed. The study followed both breastfed and formula fed infants for a full year, but the breastfed ones only had to be exclusively breastfed for at least four months (so a number of them may have been on formula after that, for the rest of the year). Also, the formula fed infants were followed beginning around 50 days old while breastfed infants were followed sooner, beginning around 32 days of age. I'm wondering if this difference could have accounted for the slight increase in # of febrile episodes in breastfed infants that was reported. Another major problem is that the respiratory and GI illnesses were not diagnosed by a doctor, but self-reported by parents.

I'd be surprised if this ever made it into a journal (as anything other than a poster abstract...)

I am wondering if they had a reason for using median values instead of mean values. As the PP states, the median is the middle value of a sorted list of numbers (for 1 to 5, the median is 3); the mean is the average ((1+2+3+4+5)/5 = 3). It makes me think that maybe the mean didn't show a statistically significant difference, but the median did. One thing to keep in mind: outliers (really high or really low values) pull the mean around a lot, while the median is much more resistant to them, which is why skewed data like illness counts are often summarized with medians. Also, there are different p value cutoffs at which statisticians will accept significance; < .05 is the least stringent commonly used threshold (with .01 and .001 being stricter). Some researchers will not accept a value that high.
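To see why the choice matters, here's a quick illustration (again made-up counts, not the study's data) of how a single extreme value moves the mean a lot but barely moves the median:

```python
import statistics

# Made-up counts: one extreme value pulls the mean far more
# than the median.
counts = [1, 1, 1, 2, 2]
with_outlier = counts + [20]

print(statistics.mean(counts), statistics.median(counts))              # 1.4 1
print(statistics.mean(with_outlier), statistics.median(with_outlier))  # 4.5 1.5
```

One baby with twenty fevers triples the mean but nudges the median by half an episode, which is exactly why skewed count data usually gets reported with medians.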

There are also effect-size measures, which tell you how big a difference is, not just whether it exists (post-hoc tests, by contrast, tell you which groups differ). Things can be statistically significant but have a very small effect size, which makes them hard to generalize. I notice that they did not report an effect size, which makes me think it wasn't a large one.
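For what it's worth, a common effect-size measure for comparing two groups is Cohen's d; this is just an illustration of the idea (since the study used a rank-based test, a rank-based effect size would technically be more appropriate):

```python
import statistics

def cohens_d(a, b):
    """Cohen's d: difference between group means in pooled-SD units."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# Made-up toy numbers: conventionally, d near 0.2 is "small", 0.8 "large".
print(cohens_d([1, 2, 3], [2, 3, 4]))
```

A difference of 0.08 febrile episodes between group medians could easily be significant in a big sample while still being a tiny effect.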

I also wonder if getting breastfed infants into the study earlier (32 days compared with 50 days for formula fed infants) made a difference. They didn't say they adjusted for that.

It's really hard to draw conclusions from such a limited study. Without multiple replications of it, it would be really hard to generalize this to a larger community. Sometimes I just want to get these researchers' data and run it myself. LOL.

Also, the PP is right in saying that this wasn't a published study. Published studies go through an extensive peer review process that looks at things like the statistical analysis done, the methods for doing the study, and the conclusions being drawn.
