After listening to a lecture, third-year dental students at Harvard were surveyed about distractions by electronic devices and given a 12-question quiz. Although 65% of the students admitted to having been distracted by emails, Facebook, and/or texting during the lecture, distracted students had an average score of 9.85 correct, compared to 10.44 for students who said they weren’t distracted. The difference was not significant, p = 0.652.

The authors concluded, “Those who were distracted during the lecture performed similarly in the post-lecture test to the non-distracted group.”

The full text of the paper is available online. As an exercise, you may want to take a look at it and critique it yourself before reading my review. It will only take you a few minutes.

As you consider any research paper, you should ask yourself a number of questions: Are the journal and authors credible? Were the methods appropriate? Were there enough subjects? Are the conclusions supported by the data? Do I believe the study?

Of course, many more questions could be included. Google “how to critique an article,” and you will find numerous lengthy treatises on the subject.


The paper appears in PeerJ, a fairly new open-access journal with a different publishing model. Authors pay to have papers published, but they can opt for a reasonably priced lifetime membership that includes a variable number of papers.

It’s too new to have an impact factor, but stats on the journal’s website show that the paper has had over 3,100 views and 218 downloads. PeerJ has a 70% acceptance rate for submissions [i.e., not very selective].

The authors are from Harvard, so they must be credible.

The study is described as quasi-experimental, meaning it was not randomized. That is not necessarily bad, especially because it is said to be a pilot study.

The main problem with the paper is that it was underpowered to detect a difference: there were only 26 subjects, 17 distracted and 9 not. The null hypothesis, that distractions do not affect test scores, was accepted as true; failing to detect a real difference because the sample is too small is what statisticians call a “Type II” error.
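To see just how underpowered a 17-versus-9 comparison is, here is a minimal power-calculation sketch in Python using statsmodels. The paper’s standard deviation is not quoted above, so the pooled SD of 1.5 quiz points is a hypothetical assumption; with it, the observed difference of about 0.6 points corresponds to a small-to-moderate effect that this sample size has little chance of detecting.

```python
# Minimal power sketch for an independent-samples t-test (assumption: SD = 1.5 quiz points,
# which is NOT reported in the excerpt above).
from statsmodels.stats.power import TTestIndPower

observed_diff = 10.44 - 9.85      # difference in mean quiz scores reported in the paper
assumed_sd = 1.5                  # hypothetical pooled standard deviation (assumption)
effect_size = observed_diff / assumed_sd  # Cohen's d, roughly 0.4 under this assumption

analysis = TTestIndPower()
power = analysis.solve_power(effect_size=effect_size,
                             nobs1=17,        # distracted group
                             ratio=9 / 17,    # non-distracted group relative to distracted
                             alpha=0.05,
                             alternative='two-sided')
print(f"Estimated power: {power:.2f}")  # well below the conventional 0.80 target
```

Under this assumption the study has only a small fraction of the power normally considered adequate, so a non-significant p-value tells us very little.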

Other issues with the paper are that distracting behaviors may have been under-reported by the students, the test questions may have been too easy, and the two groups may have differed in their baseline knowledge of the material. Harvard dental students may not be representative of typical students or people in general. A couple of my colleagues on Twitter suggested that the lecture could have been either so good, or so bad, that paying total attention was unnecessary.

Did I mention that one of the two authors of the paper is an “Academic Editor” for the journal?

Bottom line: This paper should not convince you that distractions by electronic devices are not harmful to learners.

Skeptical Scalpel is a retired surgeon and was a surgical department chairman and residency program director for many years. He is board-certified in general surgery and critical care and has re-certified in both several times. He blogs at SkepticalScalpel.blogspot.com and tweets as @SkepticScalpel.
