How to tell the good from the not-so-good.

Thousands of scientific articles are published every year by research groups around the world. Inevitably, some will be great and others will not. Even a trained scientist can find interpreting research difficult, especially with so many studies coming out. So, how can you tell if a research paper is something you should pay attention to?
There are a few questions to ask when reading and interpreting evidence that may make your life a bit easier when it comes to understanding the science behind treatment claims.

Is this research relevant to me?

This is a very important question. Firstly, there is the question of species. While animal studies are very important for research, their results often translate poorly to humans with a specific condition. Animal models are used to test theories and make observations, but science is littered with the corpses of ideas that seemed plausible in animal models yet did not pan out in humans. This doesn't mean animal studies are irrelevant, but you need to be very cautious when looking at results from animal studies and drawing parallels to people.

If animal studies require caution, then studies done in cells in culture (that is, in dishes outside the body) need to be viewed with even more caution. Cell studies are very useful for teasing apart specific genetic or cellular mechanisms. They are used to develop theories that are then tested in animal models, which in turn get tested in humans. They should not be interpreted as clinical information.
When you get to clinical studies in humans, the relevance is obvious. However, you still need to be able to spot the studies that are well designed and relevant to you, and the ones that are not.

Keep this word in your head: generalisability

This means: can the results of the study you are reading be applied to the general population of people with FSHD? This question encompasses three aspects of study design: study size, study length, and who was included in the study.

How many people were in the study?

Having a small number of people in a clinical study is a bit of a red flag. This is because there might not be enough participants to reliably detect changes in response to an intervention. This doesn't mean all small studies are bad, just that you need to keep the small size in mind when reading them.

How long did the study go for?

There are a lot of studies where the people included are only followed for a short period of time. This means you cannot infer the long-term effects, or the long-term safety, of a particular treatment or intervention.

Who was included?

All clinical studies have inclusion and exclusion criteria. The researchers running a trial may need to exclude people who rely on wheelchairs for mobility because the key outcome is a walking test. They may want to exclude people with other conditions because they want to isolate the effect of the intervention on FSHD, and other conditions may interfere with their results. Both are valid reasons; however, they do pose a problem when interpreting results.

For example, you may come across a study that says Drug A is effective, but the study only included males aged between 18 and 34 who were not reliant on mobility aids and had no other conditions. If you are a 55-year-old female with FSHD who uses a wheelchair and has diabetes, then the results of the study may not be directly applicable to you.
Of course, this does not mean that small, short studies with a lot of exclusions are of no use. You just need to put them in the context of studies that have bigger populations, run for longer periods of time, and include people with a range of conditions and FSHD severities.

Be on the lookout for red flags

There are certain red flags that you come to recognise when you read a lot of scientific studies. Spotting one of these does not mean the study should be dismissed, but you might want to be a bit more sceptical of the results.

1. The study makes very VERY big claims about treatment benefit or cures but is published in an obscure journal (big results mean big journals!). This is especially true for studies done in animals that extrapolate results to humans with little justification. Sometimes, though, publication in an obscure journal happens because the area is so highly specialised that blockbuster journals like Nature or Science don't want it, so you can't rely on this flag alone.

2. The authors publicise the results before they have been peer-reviewed. Peer review is the self-correcting side of science: before a paper is published in a journal, it gets torn to shreds by other scientists. View any results that are publicised without first being published with extreme caution. Often this is not the researchers' fault; sometimes the research institute releases findings before publication.

3. For clinical trials, look out for ones that are not listed on a public clinical trial registry, such as ClinicalTrials.gov. These registries are open to everyone and were developed to improve transparency in clinical research. It is now very unusual for a trial not to be registered when it is initiated, and for the entry not to be updated with research findings. If a trial is not registered, it doesn't mean it's not a valid trial, but a healthy dose of scepticism should be used.

4. Studies that show results that seem to be the complete opposite of what all the other studies show. This one needs some caveats. Sometimes a study comes along that completely turns science on its head and entire scientific theories are redrawn, so assuming that any study that flies in the face of conventional thinking is automatically wrong is probably not a good position to take. However, if a study is showing controversial results, it probably warrants a very careful read!

Levels of evidence

This relates to the type of study that has been performed. Some study designs are better than others at providing definitive results. The level of evidence refers to the strength of the results measured in a clinical trial or research study. Ranked from strongest to weakest, the main study types are:

· Systematic reviews (combining the evidence from multiple trials and/or studies)
· Randomised controlled trials
· Cohort study (observing a group of people over time for specific characteristics that may be associated with human disease)
· Case-control study (observing differences between people with and without a condition to try to find factors that may be associated with the development of the condition)
· Cross-sectional study (essentially a snapshot of a group of people with a condition to understand aspects of the condition)
· Expert opinion (lowest level of evidence because opinion is always prone to bias)

For example, let’s say a study gets published that claims eating green leafy vegetables is associated with more severe FSHD symptoms. If that study is a systematic review, then you might want to reconsider your consumption of spinach. If that study is a cross-sectional study, you’ll probably want to wait for more research to occur before you make any radical changes to your diet.

Reading the evidence is a complicated, time-consuming process. If you come across any papers that you would like explained, let the Foundation do the hard work for you! Send any papers and suggestions for future blogs to