Blogging my Research: Steroids, Lies and Natural Limits

One of the main things I have to do this term is produce a 5000 word research paper on some topic in the History and Philosophy of Science. For those who aren’t familiar with HPS, it is, frankly, vast. Our department at Cambridge has staff and students working on everything from the history of visualising embryos to the question of whether there can be a science of human nature to the narratives of sperm. It’s a wonderfully stimulating environment, but much of the time I find myself confused and quite ignorant of the subject matter of other people’s research.

For my part, I’ve studied (though not done any research in) modern history of science, technology and medicine, metaphysics and epistemology (fancy ways of saying ‘what stuff is there in the universe?’ and ‘how can we know about that stuff?’) and the ethics and politics of science. The last of these fields is the one that I really love, and that I’m hoping to be able to do a PhD in, but even this is a huge field, encompassing far more subjects than one person could ever hope to fully study and understand in a lifetime.

Last term I did a literature review on how laypeople might be able to discriminate between experts and non-experts, or between experts who say conflicting things. It’s a real problem which faces a lot of us, a lot of the time: the news has two ‘experts’ in international relations on to talk about the latest developments in the Israel/Gaza conflict – who should we believe? You’re browsing Wikipedia and you come across a subject that you’re unfamiliar with – should you just take it at face value? It also has a lot of relevance in policy-making: if scientific experts give conflicting accounts of what the evidence says, how can politicians and bureaucrats figure out who to believe without having to become scientists themselves? In courtrooms, we specifically appoint expert witnesses on both sides of the cases, asking them to give accounts which deliberately conflict with each other – but at the end of the case, the judge or jury have to decide who is right, and they’re hardly qualified to assess forensic evidence directly.

Instead, in most of these cases we rely on indicators which don’t have much to do with the evidence itself, but with the people giving it: do they give good arguments? When their opponent challenges them, are they able to come up with a swift and unhesitating rebuttal? Do other experts agree with them? Do they have qualifications which suggest they might be experts? Do they seem a bit slimy, a bit Nixon-ish? There are all kinds of ways in which we decide who to trust in these kinds of situations, and it turns out that most of them have very little to do with the evidence in front of us. This has implications for really important topics, from climate change denial to the possibility of imprisoning people for crimes they never committed.

That’s a little bit of what I did last term, without the really detailed stuff about different accounts of knowledge and the reasons that trusting someone’s qualifications or the agreement of other experts might not be a good idea. What I really want to talk about right now is the research that I’m working on this term. Our course says that you have to do research in at least two different areas, and because I did ethics-y, politics-y stuff last term, now I have to do something different. One of my main hobbies is going to the gym, picking heavy things up and putting them down again. It’s something that I do most days of the week, and I’ve been doing it on and off for quite a while – though you probably wouldn’t know it to look at me. When you get involved in a hobby, you inevitably end up reading about it a bit, and so I’ve spent some time lurking on fitness forums and reading around the topic of weightlifting.

There are a few things I’ve noticed through this reading. First, a lot of people lift weights. Like, a lot a lot. It’s probably one of the most common things for young men, particularly at university, to do. There’s an interesting (though fundamentally flawed and quite classist) article on Vice which touches on the gym culture in modern Britain, and it does seem self-evident that there are more people going to the gym than ever before. Second, steroid use is widespread. It’s much more common than you’d ever think, especially at the upper levels of bodybuilding and weightlifting. All of the men with incredible bodies you see on the covers of Men’s Fitness?


Yeah, steroids. Steroids combined with an awful lot of hard work and likely a strict diet, but steroids nonetheless. One of the biggest cons in the fitness industry (and it is an industry) today is to sell men the idea that they can achieve naturally (and quickly) what can usually only be achieved with steroids, or at least many years of lifting.

Third, and this is the real problem – people lie about steroid use. There are massive disincentives to admit to the use of performance-enhancing drugs. One, they’re prescription-only, and much of their use is at least nominally illegal. Two, they’re banned in nearly every kind of athletic competition, but there are a lot of ways of getting around drug tests. Three, a lot of the elite bodybuilders and fitness models rely on sponsorship from supplement companies and other businesses in the fitness industry for their income, and steroid use doesn’t sell. These companies want men to believe that these bodies are achievable without the use of drugs, without having to inject testosterone or Dianabol into themselves every day for eight weeks at a time, with the minimum of effort and discipline and, most of all, with the use of the particular fat-burning/muscle-building drug that they’re selling.


It is nearly totally impossible to achieve the kind of physique in the picture above without the use of performance enhancing drugs. But bodybuilders either refuse to admit to steroid use, or just outright lie about it – this guy for example:


This guy, Kali Muscle, claims that all you need to get as big and strong as him is Pepsi and instant coffee.

Given that it’s really easy to beat drug tests in competitions, and given that large numbers of strength and physique athletes make false claims to being ‘natural’ – to not using steroids – how can we tell whether they’ve actually used performance enhancing drugs?

The short answer is we can’t – not definitively, not in every case. But there is what many consider a good indicator, and this is where my substantive research starts to come in.

BMI, as you likely know, stands for ‘Body-Mass Index’: your weight in kilograms divided by the square of your height in metres, which is then used to suggest whether your weight is ‘healthy’. Doing this calculation for people at different heights and weights creates a graph like this:


This index attempts to classify people’s bodies as ‘underweight’, ‘normal’, ‘overweight’, ‘obese’, and ‘morbidly obese’. It was devised by Adolphe Quetelet in an attempt to measure the health of populations, and it was never intended to be used as a diagnostic tool for individuals. However, it is used for this purpose – or at least, for telling people that they may be at increased risk of developing certain conditions.
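The calculation itself is simple enough to sketch in a few lines of Python. The category cutoffs below are the standard WHO-style boundaries rather than figures from this post, so treat them as an assumption:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body-Mass Index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def classify(b: float) -> str:
    """Map a BMI value to a category, using the usual WHO-style cutoffs
    (assumed here; the post itself doesn't give the boundary numbers)."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    if b < 40:
        return "obese"
    return "morbidly obese"

# An 80 kg person at 1.80 m:
print(round(bmi(80, 1.80), 1))      # 24.7
print(classify(bmi(80, 1.80)))      # normal
```

Note how close that example sits to the 25 boundary – a few kilograms of muscle would tip the same person into ‘overweight’, which is exactly the problem the next section describes.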

BMI is notoriously inaccurate for athletes. Muscle is denser than fat, and it is perfectly possible for an athlete to develop enough muscle that they are considered overweight, or even obese, on the BMI scale. This means that the scale simply doesn’t work for them as a diagnostic indicator. It fails to take account of the difference between muscle and fat.

Enter FFMI. FFMI stands for Fat-Free Mass Index, and it’s calculated by taking the lean body mass of an individual – that’s all of their mass, excluding fat – and dividing it by their height in metres squared. For comparability, the result is then adjusted slightly towards a reference height of 1.80m, because lean body mass rises slightly with height: taller people also tend to be wider and thicker. This index was first devised in 1990, and was intended to be used to establish the nutritional status of individuals. However, in 1995 a paper was published which compared FFMI in users and non-users of anabolic-androgenic steroids. The findings were particularly interesting: they suggest that there is a ‘natural limit’ at around 25.0 on the FFMI for non-users of steroids, and that figures above this strongly indicate that a person may be using, or have used at some point, steroids.
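To make the adjusted calculation concrete, here’s a minimal Python sketch. The 6.1 height-correction coefficient is the one commonly quoted in connection with the 1995 paper, so treat the exact adjustment (and the example numbers) as assumptions rather than gospel:

```python
def ffmi(weight_kg: float, height_m: float, body_fat_pct: float) -> float:
    """Fat-Free Mass Index: lean body mass divided by height squared."""
    lean_mass = weight_kg * (1 - body_fat_pct / 100)  # mass excluding fat, in kg
    return lean_mass / height_m ** 2

def normalized_ffmi(weight_kg: float, height_m: float, body_fat_pct: float) -> float:
    """FFMI adjusted towards the 1.80 m reference height.
    The 6.1 coefficient is the commonly-quoted figure, assumed here."""
    return ffmi(weight_kg, height_m, body_fat_pct) + 6.1 * (1.80 - height_m)

# A hypothetical 90 kg man at 1.75 m and 12% body fat:
print(round(normalized_ffmi(90, 1.75, 12), 1))  # 26.2
```

On those (made-up) numbers, the subject sits just above the proposed 25.0 cut-off – exactly the kind of borderline case the index gets used to adjudicate.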

My research is about how this index came into existence, and how it came to be used to construct the concept of a ‘natural limit’ to the muscularity of the body which is used to police the boundaries between steroid users and non-users, as well as how this limit is negotiated and pushed by people working in the fitness industry now. The idea that there is a limit to what can be accomplished naturally with a human body is fascinating, and there are many other areas in which these kinds of limits have been constructed and imposed – in the ratio of testosterone to epitestosterone as an indicator of steroid use, in the negotiation of the hormonal and chromosomal boundaries between sexes in sex tests, in the creation of diagnostic criteria for gigantism and dwarfism, and in countless other instances within medicine. It’s a fascinating area, and I’m excited to have the opportunity to try to contribute to it – and to share the results.

James Watson’s Racism has Highlighted our Poor Understanding of the Social Consequences of Scientific Research

[Image: James Watson and Francis Crick]

James Watson announced today that he would be selling his Nobel Prize medal, having been left with ‘barely enough to live on’ due to being shunned for his racist views. There’s been some pretty good coverage of the fact that he’s hardly the nicest of characters, as well as his interesting motives for wanting to sell the medal (expected to fetch over $2 million).

However, the most interesting part of this story hasn’t been the revelation that – surprise! – you can be a genius and also a terrible person. Rather, it’s the way that the commentary has highlighted our flawed understanding of how scientific research operates in a democratic society.


First, there’s the idea that if IQ scores show that there are differences between people of different races, then it must be true that these reflect real differences in intelligence. IQ is hardly an ‘objective’ (whatever that means) measure of intelligence. Some of the fathers of the original tests were leading lights of the eugenicist movement, including Francis Galton and Henry Goddard, who were hardly unbiased in their views on race, class, intelligence and heredity. Further, IQ scores have steadily increased over time (the ‘Flynn effect’) – and unless the population of the world really is getting much more intelligent, there must be a component in the score which is cultural, social or educational, rather than a product of inherent intelligence.


Second, there’s the idea that Watson’s views would be fine, if only they were based on ‘pure science’. This is often accompanied by the suggestion that science is cold and disinterested, and if only we could do the research on IQ differences, then we would be able to vindicate once and for all the view that there are no differences in intelligence between people of different races, and therefore all the racism would disappear overnight.

This fails to recognise a few things about scientific research. One, every project we choose to fund has an opportunity cost, in that other projects will necessarily be deprived of funds. Therefore in choosing to fund research into racial IQ differences, we are saying that this is important research that needs to be done, and losing out on other research that could be done instead. This would be fine if the research yielded good consequences. The fact is that it would not, and could not, ever lead to anything good.

Racism isn’t a rational state of mind. If we were to fund research into racial IQ differences, and find that there is no significant difference between people of any races, or that, say, people of African origin score more highly than Caucasians, racists wouldn’t turn around and say, ‘Oh, damn! Guess I’ll have to stop being a racist now that my views have been invalidated by scientific research’. They’re going to keep being racists, because racism is a position based upon sociocultural upbringing, conditioning, fear and insecurity. The results of the tests are likely to be inconclusive (because it’s the social sciences, and they’ll be very lucky to get p < 0.05), and so they’ll explain them away as artefacts, or say that they don’t show anything definitive, or that IQ is a silly measure anyway.

What’s more, many racists don’t even recognise that they are racist: they lie about it, harbour the views subconsciously, or engage in some really exceptional doublethink. Telling them that it’s okay, they don’t need to be racists anymore because the research has shown there’s no substantive basis for their views, is likely to result in them simply denying that they were racists in the first place, while carrying on as they were. There’s unlikely to be any substantial change in voting behaviour, or in the way that they treat others, or even in media coverage. If we do the research and the results come back negative, very little changes.

But if the results come back positive? Well, then we’re going to have a bad time. Suddenly, all the people who say ‘I’m not racist but’ and read the Daily Mail and have a vague distrust of anyone brown or black will be vindicated. They’ll say that they were right all along, and of course there’s no reason for us to push through affirmative action bills, of course there’s no reason for us to try to bring up the educational standards of ethnic minorities, because it’s all down to genetics. The political consequences would be dire. Even if the results are inconclusive but lean towards a positive, they’re likely to make the inferential leap from that to the affirmation that what they secretly believed was true all along.

Scientific research takes place in a society. It has consequences for the people who live in that society. Whenever we make a decision to fund research into one question we have about the world around us, we decide to prioritise that question over other, unfunded questions. It doesn’t seem too controversial to say that we shouldn’t be funding research that would give us little gain in the way of knowledge, but which could have hugely damaging consequences for politically vulnerable groups.