"I made a mistake." Now get a job"...Fact and philosophy
Well, the discussion around the study published by Coyle on Lance Armstrong, and the subsequent revelation that he had made a calculation error has opened up some strong debate, which is always excellent. We've had numerous responses, and of course had the study been done on John Jones, as opposed to Lance Armstrong, no one would have cared...
And therein lies the first problem - this paper got to where it did off the back of the fame of its subject, and scientific stringency was lobbed out the window. So to the one Anonymous poster who commented that we were leading the "witch-hunt" for Lance Armstrong...I'm afraid you've missed the point there. It's not about Lance Armstrong, it's about scientific process. But more on that a little later, in "Philosophy." First, some "Fact."
Fact
Many of you wrote in saying that there was an error in the statement regarding the impact of the change depending on a high or low VO2. And you'd have been correct. I've emailed some authors about this, and what I'd like to do (this is part of our "growth strategy" here at the Science of Sport), is to allow external contributions to topics such as this one. So I will try my best to actually have the people involved respond to these kinds of specifics.
However, you are correct - the impact of the change is similar in absolute magnitude, but as a percentage of the slope, it's larger at lower VO2 values. At the very least, we now have confirmation that our readership is up to the challenge! That's a privilege, and says something about the high quality of our readership...!
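To make that slope point concrete, here is a minimal numeric sketch (the slope values are purely illustrative, not taken from the Coyle data): the same absolute shift is a larger percentage of a smaller baseline slope, which is why the error matters more at lower VO2 values.

```python
# Purely illustrative numbers - not data from the Coyle paper.
# The same absolute shift in a slope is a larger *percentage* change
# when the baseline slope is smaller.

def pct_impact(baseline_slope, absolute_shift):
    """Percentage impact of a fixed absolute shift on a baseline slope."""
    return 100.0 * absolute_shift / baseline_slope

shift = 0.05                 # identical absolute error in both cases
low, high = 0.8, 1.6         # hypothetical slopes at low and high VO2

print(pct_impact(low, shift))    # ~6.25, i.e. the error is ~6.25% of the lower slope
print(pct_impact(high, shift))   # ~3.1, i.e. only ~3.1% of the higher slope
```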
I'm going to credit the fact that I have a "real job" for this particular error - these posts are done on the fly, during lunch or after work, thanks to the real job (again, more on this in "Philosophy").
But it suddenly struck me as ironic that this had happened in this post, of all of them (of course, we do make mistakes, but they've never been picked up before!). Because the funny thing is how this scenario played out...consider the following:
Yesterday afternoon, you were sitting in a coffee shop, an internet cafe, your home, or your office, and you logged on or got our daily email. You read it, and it struck you that something was wrong, so you gave it some thought, and you realised that you disagreed with our statement. You then took it upon yourself to write in a comment, and express your view about our position. One of you even wrote in and requested that we provide data to back up that position. It was beautiful.
Why was it beautiful? Because this is what happened when you logged onto a part-time-run website on sports science. Now, imagine that instead of this website, you picked up a credible journal, say the Journal of Applied Physiology. You opened it, and read a paper that was so full of holes, so full of errors and lacking any integrity in the interpretation of the data, that it defied belief. What would you do? I suspect you'd take it upon yourself, as a scientist that is, to write to the Editor, to respond to the author of the study, and to seek answers.
Welcome to the Coyle-Armstrong debate. Now, imagine that in response to your email and opinions, I was to tell you to "Get a real job." Again, welcome to the Coyle-Armstrong debate, because that is what was said in response to a scientific discussion.
And that is what this series is about. One of the bigger ironies is that we (and the Australian scientists) were yesterday accused of being "myopic" in our approach to criticizing the Coyle paper. That is a response that can only come from within the "Fraternity" (and I use this word deliberately, instead of "Community", because science is often a Fraternity) of scientists who are perhaps defending a former colleague.
They are the ones suffering from myopia, because they've chosen to ignore the quality of the science and attack the messenger instead. The reality is that a peer-reviewed scientific paper should meet certain standards. Making "miscalculations" (which I still believe to be significant) is not acceptable, but then nor is the litany of other problems with that study, which we described in our first post. Then the response of the author is so offensive, so disrespectful and so arrogant that it warrants a post series all by itself.
To be sure, errors do occur in data collection and analysis. This is not a crime, and nothing to be ashamed of. However, when other scientists challenge your findings because they think there might be an error somewhere, transparency should prevail, the data should be provided, and the problem rectified. This is how we move our knowledge forward.
The fact is that subsequent to a complaint of scientific misconduct, Coyle has still not provided all the data. Why not? Just make it available and let's have this debate out in the open. Or is there a reason not to? Moving even further along, not only does this science appear in a scientific journal, it then becomes a cog in a legal defence when it was incorrect all along (and to answer one poster, I don't know why the data was not made available before - perhaps the legal team made the mistake of trusting the scientific process too much?)
So that is the problem. The final point I wish to make before moving on to "Philosophy" is that in a normal state of affairs, when a research study is sent off for review, it is usually sent by the editor of the journal to a few reviewers - experts in the field, who cast their eye over it to make sure it is well-controlled and worthy of publication. The Coyle study of Armstrong was submitted on February 22nd. It was accepted three weeks later. Of course, short turnaround times are not unique, but this is remarkable. Remember, this is a study so criticized that it inspired two SEPARATE letters in response, and trust us, letters like these are not all that common in the publication process. It became a case study here at UCT in how not to write a case report, and as mentioned in our first post, was criticized widely at conferences.
Yet it got through the Review process in three weeks. That is where the problem began, and it continued, all the way through to the admitted error. Perhaps a lot of people with "real jobs" were involved along the way.
Philosophy
Speaking of real jobs, one comment we received yesterday got me thinking about our existence here at The Science of Sport, and specifically, what our purpose is. We are now just over 17 months old, and it's wonderful to be able to count among our readership the likes of the people who wrote yesterday. In particular, this poster (sorry, I would refer to you by name, but I don't know it), mentioned that he gave our posts to his students on occasion, but was critical of the "lack of objectivity and professionalism" in the posts. What a privilege to have such readers, and we are very blessed to have such wide (though we'd love to grow it - send this to your friends!) and influential readership. Clearly we have many scientists and other academics among our audience, and we do feel privileged that these people have chosen to read and comment here.
When we started out, our intention was to use it as a vehicle to coach. Jonathan's passion is cycling, mine running, and we felt that we could get into the coaching world this way. It lasted about two days, before I realised that in fact, the "market" was crying out for a more thought-provoking approach to sports news reporting. And so the concept of "Performance analysis and discussion" was born. Our mission then evolved into trying to marry our passion for sport with our training in science.
We said that our objective would be to provide the "Why and How" to what internet news sites, magazines and newspapers were reporting as the "What and Who". That remains the idea today. So our flagship posts have been the Oscar Pistorius debate, which I genuinely believe we have covered better than any media outlet around, and the Major Marathon races, where we look at the pacing strategies, splits and race news in real time, from a physiological point of view. The Olympic Games were obviously a huge focal point as well. But we're not here to analyse the science - that's what journals do. We're also not here to tell you what happened in the world of sports - the newspapers do that. We're here to try to strip down the stories and provide some insight which hopefully brings science into the sports news, and sports stories into the world of sports science.
But the point I have to make is that it is all opinion. This is not a scientific journal, though we recognize that we offer (or should try to offer) a little more than the usual media/blog fare. So what we try to do, where possible, is provide references and links to the "objective" research. So to respond again to the Anonymous poster, if you want the "objective" discussion, then simply refer to the letters by Gore et al. in the scientific journal.
Our intention is to translate, and to inform, and then to entertain through the use of the site, and hopefully give people new perspectives based on insight that we have (or hope to have). I mentioned earlier that I had a "real job" - I work as head of research for a sports management and marketing company, and so my involvement with science is these days in a consulting role. Jonathan is more immersed in a University position. But my training, both PhD and commercial, allows me to stand with one foot in a commercial block, and another in the sports science world, and sing. Or dance, or play, whichever you prefer. Speaking personally, these posts are what I see when I look at the world, nothing more.
One of my favourite books is "The Undercover Economist", by Tim Harford. That book was described as "spending an ordinary day wearing X-ray goggles". I'd like to think this is a site that lets you watch sport through X-ray goggles. That requires opinion, and it's guaranteed to offend some (Pistorius' father among them), but I won't apologize for opinion.
So we are sorry if you feel that our discussion sometimes lacks objectivity, and fact. But then again, consider the discussion on Usain Bolt. I believe he is clean. Is that fact? Of course not, but then if we restricted ourselves to "fact," this would be a very empty website. And no one wants that...
So apologies for the philosophical, and somewhat self-indulgent post (I indulge twice a year). It won't happen again until perhaps March next year!
Ross
OK, so I've been reading some stuff you wrote on this row about the "muscular efficiency improvement" of Lance Armstrong's, and I do feel that we haven't discussed the issue of his high rpm, and we need to. That was a way to distinguish Lance from the rest of the pack in his Tour years. He used lower gears than the others. And sometimes that was said to be a way to improve efficiency and reduce lactate...
Coyle's paper states Lance had a constant 85 rpm in the tests, but that is quite low for Tour-winning Lance Armstrong, overall but especially on the TT bike. Some more answers are needed about the latest tests on this matter.
We should also note Ivan Basso's Giro victory. That was after he met Fuentes, but I must say I noticed him increasing his rpm during his career.
There might be something here, because lower resistance means no fast twitch fibers are recruited to perform the motion. And indeed, fast twitch fibers are somewhat less useful for long endurance events.
Just my thoughts.
Ross and Jonathan,
I believe you are on the right track. If you ask me, my desire is to read an informed analysis of sports, not a dry scientific paper which I will not be able to understand since I lack the educational background.
Scientific findings should stand up to peer review, the time for dogma by an "elite" fraternity of scientists is gone forever.
P.S. At the time I read Coyle's analysis of Lance Armstrong post cancer I didn't know who the author was. Still the findings seemed bizarre, if they were valid wouldn't it be easy to use dieting and drop 4-5 kilos to become a much better cyclist?
George
Jonathan and Ross...
I agree with Anonymous and his comments, and your posts make for riveting coffee breaks. Keep up the good work!
Mircea - I have watched many posts regarding cadence on the Wattage forums, and one that stands out is that "Cadence is a red herring". After all, if it was that simple then why did Jan Ullrich not merely click down a gear or two and give Lance a proper hiding whilst spinning at a higher cadence? And so I suspect that the answer lies somewhere else... perhaps 5min critical power output levels that allowed Lance to "punch" his way away from Jan when needed - perhaps just one of the many trees in the forest.
Just my 2c.
Two short comments:
1. Thanks for the effort that you put into this blog. To this recreational runner the articles here are a welcome contrast - in their analytical, questioning approach - to the waffle in popular running mags, where a quote from a nutritionist or personal trainer is thought to be enough to support a claim on behalf of some diet or training regime.
2. On the (un)availability of the raw data: Perhaps someone should mention the concept of 'open notebook science' to Coyle. (See, for example, this Nature news article and the links therein.)
If Coyle really wants this to go away, he should either give up all the data so that the analysis can be redone or retract the paper. Both actions would be consistent with his assertion that the paper is no big deal. Standing by a paper that has already been shown to have at least one major flaw in it is unethical.
As the one who described your (and others') criticism of the Coyle paper on the basis of how delta efficiency was calculated as "somewhat myopic", I feel obligated to respond here, if merely to say that I stand by what I said. To wit: given the data as presented, it is a mistake to focus so much on the delta efficiency calculations, because the conclusion that Armstrong's muscular efficiency improved can also be reached based on the gross efficiency measurements. Thus, unless the raw power and/or VO2 data can be shown to be in error, to emphasize the delta efficiency data is to miss the big picture.
Other than that, all I might add is that what I have written on this subject should be taken strictly at face value, and should not be construed in any way, shape, or form as either a defense or a criticism of Coyle's paper, Coyle himself, or Armstrong.
Hi All
Thank you for the supportive posts, you are among the most loyal and valued readers, and of course, as this post made clear, the purpose of our existence! So thank you!
Just a note, the point that I still feel has not hit home is that this research on Armstrong was never subjected to the kind of scrutiny it should have been. Once the scrutiny came, it was too late, and jumbanho is 100% right - either make the data available for analysis or retract the paper. Pretty simple, if you ask me.
But apparently it is not, and then the question must be "Why not?". What is happening here to prevent that data from being given to the Aussies despite the fact that they got a formal complaint in with the University? Your guess is as good as mine!
It does seem funny to me that our last post (not this one, but part 2 of this series) has been scrutinized almost as much as the Coyle paper in one of the world's leading journals on physiology! We even had people writing to say there is a difference between Power and Work (which there is, of course, but it was immaterial to the point being made). Where was that stringency in 2005, during the review process of Coyle's study? And during the legal case? And the latest round? Very peculiar.
Ross
I personally think you have done a terrible job of looking at the merits of this paper. You seemed to get side-tracked by the politics and name-calling and it appeared to me you were taking sides.
From a political perspective, everyone is to blame. The JAP for doing a terrible editorial job in not doing any kind of review or checking, probably because of the subject matter and the esteemed reputation of the author. It was a big mistake on their part.
The Australian critics seem to be taking a minor calculation error as evidence of fraud. If Coyle had not made this error (which does not change the overall trend if the calculation is done correctly, in either delta efficiency or gross efficiency - which requires no corrective calculation), where would their criticism be? They have asked for the raw data and, although they were not provided with the entire set, found no discrepancies in what was provided between what was reported and what was collected, and have only this calculation error, an error that really doesn't change anything.
Then there is Coyle, who didn't check his work very well and didn't realize he was becoming involved in a tabloid-like "scandal" around an international rock star, and the media and critics are not going to go away because he tells them to get a job.
Then, there are you guys, who also did a poor job of analyzing the data, accepting one side's argument over another with no evidence. Your focus should have been on the paper and the facts. It was not.
I have plenty of criticism of the Coyle paper but it does not hinge on the data collection, which I will assume to be accurate unless there is evidence to suggest it is fraudulent, or on a minor calculation error. My criticisms relate to the interpretation of this data: he documented a change in cycling efficiency but attributes it to a change in muscle efficiency, even in the title, for which there is zero evidence to support such a conclusion. While cycling efficiency and muscle efficiency may be related, there is no evidence that they are locked hand in glove. He further attributes the change to a change in muscle fibre type, for which there are actual studies that suggest this is not possible in elite cyclists like Lance (see: Effects of long-term physical training and detraining on enzyme histochemical and functional skeletal muscle characteristics in man. Larsson L, Ansved T), and ignores other possibilities, like efficiency changes that might accrue from changing pedaling style (see: Effects of short-term training using powercranks on cardiovascular fitness and cycling efficiency. Luttrell MD, Potteiger JA).
The issue of this paper is that cycling efficiency changes were demonstrated in this elite athlete, something that most thought was impossible. The question, which was poorly addressed in the original paper and in essentially every discussion of this paper, is how did these changes occur? Coyle's original explanation makes no sense and he has no biopsy data to prove his contention. In view of the current controversy Coyle owes everyone access to the raw data or a good explanation as to why it is no longer available. But, presuming the data is good, the question remains, how did Armstrong improve his cycling efficiency that much?
Hi Frank
Thanks for the post. I appreciate your honesty. As I've said numerous times above, we know we can't satisfy everyone's opinion on the paper and our analysis of it. Luckily, I can take consolation in others who have emailed with the exact opposite feeling, which I guess shows everyone has a right to an opinion.
However, I must just say that having been taken aback by your first line, I then read your post and I realised that in fact you say nothing that I don't agree with. I have said that the JAP editorial was at fault, that Coyle was at fault.
But you're missing the point about the stringency of scientific research. You can question the data collection based on the numerous issues raised by TWO separate groups of scientists in response to the initial paper. I'm not sure whether perhaps you have come into this discussion at the end, and maybe missed out on the first two posts, because they cover in much more detail what transpired on the data collection side of things.
And then I will also say that this was a tremendously difficult post to write, not only for its technical merits (which, according to you, I've messed up completely. Oh well...), but also because I'm a little more closely involved than you might realise, but I can't show a full hand in writing this. So there are a few things that you are speculating about without having a clue (with the greatest of respect).
So thanks for the honesty, and bluntness, I'll take it under advisement, but stand by the analysis and the description. You might take your own issues up with the Journal of Applied Physiology and Gore et al.
Ross
Ross and Jonathan,
Actually, I have been here from the beginning and posted two comments earlier, one on each post, as well as a comment on the Usain Bolt prelude in which I expressed my excitement and some thoughts, as an anonymous poster. I decided to give my name this time because of the critical nature of the comments and it looked like this would be the final installment.
You initially said you were going to look at THE PAPER critically. It appears all you did was look at this "delta efficiency" controversy, pretty much ignoring the discussion and conclusions, which is where my criticisms of the paper were directed. Just how does one justify the term "muscle efficiency" in the title when no such thing was ever measured? You ignored this aspect of the paper completely.
I believe that one of the Australian critics was involved in the lawsuit on the other side. They should have been able to get access to the raw data then through the discovery process, and if Coyle could not have provided it for fact checking then they probably could have had either the paper or Coyle excluded from evidence, which would probably have won the case for the insurance company. Why they did not ask for this data then (when they are asking for it now) is a bigger mystery to me than Coyle not being able to provide it all now.
Anyhow, I saw this effort on your part as more an analysis of the controversy (in which you got some things wrong, which you acknowledged, as Coyle has his mistakes), but it missed the big picture of discussing the entire paper INCLUDING ITS DISCUSSION AND CONCLUSIONS, which to me is the real controversy.
The best part of this review is the unfiltered give and take in the comments which sets it apart from the typical journal. Good try, I just thought it fell short from my hopes.
isn't there a paper showing that Paula Radcliffe improved her running economy over the years?
why do people think that it is impossible to improve cycling efficiency?
So do you guys have any surrebuttal on the issue of gross efficiency following the claimed trend, and hence the insignificance of the mistake found in the calculation of delta efficiency?
It seems like you are trying to evade that. It strikes me as hypocritical that you deliberately ignore Coggan's remarks, ostensibly due to the identity of the "messenger" and not the content of the message. That sure isn't scientific, or even good old-fashioned candid stringency.
I also find it unscrupulous that you never mentioned that Ashenden had been on the support team of the plaintiff (the insurance company) in that very same trial for which you took a multitude of shots at Coyle, questioning his bona fides.
In full disclosure, if Jonathan and Ross did not already know: Frank Day is the inventor of Power Cranks, which are independently operated cranks intended to train you to pull up on the upstroke of the pedal revolution, thus making the rider "more efficient" at pedaling nicer circles. His contention with the paper stems from its conclusion. If the data is true, but the conclusion is false, and his interpretation of the findings is true, then his product and its purpose is possible. If the data is incorrect and the findings bunk, he will still have no scientific evidence that his product is useful.
Thought you should know the whole story.
To the latest of many Anonymous posters
And I find it fresh that you'd come onto a site and throw out insults of "hypocrisy" when I have specifically said in this post that I have contacted people who are directly involved in this matter to address this issue. In addition, it has been addressed in the posts, though not directly in response to the questions - but then I refer you to this post, where I stated that we'd get back to you on that.
So spare me the accusation.
Ross
Wow you guys are great! Good job
Don't worry about these anonymous posters from the "Fraternity" of science that you referred to. You guys did this series just fine - it's a typical response to attack the details to undermine the big picture - smoke and mirrors, to deflect the issue and criticize the messenger - a classic legal tactic.
All you "Anonymous" posters, read the posts. You might be amazed at how much you pick up if you actually read the thing - but in classic human style, your mental capacities are clouded by the red haze of mist that seems to descend when you feel that one of your own is being criticized. As far as I can tell, having read these posts, and the fact that a total of 3 different letters have been written to JAP about this paper, it's garbage. This is not a case of some unhappy scientists taking issue with Lance Armstrong - it's about a scientific study that was so poor that two separate groups criticised it IMMEDIATELY after it was published. Read the post, Ross made that clear.
I'm a lawyer, and it's very obvious that this series of posts is about the process followed, but all you guys can see is the details, and you snipe around the fringes, finding faults, and all the while, it prevents you from seeing the true point of these posts. Your battle should be fought in the journals, rather than flexing your inflatable muscles on this forum, which does the public and science community a great deal of good when it comes to sports news. You must be new around here.
Ross, well written, I'm with Owen and other fans on this one. I can understand why Frank is disappointed, he had a different expectation. That's fine, but all these other guys who come in with biting criticism instead of constructive comments, seriously, you guys are missing the point here. Ray said it best in the second post of this series. You did read that, right?
Allan
I note a distinct lack of response to Mr. Coggan's points of discussion by the proprietors here.
Anon Coward
Hi Anon Coward
You may also have noted that I said in this post that I was still going to come back to this point, but that didn't fit with your attitude, so perhaps it remained un-noted. As did the comment three above this one where I clarified that I would be responding in due course.
But here you go, straight from the proprietors (why refer to us as that - flexing some intellectual muscle? Some sarcasm?).
Two things: Firstly, it's been stated, as far back as the 1970s, that delta efficiency is the superior measurement of efficiency, and so the debate that is now raging about gross efficiency concerns an inferior measure. That doesn't make it irrelevant, of course (hence this response, which you seemed so eager to get).
So the take on the gross efficiency:
Coyle is hanging his hat on the "fact" that he measured an 8% change in gross efficiency. But several small calibration errors alone could possibly account for that result.
To touch on calibration, this kind of study requires calibration data for each of the tests conducted, for both the VO2 system and the cycle ergometer. You may disagree, but the study's finding rests on a change measured over 7 years in one individual, so you have to be very serious about this measure. I'll develop below the argument that tiny differences easily account for the measured finding.
Back in 2005, after the first set of letters criticizing the research study, Coyle cited CVs for VO2 and VCO2 from a 1994 thesis (ref 6 in his rebuttal) of 0.87% and 0.92% for serial measures made over 8 weeks. Good science would have reliability data for the VO2 system (over YEARS) from 1992, 1993, 1997 and 1999, or at least at the start and end of the series of tests on Armstrong.
In addition, reliability does not imply validity - with poor calibration you can consistently get the wrong answer. For example, gas analysers deteriorate over time, and even changing from one calibration gas mixture to another can cause problems since every 0.04% error in the fraction of expired O2 will change VO2 by 1% (all other factors being equal). Volume calibration is also critical because every 1% error in volume corresponds to 1% error in VO2 (all other factors being equal).
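As a rough sanity check on that 0.04%-per-1% figure, here is a simplified open-circuit VO2 calculation (it ignores the Haldane correction, and the ventilation and gas fractions are illustrative values, not numbers from the paper):

```python
# Simplified open-circuit VO2: VO2 = VE * (FiO2 - FeO2).
# Ignores the Haldane transformation; numbers are illustrative only,
# chosen so that the true O2 difference FiO2 - FeO2 is 4%.

def vo2(ve, fio2, feo2):
    """VO2 in L/min from ventilation (L/min) and O2 fractions."""
    return ve * (fio2 - feo2)

ve = 100.0                       # ventilation, L/min (illustrative)
fio2, feo2 = 0.2093, 0.1693      # inspired and expired O2 fractions

base = vo2(ve, fio2, feo2)               # ~4.0 L/min
shifted = vo2(ve, fio2, feo2 + 0.0004)   # FeO2 mis-read by 0.04% absolute

print(100.0 * (base - shifted) / base)   # ~1.0, i.e. a ~1% error in VO2
```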
Now, you're probably saying this is picking holes, but the reality is that when the study is going to conclude that a change over a 7-year period is the crucial explanation for performance (in the legal case, that is), then this kind of calibration is key.
As for the cycle ergo, Coyle used two refs (#3 & 7) published in 1991 to justify that the system used to determine power and cadence was accurate (which is more convincing than just being reliable). But where is the data for the period of testing? 1991 was two years before a seven-year testing period began, and the crucial importance of the power data means that some measure of validity had to have been provided.
It is critical for longitudinal data collected over years on an individual athlete to demonstrate that both measures of VO2 and power are accurate. And anyone who has been in a lab for 7 years will attest that keeping both VO2 and power systems running accurately is very difficult to achieve, but it can be done. And then it should be reported for this kind of study.
So to refer back to the 8% issue, if between Jan 1993 and Nov 1997 the Ergo was reading 3% less power, and if the cal gas was higher by 0.1% absolute of Oxygen (=~3% lower VO2) and the volume was reading ~2% too low, such change could collectively explain an 8% lower VO2 = lower gross efficiency.
The error in Ergo calibration could lower the true VO2 cost by 3%, and the errors in the cal gas and the gas volume meter could account for a 5% spuriously lower VO2. These errors would be additive to the tune of ~8%.
Hence the reason that VERY accurate calibration, to better than a percent, is crucial, and the reason why it is hard to collect high-quality longitudinal data.
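The additive-error arithmetic above can be sketched in a few lines (the 3%, 3% and 2% offsets are the hypothetical values from this discussion, not measured calibration data): small fractional errors that all point the same way combine to roughly their sum.

```python
# Hypothetical calibration offsets from the discussion above - not measured data.
# Small fractional under-readings combine multiplicatively, which for small
# values is approximately additive.

def combined_error(fractions):
    """Total fractional under-reading when each factor reads low by f."""
    total = 1.0
    for f in fractions:
        total *= (1.0 - f)
    return 1.0 - total

offsets = [0.03, 0.03, 0.02]   # ergometer, cal gas, volume meter
print(round(100 * combined_error(offsets), 1))   # ~7.8, close to the ~8% figure
```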
Ross
Ross,
In discussing the Gross efficiency defense or "problem" (depending upon your point of view) you wrote:
"So to refer back to the 8% issue, if between Jan 1993 and Nov 1997 the Ergo was reading 3% less power, and if the cal gas was higher by 0.1% absolute of Oxygen (=~3% lower VO2) and the volume was reading ~2% too low, such change could collectively explain an 8% lower VO2 = lower gross efficiency.
The error in Ergo calibration could lower the true VO2 cost by 3% and the errors in the cal gas and the gas volume meter could account for 5% spuriously lower VO2. These errors would be additive to the tune of ~8%
Hence the reason that VERY accurate calibration to better than a percent are crucial and the reason why it is hard to collect high quality longitudinal data."
A couple of comments.
First, I don't think this data was ever presented as a study. It was presented as the summary of a series of testing done on a very high interest subject. I would guess that Coyle normally takes great care in calibrating his equipment when doing tests for professional athletes but I doubt he keeps the records such that he can come back many years later to write a paper about the results.
Second, you write: "it's been stated, as far back as the 1970's, that delta efficiency is the superior measurement of efficiency, and so the debate that is now raging about gross efficiency concerns an inferior measure". Perhaps you should explain why delta efficiency is considered "superior". Isn't it because it allows a better comparison between different subjects that have different body masses and basal metabolic rates? Of course, in this "study" there is only one subject in which weight and basal metabolic rate would have been quite similar at each test such that there would hardly be any superiority to the delta efficiency over gross efficiency measurement when looking at efficiency trends. I see nothing "inferior" about the use of gross efficiency for this purpose.
Third, you have to theorize a long list of additive errors, all in exactly the right direction, to make the 8% change an artifact of calibration errors. What is the probability of that occurring? But beyond that, there were four tests, and there was a steady increase in gross efficiency across the series, culminating in the 8% increase. You would further have to theorize that the cumulative errors in each of these tests were different, but exactly such as to produce this nice steady trend. What is the probability of that?
My guess is if Lance had not shown this steady trend in improved efficiency this paper would have never been published as there is nothing particularly remarkable about the data except for this change. He was not being tested as part of a study but, rather, for the purposes of his training. It was only after he became the dominant cyclist of his era that this data, and especially this change, became remarkable.
While what you theorize regarding these cumulative errors is possible, it seems highly improbable to me. It seems more like straw-grasping: because you believe such a gross efficiency change to be impossible, if the data is not being fudged, then it must be due to many additive calibration errors.
Frank
Hi Frank
Thanks for the questions. I still think the key point of this whole series was not to discuss the "study" or the "case", though I appreciate that your expectation was that it would be - I apologize if I created the impression in previewing it that this would be so. It was never intended this way - the intention was to look at the process.
I can't stress this enough, though I have tried. I think I have quite clearly failed to do so, but the point was not the specifics, but the process. It's all well and good to say that this is not a "study", but it was arguably the most read research article in the history of the JAP, it became a cog in a legal defence case, and it was widely discussed as "fact", despite being full of holes. And that was BEFORE the calculation error was revealed.
Within weeks of it being published, questions were being asked. I'm not going to speculate about the motives of those people - maybe they wished to disprove that efficiency changes, who knows. But it's clear from the first two letters in 2005 that the study was fraught with problems. The PROCESS was the problem all along and the latest revelation about a calculation error is simply one of many issues.
However, we can't progress beyond that calculation error because Coyle won't make ALL the data available, which adds even further to the problem. Just let people see the data, and we'll soon figure out just how important the miscalculation is. The point is, there is inside information, there is historical development and there is recent evidence all pointing to the same thing - poor scientific process.
It doesn't matter much whether it was a study or a case, it failed to follow some of the foundational principles of how research should be done. And that was the point - the process (I am hoping that if I repeat this often enough, it will be remembered).
To answer you on the superiority of delta vs. gross efficiency: yes, you're right. The problem is that we can't assume that this one subject's metabolic rate stays the same - he's had chemo, (apparent) weight loss (though I would dispute this), ageing (over 7 years), and changes in training and racing history.
As for the list of things I theorize as contributing to the 8% defence, the probability is reasonable - maintaining equipment over 7 years is very, very difficult. But again, it's not the specifics, but the process. Why not measure these things, report the calibrations, etc.? That kind of thing is routine for longitudinal studies, and when the entire conclusion of the paper ("matures") depends on the measurements over a period of time, this kind of calibration is vital. Again, the process of quality control was not followed.
To quote from one of the very first letters sent in response to the Coyle paper, "such studies should respect the basic principles of scientific investigation".
This study didn't, and that's what we wanted to emphasize.
Ross
I am reminded of something John Holloszy once told me, back when he was the Editor of JAP and as one of his post-docs I was reviewing two or three papers every week. He said, "You can't reject a paper just because you don't believe the data." In that vein, then, is there any objective evidence that the VO2 and/or power data presented by Coyle are, in fact, incorrect?
If not, then the issue devolves to whether or not the novelty and potential impact of the findings sufficiently outweigh the obvious weaknesses of such a retrospective case study. Whoever they were, the Associate Editor obviously thought so, or the paper wouldn't have been published.
Ross,
Thanks, again, for your reply.
You indicate you want to examine the process but you are examining the wrong process. You and the more vocal critics keep using the term "study" in your criticism, you used it several times in your last answer to me. This paper was not a study as there was no protocol and it did not go through the IRB process. He simply published a series of tests he ran on this subject. This series of tests occurred over several years. This paper is more properly a case report. The standards for a case report are substantially different than they are for a study.
To criticize this paper and Coyle as if this paper were a study is unfair and inappropriate. It is possible that Coyle doesn't have (or, can't find) the raw data he has "failed" to provide.
As a case report, this would explain the "poor review" done by the JAP, as they knew this information would be of great interest to their readership (and pretty much every cyclist everywhere). It would have been nice if they had checked the math (and, in my opinion, it would have been especially nice if the title actually reflected what the data showed instead of an unsupported conclusion).
It also explains some of the "holes" in Coyle's data: he published what he had, never expecting to publish it when it was collected. It could also explain why Coyle cannot provide the raw data to those who have asked for it, although it would be nice if he could.
But in your answer to me you continued to refer to the paper as a study, trying to hold it to that standard. Case reports are going to be "full of holes". That is the nature of the beast. You can point out some of the potential problems to the uninitiated, such as the potential for calibration errors, but Coyle has answered those, and pretty well I believe. If you want to get specific in this regard, why don't you address the specifics of Coyle's answer to the calibration issue?
To me, the bigger "process" issue is in the discussion of the results. Am I the only one who is bothered by this? Essentially everyone accepts that, if the data is true, this shows Lance improved his "muscular efficiency" 8%, because that is what Coyle put in the title and what he said in his discussion. There is no data in this case report (or anywhere else) that supports such a hare-brained conclusion that such improvements could be attributed to changes in muscle fibre type in an athlete such as Lance.
So, if you want to talk process, talk about the process that actually occurred. If you want to talk about the paper, talk about the paper as it was written and some of the rebuttals that have been given to the initial criticisms.
You have indicated earlier that you have knowledge of aspects of this that you cannot divulge (". . . because I'm a little more closely involved than you might realise, but I can't show a full hand in writing this"). This suggests to me a conflict of interest and potential bias, yet good science would demand that you disclose such a potential conflict at the time of publishing. You are coming off as biased in this "fight". Perhaps this explains why.
I've been enjoying your "Science of Sport" for over a year now. While I confess that I enjoy your editorials and opinions, I always thought your purpose was to be scientific. The following two out-of-context quotes from the Philosophy section of your third(?) Coyle post inspired me to write.
'This is not a scientific journal...'
'if we restricted ourselves to "fact," this would be a very empty website'
Almost every issue you tackle lacks any real evidence. This is not your fault, but I thought it was your purpose. Generally, you've been very good at pointing out theories which lacked evidence. Often, you've managed to find a little bit of objective data about a subject. This is good.
Understandably, elite athletes may not want to publish training data which they feel gives them a competitive advantage. Also, many of the physiological questions require very expensive testing. So you are generally left analyzing statistics from events.
(For an interesting story on using statistics to determine overall player "skill" or "worth", I liked the book, Moneyball, by Michael Lewis.)
There may be some interesting conclusions for someone willing to analyze the published data. Your split time, personal best data, and temperature data are a beginning.
To summarize, stick with the science. If there's no data, ask the good questions.
Thanks for your informative discussions of sport.
Wow, Ross
You weren't kidding when you said that science was like a "fraternity". It seems the Gamma Kappa Beta boys have rallied around and are enjoying the battle in which they never respond to your main points.
I'm a scientist myself, and I've always enjoyed the fresh, opinionated approach you bring to the posts you write, I think it's great. Mostly because it elevates the sport to a new level and brings the science to the fan. This post is obviously much more "scientific" (which is not your target, as you point out), but it has provided me with endless amusement to read how these guys (most of whom hide in anonymity when criticizing) continue to attack on the points that you have deliberately not discussed in the post.
And then continue to ignore what you have said 1000 times is the purpose. It's a brilliant, and commonly used strategy in debating - avoid the real issue, dance around the peripheral ones. These guys would make decent lawyers.
If you want my advice, ignore these wan#%rs and just keep doing what you do - it's perfect. You've made your point brilliantly, don't give the frat boys any more chance to flex their muscles and work out their penis envy issues.
Cheers, and good job
Paul
Hi all
Andrew, and I am reminded of a review I once received from a journal (Journal of Physiology, I recall), in which it was stated "Your research paper cannot be accepted for publication because your data and conclusion disagree with the prevailing hypothesis for fatigue".
Seriously. So while in principle you're right, the problem with occupying the 'moral high ground' is that the people who do are rarely reluctant to descend into the valley below and blow up anyone who disagrees. As a scientist, you've no doubt experienced this. Unless of course, you're the one doing the blowing up, then science and the peer-review process works perfectly.
Anyhow, I digress. The point (made maybe 53 times) is not the belief in the data; it's the process, or lack thereof, of a study that became a legal defence, which I believe was wrong.
Then to Frank, nice try on the "conflict of interests" thing, but you're wrong. I have never been, and never will be, a mouthpiece for anyone else to express their views. These posts are MY point of view, based on MY experience of the world, and I have no conflict of interests, except for those with other people!
The information I refer to came to light AFTER I had begun the series, and had nothing to do with my motivation or purpose in writing these articles. So good effort at discrediting, but no cigar, I'm afraid. The point I wished to make was that there is much more to this than meets the eye, but that this opinion could not (or should not) be expressed in the posts - it was a personal opinion expressed to you in discussion, which you took out of context to reinforce a position. That would be a good journalist/legal tactic!
As for the quality control of case studies vs. research, that's all well and good, and if you'd like to approach a case study that way, then go ahead. There were so many holes drawing that paper into question that it simply had to be challenged. So if you'd like to reconcile the holes with the fact that it's "only" a case study, then that's fine. I'd prefer to hold case studies to a slightly higher standard than was clearly met in that particular study. As for publishing it because it had interest for readers: great, but at the expense of the scientific process, I can't see how that benefits anyone (apart from the author and those in his circle).
Then to Anonymous, thanks for the mail, and advice. We will certainly stick to science, though I really can't stress that "science" is not our reason for existence. But you've summed it up well, thanks.
And finally, to Paul, thanks also for the compliments. I won't say I disagree with your points, but no need for name-calling! I do enjoy the debate, though I must agree that this long ago devolved away from debate, because I'm saying one thing and everyone is attacking another. So this is a debate held in two separate rooms. Perhaps it's time to move on - next thing I'll have to shut off comments because people will be swearing at each other!
I'm tempted to invoke the words of one Ed Coyle: "This is a minor waste of my time. Don't these people have real jobs?". But that would be disrespectful and arrogant, wouldn't it? So I won't..
Thanks again for the discussion. The Berlin Marathon is coming up soon - I wonder which fraternity will be pi$$ed off if I write something about Gebrselassie's world record attempt? At least it won't be the Jamaican Usain Bolt fans!
Cheers
Ross
I personally think you have done an excellent job of being objective about the data. At no time have you appeared to "choose sides"; you are merely pointing out the flaws in the Coyle article, of which there are many. Having read all the posts, the original Coyle article and the letters themselves, it is not necessary to do an item-by-item analysis of the flaws in Coyle's data: anyone with a basic grasp of the scientific method can pick them out for themselves. You have brought a fresh perspective in looking at the delta vs. gross efficiency question and the miscalculations, things that do not necessarily jump out for most people. Kudos to you.
There would be no progress in science without debate, and you have done a great job of objectively picking apart the science behind the article - keep up the great work. Perhaps the future exercise scientists of the world, or the veteran fraternity, will pick up the ball and run with it: replicate the idea in new studies with solid scientific procedures, and hopefully shed some new light on whether or not muscular efficiency can be improved and, even more importantly, how.
For an exercise physiologist like myself who chose to stop at the master's level, the practical application of that data would be most helpful in coaching athletes looking to improve! For the grad students out there, this type of debate surrounding the process is a great way to make sure your own research is sound.
Perhaps it's a bit late to chime in, but I continue to be surprised by the insistence of "Frank Day", and a comment by "Andy Coggan".
First, to reply to Andy's question: "is there any objective evidence that the VO2 and/or power data presented by Coyle are, in fact, incorrect?" I think that is really the main point of the last 3 blog entries: there is no objective evidence at all, because Coyle provides insufficient data to draw any conclusions one way or another. Isn't the burden on Coyle to provide a sufficient discussion of the uncertainty of the measured data? Why should Coyle profit from any benefit of the doubt, when it is his obligation to provide upper and lower bounds on the range of doubt?
And Frank Day may have a point, that a larger, more important discussion is how muscular efficiency was equated to cycling efficiency without any challenge. I don't think I can disagree with anything he has written, but it just seems that any discussion about right or wrong conclusions is simply premature. This can only happen if we take for granted that the data (measured and fabricated) is accurate and reliable to actually show an improvement, even in gross efficiency. We simply aren't at the point where we can draw any conclusions. It seems that Coyle should have left that section blank.
Another reply, directed to the anonymous poster, about a "surrebuttal" on "gross efficiency" versus "delta efficiency". This was actually in the main article: gross efficiency results can be "skewed", and delta efficiency is better. What this means is that, for example, while the direction might be the same, an 8% change in gross efficiency may translate to only a 2% change in delta efficiency - or one otherwise small enough that calibration is even more important than before.
And to repeat what I said earlier, the report was criticized before "delta efficiency" miscalculation was discovered. That mistake is by far not the focus of criticism, just the icing on the cake, that shows a flaw in process control at just about every level you can imagine.
I'm all for holding everyone up to high standards, but seriously, why hold Ross and Jonathon (for writing a blog!) to a much higher standard than Coyle, writing for a peer-reviewed journal?
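The gross-versus-delta distinction above can be made concrete with a rough sketch. The work rates and metabolic rates below are made-up illustrative numbers, not Coyle's data: gross efficiency is a simple ratio at one workload, while delta efficiency is the slope between workloads, so a shift in baseline metabolic cost can move gross efficiency substantially while barely touching delta efficiency.

```python
def gross_eff(work_rate, metab_rate):
    """Gross efficiency: mechanical power / total metabolic power."""
    return work_rate / metab_rate

def delta_eff(w1, m1, w2, m2):
    """Delta efficiency: slope of work rate vs metabolic rate."""
    return (w2 - w1) / (m2 - m1)

# Illustrative (made-up) numbers in watts: (work rate, metabolic rate)
# at two workloads, before and after an "improvement".
before = ((200, 950), (300, 1350))
after = ((200, 880), (300, 1275))   # baseline cost drops; slope barely moves

ge_before = gross_eff(*before[1])              # 300/1350 ~ 22.2%
ge_after = gross_eff(*after[1])                # 300/1275 ~ 23.5%
de_before = delta_eff(*before[0], *before[1])  # 100/400 = 25.0%
de_after = delta_eff(*after[0], *after[1])     # 100/395 ~ 25.3%

print(f"gross: {ge_before:.1%} -> {ge_after:.1%} "
      f"({(ge_after / ge_before - 1) * 100:+.1f}%)")
print(f"delta: {de_before:.1%} -> {de_after:.1%} "
      f"({(de_after / de_before - 1) * 100:+.1f}%)")
```

With these numbers, gross efficiency improves by about 6% while delta efficiency moves barely 1% - which is why a given measurement error matters proportionally more for the delta measure.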
Hi Ray
No, not too late, especially for a comment like that...Bravo! Well put - you've hit every note exactly right; it's one of the best comments we've ever had. Obviously I'm biased, because you make the same point I took three days to make, but it's 100% right. Would you like to write for us in future?! Ha ha.
Seriously, though, I am hopeful (though doubtful) that yours is the last word on this, because it's outstanding!
Thanks
Ross
Ray,
I believe that you are incorrect in implying that Coyle's data for power and VO2 are obviously flawed, and/or that insufficient detail has been provided to assess their validity. The Methods section of the original manuscript provides enough information for someone reasonably knowledgeable in the field to replicate the study in its essential details (with the possible exception of precisely how delta efficiency was calculated), which is all that one can really expect. Moreover, Coyle has provided additional information demonstrating the accuracy and/or precision of the measurements in his response(s) to the original Letters to the Editor. Thus, while the paper would clearly be much stronger if it had been planned a priori and included better controls, I don't think you (or even those with access to some of the raw data, e.g., Gore et al.) can say for certain that they are wrong. Given that, at this stage I think the only option is to simply accept them at face value, and go from there.
Thanks for the great work guys - I think your general approach is a good one and strikes a nice balance between the science and some thought provoking commentary.
Keep up the great work
Peter
Ross,
It sounds to me as if you may have been the victim of a bad review process with the Journal of Physiology - perhaps you should have submitted your paper to the Journal of Applied Physiology instead? ;-)
Seriously though, without seeing your manuscript it is not possible for me to comment on what the reviewers and/or editor saw in the data and/or your interpretation that made them decide it was unpublishable. I will say, though, that in my 25 y in the field not even once have I personally encountered what I felt was a bad editorial decision. Off-base reviews, yes, but not an inappropriate outcome. Perhaps that's because I've been lucky, perhaps it's because I don't try to publish every scrap of data that I happen to have available, and/or perhaps it's because I'm a member of "The Fraternity" (although I'm pretty anti-Greek, don't remember taking any pledge, and definitely don't pay any dues). I think a more plausible explanation, though, is that editors/associate editors are selected not only for their knowledge and experience, but also for their judgement, and really do try to separate the wheat from the chaff. That doesn't mean that they are perfect (or that they can please all of the people all of the time), but as you point out, the appropriate response when you feel a mistake has been made is to write a Letter to the Editor. That is, indeed, what happened following the publication of Coyle's study, and in that regard one could say that the system worked just as it should. What I'm trying to understand, though, is where blog entries fanning the flames of what in scientific circles is now an ancient controversy fit into the picture. From where I sit, I don't really see anything constructive coming from it...in essence, that ship has already sailed.
Finally, to the anonymous poster "Allan" who complained about mostly anonymous posters who don't respond to all of Ross and Jonathon's opinions: I don't know about others, but personally I'm not in the habit of mentioning things with which I happen to agree, as I don't see such "me tooism!" as serving any useful purpose. My 4- and 2-y old children may benefit from such positive reinforcement, but presumably by the time one is an adult such praise is unnecessary.
Ray wrote:
"And Frank Day may have a point, that a larger, more important discussion is how muscular efficiency was equated to cycling efficiency without any challenge. I don't think I can disagree with anything he has written, but it just seems that any discussion about right or wrong conclusions is simply premature. This can only happen if we take for granted that the data (measured and fabricated) is accurate and reliable to actually show an improvement, even in gross efficiency. We simply aren't at the point where we can draw any conclusions. It seems that Coyle should have left that section blank."
I disagree (not with the part where I may have a point, but with whether this is worthy of discussion). While there could be some question as to the accuracy of the raw data Coyle measured, any inaccuracies could just as well go the other way, and the data he is presenting could actually be underestimating Armstrong's improvement. Whether the data is flawed or not is one argument. But it was accepted for publication on the basis that the data was good enough for publication, presumably based on the palmares of the author. The author has subsequently presented data to suggest it is unlikely there is a substantial calibration error. Therefore, it is also reasonable to argue what the data means (and what could have caused it) if it were true.
It is my impression that most of the people arguing against the validity of the data are doing so because they simply cannot believe that such changes are possible (it is their bias), so it is easier to discount the data than to come up with theories as to how such improvements could occur in an athlete like Lance. If such efficiency improvements were commonplace, no one would be doubting the data. Such improvements are not commonplace. One can argue that the underlying data might be flawed, but if it is not (there is no evidence it is), how this improvement occurred needs to be explained. Coyle tried, but his explanation is hare-brained, in my opinion, for an athlete like Lance. A discussion of this aspect of the paper is equally warranted, in my opinion, but is being ignored by essentially everyone.
Hi Andy
You're lucky never to have had run-ins with the review and editorial process. Thanks for the reply to that one - I must say it was just an aside, not a focal point; it just came up as a thought in response to one of your last posts.
I must just clarify (again) that I'm not fanning the flames of the scientific community here - you're missing the whole reason for existence of what I'm trying to do with the whole site. The Coyle post is the latest of many similar topics covered, though it has 'transgressed' into the realm of technical sports science (maybe a mistake, because it has alienated the vast majority of our readership, who are, frankly, far more important. I'll deal with science in scientific journals if that is my inclination).
This site is intended for those people who are frustrated with the rubbish quality of journalism and sports reporting that they often get, and who enjoy a more intellectual approach to what is often 'corrupted' by the media.
The media approach to the Coyle paper is one such example, and that's why it crosses my radar/field of view. My own interpretation of that is of course subject to biases, but as Ray said so brilliantly, we're not out here to be JAP or MSSE; we're here to inform, provide perspective, and stimulate thought.
So the mention of the review process is, firstly, not the focus of this post - it was a comment brought up in reply to other comments. But more than this, it is not an attempt at fanning the flames of the scientific community - it's an attempt (maybe a failed one; we should ask the 3000 or so people who read this, of whom I would guess fewer than 400 are scientists) to provide a perspective that the media have missed. So just because the ship has sailed for you (or should I say "us" in the sciences), this particular discussion is relevant for all those who are not in the community, who have read the media reports, and who want the detail that the media cannot (or do not) give them.
Ross
"Such improvements are not commonplace."
And how do we know that? As far as I can tell, the only long-term longitudinal data on the topic is that presented by Coyle. Schumaker alludes in his Letter to the Editor to unpublished observations that run contrary to Coyle's data, but that's all they are: unpublished observations. There is also the (slightly shorter) study that we kicked back and forth on ST, but as I said there, it is possible that the subjects' efficiency had already plateaued before data collection ever began.
As I've mentioned several times before, the longitudinal data that I have on myself (which spans >30 y, starting in my late teens) are consistent with Coyle's claims re. Armstrong. Indeed, when you consider the way the neuromuscular system changes with development and maturation, an improvement in muscular efficiency would be expected...if others are not finding this to be true, then a very interesting question is, why?
Hello Andy,
I have to respectfully disagree with you on several points.
I do not imply that the data is obviously flawed. I just say we don't know. Nobody knows, not even Coyle, because no attempt has been made to assess the magnitude of the possible accumulation of errors. Prove me wrong and show me the error function.
If I read the original paper, I see that his weight was measured with an accuracy of +/- 0.1 kg. So far so good, but then, that's it. What's the spread for power? What's the spread for VO2? What's the spread for blood lactate, sometimes taken from the vein, sometimes from the finger? I also see that some equipment was "calibrated periodically". Which periods? The "methods" section is insufficient for me to come up with a reasonable error function.
When pressed by reviewing peers, Coyle did produce references that said what? We have calibration values from 1994 for VO2 measurements, and we have accuracy measurements for an ergometer from 1991. Is there anything else? (I didn't read the links here, so really, is there anything else?)
When pressed again, by a formal complaint, Coyle only produced data from one test. What's up with that?
Are we in a position today to calculate his gross efficiency, the inferior measure, with a 95% confidence interval of +/- 0.5%? Or +/- 5%? We simply don't know.
Furthermore, the original paper talks about measurements they did not make. They presume that during the Tour he weighed 72 kg, and conservatively estimate his VO2max at 85. This is where 8% turns into 18%. Do these figures come from the same well-calibrated and maintained equipment? No - Coyle passively says these values were "reported". We don't know!
The fact that Coyle miscalculated delta efficiency, simply completes the trend of errors in the process from beginning to end.
Now, if the whole goal of this case study was simply to form a hypothesis about muscle efficiency and suggest directions for future studies, this paper might be adequate. But it seems it has been over-interpreted much further than that. Drawing conclusions about what happened between 2000-2005 from measurements made between 1993-1997 is simply not supported.
When we are uncertain, the most we can say is "we don't know". Especially in science, I find it unusual to turn around and say, no one has proven an uncertainty of any magnitude exists, so let's give him the benefit of the doubt.
Hello Frank,
I'm one of the handful of naive people on this planet who think Lance won so many times because he worked hard on the bike, was highly motivated to overcome adversity, tested in wind tunnels, wore seamless suits, had the latest and lightest equipment, and only competed in one race per year. I have no reason not to "believe that such changes are possible", and truth be told, I lack the foundation to know what a change from 21% to 23% efficiency actually means.
If I could criticize Coyle's assumed link between cycling efficiency, and muscle efficiency, I would. But does it make sense to discuss "how did these changes occur?" before we confidently answer "did these changes occur?"
For you to say that the sources of error may also underestimate the improvement further demonstrates the importance of bounding the accumulated errors. If each gross efficiency calculation can be off by, let's say, 2 percentage points, how confident can we be that there was an improvement from 21% to 23%? How confident can we be in a measured 8%, when an error analysis would tell us it's in the range [-8%, 30%]?
I think what is warranted is to write a second, corrected version of the paper. Or, if someone feels compelled, to follow up on the muscle efficiency hypothesis with further supportive or contradictory studies.
I can not do it -- I have a real job.
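The bracketing in the comment above can be reproduced with a quick worst-case calculation. This is only a sketch under the commenter's stated assumption of roughly 2 percentage points of uncertainty per gross-efficiency value; the bounds come out close to the [-8%, 30%] range he quotes.

```python
# Worst-case bounds on an apparent efficiency change, assuming each
# gross-efficiency value carries +/- 2 percentage points of uncertainty
# (the figure assumed in the comment above; not from the paper).
eff_before, eff_after = 0.21, 0.23
err = 0.02

apparent = eff_after / eff_before - 1             # ~ +9.5%
low = (eff_after - err) / (eff_before + err) - 1  # ~ -8.7%
high = (eff_after + err) / (eff_before - err) - 1 # ~ +31.6%

print(f"apparent change: {apparent:+.1%}")
print(f"worst-case range: {low:+.1%} to {high:+.1%}")
```

The measured improvement sits well inside a worst-case band that includes zero and even a decline, which is the whole point about needing a stated error function before interpreting the trend.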
Hi Ross,
Thanks for the compliment. It means a lot for you to say you like what I wrote!
Ray
Hi Ray
Gosh, don't thank me - it's true. You put the key points across far better than I did, in that post and in your last two responses as well!
Thanks again!
Ross
Ray wrote: "If I could criticize Coyle's assumed link between cycling efficiency, and muscle efficiency, I would. But does it make sense to discuss "how did these changes occur?" before we confidently answer "did these changes occur?" "
Yes, it makes sense. It makes sense to discuss both. This data was reported. The author has a track record of sound science. While everyone can make errors, this is not a simple case of two data points. There were several tests, and the trend is constant and clear. If there were a wide-ranging error in this data, we would expect such a clear trend across these several tests not to be so apparent. So, unless the data is fudged (a possibility when the data is "too clean"), I think the presumption must go with it being valid, and it is distinctly impossible to repeat this particular "study". If you can only say that the data cannot be correct, then you are showing your bias.
We are stuck with the data Coyle reported. While it is reasonable to argue the validity of the data, it is also quite reasonable to presume the data is good, so we should also be discussing the reasonableness of Coyle's conclusions should that be the case. We should also be discussing the appropriateness of putting a hypothesis in the title of the paper as if it were shown by the data. If Lance was able to improve his cycling efficiency 10% (the only change of note here, as everyone has the possibility of losing weight), how did he do it? How can the rest of us do it also? According to Coyle, all one need do is change the mixture of muscle fibre types in the legs? That is pretty easy to do if you are a sedentary housewife taking up cycling? It is pretty difficult to do if you are the current world champion. How did he accomplish this, presuming Coyle's data and conclusions to be correct? I personally think Coyle's conclusion as to how this was accomplished (changing muscle fibre type) to be hare-brained, as I have stated before.
In the September 19th part 3 analysis Ross wrote: "Many of you wrote in saying that there was an error in the statement regarding the impact of the change depending on a high or low VO2. And you'd have been correct. I've emailed some authors about this, and what I'd like to do (this is part of our "growth strategy" here at the Science of Sport), is to allow external contributions to topics such as this one. So I will try my best to actually have the people involved respond to these kinds of specifics."
Later in the comments, responding to an anonymous post, Ross and Jonathan wrote: "And I find it fresh that you'd come onto a site and insist and throw out insults of "hypocrites" when I have specifically said in this post that I have contacted people who are specifically involved in this matter to address this issue."
I was wondering if you have had any response from those who you queried? If you have, what did he/she/they say and, if not, how are you interpreting such a failure to respond to your inquiry?
Frank
Frank Day said:
"They should have been able to get access to the raw data then through the discovery process and if Coyle could not have provided it for fact checking then they probably could have had either the paper or Coyle excluded from evidence, which would have probably won the case for the insurance company."
As I understand it, although I may be wrong, the case was won by the Armstrong side entirely on the basis that his name stood as the winner of the Tour de France. The ASO and UCI never attempted to disqualify Armstrong. Even if Coyle's paper were completely discredited, would that give sufficient evidence for the governing body to penalise Armstrong? Biological passports are relatively new and only allow for suspicious riders to be flagged and subjected to higher-frequency testing. I think nothing short of a positive test under WADA rules could have resulted in Armstrong being disqualified and the insurance company winning its case.
On another point: was Armstrong's weight not lower post-chemotherapy? Many people believe he lost a lot of weight post-cancer and that this made him a super climber. Yet from the Coyle paper alone, his pre-season weight was relatively constant over his career.
Thanks for the articles - they've been interesting and have generated some wide-ranging debate.
Whether Armstrong would have won that case with this paper in or out of evidence is of no consequence to me. My point in the post was that some of those who are most vocal against this paper were aligned on the other side during this trial and should have had access to all of the raw data then as part of discovery. If they failed to ask for it, then someone on that team was incompetent. If the judge failed to grant them access when they asked, then he was probably incompetent. If they asked and Coyle showed to the judge that the raw data was either lost or destroyed, then it is unreasonable of them (and others) to demand that it be presented now. It seems now is not the time for that group to be claiming they need the raw data so they can properly critique the paper, or that Coyle's failure to provide ALL OF IT, NOW!!!, is somehow evidence of Coyle's unethical behavior or collusion with Armstrong.
I'm sure you're all done thinking about this, but....
First, I'm not a scientist. But the criticism of Coyle here has not been entirely sound, in my opinion. At best, the criticism speaks to the weight to be given to his data. On a scale of 1 to 10, perhaps these criticisms drop his score to a 5 (I would certainly not say that it drops to zero). Thus, we can say that his data provides SOME evidence of increased cycling efficiency (by some unknown means), tempered by our knowledge that his data collection means were imperfect.
Secondly, I'm not a Lance Armstrong fan per se, but I took up cycling recently and have read about him and professional cycling a lot since then. It seems to me that the circumstances around Coyle's publication would suggest that he is more of a "scientist" than his critics. When Coyle began testing Armstrong, he was a potentially good cyclist (he didn't win the World Championship until 1993), but he had no real palmares at the time. On top of that, he got cancer, which one would think would decrease his likelihood of fulfilling any potential he previously had. In other words, it looked like Coyle had probably picked the wrong athlete to study, but by whatever means Armstrong turned it around. The point being that Coyle couldn't have known Armstrong would be a seven-time TdF champion. So questioning Coyle's data by implying his motives were pecuniary seems wrong. On the other hand, Michael Ashenden, the expert witness opposing Coyle in the Armstrong case, is a long-time critic of Armstrong and has made it his mission to battle blood-doping in athletes (and, not coincidentally, he owns a company which profits handsomely from the mission of keeping sports "clean"). Thus, Mr. Ashenden has a very real and pressing pecuniary incentive to shoot holes in Coyle's work, since he will profit if Lance Armstrong were shown to have been blood-doping rather than to have achieved his victories through increased efficiency and hard work. That is not my understanding of how an "objective" scientist should operate.
My overarching point, I suppose, is that I agree with the comments that there seems to have been a bias in the first few posts of this blog (though they have repeatedly denied this), since Coyle seems to have presented a defensible, if imperfect, research paper regarding the effects of training on cycling efficiency. On top of that, the criticism implied that he may have overlooked the imperfections in data collection due to questionable motives (e.g., fame, money, etc.) but didn't question the work and motives of the scientists criticizing him. I think if you're going to be objective, you need to be scrupulously so.
But I enjoyed reading the thoughts on the subject - good blog!