Back in the saddle for Part 3 of our series on fatigue. In previous posts, we introduced the concept of Anticipatory Regulation and contrasted it with the "limitations" theory of exercise performance. We also introduced the pacing strategy concept, which is often dismissed as "obvious" (because it is!), but which is in fact one of the most complex and meaningful characteristics of exercise performance for physiologists.
What we'll do next is go through a number of different situations, scenarios and "challenges" faced by the body during exercise in order to delve into the concept of anticipatory regulation vs. peripheral fatigue a little more. That series of posts will look at:
- The physiological basis for why pacing strategies exist during exercise
- The special cases of:
  - Exercise in the heat
  - Exercise at altitude and with MORE oxygen
  - Exercise with different availability of energy - the fuel limitations theory
- Studies of exercise where subjects are deceived as to how long they will exercise for
Scientific testing modes
There are, of course, an infinite number of possible study designs and combinations of different exercise types. But it's worth discussing the two predominantly used exercise modes, which are:
- Constant workload trials to exhaustion
- Time-trial studies
In constant workload trials to exhaustion, the exercising athlete is "forced" to cycle, run, row (or do any other exercise) at a predetermined intensity until they themselves choose to terminate the exercise bout because they are unable to maintain that required intensity. This includes the well known VO2max test, where a runner or cyclist starts off at a comfortable pace, and the intensity is increased every minute or two until the athlete cannot continue any longer (or falls off the treadmill, something most physiologists have experienced!).
In this test, performance is measured as either time taken to reach the point of voluntary exhaustion (the athlete says "that's it, I feel close to death and can't carry on!") or the total work done (distance covered, kilojoules used etc.) before that point is reached. What is crucial to recognize is that the athlete has no idea of how long they will exercise for - the instruction is to "go until you have to stop", which means exercise is completely open-ended with the athlete determining the duration. This removes any aspect of pacing, and since there's no known "endpoint," adjusting the pace appropriately would be meaningless and impossible anyway.
In this kind of exercise mode, the researcher is:
- Defining fatigue as an "event". That is, fatigue is a distinct moment in time when the athlete decides to stop. It is therefore an "all-or-nothing" event, black or white, yes or no, on or off. One could compare it to an "off-switch," where the athlete exercises until such time as "the lights go off!". This definition is clearly not appropriate for cycling or running races (though it's entirely appropriate in this kind of study), for when you are taking part in a race, you recognize that fatigue is more complex than simply a point at which your lights go off!
- Establishing at what physiological point the "off" switch is reached. The researcher is able to narrow physiology down to one or two variables and control for others. It reduces the complexity of performance quite dramatically (though it's certainly still very complex), and enables the scientist to adopt what one might call a "black box" approach. They can measure as much as possible, and then infer backwards from the "fatigue point" in order to appreciate what caused fatigue in the first place. When we talk about exercise in the heat, you'll see a great example of how this has been done.
Before moving onto the next type of exercise mode, we'll let Calvin and Hobbes give you a little illustration of the principle of constant workload trials and fatigue:
Basically, this cartoon illustrates what physiology is doing when we fix the workload and let the athletes exercise to fatigue. Effectively, it is stressing the human body to the point where it "breaks", and fatigue occurs. Then, once that has happened, the researcher goes back, analyses the physiological situation when that fatigue "occurred" and deduces that the cause of fatigue was X, Y or Z.
For example, one might make athletes exercise in hot conditions, and find that fatigue always happens at a body temperature of about 40 degrees Celsius (104 F) (this is true, as we shall see in the series moving forward). In that case, one could conclude that the high body temperature has caused fatigue, as we have controlled for everything else. Similarly, you might find studies where athletes exercise to exhaustion in a VO2max test. And because they reach the "VO2max", the conclusion is made that a limitation of oxygen delivery caused the athlete to eventually stop exercise (or fall off the treadmill again!). The fatigue, then, is analogous to the "load limit" in the cartoon, with factors like temperature, metabolites, and oxygen availability all representing "trucks" that cause the bridge to break!
This kind of constant workload test, as I'm sure you can appreciate as you read this, is not exactly representative of what happens when you go out and exercise. Because regardless of what you do, either in training or racing, there is hardly ever a situation where you do not have a choice to slow down before you stop. In a laboratory, doing exercise at a fixed work rate, this choice does not exist for you! You either exercise or you don't, whereas any other exercise affords you the chance to slow down. And, as we've discussed in Part IB of this series, it's this ability to slow down (and the regulation that controls HOW and WHEN you slow down) that is crucially important for understanding physiology.
So what then, is the point of this kind of constant workload study? I certainly don't wish to dismiss it as meaningless, because it is in fact responsible for some of the best research done. It is science at its best, in many ways, as we must control for all the other variables except the one which we are investigating, and this includes the workload. However, what it does is establish the limits of performance in a simplified, manageable model. By defining fatigue so specifically as a single "event" or point, one is able to study the upper or lower limits of exercise quite elegantly. It's extremely useful for scientists to know that fatigue coincides with a body temperature of 40 C, for example, as it allows us to know what is happening at the extremes of performance, and therefore reveals the physiology of "homeostatic failure."
But this technique does NOT explain how performance is regulated, and problems develop when scientists begin to apply these findings to all situations. For example, when a physiologist proclaims that "Impaired exercise performance in the heat of Beijing will be the result of high body temperatures causing fatigue", then they are stretching the truth, and lying to you by taking their finding out of context! This happens very often, and is a big reason for the sometimes aggressive debate between the two models we introduced in our last post.
Then finally, in terms of application, when you are watching the Olympic Games this year, you're not watching this kind of exercise - you are watching the second kind of study, the self-paced trial (with a few differences!).
2. Self-paced exercise time-trials
This is a rather obvious concept - the athlete in the lab is made to do a "simulated" time-trial, over a known duration, and the power output or running speed is free to vary, at the athlete's discretion. Performance in this kind of model is defined by the time taken to complete a known distance, or a known amount of work, or distance covered in a known time. The key to this kind of study is that there IS a known end-point, so the athlete has an idea of what lies ahead when they start exercise. This enables the "pacing strategy" to come into play, as discussed previously.
If the Constant Workload Trial we discussed above represents the "ON-OFF" switch, then this kind of self-paced trial is the "DIMMER" control on your lights. Because instead of a situation where the light is on until it goes off, here we have a situation where the athlete is able to constantly modulate the workrate, up and down depending on the set of INPUTS they are receiving. The key question for physiologists everywhere, then, is how is this achieved? Returning again to the heat example, the challenge is to figure out how the athlete is able to adjust work rate to prevent the limit from ever being attained.
This kind of exercise is also more "realistic" if you want to compare it to most exercise types and actually apply the findings from your study. That is, when you go out for an 8km training run, you are effectively doing a sub-maximal exercise bout which is self-paced, with known duration, just as you would do in the lab. That means that application and inference from this kind of model is possibly more realistic than a model where workrate is fixed and duration is unknown.
Some key differences - "real" exercise is not purely self-paced
There are, of course, some big differences. For one thing, when you go out to run a 10km race, you're not really doing a "self-paced" trial, because there are other athletes in the race who have an equally large impact on your selected pace. So perhaps, for a race situation, one might say "freely paced", and then acknowledge that other runners, motivation, and numerous other factors affect the "self-paced" intensity! The point is that ultimately the athlete is still able to increase or decrease the exercise intensity, and this is a "self-selected" pace, regardless of which INPUTS are responsible for the pace. This sets the scene for the previously discussed "Anticipatory Regulation" (see the model at the end of Post IB for more on this concept).
Another difference is that exercise outdoors throws up a number of variables that are difficult to replicate in a lab. For example, changing wind and temperature conditions, gradients, road surfaces, and surroundings all exert an effect on performance during training or racing, but are limited in labs. We can, and do, try to control for this, but it does of course limit the contextual application of research to outdoor competition, and is one reason why to this day, with so much knowledge, we still actually know relatively little about performance physiology! As we've said, if anyone tells you that they know the TRUTH, they're lying...ignorant...or both...!
Looking ahead - why this is important
As I mentioned at the top of this post, the next few posts will look at pacing strategies during exercise in different conditions. But we'll also look at the constant workload model, and compare the conclusions made from these studies with those made in the self-paced, time-trial studies.
Once again, exercise in the heat provides the best example of this, because in this area, there have been some excellent research studies using constant workrate trials, which have concluded that fatigue is the result of high body temperatures acting on the brain. As described above, the problem is that these types of trials create a situation where that is really the only conclusion possible, because the trial is set up to evaluate a "forced" physiology leading to a distinct failure of exercise. In self-paced trials, one can look at what happens to performance and perceptions of effort long before the "fatigue point" is reached, to help understand how performance is regulated. That reveals that in fact, athletes slow down long before they reach those critically high body temperatures, and suggests that performance in the heat is regulated in advance of the "failure" so easily observed when athletes are forced to maintain one intensity.
But this is all for the next batch of posts, where we'll tackle those four scenarios in turn, and we'll constantly be coming back to this concept of fatigue as a distinct event vs. fatigue as a regulated process!
Join us then!
Ross
Could you give some insight into the issues of selecting study subjects in the first place? Most studies I've read seem to make a point of using a wide range of backgrounds while others have a narrow study group, e.g. only "experienced marathoners".
While I can see reasons for various selection criteria, they're rarely explained. If a study used only avid marathoners, then why and what was the impact?
It seems both the "light bulb" and the "dimmer switch" can vary so much among themselves that a study's conclusions might be misleading. Do you push or twist the switch, and how hard or far, to get the desired brightness?
I still can't believe you're trying to tackle such a complex issue! Keep it up!
Hi Andrew
Thanks for the comments. And yes, still trying to tackle fatigue! In fact, I'm not going to give up any time soon! I expect that this series might last all the way to Beijing, but I swear I will get through all of it, even if it takes until then. Basically, to let the cat out of the bag, I'm trying to "translate" my own PhD thesis, working through it systematically, sub-section by sub-section. So we've done 3 of those sub-sections, only a million to go (it seems)!
To answer your question:
The question dictates what subjects you aim to recruit for the study, and then circumstances tend to moderate it after that!
In other words, you try to match the selection of subjects to what you ultimately want to conclude from the study. But it doesn't always work out like that. Just today, I was reading a well designed study that was looking at the effects of precooling athletes on their performance. Ideally, this study would produce an outcome that some coach or manager would be able to use to help him prepare his elite athletes for competition in Beijing, where cooling the athlete might help with performance in the heat.
Great idea, but they ended up using untrained, though healthy, men, who ran to fatigue over a period of about 20 minutes. That's fine, but it does of course impact their conclusion and, ultimately, the applicability of the research - what if they had used elite runners? Would the finding have been different? I suspect so, and this is a case of good intentions failing to deliver. This often happens because finding the "elite" is very difficult, and often the physiologist has to take what he can get!
The best example of this is the studies that have looked at drugs and performance. Last year, we did a post on EPO, after a Danish group did some research on EPO and performance (you can find this post in the "doping" tab at the top of the page). This study found massive (50%) improvements in performance. But the problem is that the cyclists they tested were actually quite weak, way below elite level. And so while that figure is amazing, it's difficult to know what it means for the Tour de France or professional sport - the difference would become much smaller! But here, it's a case of "take what you can get", because no elite athlete is going to volunteer for this study, for obvious reasons!
But to give you some other insights, there are studies where subjects HAVE to be selected from a small group. I once did a study looking at comparing upper body trained athletes (kayakers) to lower body athletes (cyclists) to see whether training type affects performance in a non-familiar activity (we were trying to see whether the kayakers paced themselves differently during cycling, and vice versa).
For that particular study, the question dictated that we got well trained kayakers, and well trained cyclists, because obviously, we would run a risk of error if there was any discrepancy between the groups regarding training status.
Another example: in all these time-trial studies I've referred to, you generally try to select very well trained cyclists, who have at least state/provincial/county level performance ability. The reason is that when you are studying something this complex, you have to reduce any random variation, and generally, a well trained athlete can be "relied upon" to produce a more consistent series of performances. The last thing you want is for a guy to show up in the lab untrained and inexperienced and blow your whole study up because he has a terrible day for no reason other than "it's just a bad day"!
So if you are looking to find an effect (of heat or diet or altitude, for example) on performance, you need to do the best you can to make sure your group is as homogeneous as possible. This means they must be relatively similar, "cut from the same cloth". If they are not, then the chances of one "outlier" affecting the whole data set increase. One must remember that because of the difficulty of doing these studies, it's often not possible to get 30 guys to come in - a typical study uses 8 to 14 subjects, and that means statistical power is lower - homogeneous groups are key in this case.
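To make that "outlier" point concrete, here is a toy calculation (all the numbers are invented for illustration, not taken from any real study) showing how a single mismatched subject inflates the spread of a small group:

```python
# Toy illustration: how one heterogeneous subject inflates the spread
# of a small study group. All times are invented for illustration only.
import statistics

# 10 km times (minutes) for a homogeneous group of trained runners
homogeneous = [39.8, 40.1, 40.5, 39.5, 40.3, 40.0, 39.9, 40.4]

# Same group, but with one much slower "outlier" recruited as well
with_outlier = homogeneous + [55.0]

def summarise(times):
    """Mean, SD, and coefficient of variation (%) of a set of times."""
    mean = statistics.mean(times)
    sd = statistics.stdev(times)
    return mean, sd, 100 * sd / mean

for label, group in [("homogeneous", homogeneous), ("with outlier", with_outlier)]:
    mean, sd, cv = summarise(group)
    print(f"{label:>13}: mean={mean:.1f} min, SD={sd:.1f} min, CV={cv:.1f}%")

# A plausible intervention effect might be 1-2% (roughly 0.4-0.8 min here).
# In the homogeneous group the SD is ~0.3 min, so such an effect stands out;
# with the one outlier the SD balloons to ~5 min and the effect is swamped.
```

The point of the sketch is just that with 8 to 14 subjects, the group's standard deviation sets the size of effect you can hope to detect, and one outlier can inflate it by an order of magnitude.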
To use your example, if a study was looking at marathon pacing strategies with the intention of seeing how some intervention affected performance, then a very similar group would be desired. Off the top of my head, I would say you probably want a group of 12 males capable of running between 3:00 and 3:30, achieved within the last 6 to 12 months.
However, if you want to do a study that observes and describes performance, or fluid intake or diet strategies, then it is sometimes useful to get a much wider spread, so that you don't make the mistake of "zooming in" on a very tiny portion of the population.
The opposite is true of training studies. Here, for obvious reasons, if you are planning a study to look at the effects of, say, strength training on running performance, you must take guys who are currently not doing weight training. If you recruit subjects who already lift weights, then suddenly you have the problem of their starting point being difficult to control. But it gets a little trickier, because if you want to see how strength training affects running, then you have to also decide what level of runner you take! If you take a very good runner, then it's possible that strength training does almost nothing for him, because he's already close to his peak ability - a 1% improvement is near impossible to find. However, if you take complete novice runners, then of course you'll find a difference, because any training is going to improve their running ability, even if it is "just" strength work. So you see, it's a pretty important consideration!
To sum up, it's never an exact science, but in general, when you have an intervention and you wish to determine the effects of that intervention, a narrower group is desired. The only exceptions to this are things like drugs or medical products, which ultimately have to be made available to everyone. It makes no sense to test the latest blood pressure drug on males aged 25 to 30, when in fact the drug is going to be used by males and females aged 30 to 70, for example!
Having said all this, one can't concede defeat and adopt the approach that a study can ONLY be applied to its own situation and subjects. Some would say that you can't ever generalize research to other population groups, which would kill the whole purpose of research! So one has to generalize, but it's also good to be mindful of the limitations and important considerations!
Whew, that's just about another blog post there!
Cheers!
Ross
Hi Andrew,
Good question about the subject selection, and together with Ross's comment we have just completed a post on a major tenet of experimental design and research methods!
As Ross mentioned above, in fact it can have a major impact on your experiment. Let me add some additional points here:
First, if you are doing repeated trials and select untrained subjects, then you might see them get better as the study progresses. We call this a "training effect," and in reality you are exposing them to a training session each time you test them. The result is that they perform better in the last trials.
Therefore, when possible, it is almost always preferable to recruit trained (if not highly-trained) subjects as it reduces the training effect.
Second, experienced subjects already know how to pace themselves to a certain extent. For example, if we recruit five-hour marathoners and ask them to do a 10 km time trial in the lab...well, you can imagine that they might pace themselves differently than runners with a sub-three marathon PB.
Third, regardless of what population you are recruiting, it is important to assemble what we call a homogeneous group. This means that all the group members are similar to each other. In physiological testing, this means that they are of similar age, body fat percentage, ability, and training status.
Having a homogeneous group will help reduce the variation and help us create a more valid testing environment. If we have lots of variation, i.e. if one runner has a 10 km PB of 29 min, while another a 40 min, while still another 55 min, then we are likely to see differences in the way(s) each responds to our experiment. This then makes it hard to interpret what is actually happening.
So the population can indeed make a huge impact on the outcome, and therefore should be chosen carefully.
Excellent question, though, and very relevant to understanding how to interpret the results from studies.
Kind Regards,
Jonathan
Wow, you guys are awesome. Thanks for the informative answers!
If you don't mind a follow-up question... How would you define "homogeneous" for an Anticipatory Regulation study?
In order to isolate the impact of one input, you'd have to homogenize the other inputs, but the brain is one complex input! Would you have to use subjects that not only are physically homogeneous, but also with similar psychological profiles? What if certain mental profiles for a given physical standard don't exist?
It could take forever to analyze the combinations of studies before drawing a useful conclusion. Is this a case where rather than a reductionist approach, an "ensemble" type of study would yield a stronger answer?
Baseball analysts face a similar issue - quantifying success factors in that sport is so complex that the best solution might be to simply average everyone's flawed analysis. (See this article for an explanation.) Maybe that's what your thesis will do?
I didn't mean to hijack your comments! Very good discussion, though.
Very much enjoying your series.
You might be interested in reading the article in today's New York Times, "Researchers Seek to Demystify the Metabolic Magic of Sled Dogs." Scientists are studying the dogs' amazing resistance to fatigue, and they have found that "the dogs somehow change their metabolism during the race." The scientists involved in the study are "confident" that humans have the same switch that could be turned on during extended exercise. If they can just figure out how. :)
I really enjoy this series, and have never really thought about the problem of determining how pacing is regulated physiologically before. In trying to understand the basis for the anticipatory pacing I am having trouble seeing a clear dichotomy between the two models you have presented.
Muscle fatigue is often defined as a decrease in muscle function, which could include decreased force output, decreased power, decreased shortening velocity, or a change in energy usage by a muscle. During a series of activations, all of these characteristics of muscle may be changing in different motor units, to different degrees, and all at different points in time. For example, in a "fatigued" muscle there could be a prolongation in the time for relaxation. Because of this, the nervous system can decrease the rate at which that motor unit is recruited, saving energy while still maintaining the same force output as a "non-fatigued" muscle. In this situation the muscle would be fatigued, but submaximal force production would not be altered. It would then seem difficult to completely separate the nervous system from the muscular system in analyses of performance, and also to equate muscle fatigue with organismal fatigue.
Along the same lines, muscle can not only decrease its function but also increase it, as seen during force potentiation. Therefore, over the course of a race, muscle function may be changing in complex ways that could include both increases and decreases in function.
Ultimately, I would think it would be important to separate out pacing from “fatigue (organismal or muscular)” and not create a dichotomy between the two ideas. Not sure if you would agree or think I am off base. As I am not too familiar with the pacing model I am really excited to see your future posts!
Hi Sean
Good question, and you're at least partially right. The dichotomy between the two models is actually not as large as it would seem when you approach the system as you have done. What you have done is look at the system as a whole, and included neural regulation of muscle function in your explanation. Once you do that, then you recognize that the two models are in fact complementary.
But I have to stress that this is not normally done. What I've tried to do in the first 3 posts of the series is show how research has in the past confined itself to a "black box" approach. That is, according to what I've said in this post, researchers have tended to isolate a variable (or a group of variables) and then conclude that it causes fatigue by means of A, B, and C. The heat example is the best - once you overheat, your hot brain says "no more", and it fails to activate muscle.
That is very clearly a limitation model, but it does not explain how performance is regulated during self-paced exercise. To take that finding (which I'll tackle either later this week or early next) and say that fatigue is caused by overheating is very clearly a massive departure from pacing!
So in that instance, there is a very large dichotomy between the two models - one is actually incorrect, though the physiology that underlies it is very good. It's the application that falters. The same goes for altitude and O2 availability, fuel use, metabolites, even leaky calcium channels. Just because people STOP exercise because something in their physiology limits them does not mean the same thing slows them down! And that's what has been said.
So while I agree that the dichotomy does not NEED to be that large, it is, because of how both parties (because the Central Governor model has also forced something of a contrived, amplified difference between itself and the peripheral model) have set it up. In other words, because it's been set up as the antagonistic model, the dichotomy has been blown out of all proportion! What we'll do with this series is try to reconcile the two as much as possible.
I suspect that I've already hinted at this "marriage" between the two, and you're hinting at it in your question as well! But we'll see that the two models are not necessarily disagreeing with one another, just that they are being applied incorrectly.
As for the whole issue of neural strategies to delay fatigue, you're quite right - there is some evidence that the brain will change the firing rates to modulate a reduction in muscle force generation capacity. But, very importantly, this has only been shown for small muscle groups, during isometric or isokinetic activities. In other words, it has never been shown in dynamic exercise like running or cycling. A lot of those studies literally looked at the muscle that controls your index finger (seriously!). That's not to say it doesn't happen during dynamic exercise, but it's not been measured.
We'll look at the studies of muscle function later in this series. In particular, there is one study of muscle that shows quite clearly that fatigue happens even though there is a massive reserve! What you are suggesting is that the reserve may exist, but that some muscle fibres are also fatiguing, and that the brain alters the recruitment patterns to defend force output. I guess the question then is: "why doesn't the brain just activate more muscle to defend the force output?".
So give it some more time and you'll see that you are right, there is a closer relationship than people think, but the argument put forward that fatigue is the result of some peripheral change can't explain self-paced performance - that dichotomy is real!
Ross
Hi Andrew
Exactly! That's the difficult thing about doing those kind of self-paced, "integrated" studies. But it is possible to control many of the variables.
But it's not difficult to see why the fixed workload trials are often chosen instead - they afford the researcher the luxury of "control", in that fixing the workrate eliminates one component of a very complicated system.
In the anticipatory model, the workrate is not only free to vary, but changes in workrate also cause ALL the other variables to vary, so it amplifies the complexity!
Having said all that, the self-paced trials are actually known to be LESS variable in terms of performance. That is, an athlete will vary less from one day to the next when doing a time-trial than when they do a fixed workload trial to exhaustion. I think this has to do with familiarity (most people don't do an exhaustive trial very often) and also motivation (what's the incentive to keep riding for 45 minutes when you can stop after 30 minutes?).
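To show how that day-to-day variability is usually quantified, here is a small sketch of the standard reliability statistic, the coefficient of variation (CV) across repeated trials. The trial values are invented purely to illustrate the calculation, not drawn from any specific study:

```python
# Coefficient of variation (CV) across repeated trials: the standard
# way of quantifying day-to-day reliability of a performance test.
# The trial values below are invented purely for illustration.
import statistics

def cv_percent(trials):
    """CV = (SD / mean) * 100, the usual reliability statistic."""
    return 100 * statistics.stdev(trials) / statistics.mean(trials)

# Four repeats of a simulated 40 km time-trial (minutes to completion)
time_trial = [58.2, 57.9, 58.6, 58.1]

# Four repeats of a fixed-workload ride to exhaustion (minutes survived)
time_to_exhaustion = [31.0, 42.5, 27.8, 38.1]

print(f"time-trial CV:         {cv_percent(time_trial):.1f}%")
print(f"time-to-exhaustion CV: {cv_percent(time_to_exhaustion):.1f}%")
```

With invented numbers like these, the time-trial CV comes out under 1% while the time-to-exhaustion CV is nearly 20%, which is the general pattern described above: self-paced performance repeats tightly from day to day, while "time survived" at a fixed workload swings widely.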
So in that respect, the self-paced model actually has LESS variability. It's in the physiological interpretation that the challenges come.
However, this doesn't impact too much on the selection of subjects. They just have to be appropriately trained, as Jonathan and I have said above. Other than this, you actually don't want to control for too much, because it prevents an integrated finding.
So, I would define "homogeneous" in terms of performance ability and training history, nothing else. Remember, it's almost always a performance trial, so the most vital thing is that the subjects are solid, decent performers. I would, for example, choose only sub-35-minute 10km runners if I were doing a trial to examine the effects of some drug on 10km running performance.
The reductionist approach, for its part, is really important, because it helps to show us what the "components" are. For example, without that reductionist approach, we'd never appreciate that humans don't exercise beyond a certain body temperature! So that is the value of those studies. But if you want to understand pacing, and performance in the Olympic Games or the Tour de France, for example, then you have to take an integrated view. Because when Kenenisa Bekele is racing against Zersenay Tadese in Beijing, they most definitely are not stuck at a constant workload, and the surges, changes in pace, and mid-race "lulls" all can't be explained by a reductionist view.
So to me, constant workload science represents the "building blocks", while the anticipatory model is the attempt to put them all together.
That is probably the equivalent of "averaging out" the two, though it's also a little more complex than this. Perhaps the better description is a "marriage" between the two, which is certainly a reasonable expectation!
Ross
You mentioned the external factors outside of the lab, such as other people in the race, contributing towards the pacing strategy... I have always wondered what exactly the beneficial effect of "sitting wheel" in a running event would be. From personal experience, I know that the benefit exists, as there have been a few races where I have managed to hang on the back of a better athlete and been pulled to the finish line. In this case, I have effectively been adopting somebody else's pacing strategy, and it is definitely harder to take my turn and do the running for a while than it is to sit on the back. Drafting is surely not the culprit for runners, as the speeds are too low, but there must be a reason for this happening. Isn't this the reason why pace-setters are employed by race event organisers - to break up the natural pacing strategies and speed up the top runners for the first portion of the race?
Forgive me if it's a dumb question, but related to your hydration and cramps series, are muscle cramps related to fatigue?
I ran a marathon last week. I chose a very conservative pacing strategy (5 heartbeats lower than usual, and 5 minutes slower at the half) because the finish is tough, and I wanted to conserve enough energy to finish well and produce relatively even splits. It was an evening marathon, which started out warm (20+ deg. C), but started cooling off at the half. I've had little heat acclimation, as it was really the first warm day. I drank plenty of water and Aquarius, and some Coca-Cola at the half. Not too much liquid, maybe at one point too little (some congestion at one stop, so I continued).
After having said all that, I suffered a minor leg cramp at 31K, on a small downhill, before the hard part even started :-( I walked a little, stretched it out, and then resumed running for the next 6K, at an almost 1 minute slower pace. At 37K my legs started tightening all over (hamstrings and calves), and I decided to walk back to the showers, with my tail between my legs.
I've completed 6 marathons in the last 4 years, and this particular one twice (but with bad finishes), and was comparably trained except for less emphasis on interval training (speedwork), but I still think I should have been able to finish well.
Anything obvious?
Regards,
Ray
PS: Hurry up with the fatigue series, I'm getting tired of waiting.
Hi Ray,
Unlike the gents running this blog, I'm not a trained professional, but I couldn't help but try and offer one comment on your question.
Question first, then comment. What was the nature of the downhill in your race, and how much downhill running do you do in your training? I'm a fell runner, and the eccentric contractions forced by downhill running can play havoc with your legs, particularly in long races (I suffered horrible cramps coming off the 2nd mountain of the Three Peaks race in England, and the last 16km were excruciating). There are some studies that I believe could link the muscle damage done by downhill running to cramp reactions, so perhaps you can shed more light on the route?
On another note, I would urge caution in using HR as a precise gauge of intensity (especially down to the precision of 5 beats). HR has proven less precise than was thought when training with HR monitors first became popular.
Too many factors can influence HR on any given day (for better or worse) apart from your fitness level. You are actually better off focusing on Perceived Effort Level, especially in a race. Your HR profile can also change quite significantly through training, so unless you get regular tests done you are already engaged in a lot of guess-work. With Perceived Effort Level you are always dealing with the latest feedback from your body.
That being said, I train with an HR monitor a lot myself, but use it more as a generic guide. For instance, when I returned from a bad injury, I would note that my slow runs could trigger an HR as high as 155bpm at 12kph (base runs), but after a few months of being back in training, I noted that it would often drop below 140bpm at such running speeds. How I use this feedback to change my training is complicated, but suffice it to say, I make variations once a given speed consistently seems to have become "too low intensity".