Length: 1900 Words
Reading Time: 7-9 Minutes
Target Audience: Coaches
“It is a capital mistake to theorize before one has data.”
— Sherlock Holmes
Effective coaching and programming is about more than numbers. The athletes who score highest on performance tests are not always the best players on the pitch. Even though great players are made up of much more than just their physical abilities, these abilities are still essential, and evaluating them is an integral role of the coach.
Performance testing allows us to do just this.
Most managers and coaches implement performance testing with their teams and athletes. Whether or not a standardized battery of tests is the best way to identify weaknesses and to gauge progress is debatable, but nonetheless any of us who work as coaches will inevitably carry out some form of periodic testing with our athletes, and if something is worth doing, it is worth doing right!
When it comes to data collection, we are only as good as our measures. The standard approach to testing involves assessing our athletes at the beginning of the sporting year and again following a tough pre-season regime, before the competitive season swings into action. Time and time again I have seen coaches follow this approach and either pat themselves on the back when they see the massive improvements they have produced with their athletes, or, conversely, wrongly declare the conditioning program completely inadequate based on the results observed in re-testing.
Yet, the common problem I too often see is poorly conducted testing procedures with too many factors that can significantly skew the results, leading to false conclusions.
Below are the main considerations you should know about when carrying out, and repeating a performance testing session with your teams and athletes.
Consistency of testing protocol and instruction:
I apologize if this seems obvious (but you would be surprised what I’ve seen): when repeating performance testing, you should use the exact same battery of tests used in the initial testing session. If you change the tests you use to measure aerobic endurance, strength, etc., then you can’t compare the data in an objective and precise way. At best, you will only be able to gauge roughly whether the athlete has improved.
The way you carry out testing should also be identical to how it was conducted at baseline. You should have a clear protocol, ideally in written and diagrammatic form, describing how the test is to be set up and carried out. You can find a non-exhaustive list of standardized testing protocols HERE. By having clear set-up instructions, we minimize the variance in testing results from factors such as distance measurements, angles of cones, etc.
What we must also consider is something called inter-tester variability: if two people carry out the same tests on the same athletes, will the results obtained differ? In most cases they will, so our aim is to minimize this difference. Ideally, the same tester would carry out the tests on all occasions, but this may not always be possible, so we minimize this variability by having standardized protocols for our testing that everyone is briefed on and fully understands. Inter-tester variability is usually high in maximal strength and strength-endurance testing: if two separate testers have different ideas on what constitutes a below-parallel squat, or an adequate push-up, then uncertainty creeps into our test results. A consensus on protocols and agreed standards should be drafted and understood by all involved prior to testing.
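If you want to put a number on inter-tester variability, one common approach is to have both testers measure the same athletes and compute the typical error of measurement from the paired differences. A minimal sketch below, with invented jump-height numbers purely for illustration:

```python
# Hypothetical illustration: quantifying inter-tester variability.
# Two testers measure the same 5 athletes' jump heights (cm).
# All numbers are made up for illustration.
import statistics

tester_a = [41.2, 38.5, 44.0, 36.8, 40.1]
tester_b = [42.0, 37.9, 44.6, 37.5, 39.4]

# Per-athlete difference between the two testers' scores
diffs = [a - b for a, b in zip(tester_a, tester_b)]

# Typical error of measurement: SD of the differences divided by sqrt(2)
typical_error = statistics.stdev(diffs) / (2 ** 0.5)

# Express it as a percentage of the mean score (coefficient of variation)
grand_mean = statistics.mean(tester_a + tester_b)
cv_percent = 100 * typical_error / grand_mean

print(f"Typical error: {typical_error:.2f} cm ({cv_percent:.1f}% of mean)")
```

If the typical error is large relative to the improvements you hope to detect, the testers need to re-align on standards before the data can be trusted.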
An often-overlooked element is variance of instructions given to the athletes. We see that most tests have a learning curve, which means athletes do better by simply practicing the test. Ideally, we should give the athletes the exact same instructions every time we administer a test, make sure they fully understand what is required of them, and time permitting, allow them to do a standardized number of practice trials or walk-throughs.
Consistency of testing environment:
This is arguably the most common issue I see with field-based sports testing. Our aim when carrying out a follow-up testing session is to keep the conditions identical to the initial testing in order to minimize confounding variables. Yet I too often see pre-season testing carried out on a dark, wet, muddy pitch in the bitter cold of a winter’s night, only for the follow-up testing to be scheduled for a bright spring morning with a firm, dry pitch. Can we really compare the results of a yo-yo test or 40m sprint from initial testing to follow-up? Of course not! Even if the players had slightly regressed in their abilities, I would argue that the data would likely show increased performance, simply due to the more favorable surface and ambient conditions.
When carrying out periodic testing, make sure that the testing environment is as close to identical as possible. Factors to consider include:
· Wind speed and direction
· Time of day
· Temperature and weather
· Pitch surface condition
Consistency of motivational environment:
Performance tests by their very nature require maximal effort from our athletes. This means that the athlete must push themselves well outside of their comfort zone. With this in mind, we should consider the motivational environment we create for our testing sessions and how variances in this could significantly impact results.
Let me give you an example:
Imagine that during the initial testing it is simply you (the coach) and your athletes, and the athletes don’t really know each other that well yet so they remain fairly silent for the night. No one else is watching the testing, it is simply you carrying out the entire testing session.
Now, after a tough pre-season it is time to re-test the athletes (who have now bonded and become good friends) to see if they have improved. But this time it is not just you present for the testing. The entire management team has decided to come down and watch the players, and the players are aware that management is still deliberating over who will be selected for the first team. In addition to the management team, a team of the opposite sex has just finished their training session and decided to hang around and watch the testing session too.
Would you expect that the presence of management and another team, coupled with the increased encouragement from players and spectators, could influence the motivation of your athletes and significantly impact the results?
Aiming to replicate both the physical and motivational environments between testing sessions is an important consideration to factor for any coach striving to collect high-quality data.
“Data doesn’t lie. It is not influenced by emotion or bias.”
Consistency of testing order:
You should now see the importance of consistency and minimizing confounding variables when it comes to testing. A typical testing session in a team setting consists of several tests carried out on a large number of athletes simultaneously. From a practical perspective, this can be a logistical nightmare and usually involves the athletes splitting into separate testing groups and carrying out contrasting tests at the same time (i.e. one group doing a repeated sprint test while the others carry out their strength tests).
It is therefore important that you record the order in which each athlete completes their tests and replicate this order in the repeat testing, to ensure validity and negate any order effect in the testing data. For example, if a particular athlete carried out an endurance test (yo-yo test) prior to a strength test (3 rep-max squat) in the initial testing, and then performed the tests in reverse order at the re-testing, this would most likely affect the outcomes and make it difficult to compare results.
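The bookkeeping here can be as simple as a per-athlete log of test order from the baseline session, replayed at re-testing. A minimal sketch of one way to do it (athlete names and tests are hypothetical):

```python
# Minimal sketch: log each athlete's test order at baseline so the
# same order can be replicated at re-testing. Names are hypothetical.

baseline_order = {}  # athlete -> list of tests, in the order completed

def record_test(athlete, test):
    """Append a completed test to the athlete's baseline order."""
    baseline_order.setdefault(athlete, []).append(test)

# Baseline session: one group runs first, the other lifts first
record_test("Athlete 1", "yo-yo test")
record_test("Athlete 1", "3RM squat")
record_test("Athlete 2", "3RM squat")
record_test("Athlete 2", "yo-yo test")

def retest_schedule(athlete):
    """Return the order the athlete should follow at re-testing."""
    return baseline_order[athlete]

print(retest_schedule("Athlete 1"))  # ['yo-yo test', '3RM squat']
```

A spreadsheet does the same job; the point is simply that the order is written down at baseline, not reconstructed from memory months later.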
Account for fatigue:
Does this scenario sound familiar to you?
The team has had a tough couple of months of pre-season training, putting in some serious work: intense gym sessions, brutal nights of sprinting on the cold and muddy pitch, and maybe even a couple of Sunday mornings repeatedly going up and down the highest hill in the parish. The team has strung together 10 or 12 weeks of hard training and it is time for a league or challenge match. But on game day, everyone looks a bit off the pace, second to every ball, and generally slow and unfit. What is the usual reaction?
“The players aren’t fit! They need more running.”
I see this time and time again, and it’s frustrating. After months of hard training, do you honestly think fitness is the issue? Or maybe, is it the fact that the players haven’t had a break in months and have accumulated a sh*t-tonne of fatigue?
We must train hard to maximize adaptation and build better athletes, but this comes at the cost of accumulating fatigue. At some stage we must repay this fatigue debt by prescribing our athletes a period of lower volume and intensity training in order for them to fully recover, enabling them to perform at their best and showcase their high levels of fitness. If we don’t, we just keep accumulating fatigue and push our players deeper into a hole of poor performance and increased injury & illness risk.
What has this got to do with performance testing?
We generally place re-testing at the end of an intense training block, right?
Chances are our athletes have accumulated a lot of fatigue if they have been training hard for a prolonged period. If we don’t account for this and allow a period of full recovery prior to testing, the enhanced fitness of our athletes may be masked by excessive fatigue, and we end up with disappointing re-testing results, which could lead to the false conclusion that our players have not improved, or have even regressed! This is an important consideration if the purpose of the re-testing is to determine the efficacy of the conditioning program.
If we want to see considerable improvements in our performance testing scores, then we should schedule a recovery period of reduced training load prior to our re-testing to give our athletes the best chances of showcasing their abilities.
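When you do compare baseline and re-test scores, it also helps to decide in advance what counts as a real change rather than measurement noise. One common (but not universal) heuristic in sports science is to treat 0.2 times the squad's baseline standard deviation as the smallest worthwhile change. A sketch with invented yo-yo scores:

```python
# Hypothetical sketch: flag which athletes meaningfully improved between
# baseline and re-test, using 0.2 x the squad's baseline SD as a common
# "smallest worthwhile change" heuristic. All data is made up.
import statistics

baseline = {"A": 17.2, "B": 16.1, "C": 18.4, "D": 15.6}
retest   = {"A": 17.9, "B": 16.2, "C": 18.5, "D": 17.0}

# Smallest worthwhile change: 0.2 x between-athlete SD at baseline
swc = 0.2 * statistics.stdev(list(baseline.values()))

results = {}
for athlete, base_score in baseline.items():
    change = retest[athlete] - base_score
    results[athlete] = "improved" if change > swc else "unclear"
    print(f"{athlete}: {change:+.1f} ({results[athlete]})")
```

Note that "unclear" is not "no improvement"; it just means the change is smaller than the noise you should expect, which is exactly the trap fatigue-masked re-testing falls into.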
Let’s Wrap Things Up:
When carried out effectively, performance testing can be a valuable tool for coaches. It provides us with objective data on our athletes’ capabilities, tracks progress, and identifies athletic weaknesses. We can also use this data to enhance buy-in and motivate our athletes if communicated appropriately.
Reliable and appropriate data can enhance our planning and inform our decision making, leading to greater progress and performance from our athletes.
However, if performance testing is poorly conducted, it generates unreliable data which can lead us to wrongful conclusions and misguided decisions.
Effective and reliable performance testing relies on the pillars of:
- Consistency of testing environment
- Awareness and minimization of confounding variables
- Effective communication and consensus between coaching staff and athletes
As a coach myself, I know it is not always possible to account for all the factors above, and that we work in a dynamic, fast-moving environment with different personalities and levels of competence, but as with everything, we control what we can control.
With some careful planning, consideration and effective communication there is no reason that high quality performance testing data cannot be obtained with almost any athlete or team.
Evaluate, Plan, Execute!