Imagine this scene: You've completed months upon months of training and you're about to embark upon your marathon quest. The "Star Spangled Banner" has been sung, and a nervous tension hangs in the air. Crack! The starter's pistol sounds and … you wait and wait and shuffle a few steps and wait some more. That's what tends to happen to mid-pack runners in the bigger marathons. They get stuck in a human traffic jam that doesn't even begin to clear until they cross the starting line, which is well after the race clock has begun ticking away.
When tens of thousands of runners congregate -- as is the case for premier events like the Chicago and New York City marathons -- a runner may not cross the starting line until as long as 22 minutes after the race begins [source: Sinha]. Those minutes are valuable to a runner who's trying to beat a friend's time, qualify for an exclusive race like the Boston Marathon or simply reach a predetermined goal. That's why most marathon race results indicate a runner's "gun time" (the time from when the actual race clock began) and her "chip time" or "actual time" (the time from when she crossed the starting line). The chip or actual time is the true, net result.
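The arithmetic behind that distinction is simple: a runner's net time is her gun time minus the delay between the starting pistol and her crossing of the starting line. Here's a minimal sketch in Python; the function name `chip_time` is just an illustration, not part of any official timing system.

```python
from datetime import timedelta

def chip_time(gun_time: timedelta, start_offset: timedelta) -> timedelta:
    """Net ("chip") time: gun time minus the delay between the starting
    pistol and the moment the runner actually crossed the start line."""
    return gun_time - start_offset

# Example: a gun time of 4:05:30 for a runner who took 22 minutes
# to reach the starting line works out to a net time of 3:43:30.
gun = timedelta(hours=4, minutes=5, seconds=30)
offset = timedelta(minutes=22)
print(chip_time(gun, offset))  # 3:43:30
```

For a back-of-the-pack runner in a huge field, that 22-minute correction can be the difference between missing and making a Boston qualifying standard.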
For years, race organizers depended on numbered race bibs with pull-off tags to record marathon times and preserve the correct order of finish. The runner would cross the finish line, her race number and finish time would be written down, and a pull-off tag at the bottom of her race bib (which included important information like name and age) would be ripped off and put on a spindle in the order of finish [source: Mitchell]. As marathons grew larger, finishers were hustled into finishing chutes until volunteers could tear the tags off each athlete's bib. But how could organizers account for the difference between Runner A's and Runner B's start times?
Enter the timing chip and subsequent marathon tracking technologies.