Adventures in stupidity

OPENING SCENE : our protagonist sits at his desk, working

ENTRANCE cheap RC helicopter and operator

OPERATOR (in the style of Butt-Head)

huh huh ... huh huh ... huh

The helicopter comes to rest on the edge of the protagonist's workspace.

PROTAGONIST

If that gets within arm's reach, I'm going to break it.

OPERATOR

Why?

Here’s the thing: I’m not good at hiding when I’m annoyed. And my body language is so clear that sometimes people shield their eyes when they walk past. But this fucking idiot can’t pick up on it even though I offered to break his toy.

Comments

Marathon pacers matter (Seattle Marathon 2007 to 2011)

Update 12/3/2011: Unfortunately, the information below can’t quite be trusted. I’ve taken a closer look at the results I pulled from the Seattle Marathon site at the time and the results today, and I can definitely say that the data I pulled was not official and that today’s results are also incomplete. I’ll need to update this post and all this research some time when official, complete results are available. What’s wrong? In the data I initially pulled, I see things like the women’s winner, Trisha Steidl, with splits of 1:10:29 for the first half and 1:34:09 for the second half for a 3:03:38 (which is wrong and doesn’t add up to that final chip time) – today the results say 1:29:32 and 1:34:09 (which seems right). This suggests today’s results are closer to correct; however, the pacer chips are missing from today’s results. Anyway – I’ll work on this again when I can…

I’ve started collecting the data from the Seattle Marathon, 2007 to present, and am doing some analysis on it, specifically from the perspective of the marathon pacers, since I organized the pacers this year and we just finished the race. If I find the time to keep analyzing the data, this will probably be the first of many posts on the subject. If I don’t, this might be the first of one.

Assuming I’ll keep writing on this – here are the methods I’m using for the data source…

  1. I pulled down all full and half, male and female results from 2007-2011 and dumped the data into Excel.  This represents over 45,000 finishers across the two races over that period.
  2. I did a little data cleansing – many records contained no data for the first split, so I just turned these into 0’s for processing in an Excel PivotTable.
  3. I used an Excel 5-minute rounding function to approximate the pacer that a given finisher would be behind (e.g. a full finisher crossing the finish line at 4:13:35 evaluates to a 4:15 pacer, a full participant crossing the midway point at 2:02:41 would be behind the 4:10 full pacer, and so on).
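For anyone curious, here’s a rough sketch of that 5-minute rounding in code. This is Python rather than the Excel formula I actually used, and the helper names are made up, but the logic is the same: round a time up to the next 5-minute block to find the pacer a runner is behind.

```python
# Rough sketch of the 5-minute rounding described in step 3 (Python instead of
# Excel; helper names are invented for illustration).

def to_seconds(hms):
    """Convert an 'H:MM:SS' chip or split time into seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

def to_hm(seconds):
    """Format seconds back into 'H:MM' - good enough for naming a pacer block."""
    return f"{seconds // 3600}:{seconds % 3600 // 60:02d}"

def pacer_block(seconds, block_minutes=5):
    """Round a time (in seconds) up to the next 5-minute block."""
    block = block_minutes * 60
    return -(-seconds // block) * block  # ceiling division

# A full finisher at 4:13:35 evaluates to the 4:15 pacer...
print(to_hm(pacer_block(to_seconds("4:13:35"))))      # 4:15
# ...and a half split of 2:02:41, doubled, puts a full runner behind the 4:10 pacer.
print(to_hm(pacer_block(2 * to_seconds("2:02:41"))))  # 4:10
```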

Through 2010, the Seattle Marathon only offered full pacers for 3:30, 3:45, 4:00, and 4:45 (unhappy with this, in 2010 I lobbied for us to add 3:10 and paced that myself).  In 2011 I organized the pacing and changed the pacer structure to offer more times (3:00, 3:10, 3:15, 3:20, 3:30, 3:40, 3:45, 4:00, 4:15, 4:30, and 4:45).

When I’ve tried to process this in the past, I frequently looked at the finish result. It’s nearly impossible to draw conclusions from that because (if you hadn’t heard) a marathon is hard and there are all sorts of reasons people do or don’t hit their times.  The more important question is ultimately “did people make their goals?” but without a questionnaire that’s nearly impossible to answer.  It *is* pretty easy to tell from the first-half split, though, where people were setting their goals, and looking at some of that data, I see a clear indication that the pacers and pace groups matter.

The following chart shows a plot of full finishers over these 5 years of races and highlights the pacers that were offered for the races in those years.  The data shown is based on what 5 minute group they were running with at the first half split (not the finish) and the red rings highlight the 5 minute segments for which we had a pacer.

  • In 2007-2010 there was pretty clear clustering in most years, with large groups of runners gathered around the pacer segments. Sometimes the spike is a little outside the circled block, but I believe there is a pretty clear visual correlation (this includes 2010, when I had a group on track for 3:10 at the half).
  • In 2007-2010, the field outside the pace groups shows a fairly smooth distribution of finishers. I think this further suggests that when there isn’t a pacer to associate with, runners tend to distribute themselves more evenly.
  • In 2011, the distribution is much choppier, with more clusters of runners in the race, most of the clusters inside a pace group and most of the sparse sections outside the pace groups.

This doesn’t help us understand whether people are achieving tougher goals, and there is no sophisticated analysis here at all (maybe I’ll get to some of that in a later post), but I believe it clearly indicates that runners will choose to run with a pace group if one is offered in the race.

Comments

Life in the northwest

A couple of weeks ago, as I was leaving my house and walking down the steps, I felt and heard the familiar, distinctive, and disgusting sound of a snail shell being crushed under my shoe. At the time I had no reason to doubt my intuition: “This was the single most disgusting thing I will experience all month.”

Fast forward to going out on a cold winter night and finding a fresh, live slug “incorporating” some of the crushed remains. Fast forward a couple of minutes to the moment when I forgot that the replacement slug was devouring its ancestor.

I was wrong.

Comments

emacs – the only guide you’ll ever need

I use emacs every day, for as much of my computer work as I can, and have for about 9 years. It was not easy to learn, though; I used it casually for about 8 years before starting to use it seriously and all the time, in about 2002.

Learning was harder than I think it should have been – primarily because the main tutorial (invoked with C-h t) focuses on lesson after lesson of basic file and editing operations instead of trying to teach you just a few very basic, core lessons about emacs itself. So, I attempt to present:

The only emacs tutorial you’ll ever need

Emacs does a lot and new users definitely needn’t try to understand all of it. My learning curve got dramatically steeper once I discovered and mastered a very short list of basic functions that explain the major interactions with the software.  Before this, I very often felt trapped by it, and it convinced me (many times) to turn away (to vim, TextPad, WinEdit, notepad, and other software). Now I can’t imagine trying to use something else to get work done.

The short version: I believe that if you start by learning describe-function, describe-key (and where-is), apropos, modes (and describe-mode), and ctrl+g, you will ramp up on emacs much, much more quickly than if you do not.

  1. Every key press in emacs executes a function. Whether you press the “a” key or some key sequence involving the control (“C-“) or Meta (“M-“, usually by pressing ALT or the Escape key) keys, you are running some function. This is probably different from most software you normally work with.
  2. Every function has documentation. You can see this documentation by executing the function “describe-function” and typing the name of the function you want to get documentation on.
  3. Many functions can be invoked by name. You do this by pressing “M-x” and entering the function name in the minibuffer.  For example, if you type “M-x describe-function [ENTER]” emacs runs “describe-function” which asks you for a function name. Type “describe-function” and you will see the documentation on “describe-function”.  I said “many” and not “all” functions can be invoked by name – in the function’s definition it must be declared to be interactive for this to work. Emacs has a lot of non-interactive functions (e.g. basic lisp functions like car) which cannot be executed interactively.
  4. “describe-key” (and its close sibling “where-is”) can help you explore keymappings. I mentioned that when you press “a” it runs a function – to see what function that key sequence runs, type “M-x describe-key [ENTER] a”. This tells you pressing “a” executes “self-insert-command” and shows the documentation of self-insert-command (that it will “Insert the character you type.”). Similarly, you could use “M-x describe-key [ENTER] M-x” to see that M-x is bound to execute-extended-command (which opens up the minibuffer and asks you for a function to run). Cool!  So let’s say that you know there is a function called “goto-line” which lets you jump to a specific line in a file.  You’re lazy, though, and don’t want to type that whole thing out whenever you want to use it.  “M-x goto-line” – so much typing!  Instead, you can type “M-x where-is [ENTER] goto-line [ENTER]” and emacs will tell you what key sequences goto-line is mapped to. In my setup, they are: M-g g, M-g M-g, <menu-bar> <edit> <goto> <go-to-line> – so I have three ways to get to it.  Another invocation of “where-is” and I learn that “describe-key” is bound to “C-h k” – so the quick way to do the first operation in this section (“what function is run when I press ‘a’?”) is: “C-h k a”.
  5. “apropos” can help you find (or remember) useful functions. Say you didn’t know that goto-line was the function to jump to a line in a file. If you type “M-x apropos [ENTER] goto” you’ll get a list of (interactive) functions that include “goto” in their name. Personally, I find this more useful to remind myself of a function I can’t quite remember than to find a function I don’t know at all, but it’s very useful. (short way: “C-h a goto”)
  6. Your major mode sets up a number of default behaviors for your interaction with emacs. All interactions take place in a single major mode and you can see this mode in the modeline – it might be “Lisp Interaction”, “Apropos”, “Shell”, or others.  Depending on your mode, your keys will behave differently!  This can be very confusing to new emacs users.  For instance, when I press “C-h k <TAB>” (to inspect what the TAB key does) in Lisp Interaction mode it runs indent-for-tab-command (to indent the line for lisp programming), in Shell mode it runs comint-dynamic-complete (to try to tab-complete a function or file name), and in Apropos mode it runs forward-button (to navigate to the next linked entry in the apropos output).  “describe-mode” will tell you what mode you are in (and what minor modes are enabled) and what many of the major keybindings are for that mode. (short way: “C-h m”)
  7. Minor modes can be mixed in to add more customizations. Most of your keymap will be defined by the major mode you’re in, but there are some editing conveniences that can be put on top of this that may transcend any particular mode.  A pretty good example is “folding” – a behavior that lets you collapse large sections of a document and see a larger structure.
  8. Ctrl+g runs “keyboard-quit”. You may find yourself locked in the minibuffer or with emacs trying to get you to complete some command you don’t understand – ctrl+g can frequently get you out of this.  (note: it’s not perfect – you might wind up in a recursive edit, but that’s another story).

These are the things I wish I had known before I started any of the tutorials.  The tutorials *are* good and the reference cards *are* handy, but I was frequently frustrated and confused about why the keyboard didn’t react in the ways I wanted (I didn’t understand modes in general, least of all the one I was in), I didn’t understand how keys worked anyway (didn’t know about describe-key), didn’t know how to increase my proficiency once I started getting a little more comfortable (didn’t know about where-is or apropos), and didn’t know how to learn more about many of the functions (didn’t know about describe-function or apropos). Those are commands I still use every day in emacs.

Comments

New music, November 2, 2011 edition

Music is my life. Well, then again, not really – there’s friends, family, pets, computers and running.  But music is way up there.  And lately I’ve got a few things I’m newly into.  Here’s a short rundown – in no particular order. Every link is to a song that I think is worth listening to.

  • Male Bonding – I just posted the youtube clip of their incredible track – Bones – from their most recent album. I was on a training run about two weeks ago, listening to their new album for the first time, when I heard it – one of those incredible experiences where a song just stuns you on the first listen. Previously I’d seen their video for Year’s Not Long, which I guess would probably be called gay-positive in the sense that it winds up with all the guys in the video making out with each other.  But Bones – *6 minutes* of pretty serious (if poppy) thrashing. There’s not a lot of complexity to these cats and you’ll probably immediately know you love them or they’ll bore you to tears.  I saw them play at Chop Suey as part of City Arts Fest and they were great, but it was a little strange to see a show so poorly attended (I’d say there were 50-100 people there and we basically all fit on the main floor).
  • Jay Reatard – died ahead of his time.  He looks and acts like a carny reject, and the “pool-party-gone-wrong” theme of It Ain’t Gonna Save Me is an inspiring testament to someone I wish I’d gotten to see live.
  • Frank Turner – speaking of testaments – Eulogy is easily the most perfect <1 minute song I’ve ever heard (I was never a big D Boon fan). I saw him at Neumos and then, like in the linked clip, they led into “Try This at Home” which has some of the most perfect sing-along choruses I’ve heard in years. By the end of the show, he insisted on and succeeded in getting every member of the audience to sing along to Photosynthesis – and it was magic.
  • Carissa’s Wierd – It’s hard to know what to say about this band. Listen to Heather Rhodes and lines like “saw someone today who looked exactly like you – it’s funny how the years go by” or One Night Stand and “please don’t ask me what my thoughts are cause I don’t care about yours” and you’ll find tragic desperation that is just destined to be the soundtrack for sad memories and for the discount bins. Which is really unfortunate because they made incredible music and S still is.
  • Pajo – Keeping that thread going, David Pajo played guitar for Slint and apparently he’s still making music but as far as I can tell pretty much flying beneath the radar of everyone.  At least I just found a copy of “1968” used at Sonic Boom in Ballard and it had been getting marked down for the past 3 years.  When I listen to his cover of Where Eagles Dare or basically anything from 1968, I think “this must be what people got out of Elliot Smith.”
  • The Gglitch – this is hard to write about because this is the band that my excellent and incredibly talented cousin was in before he died of cancer. I just visited with his brother, who travelled a little this summer and has been pursuing an excellent effort to get their last album into some public libraries. Anyway, my cousin’s keyboards on the lead track from their last album (which is Angeldust if you have Spotify) show their amazing range. I don’t even know what style to call it, but I know that I love good, passionate music, and beyond missing my cousin – I believe this is it.
  • Jay-Z and Kanye – somewhere this post turned very melancholy, and I want to end it on an uplifting note, which comes from the Frank Ocean cut off Watch the Throne – Made in America. I could listen to the layers they put down on this over and over – and have. And I can do all that and look past the Big Ghost Chronicles review, which trashes this track pretty hard, because even Big Ghost has to eventually concede that “its still a pretty tight project son”.

Give me some advice on what to listen to next!

Comments

Pacer seeding

The 2011 Seattle Marathon is just a couple weeks away. I’m organizing the pacers this year…

Aside: being the pacer organizer is good, but there are some weird experiences. I got email from some guy in Toronto who wanted to pace, and another email from someone interested in pacing who wouldn’t be in Seattle for the next year or two – so how about pacing then? But I digress…

…and in past years in the start area there have not been any signs helping the starters line up by pace.  There’s just one giant start area for both the half and the full, though the races start at very different times.  I tried to lobby to get some signs set up on one side of the start chute for the half with minute/mile pace markers and signs on the other side for the full (there are >3x as many half finishers as full finishers, so the pace groups for those paces will definitely be very different). It seems like we’re not going to get that, so I ran some numbers.

I went to the Seattle Marathon’s website for results and pulled down results from 2008-2010 for the half and full (men’s and women’s).  I sorted results by chip time to figure out where people really *ought* to line up given how fast they finish the race, and here are the results I found:

The key points for where we should line up our signs are as follows (assuming we can approximate how far back “the back” is and that the crowd is uniformly distributed; a rough sketch of the computation follows the list):

  • 1:45 half / 3:30 full should be about 1/10 of the way back from the start.
  • 2:07 half / 4:15 full should be about 1/2 way back from the start.
  • A 2:30 half / 5:00 full would be ~4/5 of the way back from the start (however, Team in Training is going to provide 5+ hour pacers this year).
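For what it’s worth, here’s a rough sketch of the computation behind those fractions. The real work was a sort in a spreadsheet; this Python version (with invented names and a made-up example field) just shows the idea: sort the chip times and see how far into the field a goal time lands.

```python
# Rough sketch (invented names; the actual analysis was a spreadsheet sort):
# given every finisher's chip time, how far back in the field does a goal time fall?
from bisect import bisect_left

def fraction_back(sorted_times, goal_seconds):
    """Fraction of the field (0.0 = front, 1.0 = back) finishing faster than the goal."""
    return bisect_left(sorted_times, goal_seconds) / len(sorted_times)

# Made-up example field of half-marathon chip times, in seconds:
half_times = sorted([5400, 6300, 6900, 7500, 7800, 8400, 9000, 9600, 10200, 10800])
goal = 2 * 3600 + 7 * 60  # a 2:07 half
print(f"line up about {fraction_back(half_times, goal):.0%} of the way back")
```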

Comments

Bones

This is my favorite song for 22 October 2011:

Comments

Latency

A friend recently posted a comment mentioning latency that made me want to dump some thoughts about what I feel like I know about the topic. There are better sources that cover it in greater depth (go to Steve Souders’ blog, or there are tons of good things James Hamilton has written on it), but I have a pretty broad working knowledge of it, so here goes. I’ll try to introduce what latency is, how it’s measured, and how it’s analyzed.

Web latency can roughly be defined as “how long it takes your pages to load.” It’s common to start thinking about this by wondering what the connection speeds of the clients reaching your site are, but this often doesn’t really matter.  Even if you know this, you’ll just know how fast your clients are, but what matters is their perceived speed of your pages.  It’s important to understand that there is both server-side latency (how long it takes your site to generate pages) and client-side latency (how long it takes customers to get your pages).  Server-side latency is pretty easy to measure – you can generate markers in server-side code that measure how quickly you generate pages – but users don’t care if you generate your pages quickly; they care if they get them quickly.

Latency is typically measured in milliseconds, both server- and client-side. I mentioned it’s easy to add instrumentation to measure server-side latency, but measuring client-side latency is trickier. You generally can’t measure the “speed” of a client, nor would you really need to.  To do that, you’d need to conduct fairly extensive tests sending multiple files back and forth between the client and server, and at the end of the day you’d only know approximately how fast that client (or your average client) is – and, again, this doesn’t matter for the client’s perceived latency of your site. The typical way to measure client-side latency is to include some javascript in the pages you send to the client which calls back to some server code tracking those markers.  Once you have this, you can measure a few markers:

  • Time to first byte – how long it takes for your page to start reaching the client
  • Time to (some key metric) – you might want to measure some skeleton of the page that starts to render, “the fold” (the chunk of a page which renders in the initial screen of the browser without any scrolling), or some other key feature on the page which may be above or below the fold
  • Time to page loaded – how long it takes for the entire page to reach the client

Each of these matters for different applications and you need to decide which are most important for your site (though initially it’s probably best to just focus on time to page loaded).  Also, the injection of all those callbacks to measure this doesn’t come for free and will impact the latency of your pages, so it’s common to measure this selectively to understand the overall health of the site at different times throughout the day. You might exhaustively measure this for all transactions (all pages, all clients) to get a quick, comprehensive measure of latency and understand weak points in the experience with your site, but typically you can add this data collection more selectively and monitor it over time.

So those are some of the key ways to measure latency – once you’re collecting all this data, there are different ways to analyze it. The ways I know best are those that we use in my current job, so those are what I’ll focus on. You can look at different percentiles, or use understats. Percentiles are similar to SAT scoring and are usually reviewed at intervals like “p10” (10th percentile – the 10% fastest clients), “p50” (50th percentile – the midpoint), “p90”, “p99”, and “p99.9” (the slowest 0.1%). It might be obvious, but p99 > p90 > p50 > p10, and all points in between.  Even if you have a completely homogeneous user base, these will vary.  You might find that they are all pretty fast or slow, but you’ll probably find that they vary over the course of the day and week, and you’ll probably find p90 is dramatically slower than p50.  They will vary over the day as your services come under higher or lower loads, as your hosts undergo maintenance, or as the user base varies (if your site draws worldwide traffic, then in the middle of the night you’re seeing overseas traffic, which has some increased inherent latency you can’t easily reduce). When you have your instrumentation and data collection system in place, you can track this over the course of the day and week and identify different tolerances that are important to you. It’s not important to measure all of these immediately, but it is important to settle on a few key thresholds and focus on improvements to those over time.
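As a quick illustration of the percentile view (a minimal sketch, not the in-house tooling I mentioned), here’s how you might compute these numbers from a batch of per-request timings:

```python
# Minimal sketch: latency percentiles from a list of per-request timings in ms.
# Uses a simple nearest-rank method; real monitoring systems are fancier.

def percentile(samples, p):
    """Return the value at the p-th percentile (0-100) of a list of latencies."""
    ordered = sorted(samples)
    rank = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[rank]

latencies_ms = [220, 340, 410, 480, 520, 610, 750, 980, 1400, 2300]
for p in (10, 50, 90, 99):
    print(f"p{p}: {percentile(latencies_ms, p)} ms")
```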

The other main way I know to look at latency is using “understats.” Understats can be plotted in two interesting ways and are sometimes preferred to the percentile statistics.  Understats pivot the same data you’d see from percentiles along an axis which shows the percentage of customers receiving pages within a certain time – u1000 tells what percent of users received the page within 1000ms, u2000 tells the same for customers receiving the page within 2 seconds (2000ms), and so on.  This can be plotted over the course of the day (u1000 might fluctuate from 40%-80%), or aggregated over some time period. In the second approach, latency for the requests is sorted from fastest to slowest and shown in a plot (usually an asymptotic arc) that reveals all understats for that time period (y-axis from 0-100% and x-axis ranging from fastest client (lowest latency) to slowest (highest latency)).  The profile of the understats graph plotted this way can reveal a lot about the latency profile of a site, so I like it for an at-a-glance view of latency, but for regular operations it’s more practical to set up rules like “I never want my p50 to exceed 1200ms” or “I never want my u2000 to drop below 30%” and run operations based on that.
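Understats are just the same samples pivoted the other way, so here’s a similarly rough sketch:

```python
# Rough sketch of understats: what fraction of requests completed within a threshold.

def understat(samples, threshold_ms):
    """Fraction of requests that finished within threshold_ms (e.g. u1000, u2000)."""
    return sum(1 for s in samples if s <= threshold_ms) / len(samples)

latencies_ms = [220, 340, 410, 480, 520, 610, 750, 980, 1400, 2300]
for t in (1000, 2000):
    print(f"u{t}: {understat(latencies_ms, t):.0%}")

# Sorting the same samples from fastest to slowest and plotting them gives the
# asymptotic arc described above; any understat can be read off that curve.
```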

[note: this would probably be more illustrative if I added some pictures illustrating these – maybe I’ll do a followup post with some of that later]

There are a couple other interesting aspects of latency measurements. You probably won’t have uniform latency across your site.  It’s probably important for your landing page to load quickly – if it doesn’t, users will immediately leave the site and not bother learning what it’s about – so this page might be almost completely static. Most server-side latency is passed straight to the client, so building a complex and slow landing page could be death for a site. It might also be important to have this page distributed from edge cache servers or via some CDN.  You probably can’t have servers everywhere in the country you’re serving, but you can outsource this to a hosting company (CDN) that can have fast servers throughout the geographic region where your customer base is.

You’ll also have different numbers of resources required to render your pages.  Consolidating and minifying these helps. Consolidate your javascript into a single file which can be fetched in a single request or series of requests rather than across multiple fetches – do the same with CSS. Also, for production, run both through a minifying system that strips whitespace, abstracts variable names to single characters throughout the JS (the client browser doesn’t need to know friendly variable names to render), and so on. Another technique is to use “spriting.” This gets its name from spriting in video games and the technique involves creating a single, large image which contains the multitude of visual components that are used throughout a page (buttons, icons, and other UI elements) and then using CSS to render a visible portion of a single sprite out of that single resource on the page.  Without spriting, you might have 50 small images required to render a page, each requiring an HTTP GET, vs. a single GET which is reused throughout that page – and reused throughout other pages in your site.

Following from the fact that the landing page might (should) be fast – other pages will be more complicated and slower. For this reason, it’s important to have a sense of what classes of pages you serve and to measure and work on those independently. The p50 of your landing page might be 500ms while your slowest page might be 2500ms. If you look at an aggregated measure of “site p50” you might be able to guess where the weak points are, but if you set up instrumentation by type of page, you can identify and improve weak points much more quickly.

Finally, there might be some classes of clients that are affected differently on different pages.  Mobile browsers (phones and tablets) might experience latency one way (at p99) – IE/Firefox/Chrome will probably have a different experience – and these differences might point to simple optimizations that reduce overall latency. Client browser type can be tracked to help you focus on weak points there, too.

Where to from here?  There are a lot of frameworks for measuring this and I won’t go into those (mostly because I’m not that familiar with them, just what we use in house, but google’s analytics packages are very good from what I understand). There are also some good tools (like ySlow) that can immediately point to low-hanging fruit to improve your site speed (do you minimize asset requests? is compression turned on for pages sent back? does the server support pipelining?). When you start looking at this space, you can usually find a ton of low-hanging fruit to greatly improve latency.  You can probably make a 500-1000ms latency improvement with minimal effort. After that is when you start getting to the interesting and hard work. So the first month of working on latency should probably be identifying a toolset and making those initial improvements, then you can expand the metrics you look at and get to the more subtle ways to make your site faster.

Comments (1)

HTC 2011 recap

Somehow this year I made it onto a team for Hood to Coast. I’ve wanted to do this race (or some similar race) for a couple years but always had conflicts. This year, Dana from ChuckIt (who turned out to be much cooler than I’d known or expected) posted to Facebook with an opening on the team she was participating with, and I hopped on it.  I’m going to try not to turn this into a White River-sized post and get to some of the highlights and lowlights.

  • First, and no offense to any of the participants or finishers, I really felt like this barely qualified as a race. Sure, there is a chip, the course is probably measured, and people can be disqualified, but on the whole, I think this is a race about as much as Bay to Breakers is, and treating it any other way is kinda silly.  This isn’t necessarily a bad thing, but there are some terrific athletes who’ll find themselves “beaten” (in terms of team clock time and finish place) by total couch potatoes.
  • It *is* a lot of fun and a unique experience. A lot of teams have very creative themes in their vans, team names, costumes, and so on.
  • I’ve heard people say that even though it’s only about 18 miles, the need to run, sit in a van, sleep, and run again makes it as hard as a marathon. They’re wrong.  Anyone who says this hasn’t really tried to run a solid marathon.  It’s not easy, but it’s just not even close to being as hard as a marathon.
  • The logistics are interesting and it’s fun to start to understand the legs.  The race is run with 2 vans of 6 people each over 36 legs of varying difficulty on the 200 mile course. The course itself is almost all pavement (which is decent, though trails would sure be nice) with a couple sections of gravel road (which is *awful* – the vans kick up an insane amount of dust), usually running alongside lightly used county roads from Mt. Hood to the Oregon coast. There is a series of wave starts, with ~20 teams starting in each 15 minute wave from about 4AM to about 6PM on Friday, the slowest projected teams starting first and the elite teams at the end. Van 1 has each of its 6 runners run a leg of the course, then hands off to van 2; while van 2’s runners run their legs, van 1 speeds 6 legs ahead to the next van transition. After 5 van transitions and 3 legs each for all the runners, everyone winds up in Seaside, Oregon for the finish.
  • I had leg 11 and in hindsight, I think that’s a pretty good leg. Legs 1 and 7 (the first legs in each van) have a tough time. Leg 1 is terrible because it’s given to some poor sacrificial sucker whose legs will get shredded. It’s nearly impossible to find an elevation map of the entire HTC and I think there might be two main reasons: A) it would show that it’s not a hilly course (despite what most of the people doing it seem to act like) and B) it would show that leg 1 is an absurd downhill that looks designed to destroy the legs of whoever runs it.  So it’s out.  Leg 7 (first leg of van 2) has the downside of needing to run immediately after sleep and negotiating the exchange from the other van (both of which are also downsides of leg 1).  Legs 6 and 12 (the last legs of the vans) also have the “coordinate with the other van” strike against them, and you need to go directly from running to getting what sleep you can on the overnight transition. So the middle legs in the van seem preferable – not taking into account the difficulty of any of the legs.
  • The race organizers let way too many people into this thing.  We spent a significant amount of time sitting in traffic as vans tried to get into an exchange (especially the last van exchange between legs 30-31). We were one of the later vans to get into these exchanges, and as we were entering, we saw a lot of vans sending runners on foot to the exchange so they wouldn’t miss the handoff. This is a mistake that I think should be fixed in future years (my friend Tien said he’s never doing HTC again because of this and I wouldn’t blame him).
  • As for my personal race experience: it felt good – great, really, to get out and run kind of hard again.  It also felt great to feel like I was really kicking ass because there were so many middle of the road or weekend runners on the course.  I encouraged every person I passed and think they should be proud of their accomplishments, but with this kind of race structure I could see it being a little discouraging when a modestly fit runner gets tossed on the course with a ton of people who aren’t runners at all or have been “training” for a couple weeks for the event. Anyway, due to my screwed up knee I advised that my time projections for the course be set for a 44:00 10k runner since I didn’t think I should run faster than that. I beat these times on all legs, coming in with an adjusted performance of a 42:00 10k runner. My knee did hurt, but it didn’t stop me – the worst leg was the 3rd, where my knee went from “noticeable” to “hurting” within the first half mile, but it was manageable the whole time.  Over my legs, I passed a total of 39 other runners (“roadkill” in the parlance of many of the participants) and wasn’t passed by anyone, which was all nice but didn’t feel like an incredible achievement.
  • The team support is really nice.  I was cynical about this and want to say it doesn’t matter but regardless of whether these people are your friends or even your team, it’s really great to see a handful of people cheering you along on the course.  The positivity is borderline overwhelming and at times it’s hard not to think “I bet this is what it would be like to be in TnT…” but if you can suspend that for just about 24 hours, you’ll be happier.
  • Finally, and in the spirit of the event over the race, here are some things I’d like to remember for future such races:
    • Costumes – would be fun. Wigs? Makeup? Team theme?
    • Noisemakers for the van – for runners on the road. Silly string?
    • Possibly bring a hammock to tie up at exchanges?
    • A folding chair is great
    • Look for a race that isn’t all pavement
    • Try to ensure that the team has one of the giant 15-person vans – consider not doing it in a minivan (though the minivan probably had the biggest share of vehicle types at HTC)
    • Every teammate should bring a single bottle with a giant beverage dispenser / nuun / concentrated Gatorade (not a thousand disposable plastic bottles).
    • The team should either be competitive and stacked with athletes (or otherwise people who actually know their fitness levels) or strictly focused on having fun and not worry about goals – probably the latter.
    • Ideally, join a team seeded such that it will reach the finish well before the course cutoff (we didn’t get to the finish till ~7PM, the course closed at 9PM, and it felt like things were already winding down).

I think that’s about it. This was fun and challenging in different ways from any other race I’ve done, but a lot of the mystique definitely wore off and I found myself much more interested in the Ragnar series than I thought I’d be. Still, I do hope to go back some day and try a different leg.

Comments

Learn…forget…relearn

I feel a little like I’ve forgotten how to blog. I’m definitely out of practice, with WTPB basically down for probably a year and the glory days of MCWOT running on blosxom now ancient history.

But despite never caring about having a popular blog that gains loads of readers (which is still the case), I need to realize that if I think to myself “how could I be bothered to read / proof this whole thing?” then a post is probably not worth posting at all.

That said, I think I’ll work on another White River post that cuts to the chase. Or maybe not – but I’m going to try not to do that again.

Thanks for your patience.

Comments (1)
