Expect the unexpected, or why you should share those bad graphs

This week I’ve had a timely reminder that things don’t always go to plan, especially when you’re doing a PhD.

About this time last year I did a pilot study to see if a particular topographic variable changed systematically along the length of a mountain glacier. The results of this pilot study looked promising and gave me a really nice-looking trend.

Not a bad-looking trend; enough to merit further investigation!

 

This was all great and was instrumental in allowing me to breeze through my MPhil to PhD upgrade process.

So now I was a ‘proper’ PhD student and I needed to start working on a much larger sample of glaciers to further investigate my nice trend and see if it held true. I had collected the pilot data entirely by hand, which was a very laborious process that took several days to get the measurements for a single glacier. Clearly this was far too slow for me to scale the data set up to a statistically significant number of glaciers; I needed a faster data collection method. My supervisor suggested that I automate the process so that I could collect the data much faster and in a more standardised way. This sounded like a pretty good idea and I got to work right away.

Turns out automating a process is a hell of a lot easier said than done. I won’t go into the details here, but suffice to say that after many months of trial and error I finally hit on a method that appeared to work. But, as in all real-life situations, there is a catch: my automated method is not outputting data with the nice trend that I’d found in the pilot study.

Uh oh. This doesn’t look very nice, and that expected trend is a pretty poor fit for the data.

 

I’ve tried everything to refine the method and get the data to behave (short of manipulating it, I hasten to add!). It’s just not working. This has been very disheartening, and I’ve often wanted to simply abandon researching this variable, put all the notes away, and never speak of it again.

Thankfully my supervisor hasn’t let me do this. At every meeting he suggests more tests for me to do and urges me not to abandon the project and potentially throw the baby out with the bathwater. It’s still too early to know if he’s right, but our meeting this week did give me a bit of hope that this automated method might just work out after all. The reason for this? He spotted something in the data that I’d completely overlooked.

Hold on, if we remove the expected trend that ‘bad’ graph does actually start to look pretty interesting…

 

I had got completely blinkered by the trend I’d found in my pilot study data, to the point where I couldn’t see the wood for the trees and had missed another trend that was staring me in the face. I don’t yet know if this new trend is reliable, or if it’s going to solve my automated method woes. But realising that I’d overlooked something this obvious has reinvigorated me to review all the data that I’d written off because it didn’t match the trend I was expecting. In short: I’ve learnt to expect the unexpected.
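If you’re wondering what ‘removing the expected trend’ actually involves, the idea is simple: fit the trend you were expecting, subtract it from the measurements, and look at what’s left over. Here’s a rough sketch of that detrending step in Python. To be clear, this is not my actual processing code: the numbers are invented stand-ins for measurements along a glacier, and I’ve assumed a straight-line trend purely for illustration.

```python
# A minimal sketch of detrending: fit an assumed linear trend, subtract it,
# and plot the residuals. Any structure left in the residuals is exactly the
# kind of thing that is easy to miss when staring at the raw plot.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical measurements along a glacier centreline (made-up numbers)
distance = np.linspace(0, 10_000, 200)            # metres down-glacier
rng = np.random.default_rng(0)
variable = 0.002 * distance + 5 * np.sin(distance / 800) + rng.normal(0, 1, distance.size)

# Fit and remove the expected (linear) trend
slope, intercept = np.polyfit(distance, variable, deg=1)
residuals = variable - (slope * distance + intercept)

# Plot the residuals and see what is hiding behind the expected trend
fig, ax = plt.subplots()
ax.plot(distance, residuals, ".", markersize=3)
ax.axhline(0, color="grey", linewidth=0.8)
ax.set_xlabel("Distance down-glacier (m)")
ax.set_ylabel("Residual after removing linear trend")
plt.show()
```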

So the purpose of this post is a reminder to you researchers to keep an open mind, don’t be put off if the data isn’t coming out looking just as you want it, and, most of all, always show your ‘bad’ data to other people. When you work on something for as long as I’ve been working on my automated method, you can very easily become blind to something that a fresh set of eyes will see immediately. So chin up and take that horrible-looking graph along to your next research group or supervisor meeting. Maybe someone will see the interesting shapes in your data clouds.

 

P.S. The nice trees in the title image of this post are redwoods in Whakarewarewa Forest, near Rotorua on New Zealand’s North Island. Worth a visit if you’re in the area; there’s plenty of good walking and mountain biking to be had in that forest.
