Using Python & BeautifulSoup to scrape a Wikipedia table

Well, it was only a couple of weeks ago that I set myself the challenge of completing the Python course on Codecademy, and I did it – I completed the Python track and it was fantastic! This week I had the opportunity to put my newly found Python skills into action, as I needed to scrape some data from a Wikipedia page: I have a table of addresses and need to compare the County in the list that has been provided against the one it really should be. This page on Wikipedia contains the data I need – for each Postcode District there’s a Postal County that I can use as a comparison – formatted in an HTML table like this:

[Screenshot: the postcode districts table on the Wikipedia page]

Normally, I’d just copy & paste the table into Excel for use later on BUT it’s not as easy as that (oh no!), as there can be multiple Postcode Districts within a row, which is slightly annoying! To be of any use to me, I need the data formatted so that there is a row for each Postcode District, like so (I don’t necessarily need the Postcode Area & Town but I’ll keep them anyway – I don’t like throwing away data!):

Postcode Area Postcode District Post Town Former Postal County
AB AB10 ABERDEEN Aberdeenshire
AB AB11 ABERDEEN Aberdeenshire
AB AB13 ABERDEEN Aberdeenshire
AB AB15 ABERDEEN Aberdeenshire

And so I thought this would be the perfect project for me to undertake in Python and to familiarise myself with friend-of-the-screen-scrapers, BeautifulSoup. I won’t jabber on too much about BeautifulSoup as I’m not fully up to speed on it myself yet, but from reading around the subject I gather it’s a great way to grab elements from web pages for further processing.

Step One: Wikipedia doesn’t like you…

Wikipedia doesn’t like this code:

[code language="Python" highlight="7"]
from bs4 import BeautifulSoup
import urllib2
wiki = "http://en.wikipedia.org/wiki/List_of_postcode_districts_in_the_United_Kingdom"
page = urllib2.urlopen(wiki)
soup = BeautifulSoup(page)
print soup
#urllib2.HTTPError: HTTP Error 403: Forbidden
[/code]

Wikipedia only allows access to recognised user agents in order to stop bots retrieving bulk content. I am not a bot – I just want to practise my Python – and so to get around this you just need to add a user agent header to the request (thanks to Stack Overflow for coming to the rescue).
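In isolation, the workaround looks something like this (a minimal sketch – ‘Mozilla/5.0’ is just a stand-in for any recognised browser user agent string, and the full script below uses exactly this trick):

[code language="Python"]
import urllib2

wiki = "http://en.wikipedia.org/wiki/List_of_postcode_districts_in_the_United_Kingdom"
#Supply a recognised browser user agent so Wikipedia will serve the page
header = {'User-Agent': 'Mozilla/5.0'}
req = urllib2.Request(wiki, headers=header)
page = urllib2.urlopen(req) #No more HTTP Error 403!
[/code]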

Step Two: Hunt the table

If you look at the code behind the Wikipedia article, you’ll see that there are multiple tables but only one (thankfully the one we want) uses the “wikitable sortable” class – this is great, as we can ask BeautifulSoup to find the table with the “wikitable sortable” class and know that we will only get this table.

[code language="Python"]
from bs4 import BeautifulSoup
import urllib2
wiki = "http://en.wikipedia.org/wiki/List_of_postcode_districts_in_the_United_Kingdom"
header = {'User-Agent': 'Mozilla/5.0'} #Needed to prevent 403 error on Wikipedia
req = urllib2.Request(wiki, headers=header)
page = urllib2.urlopen(req)
soup = BeautifulSoup(page)

area = ""
district = ""
town = ""
county = ""
table = soup.find("table", {"class": "wikitable sortable"})
print table
[/code]

Output looks like this:

[Output: the “wikitable sortable” table printed to the console]

Great! This means that we just have the HTML table stored in our variable. Now, it’s just a case of iterating through the rows and columns…easy…*ahem*

Step Three: For your iteration pleasure

We need to do the iteration in two stages – the first stage is to iterate through each row (tr element) and then assign each cell (td element) in that row to a variable. At this stage, we will grab everything in the Postcode Districts column and store it in a list for further iteration later. To do this, I used the following code:

[code language="Python" firstline="19"]
for row in table.findAll("tr"):
    cells = row.findAll("td")
    #For each "tr", assign each "td" to a variable.
    if len(cells) == 4:
        area = cells[0].find(text=True)
        district = cells[1].findAll(text=True)
        town = cells[2].find(text=True)
        county = cells[3].find(text=True)
[/code]

BeautifulSoup’s .findAll method returns a list, and so on line 20 we obtain a list containing four elements, one for each of the columns in the table. This means they can be accessed via the cells[n].find(text=True) syntax. You’ll notice that I’ve used .findAll for the Postcode Districts column – this is because I want a list of the items within the cell for iteration purposes later!

After this code executes, I have a value for the area, a list of districts, a town and a county. Now for the second part of my iteration:

[code language="Python" firstline="28"]
#district can be a list of several strings, so we want to iterate through each one first...
for x in range(len(district)):
    #For each string, split out the comma separated postcodes
    postcode_list = district[x].split(",")
    #For each item in the split list...
    for i in range(len(postcode_list)):
        #Check it's a postcode and not other text
        if (len(postcode_list[i]) > 2) and (len(postcode_list[i]) <= 5):
            #Strip out the "\n" that seems to be at the start of some postcodes
            write_to_file = area + "," + postcode_list[i].lstrip('\n').strip() + "," + town + "," + county + "\n"
            print write_to_file
[/code]

I found that, instead of district being a list containing a single string of postcodes, in some cases it was a list of several strings (oh joy!). I was expecting it to look like this:

[u'AB10, AB11, AB12, AB15, AB16, \nAB21, AB22, AB23, AB24, AB25, \nAB99, non-geo'] *

*Ignore the \n characters and the non-geo text – we’ll deal with them later!

I got this…

[u'AB10, AB11, AB12, AB15, AB16,', u'\nAB21, AB22, AB23, AB24, AB25,', u'\nAB99', u'non-geo']

And so I needed an additional layer of iteration: one for the whole list and then another for the postcodes within each string. Simple.

For each string in the list, Python’s .split(",") method allowed me to split the comma separated postcodes into a list that could be iterated over. For each item in that list, we just check that it’s a postcode (a check on string length sufficed nicely this time!) and then build up our output string. To deal with the \n that was prepended to some of the postcodes, I just left-stripped the string to remove the \n characters and hey presto, it worked!
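To see the clean-up in action on its own, here’s a quick standalone snippet using the awkward AB list from above:

[code language="Python"]
#The list of comma separated strings returned for the AB rows
district = [u'AB10, AB11, AB12, AB15, AB16,', u'\nAB21, AB22, AB23, AB24, AB25,', u'\nAB99', u'non-geo']

for item in district:
    for postcode in item.split(","):
        #The length check filters out empty strings and text like "non-geo"
        if 2 < len(postcode) <= 5:
            print postcode.lstrip('\n').strip()
#Prints AB10, AB11, ..., AB99 - one clean postcode per line
[/code]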

I flushed the output to a CSV file as well as to the screen and it worked beautifully!

Here is the full code:

[code language="Python"]
from bs4 import BeautifulSoup
import urllib2

wiki = "http://en.wikipedia.org/wiki/List_of_postcode_districts_in_the_United_Kingdom"
header = {'User-Agent': 'Mozilla/5.0'} #Needed to prevent 403 error on Wikipedia
req = urllib2.Request(wiki, headers=header)
page = urllib2.urlopen(req)
soup = BeautifulSoup(page)

area = ""
district = ""
town = ""
county = ""

table = soup.find("table", {"class": "wikitable sortable"})

f = open('output.csv', 'w')

for row in table.findAll("tr"):
    cells = row.findAll("td")
    #For each "tr", assign each "td" to a variable.
    if len(cells) == 4:
        area = cells[0].find(text=True)
        district = cells[1].findAll(text=True)
        town = cells[2].find(text=True)
        county = cells[3].find(text=True)

        #district can be a list of several strings, so we want to iterate through each one first...
        for x in range(len(district)):
            #For each string, split out the comma separated postcodes
            postcode_list = district[x].split(",")
            #For each item in the split list...
            for i in range(len(postcode_list)):
                #Check it's a postcode and not other text
                if (len(postcode_list[i]) > 2) and (len(postcode_list[i]) <= 5):
                    #Strip out the "\n" that seems to be at the start of some postcodes
                    write_to_file = area + "," + postcode_list[i].lstrip('\n').strip() + "," + town + "," + county + "\n"
                    print write_to_file
                    f.write(write_to_file)

f.close()
[/code]

Disclaimer(ish)

This code has no additional error checking or handling and was merely written to solve a small problem I had, and to put into practice everything I’d learned so far. It also only works for this particular table on this particular Wikipedia page – although it could be adapted for use on other tables. But it was great fun to put the learning into action and work on a real-life problem. Here’s to more exercises like this!
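If I were to harden the script, the first thing I’d add would be some error handling around the request itself – something like this sketch (just one way of doing it, and the messages are my own):

[code language="Python"]
import urllib2

wiki = "http://en.wikipedia.org/wiki/List_of_postcode_districts_in_the_United_Kingdom"
header = {'User-Agent': 'Mozilla/5.0'}

try:
    req = urllib2.Request(wiki, headers=header)
    page = urllib2.urlopen(req)
except urllib2.HTTPError as e:
    #e.g. the 403 from Step One, or a 404 if the article moves
    print "The server couldn't fulfil the request: %s" % e.code
except urllib2.URLError as e:
    #e.g. no network connection or a bad hostname
    print "Failed to reach the server: %s" % e.reason
[/code]

A sanity check that soup.find() actually found the table (it returns None if the class name ever changes) would be the next obvious addition.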

Self development fortnight

Well, it’s been a while since my last update and a lot has been happening with this blog behind the scenes (bye, bye 1&1 hosting and hello to the wonderful Squirrel Hosting) and with myself.

In short, I am going into hospital for an operation on 6th June, which is a little bit nerve-wracking, exciting (strange…but this op will vastly improve my quality of life) and unknown – I haven’t had an operation since my tonsils were removed when I was six, so I don’t know what to expect; I only have Holby City to go on! As a consequence, I have two weeks off work – time off beforehand for preparation & relaxation and then time off afterwards to rest & recuperate. I am putting my foot down now and saying that these two weeks will be for self development, learning new things and essentially NOT SITTING AROUND WATCHING DAYTIME TV (like I always find myself doing when I have the odd day off!)…except for Pointless, I love Pointless and so that’s my only exception.

So, I have created an account on Codecademy and am slowly making my way through the Python course. At the time of writing, I am up to lesson 8 on Loops and I am thoroughly enjoying it, it’s such a great way to learn a new language. I have written C++ and C# code in the past and so am not a complete beginner, but it’s great to start right at the beginning and learn a new language from scratch. It’s a bit of a revelation not to have to add a semi-colon at the end of a line of code…it feels a bit naughty!

When I have completed the Python course, I hope to undertake a small project of my own. My main motivation for learning Python is for screen-scraping and data extraction purposes and so I’d like to start a project to help me to gain experience in these areas. I will of course keep blogging about my progress and the new discoveries that I make.

The fortnight of self development starts now…

Week 5 & 6 – A topic of our own

For the final week of the MOOC, we have been given the task of producing an infographic of our own – this means choosing a topic, gathering the information and presenting an idea to show the information in graphic form.

As my previous sketches have been for interactive infographics, I wanted to give a static graphic a go. Having so much freedom was pretty hard – there is a wealth of information and data out there, and choosing which story to go for and what angle to take was tricky! It was lucky, then, that I got a tweet from the team behind the BBC iPlayer pointing me to the latest performance report, and that is when inspiration struck.

The BBC produce these performance reports every month and I read them with interest – I am a stats geek and love stuff like this. The report gives stats such as the viewing figures for content on iPlayer, popular programmes, usage by device type and the gender/age group of users – a wealth of information that I find fascinating. But I also love it because it’s about the iPlayer, something I use for at least two hours a day and have a certain affection for. For non-UK residents: the iPlayer is a service that the BBC officially launched at the end of 2007, allowing viewers and listeners of BBC TV programmes and radio shows to replay missed content and to watch shows live via the internet. It is available on PCs, tablets, mobile phones, via Smart TVs and via cable operators. In essence, it’s brilliant.

I am fairly certain that the report released by the BBC is not aimed at the typical iPlayer user – it feels more for those in the media or those with a specific interest in audience figures – and so my goal for the infographic was to produce something that everyone could appreciate. Luckily for me, October was a record month for iPlayer usage, with 213 million requests for TV or radio content – breaking the 200 million request barrier for the first time – and so I had a nice little slant for my infographic. It also meant that the story had been picked up by the press too:

BBC iPlayer tops 200 million monthly requests for first time – Digital Spy

iPlayer passes 200 million monthly requests for the first time – Digital TV Europe

Merlin and Jimmy Savile documentary help BBC iPlayer to record month – The Telegraph

BBC enjoys record iPlayer requests in October – Cable.co.uk

…but no-one had produced an infographic, and so I felt it was my duty to produce one to celebrate!

My goals for the infographic were as follows:

  • Produce something for everyone – use the stats from the October performance report but make them easier to read and emphasise their relevance.
  • What were the most popular shows in October? Why did it break the 200 million request barrier in October and not, say, during the Olympics?
  • Who and what is using the iPlayer service? What proportion of requests are coming from tablets?
  • Make a static graphic that could serve as a template for every performance report, so that non-industry readers could glean the key information more easily on one page, as opposed to trawling through the report.

And so with all of this in mind (and not a lot of time to complete the task – despite two weeks to work on it, December is a crazy busy time at work!), here is what I have come up with…

October 2012: A record iPlayer month for the BBC (PDF)

Notes about the graphic

  • This is a static graphic which uses the figures from the October 2012 iPlayer Performance report but could be used as a template for other monthly reports.
  • I extracted the information that I thought would be interesting such as iPlayer requests since 2009 (as far back as the report goes), the gender breakdown of users, the devices used to access the service and the popular TV and radio shows in October. I have also put a few stats in the blurb at the top.
  • The graphic style is largely similar to my last task with minimal use of colour – I stuck to pink as that is the predominant colour in the iPlayer branding.
  • If I had more time, I would have liked to explore the peaks and troughs around the end of 2010 and beginning of 2011. Do peaks relate to the release of iPlayer apps on mobile and tablet devices for example?
  • This graphic could be made interactive and this is a project I would like to work on in the future – especially to see the variation in the share of the device types – so watch this space! 🙂

I am pretty happy with this graphic but feel there are plenty more angles to explore with this data – which is good, as it gives me something to tinker with over the Christmas holidays. Now, do you think I’ve been good enough for Santa to bring me a copy of Adobe Illustrator?

Week 4: Interactive graphic based on US unemployment stats

Our goal this week was to think about what kind of interactive graphic we could create based on the data used in the Guardian’s piece about unemployment in the US -> http://www.guardian.co.uk/news/datablog/interactive/2011/sep/08/us-unemployment-obama-jobs-speech-state-map

There is a lot of data behind the scenes of this graphic, which is great but also slightly frustrating. For example, if you click on a particular state, you get a wealth of additional information – but it doesn’t allow you to easily compare it to other states. The same goes for the drop-down at the top of the graphic – it’s great that you can view the unemployment rate at a particular point in time, but it’s really hard to compare unless you are focussing on a particular state. I do however like the range of comparisons that have been made with the data, especially the ability to visualise the percentage point difference from the national figure – I shall have to remember that one in future 🙂

And so, I jotted down some thoughts about what I would like to see on an interactive graphic like this and came up with the following list:

  • The Guardian piece focuses on the unemployment rate in the US since Obama came to power…what about further back?
  • Is state level in-depth enough? What about within the state – how does the unemployment rate differ within the states themselves?
  • In the accompanying course material, we were told not to add more than 6 colours to a choropleth map (which makes total sense for comparison) but what about viewing a small list of those counties with the very lowest & highest unemployment rates that would normally be enclosed in ranges?
  • Based on feedback from last week’s assignment, I wanted to focus more on type, colour and the “interactiveness” of the graphic – this is definitely where I need more practice.

And so with all of this in mind, I scribbled down possible graph/map/info ideas and arranged them on the table (see last week’s post for an idea of how it looked!) and I came up with this:

Unemployment Rates in the US (PDF)

Unemployment Rates in the US – with notes (PDF)

Notes about the graphic

  • The user is able to scroll back in time to see how the unemployment data differs on the map of the US. I added a line graph so that it is clear to see the years when the unemployment rate was particularly high/low. I did think about adding an overlay to show the years that a new President came into power – incidentally, there does seem to be a trend of the unemployment rate dropping in the year this happens – but I did not progress along this line of investigation for this project. Maybe another time 🙂
  • The map at the top is interactive and allows the user to click on a particular county to see detailed information about it, as well as the state in which it belongs. The small bar chart on the left would become active when a county is selected.
  • The user also has the ability to tick the boxes and add lines to the graph showing the county and state unemployment rates and compare them to the national figures.
  • I have taken on board comments from last week about colour, type and making it appear more interactive. It was VERY hard being so restrained with colour (I’m not used to this!) but I actually found working with Colorbrewer for the map colours gave me a base to start from and I didn’t stray from there.

I am really happy with this graphic and I didn’t rush as much as I did last week. I took my time, didn’t faff around with Illustrator too much and so had more time to concentrate on what I wanted to do and actually what I’d want to see on an interactive visualisation like this.

22 days left on my Illustrator trial…will I be adding it to my Christmas list (as well as Alberto Cairo’s book and Andy Kirk’s too)? YES!

Week 3: Sketch an interactive graphic

The goal for this week was to think about how an interactive graphic might look, based on a particular report by Publish What You Fund that was also published in a Guardian blog. The data in question relates to how transparent major donor organisations are with their own data: each organisation has been rated using a distinct set of criteria created by Publish What You Fund, producing an overall transparency index.

This assignment has really stretched me this week and made me take full advantage of the sketching/note-taking apps on my tablet as I found I was coming up with ideas in random places and needed to get them down for exploration.

My first task was to find out what the heck “transparency” actually meant and how it was measured, and I was thankful that the data originated from a very well organised website. I then looked at both source websites and noted down what I thought was missing and how I would like to play with the data myself. This took about three or four days – and this is where a lot of sketching and brainstorming came in: thinking of the “what ifs…” and “oooh, how about I just change this…” scenarios.

I toyed with the data in Excel to see if I could find any interesting correlations – splitting the data right down to individual indicators, looking at the annual resources and budget of each donor and, in turn, where the money goes – but what I was really missing was information about the donors themselves. I was very pleased to see the UK’s Department for International Development at the top of the list but, in all honesty, I really knew nothing about them and so I wanted to build that into the graphic.

And so I started by jotting down potential graphs/data to include in my final interactive graphic and started arranging the sketches until I had something that I thought could work. Incidentally, I find jotting things down on paper like this so helpful as you invest very little time in it and it allows easy rearranging of elements – paper prototypes FTW!


From there, I installed the trial version of Illustrator CS6 and started playing around. To cut a long story short (it really was a long story as I battled with Illustrator’s graphs – I won in the end though!), I came up with the following design:

Aid Transparency Graphic (PDF)

Aid Transparency Graphic + Notes (PDF)

Notes about the graphic

  • The bar chart that can be seen at the top of the graphic can be manipulated by the buttons on the right hand side and the user can select to show the results of individual aid information levels or all of them (the total).
  • The user can also select to show particular countries instead of having everything on the graph which I found really hard to read in the Guardian blog.
  • If a user clicks on a donor’s name or the bar associated with that donor, the panel at the bottom will display additional information about the organisation. I added a space for some text about the organisation to add a bit of context and also a timeline to chart their major accomplishments so that users would be able to relate to an organisation’s particular focus. Both pieces of information could be scraped from donors’ websites and annual reports.
  • I have tried to minimise the use of the word “transparency” and instead used “openness” where possible as I personally wasn’t very clear about what this meant at first.

I am personally really pleased with this, as the work involved way more than playing around with a few graphs. I had to think about what I wanted to say and how I was going to represent it in a prototype form that would communicate how an interactive version would work. But I’m doing something that I love, and time did indeed fly when I was tinkering all weekend!

Week 2: A critique of the “Convention Word Counts” visualisation in the NYT

Source material: http://www.nytimes.com/interactive/2012/09/06/us/politics/convention-word-counts.html

A comparison of how often speakers at the two presidential nominating conventions used different words and phrases, based on an analysis of transcripts from the Federal News Service.

Although I very much like the look of this graphic at first glance, I feel it includes too much information and too many layers of abstraction, which hides the beauty of what is quite a high impact piece.

The main graphic serves two purposes:

1. It acts like a word cloud and illustrates the frequency of words by resizing bubbles accordingly.

2. It shows how the usage of words is split between the two parties.

It therefore lets the reader see that both parties used the words ‘Tax’, ‘Energy’ and ‘Families’ in equal measure, but that the Democrats used the word ‘Health’ more than the Republicans, while the Republicans used the word ‘Leadership’ more. The reader can clearly see this by comparing the sizes of the bubbles, and can identify the split in usage between the two parties.

However, I do feel that it presents the reader with too much information. I don’t think it is necessary for the numbers to be present in the bubbles, as they serve as a distraction – the blue/red split in the bubble itself should be enough to show the proportion of the word’s usage. The numbers are also expressed per 25,000 words, which bombards the reader with unnecessary detail. Is the average reader really interested to know that the word ‘Health’ is used 38 times per 25,000 words by the Democrats vs. 9 times per 25,000 words by the Republicans? I’d hazard a guess and say “no” – I think they are more interested in seeing that the Democrats used it more than the Republicans overall. The numbers are interesting, though, so maybe they should only be displayed when a bubble has been clicked on.

I feel the text placed below the bubbles describing the words does not need to be present the whole time. It takes up room and, in actual fact, I didn’t even bother to read it when I was playing around. Also, while I love the ability to add your own words to the collection, it does allow you to add words (e.g. “UK”) that have no mentions on either side – I personally didn’t find this very interesting and found that these zero-mention words cluttered up the visualisation.

I think the visualisation would benefit from altering the shade of blue/red depending on where the bubble is located. If, for example, we take the word ‘Forward’, which is far over on the Democrats’ side, I think the blue should be a lot darker than for a word such as ‘Success’, which sits more prominently on the Republicans’ side. This would help to reinforce the fact that there are two extremes to the viz and a middle ground shared by both parties.

Another level of abstraction I would love to see would be the ability to see who said the word and how many times. This data is used below the bubbles but it’s not really used effectively. How fascinating would it be to see how many times Mitt Romney said Obama (and vice versa) without having to count it up yourself?!? Then, if you clicked on a bar in this chart, it would take you down to that person’s section to show a breakdown of the paragraphs.

And finally, while it’s not a criticism of the visualisation itself, I am an avid follower of the Guardian Data Blog and am used to seeing a link where I can browse and download the raw data. I’m not sure what the NYT’s policy is about this, but I think the visualisation would benefit from a link at the bottom so that data geeks like us can tinker with the data ourselves.

And so here is my rough sketch to show how I would tidy it up with the main changes listed below:

[Sketch: the original NYT graphic (before) alongside my revised version (after)]

  • I have removed the figures inside the bubbles that displayed the share per 25,000 words to give the graphic a cleaner finish.
  • I have removed the descriptive text that relates to particular bubbles.
  • I have added the percentage share of the word used by each party at the top.
  • I have added two graphs below the main graphic that will be displayed when a bubble has been clicked on. They will show who in each party has used the word and their proportional usage as a whole per party.
  • If you were to click on the person’s name or a particular bar on the graph, it would take you straight down to that person’s section in the blurb below.

Week 1: Introduction to Infographics and Data Visualization course

About a month ago, I signed up to a new MOOC offered by the Knight Center for Journalism. The course is run by Alberto Cairo and is exactly the sort of course I’ve been after for a while. As an aside, Higher Education institutions in the US seem to be way ahead of the game when it comes to MOOCs; I have completed courses via Coursera in the past and they have been fabulous.

Anyway, as part of my week 1 assignment, I have been asked to critique and discuss with fellow students the following graphic:

Week 1: Social Web Involvement

As with everything, it is so much easier to critique the work of other people, and I realise that I will have fallen into some common traps when creating my own graphics. Even viewing the first week’s lectures made me cringe at the screen, as I am guilty of a lot of the mistakes covered. But that’s why we go on courses, right? The most important point I learned was to stop thinking like a designer and think like a reader – does the graphic convey its point within three seconds?

Do I think the graphic satisfies this? No, not really. If I’m honest, for the first three seconds it did grab my attention and if I’d seen that in a newspaper I probably would have stayed on the page and wanted to explore it. However, it’s only when I delved a bit deeper that I realised how difficult it was to decipher.

So what does the designer want me to do with the graphic? Well, it’s definitely not being used for a geography lesson, as the label for the UK is far up in the North Sea and Germany is over in Russia. The only real clue as to what the graphic is about is the title, “Social Web Involvement”, and the rubric in the bottom right hand side attempts to describe it. However, it is not clear what the sizes of the donut charts mean, nor what the millions of users in the tables mean. We are told that 32,000 people were interviewed (2,000 per country) and so where the heck have these figures quoted in millions come from?!?

The graphic does present several variables – the countries and each type of social web involvement – but it does not present them very clearly. It does not make it easy to compare, organise or spot correlations, as the data is bunched by country; it is not easy to find, for example, the country with the second highest proportion of people writing their own blogs. The details are scattered around the graphic and a reader has to memorise the data in order to compare it. There is also a lot of redundant grey space which could have been used more efficiently.

I would improve the graphic by removing the map completely and just concentrating on the data. The data we have can also be sliced and diced with other data, for example the GDP of the country, the population (Internet users per 100,000 people) or we could look at other methods of social web involvement – why are there no games mentioned when according to Wikipedia there were 10.3 million players of World of Warcraft in May 2011? As a start, I would go back to basics and just ask the question: How does social web involvement vary by country to country?

Let’s take the UK as an example (edited a wee bit to fit):

[Table: the UK’s social web involvement figures]

We could represent this data in a bar chart that could easily be compared to other countries. Of course, sixteen of these graphs may look a little cluttered but I think this is a step in the right direction.

[Bar chart: the UK’s social web involvement by category]

If we wanted to develop this idea and wanted to just focus on one particular aspect of social web involvement, all categories could be greyed out and just one particular item focused upon.

[Bar chart: the same data with a single category highlighted]

These alternatives are most definitely still a work in progress, but I think they are immediately a lot clearer to read and allow the data to be compared and organised far more easily. Having read the forum for the course, I see that someone else has had a similar idea and created a stunning graphic -> http://www.flickr.com/photos/89317425@N05/8133822514/ so it’s good that I was thinking along similar lines, and I hope to keep developing my skills as the course progresses.

Data Viz Squirrel

Data Visualisation, Data Viz, Infographics and ‘making pie charts look pretty’ seem to have skyrocketed in popularity within the past couple of years. Despite studying database development during my final year of University in *ahem* 2003 and constantly being surrounded by lovely data (whether at work or on the web), I had never really paid much attention to it. I gawped at the screen when I discovered Information is Beautiful (and then ordered the book right away), yet it still hadn’t twigged that this was something I could try for myself – I thought it was only for graphic designers and not a lowly techie like me.

My moment of epiphany came in 2011 when I was looking for a course to attend – I don’t have a huge training budget at work, which ruled out any MSc qualifications, but I wanted to try something a little bit left-field that was still related to my job. This is when I discovered Andy Kirk’s ‘Introduction to Data Visualisation’ training and thought I’d give it a go – after all, if it turned out not to be for me, I’d still get a free trip to Brighton (yippeee!)

The training was fantastic, so much more than I had imagined, as it started right from the beginning – it assumed no prior knowledge and covered all aspects of visualising data. Attendees came from a variety of backgrounds – designers, coders, database developers – and I finished the day feeling incredibly inspired (you should see the mad scribblings I made in my notepad on the way home).

No more boring pie charts for me – hello stunning data! 

Fast-forward a few months and data visualisation is still at the forefront of my attention. I am overloaded with all the wonderful resources on the web about it; be it free tools, design tips, data…oh the lovely data, tutorials or books. You should see the “Data Viz” folder in my favourites – I’m like a squirrel hoarding the links.

I am a data viz squirrel; hoarding links for the winter.

Well, this is why I have set up this site: I LOVE to play and tinker, but will often finish something and then move straight on to something else. I have coded Python for the first time, created a choropleth map of English counties, screen-scraped, made a heat map and created my first visualisation for publication. I want to document this work – my thoughts, discoveries and processes – so that I can keep a log of everything I have worked on. I have seen some wonderful tutorials online that don’t quite fit what I want to do, so I have modified them and now want to share them. I have so many ideas for blog posts and so many people to credit that I can see this site expanding very quickly!