Pezholio

Royal Fail Part 3: Response Received (and It Ain’t Good)


Well, it’s been an exciting few months for the open data movement, with the future of the Ordnance Survey and geographical data going out to consultation and data.gov.uk being launched, but imagine my horror when I finally received a response to the petition I started to make postcode data free to non-profits. The full text is below:

Under the Postal Services Act 2000, Parliament set up Royal Mail as a public limited company with the Government as its only shareholder. Under this framework, the Government established an arm’s length relationship with the company so that the company had more commercial freedom and could run its operations without interference from the shareholder.

The Postcode Address File (PAF) dataset was designed and engineered by Royal Mail and is owned and managed by the company as a commercial asset of the business (containing around 29 million addresses in the UK). Royal Mail developed the PAF with the primary purpose to aid the efficient delivery of mail, though over the years the PAF has come to be used for a number of purposes other than the postal purpose for which it is designed and was established. Indeed, many organisations, including new postal operators, banks, insurance companies and others offering to deliver goods to your door, use the information held on the database. The PAF is also used in other business processes, including mailing list “cleaning”, anti-fraud activities and various customer services.

Royal Mail invests significantly in collating and maintaining the Postcode Address File (PAF) and this cost is recovered through an independently regulated licensing arrangement. It would of course be very time-consuming and costly for anyone to try to replicate the list, so Royal Mail licenses PAF data, for a fee, allowing others to use it. Under Section 116 of the Postal Services Act 2000, Royal Mail must maintain the PAF and make it available to any person who wishes to use it on “such terms as are reasonable”.

This requirement is replicated as a condition of Royal Mail’s licence issued by the postal regulator, Postcomm. Provision exists for Royal Mail to recover a reasonable charge for the supply of PAF and it must not impose any term or condition other than reasonable restrictions to safeguard its intellectual property rights (IPR), and to ensure that the PAF and its updates are used to support effective addressing.

As access to the PAF is governed under a condition of licence, Postcomm monitors its practice. Royal Mail’s licence obliges the company to make access to the PAF available on reasonable terms. Postcomm allows the company to make a reasonable specified profit margin and monitors its accounts.

Postcomm has previously undertaken a public consultation reviewing how the PAF was managed. The consultation started in 2006 and finished in 2007. Postcomm took all the diverse uses of the PAF into account before reaching its decision in 2007, announcing more safeguards for the management of the address information held in the PAF with the aim of making sure that the PAF is maintained properly and made available on fair and reasonable terms. The findings of the consultation can be found on Postcomm’s website (www.psc.gov.uk).

If any PAF user or stakeholder feels that Royal Mail is not complying with the terms of section 116 of the PSA 2000 or Condition 22 of its licence, they can either raise concerns direct with the company or with Postcomm. Postcomm would consider the merits of any such concerns in the light of its statutory duties.

This is, word for word, the exact same response I got from Lord Young via my MP, Mike O’Brien, after I wrote to him in October (and exactly the same response the Free PAF petition got earlier last year). No mention of data.gov.uk, no mention of the Ordnance Survey consultation, just a flat-out ‘no’, even though steps have already been put in place to free postcode data!

I can only assume that this was written (read: copied and pasted) by someone who has little or no understanding of the issues, has seen ‘postcodes’ and ‘Royal Mail’ and just whacked a standard response on. It doesn’t really do much for your faith in democracy and the system when you know more than the Prime Minister’s office seems to know.

Open Postcodes - My Letter to the FT


As some of you have seen, Sue Cameron of the Financial Times has written a confused and desperate-sounding article about the government’s recent announcement that it will open up the postcode dataset for public reuse.

As the FT doesn’t have any commenting functionality (my biggest bugbear with newspaper websites), I’ve drafted a letter to the editor. I’ve not sent it yet, so would appreciate your comments! The full text is below:

Sir,

In Sue Cameron’s article ‘Mandy and Gordon - the unravelling’, Ms. Cameron claims Gordon Brown’s opening of the PAF dataset will be a ‘free gift from the taxpayer to major money makers such as Google’.

Yes, large corporations will benefit slightly (very slightly - they won’t have to pay £1,200 per year to use the PostZon dataset), but the real winners will be small, community-built websites such as Planning Alerts, Job Centre Pro Plus, The Straight Choice and countless other sites, which, until recently, were powered by a free service - ernestmarples.com - that tied location data to postcodes.

These sites alerted people to planning applications and jobs in their area, providing useful, innovative services for free and taking the load off the stretched public sector.

However, in October, Ernestmarples.com was forced to close by heavy-handed legal action from the Royal Mail, and without a free location-based dataset, these sites could not function and were closed down.

Linking locations to postcodes is an extremely valuable service which should not be in the control of one private organisation; the data was sourced at the public’s expense and should be free for the public to reuse as they see fit. Who knows - it might end up saving the government a few quid too.

Yours faithfully,

Stuart Harrison

An iPhone App in 5 Days?


[Mockup of the Ratemyplace app on an iPhone and iPod touch]

iPhones are ace, aren’t they? Despite howls of derision from some camps, in the space of a few years the iPhone has completely turned the mobile world on its head.

I’ve wanted to build an iPhone app for Ratemyplace - the food safety ratings website I run at the council on behalf of 8 other councils in Staffordshire - for a while, but unfortunately I just haven’t had the time to learn Objective-C, the language that iPhone apps are built in. At the same time, I’m aware that not everyone has an iPhone, so I’ve been scouting around for ways to build apps that can be ported to other platforms quickly and easily.

About 6 months ago, I looked at Phonegap, which promised to let me build apps quickly using only HTML and JavaScript. I beavered away for a few weeks and finally had an app I was (sort of) pleased with. It used native features such as geolocation, and I used the UiUi Kit CSS library to make the app look as much like a native app as possible. I was all ready to submit it to the App Store. Then the 3.0 firmware upgrade came.

My iPhone app was busted - native geolocation no longer worked, and a little hack I’d worked out to get a toolbar at the bottom and a navbar at the top was also kaput. I tried to fix it for a bit, but other priorities took over and the project went on hold.

Then, a few weeks ago, I came across Appcelerator Titanium, a platform that does pretty much everything Phonegap does, but (in my opinion) much better. While Phonegap uses a portion of the iPhone’s API called the web view to display web pages as though they were native apps, Titanium goes one better: it compiles your code into a mixture of web views and native Objective-C code. Like Phonegap, it also works with Android, with a promise of more platforms to come.

This means that not only can you take advantage of many more native iPhone features, but your app is also much less likely to be rejected (as a lot of Phonegap apps were) for using what Apple calls ‘private APIs’.

I dusted down my old bits of code, bookmarked the API documentation, and set to work. In the end, as well as doing my other duties, the app took me 5 days (and one beery evening) to complete. You can see a few screenshots below:

http://www.flickr.com/photos/13492966@N06/sets/72157622918161445/

I submitted it to Apple yesterday, and I’m not expecting to hear anything for a good few weeks, if not months (Apple’s approval process for getting an app into the App Store is notoriously strict), but I’m reasonably hopeful. I’m now looking at porting my code to Android phones, with a view to looking at BlackBerry and Nokia devices too.

Looking forward a few years, with the growing adoption of HTML5 and CSS3 by mobile browsers, I can see apps becoming more browser-based, and (hopefully) this will lead to standards being adopted across the industry. Until then, though, it seems that platforms like Titanium are going to be the best way to get your apps to as many people as possible, as quickly as possible.

Fixmytweet!


I love Fixmystreet - it’s the nearest thing we’ve got to a national problem-reporting hub. While other council websites bury their reporting facilities under pages and pages of navigation, Fixmystreet is simple, quick and direct.

What I also love is its openness: as well as being able to pull out reports for problems in your area, you can inject problem reports directly into the system, meaning anyone can build their own front end for it.

I’ve been thinking about Fixmystreet a lot recently, after building my own front end for it via the Ubermap. I’ve also been thinking about repurposing our existing streetscene reporting forms and getting them to use Fixmystreet.

It was while fiddling about under the hood that I thought ‘wouldn’t it be cool if people could report problems via Twitter?’

Twitter is a fast way of communicating, and crucially, it’s mobile. There’s a plethora of mobile apps and frontends for Twitter, and even if you’re only on an aged Nokia 3310, you can send tweets by text message. This means that when you spot a problem, you can report it straight away. You could even add a picture.

I got cracking on it and, within the space of 24 hours, I had a prototype: users sign up with their Twitter username and email address on the Fixmytweet website, and whenever they spot a problem, they can send a tweet in this format:

@fixmytweet {postcode} {description of problem} {twitpic link (optional)}

(If you’ve got a smartphone with a Twitter app that supports the geolocation API you don’t even need the postcode!)

The system then parses the tweet and sends the necessaries to Fixmystreet. Fixmystreet then sends an email to the user, with a link for them to add more detail and approve the report. It then gets sent directly to the council! (and, if the council is smart, they can even integrate it with their CRM system)
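Under the bonnet, the parsing is nothing fancy. Here’s a rough sketch of the sort of thing involved - in Python, and emphatically not the actual Fixmytweet code; the postcode pattern and the TwitPic-only photo link are simplifying assumptions:

```python
import re

# Rough sketch only - not the real Fixmytweet code.
# Expects tweets like: "@fixmytweet WS13 6YY Broken paving slab http://twitpic.com/abc123"
TWEET_PATTERN = re.compile(
    r"^@fixmytweet\s+"
    r"(?P<postcode>[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2})\s+"   # UK-style postcode
    r"(?P<description>.+?)"                                  # the problem description
    r"(?:\s+(?P<photo>https?://twitpic\.com/\S+))?\s*$",     # optional TwitPic link
    re.IGNORECASE,
)

def parse_report(tweet_text):
    """Pull the postcode, description and optional photo link out of a tweet."""
    match = TWEET_PATTERN.match(tweet_text.strip())
    if not match:
        return None
    return {
        "postcode": match.group("postcode").upper(),
        "description": match.group("description").strip(),
        "photo": match.group("photo"),  # None if no TwitPic link was included
    }

print(parse_report("@fixmytweet WS13 6YY Broken paving slab outside the library http://twitpic.com/abc123"))
```

Once you’ve got the postcode, description and photo link out, it’s just a case of posting them on to Fixmystreet.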

I’m currently testing this out on a test version of Fixmystreet, and it seems to be working well so far. If you’re interested in helping me test it, please sign up on the Fixmytweet website and send a test tweet.

I’ll keep everyone posted on developments on Twitter and this here blog!

Twitter - to RSS or Not RSS?


It’s been a while since I first published my Beginners’ Guide to Twitter in Local Government, and since then (although, obviously not as a direct result of my blog post!) there’s been a plethora of local authorities using Twitter.

However, with a few notable exceptions, most councils seem to prefer the ‘fire and forget’ method of sticking an RSS feed into Twitterfeed and leaving it to run. Hell, even I was guilty of it up until recently - even though I was making an effort to monitor and engage, I was still letting the news articles automatically publish via Twitterfeed.

There’s been a bit of a discussion recently via Twitter as to the rights and wrongs of this, and I’m definitely in the ‘wrong’ camp now - A few months ago I turned off the RSS feed for @lichfield_dc press releases and now do them manually. Why? Well, I shall tell you…

You don’t talk to a robot

Twitter is, by and large, a social medium - if all you’re doing is chucking out press releases, people will assume you’re not interested in engaging or are just going to generate boring content. This means you aren’t going to get followed by as many people who might otherwise be interested in you.

Auto tweets look ugly

Take this press release for example. If I were to use an auto tweeter, here’s what it would look like:

News: Everyone is invited to get into the festive spirit at this year’s Christmas Fayre taking place in the city… http://is.gd/5eZUT

Pretty boring, huh? Plus the content gets cut short and can often not make sense. However, if I conjure up a manual tweet, I can tailor it much more to the Twitter medium and come up with something a lot more friendly:

http://twitter.com/lichfield_dc/statuses/6432523654

This means you can get your message across much more easily and in a style that fits in with Twitter’s informal approach.

Stopping information overload

Sometimes you’ll have a press release which, while it might be relevant for the website, might not be suitable for Twitter. If you’re blindly tweeting everything you put out, people might be much more likely to be turned off by your content and reach for that big button marked ‘unfollow’.

This is all well and good, but…

…I can hear the objections already - ‘I don’t have time to write manual tweets’ - but how difficult is it to write a 140-character tweet? If you’re going to take online engagement seriously, it’s definitely worth taking 2 minutes to show your followers you care.

If you really must use automated tweets, then make sure you mark your account up as such - call it something like ‘councilx_news’ and state very clearly that all you’re doing is publishing the latest news. Then, if further down the line you decide to engage a bit more, you can start another account for more human interaction.

I’m a massive hypocrite

However, after saying all this, I do agree that sometimes automated tweets have their place. We do still tweet food safety inspections, and @ldcplanning has been happily tweeting planning applications for a while now. It’s just not practical to tweet that sort of high-volume, samey content manually, and you don’t get the same advantages from a hand-crafted tweet as you do with press releases.

In fact, I recently did a straw poll amongst our followers about whether to get rid of the food safety tweets, especially as much of the content is replicated by @ratemyplace, but the majority seemed to like them. So maybe automated tweets aren’t as open-and-shut a case as I’d like them to be?

Simple(ish) Routes to Local Authority Open Data?


Now, I’ve been banging on about local government open data for a while - I’ve been opening up various datasets on the Lichfield District Council website and generally evangelising to anyone who will listen (and even to some who won’t!).

This JFDI publish-everything-you’ve-got-in-a-database-just-because-you-can approach is all well and good if you’re a techy like me, but there are three wee problemettes:

  • Problemette Number 1: You need to be a techy to get the data out there - a lot of local authorities don’t have the technical resource to do this stuff.

  • Problemette Number 2: It’s not sustainable - if I die (God forbid!) or leave the authority, whoever takes on my role might not have the skills or time to keep up my work.

  • Problemette Number 3: It’s not in a standard format - generally I’ve been making up XML schemas, or releasing in RSS or KML, which, while useful for local developers, doesn’t work semantically.

These issues have been playing on my mind for a while now, especially with central government releasing a whole load of data on data.gov.uk. How can we local authorities - a disparate band of organisations spread across the country, with different teams and different ways of doing things - do something similar, particularly with the problemettes above in mind?

It was with this running through my head that I chanced upon the Central Office of Information’s guidance on structuring information on the web for reusability. This sets out how organisations (particularly central government) can edit their HTML pages using RDFa - a markup that allows information to be ‘sucked out’ of web pages and repurposed as RDF data. A robot can then ‘see’ the pages and republish the data on a different site (in this case DirectGov).

Although this is still reasonably technical, anyone with a basic knowledge of HTML should be able to edit their templates and expose this data - much easier than slaving over SQL databases and messing around with XML. Suppliers could also easily modify their systems to add RDFa markup (although, suppliers being suppliers, they’ll probably charge an exorbitant fee for the work!).

At Lichfield, we’re currently putting together a consultation system and, following the directions on the COI site, I managed to mark up our consultations in RDFa. Other than a few issues with validation, I managed it pretty quickly - here’s the RDF in all its glory (you’ll need to view the source to see it properly).
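To give a flavour of what the markup looks like, here’s a simplified, illustrative sketch. It uses generic Dublin Core properties and a made-up consultation URL rather than the actual COI consultation vocabulary, so treat it as a taster of RDFa in general, not a copy of the COI schema:

```html
<!-- Illustrative only: generic Dublin Core properties, not the COI consultation schema -->
<div xmlns:dc="http://purl.org/dc/terms/"
     about="http://www.example.gov.uk/consultations/example-consultation">
  <h2 property="dc:title">Example consultation</h2>
  <p property="dc:description">A short summary of what we're asking residents about.</p>
  <p>Closes on <span property="dc:date" content="2010-01-31">31 January 2010</span>.</p>
</div>
```

The point is that the human-readable page and the machine-readable data live in the same template - a crawler can pull the title, description and closing date straight out of the HTML.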

The COI are asking central government departments to implement this new format for all their consultations from 1 January 2010, which is no mean feat, but I’d like to see a similar diktat (albeit one with a slightly more realistic timeline!) for local authorities too - we did it with IPSV, so why can’t we do this too?

Adventures in SPARQL Part 2 - Now With Added KML!


It’s been a few weeks now since I posted my first foray into the Edubase dataset, and since then there have been a few changes to the dataset, so I thought I’d give it another crack - the end result being that I might have something exciting for everyone to benefit from!

Since the dataset was published, the most significant change has been the addition of latitude and longitude, as well as eastings and northings. This means that we can now get location-based data much more easily, without a lot of mathematical mucking about with conversions.

To get a list of primary schools, addresses, locations and a latitude and longitude point, I can use the following query on the SPARQL endpoint:

```
prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
prefix sch-ont: <http://education.data.gov.uk/def/school/>
prefix space: <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/>

SELECT ?name ?lat ?long ?reference ?address1 ?address2 ?town ?postcode
WHERE {
  ?school a sch-ont:School ;
    sch-ont:districtAdministrative <http://statistics.data.gov.uk/id/local-authority-district/41UD> ;
    sch-ont:phaseOfEducation <http://education.data.gov.uk/def/school/PhaseOfEducation_Primary> ;
    sch-ont:establishmentName ?name ;
    sch-ont:uniqueReferenceNumber ?reference ;
    geo:lat ?lat ;
    geo:long ?long .
  OPTIONAL {
    ?school sch-ont:address ?address .
    ?address sch-ont:address1 ?address1 ;
      sch-ont:address2 ?address2 ;
      sch-ont:town ?town ;
      sch-ont:postcode ?postcode .
  }
}
ORDER BY ?name
```

This gives me this result, which I can easily (with a bit of code wizardry) turn into a KML file, which can then be used with Google Maps, Google Earth and Bing.
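For the curious, here’s a rough sketch of the sort of ‘code wizardry’ involved - it’s not the script I actually use, and the endpoint URL and the JSON output parameter are assumptions, so check the endpoint’s own documentation:

```python
import json
import urllib.parse
import urllib.request
from xml.sax.saxutils import escape

# Assumptions: the endpoint URL and the "output=json" parameter are placeholders -
# check the education.data.gov.uk documentation for the real values.
ENDPOINT = "http://education.data.gov.uk/sparql/education/query"
QUERY = "..."  # paste the SELECT query shown above here

def fetch_bindings(endpoint, query):
    """Run the SPARQL query and return the JSON result bindings."""
    url = endpoint + "?" + urllib.parse.urlencode({"query": query, "output": "json"})
    with urllib.request.urlopen(url) as response:
        return json.load(response)["results"]["bindings"]

def bindings_to_kml(bindings):
    """Turn each school row into a KML Placemark (KML wants lon,lat order)."""
    placemarks = []
    for row in bindings:
        name = escape(row["name"]["value"])
        lat = row["lat"]["value"]
        lng = row["long"]["value"]
        placemarks.append(
            "<Placemark><name>{0}</name>"
            "<Point><coordinates>{1},{2}</coordinates></Point></Placemark>".format(name, lng, lat)
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>{0}</Document></kml>'.format(
            "".join(placemarks)
        )
    )

if __name__ == "__main__":
    kml = bindings_to_kml(fetch_bindings(ENDPOINT, QUERY))
    with open("primary-schools.kml", "w") as f:
        f.write(kml)
```

The resulting file can be dropped straight onto Google Maps, Google Earth or Bing.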

I’ve generated two KML files for Lichfield District, Primary and Secondary, which I’ve used on the (still in development) ‘ubermap’ on the Lichfield District Council website.

This was quite easy to replicate for everyone, so, because I’m a nice fella, I’ve put together a script that generates KML files for primary and secondary schools in any council area in the country. All that I ask is that you save the files to your server if you use them, so you don’t cane my bandwidth!

Just in case you’re interested…

To get the list of councils for the above script, I did another bit of SPARQLing, this time using the Office for National Statistics SPARQL endpoint. The query I used was as follows:

```
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?authority ?label
WHERE {
  ?authority a <http://statistics.data.gov.uk/def/administrative-geography/LocalAuthority> ;
    rdfs:label ?label .
}
ORDER BY ?label
```

This returns the authority label, as well as a unique URI.

Cautious Optimism O’clock - Ordnance Survey Opening Up Data


If you have an interest in open data, there’s a pretty good chance that you’ve heard today’s announcement that the public will have more access to Ordnance Survey data. Details are a bit sketchy at the moment, but data set to be released includes boundary data, postcodes, and mid-scale mapping.

Consultation will start soon on exactly how this will happen, but it’s about bloomin’ time. It’s especially good to see Stephen Timms, the Minister for Digital Britain, ticking all the open data boxes with this little titbit:

About 80 per cent of public sector data mentions a place. Making Ordnance Survey data more freely available will encourage more effective exploitation of public data by businesses, individuals and community organisations.

Which is exactly what a lot of ‘us lot’ have been saying: open data is all well and good, but if the tools that we need to make the data really useful (such as postcode and boundary data) are locked up, then what’s the point?

The two datasets that really piqued my interest were boundary data and postcodes. Boundary data is probably one of the most ridiculous examples of OS’s derived data conditions: because all political boundaries (local authorities, parish councils etc.) were originally plotted on an Ordnance Survey map, the boundary data is classed as derived data - and because it’s derived data, OS own the copyright, so you can’t legally use it in any way you choose.

Even walking a boundary with a GPS device and plotting the output is verboten, because how would you know where a boundary was without looking at an OS map?

Postcodes is an interesting one, and one I’d like to know more about. If you’re a regular reader of this blog, then you’ll know that my folk devil du jour is the Royal Mail, who are the copyright holders of postcode data and recently shut down ErnestMarples.com - provider of postcode data to many useful non-profit sites. Now, if the Royal Mail have exclusive rights to postcode data, where do OS come in?

My guess is that this dataset is the Codepoint product. I’m not sure how Codepoint differs from the Royal Mail offering, but it’ll be interesting to find out. It could also be a potential use of the derived data rule for good, instead of evil - after all, the Royal Mail use OS maps to plot postcode data, so surely this is derived? (and therefore the intellectual property of OS, rather than the Royal Mail)

This announcement also comes off the back of the ongoing plans to privatise Ordnance Survey, and hopefully means that lessons have been learned from the earlier ‘sort-of-kind-of’ privatisation of the Royal Mail in 2000, which pretty much means the Government have little say in what the Royal Mail does with regard to PAF licensing.

If Ordnance Survey do become a private organisation, then it’s essential that, before selling them off, we make sure that any socially useful data they hold is put back into the hands of the public.

I’m excited (but cautiously so).

Barcode Posters - the Second Coming


With all this talk of data and online, we can often lose sight of the real, physical world (as a certified iPhone addict, I know I do - there have been a few times I’ve almost walked into a lamppost because I was paying more attention to my phone than to what’s going on in front of me!). Digital is all very well, but how do we link this in with real life?

Adrian Short (he of Mash the State fame) recently blogged about his barcode posters site, which allows you to turn RSS feeds into printable documents with QR barcodes attached to each article. People with smartphones that can read barcodes (iPhone, Android etc.) can then scan the code and link directly to the article. It’s almost like hyperlinks for real life.

I’ve had this idea on the back burner for a while now, but after tweeting about seeing the first new-style Ratemyplace poster ‘in the wild’ (Ratemyplace being a website I run which publishes food safety scores online), I got this reply:

http://twitter.com/tonypiper/status/5561369295

This made me think: it’s a perfect fit for barcode posters - people can see the inspection certificate and scan the barcode to find more information. You can see an example of what I dun here.

Now, I know what you’re thinking: ‘but generating images server-side is hard! It’ll take me ages fiddling around with various libraries and I’ll be pulling my hair out with frustration!’ Not so. Thanks to the wonderful people at Google, all you need to do to generate a barcode for a given URL is this:

http://chart.apis.google.com/chart?chs=125x125&cht=qr&chl={url-goes-here}

You can then stick it in the src attribute of an IMG tag and display it on a page like so:
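Something along these lines should do the trick (an illustrative snippet only - example.com stands in for your own page):

```html
<img src="http://chart.apis.google.com/chart?chs=125x125&amp;cht=qr&amp;chl=http%3A%2F%2Fexample.com%2Fyour-page"
     alt="QR code linking to this page" width="125" height="125" />
```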

(Make sure you encode the URL of the website either on the server or using this handy online url encoder.)

Simple innit? There’s more explanation of the whys and wherefores of QR barcodes on the Barcode posters website. Now get out there and barcode!

Impossible Possibilities - a Wishlist for Linked Data


I’ve been playing around with the Ordnance Survey linked data and, while it’s not perfect, it’s certainly a step in the right direction.

It’s got me thinking though - in an ideal world, where an ‘information-wants-to-be-free’-esque hippy such as myself could get all the data I wanted, what cool stuff could be added on to this dataset to make it really useful?

  • Shape files for all administrative regions

Yes, now this really is cloud cuckoo land. I know we can use the unit ID in combination with OpenSpace to display the layers, but, let’s be honest, OpenSpace isn’t great.

As with the linked data, it’s a step in the right direction, but the mapping (until you get down to street view) is not designed for screen, and the API is still a bit too buggy. I know there’s also a few licensing issues that a couple of people aren’t too comfortable with. Besides, shouldn’t we be able to use the platform that we want to use?

  • Postcodes in every administrative region

This is where things get really cool: imagine if, in every resource, there was a list of every postcode contained in that area - for example, looking at the resource for Coleshill, we could have listed every single postcode in that parish.

This would mean that, once others started making datasets that used the OS linked data URIs as identifiers, a simple postcode search would reveal everything that was relevant to that area.

Even better, every postcode could have its own URI - for example http://data.royalmail.co.uk/postcode/WS13+6YY (now you know I’m dreaming!) - which gave information about where that postcode was (lat/long, parish, police authority, council etc.).

This is just a taster to get some conversation going, but please feel free to add your own ideas in the comments. Forget about licensing - let’s just imagine a brave new world of free data without limits and go mad on this thing.