At the end of November (yes, I know it's a while ago, life got in the way) I was involved in an event called #HackLancaster, part of a wider project in my home town to try to raise interest in the heritage of the old city area, Beyond the Castle. This was, I think, a great success on several levels, but it also threw up some issues with archaeological and geospatial data that I hadn't really considered. This was going to be a much longer post, going through everything I think is wrong with trying to put archaeological data online as is, but since it's been sitting in draft form, I think it's time to bite the bullet and get something out there…
So, without further ado: my job for #HackLancaster was to take a stuffy old Access database, some shapefiles and some GeoTIFFs, and make them available as an API. Here is the toolkit I ended up using:
- Various scripts for converting Access databases to PostgreSQL, collected and tinkered with here and here.
- csvkit for loading CSV into PostgreSQL with the bare minimum of effort.
- Dirt Simple PostGIS API for opening up spatial functionality in my PostgreSQL database over HTTP (see the sketch after this list).
- PGRest for opening up standard non-spatial functionality in the database over HTTP (if I were doing this now, I might try PostgREST instead).
- GeoServer for serving historic base maps over WMS, and spatial data over WFS/WMS as needed.
- GitBook for documentation. You can see my documentation here. Live documentation meant it could be updated and added to as we went along.
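To give a flavour of what the spatial endpoint offered, here's a minimal sketch of the kind of PostGIS query that can be exposed over HTTP in this way. The table and column names (monuments, name, geom) are hypothetical stand-ins for the real schema, and the coordinates are just an illustrative point in British National Grid:

```sql
-- Find records within 500 m of a point, returned as GeoJSON.
-- 'monuments', 'name' and 'geom' are hypothetical names; geom is
-- assumed to be stored in EPSG:27700 (British National Grid).
SELECT name,
       ST_AsGeoJSON(geom) AS geometry
FROM monuments
WHERE ST_DWithin(
        geom,
        ST_SetSRID(ST_MakePoint(347600, 461600), 27700),
        500  -- metres, since EPSG:27700 uses metre units
      );
```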
Insight Number 1: The one thing I'd take away from this part of the project is that all of these pieces of software were easy to set up, configure and use. As a toolkit for making this kind of data available quickly, you can't go far wrong.
Insight Number 2: You need to do more than simply make the data available via an API. Complex relational data structures and archaic terminology are a barrier to non-specialists, no matter how technically competent they are. What's the solution here? Collapse the relational structure somehow (see the sketch below)? It would be nice to have an automated method in case of updates to the source data.
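One way to do that collapse in PostgreSQL itself would be a flat view over the relational tables; because a view is computed from the base tables, it tracks updates to the source data automatically. A minimal sketch, with entirely hypothetical table and column names:

```sql
-- Flatten the relational structure into one human-readable table.
-- 'monuments', 'periods' and 'monument_types' are hypothetical names.
CREATE OR REPLACE VIEW monuments_flat AS
SELECT m.id,
       m.name,
       p.label AS period,         -- plain label instead of a foreign key
       t.label AS monument_type,  -- likewise
       m.description
FROM monuments m
LEFT JOIN periods p ON p.id = m.period_id
LEFT JOIN monument_types t ON t.id = m.type_id;
```

A view like this could then be exposed over HTTP just like a real table, hiding the joins and the lookup codes from non-specialist consumers.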
Insight Number 3: Technologies that seemed like a really obvious fit to a GIS person are not so obvious to everyone else. The historic mapping WMS was completely ignored in favour of Google Maps or MapBox, and the spatial RESTful endpoint was only used, under some duress, to convert data from British National Grid to Spherical Mercator (see the sketch below). Functionality that could have been provided by the spatial RESTful endpoint was generally fudged.
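For reference, that conversion is close to a one-liner in PostGIS: British National Grid is EPSG:27700 and Spherical Mercator is EPSG:3857. A minimal sketch, again with hypothetical table and column names:

```sql
-- Reproject geometries from British National Grid (EPSG:27700)
-- to Spherical Mercator (EPSG:3857) for use with web base maps.
-- 'monuments' and 'geom' are hypothetical names; geom is assumed
-- to be stored in EPSG:27700.
SELECT id,
       ST_AsGeoJSON(ST_Transform(geom, 3857)) AS geom_mercator
FROM monuments;
```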
Insight Number 4: It's important to make committing code part of the hack. While we asked the teams nicely, not one of them made their code available after the event. In retrospect, since we gave out prizes, we should have made committing code a prerequisite for winning one!
Finally, despite my slight gripes above, the teams involved came up with some extremely nice visualisations of the data. If I can persuade them to make these publicly available, I'll definitely put out some links.