CAA UK 2008
As the title kind of gives away, the annual Computer Applications in Archaeology (CAA) UK Chapter conference took place over the last few days. It was held in York this time, rather than Southampton as it has been for the last few years, heralding, I hope, a plan to move around the country from now on.
Normally I like to blog daily from these conferences, whilst it’s all fresh in my mind, and because it’s easier to blog about one day’s papers than two, but guest houses with wifi in York are a little difficult to come by (gotta get me one of those mobile broadband thangs).
Soooo… where to start…
Duh, with the thanks and acknowledgements of course! So, thanks to the ADS and the University of York for hosting the conference at the gorgeous King’s Manor, arranging lots of yummy food, a great visit to Dig (the only museum about actually doing archaeology, as far as I know) and a conference meal at possibly the largest and grandest pizza restaurant in the world.
Friday kicked off with sessions on collaboration, mainly heritage agencies getting together to put their data on the web and provide better access to their sites and collections. Leif, however, threw us all a googly with a great talk on assessing the veracity of different historical maps in an empirical and logical manner, and figuring out whether they are suitable for the job you want them to do (in a geospatial sense, mostly). Really, this is just a way of formalising a process we should all go through when choosing datasets to use, but rarely do. For example: is the map complete or incomplete? Are the boundaries of the entities fuzzy or hard (e.g. formalised boundaries versus best guesses)? Does it represent a period of time (like the Roman empire) or a snapshot? Are the entities markers (like point data, mostly) or an accurate depiction of a location? Is the map observed or derived (like a model)? Then, what happens when you combine two maps? Which of the above are incompatible?

I thought about this for a while afterwards, and got to wondering a few things: mainly that definitions of completeness depend entirely on what the user of the map needs it to do. In fact every map is incomplete up to a point (because it can’t depict everything), but if it contains all the entities you need then that’s fine. I am hoping Leif is going to post his powerpoint (hint, hint) to make it all a bit clearer to follow (and so that you don’t miss the great map at the end and Leif’s description of its genesis).
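Leif’s checklist is basically metadata begging to be written down. As a thought experiment (entirely my own toy sketch with made-up field names, not Leif’s actual framework), here’s how you might record those properties and get a machine to flag the incompatibilities when you overlay two maps:

```python
from dataclasses import dataclass

# Toy sketch of the map-veracity checklist. Each map gets a small
# metadata record; combining two maps just means comparing records
# and flagging the mismatches. Field names are my own invention.

@dataclass
class MapProfile:
    name: str
    boundaries: str   # "hard" (formalised) or "fuzzy" (best guess)
    temporality: str  # "period" (e.g. the Roman empire) or "snapshot"
    entities: str     # "marker" (point data) or "depiction" (accurate location)
    provenance: str   # "observed" or "derived" (e.g. a model)

def combination_warnings(a: MapProfile, b: MapProfile) -> list[str]:
    """List the attributes on which two maps disagree before overlaying them."""
    warnings = []
    for attr in ("boundaries", "temporality", "entities", "provenance"):
        va, vb = getattr(a, attr), getattr(b, attr)
        if va != vb:
            warnings.append(f"{attr}: {a.name} is '{va}' but {b.name} is '{vb}'")
    return warnings

roman = MapProfile("Roman empire", "fuzzy", "period", "depiction", "derived")
parish = MapProfile("Parish boundaries", "hard", "snapshot", "depiction", "observed")
print(combination_warnings(roman, parish))
```

None of the mismatches it prints are necessarily fatal; the point, as in the talk, is that you at least make the decision consciously.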
Other sessions attempted to address an issue that has been bubbling under for a while now: how to handle uncertainty in virtual reconstructions. How do you separate what is real from what is conjecture without adding artificial intrusions into the image? Previous discussions have suggested varying levels of transparency, whereas the ideas suggested here were to create a different model for each of your conjectures, or to embed links to images, discussion and interpretation within the model itself.
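For what it’s worth, here’s a back-of-the-envelope sketch of the “embed it in the model” idea (entirely my own invention, not anything shown in the session): every element carries a certainty grade plus links to the evidence behind it, and one possible rendering rule maps certainty onto transparency, as in the earlier proposals.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each element of a reconstruction records how
# certain it is and links out to the supporting evidence, so a viewer
# can render (or hide) conjecture without painting artificial
# intrusions into the scene itself.

@dataclass
class Element:
    name: str
    certainty: str  # "excavated", "documented" or "conjecture"
    evidence: list = field(default_factory=list)  # links to images/discussion

# One possible rendering rule: map certainty onto alpha, as the
# "varying transparency" proposals suggested.
ALPHA = {"excavated": 1.0, "documented": 0.7, "conjecture": 0.3}

roof = Element("timber roof", "conjecture",
               ["http://example.org/discussion/roof-pitch"])
print(f"render '{roof.name}' at alpha {ALPHA[roof.certainty]}")
```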
The afternoon sessions on day one were varied, but mostly pretty techy. In particular I loved the final paper, on the archaeology of Jodrell Bank, complete with archive recordings of 1950s astronomers bouncing echoes of their voices off the moon. Like last year, the idea of acoustic modelling came up. Can acoustic modelling help us interpret buildings or spaces? The answer seems to be yes, but I don’t think we’ve yet addressed how you add large crowds of people into your model and what effect they might have.
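To put a rough number on the crowd question: people are surprisingly absorbent, and the classic Sabine formula, RT60 = 0.161 × V / A, makes the effect easy to see. All the figures below are invented for illustration, not taken from the paper:

```python
# Sabine's formula: reverberation time RT60 = 0.161 * V / A, where V is
# the room volume (m^3) and A is the total absorption (m^2 sabins).
# Adding people adds absorption, so a full space sounds noticeably
# "drier" than an empty one. All numbers here are assumed for the demo.

def rt60(volume_m3: float, absorption_sabins: float) -> float:
    """Sabine reverberation time in seconds."""
    return 0.161 * volume_m3 / absorption_sabins

volume = 5000.0            # a biggish hall (assumed)
empty_absorption = 400.0   # walls, floor, roof (assumed)
per_person = 0.45          # a commonly quoted mid-frequency figure

print(f"empty hall:      {rt60(volume, empty_absorption):.2f} s")
print(f"with 300 people: {rt60(volume, empty_absorption + 300 * per_person):.2f} s")
```

Going from roughly two seconds of reverb to one and a half is very audible, which is why a model of the empty building only tells you half the story.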
Saturday’s sessions included work on predictive modelling and other statistical models for discovering patterns in archaeological data. This is quite controversial in archaeology because it tends to assume that all human activity is environmentally determined rather than socially determined, mainly because the social effects are so hard to model. It is also prey to issues arising from flawed or incomplete datasets (back to Leif’s talk) and biases in the data collection strategy. However, I thought Saturday’s talks tried to address some of these issues, or at least acknowledge that there is a problem. I’m all for using the technique myself, but as one element in the model, not the whole thing.
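To make the determinism critique concrete, here’s a toy predictive model on synthetic data (the covariates are my own choosing, not from any of the talks). Notice that every input is environmental, which is exactly the complaint:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative predictive model on synthetic data: probability of a
# site given environmental covariates only. Social factors are absent
# purely because they are hard to quantify, so the output should be
# treated as one line of evidence, not the whole interpretation.

rng = np.random.default_rng(0)
n = 500
slope = rng.uniform(0, 30, n)         # terrain slope in degrees
dist_water = rng.uniform(0, 5000, n)  # metres to nearest river

# Synthetic "truth": sites favour gentle slopes near water.
p = 1 / (1 + np.exp(0.15 * slope + 0.001 * dist_water - 3))
site = rng.random(n) < p

X = np.column_stack([slope, dist_water])
model = LogisticRegression().fit(X, site)
print("P(site) on a flat spot 200 m from water:",
      model.predict_proba([[2.0, 200.0]])[0, 1])
```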
I should give Joseph a plug: he talked about Oxford’s One Laptop per Archaeologist plan (actually it’s now one Openmoko smartphone per archaeologist, but who’s being picky) and our overarching plans for openness: open data, open access and open standards. It made me quite glad to be working there!
Props in the final session must go to Nicola Schiavottiello for his “Magic Tour”, using clever little hand-helds and tablet PCs to superimpose 3D models of objects over the actual scene in front of you. Nicola sees it being used for historic tours, showing you what a place was really like, and I think it’s a fantastic idea!
Finally, the Antiquist crowd got together for a “show and tell” about what cool things we’re all working on or need help with. I should say that if you’re doing archaeological computing of any sort and want help, or just want to bounce ideas off people, then it’s a good place to go.
Throw in some biting cold, some strong winds, a little snow, a large church building, and there you have it. Roll on next year…