Wednesday, 25 April 2012

Primary

We have recently been through an upgrade to Windows 7 at work. Now all you normal people are going to be thinking 'wow, so big deal, my laptop's had Windows 7 on it for years, welcome to the 21st century grandad', and kinda fair enough. Except my laptop's had Windows 7 on it for years, which is why I use Ubuntu.
Anyway, so the migration came, and because we're a systems team and use all sorts of non-standard software, we had all sorts of problems. Well, I say 'we'; I mean 'my colleagues', as I had almost no problems with the switchover. Now, I have to say that this is at least partially due to the fact that I was one of the last to go through the transition, but I'm equally convinced it was partly because I am the youngest member of my team.
I hit my early teens just as the IBM PC (or clones of it) and Microsoft DOS were becoming the standard. Windows was only just becoming stable in those days, and the second-hand PC my parents bought wasn't really powerful enough to support Windows 3.1 (besides which, it was expensive), so whilst my friends played fancy-looking games on their Amigas, I was getting to grips with the DOS file structures that have influenced the way we've interacted with almost all technology ever since. I don't think I'm unique in this: certainly by the time I was at university most of my friends were pretty well versed in most Windows software, and we had started using email.

I think all this meant that such technology - and possibly more importantly, the user logic of such technology - became embedded in my functional brain at a point early enough to become part of my primary functional reasoning. This means that when it comes to using a new phone, computer or pretty much anything else that requires an operating system, I don't need a manual. The triumph of the Windows/Mac OS approach is that it has dictated the logical mapping of all other user interfaces. Some people may argue that apps represent a different approach, but I would argue that they are just an extension of the sorts of programs one has always seen on a desktop. Fundamentally, all these things work in the same way, and if they don't, I expect to work them out pretty quickly through trial and error.
My brain is adapted to technology, and in the digital world I'm pretty old (I actually used DOS!). Imagine, then, how well adapted to any new technology a real digital native would be (not that I'm sure we have any real ones yet: people in their late teens still harbour nostalgia for a golden age of pre-millennial, pre-digital simplicity).
In practical terms, this is a good thing: we should be able to use all technology with ease, and as software providers have largely failed to make their software more intuitive, our learned intuition about the functional logic of such systems is essential. However, is it also changing the logic of our thoughts? If we teach ourselves to think in a way that follows operating system logic, is there not a danger that we dehumanise our thought process?
'Uh oh, here we go, another panic about technology changing humanity,' I hear you say, 'we had that with telly.' And we did, and it did, a bit, but telly is passive. The use of most modern technology is interactive: it is about our expectations of the world and how they are realised. The fact that our expectations are met almost instantly by technology is partly an illusion, created by the fact that those expectations were themselves created, or at least heavily modified, by that technology. So our expectations become distorted, and the rest of life - the natural world, politics, education - will fail to deliver instant results. It is possible that we are already seeing this failure of the non-digital world to deliver instant gratification in the shock of recent graduates at not being slotted straight into the middle-management positions they so obviously deserve for having spent the previous three years racking up debt, drinking and reading the occasional book. Then again, maybe this is just the eternal hubris of youth.
It is entirely possible that the functional logic of digital technology has no impact on our ideas about how the world should work, but it's unlikely. If we interact with the world through digital technology, then that technology must of necessity have some bearing on the interaction. The danger lies in whether the process itself - the act of navigating a set of digital rules and conventions - takes over from the ends to which it was designed to lead. A friend recently referred to Twitter as a potential 'Linus blanket' for people who would otherwise be finding more direct outlets for their outrage. I think that at their worst, Twitter and other social media can perform this function. However, at their best, such technologies allow us to organise and unite disparate groups of like-minded individuals. Clearly the way we live our lives, and therefore the way we function, is informed and modified by the technology we use. It is up to us to make sure that the uses we put it to encourage a human world rather than a purely functional one.
