Wednesday 25 April 2012

Primary

We have recently been through an upgrade to Windows 7 at work. Now all you normal people are going to be thinking 'wow, so big deal, my laptop's had Windows 7 on it for years, welcome to the 21st century grandad', and kinda fair enough. Except my laptop's had Windows 7 on it for years, which is why I use Ubuntu.
Anyway, so the migration came, and because we're a systems team and use all sorts of non-standard software we had all sorts of problems. Well, I say 'we', I mean 'my colleagues', as I had almost no problems with the switchover. Now I have to say that this is at least partly because I was one of the last to go through the transition, but I'm equally convinced it's also because I am the youngest member of my team.
I hit my early teens just as the IBM PC (or clones of it) and MS-DOS were becoming the standard. Windows was only just becoming stable in those days, and the second-hand PC my parents bought wasn't really powerful enough to support Windows 3.1 (besides which it was expensive), so whilst my friends played fancy-looking games on their Amigas, I was getting to grips with the DOS file structures that have influenced the way we've interacted with almost all technology ever since. I don't think I'm unique in this: certainly by the time I was at university most of my friends were pretty well versed in most Windows software, and we started using email. I think all this meant that such technology - and, possibly more importantly, the user logic of such technology - became embedded in my brain at a point early enough to become part of my primary functional reasoning. This means that when it comes to using a new phone, computer or pretty much anything else that requires an operating system, I don't need a manual. The triumph of the Windows/Mac OS approach is that it has dictated the logical mapping of all other user interfaces. Some people may argue that apps create a different approach, but I would argue that these are just an extension of the sorts of programs one has always seen on a desktop. Fundamentally, all these things work in the same way, and if they don't, I expect to work them out pretty quickly through trial and error.
My brain is adapted to technology, and in the digital world I'm pretty old (I actually used DOS!). Imagine, then, how well adapted to any new technology a real digital native would be (not that I'm sure we have any real ones yet: people in their late teens still have nostalgia for a golden age of pre-millennial, pre-digital simplicity).
In practical terms, this is a good thing: we should be able to use all technology with ease, and since software providers have failed to make their software more intuitive, our learned intuition with regard to the functional logic of such systems is essential. However, is it also changing the logic of our thoughts? If we teach ourselves to think in a way that follows operating-system logic, is there not a danger that we dehumanise our thought process?
'Uh oh, here we go, another panic about technology changing humanity,' I hear you say, 'we had that with telly.' And we did, and it did, a bit, but telly is passive. The use of most modern technology is interactive: it is about our expectations of the world and how they are realised. That our expectations are met almost instantly by technology is partly an illusion, because those expectations were created, or at least heavily modified, by that technology in the first place. So our expectations become distorted, and the rest of life - the natural world, politics, education - will fail to deliver instant results. It is possible that we are already seeing this failure of the non-digital world to deliver instant gratification in the shock of recent graduates at not being slotted straight into the middle-management positions they so obviously deserve for having spent the previous three years racking up debt, drinking and reading the occasional book. Then again, maybe this is just the eternal hubris of youth.
It is entirely possible that the functional logic of digital technology has no impact on our ideas about how the world should work, but it's unlikely. If we interact with the world through digital technology, then that technology must of necessity have some bearing on the interaction. The danger lies in whether the process itself - the act of navigating a set of digital rules and conventions - takes over from the ends it was designed to lead to. A friend recently referred to Twitter as a potential 'Linus blanket' for people who would otherwise be finding more direct outlets for their outrage. I think that at their worst, Twitter and other social media can perform this function. However, at their best, such technologies allow us to organise and unite disparate groups of like-minded individuals. Clearly the way we live our lives, and therefore the way we function, is informed and modified by the technology we use. It is up to us to make sure that the uses we put it to encourage a human world rather than a purely functional one.

Tuesday 17 April 2012

Pentennial

One of the things that constantly dogs my thoughts about the way things could be is the set of obstacles presented by the way things are. Moreover, there is the problem of my moral reluctance to change some of the things that are. This is all far too abstract; let me elaborate.
Our democracy, like most others, is based around parliamentary terms of around five years, and whilst a government or leading politician may hope to extend the height of their influence to two or three of these terms, that still only leaves them ten to fifteen years to make their mark. And let's not kid ourselves, politicians are in their job to make a mark. They're not there to better their constituents' lives or the lot of the human race, although that may be one of the ways they intend to achieve their goal; they are there because at some point during a history lesson they decided they wanted to appear in one of those books. We would do well never to forget this fact when dealing with politics. I'm sure there are exceptions to this rule, just as I'm sure most politicians will have convinced themselves that they are that exception. However, we would do just as well to assume that there are no exceptions, as it will help us to understand the primary reason for the five(ish)-year term limits: egos. Anyone who thinks that they deserve to appear in a history book will have a massively inflated sense of their own ability. Given the chance to govern, such people will naturally assume they are the best at it and will struggle to see why they should ever stop. The five-year parliamentary term means that at reasonably regular intervals we all get a chance to burst their bubble. This is the very cornerstone of democracy and essentially its point: the power of any tyrant is, in principle, limited by time. However, the very thing that gives democracy its potency is rapidly becoming its biggest failing.
The short-term nature of parliamentary terms means that politicians are only ever really interested in short-term results that will stamp their image on the pages of history. Thus, every five years or so, they dick about with the schools or the NHS something rotten, just to prove they are doing something. Talk to anyone in any job that is in any way impacted by the government and you will quickly become acquainted with the sense of jaded fatigue that such endless tinkering inevitably engenders. This issue is not exclusive to the public sector either (although it tends to bear the brunt): sectors such as financial services face a seemingly endless barrage of legislation. Certainly in the case of finance, this is partly because a Finance Act has to be passed every year in order to keep collecting income tax, so whoever's in charge at the time sees it as practically their duty to tinker with something whilst re-enacting that legislation. Of course the argument is that the economy is constantly changing and so legislation has to be continually adapted to account for these changes. Indeed, the annual Finance Act is the ideal short-term legislation, unlike much other legislation that is enacted permanently in order to deal with a transient issue. Weirdly, this legacy of obsolete legislation clogging up the statute book is a much-overlooked consequence of political short-termism. The hubris that allows politicians to believe that they are the best person for the job also allows them to believe that they have found the ultimate solution to any problem: that the legacy of their time in office will be unimpeachable legislation.
On the whole, it seems likely that our system would work better and our politicians would be more accountable if the majority of our legislation were time-limited. Obviously that wouldn't be a problem if we were starting from scratch, but setting up the initial legislation required to effectively clear down the statute book would be a mammoth and complex task for which there is no appetite. Still, it wouldn't hurt to acknowledge the short-term nature of future legislation and draft it accordingly.
Much more problematic are the issues that won't affect politicians in their lifetimes but which nonetheless require action and legislation now. It is all too easy for us all to forget these issues or put them aside for another time. A strong favourite of the pro-pollution lobby is to insist that we all wait for the proof to be incontrovertible before we take rash decisions that may harm their short-term profits. Whilst this is a clear case of vested interests trying to influence policy for short-term outcomes, it does illustrate the biggest problem for long-term policy makers: how can you know the right solution to a problem that has yet to occur? In such situations, politicians are forced to rely on a certain amount of faith, both in their own ability (not hard) and in the ability of those advising them and feeding them information. For a politician, such faith should come easily, as it reflects well on their good judgement in having sought the advice of these people in the first place. Unfortunately the rest of us cannot be so confident, as we know that such advisers are largely drawn from a pool of people who position themselves to be chosen, i.e. they're only there in the first place because they have a vested interest.
So how do we ensure that the right people are chosen as advisers? Must we elect these people too? That is probably not a sensible choice, as they are supposed to be there to deal with the complexities that cannot be reduced to a soundbite. However, even if we don't vote for these people, we vote for those who choose them, so the advice offered will vary with the government seeking it. Is it possible, then, to have advisory bodies that are appointed as the official arbiters of knowledge on certain topics? It is not possible for an organisation to be entirely objective, but that should not stop it aspiring to be. Such organisations would need to be able to defend any position they held, on demand. They would need to demonstrate that they had considered all relevant theories. They would need, in effect, to be above reproach. This is patently as impossible as being entirely objective. However, we have a model for such organisations: the recently created Office for Budget Responsibility, or the slightly less recently created Office for National Statistics. Both organisations have dealt with the issues of objectivity and impartiality and come out the other side, so could such a model be applied to scientific or other advisory bodies? Surely it's worth a try.
Unfortunately, setting up independent advisory bodies doesn't change the fact that the people making policies based on that advice are still only looking to the short term. We could try to find ways to make their actions more long-term, such as compelling them to act on the advice given by those independent bodies. Of course, we already have a way of compelling politicians to do things: our vote. Maybe, then, the biggest change we need is in the way we use our vote.