Thursday 29 September 2011

Autonomy as a measure for societal harm from online behaviours

Debates about internet governance are not sustainable at their current level. All too often we hear three sides of an argument, but we are left with no rational framework for evaluating the relative merits of each point of view.

Invariably the viewpoints offered are:
  • Regulation is needed
  • Regulation is not needed
  • In an ideal world we should regulate, but the nature of the internet makes regulation impossible/impracticable
Whether the topic is privacy, free speech, protection of intellectual property rights, national security or freedom from oppression, assertions driven by fear or self-interest rise to prominence above evidential findings or deep and thorough rational thinking.

And by fear I'm not just talking about those who fear that harm (personal, economic or national) may come through an open internet, but also about those who fear the positive power of the internet will be ruined if regulation is enforced.

But can we even start to propose a model, a rational approach, to assessing harm, risk and benefit of digital technology and various regulatory approaches in such a complex domain?

Can we quantify harm from loss of privacy and balance against the benefits brought by linked and accessible data? Or set the perceived risks to political discourse from blocking and censorship against the perceived benefit from limiting access to harmful content?

Your first instinct may be to say "no!"  But if we don't explore new ways of understanding the problem domain we risk stalemate: debates led by human instinct - great for so many of life's problems, but wholly inadequate for some complex, chaotic social problems where the answer sometimes proves to be counter-intuitive.

In some respects I see parallels with the debate about recreational drugs and society, not that there's an obvious comparison between drugs and the internet (and no, I'm not smoking something)! I'm referring to the way the issues are debated in society.

The drugs debate is led by fear of a human self-harming behaviour with very obvious individual and societal dangers, and by a misunderstanding or over-simplification of an incredibly complex relationship between supply, demand, enforcement and human rights.

Self-harming behaviour exists online.  Publicity-seeking teenagers create an immutable record of their exploration of life that may affect future employment opportunities (at least in the short term, until the attitudes of employers correct). Antisocial online behaviours damage relationships. There is addiction to non-productive activities, and addiction to evidentially harmful activities.

And no, I'm not talking about a human right to overdose on heroin - I'm referring to the logistics of enforcing an absolute ban of harmful substances - which can be produced with minimal resources - without impinging on important personal freedoms.

For me, one report changed the quality of debate around drugs and society.  Drug harms in the UK: a multicriteria decision analysis by Professor David Nutt et al. attempted to assess the relative overall harm of common drugs by combining harm to individuals with harm to society, with the now infamous finding that alcohol is more harmful than heroin.

The analysis by the Independent Scientific Committee on Drugs didn't offer solutions, but it provided a methodology for assessing the scale of problems in society.
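To make the shape of that methodology concrete, here is a minimal sketch of a multicriteria decision analysis in the style of the drug-harms study: score each option against several harm criteria, weight the criteria, and sum the weighted scores into a single comparable figure. All criteria names, weights and scores below are invented for illustration; they are not taken from the actual report.

```python
# Hypothetical criteria and weights (weights sum to 1.0) - placeholders,
# not the criteria used by the Independent Scientific Committee on Drugs.
criteria_weights = {
    "harm_to_users": 0.45,
    "harm_to_others": 0.35,
    "economic_cost": 0.20,
}

# Hypothetical 0-100 harm scores for two unnamed options.
scores = {
    "option_a": {"harm_to_users": 30, "harm_to_others": 70, "economic_cost": 60},
    "option_b": {"harm_to_users": 80, "harm_to_others": 20, "economic_cost": 30},
}

def overall_harm(option: str) -> float:
    """Weighted sum of one option's scores across all criteria."""
    return sum(criteria_weights[c] * scores[option][c] for c in criteria_weights)

# Rank options from most to least harmful overall.
ranking = sorted(scores, key=overall_harm, reverse=True)
```

The interesting property - and the source of the "infamous" finding - is that an option scoring low on the most feared criterion can still rank highest overall once every criterion is weighed.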

Where would we start with a similar approach to information policy?  One idea presented by Professor Andrew Murray (LSE) during his keynote speech at the Human Rights in the Digital Era Conference at Leeds University School of Law this month was to consider autonomy - how do certain laws and technologies affect our ability to act as autonomous human beings? The premise is that autonomy is synonymous with freedom, and that when our autonomy is threatened (a) life lacks fulfilment, and (b) we are at risk of being manipulated into destructive behaviour.

My initial reaction to Andrew's speech was that the concept of autonomy can be assessed, relatively, and used to understand the impact - benefit or harm - of certain regulatory approaches.

Take the complex issues around free speech.  Self-publishing on blogs removes the need for publishers - it strips a level of control, and increases our autonomy. But self-publishing can be used to libel, harass, bully or otherwise harm individuals - affecting their autonomy.

In the absence of sufficient regulation, one individual may assert a level of power over another, affecting their autonomy. The risk or harm can be mitigated by law, but laws attempting to regulate, for example, speech can have unintended chilling consequences, with a greater overall impact on autonomy in society than the risk and level of individual harm. (For example, effective vicarious liability on ISPs through notice-and-take-down procedures for alleged - not proven - libellous assertions.)

There's also a risk that those who want to damage democracy will abuse their autonomy granted by democracy, and the natural response from government - to regulate - may, perversely, harm our autonomy to a greater extent than the initial threat.

In the above three paragraphs lies an equation. It might be incomplete, and the uncertainty in estimating the input variables may well render the result meaningless.  But in defining a process we're at least advancing the debate beyond instinct, unfounded assertions and self-interest.  It will, hopefully, encourage further analysis at a more meaningful level - a starting point for others to study and deconstruct or improve upon.
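One plausible reading of that equation can be sketched as follows - assuming, purely for illustration, that each term can be estimated on a common scale. Every function name and number here is a hypothetical placeholder, not a measured quantity or a method endorsed by the conference speakers.

```python
def net_autonomy_impact(gain_to_protected: float,
                        loss_to_regulated: float,
                        chilling_effect: float) -> float:
    """Net change in societal autonomy from a regulatory intervention:
    autonomy restored to those being harmed, minus autonomy removed
    from those regulated, minus the wider chilling effect of the rule.
    A negative result means the cure harms autonomy more than the
    disease did."""
    return gain_to_protected - loss_to_regulated - chilling_effect

# Illustrative scenario only: a notice-and-take-down regime for alleged
# libel gives a modest gain to claimants but a large chilling effect on
# lawful speech.
impact = net_autonomy_impact(gain_to_protected=10.0,
                             loss_to_regulated=4.0,
                             chilling_effect=12.0)
```

Whether the inputs can ever be estimated defensibly is exactly the open question the post raises; the value of writing the equation down is that it exposes which estimates the debate actually turns on.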

As with drug harms, I'd like to see this methodology applied across a wide range of internet themes: privacy, recognition, ownership, security, publicity, e-economy, confidentiality and trust.
