Tuesday, 27 November 2012

Children and privacy rights within the family: are we normalising a surveillance culture?

A question which seems to come up with increasing frequency is whether children have privacy rights from their parents or carers (including schools, etc).

A really good starting point on this is a 2011 paper, "Privacy for Children", by Shmueli and Blecher-Prigat, published in the Columbia Human Rights Law Review.

Shmueli and Blecher-Prigat highlight the lack of privacy rights enshrined in US law (and, as far as I can ascertain, within EU law) for children in a family setting.

That is, children have little or no legal right to privacy from their parents.

But they also discuss why this is. Parents have a duty of care:
"To be sure, the primary role and responsibility of parents is to protect their children"
The question of childhood privacy is not new; but online dangers - many real, many perhaps over-hyped - most definitely are.

Their paper discusses a shift in attitudes towards child privacy brought on by the perceived threat from dangers online:
"parents have always been able to invade their children’s privacy by going through their schoolbags, reading their personal diaries and the like"
They note that being able to "trust" a child in your care, where trust is warranted, was traditionally a sign of good parenting. I was raised, as I assume most people were, with the notion that it was definitely not right to read someone else's diary, even your own child's.
"Whereas “good parents” may have traditionally been encouraged to trust their children, today they are encouraged to safeguard their children, where this incontrovertibly involves invading their privacy in one way or another. Monitoring has become associated with good parenting, and the surveillance of children has been framed in a language of safety, protection, and care.
There are many examples of technology used to monitor children: checking social media use, tracking their location, trawling their internet history, and so on.

I suspect a large group of parents believe this shift is fair and necessary, whilst another significant group remains worried by what I will describe as a serious invasion of virtual personal space at a time when children need private spaces to learn and to develop their thinking and behaviour.

Shmueli and Blecher-Prigat don't dismiss the safeguarding element out of hand, recognising a child's need for guidance from their parents, who "act in their children’s best interests" through “natural bonds of affection”.

But they talk about a need for balance; and, notably, a need to recognise that a child's safety is not always at stake:
"Thus, when children’s physical or emotional safety is at stake, whatever interest in privacy they may have is outweighed by society’s interest in their protection. However, not all situations involve a child’s safety, and even when the goal is to prevent harm to the child, the harm of infringing upon the child’s privacy should also  be taken into account."

This balance will evolve as a child matures. I scrawled the diagram below as a rough attempt to show how a parent's duty of care, total in the early days when a baby is entirely dependent on its parents, evolves into a duty to respect a growing child's autonomous rights as they approach adulthood:


From my own experience as a parent it's notable how early a child displays independence and free will, demanding to be left alone to perform tasks such as feeding or dressing themselves.

Yet, as noted in the paper:
"There is a widespread consensus that children show less concern than adults about privacy. However, very few empirical studies have demonstrated this."
Is there a conflict between physical demonstrations of free will from an early age and a child's natural demands for privacy from its parents? Or is the above conclusion drawn from insufficient or flawed data? Or maybe we simply don't understand enough about privacy itself to draw conclusions about a child's needs.
"It seems more accurate to argue that privacy is important to children, though their conceptions of privacy and the private differ from those of adults."
One question I'm left with hinges on whether some forms of parental monitoring cause distress or other psychological harm to the child.

For example, a child could understandably become distressed on discovering a parent has read their private diary, hurt by the breach of trust.

Whereas a child has legal protection from most forms of physical harm, bar "reasonable chastisement", even at the hands of a parent, they have little if any legal protection from the emotional harm caused by unwarranted and persistent invasions of privacy, since both EU and US legal thinking groups privacy rights into "family units" of a sort.

In the way that the abused can become abusers themselves, will such persistent breaches of trust create a generation normalised to pervasive surveillance? A generation who won't think twice about imposing a similar regime on others, including their own children, and so on?

And what effect will this have? Will our children be afraid to experiment, through fear of being watched, as they would experiment in "normal", unobserved environments?

Would such inhibited behaviour limit itself to online contexts, or would it shape all aspects of child development, e.g. creative experimentation?

I for one absolutely did not want my parents to read any of my creative writing essays set by my school.  In later life I understand this as wanting to be free to explore ideas and behaviour that my parents didn't particularly subscribe to.  That surely is a healthy part of a child's development.

There is a possibility that none of the above will happen; that children will adapt to find a level of privacy they are comfortable with, one their parents simply cannot invade, e.g. through the use of technology such as strong encryption and other secure storage.
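To illustrate how low that bar now is, here's a minimal sketch using Python's third-party cryptography package (my own illustration, not something from the paper); a tech-savvy child could achieve the equivalent with any number of free tools:

    # A minimal sketch: an encrypted "diary" using the Python
    # 'cryptography' package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # the child keeps this key secret
    diary = Fernet(key)

    # The ciphertext is safe to leave lying around; without the key,
    # a snooping parent learns nothing.
    entry = diary.encrypt(b"Dear diary...")
    print(diary.decrypt(entry))   # only the key holder can read it back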

In fact a child only needs to be one step ahead of their parents' technological capability to evade surveillance, which raises another, different question: how is a parent expected to fulfil their duty of care when a child knows far more about technology than they do?

@JamesFirth

Wednesday, 18 April 2012

Update 18-April

Whilst I'm still struggling to find a way to keep Open Digital staffed, I'm making some progress on the personal front.

I need a job since there's now very little prospect of a salary from Open Digital in the near future and I've invested all my spare cash getting it going.

Lots of people have been in touch with potential job offers; please keep them coming, as it's a matter of finding a tolerant employer happy for me to continue my public advocacy in my spare time if Open Digital is to survive at all.

I've also now started writing a weekly blog at Computerworld UK.

After 2 years SRoC was well read, and, so I discovered, by some influential people. I hope moving my writing there will help get these issues raised with a different audience, and might help the rescue package I'm trying to cobble together for Open Digital. Hopefully more on this later.

That's about all for now.  You can read my first post over at Computerworld:

Why the end of Consumer Focus is a blow to UK tech policy

When a government in the midst of a cuts agenda releases a report entitled Empowering and Protecting Consumers, you just know it will not be good news for under-empowered and unprotected consumers in the cut-throat utilities sector. 
I imagine the report might have started life as a memo: ‘make a case for getting rid of these meddling windbags’.
What many people, even those in the tech sector, perhaps don’t appreciate is the role Consumer Focus plays in batting for sense in UK tech policy. 
... 
Consumer Focus is not Big Government. It’s a safeguard against unavoidable monopolies in the utilities sector, including Internet Service Providers. It fights largely from behind the scenes, pushing back against many of the self-interested demands of tech lobbyists.

>> Read the full post >>



@JamesFirth

Tuesday, 10 April 2012

Thanks and goodbye - the end of a relatively short road

Thanks to all who've helped and supported us over the last year with Open Digital, and over the last 2+ years for me as an independent policy advocate.

Sadly we can't fund year 2 so this project is on hold. Indefinitely.

Whilst it's never over till the hard disks are wiped and the organisation is liquidated, the financial reality is that I can't put any more time into this or SRoC until I can make it pay.

I have mouths to feed and my public CV is here if anyone wants to hire me:
http://www.opendigital.org/consultants/OpenDigital_JamesFirthCV.html

Sorry it's been so brief. Whilst I'm sure there are many out there funding work towards ethical data policy, I have not managed to get any of it to flow in our direction.

We all have to eat; and, sadly, as I have written many times over, policy follows the money.

Bye - for now at least,

@JamesFirth

Monday, 26 March 2012

Help fund us: call for sponsors for our work on standardised personal data licenses

We've had tremendous feedback from our proposal for a set of standardised personal data licenses to provide user clarity on privacy, as well as some constructive criticism that I genuinely welcome as part of the wider debate (I'll blog all of this later - UPDATE, see here).

We want to go ahead and develop the concept, but it will cost more than Julian or I can self-fund. We need to commission two studies to be sure of the user psychology and commercial demand before we even start on the legal side: drafting the terms for each of the licenses and finalising the iconography.

We hope to raise money from a mixture of crowdfunding, commercial sponsorship and selling shares in Open Digital Policy Organisation Limited.



We can't do anything without significant financial support

To put it bluntly, Open Digital is at a crossroads.

If we can't fund this project or find another ethical way of securing funding which doesn't tie us to vested corporate interests, we can't continue as an independent voice promoting the open internet as a platform to benefit both ethical data businesses and the public alike.

It really is that simple. This is the model I tried to build. I grew up with technology, spent my 15-year career building connected technology, and have spent the last 3 years advocating the benefits of open connected platforms.

But the risks can't be overlooked. Technologists can't carry on blind to growing public worries, just as legislators can't carry on regulating the internet as if it's just an extension of TV or radio broadcasting.

We must work to further our understanding of how technology can be used and abused in order to make better-informed policy decisions, and we are at the centre of that debate in the UK.

Since forming Open Digital just 10 months ago we have been involved at so many levels. Our first report (pdf) got a mention on page 2 of the FT; I have met with government ministers, been interviewed on prime-time telly, and spoken at meetings of senior telecom execs, at legal and human rights conferences, at social media policy workshops and in parliament.

We have responded to government consultations and are involved with every aspect of digital policy, from cyber security to child protection, copyright and privacy.

Yet we will have to park all our work within 7 weeks if we can't find a way forward on funding. We won't exist in this form by the end of May unless we can crack the funding challenge. I have ploughed all my time and most of my own money into full-time research in this area for 3 years. (So if you can't help fund us, then... do get in touch if you have a job going!)

We can't continue without significant financial support. It's that simple.  I'm particularly calling for shareholders and/or commercial sponsors who agree with our charter to support our work.



Project plan for the standardised personal data scheme

First we will commission 2 studies to guide the rest of the project:
  • Can a project like this change human behaviour; and if so, what elements are important to achieve maximum impact?
  • Business attitudes to privacy and requirements from standardised licenses. 
We need to produce licenses that businesses will adopt, that users will understand, and that will be effective in driving down unnecessary data capture.

We will then use the outcomes of the two studies to draft the licenses and finalise the iconography. This will include quite a lot of expensive legal advice; however, we would be willing to partner with a suitable chambers or institution should any step forward.



Share sale

Should our scheme be successful, there is opportunity for sustainable commercial returns through growing our consultancy business helping companies who want to adopt our standard.

Potential investors should however note that, as an organisation dedicated to ethical business, we want to avoid accusations of profiteering from a protected monopoly.

We therefore pledge to hold the intellectual property from the standardised privacy policies project in trust, so that any suitably qualified global standards body can take ownership should this become too big for any company, however ethically minded that company sets out to be.

That said, there is plenty of scope to make fair returns.

The largest risk is of course non-adoption, but we hope like-minded investors will recognise the importance of this and our wider work at Open Digital, and will see its worthy contribution to the wider debate as helping to balance the risk-reward equation for investment.


Report sponsors


There is opportunity for 2-4 commercial sponsors to be involved with each of the two feasibility studies.


Any other ideas?


Seriously, just tell us: contact@opendigital.org

Tuesday, 20 March 2012

Trust bubbles: how security, trust and economic prosperity are interlinked

Here's a fascinating area of research I heard about from Virgil Gligor at the CSIT cyber security summit in Belfast last week.  Societies with a high degree of trust have better economic growth; there is a link between trust and economic prosperity.

A 2007 study by Dincer and Uslaner uses US economic data to show a positive link between 'generalized trust' - trust in strangers - and economic growth: a ten percentage point increase in trust yields roughly a 0.5 percentage point increase in the growth rate of GDP.
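That sounds small, but growth rates compound. A back-of-the-envelope sketch in Python (my own illustration, assuming the study's linear estimate holds and extrapolating over 30 years):

    # Back-of-the-envelope: +10 percentage points of generalised trust
    # -> +0.5pp annual GDP growth, per the Dincer & Uslaner estimate.
    base_growth, boosted_growth, years = 0.020, 0.025, 30

    gdp_base = (1 + base_growth) ** years
    gdp_trusting = (1 + boosted_growth) ** years

    # Compounded over 30 years, the more trusting economy ends up
    # roughly 16% larger.
    print(f"{100 * (gdp_trusting / gdp_base - 1):.1f}% larger economy")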

This has some important consequences, especially given the current push to improve cyber security.

I can't stress enough how important it is to understand that 'generalized trust' is about trust in strangers, not trust bonds within community groups, social networks etc.

Online security research tends to focus on the latter. How can Alice establish that Bob is a friend, and can she trust that each communication she receives is actually from Bob and not some interloper?

There's a temptation to develop technology to facilitate trust in two ways. Firstly, to provide certainty in point-to-point connections: Alice knows she's talking to Bob, and vice versa (see the sketch below).

And secondly to build systems which measure and display the trustworthiness of strangers to other strangers. For example the buyer feedback mechanism on eBay and other customer review systems.
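The first of these is well-trodden cryptographic ground. As a minimal sketch (my own illustration, using the Python cryptography package's Ed25519 signatures), Alice can verify a message really came from Bob, provided she already holds his public key:

    # Point-to-point trust via digital signatures (Ed25519),
    # using the Python 'cryptography' package.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    bob_private = Ed25519PrivateKey.generate()
    bob_public = bob_private.public_key()  # shared with Alice in advance

    message = b"It's really me - Bob"
    signature = bob_private.sign(message)

    # Alice's check: raises InvalidSignature if an interloper
    # forged or tampered with the message.
    bob_public.verify(signature, message)

Note what this does and doesn't buy: it authenticates a party Alice already knows; it says nothing about whether a stranger can be trusted.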

But neither of these helps with generalised trust, and generalised trust is important to economic growth.

Whilst a customer review mechanism may seem to help foster trust between strangers, it is still only a specialised application to help established sellers prove their trustworthiness.  It's not a system that builds on generalised trust but a system which predicts specific trust.

A customer review mechanism can also actively exclude innovative new sellers simply because they don't yet have a formal reputation.

With zero generalised trust, everyone assumes any deal with a stranger is guaranteed to fail. Customer review mechanisms can help by allowing established sellers to advertise their trustworthiness.

But in such a society only established players can ever compete. New entrants are effectively barred, having no established trustworthiness and no way to demonstrate it, and the market risks becoming lazy or cartel-like.

There will be fewer drivers for improvement among existing sellers.
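A toy model makes this cold-start problem concrete (the seller names and threshold below are invented for illustration):

    # Toy model: buyers who rely solely on review history
    # never transact with a brand-new seller.
    sellers = {"incumbent": 250, "newcomer": 0}  # positive review counts

    def willing_to_buy(reviews: int, min_reviews: int = 10) -> bool:
        return reviews >= min_reviews

    for name, reviews in sellers.items():
        print(name, willing_to_buy(reviews))
    # incumbent True, newcomer False - and with no sales the newcomer
    # never accumulates reviews, so the exclusion is permanent.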

Now, introduce even a moderate degree of generalised trust. Not all deals with strangers are guaranteed to fail and some buyers will be prepared to take a risk to get a bargain.

There is now a mechanism to establish trustworthiness from scratch and innovative new sellers can enter the market and disrupt lazy incumbents.

The link between generalised trust and economic prosperity poses an interesting conundrum for cyber securocrats and digital policy-makers alike.

It seems obvious. If I knew the email was from my bank - if I could trust the communications path - I would know it wasn't a phishing attack.
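Parts of this exist today. DKIM, for instance, lets a receiving mail server check that a message really was signed by the claimed sending domain. A minimal sketch using the Python dkimpy package (the saved message file is hypothetical):

    # Sketch: checking an email's DKIM signature with the 'dkim'
    # (dkimpy) package; 'bank_email.eml' is a hypothetical raw message.
    import dkim

    with open("bank_email.eml", "rb") as f:
        raw_message = f.read()

    # True only if the signature validates against the sending domain's
    # published DNS key, i.e. the communications path checks out.
    print(dkim.verify(raw_message))

Even then, this only authenticates the sending domain, not the sender's intent.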

But establish an intricate system to broker trust in a digital community and we risk building the online equivalent of a closed community group fearful of strangers.

A group reliant on formalised trust mechanisms may become afraid to innovate or take risks outside of the trust bubble.

Or indeed to take risks which buck the formal guidelines inside the bubble, creating trust outcasts: members stuck inside the bubble who will never again be trusted.

Rehabilitation is impossible inside the trust bubble.  Social mobility is also likely to be hampered.

There is also the possibility of systemic flaws being exploited within the trust bubble. No system is perfect, and there's a risk those who learn to game the trust mechanism will prosper over more trustworthy competitors.

I suggest that a community which becomes highly reliant on formal trust mechanisms will be less prosperous than other more 'chaotic' communities.

There is an alternative. Technology has made certain cyber crimes and other exploits possible, but technology can also evolve to restore trust and reduce risk without relying on entirely new schemes to broker trust.

We all already have a built-in trust sensor honed over generations of evolution. With faster broadband and camera-enabled devices it will soon be possible to have face-to-face conversations with digital retailers, e.g. for higher-value transactions with previously unknown sellers.

Once spoken and facial cues are reintroduced into the system, it is possible to rely once again on human instinct to play a role in deciding who to trust.

Risks of exploitation will fall back to real-life levels, and the internet won't be seen as a threat in itself, just a communications channel.

On one hand this won't help the bottom line of retailers wanting to automate to save money, but economic prosperity on the whole will benefit through the link between generalised trust and economic growth.

With face-to-face contact once again the norm, scams like phishing won't scale, and we will be able to have a degree of trust in online strangers.

Turing's confidence trickster? A question hangs over whether or not computers could ever fool humans, not just into thinking they're interacting with another human (i.e. pass the Turing test), but also so as to elicit a high degree of generalised trust in human subjects.

If so, the balance would swing back in favour of the phishers as online con-tricks would once again scale. One person could use technology to attempt to trick tens of millions of 'marks' at a time - until someone invents software for detecting non-humans!




Monday, 19 March 2012

Belfast 2012: privacy initiatives as an enabler for cyber security

I was lucky to get an invite to the CSIT 2012 cyber security summit.

Credit to the CSIT team in Belfast for creating a unique atmosphere which got the balance absolutely right - not too academic for industry to feel excluded, government participation without the grandstanding or justification sometimes seen, and plenty of opportunity for detailed open discussion amongst global delegates from industry, academia and government.

I attended for three reasons. Firstly, I spent the first 8 years of my software career designing secure communications (I hold a patent from back when I worked at Motorola).

Secondly, I want to advocate against locking-down aspects of the internet in the name of security because I passionately believe open societies are stronger societies, and this applies equally to cyberspace.

And finally I wanted to discuss our work on privacy as an enabler to cyber security.

When each of us acts with a high degree of autonomy, taking responsibility for our own personal data and protecting our own systems out of an inherent desire for privacy, I believe our networks as a whole will be more secure than if we defer to governments as our primary cyber defender.

This was a recurring theme throughout discussions.

There is what was referred to as an asymmetry in human resources faced by those tasked with securing our networks: relatively small numbers of professionals with varying skill levels facing off against legions of online activists and cyber criminals with an impressive skill set and access to most of the same tools (or 'weapons') as governments.

It went unsaid, but one can only assume some nation states also have legions of their own.

In a democracy we should be able to rely on citizens to augment the work of government in securing our networks, but for this to happen we need both mutual trust between citizens and government security agencies, and for citizens to feel motivated and able to help.

Specifically, there are additional reasons why privacy can be an enabler of cyber security.

All sections of society are vulnerable to data loss through their ordinary everyday use of the internet.  Lack of data privacy can become a national security risk when personal data of those in sensitive positions becomes accessible to those with hostile intent, who may use private information to extort or to blackmail.

Whilst traditionally privacy has been seen in some quarters as being at odds with security and policing requirements - e.g. the use of network monitoring tools to spot threats and investigate crime - there's also a counter-argument.

In some cases a focus on intrusive policing and intelligence-gathering techniques can come at the expense of developing more sustainable community-based policing models for cyberspace.

Information technology is still developing apace; therefore capability-based policing - a reliance on powers available only to police and security forces, such as network monitoring and data seizure - will, for a while at least, remain a costly arms race.

Soon after each capability is installed in our networks, either technology evolves, requiring further upgrades, or the bad guys up their game, rendering the capability obsolete - or both.

A worse scenario exists: the capability might be exploited by the bad guys.

Take the Data Retention Directive: an EU law (which, as far as I can establish, the UK lobbied for) compelling ISPs to store information which might be useful to law enforcement for a period of 6 months to 2 years.

All this potentially exploitable data sits around in huge silos at ISPs: a capability that is clearly vulnerable to exploitation, especially given that the security of such data is in the hands of private companies.

Employees have in the past looked up private data from company databases and sold this information to private detectives working for divorce lawyers, journalists and criminals.

It's not a straight choice between privacy and security. It's a balance between privacy as an enabler to creating a more secure information culture and privacy-invasive policing as a tool to detect and prevent cyber crime.

UPDATE: also, one must not overlook the importance of public trust and the benefits of consensual policing. Invasive monitoring can cause mistrust between the public and security services.

Public trust and confidence in the work of the security forces is an even bigger asset than the ability to monitor all electronic communications. A consensual approach makes society stronger and inherently stable.

Tuesday, 13 March 2012

Identonomics part 3: clarity in privacy and the need for a set of standardised personal data licenses

Exploring the economics of online identity

Skip to: [part 1] [part 2] [part 3] [part 4]

In part 1 of this series I propose that the open market in personal data will work ultimately in the interests of advertisers and those buying personal information - not the general public whose data is being used - unless members of the public exert pressure by shunning intrusive data services and selecting private alternatives.

In part 2 I look at some of the reasons regulation in this area may fail to protect consumers whilst risking a negative impact on innovation.

If personal data is the new currency, as suggested by European Commissioner Reding, the consumer market is a classic confusopoly: a competitive market that is unable to function in the interests of consumers due to confused pricing. ‘Buyers’ are unable to choose the ‘cheapest’ alternative and, as a direct consequence, ‘profiteering’ goes unchecked.

With enforcement hampered by the ubiquity and massively distributed nature of data, not to mention the trans-jurisdictional element, many companies are holding on to more and more information about us.

So I'm saying the market, if left unchecked, will fail to protect consumers, and that may lead to a collapse in trust which will harm innovation. As will regulation. We're doomed, right?

Restoring control to the user

Not quite.  One of the identified barriers to consumer choice is confusion, and our proposal for a standardised set of personal data licenses will help to break the confusopoly and temper the personal data land-grab.

Some data corporations today may not see it this way, but it really is in everyone’s interests to restore control to the user, and I'm pleasantly surprised by a number of global tech companies who do understand this.