Monday, 26 March 2012

Help fund us: call for sponsors for our work on standardised personal data licenses

We've had tremendous feedback on our proposal for a set of standardised personal data licenses to provide user clarity on privacy, as well as some constructive criticism that I genuinely welcome as part of the wider debate (I'll blog all of this later - UPDATE, see here).

We want to go ahead and develop the concept, but it will cost more than Julian or I can self-fund. We need to commission two studies to be sure of the user psychology and commercial demand before we even start on the legal side: drafting the terms for each of the licenses and finalising the iconography.

We hope to raise money from a mixture of crowdfunding, commercial sponsorship and selling shares in Open Digital Policy Organisation Limited.



We can't do anything without significant financial support

To put it bluntly, Open Digital is at a crossroads.

If we can't fund this project, or find another ethical way of securing funding which doesn't tie us to vested corporate interests, we can't continue as an independent voice promoting the open internet as a platform to benefit both ethical data businesses and the public alike.

It really is that simple. This is the model I tried to build.  I grew up with technology, spent my 15-year career building connected technology, and have spent the last 3 years advocating the benefits of open connected platforms.

But the risks can't be overlooked. Technologists can't carry on blind to growing public worries, just as legislators can't carry on regulating the internet as if it were just an extension of TV or radio broadcasting.

We must work to further our understanding of how technology can be used and abused in order to make better well-informed policy decisions, and we are at the centre of that debate in the UK.

Since forming Open Digital just 10 months ago we have been involved at so many levels. Our first report (pdf) got a mention on page 2 of the FT; I have met with government ministers, been interviewed on prime time telly, and spoken at meetings of senior telecom execs, legal and human rights conferences, social media policy workshops and in parliament.

We have responded to government consultations and are involved with every aspect of digital policy, from cyber security to child protection, copyright and privacy.

Yet we will have to park all our work within 7 weeks if we can't find a way forward on funding.  We won't exist in this form by the end of May unless we can crack the funding challenge. I have ploughed all my time and most of my own money into full-time research in this area for 3 years. (So if you can't help fund us, then... do let me know if you have a job going!)

We can't continue without significant financial support. It's that simple.  I'm particularly calling for shareholders and/or commercial sponsors who agree with our charter to support our work.



Project plan for the standardised personal data scheme

First we will commission 2 studies to guide the rest of the project:
  • Can a project like this change human behaviour, and if so, what elements are needed to achieve maximum impact?
  • What are business attitudes to privacy, and what do businesses require from standardised licenses?
We need to produce licenses that businesses will adopt, that users will understand, and that will be effective in driving down unnecessary data capture.

We will then use the outcomes of the two studies to draft the licenses and finalise the iconography. This will include quite a lot of expensive legal advice - however, we would be willing to partner with a suitable chambers or institution should any wish to step forward.



Share sale

Should our scheme be successful, there is an opportunity for sustainable commercial returns through growing our consultancy business, helping companies that want to adopt our standard.

Potential investors should however note that, as an organisation dedicated to ethical business, we want to avoid accusations of profiteering from a protected monopoly.

We therefore pledge to hold the intellectual property from the standardised privacy policies project in trust, so that any suitably qualified global standards body can take ownership should the scheme become too big for any one company, however ethically minded that company sets out to be.

That said, there is plenty of scope to make fair returns.

The largest risk is of course non-adoption, but we hope like-minded investors will see the importance of this project and our wider work at Open Digital, and will view that contribution to the wider debate as helping to balance the risk-versus-reward equation for investment.


Report sponsors


There is an opportunity for 2-4 commercial sponsors to be involved with each of the two feasibility studies.


Any other ideas?


Seriously, just tell us: contact@opendigital.org

Tuesday, 20 March 2012

Trust bubbles: how security, trust and economic prosperity are interlinked

Here's a fascinating area of research I heard about from Virgil Gligor at the CSIT cyber security summit in Belfast last week.  Societies with a high degree of trust have better economic growth; there is a link between trust and economic prosperity.

A 2007 study by Dincer and Uslaner uses US economic data to show a positive link between 'generalized trust' - trust in strangers - and economic growth: a 10 percentage point increase in trust yields roughly a 0.5 percentage point increase in GDP growth.
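To put that headline figure in concrete terms, here is a minimal sketch that simply extrapolates it linearly. The linear assumption and the illustrative numbers are mine, not the study's, and the sketch ignores the study's control variables entirely:

```python
# Toy extrapolation of the reported trust-growth relationship.
# Assumption (mine): the effect is linear, i.e. 10 points of trust ~ 0.5 points
# of extra GDP growth. Illustrative only - not a reproduction of the study.

def growth_uplift(trust_increase_points, slope=0.5 / 10):
    """Extra GDP growth (percentage points) for a given rise in generalised
    trust (percentage points), under the naive linear assumption above."""
    return trust_increase_points * slope

for rise in (5, 10, 20):
    print(f"+{rise} points of trust -> roughly +{growth_uplift(rise):.2f} points of GDP growth")
```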

This trust-growth link has some important consequences, especially given the current push to improve cyber security.

I can't stress enough how important it is to understand that 'generalized trust' is about trust in strangers, not trust bonds within community groups, social networks etc.

Typically, online security research tends to focus on the latter. How can Alice establish that Bob is a friend, and can she trust that each communication she receives is actually from Bob and not some interloper?

There's a temptation to develop technology to facilitate trust in two ways. Firstly, to provide certainty in point-to-point connections: Alice knows she's talking to Bob, and vice versa.

And secondly, to build systems which measure and display the trustworthiness of strangers to other strangers - for example, the buyer feedback mechanism on eBay and other customer review systems.
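As a concrete illustration of the first kind - certainty in a point-to-point connection - here is a minimal sketch using a shared secret and a message authentication code. The key, messages and function names are all hypothetical, and a real system would use a vetted protocol such as TLS rather than anything hand-rolled:

```python
import hashlib
import hmac

# Hypothetical pre-shared secret known only to Alice and Bob.
SHARED_KEY = b"alice-and-bob-pre-shared-secret"

def sign(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Produce an authentication tag for the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Constant-time check; True means the sender held the shared key."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg = b"Hi Bob, it's Alice"
tag = sign(msg)
print(verify(msg, tag))                    # True: genuine message accepted
print(verify(b"Hi Bob, send money", tag))  # False: tampered message rejected
```

Note that this only proves possession of the key; it says nothing about whether a stranger deserves trust in the first place.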

But neither of these helps with generalised trust, and generalised trust is important to economic growth.

Whilst a customer review mechanism may seem to help foster trust between strangers, it is still only a specialised application to help established sellers prove their trustworthiness.  It's not a system that builds on generalised trust but a system which predicts specific trust.

A customer review mechanism can also actively exclude innovative new sellers simply because they don't yet have a formal reputation.

With zero generalised trust, people assume any deal with a stranger is guaranteed to fail.  Customer review mechanisms can help by allowing established sellers to advertise their trustworthiness.

But in such a society only established players can ever compete. New entrants are effectively barred by virtue of having no established trustworthiness and no way to build it, and the market risks becoming lazy or cartel-like.

There are fewer drivers for improvement among existing sellers.

Now, introduce even a moderate degree of generalised trust. Not all deals with strangers are guaranteed to fail, and some buyers will be prepared to take a risk to get a bargain.

There is now a mechanism to establish trustworthiness from scratch, and innovative new sellers can enter the market and disrupt lazy incumbents.
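A toy simulation can make this argument concrete. It is a sketch under my own heavily simplified assumptions - two sellers, review counts as the only reputation signal, and generalised trust modelled simply as the chance a buyer takes a risk on a random seller - not a market model:

```python
import random

def simulate(generalised_trust: float, rounds: int = 1000, seed: int = 1) -> dict:
    """Count sales for an incumbent (with a review history) and a brand-new
    entrant, as a crude function of buyers' willingness to risk strangers."""
    random.seed(seed)
    sales = {"incumbent": 0, "new_entrant": 0}
    reviews = {"incumbent": 100, "new_entrant": 0}    # incumbent starts with a record
    for _ in range(rounds):
        if random.random() < generalised_trust:
            seller = random.choice(list(sales))       # buyer takes a chance on anyone
        else:
            seller = max(reviews, key=reviews.get)    # buyer only follows reviews
        sales[seller] += 1
        reviews[seller] += 1                          # each sale adds to the record
    return sales

print(simulate(0.0))   # zero generalised trust: the new entrant never gets a sale
print(simulate(0.2))   # modest generalised trust: the new entrant gains a foothold
```

With zero generalised trust the new entrant is locked out entirely; with even a modest level it starts to build the reputation it needs to compete.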

The link between generalised trust and economic prosperity poses an interesting conundrum for cyber securocrats and digital policy-makers alike.

It seems obvious. If I knew the email was from my bank - if I could trust the communications path - I would know it wasn't a phishing attack.

But establish an intricate system to broker trust in a digital community and we risk building the online equivalent of a closed community group fearful of strangers.

A group reliant on formalised trust mechanisms may become afraid to innovate or take risks outside of the trust bubble.

Or indeed to take risks which buck the formal guidelines inside the bubble, leaving those who do as trust outcasts stuck in the bubble, never to be trusted again.

Rehabilitation is impossible inside the trust bubble.  Social mobility is also likely to be hampered.

There is also the possibility of systemic flaws being exploited within the trust bubble.  No system is perfect, and there's a risk that those who learn to game the trust mechanism will prosper over more trustworthy competitors.

I suggest that a community which becomes highly reliant on formal trust mechanisms will be less prosperous than other more 'chaotic' communities.

There is an alternative. Technology has made certain cyber crimes and other exploits possible, but technology can also evolve to restore trust and reduce risk without relying on entirely new schemes to broker trust.

For example, we all already have a built-in trust sensor honed over generations of evolution.  With faster broadband and camera-enabled devices it will soon be possible to have face-to-face conversations with digital retailers, e.g. for higher-value transactions with previously unknown sellers.

Once spoken and facial cues are reintroduced into the system, human instinct can once again play a role in deciding who to trust.

Risks of exploitation will fall back to real-life levels, and the internet won't be seen as a threat in itself, just a communications channel.

On the one hand this won't help the bottom line of retailers wanting to automate to save money; on the other, economic prosperity as a whole will benefit through the link between generalised trust and economic growth.

With face-to-face contact once again the norm, scams like phishing won't scale and we will be able to have a degree of trust in online strangers.

Turing's confidence trickster? A question hangs over whether or not computers could ever fool humans, not just into thinking they're interacting with another human (i.e. pass the Turing test) but also to elicit a high degree of generalised trust from human subjects.

If so, the balance would swing back in favour of the phishers as online con-tricks would once again scale. One person could use technology to attempt to trick tens of millions of 'marks' at a time - until someone invents software for detecting non-humans!




Monday, 19 March 2012

Belfast 2012: privacy initiatives as an enabler for cyber security

I was lucky to get an invite to the CSIT 2012 cyber security summit.

Credit to the CSIT team in Belfast for creating a unique atmosphere which got the balance absolutely right - not too academic for industry to feel excluded, government participation without the grandstanding or justification sometimes seen, and plenty of opportunity for detailed open discussion amongst global delegates from industry, academia and government.

I attended for three reasons. Firstly, I spent the first 8 years of my software career designing secure communications (including a patent from back when I worked at Motorola).

Secondly, I want to advocate against locking-down aspects of the internet in the name of security because I passionately believe open societies are stronger societies, and this applies equally to cyberspace.

And finally I wanted to discuss our work on privacy as an enabler to cyber security.

When each of us acts with a high degree of autonomy, taking responsibility for our own personal data and protecting our own systems out of an inherent desire for privacy, I believe our networks as a whole will be more secure than if we defer to governments as our primary cyber defender.

This was a recurring theme throughout discussions.

There is what was referred to as an asymmetry in human resources faced by those tasked with securing our networks: relatively small numbers of professionals with varying skill levels facing off against legions of online activists and cyber criminals with an impressive skill set and access to most of the same tools (or 'weapons') as governments.

It went unsaid, but one can only assume some nation states also have their own legions.

In a democracy we should be able to rely on citizens to augment the work of government in securing our networks, but for this to happen we need both mutual trust between citizens and government security agencies, and for citizens to feel motivated and able to help.

Turning specifically to privacy, there are additional reasons why it can be an enabler of cyber security.

All sections of society are vulnerable to data loss through their ordinary everyday use of the internet.  Lack of data privacy can become a national security risk when personal data of those in sensitive positions becomes accessible to those with hostile intent, who may use private information to extort or to blackmail.

Whilst traditionally privacy has been seen in some quarters as being at odds with security and policing requirements - e.g. the use of network monitoring tools to spot threats and investigate crime - there's also a counter-argument.

In some cases a focus on intrusive policing and intelligence-gathering techniques can come at the expense of developing more sustainable community-based policing models for cyberspace.

Information technology is still developing apace; therefore capability-based policing - a reliance on powers available only to police and security forces, such as network monitoring, data seizure etc. - will for a while at least remain a costly arms race.

Soon after each capability is installed in our networks, either technology evolves, requiring further upgrades, or the bad guys up their game, leaving the capability obsolete - or both.

A worse scenario exists: the capability might be exploited by the bad guys.

Take the Data Retention Directive: this EU law (which, as far as I can establish, the UK lobbied for) compels ISPs to store information which might be useful to law enforcement for a period of 6 months to 2 years.

All this potentially exploitable data sits around in huge silos at ISPs - a capability that is itself clearly vulnerable to exploitation, especially given that the security of such data is in the hands of private companies.

Employees have in the past looked up private data from company databases and sold this information to private detectives working for divorce lawyers, journalists and criminals.

It's not a straight choice between privacy and security. It's a balance between privacy as an enabler to creating a more secure information culture and privacy-invasive policing as a tool to detect and prevent cyber crime.

UPDATE: also, one must not overlook the importance of public trust and the benefits of consensual policing. Invasive monitoring can cause mistrust between the public and security services.

Public trust and confidence in the work of the security forces is a bigger asset, even, than the ability to monitor all electronic communications. A consensual approach makes society stronger and inherently stable.

Tuesday, 13 March 2012

Identonomics part 3: clarity in privacy and the need for a set of standardised personal data licenses

Exploring the economics of online identity

Skip to: [part 1] [part 2] [part 3] [part 4]

In part 1 of this series I propose that the open market in personal data will work ultimately in the interests of advertisers and those buying personal information - not the general public whose data is being used - unless members of the public exert pressure by shunning intrusive data services and selecting private alternatives.

In part 2 I look at some of the reasons regulation in this area may fail to protect consumers whilst risking a negative impact on innovation.

If personal data is the new currency, as suggested by European Commissioner Reding, the consumer market is a classic confusopoly: a competitive market that is unable to function in the interests of consumers due to confused pricing.  ‘Buyers’ are unable to choose the ‘cheapest’ alternative and, as a direct consequence, ‘profiteering’ goes unchecked.

With enforcement hampered by the ubiquity and massively distributed nature of data, not to mention the trans-jurisdictional element, many companies are holding on to more and more information about us.

So I'm saying the market, if left unchecked, will fail to protect consumers, and that may lead to a collapse in trust which will impact innovation. As will regulation. We're doomed, right?

Restoring control to the user

Not quite.  One of the identified barriers to consumer choice is confusion, and our proposal for a standardised set of personal data licenses will help to break the confusopoly and temper the personal data land-grab.

Some data corporations today may not see it this way, but it really is in everyone’s interests to restore control to the user, and I'm pleasantly surprised by a number of global tech companies who do understand this.

Wednesday, 7 March 2012

Arbitrary and Orwellian - Amazon Payment Services blacklists Open Digital with no reason given

As we continue to explore ways to fund our policy research I decided to create an Amazon Payments Account.  Among the options we're considering is publishing a book on the privacy issues discussed on this blog and my own blog, sroc.eu.

I was shocked to find this morning, having provided the necessary bank account details, credit card security and telephone/email verification, that we had been arbitrarily refused an account. The reason given:
"It has come to our attention that your desired use of Amazon Payments may be in violation of our Acceptable Use Policy or User Agreement"
What shocks me is that I provided very scant information as to what we planned to do with our Amazon Payments account.

This snap decision highlights the powerful position that a few dominant technology businesses hold over small enterprises, advocacy and research groups alike.

I have asked Amazon for clarification.

Update 13-March:


Amazon have responded to my request for clarification. It seems the refusal was due to their terms and conditions, which prohibit the sale of non-physical goods.  Since our website wasn't selling any physical goods, we were refused an account.

This explanation makes a lot of sense, though I wonder whether it might not be in everyone's interests for Amazon to state the reasons clearly in their letter of refusal.

Without an explanation the decision looks, as per the title of this blog post, arbitrary and Orwellian.

James Firth

Tuesday, 6 March 2012

Just over 2 weeks to ORGCon 2012

Our friends at the Open Rights Group are holding their 2nd conference on 24th March 2012.

You'll get a whole day of discussion, lectures and seminars on a range of topics related to internet policy and digital rights.

Click here to book tickets!

A few tickets are still available at only £12.50 for Open Rights Group members, £26 for non-members.  If you want to meet up with me on the day, follow @Open_Digital or @JamesFirth on twitter.

Friday, 2 March 2012

Crowdsource/request for feedback, review: Hargreaves copyright reform consultation "preserving the public domain"

OK a blog isn't the best tool for this but here goes.

Open Digital will be submitting a response to the UK Government consultation on copyright reform.  I have a lot of views on this subject so have spent the last few months listening to what other groups are fighting for so I can narrow my own submission to the areas not so well covered by other groups.

For various reasons I've chosen to focus on preserving the public domain (works for which the copyright has expired) and to ask for the law to be clarified so that I cannot claim a new copyright when I take a picture of a public domain artwork or digitise a public domain sound recording.

This will bring the law in line with US law as regards photographs of 2-D art works, and I see a strong parallel between a photo of a 2-D artwork and a format-shift of a sound recording.

I would dearly welcome comments, corrections, suggestions and any other feedback on the following. Please use the comments section below, email review@opendigital.org or ping us on twitter @Open_Digital.

Thanks,
James

DRAFT SUBMISSION TO HARGREAVES

Update 16:19 clarified New York (ht @Copyrightgirl) and added paragraph about Wikimedia (private comment via email).

Open Digital calls on the Government to clarify the copyright situation as to faithful reproductions of existing works and in particular protect the public domain by removing any copyright protection for faithful photographic (including digitally scanned) reproductions of 2-dimensional art works and faithful copies of musical works that are in the public domain.

In parallel to the debate around format shifting, online businesses and individuals who make use of digital self-publishing tools are finding online uses for works already in the public domain.

But they face a threat from copyright claims made by the individuals or organisations responsible for photographing or scanning the original works.

UK case law in this area is not clear, so whether or not the Government and Parliament agrees with my suggestion set out below, there is opportunity to clarify the law in this area to provide clear guidance to the public on their rights.

In other jurisdictions, such as New York in the United States, case law is clear (Bridgeman v Corel Corp, 1999) and does not grant copyright protection to photographs of 2-dimensional public domain art works, and I argue UK law should mirror this.

The Wikimedia Foundation has unilaterally declared a global policy (ref) mirroring the legal situation in New York, potentially opening itself up to legal challenges in other jurisdictions.

I argue a strong analogy exists between digitising (format shifting) or re-recording a sound recording and faithful photographic reproduction.

Allowing a copyright to exist in a faithful representation of a public domain work such as a painting, drawing or sound recording can have the effect of substantially delaying the effective release of certain published works into the public domain.  This appears contrary to public domain provisions in copyright law.

Whilst there is one seemingly powerful argument in favour of providing protection for reproductions - that copyright acts as an incentive for those wishing to invest in digitising our cultural heritage - this must be balanced against the potential for organisations to abuse their monopoly of ownership, resulting in fewer works appearing online for public consumption, with works instead appearing behind paywalls or protected in some other way.

Whilst there is no research I'm aware of in this area, anecdotal evidence shows that volunteer documenting and archiving efforts by groups such as the Wikimedia Foundation and Project Gutenberg receive incredible public support.  Individuals seem motivated to work for free to reproduce the public domain in digital format, calling into question whether a financial incentive in the form of a market intervention (copyright) is necessary.

As well as my main argument - that the public domain should be accessible to the public without restriction - providing protection for reproductions via copyright calls into question a fundamental principle of copyright.

Copyright is intended to support the creative industries by providing protection for creators and those who invest in creation.  Creation must include some element of originality, and faithful reproduction by definition minimises originality.

Copyright is not intended to reward skill; it is intended to reward creativity. Even if it were, modern photographic reproduction techniques involve little if any skill, and in some cases the reprographic process can be completely automated.

Allowing a copyright to exist in a scan of a drawing provides a reward for merely pressing a button.

Therefore if incentives are needed to encourage digital archiving activities, copyright is not the right method to provide these incentives.

Allowing copyright to exist in a faithful reproduction also causes public confusion.  It can be difficult even for a skilled person to differentiate between a copy of a copy and a separate copy of the original. 

This is also true for automated digital identification and copyright enforcement techniques.  It is likely to cause false-positive matches for online automated content identification and protection systems.  

Such systems are used by online publishers to detect suspected copyright infringement and are unlikely to make a distinction between a separate copy of the original and a copy of a copy. 

This will cause particular problems for users of services such as YouTube, a service which provides a semi-automated mechanism to alert content owners that a video with a similar soundtrack has been uploaded.

The semi-automated nature of content protection procedures, coupled with the high volume of notifications handled today by large rights holders, creates a real risk that an "authorised" public domain copy could be improperly blocked by the copyright owner of another, almost indistinguishable copy.

In summary I believe allowing a reproduction of a public domain work to have its own separate copyright results in a situation nothing short of a farce, where multiple almost indistinguishable copies coexist, with multiple owners and each with differing protection terms. 

One legislative option which would minimise the impact on existing businesses reliant on layered copyrights in mechanical reproductions, as well as other unforeseen and unintended consequences, is to design legislation such that it only affects public domain works.

For example, the copyright protection term of any faithful reproduction could be limited to the unexpired term in the original, thereby allowing mechanical recording rights to coexist until the original performance enters the public domain.

My primary interest is to ensure public domain works are widely available to access and re-use, as is the intention of limited-term copyright law. 


ENDS