SOP workshop 212: Privacy and Security in an Open/Realtime/Linked Data World

Sixth Annual Meeting of the Internet Governance Forum
27-30 September 2011
United Nations Office in Nairobi, Nairobi, Kenya

September 28, 2011 - 14:30 

***

The following is the output of the real-time captioning taken during the Sixth Meeting of the IGF, in Nairobi, Kenya. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the session, but should not be treated as an authoritative record.

***

>> JOCHAI BEN-AVIE:  Hi, everyone.  I'm the policy director at Access, and I'm moderating this panel, "Privacy and Security in an Open/Realtime/Linked Data World," which you will hopefully learn a little bit more about in this session. 

     I think in this panel we'll explore the use of open/realtime/linked data, and also geolocation data to some extent, as well as the privacy and security risks inherent in that use. 

     I've broken this panel up into two minipanels so I want to give you an overview of the format we'll use so that way you know when to ask questions and to whom. 

     I'll start off with housekeeping, which we're doing right now; then we will get into our first minipanel, about the uses of these data, followed by questions for that first panel; then a second minipanel, on the privacy and security risks and some attempts at regulation that have been made, followed by questions for the second panel; and then general questions to all panelists from both panels.      

       So before we dive into the meat of the panel: critical to maximizing the use of open data generally is the need for net neutrality.  That is not something we'll be talking about much during this panel, but there is a need to create venues to ensure these data aren't falling into a second tier.

     I want to call on one of Access's policy fellows, Keisha Taylor; she is serving as our remote moderator. 

     Do you have a minute now?  Will you say a word or two about a paper that we recently wrote about the importance of net neutrality in the emerging and developing world? 

     If you want a copy, they are up in front. 

     >> KEISHA TAYLOR:  Hi, everybody.  Basically, in that paper we explored the importance of net neutrality for emerging and developing countries, and there are actually quite a lot of reports that have been done which lend support to this.  It looks like a lot of regional and international as well as local initiatives support the whole premise of net neutrality. 

     It is quite important for economics, for humanitarian relief and a whole host of other things.  The paper is here, so please have a look.  Thanks. 

     >> JOCHAI BEN-AVIE:  I'll try to frame this panel.  Recently we've seen a slew of open, realtime and linked data initiatives around the world to help improve governance, government services and government transparency -- and how many times can I use "governance" in one sentence? 

     We'll talk about the incredible uses of open/realtime/linked and geolocation data for crisis mapping and emergency response.  In the last year or two, the U.S., the UK, India and, here in Kenya, the government have launched open data portals.  This combination of open data -- is this mic not working?  Yeah, okay -- has led to a flourishing group of open source mapping projects, health initiatives, and tools that let rural farmers know the price of crops. 

     And I think critical to maximizing the impact of these data for inclusivity and development, and more generally for good -- because certainly these data can also be used to violate users' rights, privacy and security -- is data from the corporate world as well.

     Certainly there is social media and user-generated content, which can be very interesting and give us a snapshot of the situation in a certain way, but the data that is both more accurate and more dangerous from a privacy perspective is what's known as data exhaust. 

     To quote an op-ed from panelist Robert Kirkpatrick, who will be with us remotely, I hope: data exhaust is the personal data corporations collect about what products their customers buy and how they use digital services. 

     Corporations are mining these data to gain a realtime understanding of their customers, identify new markets and make investment decisions.  This is the data that powers business, which the World Economic Forum has described as a new asset class. 

     At MIT, researchers have found evidence that patterns in mobile phone calling can detect flu outbreaks; a team has demonstrated that calling patterns can be used to identify the socioeconomic level of a population, which in turn may be used to infer levels of access to housing, education, healthcare, water and electricity -- and that seems like an awful lot of data for a corporation to know about you, and a pretty major privacy risk to me.  Finally, researchers from Sweden's Karolinska Institute and Columbia University have used calling information to determine the movement of displaced populations after the earthquake in Haiti, which has aided in the distribution of resources.

     As we become more reliant on data to help us understand the communities and the world we live in, failure to address not only problems of disinformation but also the current and impending privacy and security concerns may not only affect the reliability of these data but also harm those who give data knowingly, carelessly or completely unawares.

     The capability to overlay different data sets, with or without geolocation data in the background, has meant that one can identify someone or something with a high degree of accuracy, and this is a challenge when developing and applying privacy regulations.  In other words, how does one apply a test to a data set which by itself may not identify a data subject, but which, when combined with another data set, may enable identification? 
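     (Editor's illustration, not part of the spoken remarks: a minimal Python sketch of the linkage problem just described -- two data sets that are each harmless alone, combined to re-identify a person.  All records and field names are hypothetical.)

    # An "anonymized" health data set: no names, only quasi-identifiers.
    health = [
        {"zip": "00100", "birth_year": 1980, "sex": "F", "diagnosis": "flu"},
        {"zip": "00200", "birth_year": 1975, "sex": "M", "diagnosis": "asthma"},
    ]

    # A public register: names attached to the same quasi-identifiers.
    voters = [
        {"name": "A. Example", "zip": "00100", "birth_year": 1980, "sex": "F"},
        {"name": "B. Example", "zip": "00200", "birth_year": 1975, "sex": "M"},
    ]

    def link(health_rows, voter_rows):
        """Re-identify health records by joining on zip/birth year/sex."""
        for h in health_rows:
            matches = [v for v in voter_rows
                       if (v["zip"], v["birth_year"], v["sex"]) ==
                          (h["zip"], h["birth_year"], h["sex"])]
            if len(matches) == 1:  # a unique match re-identifies the subject
                yield matches[0]["name"], h["diagnosis"]

    for name, diagnosis in link(health, voters):
        print(name, "->", diagnosis)  # neither data set alone reveals this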

     I'll leave you to mull that over for a few minutes.  Let's start hearing from our panelists who have a lot more to say on the subject. 

     Erik Hersman from Ushahidi is an international technology influencer with a keen eye on the impact of the web across Africa, according to his bio, and is doing really incredible innovation work in this region, helping to spread the Ushahidi tool as open source, with adaptations around the world.  I'll stop talking and hand over to you. 

     >> ERIK HERSMAN:  I'm Erik Hersman, founder of Ushahidi.  Also of the iHub here.  We have over 5,000 members who are programmers and designers, which puts us in a very interesting position: we deal with the community of people who build things.  So let me start with Ushahidi.  It's a software platform, free and open source, that is downloadable and has been downloaded over 20,000 times in 132 different countries.  We don't control its use.  That's an important thing to say before I go further: anybody can use it for whatever reason they want.  The main purpose of Ushahidi is to gather information from different sources and channels -- could be SMS, Twitter, e-mail or a web form -- and map it and say: this is what's going on here.  So really we're trying to change the way information flows in the world.

Instead of a top-down information flow like we've had for most of history, we try to make it easier for ordinary people to have a voice and say something. 
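     (Editor's illustration, not part of the spoken remarks: a minimal Python sketch of the core idea just described -- normalizing reports from many channels into geolocated records that can be mapped together.  Field names are hypothetical, not Ushahidi's actual schema.)

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Report:
        channel: str                 # "sms", "twitter", "email", "web"
        source_id: str               # phone number, handle, address
        text: str
        lat: Optional[float] = None  # filled in by a human or a geocoder
        lon: Optional[float] = None

    def ingest_sms(sender: str, body: str) -> Report:
        # An SMS carries no coordinates of its own; location is added
        # later during verification -- and the SMS itself travels open.
        return Report(channel="sms", source_id=sender, text=body)

    reports = [ingest_sms("+254700000000", "Road blocked near the market")]
    mappable = [r for r in reports if r.lat is not None and r.lon is not None]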

     This has been used in Haiti and Japan after the earthquakes, been used in election monitoring all over the world and it's an interesting concept because it's very early, very nascent actually. 

     What we're dealing with is what was referred to earlier as data exhaust -- we are among the enablers of it.  This is social data.  This is crowd information.  What does it mean?  A lot of things, depending on what type of data you layer on top of it. 

     There are huge advantages to it.  We have seen that already: decreases in inefficiencies, by allowing information to come in and for multiple people to see it at one time.  It's hugely valuable in crises, elections and post-disaster scenarios, so there's huge value here.  At the same time, we have to contrast that with the risks.  And because this is a discussion, I'll leave these open and say we don't necessarily have answers to all of them.  But what we see is that no matter how secure we make the Ushahidi platform, we can't secure it completely, because at the end of the day, if someone sends an SMS, that's open.  Depending on what the relationship is between the government and the mobile operator, it might also be easily accessible by the government, for good or bad purposes, so that's something that must be taken into account.

     So there's a real importance of giving people a choice to put their data out there.  At the same time, there is the need to educate on what that means.  I think that is going to be gotten into by panelists here as well.  I won't delve into the open data initiative here because Tim will talk about that more in depth but the advantages to the social data, be it crowd data or open public data are hugely valuable. 

     What do we do with it as individuals?  As organizations?

And as governments?  That is kind of the crux we're at now.  Because who is in control?

We can see through a couple of different examples over the past couple of years that we have corporate interests who can shut things down, whether or not it's legal.  We have governments who can shut things down, whether or not it's legal.  And we have citizens who can bypass all of it as well.  So who is in charge?  That's my question for everybody. 

     >> JOCHAI BEN-AVIE:  Thanks, Erik. 

     Next we'll hear from Tim Davies, the founder and co-director of Practical Participation, a UK-based consultancy that works on engagement and youth participation and probably one other thing that, if I looked at my notes, I could tell you -- I think technology, apparently, is the third thing.  He's doing really incredible work on engaging youth with these uses of open data. 

     I'll stop talking again and let Tim take over. 

     >> TIM DAVIES:  I won't wear my youth hat.

(Laughter)

 I will try to base my reflections on a live case study, working with the international aid -- (unintelligible) -- so I'm here with my freelance consultant hat on, but I've been working closely with aidinfo on a project where we've been working with a particular data set.  I'll work through that. 

     Obviously, we've already heard there's been lots of action on governments opening up their data, with portals being developed across the world, including here in Kenya.  There are many different sorts of data.  Not all data sets are equal.  Data sets are as diverse as information sources, and we need to treat different data sets in different ways.  But let me take this particular case, the International Aid Transparency Initiative, which we usually refer to as IATI.  What is it?

A political process towards getting donor governments to open up data on aid projects, and a technical standard for them to share that information in ways that make it interoperable.  It encourages organizations giving aid, from the World Bank and other donor governments down to NGOs, to share timely information as quickly as they know it on what projects they are planning to run, where they are geolocated, with as much detail as possible; when things are running, to publish details of transactions of money spent in those projects; and to give as much detail as possible on where we are doing development work, including details of the organizations to whom funds are given.  Most of that data is coming from governments.  We're at the early stage of that data being published, but a number of projects are supporting NGOs to publish data on their aid projects, starting to create a whole chain of data to trace money from one government to another to projects on the ground.
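     (Editor's illustration, not part of the spoken remarks: a toy Python sketch of that traceability idea -- following disbursements from a donor down a chain of linked activity records.  The identifiers and field names are simplified stand-ins, not the IATI standard's actual schema.)

    activities = {
        "DONOR-1-001": {"org": "Donor government",
                        "disbursements": [("NGO-77", 100000)]},
        "NGO-77":      {"org": "International NGO",
                        "disbursements": [("CBO-5", 40000)]},
        "CBO-5":       {"org": "Community organisation",
                        "disbursements": []},
    }

    def trace(activity_id, depth=0):
        """Print the chain of funding below one activity."""
        act = activities[activity_id]
        print("  " * depth + f"{activity_id} ({act['org']})")
        for receiver, amount in act["disbursements"]:
            print("  " * (depth + 1) + f"-> {amount} to {receiver}")
            if receiver in activities:    # follow the chain if published
                trace(receiver, depth + 2)

    trace("DONOR-1-001")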

     What's the potential of this?  Why would we want to do open data in this area?  Well, there are many good reasons: improving accountability; leading to better cooperation between agencies and challenging duplication of effort, where two or three agencies are working in the same place without knowing what each other is doing; supporting innovation -- again, people take this data, map it, and see there are areas where we are not delivering projects, areas where there are too many projects, or issues on which we are not seeing the results we should, and they have the data to support them in finding innovations; challenging corruption, by seeing payments and seeing where they don't tally with other information sources; and also providing a framework to gather feedback from citizens.  I know people are working on looking at that in Uganda.

     There's amazing potential here, but also some of the greatest challenges, which we have to explore.  So, I've mentioned transaction information.  We're working on a standard for identifying organizations, so we know a payment has gone from this government or this project to these organizations, so there can be transparency.  But who are those organizations?  Some of them are big entities like the Oxfams, but we come up with some challenges when we try to turn this into linkable data and connect it up with other sources of information. 

       Open data alters the balance of power.  That's at the heart of it.  It changes information flows and alters the balance of power, but that doesn't mean it will necessarily alter that balance in the way those of us pushing for greater efficiency and social justice expect or want.  We have to be really critical in how we think about that.  Michael Gurstein uses a case study of land records in Bangalore being digitized and published, and he says that while access was given to this data, the poor didn't have the effective access they needed.  This was intended to empower the poor and marginalized to have equal access, but because there wasn't enough consideration of who could access the data and had the skills to use it, they in fact found that it was used by -- (unintelligible) -- organizations and individuals to disempower poor citizens and take land.

     We have to be really critical about these impacts, but without stepping back from open data, and ask: What is the responsibility of those publishing open data to think about these issues?  That's something we have been exploring. 

     In essence it involves recognizing that making data accessible is more than just publishing a data set; it involves building skills, it involves continuing to be open in giving people the actual information they need to make sense of that data and being part of the community of people using the data. 

     One of the biggest fears people have about opening up data sets is that the data will be misinterpreted or misused, and that is a challenge.  If we just put it online, people use it in quick ways that don't get at the subtleties of that data.  But by sharing source code and how-to guides, and sharing user stories so we are user-centered in design, we can support people to be more responsible users of that data, which most of our users want to be.

     We don't need to throw out the idea of open data but we need to be aware the power of open data means it's not neutral and open data alone is not enough and we have to focus on those other things, too. 

     >> JOCHAI BEN-AVIE:  Thanks, Tim. 

     Keisha, do we have Robert? 

     >> ROBERT KIRKPATRICK:  Can you hear me? 

     >> JOCHAI BEN-AVIE:  We don't have Robert by video but this is Robert Kirkpatrick, UN Global Pulse and I'll let him tell you -- we do have you.  We'll get you on full screen.  Hold on a second. 

     >> ROBERT KIRKPATRICK:  I can't see you but if you can see me that's 50% of the way there! 

     >> JOCHAI BEN-AVIE:  That's the important part, right?

(Laughter)

 Shall we just go?  Okay.  Go ahead, Robert. 

     >> ROBERT KIRKPATRICK:  I think, like Erik said, we don't have all the answers

(Lost audio)

     >> JOCHAI BEN-AVIE:  We've lost your audio feed.  Hang on one second. 

(Pause.)

     >> JOCHAI BEN-AVIE:  While we're waiting to sort out technical difficulties perhaps we'll open the floor quickly to questions to our first two panelists.  Anyone? 

     Okay, I'll ask a question.  I think that both you, Erik, and Tim have identified this idea that knowledge is power, and the ability of data to shift that balance between users and corporations and governments. 

     And I think back to a conference I was at earlier this year at the Ford Foundation.  At the time I laughed at the tag line, which was "bending the arc of technology towards justice," but actually, the more that I have thought about it over the last few months, the more I think there is something to that. 

     We are all familiar with the book The Net Delusion and this idea that technology is inherently neutral and can be used for beneficial or nefarious purposes depending on how it's used -- and I would say that technology and data are probably interchangeable in that sentence.  I guess I wonder if either of you, or both of you, have thoughts on how we do that?  Obviously every open or linked data project is different, but are there general practices and guidelines for users and for organizations designing data sets like this, to ensure privacy and security and the beneficial use of these data? 

     >> ROBERT KIRKPATRICK:  Our approach is we are going to try to set up labs at the country level to work with community leaders.

     >> JOCHAI BEN-AVIE:  Robert, can you hear us now? 

     >> ROBERT KIRKPATRICK:  Yes. 

     >> JOCHAI BEN-AVIE:  Because of some technical difficulties we missed everything you said.

(Laughter)

     >> ROBERT KIRKPATRICK:  Oh, dear, I'm so sorry. 

     >> JOCHAI BEN-AVIE:  I'm so sorry -- I'll hold the question I just asked; make a note, think about it.  We'll let Robert give his presentation now.  So if you could just give us a few short sentences about what Global Pulse is and then what you guys are doing. 

     >> ROBERT KIRKPATRICK:  Certainly, okay.  So essentially Global Pulse is an initiative to try to explore how to use this new world of realtime data, this ocean of data all around us now, to improve our understanding of whether our policies and programs are working, and of where there are populations at risk of harm, at risk of reversals in global development.  The challenge is that the old ways of doing business and of tracking the effects of, say, price changes through paper-based surveys are no longer agile enough, and we find ourselves again and again in a position where, by the time we know something is happening, it's too far down the road to actually help. 

     What we are looking at is a situation where we see a lot of data out there.  As you noted in the beginning, there is a lot of data on the open web and a lot of data accumulating behind corporate firewalls -- you raised the point that it's pretty scary that corporations hold this information.  Corporations do have the raw data from which this information could be derived, and they have it in increasing volumes.  What we're looking at is the fact that, because it is they who have that information, none of the utility of that information is being made available to shape public policy.  That is the problem we're looking at. 

     We see huge privacy concerns around this information, and our goal is to really make sure we're working in a way that increases our capacity to protect global populations.  If we compromise privacy and safety in order to protect them, we have failed.  So, just briefly looking at these different kinds of data, I think there are significant privacy concerns around both of them.  It's not just the data exhaust that is hidden from view behind firewalls, but also the public information.  I was speaking with a UN colleague a few weeks ago -- she's a child protection expert -- and she said: we are in a situation we've never been in before as we start looking at this data.  We see children around the world on social media, for example on Twitter, talking about being abused in their homes.  We're not even allowed to ask them on surveys whether that's going on, and the question arises: are we allowed to use that information to rescue them if they decided to share it with us and the rest of the world?

There are privacy experts who will say no to that because even though someone chooses to share something publicly, there is still a presumed assumption of use around that information.  I think we are dealing with a very unfamiliar situation here and these kinds of conversations are exactly what we need to start exploring what could be done and what is crossing the line. 

     >> JOCHAI BEN-AVIE:  Thank you. 

     So Robert, did you catch my question before, about guidelines and recommendations for people who would set up open/realtime/linked data projects, to maximize the use of those data for good and build in privacy protections?  Or did you miss that entirely? 

     >> ROBERT KIRKPATRICK:  No.  I did catch it. 

     >> JOCHAI BEN-AVIE:  Okay. 

     >> ROBERT KIRKPATRICK:  Yeah, I think that before you can answer the question of what the right approach is, you have to look at your fundamental model.  There are different conversations out there.  In the op-ed piece you referenced there's mention of the World Economic Forum, for example, and their approach is basically based on an assumption that we're moving in a direction where the underlying mental model about personal data is that my personal data is my personal property, right -- that your data is something you own, and that we need to get to a model where everyone owns, and controls access by third parties to, their data on an individual basis. 

     There are other models out there that say data is a public good.  And you can see both sides of this; I think there's a utility argument to be made here.  If you look at something like Google Flu Trends, people don't tend to be concerned about that, even though it uses aggregate search data, because they can see what people are searching for, it helps with disease outbreaks -- there's a public good.  You go on Amazon to buy a book, and you can see that people who bought this book also bought these three.  That's the same data in aggregate, where there is benefit, and you don't see concern over it. 

     I think the general principle we are taking from the Global Pulse standpoint is that we are not looking for information on the plight of any individual, or looking to help anyone gain such information, but rather: are there people showing signs that they are losing jobs, or are not able to afford healthcare -- the same kind of information we collect through surveys, but faster.  The issue now is how you make sure you are not, as you noted in the beginning, putting out information that could be used maliciously.  As everyone is saying these days, it's much harder methodologically to anonymize data than everyone thinks it is.  The power of inference is significant. 
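     (Editor's illustration, not part of the spoken remarks: a toy k-anonymity check in Python -- one common, and known to be imperfect, way to measure re-identification risk before releasing a data set.  Records are hypothetical.)

    from collections import Counter

    # Each row is a combination of quasi-identifiers (zip, birth year, sex).
    rows = [
        ("00100", 1980, "F"),
        ("00100", 1980, "F"),
        ("00200", 1975, "M"),   # unique combination: k = 1 for this row
    ]

    def k_anonymity(rows):
        """Smallest group size across the quasi-identifier combinations."""
        return min(Counter(rows).values())

    print(k_anonymity(rows))  # 1 -- this release fails even k = 2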

     We really need to get the conversation going about how we minimize risk to individuals while still being able to protect the public good. 

     >> JOCHAI BEN-AVIE:  Thank you. 

     Erik or Tim, do you want to take a swing at the question? 

     >> TIM DAVIES:  There's a really key point there about data on individuals when combined with open public data.  So often when you assess an open data set from a large institution, even with anonymization concerns, the privacy issues don't come to the fore.  Then when you combine what people tweet -- combine open social information -- with government data, we do have challenges that are very tricky to work around.  Some of that has to be about individuals knowing what data is out there, and being able to be critical about what government collects and holds. 

     I think the only other point, in terms of that earlier "bending the arc of technology towards justice" framing, is recognizing that the very structure of data sets is based upon a view of the world that is often top-down rather than bottom-up.  Governments are releasing new data sets, but many of the data sets governments release are things they have historically collected that they perhaps shouldn't have been collecting or shouldn't be holding, and all those data sets have normative biases coded within them, and we need to be able to really be critical about that also. 

     >> ERIK HERSMAN:  Mine would be simple and straightforward: we need to strive for more equality of power within information collection and sharing.  Right now, the people with the most capital have the most ability to parse that data and do something with it. 

     That is governments and big corporates.  So we need to continue to lobby for citizens to own their data, of course, but also for them to have tools that allow them to do something with it as well -- this ability to not be at a disadvantage.  Technology is a beautiful thing.  It allows us to overcome inefficiencies in the system.  What we need to do is make sure those inefficiencies can be overcome by individuals as well as organizations. 

     >> JOCHAI BEN-AVIE:  Thanks.  Do we have any questions from the floor?  Please.  Can you step to a microphone so we can get your question on record, please. 

     >> Hi, I'm Mike Sacks.  From the perspective of a small business: a lot of small businesses can do things for their customers by working together.  Sometimes that means sharing information.  Really big companies don't have to cross that line of sharing information.  So if we have really strict regulations that prevent sharing of certain information, it might give these really big companies a disproportionate competitive advantage, because they have a lot more information available about people.

     >> JOCHAI BEN-AVIE:  Can you give us an example of the kind of situation you are talking about and the kind of data?

     >> A company like Amazon knows everything I bought, and I buy a whole bunch of different things from them.  If I buy camping gear, I might be interested in camping books.  A small bookstore and a small camping equipment store might want to work together, but they would be prohibited by certain regulations from trying to reach out to each other's customers.  Is that a concern we can address through policy? 

     >> JOCHAI BEN-AVIE:  We'll want -- (unintelligible) -- to answer that when we get around to you speaking.  Great.  Do we have any remote questions?  Okay. 

     >> I liked what Robert had to say about data retention and how it can be really useful for judging social policies and things, but what about what's called the ethics of forgetting?  What about rules governing when you are not allowed to keep data, or how long you are allowed to keep it before you have to get rid of it? 

     >> JOCHAI BEN-AVIE:  Anyone on the panel?  Do we have Robert to answer?  Did he want to answer?  Okay.  Panelists, anyone? 

     >> I won't say I have an answer, but the question is whether there is a statute of limitations on data.  We are at this really strange point in history where we've never had this much data, and never before had the ability to do something with it either.  It's all kind of coming to a head at the same time.  And at the same time, you're seeing different organizations pushing the limits of what might or might not be socially acceptable with it. 

     That's a question that, I think unfortunately, is also culturally relative, right.  You'll find that the Germans may not be as open about the collection and use of that data as the Americans are, as opposed to the Kenyans.  Right.  So the answer almost has to come at the country level -- a very relative answer depending upon the cultural context, I think. 

     >> TIM DAVIES:  We push against resistance to accessing data, because we know every boundary you put up drops off the number of people who will go and use it, and some of those drop-offs are big.  But I think there is a key point that just because we have pushed back against resistance and pushed for openness by default doesn't mean friction should be taken out of the system entirely.  In the aid transparency initiative case, the data is open, but we are making sure we engage with the community, talking to people about how they use it -- that's all active engagement.  In other cases it might be right to have a conversation with someone before data is released, and for there to be a presumption that data should be shared where there is a good use for it, rather than a default of organizations hugging and holding onto data.  But I think there has to be a scale of different levels of resistance to accessing data, not a simple open/closed dichotomy. 

     >> JOCHAI BEN-AVIE:  Please. 

     >> I'm from the UK, Eric Joyce.  It's briefly worth mentioning that the chat over the Internet said that if there is evidence of child abuse on Twitter, people are not empowered to do anything.  That is not the case.  People can investigate anything on the Internet, no question about it.  But what I really wanted to ask was about this paradox between open flows and people's demand for privacy.  If it isn't courts that make the decision, then who does make that decision?

     >> JOCHAI BEN-AVIE:  I don't know if that question has an answer from our panelists but if anyone wants to take a shot... I mean.  Okay. 

     >> ROBERT KIRKPATRICK:  Can you hear me okay? 

     >> JOCHAI BEN-AVIE:  Yes. 

     >> ROBERT KIRKPATRICK:  No, it's a fascinating question, and again I think Erik Hersman touched on this at the very beginning.  He mentioned that it's not just about the fact that the data is out there, but about education.  I think the issue is not who is empowered to act, but rather what the assumptions were, on the part of the person who communicated, about how that data would be used.  I think there's a general lack of awareness on people's part about the ways this information could be used.  We certainly have seen a cultural shift around the use of these tools, and if you are 20 today you probably have a very different set of expectations about what kinds of things you are able or willing to communicate publicly; there's a general trend toward talking about everything that is happening in one's life publicly, but with an assumption that no one else is really paying attention.

At least that's how it looks to someone who is not of that generation.  I think there's a lot of information out there that could and should be used for better policy responses and, if appropriate, for rescue, but we have to balance this with an understanding of how that data could be misused, and I think education is a starting point. 

     >> ERIK HERSMAN:  A comment about that question of who makes the rules -- not an answer, just a comment.  A problem is that the people making the rules are not the ones using the technology.  So there's actually a usage gap there that makes it even more difficult to get the right rules for society in general, because not all societies have an inclusive way of getting to an answer.  So it's a very complicated problem.  Is there a solution?  No.  The future is all about contention over information.  That is the big battlefront of our generation.  So it's gonna play out in different ways with different governments and different corporates -- RIM versus the UAE is a great example. 

     >> JOCHAI BEN-AVIE:  We'll move on now to our second minipanel, and, talking about knowledge being power and shifting control, I think Moez Chakchouk is uniquely well placed to speak to that.  He'll speak about the information environment before the Tunisian revolution and what's going on today, and the process of transformation that his agency has undergone. 

     >> MOEZ CHAKCHOUK:  Thank you.  I'm glad to be here again, and I think, as -- the Tunisian Internet Agency was closely linked to the former regime, and after the revolution -- let me explain something.  We try (Speaking off-mic) we know that -- okay.  Sorry. 

     I said that the Tunisian Internet Agency, which I am in charge of today, was closely linked to the former regime, because a lot of things were done during the regime, including censorship, and a lot of equipment was bought by the ATI in order to make all of that possible. 

     So after the revolution, the ATI is committed and determined to transform itself into a neutral and transparent exchange point, because we were the center point of the whole system of -- (Speaking off-mic).  The privacy issue has been raised in different situations before, because we had a constitutional rule that says privacy is fundamental for all people living in Tunisia, and I raise this issue because we know we have an election in a few days, on the 23rd of October, and the assembly will build a new constitution. 

     This article of the former Constitution, which has been abolished, is very important, because I really don't know whether this right was guaranteed or not during the regime; as we can see from all the equipment and procedures regarding privacy and censorship, there is no law that guarantees it on the Internet.

We have talked about different issues, a lot of things related to privacy, but not about Internet content.  I think all those issues have been raised before, but because of the dictatorship we could not tackle them: it was a taboo subject, and we could not have a debate regarding the Internet. 

     For example, our legal framework regarding the Internet is very old, dating from 1997.  That is really very old given the development of social media, the Internet, computing and all those issues, and I think privacy will be a subject that will be discussed, because right now we focus more on censorship and content blocking, an issue that was raised many times during the regime.  But privacy will also be raised again, because the ATI still has all that interception equipment, which helped the government do a lot of things; in this transition period we are trying to use it only with respect to orders from judges and the courts. 

     Those issues need to be clarified in the future.  Of course we have a national authority for the protection of data privacy, or something like that, and of course this authority is supposed to be independent as well, but I raise these issues because we know all those authorities have to be built and established in Tunisia; that is very important for the future of the Internet, as we deal with a lot of opportunities.  We know Tunisia is well positioned today to host cloud computing data centers, and the ATI is committed to partnerships in this field.  We have done a lot during this period of time, and I think the framework needs to be updated in an urgent manner, because we cannot wait long.  The Constitution will be written and all the laws will be updated, so I think all those issues have been raised now, and Tunisia is committed to going further.

     And the ATI -- which was linked to the old regime, I repeat that -- as the exchange point is determined to build a dialogue regarding Internet transparency and neutrality.  The debate is open to all of society, and you are welcome to discuss all these issues with us; if any investigation has to be done in Tunisia, we are open to that.  No worries. 

     Thank you. 

     >> JOCHAI BEN-AVIE:  Thank you.  We'll go to a different region of Africa with Anahi.  I don't know which hat you are wearing today so I'll let you give a line of introduction and then launch into your presentation. 

     >> ANAHI AYALA IACUCCI:  Hello.  I work as Media Innovation Advisor for the Africa region for Internews, an organization that deals with media and development; I basically cover everything related to the application of technology.  What I want to do is use an actual case, a project we're doing right now in Ghana that involves the exchange of information in realtime using specific software and technology, as an example to show the main problems that small organizations on the ground are facing related to security and realtime data. 

     The reason why I want to do that is because I think there is a big difference between what small organizations on the ground are experiencing and what big organizations that deal with security and realtime data are experiencing right now. 

     The project I want to talk about is going on right now in Ghana.  It is related to the trafficking of children and women, and violence against children and women.  Basically a pretty simple idea. 

     The majority of the trafficking and violence is happening in rural areas, but the majority of the people who can respond, act and make policy are in the capital.  The idea is to create a realtime system that can bring information from rural areas and specific villages, in realtime, to the capital and to the NGOs and organizations working in those specific areas, so they can respond immediately when a specific problem comes up. 

     The reason why it's important to have this information in realtime is that if a child is handed over to someone to be brought into the city, the moment the child arrives in the city he is lost.  You cannot trace him anymore, but you can still trace him while he is traveling from the village to the city.  The second reason this is extremely important is that a lot of the violence happening against women and children in rural areas takes place, in a way, inside families.  And that's not because we're talking about a particularly sick society; it's because there are a lot of social reasons for it.  But we need to be able to understand the problems that affect a specific village or a specific community to be able to tackle them.  We need this information soon, and we need to get it fast into the hands of the people who actually design policy.

     So what are the problems that we face with this project? 

     The first problem we face is that we don't want this information to be public, because we can't share information about children.  We can't share their names or the violence they are subjected to.  But on the other side, there is power in this information that lies in making it available: you have to make people understand what problems certain communities are facing, why they have one problem and not another, or why certain solutions are not being designed.  So the first problem is that we have a lot of data coming out of the communities themselves that we want to make available to the people who design policies, not just at the country level but at the international level, and we have a problem in understanding how to make it available without putting the communities themselves at risk. 

     The second issue we are facing is that there is a strong lack of understanding and knowledge among the small NGOs and individuals on the ground who deal with realtime data about what the security threats are.

     I have found myself many times not only in Africa but also in other countries in front of people that are using and exchanging sensitive information and they have no idea what they are doing.  They think if the SIM card is not registered, they are not traceable.  They think if you have a fake Facebook account, people cannot understand who you are. 

     There is really a lack of understanding, and on the other hand, I would say there is also some responsibility on the people who create software to let users understand the issues.  If I am using a specific piece of software -- let's say an encryption program on my computer -- what are the possible risks?  I don't believe there is any technology in the world that is 100% safe.  We need to make this information available and public and educate people: you can use this or that software or approach to try to protect your data, but you need to know that there are also these other possibilities.

     This needs to be done in simple and accessible language.  If you go online and look for information about encryption, you will find some crazy documents, absolutely not understandable from a normal person's point of view. 

     When you talk about small NGOs on the ground, or individuals, you are not talking about people with an IT background.  We need to translate this information into accessible forms.  And the third issue, when we are talking about realtime data and privacy and security -- we've seen this in the Arab Spring revolutions, but I see it in my work a lot -- is that most of the time we think security is going to come from technology.  We put in technology the 100% assurance that my information will be protected, and only on that basis will I share this information. 

     I believe that is really not the case.  I believe that every time we use a certain technology, we need to think about: what am I going to do if this technology is not there?  Then I can really build a successful project.  If we don't do that, we won't be able to create systems that are sustainable, because technology can always fail.  We've seen this in Egypt: when the government shut down the Internet, nobody inside the country was able to use Facebook or Twitter or any other such means to share information. 

     But we need to make sure we understand this and make people aware that it needs to be kept in mind.  For me, one of the main ways is not to forget the power of social networks.  When I say social networks, I don't mean Facebook and Twitter; I mean real people's social networks.  When you talk about sensitive data and the sharing of information, you are also talking about communities and people who already have an existing network between them.  This is where, for me, security lies when I am talking about exchanging sensitive information.

     >> JOCHAI BEN-AVIE:  Okay.  Thank you.  I will add, from the bully pulpit of moderator, that Access, my organization, has produced a Practical Guide to Digital Security, which is available on our website, accessnet.org.  It is geared exactly at people in those situations, who frequently find themselves moving from citizen to activist, or suddenly realizing the security and privacy risks they face, and it gives knowledge of the tools that are available and of how to make more conscious decisions about the tools you choose. 

     That's that. 

     We'll now turn to Sophie, whose last name I can't pronounce.  I don't want to butcher it.  You can introduce yourself.

     >> Thank you.  Sophie -- (unintelligible) -- working for the Council of Europe.  I will bring the European perspective to this discussion. 

      On the regulatory perspective: the Council of Europe is an intergovernmental organization setting standards.  The issue of data protection for us is linked to fundamental human rights, the right to privacy -- and yesterday we had a panel on privacy where this question, which we mentioned earlier today, of the ownership of data was already discussed. 

     First, it is a key aspect, a fundamental right; the problem is more the level of control by the user. 

     As to the standards we set, the main one is a 30-year-old convention, and to come back to what you were saying about frameworks being obsolete: it was drafted in such a way that there is no reference to any specific technology.  Its main principles have to be applied whatever the technology used.  For instance, we focus on the quality of the data and the reason why it is processed.  The processing has to be fair and legitimate, and this applies to any technology.  Nevertheless, we are now trying to address new challenges by modernizing the convention. 

     The convention is a general framework for the protection of personal data, and it applies to the public sector and the private sector.  Really, it's a general formulation of the main principles.  We have tried to go beyond that with specific texts, and I will only mention the latest ones.  One is on profiling -- and we discussed before data exhaust, which for me was a new notion, new terminology, but clearly our recommendation from last year precisely addresses that, trying to make sure that the data which is out there is not going to lead to unfair profiling of individuals.  This applies once again both to the private sector, which is the main sector targeted, but also to the public sector. 

     I thought this particular recommendation was really of interest for the debate.  It phrases the principles which apply to profiling.  You mentioned earlier the sensitive data that can be there, and clearly the principle is that sensitive data cannot be part of profiling except with legal safeguards allowing it. 

     Another thing I would like to bring to your attention, which I think is fully in line with what we are discussing today: last week a recommendation on the new notion of media was adopted.  The main objective of that text is basically to define criteria to enable identifying whether all the safeguards around the media apply to a given content -- a blog or whatever.  It's a set of criteria -- the editorial line, the breadth of dissemination -- and depending on those, you can know whether the media guarantees clearly apply to that content, precisely because with such content there can be a violation of private life. 

     It was considered that, even if the main angle was freedom of expression, the fact that you could have your personal data protection violated has in fact a direct chilling effect on freedom of expression, so that is also addressed. 

     Finally, I'll just mention two draft texts we are working on at the moment, one on social networks and one on search engines.  The angle there is basically freedom of expression balanced with privacy.  We're including privacy guarantees in those texts and underlining their benefit for freedom of expression. 

     I would like to come back, maybe, to this question of who takes the decision as to regulation.  The multistakeholder process of the IGF is a perfect one.  In theory, regulators operate at the national level, but there is big progress: you have national IGFs bringing together ministers of justice and all the ministers involved, so on this particular point, at least for Internet governance, I think things are moving at the national level, and the regulators and legislators are bringing together the ministries involved and also civil society and the private sector.  For sure it's what we're trying to do at the international level, but more and more I think it's also the case at the national level.  That's it. 

     >> JOCHAI BEN-AVIE:  Thank you.  And having commented on the Council of Europe's proposed recommendations on social networks and search engines, they're really incredible documents, and I encourage those in the room and those watching to take a look and follow their development. 

     So thank you, all, to our panelists, and we'll now move more into the sort of free-form questions. 

     Let's start with questions for the second panel. 

     >> I have a question for the honorable delegate of Tunisia.

     >> Can you identify yourself?

     >> I'm Andre (off-mic), from the Higher School of Economics, the Russian technical community.  During the Arab Spring, especially in Tunisia and Egypt, events were heavily driven by social networks -- social networks in the sense you mentioned, meaning Facebook and Twitter, as well as the Internet generally.  So there is an issue with the principle of net neutrality.  Were those principles upheld during these events in Tunisia and in Egypt?  Because there was no neutrality in the political sense: only the protesters were supported through those channels.  As I understood and think about these events, after the Arab Spring, happenings on a smaller scale occurred in Portugal, for example, and in other countries, even in Russia.  So my question is whether, from your personal perspective, this is freedom of assembly or an infringement of global security? 

     >> MOEZ CHAKCHOUK:  I will say in Tunisia Facebook -- (unintelligible) --

     >> A bit louder, please. 

     >> MOEZ CHAKCHOUK:  I'll try.  Okay.  In Tunisia, Facebook was blocked by the ATI, because social media were considered by the regime as harmful with regard to the media and the openness of the regime. 

     But during the revolution, I think the regime didn't realize the problem with blocking Facebook or Twitter, because they were being used by the whole Internet community to share videos and other content from around the country. 

     At that period of time I was in the ministry -- (unintelligible) -- and from a personal point of view, I think the regime didn't realize that if it blocked Facebook or Twitter, it would maybe make people more angry and might not be able to control the revolution; Tunisia was the first country to have its revolution.  When I look at Egypt, the regime of Mubarak did that -- it blocked the whole communication system -- but that was not the solution. 

     So I think we have to deal with social media differently, because having the Internet as open as possible also says to people: this is an open space, and you are free to say what you want.  And after the revolution we experienced something really important in Tunisia: a lot of people communicate on Facebook and share a lot of documents and photos and pictures, and, you know, a lot of people can go to court and say this is -- how do you say it in English -- defamation, yeah.  A lot of defamation cases were brought to the courts, and the ATI was involved in trying to make that work within the courts. 

     I think social media is a big challenge for us, in order to show this country that we don't need to block it, because it's also an opportunity to use and communicate and share a lot of things.  But at the same time we have to deal with privacy, because those social media are hosted by foreign companies, and a lot of data is shared through their equipment.  For example, if there were a library of all the videos shared during the revolution, I think only Facebook would have that -- not the Tunisian government or a Tunisian company or NGO.  Thank you. 

     I hope that I answered your question.

     >> Yes, thank you very much. 

     >> JOCHAI BEN-AVIE:  The gentleman in the blue shirt. 

     >> Hello.  I'm from the German Ministry of the Interior.  First of all, congratulations on setting up this workshop, because it's a very important question, and we see both sides of the coin: one is privacy and the other is open data.  Yeah, there's an interest on both sides, and the problem in creating rules on this is sometimes that we are talking about something we already have in mind -- a special case, something like that. 

     But reality is sometimes very, very different.  For example, if we talk about profiling, we might have the case in mind that someone is sitting there Googling something, and that Google should not be allowed to build a profile from all these search requests. 

     Then we would say we have to regulate it, or it has to be totally clear that Google may not profile you and create a picture of you which is even more detailed than the picture you have of yourself.  But it might also be the case that someone like us is just Googling a person -- I think everyone has already done that.  This is open data; this is all already out there in cyberspace, and I think we all use this kind of information. 

     If you create a rule which says profiling is forbidden or restricted, we could harm that second case.  We might not want to regulate the case where someone is Googling a person and building a profile of them, with the results, in his head -- but we would do so. 

     So that makes it very, very tricky for lawmakers to create common rules on these problems and these issues.  Therefore we Germans think there is sometimes a problem, but we have to be very, very careful that there is no collateral damage from regulating with an ambitious approach aimed at some very bad cases; the damage might sometimes be greater than the benefits. 

     >> JOCHAI BEN-AVIE:  Go ahead.

     >> Social media team -- (unintelligible) --  I just want to ask how we can protect our content and our privacy while social media platforms keep updating their services and wanting more content about us: where we are working, that we should share our location, that we should add phone numbers.  About all these things we should have knowledge: how can we protect our content? 

     My second question is: there are a lot of applications on mobile phones through which I can share my location, so how can we also build knowledge about this?  Whether I am here or traveling somewhere, everyone can know where I am.  The important thing about social media is not joining social media; the important thing is to have knowledge of social media.  We were tracking the Arab Spring and saw what happened in the revolutions.  People think Facebook and Twitter made the revolutions.  No -- I think the knowledge of how to use social media is what is important for us. 

     >> I'm Ronald from Kenya.  My question is basically on -- computing, whereby we'll have the likes of -- other devices being used for communication.  Now, how can privacy and protection be ensured on these, so we don't have the likes of tags getting into our phones?  Thank you. 

     >> I'm going to kind of repeat myself.  I really think the answer to probably both questions is that you need to know what you are doing.  When you are building a Facebook page, you need to understand what you are doing when you put data out there.  It's related to two problems.  One is users who normally don't read and don't understand what they are doing; they just do it because it's cool -- they have an application, they share their location with everybody, and they don't understand what that means.  On the other side, I agree there is -- I will not say a lack of information: you can find information, but sometimes you really have to dig to find it and to understand how your data will be used.  And that's where I think policies can be key, in trying to make this a little bit more clear and open, so that whenever you are building your Facebook page and they ask you your job or anything like this, every time you have something popping up and saying: you know, this information will be used for this, this, and this.

This information will be stored for this many years; this information will be made available to third parties in this and this way. 

     It has to come from both.  It has to come from the people who do regulation and the companies that create these tools, but really it also has to come from us.  When we use these tools, we need to look for information and to understand what exactly we are doing. 

     >> ROBERT KIRKPATRICK:  Can I reply briefly to that as well? 

     >> JOCHAI BEN-AVIE:  Please do. 

     >> ROBERT KIRKPATRICK:  There's an analogy to be drawn with a lot of different products and services that we regulate where there's a risk of harm along with a recognized benefit.  Certainly in the U.S., and in countries in Europe I've seen, if you look at advertisements for pharmaceutical products, they're allowed to talk about the benefits of the product, but they also have to talk about the side effects. 

     I've spent more than 15 years in software development, and I'm as guilty as anyone else of this: not protecting users from themselves against inadvertent use of technology.  I think it is worth considering regulations that would require companies who provide technology tools to alert users, especially first-time users, as they begin accessing features of a piece of technology, to what exactly they are doing.  That's something we don't do enough of.  We require it in the pharmaceutical space; perhaps we should think of it here. 

     >> JOCHAI BEN-AVIE:  Interesting analogy.  As a counterpoint to that, I would argue that, unlike with pharmaceuticals, where users are clearly customers, if you are using a platform like Facebook -- there's this phrase that's been going around Internet governance and digital rights circles: you are not Facebook's customer, you're its product.  So in that sense the risks are a little bit harder to surface in the same way, and I think if we think about Facebook in that way -- I'm not saying don't use Facebook, but think about what information you want Facebook to sell, because it will happen. 

     It's about making conscious decisions about how you share information, which is a theme that's been mentioned by every panelist.  Tim, are you --

     >> TIM DAVIES:  I've found it useful to question what we mean by privacy.  Daniel Solove's taxonomy of privacy outlines that we could be talking about information collection, information processing, information dissemination, or invasion and intrusion, and these involve very different sorts of data, very different sorts of concerns, and different sorts of responses. 

     If data is being collected without our consent and harvested and shared, we need responses that may be legislative and technical.  If data is being abused for decision interference -- insurance companies sucking up open data on crime, or information about where I go on particular days, then using that to charge me a different price for insurance, or even advertisers marketing things in manipulative ways -- that's decision interference we can hope to deal with through legislation banning that use of the data, but not necessarily saying the data shouldn't be there.  Other forms are trickier, because it is already outside the law to use data in particular ways, and we have to educate users to make sure they don't.  But we really have to get into what we mean when we say privacy is under threat: what's the impact, and what sorts of responses are right for that case.  

     >> Thank you very much.  (Speaking off-mic) I get the impression that there's an underlying assumption that people are always identified; that is, that the data we are talking about preserving for the benefit of everybody is always related to an identity. 

     I also think there's a lot of focus on legislation, but what about technology?  I would like to ask you on the panel what you think about using pseudonyms; that is, not identifying people but being able to process data without knowing whom you are actually processing. 

     Also I would like to ask the Council of Europe:  Are you considering this in your future work? 

     >> Yes, thank you for the question.  Indeed, in this text I was mentioning on social networks, one of the recommendations is precisely to enable users to use pseudonyms and anonymity.  This doesn't mean that, when needed for law enforcement purposes, you cannot trace back to the identity of the person.  But clearly one of the recommendations is that users are able, at least in what is public, to have this anonymity and freely express themselves. 

     >> ERIK HERSMAN:  We are strong proponents of pseudonyms: from the platform's side, the ability not to care who somebody is, but still to identify a user for some purpose.  What that means is we need to know which sources are more or less trusted when they are sending information around a crisis or disaster.  We don't care who they are.  We just care that a source of information, whether an e-mail address, a phone number, a Twitter handle, whatever, can be trusted. 

     So this is why I strongly disagreed with Marissa Mayer from Google a few years ago when she was talking about how the need for anonymity was decreasing and how everybody needed to be known, and you see how this is playing out with Google+.  There's a real drag there, and as a society, I guess, everybody will fight back differently; it's relative.  We need to figure out whether that's what we want of the Internet or not.  It's a really important thing.  I'm glad you brought it up. 
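
     A minimal sketch of the kind of pseudonymous trust model Erik describes, under assumptions of my own (a keyed hash to derive stable pseudonyms, and a naive verified-report ratio as the trust measure; this is an illustration, not Ushahidi's actual implementation):

import hmac
import hashlib
from collections import defaultdict

SECRET_KEY = b"server-side-secret"  # hypothetical key, never exposed publicly

def pseudonym(source_id: str) -> str:
    """Derive a stable pseudonym from an e-mail, phone number, or handle.

    The platform can recognize the same source across reports without
    storing or revealing who the person actually is.
    """
    return hmac.new(SECRET_KEY, source_id.encode(), hashlib.sha256).hexdigest()[:16]

# Track, per pseudonym, how many of a source's reports were later verified.
history = defaultdict(lambda: {"verified": 0, "total": 0})

def record_report(source_id: str, verified: bool) -> None:
    entry = history[pseudonym(source_id)]
    entry["total"] += 1
    if verified:
        entry["verified"] += 1

def trust_score(source_id: str) -> float:
    """Fraction of this source's past reports that checked out."""
    entry = history[pseudonym(source_id)]
    return entry["verified"] / entry["total"] if entry["total"] else 0.0

record_report("+254700000001", verified=True)
record_report("+254700000001", verified=False)
print(trust_score("+254700000001"))  # 0.5: a rated source, identity unknown

     The point of the keyed hash is exactly the one Erik makes: the platform never needs to know who is reporting, only that the same source keeps reporting reliably. 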

     >> I want to add something to that, which is that if you are using this kind of data in, for example, a situation like a repressive regime, whenever you allow everybody to be anonymous you are also allowing the government to be anonymous.  A simple example was what happened in Sudan, where activists were using Facebook to coordinate demonstrations during the Arab Spring, and the government was smart: it set up its own Facebook page, called for a demonstration, and arrested and tortured everybody who showed up. 

     So there's always the other side, but it's definitely something that needs to be spoken about more.  You can use it much better, but on the other side you need to understand how other people will use it.  And when you are talking about humanitarian emergencies, where all this data is important, you also need to think about the fact that a lot of organizations will not be allowed to use certain data if it's anonymous.

     For example, journalists.  I don't know any journalist in the world who will write an article based on information that he cannot verify, because he does not know who has put the information out.  The same thing came up, for example, when we were working with the UN during the flood in Pakistan, where there were -- around the same issue. 

     >> Of course anonymity is something that is very important in relation to Facebook too, but you know, in Tunisia now a lot of people are using Facebook and Twitter to spread a lot of wrong information and rumors.  So we have a lot of cases regarding Facebook and Twitter where we have to identify those people in order to pursue legal procedures.  So it's also an issue: technology is required for those issues, but technology without a clear framework and procedures is also a problem in my country today.  Thank you. 

     >> JOCHAI BEN-AVIE:  We're probably clear on how anonymity can be used for good and evil, and I'll add one more example: pseudonymous people loyal to the Syrian regime were spamming the Syria hashtags, so that no actual information from people on the ground and their allies could get out, and Twitter took action there.  But I want to give a really great example of where it works, and that's eBay.  eBay has built this incredible network of people who are willing to buy things --

(Laughter)

 -- from people all over the world whom they have never met and who don't use their real names; they rely instead on the trust rankings of others.  So there are models that allow for trusting a person on the Internet who has anonymity. 

     And if we look at the trust issue, it's one not just for Ushahidi users to think about but also for e-commerce: it's this question of a consistent identity.  This is the same question that advertisers and their platforms have in trying to think through whether we have to require a real name so that people will be the same person over time. 

     And I think eBay has shown, not that eBay is chock-full of advertising, but that you can actually have a consistent, perpetual identity on a platform even though it's pseudonymous. 

     Go! 

     >> I'll raise a question rather than answer one, but something that came up in other work on payment transparency in the UK is the status of the individual: individual citizens versus suppliers of public services, the same person acting in different roles.  We have had the situation where, when people receive payments from the government, payments over 500 pounds ought to be published transparently.  We see companies as entities, but when it's data about money individuals have received, that's commonly considered private.  I think this open data from governments, versus open data we publish ourselves, will create a whole set of new issues: what do we hold as private data and information about us, and when is that information also of public value?  That comes up in how we have that debate.  That is something we really need to dig into. 

     >> JOCHAI BEN-AVIE:  Other questions from the floor?  Do we have remote participants who want to ask anything?  No?  Okay. 

     >> RICHARD ALLAN:  Facebook.  I can't resist coming in on the pseudonymity point, but first I want to say: you did say earlier that Facebook will sell your data, and I don't think you have any evidence for that.  I just need to put that on the record.  Facebook has a business model based on targeted advertising, which does not involve the transfer of any data to the advertisers.  We're very clear about that.  It is possible to have a business model where you sell personal data; we think that would be a very short-lived business model, because no one would trust you, and we have definitely decided that is not ours.  Pseudonymity is a tough one.  We have spent a lot of time on that one.  But as a baseline, there are times when you want to do things in an anonymous way, and lots of services offer you that opportunity, and that's quite right.  But there are other services, like ours, that have real identity at their core.

     We are absolutely clear that people come with that expectation of real identity.  A lot of people want to be pseudonymous themselves while everyone else identifies themselves; they want this unequal relationship.  We require a reciprocal relationship: you must identify yourself correctly.  That is what we have decided for our system.  But it's not necessarily what other people have for their systems; that's up to them.  And I do think an important part is for people to understand the difference between a real-identity service and others, and not to assume every service should necessarily offer pseudonymity, because it breaks down. 

     Facebook works because people have real identities.  Fill it full of Mickey Mouses and it's not Facebook anymore.  Other systems are full of Mickey Mouses, and that's fine; that's up to them.  Just to make that point.

     We understand the cause for anonymity but we resist the idea that someone should dictate how every web service should structure itself.  There's full transparency.  Let people decide whether they want to use that service under those terms and conditions. 

     >> JOCHAI BEN-AVIE:  Richard, you'll be a guest of mine later today so I'll be careful here. 

(Laughter)

     >> I think that's all well and good when it stays on Facebook.  What is your answer for when it follows you after you have logged out of Facebook? 

     >> Not referring to any recent press reports. 

     So again, let's be careful.  Facebook has profiles of individuals that they volunteered under a privacy policy and terms and conditions which we take quite seriously.  We don't profile people on the basis of information that may become accessible to us because of plug-ins that exist on the Web.  And again, let's be clear: most websites these days have multiple plug-ins providing photos, maps, news feeds.  When you have those plug-ins, some data is sent to the plug-in provider, who could use it to create profiles and do tracking.  You have two choices.  One is to go back to Web 1.0 and get rid of all the plug-ins, so there's no risk that data is transferred.  The second option is to push the service providers very hard on what their policies are and what they do: ask them whether they are tracking, ask them what they do with data, when they delete it, how long they hold it for, whether it's held securely.  That is the option we prefer, and people are asking those questions, and we have given answers and will continue to.  Baseline data we acquire through plug-ins we delete after 90 days.  It is used for security purposes, not for building profiles of people.  The only time that data becomes associated with you is when you take an action like clicking a Like button, which is a normal Facebook action.  But we want people to continue to press us on that, to want evidence of it, and for us to have to demonstrate it. 

     That's reasonable, but I would also say: please do it for every other plug-in provider, because if you know anything about technology, you know other people are using plug-ins to build profiles.  So please treat us all fairly. 
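
     As an illustration of what a retention rule like the 90-day deletion described above might look like operationally (a sketch under assumptions of my own; the table and column names are invented, and this is not Facebook's code):

import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # the retention window described above

def purge_plugin_logs(db_path: str) -> int:
    """Delete plug-in impression logs older than the retention window."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(
            "DELETE FROM plugin_impressions WHERE received_at < ?",  # hypothetical table
            (cutoff,),
        )
        return cur.rowcount  # rows purged; loggable evidence that the policy ran

     A scheduled job like this, with its purge counts logged, is one form the demonstrable evidence the speaker invites users to press providers for could take. 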

     >> JOCHAI BEN-AVIE:  Thank you, Richard, for that clarification, and I apologize if my comment overreached, but I do think the point stands about making conscious decisions about the information you share on the Internet, whether on Facebook or another platform, and I think you would agree there. 

     We're just about out of time but if there are one or two last questions...

     >> I can add a little bit to my first question, then.  Now I know that you all know what pseudonyms are and --

(Laughter)

 -- in general, you can say that we have a group of technologies called privacy-enhancing technologies, of which pseudonyms are one solution.  But how do we put this concept of privacy-enhancing technologies on the political agenda, so that we get legislation that takes into account these new technologies and new methodologies, pseudonymity and so on?  How do we put it on the political agenda?  Any ideas?

     >> It is clear we are calling for privacy by design and privacy-enhancing technologies; we are then asking the Member States themselves to translate that at a national level.  That is maybe where it's more difficult to respond. 

     If I may just come back to what was said before: this draft recommendation on social networks that we drafted, we indeed circulated to private stakeholders, and we had a meeting in Brussels.  The Council of Europe is in Strasbourg, not Brussels; it has 47 Member States and is human rights-based. 

     We were very happy to have feedback from Facebook on this draft; indeed, the point you just mentioned about pseudonyms was made very clear, and another point where there were divergences was privacy-friendly default settings.  That's something we are trying to push for. 

     There again, I think you have a model which does not necessarily allow for that, but still, as our text is for social networks in general, not Facebook specifically, we are keeping this line. 

     >> JOCHAI BEN-AVIE:  I think that's all we probably have for today.  Oh, there is someone?  I'm sorry.  Go for it! 

     >> I apologize for directing this question not so much at the panel as at our friend from Facebook.

(Laughter)

 No, no, I just... a new feature was just released, a subscription option where you are now able to subscribe to a person's news feed without them actually having to accept a friend request from you.  That seems like a kind of privacy violation: you no longer have control over that data.  It seemed that the model was that you control the network that is listening to you.  Is that right? 

     >> To clarify the new subscriptions feature: you have to choose to turn it on, and then the subscription is just to your public updates. 

     So for a normal Facebook user it makes no difference, but a Facebook user who comes to a conference and doesn't want to "friend" everybody in this room, yet may want to share information with them, can say "subscribe to me" instead of "friend me." 

     We see it as actually privacy enhancing in terms of giving you more granularity.

     >> Thank you, that's an excellent response.

(Laughter)

     >> Last question.

     >> JOCHAI BEN-AVIE:  If it's very quick.  We're out of time.

     >> Very quick, very quick.  To the content of the question, to the honorable Facebook representative: what do you think we should do with information which is published on the personal page of one user and which could infringe on the interests of another user? 

     >> Our Terms of Service, which we call our Statement of Rights and Responsibilities to make clear to users that they have responsibilities as well, say that you should not publish content that infringes on the rights of others.  If people report that, we can take the content down, warn the offending user, or take them off the service altogether. 

     We have a robust reporting structure precisely for those kinds of circumstances, and I think we have been very innovative in the things we have put in place to help people resolve those disputes. 

     >> Thank you. 

     >> JOCHAI BEN-AVIE:  Thank you.  Thank you, Richard, for some of those answers.  Thank you, everyone; thank you to our panelists, to Robert for appearing remotely, to our remote moderators, to everyone at home, and to everyone for coming.

(Applause)

(Session concluded)


* * *