
British Committee Hearing on Fake News | CSPAN | February 8, 2018, 9:03am-12:01pm EST

9:03 am
especially when you get battlefield nuclear weapons, everything starts to get crazy and the war plans never survive first contact. you worry about radical actors taking advantage in crisis situations to grab a newly deployed nuclear weapon and run off with it. that's the one scenario i worry about with nonstate actors and nuclear weapons. we'll leave the last minute of this conversation to go live now as the british house of commons is holding a hearing here in washington, d.c. focused on so-called fake news, scheduled to last until about 3:00 eastern. >> and we're all thrilled to be here and absolutely delighted with the support and welcome we've received from george washington university and grateful for their efforts in helping us put on this -- these hearings here. this is the first time the select committee has taken live evidence in this way outside of the united kingdom, so i'm grateful for everyone's work
9:04 am
who's made the logistics behind that possible. i'd like to thank the witnesses from the different companies we'll speak to for their evidence too. just one small piece of housekeeping from the start. for people who are unfamiliar with the work of house of commons select committees and maybe more familiar with the work of congressional committees, it's understood that anyone taking part in a proceeding of parliament like this answers questions honestly and truthfully to the committee, and it's an offense to mislead parliament. i'm not saying our witnesses would do that, but we don't require them to swear an oath because under our rules it's implied that they're telling the truth. we also -- we have a number of sessions to get through today and i'm conscious that we need to do that in good time and order. i'd like to ask the members of the committee to be clear in directing their questions to either individual members of the panel, and be clear what they're asking for as well. and just to say to the witnesses too, because of the time constraints i would be grateful if witnesses would answer the questions put to them and avoid
9:05 am
as far as possible going into general statements of policy, and if a question is directed to you personally that you be the person that answers it. if we can do that, hopefully we'll keep to some sort of time. i would like to start the questioning and, mr. gingras, to ask you, do you regard, and does google regard, campaigns of disinformation or fake news as being harmful to your customers? >> first of all, thank you very much for the opportunity to be here and present evidence. i would think there's no question that the issue of misinformation and untoward influence on our societies is indeed an issue, and certainly an issue for google. i should point out that, as you know, our mission is to organize the world's information and make it accessible and make it useful. billions of users take advantage of our services every day and rely on us to do the right thing, rely on us to provide the
9:06 am
right information. so this is not only important to us as a company and important to our users, important to society, it's actually crucial to what we do. i often think of ourselves as being in the trust business. we will continue to retain the loyal usage of people around the world to the extent that we retain their trust with our work every day. so i agree it's an important issue, a significant issue, and we'll continue our efforts to make sure that throughout what we do on google search or google news we're surfacing the best information from the best possible sources, and to make sure that untrustworthy information does not surface. >> we certainly agree it's important and i'm glad you think it's important too, but i asked whether you thought it was harmful, whether disinformation can be harmful to society and to your customers who may be recipients of it. >> i think of this in a broad range of areas, not just in news
9:07 am
and politics. as i noted, people come to google every day, for instance, to look for medical information. you know, it's not hard in the medical area if you're, you know, concerned about health and you might ask, do peach pits cure cancer? you can go to google and might find information that would suggest that. even across our surfaces, as we look at content, the question is how do we make sure people have the right information, information that is helpful and not harmful to them. >> do you believe that, as you say, if disinformation is harmful to society and to individuals, what responsibility do you think companies like yours, like the other tech companies we'll hear evidence from today, do you think you have a responsibility to protect your customers from being unwittingly exposed to harmful disinformation? >> we feel an extraordinary sense of responsibility. as i was noting, the loyalty of our users is based on their continued trust in us. to the extent that
9:08 am
they don't trust us, they'll stop using our products and our business will collapse. so from that perspective alone it's crucial to us. but it goes beyond that. we believe strongly in having an effective democracy. we believe strongly in supporting free expression and supporting a sustainable, high-quality journalism ecosystem, to make sure that quality information is out there, to make sure that our citizens, not just our customers, have the knowledge and information they need to be good citizens. >> juniper, there's been research done looking at the role that the up next feature on youtube plays in supplying fake news, fake information, to users through recommendation. what steps are you taking to address that problem? >> yes, thank you, and thank you to the committee. good morning everyone. i'm juniper downs and i'm global director of public policy for youtube.
9:09 am
our engine was designed to give users the kinds of content they want to see. it works quite well for the main use cases of youtube: educational content, comedy, music, providing users with more of the kind of content they want to see. news content makes up a fairly small percentage of youtube's watch time, less than 2%. we realize there's work to do on our engine in terms of making sure we're surfacing the right news content to our users. we've been investing a lot of resources in making sure we surface authoritative news sources and demote low-quality content. as the press coverage has shown, we still have work to do; there's progress to be made, and we recognize that and take responsibility for it. >> youtube has supplied the senate intelligence committee with information related to 18 youtube channels that were linked to or backed by the internet research agency in saint petersburg, which accounts for, i
9:10 am
think, 1100 videos and 43 hours of content. could you just confirm, was the identification of those channels based on analysis conducted by the intelligence committee itself based on information it had received, or is that youtube's own research looking for channels like this? >> so the security and integrity of our products is core to our business model. we cooperated with the congressional investigation into whether there was any interference in the u.s. election. and the channels that we discovered on youtube that were connected to the internet research agency were due to a thorough investigation that we conducted using our own resources. we have publicly reported that information to congress, to the intelligence committee, where we testified back in the fall. >> the identification of those accounts was based on your own research, not information you were given? >> our research and our investigation included leads and intelligence provided to us by
9:11 am
other companies. >> but what did you do yourself that wasn't based on leads but just based on, you know, your ability to analyze how people are using the platform? >> correct. so we looked at the leads we were provided and went far beyond that in looking at any advertisements that had a connection to russia. we looked also at organic content, to see if there were channels on youtube that were connected to the internet research agency that weren't purchasing advertising but were simply uploading content. >> you may be aware that the committee's written to twitter and facebook asking for their analysis of whether russian-backed agencies were involved in the elections in the uk. would youtube be willing to conduct that research for us? hopefully we'll get an update from twitter, but with youtube, would you be willing to conduct
9:12 am
the same investigation like you did for the united states? >> absolutely. we're happy to cooperate with the uk government's investigation into whether there was any interference in elections in the uk. we have conducted a thorough investigation around the brexit referendum and found no evidence of interference. again, we looked at all advertisements with any connection to russia and we found no evidence of our services being used to interfere in the brexit referendum, and we're happy to cooperate with any further efforts. >> that's the electoral commission's investigation. i'm asking about this committee's investigation, and we're not just looking necessarily for paid-for advertising linked to the election, but actually the operation of channels or uploading of films which can be linked back to russian agencies and had a political purpose or message during the referendum. would you be able to do that for us? >> we're happy to cooperate with the investigation. >> so that's a yes. >> yes. >> thank you. if i could bring in jo stevens.
9:13 am
>> thank you, chair. i would like to ask about the search function on google. you may be aware of a british journalist whom we met, along with some members of your company, earlier this week. and she was talking about the autofill search, or auto-finish search. so she conducted a simple exercise: when she typed 'jew' into the search function, it would auto-finish the rest of the search for her, and all of it came up with far-right, extremist, anti-semitic searches. now, i'm interested to know, why did it take a uk journalist doing such a simple exercise, why was she able to identify this issue? why hadn't google identified it previously? >> i thank you for that question. these occurrences happen from time to time; many we catch ourselves, others we do not. the autocomplete feature on google search is an important service to our users. you come to search, you begin to enter a query and we offer
9:14 am
suggestions based on what other users have searched for. it's a dynamic feature. everything we work on, in terms of the corpus of expression of the web or the corpus of queries that people ask, is live and changes every day, and there are, indeed, also malicious actors out there who will seek to game this as well. so as we have constructed these, we continue to build defenses to this and also to build mechanisms such that users inside or outside the company have the agency to identify these things and mention them, bring them to our attention, so that we can quickly correct them. and that's what we look to do. clearly those kinds of terms that you might see in those are offensive, offensive to all of us, egregiously offensive, and we'll look to take care of them. it goes back to the trust of our users.
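A brief illustration of the kind of autocomplete defense described above: candidates are ranked by how often other users have searched them, then screened before serving. This is a minimal sketch, not Google's actual system; the patterns, query log, and function names are invented.

```python
import re

# Hypothetical denylist of offensive suggestion patterns. A production
# system would rely on trained classifiers and curated policy lists,
# not a handful of regexes.
BLOCKED_PATTERNS = [
    re.compile(r"\bare (evil|criminals?)\b"),
    re.compile(r"\bshould be banned\b"),
]

def autocomplete(prefix, query_log, max_suggestions=5):
    """Rank candidate completions by how often other users searched
    them, then drop any candidate matching a blocked pattern."""
    counts = {}
    for query in query_log:
        if query.startswith(prefix) and query != prefix:
            counts[query] = counts.get(query, 0) + 1
    ranked = sorted(counts, key=counts.get, reverse=True)
    safe = [q for q in ranked
            if not any(p.search(q) for p in BLOCKED_PATTERNS)]
    return safe[:max_suggestions]

# Toy query log standing in for the live corpus of user searches.
log = ["weather today", "weather tomorrow",
       "weather today london", "weather today london"]
print(autocomplete("weather", log))
# ['weather today london', 'weather today', 'weather tomorrow']
```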
9:15 am
>> and your algorithms, so they're developed, and the data that the algorithms gather, it's a cyclical thing, isn't it? >> yes. >> so that determines what phrases are picked up. what safeguards are you putting in place? can you give us some examples, to prevent that cyclical enhancement of hate content if people are searching using those terms? >> well, it's the continuous advance of our own systems in terms of what kind of terms to look for, what kinds of strings of words to look for and make sure that they don't occur. it's also maintaining constant efforts on our part to evaluate the results we serve. we have a large team of what we call raters, 10,000 people around the globe who are constantly at work assessing our results against different queries. are we surfacing the right kind of results? are we surfacing results that have appropriate authority? and they use those hundreds of thousands of bits of data from those evaluations to
9:16 am
continue to train our systems. this is an ongoing effort. you know, as much as i would like to believe that our algorithms can be perfect, i don't expect they ever will be, simply because of the dynamics of the ecosystem upon which we work. but we will continue to strive to make sure that situations like that don't happen. >> do you have an ethics policy that your developers work within, or a framework? or would developers' inherent biases, because we all have inherent biases, influence the algorithms that they build? >> absolutely. that's crucial. there are ethics policies that guide our work. for those raters specifically, we have a 160-page set of rater guidelines, which is public information. it's available for anyone here to look at, and it guides their assessments.
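A sketch of how rater judgments like these might be folded back into training, as Gingras describes: per-rater scores are averaged into one label per query-result pair. The data, scale, and field names are invented; the real pipeline is far more elaborate.

```python
from collections import defaultdict
from statistics import mean

# Invented rater judgments: (query, result, quality score in [0, 1]),
# each scored against published rater guidelines.
ratings = [
    ("stock news", "established-paper.example", 0.9),
    ("stock news", "established-paper.example", 0.8),
    ("stock news", "anonymous-blog.example", 0.2),
]

def aggregate_labels(ratings):
    """Average per-rater scores into one training label per
    (query, result) pair, the raw material for tuning a ranker."""
    by_pair = defaultdict(list)
    for query, result, score in ratings:
        by_pair[(query, result)].append(score)
    return {pair: mean(scores) for pair, scores in by_pair.items()}

for (query, result), label in aggregate_labels(ratings).items():
    print(f"{query!r} -> {result}: label {label:.2f}")
```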
9:17 am
so we work on top of policy, we constantly evolve that policy, we constantly change -- i should say train -- those who have to apply those policies, to make sure we're doing the right thing. >> can i just clarify so i'm clear. your raters, as you call them, are they developers? >> they're not necessarily developers, they're -- >> so your developers that build the algorithms, that develop the algorithms, do they have an ethics policy? >> yes, internally we have ethics policies. for instance, we have an internal policy called the honest results policy, which prevents our engineering teams and people like me from trying to influence the algorithms in untoward ways or allowing third parties to influence our algorithms in inappropriate ways. >> thank you. >> i was very struck by your statement that you're in the trust business. trust is really based on knowledge. do you believe your customers know what information you retain about them and how you use it? >> i would hope so. we have made great efforts to
9:18 am
provide transparency and control to our users about the information that we collect as they use our services. and we collect that information to make the services better for them. they can come to a control panel, and millions, hundreds of millions of people have, to look at what information we're collecting and, for that matter, to change the settings. they can tell us not to collect certain kinds of information. that's always there for them. it's hugely important that we maintain a sense of dialogue with our users about the information that we do collect from them to provide better services to them. i will also point out that we never share that information with third parties; never have, never will. it's crucially important that we protect that. again, it's part of the trust relationship that's important that we maintain with our users. >> so you will give me all of the information that you hold about me if i ask for it? >> indeed, you can come to google and look at the information that we have about you in our different services.
9:19 am
>> and will you tell me how you use that information? i mean, for example, i'm a politician. would you tell me how you could -- i could use that information for political purposes? >> well, we don't use any information for political services -- purposes, but we do try to be as transparent as we can about how we use that information. i'll give you an example. google search, google search is not personalized, except that we will tune the results based on where you are, geography; if you're looking for a restaurant in london then we'll surface restaurants in london. but otherwise we think it's important not to personalize search results. we expect that what you want to see and find is the information that's out there in the corpus of expression, without us trying to guide you in one way or another based upon what we think you might be interested in. >> do you market your
9:20 am
capabilities to politicians? >> we market various services, for instance our ads, to various folks who want to advertise using our products. and to some degree that does use data to help target information. >> do you have a specific team who market advertising to politicians? >> i'm not that familiar with our sales organization selling ads to know how we approach these things. we have advertising sales teams in countries around the world. i imagine some focus on different categories, but it's not my area of expertise as vice president of news. >> is it within your area of expertise, ms. downs? >> i want to reiterate some of what richard said about how we use the information we collect from users. the main principles of privacy at google are transparency and control. for youtube, for example, we do collect the watch history of signed-in users. but the user can at any time
9:21 am
pause or delete that watch history. the way we use that information is to improve the service for users. so earlier we were talking about recommendations: if we know that someone is a lover of comedy or a particular kind of music, we can use that watch history to optimize the service. we never sell the information to advertisers. we do provide aggregate data to advertisers to help them optimize their campaigns, so in terms of the work our sales team does with various constituents who are interested in using google's ad products, that's the kind of information that would be provided, never information about individual users. >> thank you.
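To illustrate the aggregate-only reporting Downs describes, here is a minimal Python sketch: advertisers see segment-level audience counts, never user identifiers, and segments below a minimum size are suppressed. The events, threshold, and names are invented.

```python
# Invented watch events: (user_id, content_category). The report below
# exposes only category-level counts, never the user ids themselves.
events = [(1, "comedy"), (2, "comedy"), (3, "comedy"),
          (1, "music"), (4, "music"), (5, "knitting")]

MIN_AUDIENCE = 3  # suppress segments too small to aggregate safely

def advertiser_report(events):
    """Count distinct viewers per category, dropping any segment below
    the minimum-audience threshold so no individual is identifiable."""
    report = {}
    for category in {c for _, c in events}:
        viewers = {u for u, c in events if c == category}
        if len(viewers) >= MIN_AUDIENCE:
            report[category] = len(viewers)
    return report

print(advertiser_report(events))
# {'comedy': 3} -- music and knitting are suppressed as too small
```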
9:22 am
>> just on that point, juniper downs, if you can provide such precise aggregate data, why do you find it so difficult to identify bad content on the platform? why do you still have problems of fake news being featured in up next? >> identifying and managing content on youtube is the number one priority for us. it's mission critical for the
business. it's critical to our users, to our creators, to our advertisers and to us as a company. so we invest tremendous resources, both in terms of technology and the people working on these issues, and our executive team is absolutely engaged. we meet for hours every week to figure out how we can improve our systems, to make sure that the policies on youtube are followed, that we are quickly identifying content that violates those policies and removing it. so this is a top priority for the business. >> what sort of revenue do you reinvest in this way? >> i don't have an exact percentage, but we have spent tens of millions of dollars fighting spam and abuse across our products, and we've committed to having 10,000 people across google by the end of this year who are working on these issues. >> tens of millions of dollars. >> tens of millions. >> what's the revenue for youtube? >> i'm sorry, but i don't know the answer to that question. >> just roughly, you must have a
9:23 am
rough idea. >> i'm sorry, i don't know the answer to that question, but i just want to reiterate that we will invest the necessary resources to address these issues, not only in terms of the people that we employ, but the technology that we develop. we are continually staffing up and we have seen good progress in our management of these issues. for example, over the past eight months, the work that we've done on violent extremism i think speaks for itself, in that we've not only removed over 100,000 videos, but the speed with which we've been able to identify this content and remove it from the site has gotten faster and faster, where we're now at the point of 70% of that content being removed within eight hours of upload and 50% within two hours of upload. that's the kind of progress that we demand of ourselves and we will continue to drive forward across all of these issues as we move forward. >> we'll see if we can work out during the course of the morning what that percentage is. my suspicion is it might be quite
9:24 am
small. but i think many people would say that you sell a service, that's how you make your money, selling a service to micro-target people, and that the same tools must be -- must be capable of being used to identify harmful content. of the content you do take down, though, how much of that is -- what proportion of that is user referral of bad content and how much is content you discover by yourselves? >> so the technology that we've developed to identify content that may violate our policies is doing more and more of the heavy lifting for us. so in the area of violent extremism, we're now at the point where 98% of the content we have removed for violent extremism has been identified by our algorithms. that number varies from policy to policy depending on how effective and precise the technology is on particular issues. when it comes to misleading and deceptive content, our spam systems are quite effective at
9:25 am
identifying the behavioral problems with mass-produced misinformation that's being distributed through our systems. we removed over 130,000 videos for violating our spam policy. those were virtually all identified by our technology. >> if you had evidence of someone who is being heavily exposed to or distributing violent extremist content, or maybe, you know, other forms of extreme content that might be a danger to children, what sort of responsibility do you think you might have to share that knowledge with law enforcement agencies, people that might actually want to take a more active interest in the people who are doing this, or that person? >> so we cooperate and meet regularly with law enforcement to share information. when it comes to disclosure of user account data, obviously we do that pursuant to valid legal process. there are mechanisms in place where law enforcement can request information from us pursuant to their investigations. there are also emergency disclosure provisions where, if
9:26 am
we identify content on our services that poses an immediate risk to life, we disclose that information proactively. when it comes to the safety of children, we obviously report to the national center for missing and exploited children any solicitation of minors or content that's exploitative in that way. they then cooperate with law enforcement internationally to make sure those instances are followed up on. >> i appreciate you cooperate with authorities upon request. but someone might say, if they had a family member that was a victim of a violent act, and actually, from the person's interests on youtube, the things they take an interest in, maybe channels that are set up, it was quite clear this was potentially someone who's quite violent and dangerous -- people might say, haven't you got a responsibility to notify the authorities about people that give cause for concern, not just wait until after a crime's been committed? >> certainly, if we identify someone on our service who is
9:27 am
posing those kinds of threats to an individual, and we do see these instances where someone is perpetually trying to commit a violent act, we would disclose that. >> thank you. rebecca pow. >> i wanted to look really at the ethics of google, and look at your up next button and actually your scoring system that you have for videos. would it be right to say that usually the videos or the content that has the highest score is something that's had the most views? so the quantity, would you say, is what counts? so hits are important, juniper downs? >> thank you for the question. so the recommendation engine is quite complex and it varies depending on the video that's being watched. one of the factors is content that is popular on youtube. there's also content that is associated, so if i have a real niche interest, i'm interested in a particular kind of
9:28 am
knitting, there may not be a lot of highly popular videos that are about that type of knitting. but we will continue to recommend videos that are similar and that provide more instruction on that type of knitting. >> but the popular -- >> popularity. >> popularity usually means lots of hits, doesn't it? and i just wanted to mention some research that was done recently by the guardian that suggested that the largest source of traffic for false news channels came from youtube, and came from these videos that tend to be recommended by the up next button, and more research by oxford academics demonstrated a lot of this had a far-right bias. so i put it to you that if that is the case, you are indoctrinating society. >> so i'm not here to comment on the particular methodology of the guardian research, but i will say, when it comes to recommendations, we recognize that we have more work to do. recommendations are largely a reflection of user interest. when it comes to news content, our goal is to promote more
9:29 am
authoritative sources and to demote lower-quality content from less established sources. it's not only popularity that fuels the recommendation engine; it's also trying to draw from sources that are well-established, known sources of news and put those near the top. do we succeed every time? we don't. i'll give you a recent example. earlier this week, obviously, the stock market fell, and our algorithms recognized 'stock market plunge' as being a news query because a lot of news sources were using the word plunge in their titles, but didn't recognize 'stock market crash' because for some reason that term wasn't being used. that was flagged to us. of course we immediately corrected it and made sure that we were surfacing authoritative sources across both of those queries. but even those subtleties in language can cause our systems to misfire, and we work quickly to try to correct that.
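The plunge/crash incident can be pictured as a gap in a trigger vocabulary for routing queries to authoritative news sources. A toy sketch, with invented terms; the real system presumably learns such signals rather than keeping a hand-edited list.

```python
# Hypothetical trigger vocabulary before and after the fix described:
# "crash" was initially missing, so queries using it were not routed
# to authoritative news sources.
TRIGGERS_BEFORE = {"plunge", "selloff"}
TRIGGERS_AFTER = TRIGGERS_BEFORE | {"crash"}

def is_news_query(query, triggers):
    """Treat a query as news-seeking if any trigger term appears."""
    return bool(set(query.lower().split()) & triggers)

for q in ["stock market plunge", "stock market crash"]:
    before = "news" if is_news_query(q, TRIGGERS_BEFORE) else "missed"
    after = "news" if is_news_query(q, TRIGGERS_AFTER) else "missed"
    print(f"{q}: before={before}, after={after}")
# stock market plunge: before=news, after=news
# stock market crash: before=missed, after=news
```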
9:30 am
>> the guardian investigation fed 8,000 youtube recommendations into their software system, which tracks political disinformation campaigns, and in every case the largest source of traffic for these fake news channels came from youtube's up next algorithm. so you're clearly saying that you don't agree with that? >> i am saying it's an area where authoritative sources surface near the top of our engine, and one we will continue to work on and improve. it's an area of investment for us. >> there's one other area i wanted to touch on, which is that at this point it seems like in choosing to rank videos and to have a system of what's good and what's better and what's higher and what's lower, you are, in fact, acting as an editor, such as what an editor of a newspaper would do, and yet you are not calling yourselves editors, you're calling yourself hosts.
9:31 am
and i wonder whether you might think that your -- the description of what you are ought to be changed and that your whole name for your platform ought to be changed so that you take on more of the responsibilities as would a bona fide newspaper so that you would have to apply broadcast and newspaper regulations to yourself. because at the moment you are unregulated. >> so when it comes to the scale of our services we have well over 400 hours of content uploaded by creators to youtube every minute. there has to be some organizing principles by which we determine when someone comes to youtube, searches for funny videos, there has to be an algorithm that delivers the most relevant and useful results to that individual user. so that's how our algorithms function. i think the question of whether we're a publisher gets to the question of do we take responsibility for the content on our site. and the answer to that question
9:32 am
is absolutely yes. this is a top priority for us, to make sure that we're providing a useful service to our users and that we are exercising responsibility. >> but you are self-policing, aren't you? >> we have a set of -- >> is that right? >> we have a set of community guidelines that set the rules of the road for youtube. those have been in place for -- since we acquired youtube as a service. they're dynamic, they evolve in response to changing trends in the world and changing trends in what kinds of content and misuse of our services we see. we've developed those guidelines because we're trying to maintain a certain kind of community for our users. they do go above and beyond what the law requires. but we also follow the law in the jurisdictions where we're launched, so when we are alerted to content that may violate a particular law we also block that content for the relevant jurisdiction. i think the model of self-regulation that i've just described, if you look at the eu internet forum where we've been
9:33 am
cooperating with the european commission for several years now on the removal of illegal hate speech from our services, is a good model of how governments, ngos and the tech industry can collaborate to ensure that we are acting responsibly and quickly to deal with these content issues. and the commission has acknowledged the progress of that process. >> it's strange. this was described as the most overlooked story of 2016, and people describe this issue as a disease on society, affecting behavior. it's surprising that you don't seem to register any of that. >> we take the criticism there very seriously. again, it is a top priority for the company to make sure that we continue to invest in detecting content that may violate our policies, that we are watching for bad actors who are trying to manipulate or exploit our systems. the openness of youtube has brought tremendous benefits to society when it comes to educational content, culture, music, the arts. those benefits
9:34 am
are tremendous. and of course that openness also presents challenges, and these challenges are ones that we are committed to managing and dealing with responsibly. >> thank you. i think we've been -- the committee has been busy trying to figure out how much money youtube makes. the figure is not disclosed by the company, but estimates are that youtube's global revenues are about 10 billion u.s. dollars. does that sound about right? >> i really can't confirm any of those numbers. it's not information that i have access to. >> okay. well, let's just say it was, and just for easy math, if that was -- if you're investing $10 million a year in making youtube safer and the revenues are $10 billion, then you're reinvesting 0.1% back into making the site safer. that sounds like a small sticking plaster over a gaping wound, and not likely to be satisfactory in addressing some of the concerns that rebecca pow has raised. i don't know if you want to
9:35 am
comment on those figures, but i would appreciate -- the company may not want to disclose figures in terms of revenue, but if they could give us a percentage figure of the revenues of youtube that's reinvested, we would be grateful for that. but on publicly available figures, 0.1% seems to be where they come out. >> i think your question gets to, are we investing adequate resources in addressing these issues. there's no constraint on the resources we will put to getting this right. we have invested tremendous time, time from our engineering teams, from our executive teams, from the trust and safety operation, which has nearly doubled in size over the past year. so we take these issues very seriously, and if we decide that we are unable to make our desired progress because we don't have adequate investment in the issue, we will invest more in addressing it. >> all i'm saying is that for a multi-billion-dollar business,
9:36 am
if tens of millions of dollars is what's addressing these widely held concerns that are increasingly being well documented, that sounds like a pretty unambitious program of investment, and i think, understandably, given the huge size of the platform, people would question whether that sort of investment is likely to be successful in addressing some of these concerns. i want to bring in chris matheson. >> i just want to follow on from ms. pow's questions about the article in the guardian. and it quotes one of your former engineers, who says that watch time was the priority that your algorithms give to -- to prioritizing what comes up next, not necessarily truthfulness or decency or honesty. how would you comment on that allegation? >> so watch time is obviously an important metric for us because it demonstrates that we're providing a service that users love, where they want to spend time on the product, that they're enjoying and finding the experience of youtube valuable. for certain kinds of content,
9:37 am
you know, watch time for music, for example, i know i listen to hours of music on youtube because it keeps delivering the music i want to hear. we know that for news and for certain other verticals, the veracity of the content that we provide to our users is also incredibly important, which is why some of the changes we've made over the past years have been about making sure we are surfacing more authoritative content to users, where they're getting the information they're seeking, and demoting lower-quality content. we have dedicated news services, we have a breaking news shelf, so when there's a breaking news event we surface high-quality news near the top. and we also invest in making sure that the users of youtube have the skills they need to assess the content they're watching. so in the uk we created a campaign called internet citizens, where we're committed to training 20,000 young people in identifying fake news and
9:38 am
learning how to check their sources and bias and so on. so it's really about an effort that supports the whole ecosystem. we also work with publishers. we launched a program called player for publishers that has 50 european news providers in it, where we provide them with technical support to have a video player embedded in their own website, which reduces complexity and cost for them, and we also work with them to optimize the content they produce for youtube. their watch time has doubled and they appreciate the fact that their watch time has increased through our investment in them. so watch time can be an important factor. >> so you do accept there's a problem of bogus news, misleading news, disinformation, and you accept you have the responsibility to mitigate and to address that? >> we recognize there's a problem of misinformation, and we're dedicated to making sure that we are promoting authoritative content when it
9:39 am
comes to news and demoting misinformation. >> do you have a sense of moral responsibility to take those actions? >> we have both a sense of social responsibility and it's a business priority for us. again, the trust that users have in our services is reliant on us providing a high-quality experience when they come to the site. presumably when people come looking for news rather than entertainment, what they expect to find is factual reporting from a variety of sources. we provide diverse sources from around the world, we think that's valuable, and so our investment in this area is, yes, because we recognize it's an important policy matter, but also because it's an important business priority. >> paul lewis confirmed guillaume chaslot's conclusions in regard to the u.s. election.
9:40 am
it was 60% higher for the trump campaign than the hillary clinton campaign. >> well, i won't comment on that. >> go on, comment on it. >> we did publicly say that we didn't agree with the methodology that was used. >> why not? >> we weren't provided the 8,000 videos to examine them. >> but you've got a bigger selection, because you've got the whole set. so maybe you could run their methodology past your set, which is bigger, and get what you might consider a more accurate solution. how about doing that? >> let me explain how i think the engine did function in the lead-up to the election. it's a reflection of user interest. so to the extent that there was more content recommended about one candidate versus the other, that was because there was a lot more user interest being expressed in that candidate, so we saw more searches for that candidate. i think we saw similar trends across broadcast coverage of the u.s. election, where one candidate got more coverage than
9:41 am
the other. that's a reflection of what users wanted to see, what they were interested in when they came to the site. >> ms. stevens already identified the problem, that it becomes a self-fulfilling prophecy and it spirals in one direction, as opposed to providing balance. >> this is an important question for us to figure out, how to ensure that we're providing information that users actually want to watch and will watch, that is also diverse. we do not build any political bias into our algorithms. we design products for everyone. we invest heavily in ensuring that there is no bias built into the algorithms that we design. there's a human element here too, which is people want to watch what they want to watch. it's hard, if someone isn't expressing an interest in a particular person or type of content, to just insert something that's opposite to that and expect them to watch it. we see abandonment of the service when we do that, because humans are -- have their own
9:42 am
will. so we try to create a diversity of content, but at the same time keep it topical enough that the user feels we're fulfilling the interest they've expressed to us when they come to the service. >> can i just ask, if a video is getting hundreds of thousands or millions of hits and then all of a sudden gets taken down when it's at the top of its performance, wouldn't that be a little bit strange in reference to your business model? >> i'm not sure i quite understand the question. >> well, if a video's getting hundreds of thousands or millions of hits, which is good for you and for the person who posted the video, and then they take it down when it's at the top of its game, would that not strike you as a little bit odd? >> so the community guidelines and content policies we have for youtube apply to all of our creators equally, no matter how popular they are. so we find content when it's flagged by users, we evaluate it, and if it violates our policies we remove it
9:43 am
immediately. >> what if the creators themselves take it down? >> they always have control over their own content and are free to delete it at any time. >> thank you. >> thank you, chair. >> you've mentioned how much progress you've made, and you believe that your moves in this particular area are quite effective. yet we've just discovered that potentially you're spending 0.1% of your turnover. more recently, in fact this morning, we have the front page of the "wall street journal," which is doing the investigative, old-style journalism if you like, and it says the recommendations that you present are from divisive, misleading or false content, despite changes recently made to highlight more neutral fare. and 27% of viewer time is taken up with these recommendations. in the light of that, where has your -- why has your self-regulation so demonstrably failed, and how
9:44 am
many chances do you need? >> so our recommendation engine was designed with the main use case of youtube in mind, the things that people come to youtube and love the site for: comedy, music, cooking vlogs and others. we recognize the effort we put in to demote lower-quality content is a work in progress, and we'll continue to invest in getting it better. >> but you stand by the fact that it's quite effective, because that's what you said at the start. >> quite effective for the majority use of youtube. news is less than 2% of our watch time. >> so it's that 2% you think your algorithms don't work for at the moment? >> i think they're working better than they were six months ago, eight months ago and so on. this is an area where we're continually investing, to make sure we're providing the right news experience for users when they come to youtube. we're committed to doing better. when we see some of the results
9:45 am
in that "wall street journal" article, frankly we're not proud of them. that's not the experience we want to provide. >> so you agree with this article, you dispute the guardian's potentially from back in the past, but you say basically that's, you know, we have a phrase in britain, a fair cop effectively. is this a fair cop? >> i think that the results that are pictured in that article were taken from youtube and so to the extent that we look at those and think we could do better, we certainly look at those and think we can do better. >> one area which you seem to be -- do quite remarkably better is things like sporting rights. now for example if i wish to post a final touchdown in the super bowl as i'm saying that in america i probably think about the premier league soccer goal, that would be taken down just within minutes, it would almost be instantaneous, you'd be right on to that. why does it take so much longer at all when it comes to misinformation by foreign powers that are specifically looking to
9:46 am
undermine the west? >> so when it comes to copyrighted material, we have invested a lot over the years to make sure we're protecting copyrights. and rights holders provide us with a digital file of their copyrighted material, so we have that as the starting place; we know what content the systems are looking for, and we can identify matches to that content relatively quickly using the technology. if we were provided a digital file of every misinformation video, we would be able to identify it just as quickly. but unfortunately our systems have to look for patterns. they're quite sophisticated at doing that, but bad actors are constantly evolving and trying to evade detection. it's a little bit of a cat and mouse game, where we're always trying to stay one step ahead of those who attempt to misuse our services.
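The asymmetry Downs describes, reference files for copyright versus pattern-hunting for misinformation, can be sketched in a few lines. The exact hash below is a stand-in: real systems match perceptual fingerprints that survive re-encoding, and all names here are invented.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Stand-in fingerprint: an exact content hash. Real copyright
    matching uses perceptual audio/video fingerprints instead."""
    return hashlib.sha256(data).hexdigest()

# Rights holders supply reference files up front, so matching an upload
# is a fast lookup. No comparable reference set exists for misinformation,
# which is why those systems must look for behavioral patterns instead.
reference_index = {
    fingerprint(b"<reference video bytes>"): "rights-holder-123",
}

def check_upload(upload: bytes) -> str:
    owner = reference_index.get(fingerprint(upload))
    if owner:
        return f"match: apply policy of {owner}"
    return "no match: fall back to pattern-based review"

print(check_upload(b"<reference video bytes>"))
print(check_upload(b"<previously unseen clip>"))
```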
9:47 am
>> it's not just about the money and fear of being sued that you act so quickly when it comes to copyright. but when it comes to
this misinformation, as i say, by foreign actors, the evidence is abundant that this is effectively not in the same ballpark, just to stretch the sporting analogy? >> it is a top priority for us. >> not 0.1%. >> i'm not sure the financials behind this are the right metric. the technology that we deploy, which is a hard thing to monetize, does a lot of the work at scale to identify this content with speed. so if you try to quantify in people-hours how many hours of human endeavor are being saved by the technology, i think we'd get to a much vaster number. so we really, in order to address these issues at scale, need to invest in the technology that can find the content and enable us to act on it quickly. >> okay. germany's -- i'm going to turn to you with one question in a
9:48 am
moment. germany has effectively sort of regulated, just ahead of their elections, in terms of removing hate speech on social media; i think they were concerned about the reaction to immigration on social media. the industry responded, and many commentators suggested there was a demonstrable reduction in the level and effectiveness of interference in their elections and also the french elections. surely this is strong evidence that the way in which western democracies protect themselves is to regulate you. >> so we work with election campaigns around the world to help them with digital tools to protect their campaigns and websites from phishing attempts and so on. we're very committed to investing in that kind of support for political candidates and campaigns; that's an area where we can help. we collaborate with others on that. i think in terms of the german law that was recently passed,
9:49 am
there has been robust public debate about the risks and benefits of that approach, and i think, since it's been implemented in january, a lot of public criticism of the way that it has played out. so i think the debate is ongoing on whether that's the right approach. >> okay. thank you. mr. gingras, your company hosted us in new york in order to talk about this project, which was interesting. we also saw in new york the stark evidence of the crushing impact on the local press, and increasingly the national press, of the sucking out of revenue by companies such as yourselves from more traditional media. what happens, do you think -- you talk about how you train journalists and get them able to use your platforms more effectively, but what happens when all we have is a few global players like "the new york times," the washington post, the atlantic, the british
9:50 am
broadcasting corporation? it's pointless making the journalistic point -- there are so few of them left. what else are you going to do to ensure that journalism survives? >> that's a very important question and one which we are deeply concerned about as well. i should point out that the success of our efforts with google search and our ads is dependent on there being a rich ecosystem of knowledge out there. as we often say, our success is dependent on the success of publishers. i will point out that clearly the publishing business in every dimension has certainly been changed and affected by the introduction of the internet, without question. i would also point out, however, the various things that we're doing to assist with this. first of all, i should point out that we share 70-plus percent of our display ad revenue with publishers around the world, to the tune of some $12.5 billion.
9:51 am
so that is one element, as well as providing them with ad systems and tools that allow them to derive revenue; 2 million publishers around the world benefit from that. we also drive traffic to many news sites, to the tune of well beyond 10 billion visits per month, which has been valued by third parties at between 5 and 7 cents per visit. that is another $5 billion to $7 billion a year in revenue to the industry. so that's one component. but it's just one. i think we need to recognize, again, that the ecosystem has changed. and one way to think about this is to think about your own actions. i started out in the newspaper business. i started out in the press rooms of the providence journal when i was a teenager. >> does the providence journal still exist? >> yes. >> there aren't that many local newspapers in the u.s. that still do. i'm very happy to hear about the background, but the point is, effectively, what actually
9:52 am
happens when the journalistic landscape is so decimated and people don't know what to trust in this respect? is there a sense of responsibility that you have? >> we do feel a strong sense of responsibility, for the reasons i pointed out. it's important to society, and it's important to the nature of our business. and so we've mounted many, many efforts. you know, my role at google is both, on the one side, to oversee our efforts to surface news on google search and on google news, but another part of my role is the various efforts we have mounted to help enable the publishing ecosystem and legacy publishers to make the transition from the old marketplace of information before the internet to today's. but behaviors have changed. when you think about it, you know, when i was young and i was looking for a used car, i went to the classifieds in my local newspaper. if you're doing that in birmingham today, you'd go to gumtree, not to your local newspaper. if i was looking for a house or
9:53 am
rental apartment, as a kid, i would go to a newspaper. today, in liverpool, you'll go to zoopla. even when it comes to national news, i went to local newspapers for my national news. now, much of it might have been from wire services, but i went to them. but today also, people's behaviors have changed. they go to the telegraph or the london times, so the marketplace has changed. what that means, and what we feel as part of our responsibility, is how do we help news organizations, enable news organizations, to figure out what kind of products they can serve to their local communities. and we do this in various ways. one is through deep collaboration with the publishing industry. it's important that we understand their challenges and they understand what we can do. we've mounted efforts like the subscription project: can we bring better tools to help drive subscription revenue? can we bring better data and knowledge to help them identify targets of opportunity in their markets? we helped with tools for digital storytelling and trainings for journalists through the google
9:54 am
news lab. we've trained thousands of journalists in the uk and will continue to do more. we've provided innovation funding in europe through the digital news initiative, maybe $3 million to date and 6 million pounds in the uk, helping folks like trinity mirror develop new services like inyourarea. we're seeing good signs of success around the world, but there's more work to be done, more innovation that needs to take place to reach that sustainable ecosystem of local news publishers as well. >> thank you. >> thank you. >> thank you, chair. just a couple of follow-up questions, one from mr. matheson and the other for mr. knight. following on from mr. matheson, your business relies on people being on your site and viewing for as long as possible, hence you have the sophisticated recommendation engines. and they're designed to do that.
9:55 am
so how do those recommendation engines differentiate between children and adults, and is it the goal to get the user, regardless of age, to stay on the site and consuming content for as long as possible? >> when it comes to google, we only allow users that are 13 and older. that is the limitation on the site. and i think your question gets to this issue of public concern around tech addiction and particularly young people's use of smart phones and social media, and when i think about the goal of google services, it's really to provide products that enhance people's lives, not detract from them. and we think that this question of how we can fulfill that goal better is a really important question to society, so we are investing in research to better understand these issues, to better understand what kinds of product design we can implement to make sure that we're providing a product that is an
9:56 am
enhancement to people's lives, that enhances people's productivity. if you look at youtube, we have a billion views of educational content every day. this is a tremendous asset to society. if you think about something like khan academy, where an uncle started making videos -- >> sorry to interrupt. but i do need to direct you back to the questions and not to make general statements. >> thank you. the question is, how do you differentiate between adult consumption and child consumption? >> we don't look at adult consumption versus child consumption. the recommendations are based on the video that is being watched, and content that is associated with that video or with the watch history of the individual signed-in user. again, only people 13 and older can be signed in to youtube, so the watch history there would be teenagers and above. >> okay. so there is no mechanism other than an arbitrary, you have to
9:57 am
be 13 to view, that allows you to say who's watching what, and anyone of any age -- any tech-savvy, you know, 5 or 6-year-old could probably get round a 13-year-old age limit -- so any tech-savvy child could consume as much content as they like and you wouldn't know. >> so protecting teen users on youtube is a top priority for us, and one of the things we do is we age-gate videos that are more mature, and then you have to be signed in and over 18 to view them. so that is the one place where more mature videos are eligible to be viewed. >> i'm not talking about the more mature video. i'm talking about the consumption, the unrestricted consumption, of youtube videos by children. and i'm just trying to see, is there a way that you have managed to address -- i suspect, by your answer, the answer is you haven't managed to address
9:58 am
it. >> the investment we've made is in developing a dedicated youtube experience for families and children, which is a dedicated app called youtube kids, and when we designed the product, the goal was to create a new service that gives parents control and information, and it was informed by collaboration with child safety experts. so we designed a product that has a much more limited corpus of youtube videos of family-friendly content. we give parents controls such as a timer for how long their child can view the app, the ability to turn search off so the child is limited to a much smaller set of videos, and other controls. and this was because we want to provide a good experience for families, but we recognize it really needed to be in its own space.
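The parental controls described, a viewing timer and a search switch that confines the child to a curated corpus, might be modeled like this. This is purely illustrative; the class and defaults are invented, not the YouTube Kids implementation.

```python
from dataclasses import dataclass

@dataclass
class KidsAppControls:
    """Hypothetical model of the controls described: a daily viewing
    timer and a search toggle limiting the child to a curated set."""
    daily_timer_minutes: int = 30
    search_enabled: bool = False  # off = much smaller curated corpus

    def may_keep_watching(self, minutes_watched_today: int) -> bool:
        return minutes_watched_today < self.daily_timer_minutes

controls = KidsAppControls(daily_timer_minutes=45)
print(controls.may_keep_watching(40))  # True
print(controls.may_keep_watching(50))  # False: timer elapsed
```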
9:59 am
>> are you happy that you are doing everything you possibly can -- it goes back to the trust issue you were talking about,
mr. gingras -- are you happy that you're doing everything you can to protect children from this unlimited consumption that we hear so much about? that's why people have issues around child interaction with the internet at the moment. >> i would say -- i would point out that we're never fully satisfied with our work. as i mentioned, this is a vibrant, live, changing ecosystem, every day. for instance, with google search, we get billions of queries every day; 15% of those queries are queries we've never seen before. so it's an ongoing effort. as i often mention, our algorithms aren't perfect. i doubt they ever will be, which means we have to be ever vigilant. when we talk about how many dollars we've invested in this, i don't want you to reach the assessment that when we mention $10 million of additional investment, for instance, in security efforts and so on, that that is the totality. that's just additional recent investment. this is building on a foundation of effort where we've got thousands of engineers at
10:00 am
youtube, thousands of engineers at google search, security systems, security people, that we're building upon and will continue to build upon, such that we can understand the new challenges and address them as quickly as we can. there are no laurels that we rest on here. i just want to be clear. >> okay. my final question, and it relates to what mr. knight was saying. this fairy-tale idea of a fun space where everyone can share and, you know, enjoy, it really has been superseded by events; it's no longer possible to have youtube operate in the way that it did way back at the beginning, because of people who are exploiting it and, you know, social issues that have arisen. are you satisfied that the current legislative framework is sufficiently robust to meet these new challenges, or would you welcome further legislation, or, as the debate is now happening, how would you like to
10:01 am
participate in that debate to shape a legislative framework that works for everybody? >> we are intrinsically motivated to address these issues. it's a top priority for youtube as a company, and we welcome any conversation about legislative proposals. a big part of my organization's role is interacting with policymakers and other stakeholders to hear ideas, to participate in conversation about them, so we're always happy to do that. but i want to reiterate that we don't need extra motivation to get this right. it is mission critical for the business for us to address these issues responsibly, and that's what we're committed to doing. >> but more collaboration, more engagement -- these are problems that we're facing as a society, that we're all facing. there are no silver bullets to any of these questions. they're big questions, whether we're talking about the challenges of misinformation or the challenges of the evolution of the publishing ecosystem, and if there's anything that we've learned over the last several
10:02 am
years, it was that our best approach to these is to engage with the community, whether they be publishers, whether they be people in the public policy space, such that we can all collectively gain a better understanding of how to approach the problems, and what they are, and how we partner together on finding better solutions. >> thank you. i think we take your point. but we've heard the expression 'top priority' a lot. i think if we judge the company based on what it does, rather than what it says, the top priority is maximizing advertising revenue for the platform, and a very small portion of that is reinvested back into dealing with harmful content. that's one of the reasons we're here and one of the reasons social concern about this is growing. >> to use pretty much every analogy under the sun, it seems to me like you've opened a pandora's box, probably unintentionally, and you've got a tiger by the tail, and now, to carry on with the analogy, you're somewhat behind the curve. it seems extraordinary that when
10:03 am
you're talking about misinformation, well, let's face it, misinformation, disinformation, fake news, can be sort of sexy. i mean, people like ghosts. they like conspiracy theories. they like ufo stories. and they follow it and this creates, i would imagine, a huge amount of traffic, both propagating those theories, et cetera, and those who want to quash that. so, that would be good for you, because you can then harvest the data from those people who are doing that. and then that has a commercial and potentially political value, would it not? >> so, let me talk a little bit about the spectrum of fake news because i think the term is used to reference a lot of different categories of content and i can speak to how we deal with it and address your question. on one end of the spectrum, you have fake news farms, content that's produced in bulk, there's a financial incentive to distribute as virally as possible. this pattern is caught by our spam detection systems.
then you have what i think you may be referring to, which is click bait content that might be more sensationalistic -- we see videos with titles in all caps or salacious thumbnails. where we see this click bait content, we demote it; our algorithms are trained to identify click bait and demote it. and it's interesting that you ask the question about watch time and user interest, because what we expected when we started demoting click bait content was a decline in watch time, and that was the initial result. we were willing to absorb that, because we didn't think it was high quality content. but -- and this may make you feel better about humanity -- over time, the watch time actually picked back up and increased. when we surfaced the higher quality content, people actually started consuming that as well.
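[editor's note: the demotion mechanism described here -- score content as likely click bait, then rank it lower rather than remove it -- can be sketched roughly as below. the signals, weights, and penalty are invented for illustration; youtube has not published its actual features or classifier.]

```python
# illustrative sketch: demoting likely click bait in a ranking pass.
# the heuristics and the 0.8 penalty factor are invented, not youtube's.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    base_score: float  # relevance/watch-time score from upstream ranking

def clickbait_score(v: Video) -> float:
    """crude heuristic stand-in for a trained classifier."""
    score = 0.0
    if v.title.isupper():                  # all-caps titles
        score += 0.5
    if "!!" in v.title or "?!" in v.title:
        score += 0.3
    if any(w in v.title.lower() for w in ("you won't believe", "shocking")):
        score += 0.4
    return min(score, 1.0)

def rank(videos: list[Video]) -> list[Video]:
    # demote rather than remove: likely click bait keeps a fraction of
    # its score and falls down the results instead of disappearing
    def adjusted(v: Video) -> float:
        return v.base_score * (1.0 - 0.8 * clickbait_score(v))
    return sorted(videos, key=adjusted, reverse=True)

videos = [
    Video("SHOCKING!! you won't believe this", base_score=0.9),
    Video("committee hearing on disinformation, full session", base_score=0.7),
]
for v in rank(videos):
    print(v.title)  # the hearing outranks the click bait despite a lower base score
```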
then, if you think further down the spectrum of what people call fake news, you also have, somewhere in the middle, various opinions that people express on youtube about current events, political matters, and so on. we don't consider that content to be news. it's subject to our content policies -- if it crosses the line into hate speech or harassment, obviously we're going to take it down -- but it isn't news content in our view. >> but you did say just a little while ago that people watch what they want to watch, and you're sort of riding on the back of that, and you're not able to keep up with it. i put it to you that it's actually not in the interest of large corporations like yours to put a stop to this, because you are harvesting data. >> so, this is really about the kind of site and service we want to provide to our users, and the long-term play; it's not necessarily about short-term results. when it comes to news content, our commitment is to provide our users with authoritative information from established news sources. we want to demote that lower quality content, not necessarily
because we think that we'll get more watch time. it may be true -- your assumption may be accurate that people will watch more click bait-y content because it's entertaining, or for whatever reason -- but that is not enough for us to prioritize it. our goal is to demote it and promote the authoritative sources, because we think it's a better long-term play for the business to be seen as a trusted provider of news and information to our users. >> final couple of questions from me before we close this panel. i noticed that youtube's policy now is to create a label or a tag on content from broadcasters who are publicly funded in their countries. that's correct, isn't it -- in response to requirements in the united states that that be done? >> so, we recently started rolling out a new transparency feature for certain news sources. we made a public commitment to provide users with
information about the sources of news they consume, and we've started fulfilling that commitment by introducing a new label that informs users if a news outlet receives government funding, and a separate label to inform users if a news outlet is a public broadcaster. it's just started rolling out, but we plan to extend it, and we think that providing transparency to users is an important part of what we can offer as a service. >> there's been some criticism of this, because it means that rt, a kremlin-backed propaganda station, is in the same bucket as the bbc. by mixing up rt and the bbc together, do you think that is making things easier for consumers to navigate, or muddying the waters? >> so, as we developed this new feature, we did reach out to various news outlets, including the bbc, and we got this feedback directly from them, which is why we created two separate labels: one that states that an entity
receives government funding, in full or in part -- that's what applies to rt -- and then a separate label that identifies a public broadcaster, which is what applies to the bbc. so they're actually two distinct labels, and each label links to an online source where users can read more about the news source and get additional information about who funds them and what their structure is.
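[editor's note: the two distinct labels described here amount to a small data model, sketched below. the enum names, outlets, and links are assumptions for illustration, not youtube's actual schema.]

```python
# illustrative sketch of the two-label scheme: one label for
# government-funded outlets, a separate one for public broadcasters,
# each linking out to more information about the source.

from enum import Enum

class FundingLabel(Enum):
    GOVERNMENT_FUNDED = "funded in whole or in part by a government"
    PUBLIC_BROADCASTER = "a public broadcast service"

# each labeled outlet links to a page describing its funding/structure
LABELS: dict[str, tuple[FundingLabel, str]] = {
    "rt": (FundingLabel.GOVERNMENT_FUNDED, "https://en.wikipedia.org/wiki/RT_(TV_network)"),
    "bbc": (FundingLabel.PUBLIC_BROADCASTER, "https://en.wikipedia.org/wiki/BBC"),
}

def label_for(outlet: str) -> str | None:
    entry = LABELS.get(outlet.lower())
    if entry is None:
        return None  # most channels carry no label at all
    label, more_info = entry
    return f"{label.value} (learn more: {more_info})"

print(label_for("RT"))   # government-funded label
print(label_for("BBC"))  # distinct public-broadcaster label
```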
>> listen, i think there is a degree of confusion here. some people reading that might say, well, what's the fuss about rt? they're a public broadcaster, so is the bbc, it's all the same thing -- whereas they're radically different things, and it's not just about the ownership of the organization, it's about the way it's run. so certainly the evidence we've received from people whilst we've been in america this week very strongly suggests this is something that needs to be thought through again, because it may unwittingly sow more confusion. >> we agree that more information and context is better, which is why the label does link to a site where people can get additional information, like what you described, to distinguish some of these news providers from each other. >> yeah, i'm sure that's all well and good, but given the way people consume content -- the kind of 'up next' experience providing a constant stream of content -- i'm not sure that's a service people in reality will use that much. what they'll go on is the labels, and if the labels seem to be categorizing content in a similar way, they may trust it in a similar way, and that may not be helpful to them. but i think we've completed our questions for this session. richard gingras, juniper downs, thank you very much. >> thank you.
>> we'll go straight into the second panel. monika, simon, thank you very much for joining us here this morning for our evidence session. if i could start by asking monika bickert: i'd just like to ask for some background information about the senate intelligence committee's investigation into russian activity during the election there. quite a lot of that work has been helpful and insightful for us, and it has led to us contacting facebook and asking for similar analysis of whether there was any russian activity in the uk. i just wanted to ask about the information that facebook has provided. is it correct to say that all of the evidence so far -- the analysis of the number of people who were exposed to content created and distributed by the internet research agency in st. petersburg -- comes from the analysis facebook did identifying payments made in rubles to promote advertising
around that content. is that correct? >> thank you for the question. the information that we gave to the committee in testimony -- and that was my colleague, colin stretch -- is really the best source of information for the details about that. in that testimony, colin did speak to how facebook conducted that investigation; ads paid for in rubles was one of the signals. but again, mr. stretch's testimony is really the best record of that. >> yeah, just from information we've received from the senate intelligence committee, they're very clear that everything has been extrapolated only from those accounts where ruble payments were made; the analysis was done by looking at accounts linked to that behavior. the facebook analysis simply looked at what some people might call the lowest-hanging fruit -- accounts where ruble payments had been made -- and linked out from that, and there's been no wider analysis of whether there were other
agencies involved, or other similar activity taking place that wasn't linked to ruble payments. >> i think i could just refer you to mr. stretch's statement, and also we did put out some posts, including in april of 2017 and then several over the summer -- i believe one in august and two in september -- where we talked about how we did look for that sort of content. >> yeah. we're pleased that facebook has agreed to conduct an analysis of whether russian agencies were involved in distributing content on the platform linked to the brexit referendum in the uk, and to other elections too. but just to note, as we're in the hearing now: our expectation is that that analysis won't be based just on whether ruble payments were made for promotional advertising, but will actually be an analysis of the origin of content and whether it's likely to have come from a russian agency, because of the nature of that content and the way it's been distributed. ruble payments for advertising are one tool that could be used, but there are many,
and i think we would hope the analysis would be wider than just that. >> perhaps i could add to that. as i explained previously in my letter to you, a second investigation is under way. it's not yet completed, but i can tell you that we do expect to be able to report the results of that investigation back to the committee by the end of february. and we will be prepared to share with you, possibly in private -- because we don't want to tip off the bad actors about how we do those investigations -- exactly how that work was undertaken. >> okay. thank you. we look forward to seeing that at the end of the month. monika bickert, to what extent do you feel the company has a responsibility to make sure that your customers are protected by knowing the source of the information they're seeing -- where it's coming from, who's producing it? >> we feel very responsible for letting our community know, first of all, that they are in a safe community, and some of the questions that were put to the previous panel went to questions
of safety, and we can speak a little bit about that. that's a huge priority. it's also a big priority for us to help people connect with authentic information. we know from talking to our community that it's something they care about. it's something we care about, and we're investing a lot, not just in the policies for keeping our community safe -- which tend to be simple in the sense that it's fairly black and white: content either crosses the line into something that is unsafe and we remove it, or it doesn't and we leave it up -- but also in the area of fake news, where we've developed a four-prong approach. we are trying to make sure that when people come to connect with news on facebook, it is reliable news, and they have the ability to make decisions that are informed. >> do you feel that -- not just with news services, but thinking about other information people might share: community pages, pages that may have political content -- it should be really clear to people where those pages are being administered from? so if i'm seeing a community
page that's giving me information about kent, where i live in england, should it be clear that it's actually being run by someone who lives in england, not by someone who lives in another country? >> there is a spectrum of different types of what people might call news or information. one of the things that has been notable about social media is that it has given a voice to many people who want to share what is happening to them, especially in regions of the world where traditional news media outlets don't necessarily reach, or reach frequently. so there is a spectrum of information. our job is to make sure people can connect with reliable information and make their own decisions about what they want to see. >> is one of the ways you empower people to make those decisions to let them understand where the content is being created, and whether that person is who they are pretending to be? >> there are a couple of things we do to try to increase the transparency of source information. one thing that we do is, on facebook -- and this is distinct
from many other services -- we have a policy that requires that you use your real name. when we think of removing false news, a lot of it comes down to this: if you think about the worst types of false news, the sort of financially motivated spam with links that take people to ad farms, that content is very typically propagated by fake accounts. so that transparency requirement in our policies is very important to removing those accounts. there's another thing that we do, which is using context as a way to inform people about their news source. what i mean by that is -- and this is something we're testing right now -- when people see information from a news source, if there is any signal to us that the news source might be unreliable, they can click on a little icon. we're calling this article context, and we released it in november of 2017. from that icon, they are taken to information gathered from across the internet about this source and the reliability of the source.
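[editor's note: a rough sketch of the kind of lookup ms. bickert describes -- aggregating what is known about a publisher and surfacing it behind an icon. the index, fields, and domains below are invented; facebook has not published how the article context feature is implemented.]

```python
# illustrative sketch: given a link in the feed, surface what the wider
# web says about its publisher. all data here is a hypothetical stand-in.

from urllib.parse import urlparse

# stand-in for an index built from crawling "across the internet"
PUBLISHER_CONTEXT = {
    "example-news.com": {
        "description": "regional outlet founded 2009; owned by example media group",
        "fact_checker_flags": 0,
    },
    "totally-real-news.biz": {
        "description": None,  # no independent description found anywhere
        "fact_checker_flags": 3,
    },
}

def article_context(url: str) -> str:
    domain = urlparse(url).netloc.removeprefix("www.")
    info = PUBLISHER_CONTEXT.get(domain)
    if info is None or info["description"] is None:
        # the absence of any context is itself a signal worth showing the reader
        return "no information available about this publisher"
    return f"{info['description']} ({info['fact_checker_flags']} fact-check flags)"

print(article_context("https://www.example-news.com/story"))
print(article_context("https://totally-real-news.biz/clickbait"))
```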
>> but on this point, this simple point about labeling and transparency -- people do set up fake accounts. you've identified fake accounts. >> we remove them every day. >> very many of them. so it's possible to do. but what would be the harm in making really clear the origin, at least, of material as you see it, as you consume it on the platform -- where it's being created -- because that may be a very big signal to you as to whether it's a source of information you should trust. >> yes, and people can see, because of our real name policy -- >> as long as they're using their real name. >> if they're not, then we take that account down. >> but people may not know, and it could be up for a while, and people don't necessarily know the location the page is being administered from, the country and so on. >> and you're very right that we don't catch every fake account at its inception. we do find and remove many of these fake accounts every day.
this is also an area of tremendous technical investment for us, and we've gotten a lot better at it. so in the run-up to the french election, the german election, and the uk election, we were using our technical tools to remove thousands of fake accounts -- not that those were necessarily related to spreading disinformation, or to spreading information about the election, but they were fake accounts, and we're using those technical tools to reduce the chance that they might be used to spread disinformation. >> just to ask, in fairness, the same question i asked youtube earlier on: what sort of proportion of your revenues from facebook do you reinvest in identifying this sort of bad content? >> i can't give you a revenue percentage. what i can say is that this is something thousands of employees work on -- we put out an earnings call where our ceo said that more than 14,000 people at facebook are working on safety and security issues. that includes the engineers who are working on the technical systems to identify fake accounts and identify terror
propaganda or other violating material. it also includes the work of our content reviewers, who are looking at the sort of content that's been reported to us and removing it if it crosses the line. >> youtube said they were spending tens of millions of dollars on this sort of work. how much is the investment from facebook, in money terms? >> i wouldn't have a number to give you. this is something that is such a priority around the company that more than 14,000 people are working on it. so it's not just a question of sponsoring a certain program; this is these people's jobs. >> yeah, and do you know what the ad revenue is for facebook for a year? >> i believe our revenue for the last quarter was around $13 billion. >> thank you. >> thank you, chairman. can i make it clear that all my questions are for you, monika bickert, because i don't easily get the opportunity to interview you in london, so hopefully the uk taxpayer will get the best value from my travel. you've just mentioned thousands of fake accounts in connection
with the u.s. election. all i've seen so far is the 470 connected with advertising. so, could you elaborate on that statement? >> yes. my statement was that in the run-up to the french election, the german election, and the uk election, we removed thousands of accounts using these enhanced fake-account technical tools. we've been investing in this area for a long time. i want to be clear: the real name policy is not a new policy, and using technical tools to find fake accounts is not new. i've been at the company for years, and we've been doing this for years, but we've had significant advancements in the past year, and that is what allowed us to remove those thousands of accounts in the run-up to those european elections. >> the thousands is in relation to the french and not the -- >> french, german, uk. >> not the u.s. election. >> that's right. >> so, why only 470 with the u.s. election? who's better at french than english, you or the russians?
>> again, i can refer you to the comments that were put to the committee by my colleague, colin stretch. that's really the best place to find the information. >> have you not briefed yourself before this session? >> that's part of an ongoing investigation. we're certainly cooperating with the relevant authorities there, and we've given information, and that is the best source of information for that. >> do you do these sweeps just in relation to specific events like elections, or do you do them all the time? >> we're doing them all the time. >> so, how many thousands -- or hundreds of thousands -- of fake accounts have you then suspended that have no connection with specific events like elections? >> without knowing precisely, that is probably the most common scenario. we remove many false accounts every day, and many of those are created for the purpose of, for instance, sending out spam links, or engaging in other bad behaviors. so, some of them we can catch at the time of creation, and we stop them from creating the account. others we can remove quickly after identifying them, using certain signals that our technical tools recognize.
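[editor's note: a minimal sketch of catching fake accounts at creation time using simple signals, as just described. the signals and thresholds are invented for illustration; facebook's actual detection systems are not public.]

```python
# illustrative sketch: block obviously suspect signups at creation,
# leaving subtler cases to later behavioral review.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Signup:
    name: str
    ip: str

signups_per_ip: Counter = Counter()

def looks_fake(s: Signup) -> bool:
    signups_per_ip[s.ip] += 1
    if signups_per_ip[s.ip] > 5:            # accounts created en masse from one address
        return True
    if any(ch.isdigit() for ch in s.name):  # "user4827"-style machine-generated names
        return True
    return False

# accounts caught here never get created; others may be removed later,
# once behavioral signals (spam links, etc.) accumulate
for s in [Signup("jane doe", "10.0.0.1")] + [Signup(f"user{i}", "10.0.0.9") for i in range(8)]:
    print(s.name, "blocked" if looks_fake(s) else "created")
```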
>> could you provide us with a briefing as a follow-up? >> we're certainly happy to follow up. but just to set expectations: when you remove accounts very quickly after creation, you don't necessarily know what the purpose of those accounts might have been. but any accounts that we find that have, for instance, been created en masse, or where there are signals that they are not being accurate in their name or are engaging in a false way, we remove regardless of why they have come to facebook. >> we'll ask you for some statistics afterward. just in relation to the 470 and the $100,000 -- or rubles equivalent -- why did you accept that money in the first place?
>> with regard to the u.s. inquiry, that's part of an ongoing investigation. we're continuing to cooperate with u.s. authorities, and that investigation will continue. but what i can do for now is refer you to mr. stretch's comments to the senate judiciary, senate intelligence, and house intelligence committees. >> the question is unanswered. why did you accept the money in the first place? do you not make any effort to know your user or your advertiser? >> we do. with regard to our systems generally -- and we can speak about ads and also user-generated content -- when it comes to ads, every ad that comes to facebook is reviewed, by either automated or manual review, before it goes live. now, an important component of advertising on social media is that it does happen quickly, so we try to use these systems to find things like bad content -- an ad that might have a certain word in it that would suggest we should take the time to review it before it goes live -- and there are a combination
of signals that might lead us to this sort of manual review. after the ads go live, the review doesn't stop. we look at signals like -- >> hold on. but you've taken money in the first place. so, if you don't know your advertiser or your user, how can you be sure that you're not in breach of international sanctions or money laundering regulations? what responsibility do you take? >> we have a team that works very hard to make sure that when it comes to taking money, we are complying with all laws, such as those around sanctioned individuals and countries. and as i mentioned briefly before, we also have a policy that requires accounts to be authentic, and advertisers must have an account before they can purchase an ad on facebook. our advertising is a self-service model, and what that means is: if you use facebook and you have an account, you can run an advertisement. if you do so, that ad will be reviewed in some fashion before it goes live, and then we will look at additional signals after the ad goes live, including how people are interacting with it -- are they x-ing it out or reporting it to us, or are there other signals that might suggest it warrants further review.
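[editor's note: the two-stage ad review described here -- a gate before the ad goes live, then continued monitoring of interaction signals -- can be sketched as below. the word list and thresholds are invented, not facebook's.]

```python
# illustrative sketch: every ad passes a pre-live gate (automated,
# escalating to manual review on certain words), then post-live
# interaction signals can send it back for another look.

FLAG_WORDS = {"election", "referendum", "miracle cure"}

def pre_live_review(ad_text: str) -> str:
    # a flagged word is one hypothetical signal for routing to a human
    if any(w in ad_text.lower() for w in FLAG_WORDS):
        return "queued for manual review"
    return "approved automatically"

def needs_re_review(reports: int, hides: int, impressions: int) -> bool:
    # review doesn't stop at launch: people reporting the ad or x-ing
    # it out is a signal that it warrants further review
    if impressions == 0:
        return False
    return (reports + hides) / impressions > 0.01

print(pre_live_review("buy our miracle cure today"))   # queued for manual review
print(pre_live_review("visit our shop"))               # approved automatically
print(needs_re_review(reports=40, hides=80, impressions=5000))  # True
```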
>> can i -- my son, actually, who's now 19, opened his facebook account at the age of 9 and has since found himself unable to change his birth date, so your checks on who signs up, you know, stumble at the first block. but it's really important for users, looking forward, and for the integrity of facebook and its advertisers, to be reasonably confident they're dealing with real people, not fake people. >> yes. >> or bots. >> very much. >> so what can you do to improve your game in making sure that people are dealing with real people, not fakes?
>> that is a very, very important part of what we do. when people come to facebook, they expect that they are interacting with real people. that's fundamental -- it's a cornerstone of our service, and it's something we invest very deeply in. our minimum age for people to come to facebook is 13, and we do have automated systems -- and as simon said, some things we can follow up on in private, because we don't want people to game those systems -- but we do have systems that try to detect when a person is putting in a false birth date, and systems that restrict people changing their birth dates, for similar reasons. it doesn't always function perfectly, but we do catch many people who might try to come online earlier, in violation of our policy. when it comes to detecting fake accounts, this is something where we are constantly investing and constantly improving. when we have these safety-related
policies, like not allowing people to share extremist content or not allowing people to bully others, some of that can be done by technical tools, and like google and youtube, we're investing in that and seeing real gains from it. but a lot of it is also contextual, and it requires human review. with fake accounts, this is also true. there are technical tools that allow us to identify that an account may be fake; sometimes it's very simple to say that it is, and we remove it. other times it's not as simple, and we will send it to one of our reviewers, who will look at it and say, actually, it appears this is real, or, we're going to ask this person for identification, or, it does appear this is fake, and they'll remove it. >> final question. if i come to facebook and complain that my identity has been stolen, how long, in terms of your policy, would it take for that page, post, site, whatever
you call it, to be removed by facebook, so that i can be satisfied that you're acting? >> the vast majority of all complaints that we get from people about violating content or violating accounts are reviewed within 24 hours. if there's something that we think is safety related, it goes to the front of the queue; if your account were, for instance, hacked, that's something that we would respond to very promptly. if you're seeing an imposter account, that is also something that we would attempt to respond to very quickly -- imposter accounts, obviously, could really wreak havoc. i will say it is not always a simple inquiry. i've looked at our reviewers who do this work, and often what it entails is: you've got one account, and then you have another account, and the second person has said the other one is the imposter. you have to look not only at the date the accounts were created but also at
the content within the accounts, to try to figure out which one is right. you might have to ask for the upload of forms of identification to confirm. so, we try hard to resolve all of these within 24 hours. we don't always hit that mark. >> okay. thank you. i think we all think you could do much, much better. >> thank you. julie elliott. >> thank you, chair. before i go into what i was going to ask, i just wanted to come back to something you said about taking these fake accounts down. have you done an analysis of which countries these fake accounts mainly come from, when you talked about the large numbers of fake accounts you've taken down? >> you mean when i said in advance of the french election and in advance of the german election? there were various signals that our team uses, and we can follow up privately and walk through some of those signals. >> right. because certainly in the uk, there are significant implications around electoral law for people who are,
you know, trying to influence our elections from outside, because all materials in elections are very, very regulated. >> right. >> so actually, where these fake accounts are from is really very important, so i would appreciate it if you could. >> absolutely. >> and we had a very controversial referendum in the uk some 18 months ago now. do you believe that disinformation campaigns using your platform played a role in that referendum? >> i'm more expert on that issue, if you don't mind, ms. elliott. it's something where we understand why people are concerned; concerns have been raised in parliament, and the electoral commission approached us and said, we would like you to assess whether or not there was, indeed, misinformation coming from another country, and particularly russia, associated with the referendum. we were very keen to cooperate with the electoral commission on
that. we reported some initial findings from an initial investigation to the chair of the committee in december. i think it's fair to say he felt we hadn't done enough work, and having reflected on that, that's why we're now undertaking more work. so the answer is, we won't be able to tell you until that work is completed, but we are committed to telling the committee the outcome of those results at the end of february. the one other thing i would say is that, unlike with the u.s. election, we have still not been furnished with any intelligence reports from the uk authorities to suggest that there was direct russian interference using facebook in the brexit referendum. that's quite different from the u.s., where there is an intelligence report demonstrating this. >> but if we put russia to one side: obviously, the referendum was about our continued membership, or not, of the european union. so actually, countries within the eu, not the uk, probably had
more of a vested interest. are you looking at any countries, or are you just looking at the russian influence? >> we've been specifically asked to look at russia. i've not been aware of any parliamentary debate, any news stories, suggesting any countries other than russia might have been doing this. >> it's not whether we've suggested anything. i'm asking, as a company, are you looking at that? >> no, we are not. >> you're not. there was very clearly misleading information on facebook during that referendum, because anecdotally, every time you knocked on a door and somebody would tell you something, you'd say, where's that information come from? and it would be absolute nonsense they were telling you. they would say, facebook. that was just, you know, every time. so, what do you think, as a company, you can do to stop that sort of proliferation of false information that was getting shared and reshared and i think people were buying into it
because the people who were sharing it were people they knew and trusted -- but where the source of it was coming from, goodness only knows. what, as a company, can you do to stop that kind of proliferation of misinformation? >> thank you for the question. i want to be clear that we do not accept that sort of proliferation of false content on facebook. at the same time, i want to make sure that we are distinguishing between sorts of content like extremist content and bullying content -- where there is a bright line of leave it up or take it down -- versus fake news. when we hear the term fake news, as you point out, many people will share stories of things they've seen online. those range all the way from the financially motivated spammers linking off-site to ad farms -- which is the most common type of false news that we see on the platform -- down the line to the sort of sensationalist
headlines where the underlying story may be based in truth but is perhaps given a spin, or uses certain words to get people to click. so we can't have one policy that addresses all of that; we have to have a more nuanced approach, and since this has become a topic of interest and concern, especially over the course of the past year and a half or so, we have developed a four-prong approach, which i'm happy to walk through. and if we think about why it doesn't make sense to have just one approach: you can take this to the point where we say, what if we had a policy on facebook that required people to only post what they knew to be true and accurate? of course, that would be unenforceable -- we wouldn't know whether something an individual posts about his or her observations in daily life is true or false -- but it would also inhibit the type of speech we all engage in on a daily basis, including making predictions, speculating, or
sharing opinions on things. so we have to tread carefully. the first thing that we do is remove false accounts and known bad actors; when i talked about that spectrum, if we do that well at the far end -- and we're not perfect at this -- it takes care of a lot of that content. the next thing that we try to do is disrupt the financial incentives for these sorts of actors to come to facebook. i mentioned earlier that they might post links that appear to be exciting articles, click bait articles, that would take you off site to an ad farm. we're getting better at detecting those ad farms, and at detecting things like publishers inserting click bait into their headlines, or something that looks like a video that plays where in fact the play button is a ruse to get people to click and be taken to an ad farm. our systems are detecting that and removing it. the next thing we're trying to do is prioritize the visibility of content that is trustworthy, and specifically deemed trustworthy by our community. that's something we're testing right now, but we're very interested in it. and we reduce the visibility of content where we have reason to suspect it is unreliable. people can report to us when they believe news to be fake, and we are using a system of fact checkers, a system that is growing. if we have that sort of indication that news is fake, then we reduce its visibility by up to 80% in people's news feed.
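[editor's note: the distribution penalty described here can be modeled very simply, as below. the base-score model is invented; the 'up to 80%' figure is the one given in the testimony above.]

```python
# illustrative sketch: stories rated false by fact checkers keep
# circulating but reach far fewer feeds. an 80% visibility reduction
# is modeled as keeping 20% of the story's ranking score.

def feed_score(base_score: float, flagged_false: bool) -> float:
    return base_score * 0.2 if flagged_false else base_score

stories = [
    ("local council approves budget", 0.6, False),
    ("moon landing was staged, experts say", 0.9, True),
]
# the flagged story is demoted below the unflagged one despite a
# higher base score -- demoted, not removed
ranked = sorted(stories, key=lambda s: feed_score(s[1], s[2]), reverse=True)
for title, base, flagged in ranked:
    print(f"{title}: {feed_score(base, flagged):.2f}")
```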
and then the final thing that we're doing -- and i really want to underscore this, because i think it's so important long-term for all of us -- is we're trying to improve the ability of the broader community, meaning not just people using facebook but journalists, policymakers, educators, and parents, to fight false news by recognizing it, distinguishing among news sources, and being able to make those responsible choices.
we're doing that in the uk. we've been working with young people -- we have digital ambassadors in schools talking about, among other things, how to recognize false news. we did this in the run-up to the uk election, where we ran full fact's top ten tips for how to spot fake news, and this is something we did not just on facebook: we actually went out in traditional media and published these top ten tips to help people make responsible choices. it's also something that we've been doing over the past year with the news literacy trust in the uk, where we're trying to research and understand the best ways to help people recognize false news and disrupt it. >> so, as a company -- this misinformation that is on your platform; i mean, i could take you to half a dozen sites covering the area i live in that have absolute nonsense on there, and it gets, you know, reshared and everything. do you think you've got any responsibility? because that's influencing people's voting
behavior, and therefore influencing elections, and, you know, i would say having a very negative effect on the democratic process, which is between people of political parties who stand on their platforms. do you think you've got any responsibility to try and sort this mess out? >> it's very important to us. i mentioned the broader initiatives, but i want to emphasize that we are engaging with the community and trying to learn more about what they're seeing and the way this news manifests itself. we are also exploring a number of different options, and i've mentioned a few of them. in addition to the article context icon that you can click on, we're also testing a way of giving people information about related articles. so, you're on facebook, and you see a news article about a particular topic. underneath that, you will see a spectrum of related articles on the same topic, so that you know whether the article you're seeing fits with mainstream sources or it doesn't.
we're also looking at ways of incorporating brand logos, so that if content is surfaced to you on facebook, you know exactly who it's coming from and whether or not you recognize or trust that news source. so this is not an area where we're done. this is definitely an area where we are investing, learning, understanding, and testing different options. >> thank you. >> i'd just like to return to a couple of points with simon milner. just for the record, my complaint about the analysis that facebook did was that all facebook had done, when looking at the brexit referendum in the uk, was look at the accounts that were identified by the u.s. senate investigation and see if those accounts were active in the uk during the brexit referendum, and nothing else. >> i wasn't meaning to dismiss that. you were very clear about your assessment, and your assessment, and the views of other parliamentarians, were a significant factor in why we are
now doing another investigation. >> yeah. and the view of the british government too. you also mentioned that the work that had been done in america identifying those accounts was based on intelligence that facebook was given. >> i said that there was intelligence about russian interference and attempts to interfere in the u.s. presidential election, and that was a factor in the work that was done. i didn't say that the accounts were -- but i also -- >> sorry, what do you mean by intelligence? >> there was an intelligence report produced by the u.s. authorities about attempted russian interference in the u.s. election. we have not had a similar report produced by uk intelligence about attempts at russian interference in the brexit vote. >> when i met yesterday with senator warner and senator burr, they were slightly surprised at that presentation of events, because as far as they were concerned, there was no presentation of intelligence by the american government, or by them, to facebook seeking to identify accounts that had been
problematic; the accounts that were identified were simply a response by facebook to pressure it was receiving from the senate, which led it to do really the bare minimum -- look for ruble payments the site had received and work from that. this wasn't based on a dossier of intelligence received externally. and i felt that facebook, in response to a request from parliament or from the government, should conduct its own research and not rely on other people giving it intelligence. ultimately, this is your system and your platform. >> i was not suggesting that this had a bearing on our ability to look properly at our systems -- that's exactly what we're doing. i was just explaining the wider context of the uk situation compared to the u.s. situation. >> but i think you were insinuating that there was a lack of intelligence in the uk that existed in america, and that the absence of intelligence reports in the uk meant that work hadn't already been done. but in america, it was pressure from congress which led to what
they would, i think, see as the bare minimum having been done by the company. i'm pleased that facebook is prepared to initiate that research in the uk, but i felt the exchange we had earlier didn't give the clearest view of what had happened. >> more of the same. you recently announced there would be 1,000 extra people to vet political advertisements. what problem had you identified which led you to make that decision? >> thank you for the question. when we looked at the advertisements that we identified in the wake of the u.s. election, one of the questions we asked ourselves was, are we doing enough to identify when ads might be coming from bad actors. and there are a couple of things we asked ourselves. one is, are our policies in the right place on political ads -- and we've actually decided to tighten those policies overall
for anybody who might run a political ad, to make sure that we're not allowing ads that, for instance, inadvertently contain hate speech. the second thing that we asked ourselves was -- >> this was in response to some evidence, presumably? >> this was in response to us taking a broad look, in the wake of the u.s. inquiry, at our advertising systems and saying, how do we prepare going forward for what might happen during an election -- so not necessarily because of specific ads that we saw in the u.s.-related investigation. but the second thing we asked ourselves was, is our review process holistic enough. meaning, we don't want one reviewer looking at the content of the ad and another reviewer looking at how the ad is being targeted. we want to make sure we have one source of information for understanding who's behind an ad, how that ad is being run, what's on the face of it, and who it's targeting. so we've made some structural changes to our review, and part of that requires investing more in our reviewers, and that's why
we're adding these additional people. >> you mentioned earlier, in response to an earlier question, that either there was a lack of intelligence or no intelligence had been passed on to you about the possibility that uk elections and referendums may have been affected by political advertising from sources that had mischief in mind, or from other countries. presumably, therefore, there was also no intelligence that it was not happening. you just don't know. >> and i'm sorry -- >> mr. milner's point was rather a shrug of the shoulders: there was no evidence passed to us that anything went wrong in the eu referendum. there was presumably no evidence either way, so do you actually know whether there was a problem or not? >> just to be clear, also, when the electoral commission approached us to say, we'd like you to look into this, we
specifically said to them: have you got any examples? has any member of the public, any parliamentarian, or anybody from any campaign highlighted to you a page or an ad seen during that election which they felt was fishy -- which didn't feel like it was coming from one of the official campaigns, but from somewhere they just didn't recognize? and they were very clear to us that they'd had no complaints. so i'm not suggesting that therefore there is nothing; indeed, until we've completed this investigation, we won't know. but what we haven't had is information that enables us to target a particular page or a particular phenomenon from another source. so, to his point: that doesn't mean we're not looking very thoroughly. we absolutely are. >> one of the pitches that your company makes to us as parliamentarians in the uk is the effectiveness of targeted advertising -- effectiveness and cost effectiveness. so, it clearly works, because
you're encouraging us to do it most of the time. but it is quite subliminal. it enables me to target people over the age of 65 with an interest in fishing, for example, in my own situation. that's quite a subliminal method of political advertising. are you happy that, in pursuing that, you are within electoral guidelines, particularly in relation to spending limits, which are obviously quite rigid in the uk, and that you haven't -- >> i'd better take that. we have a very good relationship with the electoral commission, by the way. this is not the first time we've met with them, and we've worked with them on helping to encourage people to register to vote, reminding people that it's election day, et cetera. so we have a good relationship with them, and they are, of course, the ones who are principally responsible for determining how much money people have spent and where the money came from. that's not something that we can
necessarily see. but we absolutely agree with you that there is an issue around the transparency of political advertising: can you see what your opponent in your constituency is saying to voters, and can you respond to it, if the advertising takes place on facebook? that's one of the reasons why we are now rolling out a system of transparency around political advertising, such that in due course -- at the next general election in the uk, for instance -- you will be able to see every ad that's being run, both by the main campaign pages and by all candidates. if you want to see what ads they are running on facebook, you can see the ads. so we are going to introduce a radical new level of transparency in elections that's never been seen before. we didn't have that last time, and we recognize it's something that would be very valuable for people, to exactly address the issue you're concerned about.
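[editor's note: a minimal sketch of the transparency system mr. milner describes -- every ad a page runs becomes publicly inspectable, even by people who were never targeted with it. the storage model and function names are assumptions, not facebook's actual implementation.]

```python
# illustrative sketch: a public, per-page archive of running ads.

from collections import defaultdict

ads_by_page: dict = defaultdict(list)

def run_ad(page: str, ad_text: str) -> None:
    # running an ad automatically makes it publicly visible under the page
    ads_by_page[page].append(ad_text)

def ads_for_page(page: str) -> list:
    # a rival candidate (or anyone) can inspect what a page is running,
    # including ads they were never in the target audience for
    return list(ads_by_page[page])

run_ad("candidate a", "vote for lower taxes")
run_ad("candidate a", "pensioners: we will protect fishing quotas")
print(ads_for_page("candidate a"))
```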
>> you would have seen this week that the pm said that online platforms are clearly no longer just passive hosts of the opinions of others, and that she believes it is right to look at the liability of social media companies for illegal content on their sites. this reflects comments made in other parts of the world as well. do you read that as meaning that the age of unregulated social media is coming to an end? >> i think there are a number of things one reads into it, but certainly we don't think of ourselves as unregulated social media. indeed, that's why monika and her team are responsible for an incredibly extensive set of regulations that she can talk to. >> but they're self-imposed. >> they are, but i can give you an example of how they fit within a broader structure. >> well, that's not my point. i don't want to know that. we're talking about regulation which is accountable, which is democratic, and which is transparent.
>> well, it's certainly transparent, because all our rules are public. it's accountable in that i'm often here, and before other parliamentarians, as are my colleagues, to answer your questions on it. it's accountable in that we let people know, when they complain about content, what decision we have made and why, and if your content is removed, why we have done it. there are multiple layers of accountability. there isn't a single body, for any one country or globally, that applies these rules to facebook from the outside; it is very hard to see how that would work. there are many countries which have applied laws and rules which we take account of, and we ensure that people do not break those rules on facebook. >> that strikes me as a pre-emptive way of opposing anything that might resemble state regulation.
do you fear regulation, if it did come forward, which sought to apply light-touch regulation to your business in the light of overwhelming evidence? would you cooperate, or would you resist? >> we would certainly want to be part of the dialogue. we do from time to time see legislation that results in some unintended consequences that aren't good for anybody. our incentives are generally aligned with those of government. i was in the u.s. government as a federal criminal prosecutor for more than a decade, and when i came to facebook, it was right in step: all of the criminal behavior and other things we try to find and remove from our service were very much aligned with the incentives of policymakers. like simon said, we do have a
process for complying with government regulations. that would involve: if a government tells us that something on our service is illegal, the first thing we do is see if it violates our community standards. if it is terror propaganda, that is something we would remove globally. if it doesn't -- let's say it is a law about something that doesn't violate facebook's standards -- we would look at it and the legal process, and restrict the content in that country out of respect for the local law. so we do have a system in effect. i would note that sometimes regulation can take us to a place -- and you have seen some of the commentary about the law in germany -- where there will be broader societal concerns about content we are removing and whether that line is in the right place. that is something we would want to be part of the dialogue on. >> thank you. rebecca pow.
>> i just wanted to put it to you, ms. bickert: isn't it the clever part of facebook that you get people to sign away their data and their rights to their data, and you find out everything you can about them? >> no. not only do we not sell people's data -- >> i'm not saying you sell it. you get them to give it to you -- they don't necessarily know they are giving you data. you are harvesting data every time you use facebook. >> no, i would not characterize it that way. we are very clear in our data use policy about how we do use data, and we allow people to see any information that we have. if you go to facebook, literally, you can download your information -- you can go to 'download your information' and you will see everything that facebook has.
we're very transparent about how this works. >> isn't the way you work that you gather all this information -- whether they are playing a game or watching a video -- work out what they are doing, build up a profile, and then come back at them with information they might want, or that sort of thing? isn't that the way you work? >> the way that targeted advertising works, which we do allow, is that advertisers will say, i want to target this particular audience -- for instance, people who have liked their page -- and we will provide an audience. we don't provide the people's data. we provide an audience of people who fit that targeting criteria. >> you are gathering a massive amount of data on the whole of society. isn't this a massive surveillance operation? >> no. this is a system where people can come and communicate with
one another, and advertisers can target people based on the interests they have expressed and the way they have engaged. >> there is an article in the "times" today in the uk, and the headline is, "how to win 2 billion friends and destroy civilization." this is by a very well-known journalist. he is suggesting that, whether facebook likes it or not, the position you have got yourselves into is this: you hold so much data about people that you are now very, very powerful, and you are completely unregulated. >> we are definitely regulated, in many different ways. >> by yourselves? >> no, no, by laws, and simon can speak to the uk. >> in respect of data, we are fundamentally regulated by european data protection law. every single person on facebook in the uk is covered by that law, and we are
absolutely accountable to it. it is completely wrong to suggest that in how we handle people's data, we are unregulated. >> to pick up on the "times" article, it focuses on channel 4, the biggest news outlet using facebook to post videos. it has massive hits -- 2 billion viewers. you have placed advertisements around those videos because you know they are going to get loads and loads of hits. i believe channel 4 asked if they might share in the data of who was watching, so they could get some information for themselves about the viewing audience. will you share that with them? >> that is not how our ads work. we do not place ads against specific pieces of content. >> that's not what they told me. >> it doesn't work that way. the way it works is, if you are an advertiser, you create a target audience, and then people who meet that audience will see an advertisement -- for instance, in their news feed. the ad is not tied to a specific piece of content.
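[editor's note: a minimal sketch of the point being made here -- the advertiser submits criteria and receives an audience of impressions; the matching happens inside the platform, and user records are never handed over. the field names and criteria are invented, not facebook's targeting api.]

```python
# illustrative sketch: match an advertiser's criteria against user
# profiles internally; only opaque ids leave the matching function --
# enough to deliver the ad, not the underlying profile data.

from dataclasses import dataclass, field

@dataclass
class User:
    user_id: int
    age: int
    interests: set = field(default_factory=set)

def match_audience(users: list, min_age: int, interest: str) -> list:
    return [u.user_id for u in users
            if u.age >= min_age and interest in u.interests]

users = [
    User(1, 67, {"fishing", "gardening"}),
    User(2, 41, {"football"}),
]
# e.g. the targeting example from earlier in the hearing:
# over-65s with an interest in fishing
print(match_audience(users, min_age=65, interest="fishing"))  # [1]
```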
>> just to be clear on that: there is ad space around the content. people are buying an audience rather than channel 4 content; there is advertising adjacent to the channel 4 content. that's the point rebecca was making. >> we don't run ads on pages. in somebody's news feed, if somebody is in the audience for an advertiser, for the criteria the advertiser has submitted, then they might see an ad. >> i believe that you actually asked channel 4 if they could put some of your adverts in the middle of their videos, and they refused. >> you may be speaking about audience network, where there are ads off of facebook. >> it may be. perhaps we could follow up on that.
>> sometimes these videos are taken by other users, posted on, and changed and altered. i wondered where you stood on copyright issues. >> we have policies against anybody infringing upon others' rights, and we have a notice and takedown procedure. >> there has been a rather unpleasant incident of child pornography getting into the channel 4 network through the facebook messaging service, and it actually spread far and wide; thousands of people have seen it. how did that happen? how did that get through your systems? >> the way we deal with any child sexual abuse imagery is that we have systems in place to automatically identify such content. those systems aren't perfect. they primarily work on matching known images. so, for instance, we work with the national center for missing and exploited children, and when we become aware of an image of child pornography, we reduce it to a hash -- basically, a digital fingerprint -- and we can stop it from being uploaded to facebook in the future. we don't necessarily have the technical means to stop new images -- images of child exploitation that we have not seen before -- from being uploaded. as soon as we become aware of such an image, whether from somebody reporting it to us or law enforcement sending it to us, we will take steps to stop its dissemination and report it immediately to the proper authorities so they can take action.
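[editor's note: the fingerprint-matching mechanism described here can be sketched as below. real systems use perceptual hashes that survive re-encoding (for example photodna); plain sha-256 is used here only to keep the sketch self-contained, and it matches only byte-identical files.]

```python
# illustrative sketch: block re-uploads of known images by fingerprint.

import hashlib

known_bad_hashes: set = set()  # e.g. populated from industry hash lists

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def register_known_image(image_bytes: bytes) -> None:
    # once an image is identified, its hash blocks all future uploads
    known_bad_hashes.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    # note the stated limitation: a never-before-seen image has no hash
    # on file, so it passes this check and must be caught by reports
    return fingerprint(image_bytes) not in known_bad_hashes

register_known_image(b"<bytes of a known image>")
print(allow_upload(b"<bytes of a known image>"))  # False: blocked
print(allow_upload(b"<bytes of a new image>"))    # True: not yet known
```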
>> it actually took you 36 hours to take this down. how did that happen? >> i can talk to that, because i spoke with channel 4 news about this incident. just to be clear, this is something where we ought to use
this opportunity -- and hopefully anybody watching this on the live stream will hear it -- to say: if you ever come across a piece of content like this, do not share it. report it to the iwf. that's what they are there for; they will ensure it gets taken off the internet. unfortunately, a lot of people shared this to condemn it, and were encouraged to do so. every person that did that broke the law in the uk, and in many other countries, by doing so. once we were made aware of it by channel 4 news, and once they told us which inboxes it was in, we were able to move very quickly to remove it, to hash it, as monika explained, and to prevent thousands more shares of that horrible material on facebook. >> doesn't this just indicate how powerful you have become, and what a pandora's box, to use my colleague's term, you have opened up? doesn't it demonstrate that it is time for regulation, and for rules setting out the roles and responsibilities of all of these
things? all of this is affecting our society, not least our children. >> i would ask you to ask the iwf about this; they say they see tiny amounts of this being shared on facebook, and have much more significant concerns about other aspects of the internet, despite our big size -- 2 billion people using facebook, 40 million in the uk. these kinds of incidents are incredibly unusual. to suggest this is a pandora's box, or something we are not in control of, is completely wrong. >> my final question is, how long do you hold the data that you collect about people? is there a limit? >> as long as they want us to. some people have been on facebook for a long time -- mr. farrelly mentioned his son has been on there for ten years -- and we have held it for as long as he wants us to. if he wants to look at data from when he first used it, he can.
>> as soon as people decide they don't want to be on facebook, we can remove that content. >> so he was a child when he signed up. how long do you hold data on children? >> it depends on whether they want us to; it is entirely up to them. we are holding data which is very precious -- photos, videos, family moments, important moments in their lives. we are custodians of that for them. it is up to them to decide when and if that data is removed from facebook. we will not make the decision for them. >> you do sound rather godlike, saying you are custodians of that data. >> just to be clear, when you say it is held for as long as they want -- for as long as they have a facebook account? >> yes. >> for as long as they have a facebook account, you hold that data. >> we look after it for them. >> if you removed a specific post from your account, we would delete that in accordance with
the relevant data deletion policies. >> mr. milner, you said something very interesting concerning election law. in 2015, in 2017, and in the 2016 referendum, facebook advertising was extremely important. in those elections, do you agree it was not possible to establish, as a candidate, where another candidate's facebook advertising was bought from? >> i don't understand what you mean. can you explain? can you give me an example? >> in my constituency, my opponents can purchase advertising from facebook in the campaign. it is unlawful for someone to pay for that advertising from outside the uk.
also, all of the information has to be recorded within my particular district. to date, and as we stand today, it is impossible for me as a candidate to check where that advertising was bought from. do you agree with that? >> is that also true for the pamphlets that are delivered? you don't know how they were paid for. >> can you answer the question? >> i don't understand what the question is. >> the question is, can you assure me that foreign donors do not pay for campaign advertisements purchased in britain today? >> i can't assure you of that, no. >> do you hold that information at facebook? >> no. in the sense of what we will
11:05 am
know is -- let me try and think about the scenario. if somebody is buying adverts to run a campaign in your constituency during the election, you can see the account that has paid for the ads. we won't know where the money has come from to go into that account. >> do you know whether the account that has paid for the ads is from outside the u.k.? >> we will have information that will enable us to know who is paying for those ads. >> you know it is illegal if someone paid from outside the u.k.? >> yes, i am aware of that. >> you are aware of that. do you prevent that from happening? >> we don't at the moment. my understanding is that this is a matter for the electoral commission to investigate. >> it is a matter for you. >> isn't it a matter for the person paying for the ad, that they have to ensure that they comply with the law? >> it is a matter for you, because you are not complying with the law either; you are facilitating an illegal act. >> i have never heard that analysis before. if you have something that you can share with us that demonstrates that, i would be interested to see it.
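the check being argued over could, in principle, look like the sketch below. it assumes the platform can see a billing country on the payment instrument behind an ad account; the field names and the rule itself are hypothetical, not a description of facebook's systems.

from dataclasses import dataclass

@dataclass
class AdPurchase:
    advertiser: str
    billing_country: str   # country of the payment instrument (assumed visible)
    target_country: str    # country whose voters the ad targets
    is_political: bool

def flag_foreign_political_ad(ad: AdPurchase) -> bool:
    # u.k. law bars paying for campaign advertising from outside the
    # u.k., so a political purchase whose money appears to come from a
    # different country than the one targeted is flagged for review
    return ad.is_political and ad.billing_country != ad.target_country

# an ad aimed at u.k. voters but paid for from abroad would be flagged
print(flag_foreign_political_ad(
    AdPurchase("example campaign", billing_country="US",
               target_country="GB", is_political=True)))  # True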
11:06 am
>> this is the problem, mr. milner. you have everything. you have all of this information. we have none of it, because you won't show it to us. >> what i have explained, mr. lucas, is that we are moving forward with a new form of political advertising transparency, which will enable you to have this information. >> so there is a problem. >> yes. i'm not suggesting -- i think it would be wrong to suggest that we have suddenly determined there is a problem on facebook. there is a wider issue of public debate, which the ico are looking at and the electoral commission is interested in, which is whether or not british election law, particularly around the transparency of spending, needs to be modernized for a different era of political advertising. what we want to do, rather than wait for that, is do what we can to
11:07 am
provide greater transparency in this area. that is not because we have been told of particular examples of campaigns in your constituency or others where somebody was using foreign money to pay for their ads. >> i'm pleased you recognize there is a problem. it is a matter for facebook, and i welcome the assurances that you gave, but how do we check that the rules you are today announcing are being complied with? >> well, in a future election, you will be able to look at the page of your opponent and see what ads they are running. you will be able to ask questions of them, if you have concerns about where the money is coming from. the electoral commission has the power to encourage them to provide information. they regularly fine people for illegal election spending.
11:08 am
they have those powers. we'll help you, but if you have a concern about that, you have to go to them to get that information. >> what about advertisements from third parties, people that aren't candidates in campaigns? do you think that i would then have access to sufficient information to check that? >> well, we are very hopeful that with our new transparency around electoral and political advertising, you will have much more information than you have ever had before about the nature of advertising that is being deployed during an election. >> imagine a bank saying: it is extraordinary that money is being laundered through this bank, but that is a matter for the person laundering the money and for the authorities to stop them from doing it; it has nothing to do with us; we are the mere platform through which this took place. that bank would be closed down and people would face prosecution. i think what you are describing
11:09 am
is basically the same thing here. you say it is up to the electoral commission to identify the person, even though the money is being paid from outside the country, and you don't detect it. the systems you have aren't picking that up at all. i think many people find that astonishing. also, may i ask you, monika bicker: is this change in the ads a consequence of the u.s. bill requiring disclosure? >> it is not a consequence. we are involved in conversations around that bill. right now, that is a bill. as simon said, we don't want to wait to see what the government wants us to do. we have been looking at ways we can get more transparent on our own. >> it is a bill, rather than an act. i think it might have been helpful if that context had been put in place. it sounded as if this had been started entirely at facebook's initiative rather than as a consequence of a public debate. >> certainly, we have been part of that discussion.
11:10 am
this was something we undertook voluntarily. after the 2016 election, we took a hard look at how our advertising system works, and part of that is making sure we are doing a better job of reviewing advertisements and looking at how we can be more transparent. we are rolling out those initiatives. >> this is trying to head off the threat of statutory regulation, not because you think it is a good idea. >> this was something that we undertook voluntarily after the election in the u.s. we looked at our systems and tried to identify where we can do better. i should say that's not just something we do in this space. we do that across all of our policies all of the time. any time we see a mistake, something that got through our system that shouldn't have, or a type of conduct that, for instance, is a new type of behavior we hadn't previously had a policy in place for, we update our policies and look to see how we can improve. >> respectfully, with regard to what mr. milner said, if the system
11:11 am
isn't picking up people from outside one country seeking to place political ads in another, it is not a question of you saying it is not going on. you are saying: it could be, but we are under no real obligation to call it out if we see it. >> mr. collins, we have not seen that in the last election or during the brexit vote, despite investigative journalism that has looked for suggestions that there are lots of such campaigns going on. >> you haven't looked, have you? that's the thing, you haven't looked. >> there is nothing to suggest this is going on. >> given the controversy, has facebook done an analysis of how profitable fake news, or its deliberate dissemination, has been to the company? is it not profitable at all, very little, very profitable or hugely
11:12 am
profitable? >> we are looking at research. that's ongoing. i can't speak to the financial aspect of it. i will say that that would not drive our decision here. we have shown a willingness to find and remove false accounts. we remove these ad farms. it is a question of whether it violates our policies; whether or not we are gaining money from it, we would take it down. >> i understand that was the answer you gave a while ago. has there been any analysis of just how profitable it has been to facebook? >> i'm sorry, i can't answer. i know we have looked to understand how false news is manifesting itself. i can't say we have looked specifically at how much money has been gained. it wouldn't be a relevant factor in our decision to remove the content. >> would you then expect us to believe that if the dissemination of fake news and false information had been a financial drain on the company,
11:13 am
your reaction and policies would have been exactly the same as they are? >> what we do think, long-term, is that because people want to come to facebook for a safe environment where they can connect with reliable information, it is against our financial interest to have that sort of content on our site. >> you think it has been against your financial interest to have that content on your site? >> we think it is bad for our community, and long-term, people don't want to be in a place where they think it is not safe or where they can't connect with reliable information. it goes again to the trust that our community has in us when they come to facebook. that's critical to our business. >> in the absence of any answer, i would guess that the propagation of fake news has been hugely profitable to facebook. the more sensational the story, the more people are driven towards it and the more advertising surrounds it. where does facebook draw the line between the pursuit of profit and social
11:14 am
responsibility? if you haven't crossed that line already, how far away do you think you are from crossing that line, and how will you know, by your own measure, when you have crossed it? >> i am very happy to speak to that, because my job at facebook is to manage our policies, and money does not enter into it. we draw a line for every bad behavior. as i explained earlier, fake news is different, because there is a spectrum of behavior. there are many different types of lines we have to draw. we draw the line under our policies. if somebody crosses that line, even if it is a facebook account that has many, many thousands of followers and is very popular, we remove it. if it is an advertisement that is set to run for a long duration and pays facebook a lot of money, it doesn't matter. we remove it. if people cross that line, their content comes down. >> you make it sound as if
11:15 am
facebook is a totally benign organization. facebook is a massive player, if not the biggest, in the dissemination of information around the world. that brings us back to why i was asking about google and youtube and your role in helping regulate that space. where do you see that role? are you happy with it? i'm not talking about self-regulation. i'm talking about a regulated framework. are you happy with the regulated framework as it currently exists? given that the debate is now happening, where would you see that debate go, to the benefit not just of facebook but of society generally? >> thank you, mr. o'hara. because of the breadth of the content that is often called fake news, i want to be very careful. there are things, just
11:16 am
like my colleague from youtube said earlier, that we would not necessarily call news at all, for the reasons i said earlier. there are things we should not be in the business of deciding are fake or true. our community would not want us to be; they wouldn't want a private company to be the arbiter of truth. there are other types of content where we know there are fake accounts intentionally trying to spread disinformation, those bad actors that are trying to send people to ad farms. there, our incentives are very much aligned: we want to remove that content. on your question about where we are with regulation, we want to be a part of that dialogue, especially because one thing that i see when we talk to policymakers about these sorts of issues is that sometimes a solution might seem great in theory, a regulatory solution. then, when we get together and talk about the practical ramifications of it, we both, we and the policymakers, will say, there are some unintended
11:17 am
consequences here. we have to be very careful. an example where that has happened recently was australia, where there is a long-standing inquiry. they recently released a report in which they said they are not going to regulate for now; they think collaboration is important. we can follow up with that. >> previously, you have been accused of actively working against the legislative framework, and particularly electoral law. >> i'm sorry, in relation to what? >> electoral law. why do you stand accused of that? does it not make you complicit in what we are now experiencing? >> we have reached out to electoral commissions since the u.s. election in 2016. we came out and publicly said, here are some of the things we are going to do to respond to the threat of false news. one of them is actively engaging with electoral commissions. that's an important part of getting this right. >> thank you.
11:18 am
>> miss bicker, my colleague said you are the largest publisher of news. you are de facto publishing news, and you design the algorithms. algorithms inherit the biases of the people that develop them. i wanted to ask you, how many developers do you employ to design these algorithms? >> what the algorithm looks at is, for the person using facebook, what you tend to interact with: your friends and
11:19 am
family. we are prioritizing friends and family. we know this will cause people to spend less time on facebook. we want people to have more meaningful interactions. those are the things that go into that. >> you are the head of global policy management for google -- facebook -- and you don't know how many developers the company employs or has working for them? >> for developers, no. >> roughly: hundreds, thousands, a dozen? >> we have many engineers at the company, and some of them work on the team and are constantly updating the algorithms. >> would you tell us afterward, privately, how many are working on them? >> we are always happy to follow up. >> your algorithms allow finely
11:20 am
grained personalized content to be directed and targeted toward specific individuals, and your advertising helps to do that. most individuals that use facebook don't realize you are doing that. they don't realize that what comes up on their facebook timeline -- >> the newsfeed. >> -- is what you are targeting toward them. there is a huge power imbalance. you are controlling it, and the person that is receiving it doesn't have any control over it. it kind of reminds me, if you will forgive the analogy, of a sort of abusive relationship where there is coercive control going on. somebody is deciding what you see, hear, read, what you have access to. can you see the parallels in that? can you see why i would be concerned about that? >> i will say people actually have a lot of control over their news feed. we are concerned that people sometimes don't understand how much control they have. we have tried to make that a little more visible for people. if you are on facebook, you can go to a section called news feed
11:21 am
preferences, or type it into the help section and select it. the reason we have a newsfeed algorithm is that if you come to facebook and have a bunch of different friends and pages that you interact with, rather than show you 200 posts, we try to prioritize based on the factors i mentioned before for relevance. however, if you want to turn that off, you can do that. you can go into your newsfeed preferences and say, i just want to see the content in reverse chronological order, and we will provide that. we try to make the news feed something that you want to see.
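the two modes just described amount to two sort orders over the same candidate posts. the sketch below makes that concrete; the relevance score is a crude stand-in invented for illustration, not facebook's actual ranking model.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int            # unix seconds
    interaction_score: float  # how much the viewer interacts with this author

def ranked_feed(posts):
    # default newsfeed: prioritize by predicted relevance, here crudely
    # approximated by interaction with the author, recency as tiebreaker
    return sorted(posts, key=lambda p: (p.interaction_score, p.timestamp),
                  reverse=True)

def chronological_feed(posts):
    # the opt-out described in the testimony: most recent first, nothing else
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)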
11:22 am
>> how can people know they can do that? if they don't know in the first place that you are controlling what is seen on the timeline, and that there are mechanisms to stop it, how do people find out about that? >> our hope is that people will take some time to explore the site and see that there are important privacy settings. you have the ability on facebook to control who sees what you post. you can friend people. you can unfriend people. you can follow and unfollow. all of these settings and the news feed preferences are included within that. they are designed to give people control over their experience. i want people to know about that. i think it makes people's experience on facebook much better if they know about that. we try to make that fairly visible. >> going back to your developers, do you know what percentage of the developers are women? >> i don't know that. i know we have issued a diversity report in the past. we can follow up with you on that. >> the diversity report that i have seen says there is a heavy male bias in terms of developers on the technology side. only 35% of facebook's total employees are women. on the technical side, only 19% are women.
11:23 am
>> i talked about inherent biases before. if you have predominantly male developers developing algorithms, there will be an inherently male bias. what is being fed across your platform is inherently male in terms of its bias. does that worry you? >> we are concerned about any type of bias, whether it is gender, racial, or other forms of bias, that could affect the way we work at our company, including working on algorithms and enforcing our policies. there are a couple of levels at which we address this. one is at the source. we have acknowledged that we want our workforce to be diverse and to reflect the community in which we work. we have initiatives ongoing right now to try to develop talent in underrepresented communities and try to recruit from those communities and get them into facebook and help them succeed.
11:24 am
>> when did those initiatives start? >> at least a couple of years ago; we can follow up with some details. >> how are you doing against the targets you set yourself? >> we are improving. we have work to do. we think that we, along with many other companies in society, are confronting these issues. it is going to take some time. we are very committed to it. >> did you set yourself targets? >> we do have more information we could provide. i'm sorry to say i'm not the expert on this, but i do want to say this is important to the company. one thing that employees go through at facebook -- another way we are tackling the issue -- is unconscious bias training. everybody has these biases in their mind, and we have training that is designed to help people recognize that and adjust for it. finally, when it comes to the development of our algorithms and the enforcement of our policies, we have checks in place. with the way we enforce our policies, we
11:25 am
try to make sure they are sufficiently granular and they don't leave room for somebody to interpret them one way or another based on biases. are we perfect on this? no. we have work to do. it is something we care very deeply about. >> my final question, which is on another issue. what i have heard this morning, from you and the panel before, is that you recognize there is a lot of work to be done, and you talk about prioritization to make your platform safer and better. why not make it available so a commission could independently help and provide solutions? >> we are working across industry. we do this fairly commonly in safety areas. we are doing that as well when we think about tackling fake news and elections integrity.
11:26 am
as far as research organizations, we are doing that in the u.k. with the media trust. we will continue to find ways of partnering with these organizations. we have to be careful: we have data privacy regulation that limits the data we can share. we want to make sure we are always complying with that, consistent with our terms and the law. it is something that we do. >> thank you. >> i would like to ask you about facebook's relationship with cambridge analytica. how would you characterize that relationship? >> we have some colleagues meeting with the ico. they are undertaking an inquiry into the issue of political polling, and one of the questions they have asked us is about the relationship with cambridge analytica. >> let's see how we go with the
11:27 am
questions and see if you are able to answer them. cambridge analytica: have you ever passed any information to them or their associated companies? >> no. >> but they do hold a large chunk of facebook's user data, don't they? >> no. it will not be facebook user data. it may be data about people that are on facebook, but data they have gathered themselves. it is not data we have provided. >> how would they gather that? >> you should ask them. >> we may well do. is it the case that third-party users, app users or whatever we might call them, can ask for a facebook user's data and pull that data off facebook and bank it?
11:28 am
>> yes, that's part of the platform policies we have. >> is it also the case that when i, for example, agree to give an app my data, it also takes my friends' data as well? >> no. we have policies, called our platform policies or our developer policies, that govern how these applications can use facebook data. the way it works, essentially, is they have to tell each facebook user who is going to use their app: here is the data we are requesting from you. an example might be, we are requesting your hometown and we are requesting your e-mail address. they have to give you the possibility to opt out of any non-necessary data. you can opt out. you see the elements of data they require to run their app
11:29 am
and which they don't. the app does not take data beyond yours; your friends' personal data does not go with you. once an app has the data that the user has agreed to give, it has certain responsibilities under our platform policies. if a developer were, for instance, to turn around and give that data to some third party, or sell it, or engage in any of that sort of activity, that would violate our policies. if we found out, we would enforce upon it. >> how might you find out about it? >> we can follow up on that privately. there are things we do to try to discover that behavior. if somebody raised a flag, we would investigate. >> when was the last time such an incident happened? >> i don't have an answer. i'm sorry. >> how often does such an incident happen that you have to chase down? >> that is not common behavior. we are very transparent with developers about what the expectations are if they are going to use our platform.
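the field-level grant just described can be sketched as follows; "hometown" and "email" are the examples from the testimony, and everything else here is a guess at how such a check might be structured. the one firm point it encodes is that friends' data is never in scope.

# fields the app declares when it asks for access
required_fields = {"email"}      # needed for the app to function
optional_fields = {"hometown"}   # the user may opt out of these

def grant_data(profile: dict, opt_outs: set) -> dict:
    # hand over required fields plus any optional field the user kept;
    # the friends list is simply never eligible for sharing
    allowed = required_fields | (optional_fields - opt_outs)
    return {k: v for k, v in profile.items() if k in allowed}

profile = {"email": "user@example.com", "hometown": "cardiff",
           "friends": ["alice", "bob"]}
print(grant_data(profile, opt_outs={"hometown"}))
# {'email': 'user@example.com'} -- hometown struck, friends never shared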
11:30 am
we made clear we expect them to comply, and we take steps to make sure they do comply. >> has facebook ever provided advisers to assist with political campaigns, for instance the referendum campaign in the u.k., embedding your advisers to advise on how they can use their microtargeting? >> we have a team that sits with campaigns and advises them on how to use our products. they are focused on products that are free: how to set up a page, how to handle your inbox and manage your affairs. they provide guidance on how to keep safe on facebook, how to deal with abusive behavior, which is a very important concern to all of us.
11:31 am
we have a separate team that is involved in advertising. if people want to buy advertising, they will help them do that. those teams will work with campaigns during elections, but they are not embedded in campaigns. >> during the referendum campaign, did you provide that kind of advertising service to either the remain side or the leave campaigns? >> to both, yes. >> you did so as well with the scottish referendum? >> yes, to both. >> now, you had a success story on your government and politics page recently, about a scottish campaign. you replaced that scottish story with a less controversial one from finland? >> i'm aware of this issue. there is a news story written about it as if it is big news. those kinds of case studies that
11:32 am
we put out, we are often refreshing, and really i think this is a genuine case of somebody making a mountain out of a molehill. >> the whole thing is a bit too hot all of a sudden? >> absolutely not. we are very proud of the work our teams do here to help campaigns that want to make use of our products to reach people with their campaign and message. we think that's a fundamental part of how democracy works. >> let's look at the ads that you provided, then. if we look at the general election campaign of 2017, would you be able to identify all of the adverts and, if necessary, provide them -- all that were used to influence the course or nature of the campaign, including the dark ads that were specifically targeted? >> as i've explained, we are moving toward a system that will enable that. we are hoping that as part of it, we will be able to provide not just what ads are being
11:33 am
shown right now but also an archive of political advertising. i'm afraid i am not able to tell you how far back that will go. we think what's particularly important is that we focus on moments of democracy which are happening now and which are to come. one cannot revisit previous moments of democracy and run them again. what we can do is focus our efforts on upcoming elections and referenda and ensure we can help people understand what's going on. >> you identified, to mr. lucas, that there has been a problem there. >> i think there is an understandable concern that campaigns and candidates have that they can't respond to or see the ads that their opponents are running. we are introducing new levels of transparency to enable that.
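the transparency system described would behave, in outline, like a public archive keyed by the page that ran each ad; the record fields below are guesses at the minimum such an archive would hold, not facebook's published design.

from dataclasses import dataclass, field

@dataclass
class ArchivedAd:
    page: str          # the page that paid for and ran the ad
    text: str
    first_shown: str   # iso dates
    last_shown: str

@dataclass
class AdArchive:
    ads: list = field(default_factory=list)

    def publish(self, ad: ArchivedAd) -> None:
        self.ads.append(ad)  # entries are added and kept

    def ads_by_page(self, page: str):
        # what a candidate could do under the proposal: list every ad
        # an opponent's page is running, whether or not the candidate
        # was in the targeted audience
        return [a for a in self.ads if a.page == page]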
11:34 am
>> let's go back to the referendum, and to cambridge analytica. do you hold information on how much money was spent on facebook during the eu referendum campaign by them or their subsidiaries? >> i am sure we will have some information on that with regard to the campaigns and how much the different campaigns spent. all those campaigns have provided that information to the electoral commission, and that information is accessible, i understand. i also understand from our engagement with the electoral commission that they will be producing a report quite soon on this issue. >> my understanding is that that is not quite the case, on the basis that some of the funding, the allegation is, was channeled through parties in northern ireland, where the rules on political reporting are different. therefore, that information isn't forthcoming, but, of course, you might be able to provide it. >> as i say, we are cooperating with the electoral commission and the ico, who are running separate inquiries. we are helping them as best we can. i do think these are matters for the electoral commission rather
11:35 am
than for us. >> well, i'll refer you back to the chair's earlier statements on that. do you hold information regarding the content of online advertisements referring to the eu referendum, who they were sent to and who paid for them? could you identify that? >> that's one of the things we are doing with respect to the investigation i referred to earlier, which is particularly focused on whether there was russian-backed advertising. it will be reported before the end of the month. >> dr. martin moore of king's college london, looking at automated campaigning on twitter and facebook, suggested that in the united states presidential election in 2016 there would have to be about 50,000 bot
11:36 am
accounts. would that be a figure that either of you would concur with, in order to send out the number of postings that went out? >> i can't confirm that number. >> one final question. has facebook ever been successfully hacked, miss bicker? >> we have seen individual accounts belonging to people be compromised. that's usually because somebody might give away their password. >> no; has facebook central, your back-end data store of user data, ever been compromised? >> not to my knowledge, and i am not necessarily a person who would have that knowledge. >> would it be easy -- not easy. would it be possible to hack in and alter, albeit for a matter of moments until your engineers discovered it, the underlying algorithms that are used to manage facebook? >> what i can tell you is that we have a dedicated security team that works to stop any unauthorized access to facebook. they are identifying the best
11:37 am
ways to do that every day. >> mr. matheson, we can send you a copy of the report provided by our information security team on this very issue, something called information operations, which are attempts to disrupt and hack systems. we can send that to you. >> thank you. >> i've just got a couple of quick statistical questions. as of the end of last year, you had 2.2 billion monthly users. i think i was probably one of them. i had to check up on someone who had said a few things about me. i must say, i have cut down on facebook, largely because of the brexit issue, which is very topical. i was getting inundated by people at various points on the lunacy scale, from the abusive to the frankly deranged, which i
11:38 am
can cope with. i can just not read them. i have a fairly thick skin. what i can't cope with is people that do read them, that are friends of mine, and say, do you realize what such and such a person is saying about you? at some point, you have to stop. if i have to take that abuse, i wonder, with its addictive qualities, what effect facebook's scale and success really has on the mental health of children, quite frankly. of the 2.2 billion users, just to inform your efforts, what's your best guesstimate within the organization as to the percentage of those that are fake, nongenuine accounts? >> i believe that our -- correct me if i'm wrong -- i believe our estimates take that into account. in other words, we adjust for accounts that we think may be fake that we have not yet identified. we can certainly follow up on that. i do want to address --
11:39 am
>> let me just ask this. what's the scale of the adjustment, then? >> i don't know. >> could you follow up? >> absolutely. i do want to address what we do around harassment and hate speech, because, rightfully so, it is a real area of concern for policymakers and others that are using facebook. here is our approach. for private individuals, we don't allow bullying of any sort. we would remove any content that we become aware of that is intentionally bullying others. for public individuals, and this would include elected officials, we would remove any sort of hate speech or direct threats. we do allow robust discussion and debate about public figures, and we recognize exactly what you mentioned, that sometimes people will say things that are off topic and irrelevant. one of the things we try to do is provide people the control so that, if it is on your
11:40 am
page, you can control those sorts of comments and make sure that you can use your page the way you want to use it. >> i'm aware of that. it is just simpler for me not to use it. the second statistical question: of the total number of posts -- i haven't got a figure for that -- or however you describe them on facebook, for your internal efforts, what percentage do you estimate are made by bot accounts? >> i don't have an answer. >> i would say almost none. that's not really an issue on facebook. that's more a matter for some other platforms you are about to hear from. >> my final statistical question. i see by the rankings that the u.k. is number ten in terms of the estimates of your users, and ahead of the united states is india, with 250 million. could i ask you what efforts you
11:41 am
make in your biggest user markets to make sure that your platform, in the world's biggest democracy, is not used for electoral misinformation or social unrest? >> we take a global approach. when i spoke earlier about removing fake accounts, that's something we are doing around the world. outreach with industry and with government commissions, we do that around the world. something we haven't talked a lot about, our work to promote good journalism, is the facebook journalism project. we have worked with more than 2,600 publishers to identify the ways that reliable news can best succeed on facebook. this includes real product fixes, for instance, making it easier for news media organizations to attract
11:42 am
subscribers or to have advertisements that work within their content. this is something that we do. >> possibly we can follow up about your regional approaches in different countries. >> absolutely. >> we have a session later on with some of the news media organizations. i have a couple of clarification questions i want to ask before we finish. going back to this question on facebook developers, mr. milner, you said it wasn't true that developers had facebook user data, but they had data about people on facebook. what is the difference? >> i was initially assuming that you were asking, have you provided data to some other outside entity. we don't provide your data without your permission. the system miss bicker was talking about does allow people to say, i am prepared to hand over some data in order to get a service. >> i wanted to be clear what you
11:43 am
meant. facebook developers gather data because people have interacted with the tools that are created on the platform. if a facebook user decides to leave facebook, does the developer keep the data they have gathered? >> no. that's in our policies. they have to delete the data once the person is no longer using the service. i would note you don't have to leave facebook to make that decision. if you have interacted with an app on facebook and you decide, i don't want to do that anymore, you can go into your settings on facebook and turn that off and reject it. then, they have to delete your data. >> i want to ask this question. the question of dark ads came up: ads that are visible to the people receiving them but not to anyone else. these changes you put in place whereby you can see the advertisements, would that
11:44 am
include dark ads? >> any time you see an ad, you can see the page behind that ad and the other ads they are running. if you want to see what ads a particular page runs, even if you are not in the audience, you would be able to see those ads. >> just on audiences, this is a u.k.-specific story, but i would be interested in facebook policy as a whole. the sunday times did an investigation in which they bought advertising space through facebook for the 20 to 34-year-old audience. they were told the reach of that audience was 17 million people. according to u.k. census figures, there are only 12.3 million people of that age group in the country, so there was a disparity of 4.7 million people. is facebook concerned that the audiences they are selling don't tally with the audiences that are actually there in the country? >> i don't have an answer for you right now. i want to make sure we get to the bottom of that. we will follow up with you on that. >> it is quite a big discrepancy on a number like that.
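the arithmetic behind the sunday times figures is a plain sanity check: a claimed reach for an age band cannot credibly exceed the number of people of that age in the country.

claimed_reach = 17_000_000      # facebook's stated u.k. 20-34 audience
census_population = 12_300_000  # people aged 20-34 in the u.k.

print(claimed_reach - census_population)            # 4700000 unexplained
print(round(claimed_reach / census_population, 2))  # 1.38x the population

# the excess could be fake accounts, duplicate accounts, or people
# whose ages are wrongly ascribed -- the question put to the witness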
11:45 am
people in the advertising world would say that if that discrepancy is allowed to continue, it is fraud, a misselling of an audience. a lot could be fake accounts or people wrongly ascribed to that category. >> we want to make sure we are being honest and giving the right numbers to advertisers. i will follow up with you on what happened there. >> thank you very much. thank you, monika bicker, for your evidence. >> thank you.
11:46 am
11:47 am
we are running slightly late; hopefully, we won't run any later than we are. if i could just start with some of my questions. you will be aware that the committee has made repeated requests to twitter, linked to the referendum, asking it not to limit itself to
11:48 am
focusing on research already done but to conduct its own research to look at where accounts that have been politically active during british election campaigns are based. are you able to give us an update as to whether you will be able to supply the committee with that information? >> i would like to defer that question to nick, my colleague from the u.k., who does have an update to give you. >> thank you. just to clarify: twitter, not facebook. >> sorry. >> before my facebook colleagues jump on me from behind. as i noted in my letters previously, we have been doing further investigations. i would like to read the update; i don't want to misread it. >> can i just suggest, because we are quite short on time: is it a short statement? >> two paragraphs. we can update the committee on the broader investigation we noted
11:49 am
in previous letters, which has identified a very small number of internet research agency-linked accounts. we had 49 accounts that were active, which represents less than 0.005% of the total accounts that tweeted about the referendum. they collectively posted 942 tweets, representing a tiny fraction of the total tweets posted. these tweets cumulatively were retweeted 461 times and liked 637 times. on average, this represents fewer than ten retweets per account and about 13 likes per account during the campaign, with most receiving two or fewer likes and retweets. these are very low levels of engagement. >> what's the audience reach for those accounts? >> less than two retweets. >> what's the audience for that? we have a set number of accounts.
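the per-account averages follow directly from the totals just read into the record, which is how the "fewer than ten retweets and about 13 likes" figures are obtained:

accounts = 49
tweets, retweets, likes = 942, 461, 637

print(tweets / accounts)    # ~19.2 tweets per account over the campaign
print(retweets / accounts)  # ~9.4, i.e. fewer than ten retweets per account
print(likes / accounts)     # 13.0 likes per account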
11:50 am
>> the engagement metrics that we have been using in this investigation help us understand how active they have been on the platform. as i have highlighted, these are very low levels of engagement, which will directly bear on the viewability of those accounts. if there is low engagement, that would suggest there are very low views. >> what i would like you to tell us -- as some university academics can work out -- is what the audience is. the number of accounts that are active is one piece of information. the reach of that information is something else. what they were sharing would also be of interest: was it content they created, or was it sharing links to other sources of information? that might be useful to know. thank you for giving us that update. i think we've clearly got other things we want to follow up on and would need more information about. we'd be interested to know whether you've restricted your searches to certain already
11:51 am
identified accounts or whether you've done a trawl across the whole platform for accounts registered in russia that were active during the campaign. we know with twitter, there was evidence of large numbers of suspected bot accounts then being taken down after the referendum was over. that's why we're persistently asking for this information. >> i can touch briefly on the point there. we were asked to look at the city university research. one of the challenges we have is with how these accounts were identified by research. twitter is an open platform. our api can be used by universities and academics around the world, and it is. unfortunately, that doesn't give you the full picture. in some cases, people have identified accounts as suspected bots which have later been identified as real people. one of the things we do is work closely with academics, asking people to bear in mind, when they make assertions about the level of activity on twitter, that
11:52 am
there may be cases where those assertions are based on very active twitter users who are real people and not bots. one of the dangers of using activity as a metric to identify bots is that you may misidentify prolific tweeters who are human. so it is a benefit that researchers can use our platform, but it is a challenge for us that researchers can't see the defensive mechanisms and the user data we can. >> there's been plenty of analysis done looking at the characteristics of suspicious bot activity. twitter also knows where accounts are being operated from. therefore, you could easily detect the creation of accounts operated in a different country that suddenly start tweeting about something happening in another location. that sort of activity is easy to spot on the site if you're looking for it.
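the danger being described can be made concrete with a toy classifier that flags accounts on posting volume alone; the threshold is arbitrary, chosen only to show the failure mode researchers run into without access to internal signals.

def looks_like_bot(tweets_per_day: float, threshold: float = 50.0) -> bool:
    # activity alone: anything above the threshold is flagged
    return tweets_per_day > threshold

# a prolific human fan account trips the rule, while a slow, careful
# bot stays under it -- which is why platforms also lean on signals
# outside researchers' view (login patterns, device data, network
# defenses) rather than volume alone
print(looks_like_bot(80.0))  # True, even if this user is a real person
print(looks_like_bot(12.0))  # False, even if this account is automated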
11:53 am
>> i want to ask carlos, what cooperation was given to the u.s. senate investigation? has the evidence of russian-linked activity on twitter just been extrapolated from the work that's been done looking at the facebook pages, or is that separate intelligence you've supplied to the senate? >> thank you for that question. you know, we're constantly monitoring our platform for any activity that's happening. the internet research agency in particular, we came across that information in a number of different ways. starting in 2015, there was a "new york times" article about some of the activity on some of those accounts, and at that time, in june of 2015, we started actioning those. in the course of the follow-up to the election, we got information from an external contractor, whose name is q intel, that we share with other platforms, and that provided a seed of information that they told us
11:54 am
was related to this farm in st. petersburg. i believe there were about a hundred accounts turned over. bit by bit, following more and more signals, we found accounts that were related to it. we've improved the information we've provided to the public and, as you mentioned, sir, to the u.s. senate and to the house intel committee. now that number is 3,814 internet research agency accounts that were active. noting that we started suspending these accounts in 2015, all of them have been suspended. they're not functioning on our platform. and you heard from our peer companies earlier today: we are very good at understanding what's happening on our platform, but sometimes it is important to have that partnership with third parties,
11:55 am
with contractors, with civil society, with academics, and with government and law enforcement in particular, to help us figure out what we don't know, what we can't see that's not on our platform. we're good at tracking the connections of things on twitter, and sometimes we need some partnership on the rest of the picture. >> does twitter believe that there are likely to be other farms or agencies running fake accounts from countries like russia? there's been a focus on one, but some people say that if the level of activity people believe has been the case was being carried out, it would probably be too much to be done by one agency, and there will probably be others as well. >> i think we have to be humble in how we approach this challenge and not say we have a full understanding of what's happening or what happened. we have continued to look. we're constantly looking for efforts to manipulate our platform. we're really good at stopping it, especially when it comes to the malicious automation side,
11:56 am
but we will not stipulate that we will ever get to a full understanding. >> some of the evidence that the committee took in when we started the oral evidence hearings in westminster related to the referendum in catalonia, and research there suggesting there was not only russian activity but also agencies based in venezuela. is that something twitter has looked at? >> nick pickles, not only our uk lead but also one of the leaders in the company when it comes to information quality, could perhaps address that better than i can. >> this is one of the challenges: twitter presents an opportunity, in that research is done and published. that particular research wasn't published in a journal. there's no underlying data. we've not received a formal communication of the findings. it's very difficult for us to
11:57 am
validate those external research findings sometimes. what we have is the numbers at an aggregate level. and just to respond briefly to your previous point, chair, on the assertion that it's easy to identify very quickly where an account is operating from on the internet, where someone is based: i was logging into my e-mail earlier on. as a standard corporate practice, we use a virtual private network to communicate securely with our company. that took two clicks. as far as google is concerned, i'm not in d.c. right now, because my virtual private network is connecting somewhere else. so the idea that companies have a simple view of how customers communicate with us is not right: it may be routed through data centers, it may be routed through vpns, it may be routed through users deliberately trying to hide their location. i want to caution against the idea that there is absolute certainty there.
11:58 am
all of this work is based on a variety of signals, and we make probabilistic decisions based on that, but they are very rarely absolute. >> if i could, just to build on nick's point, which is an important one. the geographic basis of where tweets are coming from, where users are, is not always the strongest indication we use to action accounts for violating our terms of service, which we take super seriously. that means that even if nick is dialing in from a vpn or a tor browser or other ways to obfuscate where he's coming from, if he breaks any of our rules, we're going to hold him accountable. >> the explanation you've given there, saying it's possible for people to hide where they are, i understand that. but i would imagine that if we were talking to your advertising people, they would say it's quite easy to buy advertising on twitter that targets people based on where they live. that would be one of the rudimentary requirements of a brand seeking to advertise on a platform like twitter.
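the "variety of signals" point can be sketched as a weighted vote: several weak indicators each suggest a country, no single one is trusted, and a vpn can mislead the ip signal. the signal names and weights below are invented for illustration.

from collections import Counter

def infer_country(signals: dict, weights: dict):
    # each signal votes for a country with some weight; the winner is
    # a probabilistic judgment, not a certainty
    votes = Counter()
    for name, country in signals.items():
        votes[country] += weights.get(name, 0.0)
    country, score = votes.most_common(1)[0]
    total = sum(weights.get(n, 0.0) for n in signals)
    return country, score / total

signals = {"ip_geolocation": "NL",   # a vpn exit node, not the real location
           "bio_country": "RU",
           "tweet_language": "RU"}
weights = {"ip_geolocation": 0.40, "bio_country": 0.35, "tweet_language": 0.25}
print(infer_country(signals, weights))  # ('RU', ~0.6) -- probable, not absolute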
11:59 am
>> only about 2% of our users share geolocation data in their tweets. >> that's not what i asked, though. >> that's one thing people often assume. someone may identify their country in a biography. you may be able to infer it from the language they use. i think sometimes -- and i'm not saying the research isn't important -- i'm just saying that sometimes the conclusions reached don't match what we find as a company, and we see that quite regularly. for example, some of the research on bots will identify real people. some of the research will identify other manipulations of the platform that we were able to detect and prevent. but that's not -- >> so if an advertiser came and said, i want to pay for promoting my messages on twitter, i want to target people in the state of virginia, is it that you can't do that because that's not the way you're set up, or yes, you can? >> so that's an excellent question, chairman. thank you. we work with our advertising
12:00 pm
clients to try to get them the best value for the money. we don't have as much information about our users as some of our peer companies. we try to figure out what the analogs are to reach the audience they're trying to reach: followers of cnn or fox news or the bbc. we do have a degree of geolocation, within a country or within a media market within a country, but we don't exaggerate our precision on that. we do provide extremely good value to our advertisers, but again, we are limited by some of those factors. >> but you would sell advertising based on location, even with those caveats. >> it is one of the approaches, but often we find that people who are interested in certain subjects or search for certain issues can sometimes be a better -- >> i understand that. you could sell to an audience based on location. >> we
