Hillary Clinton and Fmr. Google CEO Eric Schmidt Discuss AI Challenges  CSPAN  March 31, 2024 5:16am-5:48am EDT

5:16 am
there are actually requirements for the platforms to issue regular reporting, which is helping us, on a per-country basis. so this is important because in languages like dutch, czech, hungarian, you actually have to see what's been done, because we didn't have this information before. so in this sense, it's really good. >> when do you think it will start to kick in? because i know the different countries are still staffing up. >> it has started already. >> i know. but when will we all notice it? >> well -- ok. so there are already reports out. you can check those out. so if you do a bit of research, you will notice it. but in terms of the platforms, for example, you can already report illegal content. what i'm worried about are the platforms that are not cooperative. there's so much exchange of information between microsoft and google, but what about telegram
5:17 am
for example? it's the source of extremism and open to propaganda. >> we could go on all day, but i don't want to get in the way of the next panel, which will be really interesting. so thank you very, very much, ethan and domenica. [applause] >> first, we are so delighted to have eric schmidt with us, especially because, as you just heard, he is one of our carnegie distinguished fellows at the institute of politics. and he has been meeting with students and talking to faculty about a lot of these a.i. issues that we have surfaced during our panels today. and of course, he wrote a very important book with the late dr. kissinger on artificial intelligence. so we're ending our afternoon
5:18 am
with eric and trying to see if we can pull together some of the strands of thinking and challenges and ideas that we've heard. so eric, thank you for joining us. you look like you're in a very comfortable but snowy place. i wanted to start by asking you, what are you most worried about with respect to a.i. in the 2024 election cycle? >> well, first, madam secretary, thank you for inviting me to participate in all the activities. i'm at a tech conference in snowy montana, which is why i'm not there. if you look at misinformation, we now understand extremely well that virality, emotion and particularly powerful videos drive voting behavior, human behavior, moods, everything. and the current social media
5:19 am
companies are weaponizing that, because they respond not to the content but rather to the emotion, because they know the things that are viral are outrageous, right? crazy claims get much more spread. it's just a human thing. so my concern goes something like this. the tools to build really, really terrible misinformation are available today globally. most voters will encounter them through social media. so the question is, what have the social media companies done to make sure that what they are promoting, if you will, is legitimate under some set of assumptions? >> you know, i think that you did an article in the m.i.t. technology review fairly recently, maybe at the end of last year. and you put forth a
5:20 am
six-point plan for fighting misinformation and disinformation. i want to mention both because they are distinct. what were your recommendations in that article, to share with our audience in the room and online? what are the most urgent actions that tech companies, particularly, as you say, the social media platforms, could and should take before the 2024 elections? >> well, first, i don't need to tell you about misinformation, because you have been a victim of that, and in a really evil way, by the russians. when i look at the social media platforms, here is the plain fact: if you have a large audience, people who want to manipulate your audience will find it, and they'll start doing their thing. they'll do it for political reasons, economic reasons, or they're simply nihilists. they don't like authority.
5:21 am
and they'll spend a lot of time doing it. so you have to have some principles. one is, you have to know who's on the platform, in the same sense that if you have an uber driver, you don't know his name or details, but uber has checked them out because of all the problems they've had in the past. so you trust that uber will give you a driver that's a legitimate driver. the platform needs to know, even though they don't know who they are, that they're real human beings. the other thing they have to know is, where did it come from? we can technologically put in watermarks; the technical term is steganography. you know roughly how it entered your system. you know how the algorithms work. we know it's very important that you work on age-gating so you don't have people below 16. so those are sensible ways of taking the worst parts of it out.
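The steganography Schmidt mentions is the idea of carrying an origin marker invisibly inside the content itself. As a rough illustration only, and not any platform's actual scheme, here is a minimal least-significant-bit sketch in Python; the provenance tag and the stand-in pixel values are invented for the example.

```python
# a minimal illustration of least-significant-bit steganography: hiding a
# short provenance tag in "pixel" values. real watermarking systems are far
# more robust, but the idea of carrying origin information inside the
# content itself is the same.

def embed_tag(pixels, tag):
    """Write each bit of the UTF-8 tag into the lowest bit of successive pixels."""
    bits = []
    for byte in tag.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("image too small for this tag")
    stamped = list(pixels)
    for i, bit in enumerate(bits):
        stamped[i] = (stamped[i] & ~1) | bit   # replace the lowest bit
    return stamped

def extract_tag(pixels, length):
    """Read back `length` bytes hidden by embed_tag."""
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        out.append(byte)
    return out.decode("utf-8")

if __name__ == "__main__":
    fake_image = [137, 200, 64, 12] * 100          # stand-in for pixel data
    tag = "origin:synthetic"                        # hypothetical provenance tag
    stamped = embed_tag(fake_image, tag)
    print(extract_tag(stamped, len(tag.encode())))  # -> origin:synthetic
```

The point of the sketch is only that "where did it come from" can be answered mechanically if the marker survives; production watermarks have to survive cropping, re-encoding, and deliberate removal, which this toy version does not.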
5:22 am
i think one of the things that i wrote about is, if you look at the success of reddit and their i.p.o., what they did, though they were reluctant to do anything at first, improved the overall discourse. the lesson i learned is, if you have a large audience, you have to be an active manager of people who are trying to distort what you as a leader are trying to do. >> that reddit example is a good one, because i don't have anything like the experience you do. but just as an observer, it seems to me that there's been a reluctance on the part of some of the platforms to actually know. it's kind of like they want deniability. i don't want to look too close because i don't want to know, and i can tell people i didn't know, and maybe i won't be held accountable. but actually, i think there's a huge market for having more
5:23 am
trust in the platforms because they are taking off, you know, certain forms of content that are dangerous, however you define that. and your recommendations in your article focus on the role of distributors. maybe go first, eric, in explaining to us, like, what should we think about and, more importantly, what should we expect from a.i. content creators and from social media platforms that are either utilizing a.i. themselves or being the platforms for the use of generative a.i.? and how do we protect against it, even with a.i.
5:24 am
or open source developers? is there a way to distinguish that? >> it's sort of a mess. there are many, many different ways in which information gets out. so if you go through the responsibility, the legitimate players, the ones offering tools and so forth, all have the responsibility to mark where the content came from and to mark that it's synthetically generated. in other words, we started with this, and we made it into that. there are all sorts of cases like, i touched up the photo. but you should record that it was touched up, so you know it's an altered photo; it doesn't mean it was altered in an evil way. the real problem has to do with a confusion over free speech. so i'll say my personal view, which is that i'm in favor of free speech, including hate speech, that is done by humans, and then we can say to that human, you are a hateful person, and we can criticize them and listen to them and then hopefully correct them. that's my personal view. what i'm not in favor of is free
5:25 am
speech for computers. the confusion here is, you get some idiot, right, who is just literally crazy, who is spewing all this stuff out, who we can ignore, but the algorithm can boost them. there's liability there, the platform's responsibility for what they're doing. unfortunately, although i agree with what you said, the trust and safety groups in some companies are being made smaller or are being eliminated. i believe at the end of the day these systems are going to get regulated, and pretty hard. you have a misalignment of interests. if i'm the c.e.o. of a social media company, i make more revenue with engagement. i get more engagement with outrage. why are we so outraged online? it's because the social media algorithms are boosting that stuff. most people, it is believed,
5:26 am
are more in the center, and yet we focus on -- and this is true of both sides. everybody's guilty. i think what will happen with a.i., just to answer your question precisely, is a.i. will get even better at making things more persuasive, which is good in general for understanding and so forth, but it's not good from the standpoint of election truthfulness. hillary: yeah, that is exactly what we've heard this afternoon, that, you know, the authoritativeness and the authenticity issues are going to get more difficult to discern, and it will be a more effective message. you know, i was struck by one of your recommendations, which is kind of like -- it's a recommendation that can only be made at this point in human history. and that is to use more human beings to help. and it's almost kind of absurd
5:27 am
that we're sitting around talking about, well, maybe we can ask human beings to help other human beings figure out what is and isn't truthful. how do we incentivize companies to use human beings? and how do we avoid the exploitation of human beings? because there have been some pretty troubling disclosures about the sweatshops of human beings in certain countries in the global south who are, you know, driven to make these decisions, and it can be quite, you know, quite overwhelming. so when you've got companies, as you just said, cutting trust and safety, how do we get back to some kind of system that will make the kinds of judgments that you're talking about? >> well, speaking as a former c.e.o. of a large company, companies tend to operate on
5:28 am
fear of being sued, and section 230 is a pretty broad exemption. for those in the audience, section 230 is sort of the governing rule on how content is handled, and it's probably time to limit some of the broad protections that section 230 gave. there are plenty of examples where somebody was shot and killed over some content, where the algorithm enabled this terrible thing to occur. there is some liability. we can try to debate what that is. if you look at it as a human being, somebody was harmed, and there was a question of who is liable, but the system made it worse. so that's an example of a change. but i think the truth, if i can just be totally blunt, is that ultimately the information and the information space that we live in, you can't ignore it. i used to give the speech and
5:29 am
say, you know how we solve these problems? turn your phone off. eat dinner with your family. and have a normal life. unfortunately, my industry, and i'm happy to have been part of it, has made it impossible for you to escape all of this; as a normal human being, you're exposed to all of this terrible filth. that's going to get fixed by the industry acting collaboratively, or by regulation. let's think about tiktok, because tiktok is very controversial. it is alleged that a certain kind of content is being spread more than others. tiktok isn't social media. it's really television. and when you and i were younger, there was this huge fracas over how to regulate television. it was a rough balance where we said, fundamentally, it's ok if you present one side as long as you present the other side in a roughly equal way. that's how societies resolve
5:30 am
these information problems. it's going to get worse unless you do something like that. >> well, i agree with you 100% in both your analysis and your recommendations, and not for the first time: we've talked about the need to revisit and, if not completely eliminate, certainly dramatically revise section 230. it's outlived its usefulness. there was an idea back in the late 1990's, when this industry was so much in its infancy. but we've learned a lot since then, and we've learned a lot about how we need to have some accountability, some measure of liability, for the sake of the larger society, but also to give direction to the companies. these are very smart companies. you know that. you spent many years at google. they're going to figure out how to make money. but let's have them figure out how to make a whole lot of money
5:31 am
without doing quite so much harm. that partly starts with dealing with section 230. you know, when we were talking earlier about, you know, what a.i. is aiming at, you know, the panelists were all, you know, very forthcoming. and they said, you know, we know there are problems. we're trying to deal with these problems. we know, even from the public press, that a number of a.i. companies have invented tools that they've not disclosed to the public because they themselves assess that those tools would make a difficult situation a lot worse. is there a role, eric -- i know there's the statement negotiated at the munich security conference as a start. but is there more that could be done with a public-facing statement? some kind of agreement by the a.i. companies and the social media platforms, you know, to really focus on preventing harm going into the
5:32 am
election? is that something that's even feasible? >> it should be. the reason i'm skeptical is that there's not agreement among political leaders -- of course, you're a world expert on that -- and the companies on what definition -- what defines harm. i have wandered around congress for a few years on these ideas, and i'm waiting for the point where the republicans and the democrats are in agreement, from their local and individual perspectives, that there's harm on both sides. we don't seem to be quite at that point. this may be because of the nature of how president trump works, which is always sort of baffling to me, but there's something in the water that's causing a nonrational conversation. so this is not possible right now, and i'm skeptical that that's possible. i obviously support your idea. the other thing i would say, and i don't mean to scare people, is
5:33 am
that this problem is going to get much worse over the next few years, maybe or maybe not by november, but certainly in the next cycle, because of the ability to write programs. i'll give you an example. i was recently doing a demo. the demo consists of, you pick a stereotypical voter. let's say it's a hispanic woman with two kids. she has these two interests. you create a whole interest group around her. she doesn't exist. it's fake. then you use python to make five different variants of her, with different ages and backgrounds, echoing the same voices. so you have the ability to have a.i., broadly speaking, generate entire communities of pressure groups that are, in fact, virtual. it's very hard for the systems to detect that these people are fake. there are clues and so forth.
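Schmidt's point is that the generation step in such a demo takes only a few lines of code. A toy sketch of that idea, with entirely made-up attribute lists, is below; it shows how cheaply one invented persona can be multiplied into variants, which is exactly why coordinated fake "communities" are easy to produce and hard to spot (detection, not generation, is the hard part).

```python
# a toy illustration of multiplying one invented persona into many variants.
# every value here is fabricated for the example.
import itertools
import random

base_persona = {"interests": ["school funding", "small business"], "region": "southwest"}
ages = [29, 34, 41, 47, 55]
backgrounds = ["teacher", "nurse", "shop owner", "veteran", "student"]

variants = []
for age, background in itertools.product(ages, backgrounds):
    v = dict(base_persona)          # copy the shared template
    v["age"] = age
    v["background"] = background
    v["handle"] = f"user_{random.randrange(10_000, 99_999)}"   # fabricated account name
    variants.append(v)

print(len(variants), "synthetic profiles from one template")   # -> 25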
5:34 am
but to me, this question about the ability to have computers generate entire networks of people who don't exist, to act for a common cause which may or may not be one that we agree on, but probably influenced by the national security services of north korea or china, or influenced by some business objective from the tobacco companies, or you name it -- i worry a lot about that. and i don't think we're ready. it's possible, just to hammer on this point, for the evil person who inevitably is sitting in the basement of their home, whose mother gives them food at the top of the stairs, to use these computers. that's how powerful these tools are. >> ok. [laughter] well, let's try to bring it back a little bit to where we are here at the university, in this, you know, great setting of so
5:35 am
many people who have a lot to contribute, and working in partnership with aspen digital, which similarly has a lot of convening and outreach potential. what can universities do? what can we do in research, particularly on a.i.? how do we create a kind of, you know, broad network of partners, like we're doing here between i.g.p. and aspen digital, and begin to try to do what's possible to educate ourselves, educate our students, in combating mis- and disinformation with respect to elections? >> so the first thing we need to do is to show people how easy it is. i would encourage every university program to try to figure out how to do it. obviously don't actually do it. but it's relatively easy and
5:36 am
it's really quite an eye opener. and i've done this for as long as i've been alive. the second thing i would do is, there's an infrastructure that would be very helpful. the best design that i'm familiar with is blockchain-based. it's a name and origin for every piece of content, independent of where it showed up. so if everyone knew that this piece of information showed up here, you can then have provenance and understand how did it get there? who pushed it? who amplified it?
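The "blockchain-based" provenance infrastructure Schmidt describes is, at its core, a tamper-evident log: each piece of content gets a hashed record of its origin, and every later push or amplification links back to the previous record. A minimal hash-chain sketch follows; it is not any real provenance standard, and the field names and example actors are invented.

```python
# a minimal tamper-evident provenance log: each record carries the hash of
# the previous one, so editing history breaks every later link. a sketch of
# the idea only, not a real provenance standard.
import hashlib
import json
import time

def _digest(record):
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceLog:
    def __init__(self):
        self.records = []

    def append(self, content_hash, actor, action):
        record = {
            "content": content_hash,   # hash of the item itself
            "actor": actor,            # who created / pushed / amplified it
            "action": action,          # "created", "shared", "boosted", ...
            "time": time.time(),
            "prev": _digest(self.records[-1]) if self.records else None,
        }
        self.records.append(record)
        return record

    def verify(self):
        """Recompute the chain; any edited record breaks the links after it."""
        for i in range(1, len(self.records)):
            if self.records[i]["prev"] != _digest(self.records[i - 1]):
                return False
        return True

log = ProvenanceLog()
item = hashlib.sha256(b"some video file").hexdigest()
log.append(item, actor="studio_account", action="created")
log.append(item, actor="unknown_page", action="boosted")
print(log.verify())   # -> True; tampering with an earlier record would return False
```

With a shared log like this, the questions Schmidt lists -- how did it get there, who pushed it, who amplified it -- become lookups rather than forensics.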
5:37 am
>> that would help our security services, our national security people, to understand: is this a russian influence campaign, or is it something else? so there are technical things and there are also educational things. i think this is only going to get fixed if there is a bipartisan, broad consensus on taking the edges, the crazy edges, the crazy people -- and, you know, you know who i'm talking about -- and basically taking them out. i'll give you an example. there was an analysis, during covid, that the number one spreader of misinformation about covid online was a doctor in florida, which is like 13% of all of it. he had a whole influence campaign of lies to try to convince you to buy his supplements versus a vaccine. that's just not ok in my view. the question for me is, why was that allowed by that particular social media company to exist even after he was pointed out? you have a moral, legal, and technical framework. but it has to be seen as not ok to allow this evil doctor, for profit, to mislead people on vaccinations. >> just to follow up on that -- i mean, i won't disagree about what has to happen if we're going to end up with some kind of legislation or regulatory framework from the government. but if they were willing, is
5:38 am
there anything that the companies themselves could do, as i say, if they were willing to, that would lay out some of the guardrails that need to be considered before we get to the consensus around legislation? >> of course -- of course, the answer is yes. but the way it works in the company, you don't get to talk to the engineers. you get to talk to the lawyers. and the lawyers are very conservative and they won't make commitments. it's going to require some kind of agreement among the leadership of the companies on what's in bounds and what's out of bounds, right? and getting to that is a process of convening and conversations. it's also informed by examples. so i would assert, for example, that every time someone is physically harmed by something, we need to figure out how we can prevent that. that seems like a reasonable principle if you're in the
5:39 am
digital world now. working out from those principles is the way it's going to get started. it's not going to happen unless it's forced by the government. the best way to make it happen, in my view, is to make a credible and feasible proposal about where the guardrails are. we've been working on this. and you have to have content moderation. when you have a large community, these groups will show up. they will find you, because their only goal is to find an audience to spread their evil, whatever the evil is. and i'm not taking sides here. >> well, i think the guardrail proposal is a really good one. obviously, you know, we here at i.g.p., aspen digital, the companies who are here, others, researchers who are here -- maybe people should take a run at that. i mean, i'm not naive, i know how difficult it is.
5:40 am
but i think this is a problem we all recognize. it's not going to get better if we keep wringing our hands and fiddling on the margins. we have to try something different. so -- >> let me just be obnoxious. i have sat through all these safety discussions for a long time, and these are very, very thoughtful analyses. they're not producing solutions in their analysis that are implementable by the companies in a coherent way. here's my proposal. identify the people. understand the provenance of the data. publish your algorithms. be held, as a legal matter, to your algorithms being what you said they are. reform section 230. make sure you don't have kids on there, and so forth. etcetera. you know, make your proposals, but make them in a way that's implementable by the team. if there's a particular kind of
5:41 am
piece of information that you think should be banned, write a specification well enough that, under your proposal, the computer company can stop it, right? that's where it all fails, because the engineers are busy doing whatever they understand. they're not talking to lawyers much. but the lawyers prevent anything from happening. they're afraid of liability. they don't have leadership from the companies, for the reasons you know. and that's where we're stuck.
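Schmidt's ask is that policy proposals be precise enough for an engineering team to implement and be held to. As a purely hypothetical illustration of the difference between a vague intent and a testable rule, here is a tiny machine-checkable rule; the fields, thresholds, and action names are invented, not anyone's actual policy.

```python
# a toy example of turning "ban this kind of content" into something an
# engineering team could actually evaluate: an explicit, testable rule
# rather than a vague statement of intent. all values are invented.

RULE = {
    "name": "unlabeled-synthetic-political-media",
    "applies_to": ["image", "video", "audio"],
    "conditions": {
        "is_synthetic": True,          # e.g. from a provenance/watermark check
        "has_disclosure_label": False,
        "topic": "election",
    },
    "action": "require_label_or_remove",
}

def evaluate(item, rule=RULE):
    """Return the rule's action if every condition matches the item, else None."""
    if item.get("type") not in rule["applies_to"]:
        return None
    for key, expected in rule["conditions"].items():
        if item.get(key) != expected:
            return None
    return rule["action"]

item = {"type": "video", "is_synthetic": True, "has_disclosure_label": False, "topic": "election"}
print(evaluate(item))   # -> "require_label_or_remove"
```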
5:42 am
>> well, that's both a summary and a challenge, eric. and i particularly appreciate that, and especially the work you've been doing to try to, you know, sort this out and give some guidance. so you get the last word from beautiful snowy montana -- the last word to kind of answer that challenge, you know, ask us to respond and follow up on what you've outlined as one path forward, and try to do it in a collaborative way with the companies and other concerned parties. >> the snowstorm is hitting behind me. look, i think that the most important thing that we have to understand is this is our generation's problem. this is under human control. there's this sort of belief that none of this stuff can get fixed. but you know from your pioneering work over some decades here that with enough pressure you really can bend the needle. you just have to get people to understand it. these problems are not unsolvable. this is not quantum physics. it's a relatively straightforward problem about what's appropriate and what's not. the a.i. algorithms can be tuned to whatever society wants. my strong message to everyone at columbia and, of course, all the partners is, instead of complaining, which i like to do a
5:43 am
great deal, why don't we collectively write down the solution, organize, partner with institutions, try to figure out how to get the people in power to say, ok, i get it, right? that this is reasonably bipartisan. it makes society better. there's this old rule, gresham's law, that bad speech drives out good speech, which is why the internet is a cesspool. i used to say that, and i would say, i don't like to live in a cesspool, turn it off. the damage that's being done online to women, that's just horrific. why would we allow this? you just have to have an attitude. i'm trying to fund some open source technology, open tools that detect bad stuff. it's going to take a concerted effort, and i really appreciate, madam secretary, your attention on this.
5:44 am
somebody's got to push. hillary: well, you have, and let's keep going, eric. i'm so grateful to you. i hope you have a great time in the snowstorm and whatever else comes next. but let's show our appreciation to eric schmidt for being with us. thank you so much. [applause] >> thank you all. hillary: well, i think we have a call to action. we just have to get ourselves in the frame of mind that we're willing to do that. and even writing something down will help to focus our, you know, minds about what makes sense and what doesn't make sense. we're not going to let you all off the hook. we want to come back to you. we want to have something come out of this. we can talk about this, meet about this until the cows come home. but in the meantime, as eric said, and i agree, it'll just get worse and worse. and we have to figure out how we can assert ourselves and maintain the good and try to
5:45 am
deal with, you know, that which is harmful. please join us in this effort. as i say, we will come back to you and seek your guidance and your support. thank you all very much. [applause]