Communicating for Impact: Digital Technologies, AI, and the Political Landscape

Patrice Buzzanell 0:13
Hello. Welcome to this episode of The Communicating for Impact podcast series, a production of the ICA Podcast Network. I'm Patrice Buzzanell, distinguished university professor at the University of South Florida, ICA Fellow and Past President. I am delighted to invite my three guests here to discuss digital technologies, particularly artificial intelligence, algorithms, and other technologies that affect our everyday lives and the political landscape, and also just what we might do as individuals and as collectives to capitalize on and leverage these kinds of technologies. That last part about leveraging and impact has to do with the theme of this series of podcasts, Communicating for Impact. The idea here is to provide practical applications that we can use in our daily lives, but also in our groups, organizations, institutions, and so on. So I'd like to invite our three guests to introduce themselves, and then I will offer the first question and get us going. So Homero.

Homero Gil de Zuniga 1:29
It's an honor to be here with you and chat with my colleagues today about these interesting topics. I'm Homero Gil de Zuñiga, and I'm a professor at the University of Salamanca and Pennsylvania State University. I'm also a research fellow at the Universidad Diego Portales in Chile. Thank you very much for having me.

Pablo Boczkowski 1:46
My name is Pablo Boczkowski. I teach at Northwestern University, where I also run the Center for Latinx Digital Media. And I'm the co-director of the Center for the Study of Media and Society in Argentina. It's truly a pleasure to be with all of you today.

Patrice Buzzanell 2:03
Ingrid.

Ingrid Bachmann 2:04
Thank you, Patrice, for having me. This is going to be an interesting conversation. I'm Ingrid Bachmann at the Catholic University of Chile. I used to be the Director of the School of Journalism, but I'm done with that, so I'm just a regular professor now.

Patrice Buzzanell 2:20
So the first thing I would like to ask you is just to describe some of your research in this area, and also what you're currently working on. I'll start with you, Homero.

Homero Gil de Zuniga 2:31
I'm developing further research in the area of the “news finds me” perception, a theory that some colleagues and I have been developing over the past few years. NFM is the idea that, because they use social media regularly, people develop the perception that the news will find them without any active effort to consume information. And not only that, but they also perceive that they're very well informed by doing that, while remaining inactive. Reality indicates otherwise. What happens is that they end up not learning that much about politics, and their political interest decreases over time. They actually vote less. There is a wide array of implications to the development of the news finds me perception. So lately, we're running a series of experiments and survey research to learn more about the implications of this perception, particularly at the convergence of social media algorithms and how information is presented to us, and how there are different social cues, but also machine cues. And we're learning more about that.

Patrice Buzzanell 3:38
What are some of the social cues for those who are not technologically sophisticated, that you have found in your research?

Homero Gil de Zuniga 3:47
So, for instance, social cues in social media indicate that when information or content is presented to me, it is selected algorithmically based on what people in my social network are doing, what they like, or what they share. Therefore, socially, the algorithm is influenced by those who are connected to you, and that's why a specific type of information or content is presented to you. The algorithm is, on the one hand, curated or influenced socially; that's what we mean when we talk about a social algorithm. The individual algorithm, then, is more in line with what happens with that content due to my specific behavior, how I personally curate information.

Patrice Buzzanell 4:31
Okay, thank you very much. Let's go to Ingrid.

Ingrid Bachmann 4:34
Lately, I've been focusing on the use of all kinds of new technologies to, sadly, attack women. We've been seeing reports in Chile in recent years about doxxing and bots used to attack female politicians and female journalists. So I've been focusing on that darker side of these new technologies, which are used to attack women and pretty much to keep them in their place. On a more positive side, I'm also working on what women do with technology. What we find is that women usually use technologies, for example, mobile instant messaging services or even social media, in a different way. When you explore the nuances of what they use, what they look for, how they engage in these spaces, usually they are very conservative in their approach. They learn not to offer too much personal information and not to be too opinionated, because then they are attacked for that. But we've seen that some women actually use these technologies to build a sense of community with other people who share their experiences, for example, of gender violence, but also to talk about politics in a safer environment. So they create their own spaces, like private chats and private feeds.

Pablo Boczkowski 6:11
So in my case, I'm currently working on two projects: one theoretical and one empirical. The theoretical one is a book on the comparative study of social media, together with a doctoral student, Mora Matassi, where we argue for a triple comparative approach: comparisons across nations, comparisons across media, and comparisons across platforms. If we look at most of the work on social media, the vast majority of studies are single-country studies, usually of countries within the Global North, which is where 13 to 14 percent of the world's population lives. But you know, social media are used by 4.8 billion people on the planet, in all countries of the world. And it is very important that we have a sense of similarity, both for descriptive but also for heuristic purposes. Likewise, the study of social media tends to isolate social media from the other media objects in our environment. So what we argue in this book is that in order to fully understand, or to better understand, how social media are developed, deployed, and used, we need to understand them in comparison across nations and regions, across different media forms, and across platforms. The other project is an empirical one, part of a team effort led by my collaborator Eugenia Mitchelstein at Universidad de San Andrés. It's a project that tries to understand the reception of misinformation. There is a whole lot of work on how misinformation is produced, how it spreads, how it's viralized, but there is very little, in comparison, about how people make or don't make sense of misinformation, disinformation, fake news, etc.
The book with MIT Press is the first large-scale monograph not only on the study of misinformation, but on the reception of misinformation in the Global South, in contexts, not only in Latin America but also in Southeast Asia or parts of Africa, where there is already a very high level of distrust toward the media. That distrust colors the reception in very different ways from how people approach the issue of misinformation in, say, Scandinavia, the Netherlands, or Canada.

Patrice Buzzanell 8:35
All of you talked in some sense about how the curating work is actually done. What I'd like you to do is talk about how that's done in your particular work: how the participants are doing that kind of work, how the algorithms are doing some of this work, and what consequences the audience listening to this needs to know about in the way this information is curated. Information and misinformation, certainly.

Homero Gil de Zuniga 9:07
On the one hand, individuals may know and understand how things work. But sometimes they might not know about it at all. Between these two poles, knowing and understanding how algorithms work and how information is managed, versus not knowing, lies the initial step toward better understanding what's going on. The other part of it is how people react to algorithms and their influence in our daily lives. Some people take things for granted, which in the literature has been named machine heuristics: the idea that machines are doing the right thing just because they are not human. People think machines will select information more accurately, in a more balanced way, free of bias, for instance, because they're machines. And we know that's not the case. So I think all of these kinds of heuristics are also fundamental in understanding how things work.

Patrice Buzzanell 10:04
Ingrid, would you like to jump in?

Ingrid Bachmann 10:06
Yeah, it's interesting what Homero was saying, because I've seen that. On the other side, when some users are themselves the subject of misinformation, for example, they don't know how to control it. I think it's interesting that even professionals I've seen are not that aware of how these things work. It's not only users in general; it's people who rely on these tools a lot who are not that well trained on this. And I think that's quite telling about how we embrace technology: usually we don't really understand it that well, but we go with the flow. I think that's a very interesting side to further explore, and it's also an opportunity for us as instructors to actually train our students better on these tools.

Patrice Buzzanell 10:52
Pablo.

Pablo Boczkowski 10:53
Argentines are highly distrustful; I would say that probably what unites us is distrust in institutions. What we found in the research is that not only are people distrustful, they're also quite good at telling false from true. It's not just what they tell you in the interviews, when they say that they're really good; in the experiments, they have shown time and again, across a number of manipulations, that they are able to detect false from true. And that's where I think the cross-national comparison might be quite interesting. Because the institutions of the polity have historically underperformed, citizens of the country tend to view the news and misinformation from a stance of distrust; the first thing they do is distrust the systemic side of it. There are countries where levels of trust in the institutions of the polity are higher, but there are also sub-national groups within those countries that have the same experience, whereby they have been disenfranchised by the institutions of their polity. I think there is a balance between some general tendencies, what we tend to attribute to machines or technology as neutral, so to speak, and the contextual environments, both national and sub-national, that mediate that in terms of what effects this has on the experience of reception. And I think in that sense, there are very interesting possibilities for research here that take into account the dynamics between the more structural factors and the more agentic factors, and how that cuts across nationally, and then sub-nationally as well.

Patrice Buzzanell 12:40
We've been talking about bias. And certainly bias enters when these algorithms and different groups get together, in terms of what they see and design, and what they fail to see or accommodate. There are so many challenges in terms of what we can do individually, collectively, and technologically. What are our next steps as individuals who are confronted with these systems and cannot necessarily become part of changing the algorithms themselves?

Homero Gil de Zuniga 13:13
That's the million-dollar question, to be honest. To me, there are two areas that are interesting with respect to these challenges. Number one is literacy. If we take the stance of the audience: if we teach them, if they're educated, if they learn how to interpret, understand, and classify misinformation or fake news, they will be more likely to confront and counter it. The problem with this is that, as we've been seeing in our own research, there is a price to pay. Even when people understand that what they've been exposed to is fake news, that exposure still has an effect. So it's a cycle that is a little bit poisonous, because for people to get better at understanding, classifying, and flagging fake news, they need to be exposed to it. But when they're exposed to it, we know that it has implications. So they're learning about it because they're being exposed to misinformation and fake news. The second thing we've seen in our own data is that they get persuaded. We don't know the nature of this persuasion; what we know is that people become more persuaded politically when they're exposed to fake news. So that's on the part of the audience. What can we do? Education, news literacy, but it has a cost. So we need to be creative about how we do this.

Pablo Boczkowski 14:36
I just want to challenge a couple of the statements from a cultural standpoint. Thank goodness people distrust journalism in some parts of the world, because journalism and the news media have not really served them very well. And there is some value to that. It is important that when we go into these conversations, we don't idealize institutions of the polity that have perpetuated marginalization and oppression for decades. I think whether distrust is good or bad is contextually dependent. And that's why it is very important that we do a lot of comparative research, not only north-south, but also a lot of south-south comparative research. Because even within regions of the world like Latin America there is a lot of heterogeneity, and the same in Africa, etc. I agree that exposing somebody to misinformation is not good. But it is what has been happening de facto for centuries, in some countries more than others, and toward some communities more than others. Even in countries where the news media have performed a little bit better, there are communities that have not had their fair share. Because of that, people already have a literacy of their own. So there is a tension between the kind of literacy we would teach and an already existing literacy. I wanted to bring that to the attention of the listeners.

Ingrid Bachmann 16:02
I guess I'm in the middle ground. I prefer to see the glass half full instead of half empty. So yeah, I think there are challenges, and I do value that the affordances of new technologies and social media have allowed for a lot of networked efforts to improve the standards of how gender violence is covered, for example, or how we talk about it, or the fact that we are talking about it at all. These were things that were not on the agenda, say, five years ago. I do think that distrust is not that good to begin with. But I understand where Pablo is coming from when he says that journalism has not done such a great job to begin with, and that there are reasons for distrust. And this is coming from another country with very low levels of trust in all kinds of institutions. I think that's part of the problem with misinformation itself: rather than making you believe certain things, the problem is that it makes you distrust everything. And that's why we need to educate people to learn to filter what is true, what is not true, what is fake, what is not fake. So I actually think that we should focus more on misinformation processing. And I also think that, moving forward, there is space to further explore misinformation corrections. You could argue that when you make evident that certain discourses out there are not true, you're kind of making people ultra-aware that there are falsehoods out there. It could produce a tainted-truth effect, where you overcompensate and start thinking that everything, or most things, are not true. And I think that's problematic as well. So we should study more about what works, what doesn't work, and what works for whom. All these comments should be more nuanced, and for that we need more research, I think.

Patrice Buzzanell 17:56
Would the three of you come together and explore for a moment what an ideal educational design would be knowing that we're talking about very different populations. Where do we start with regard to the kinds of literacies that you're discussing right now?

Homero Gil de Zuniga 18:18
To be honest, what you're suggesting, Patrice, sounds more like a research proposal for a grant. That's probably what needs to be done with your question.

Ingrid Bachmann 18:27
I will say that the answer is education, but how exactly to do it? I mean, we've been trying to educate about this for a long time. I think it should be part of the curriculum in any school, from primary school and up. These are very relevant skills for anybody: how to find information, how to detect whether a source is correct, or at least a valid one, how to process information. So I think the challenge is to find the spaces and the skills that we need to emphasize in dealing with these issues. But part of the problem is that this landscape is changing so fast, so quickly, so constantly, that we come up with a solution and we already have a new problem.

Pablo Boczkowski 19:20
I would add that, whatever we do, I think it will be very important to listen as much as to teach: to teach from the vantage point of listening to what our students say and where they are at. I think that's particularly important in context, and that will vary a lot. It is also important, from my vantage point, not to overstate the novelty of what's new. We've always had algorithms, and for a long time now they've been automated. The front page of a newspaper is an algorithm. Front-page editors have an algorithm in their minds, and they have had one for a long, long time. That's why they are so incredibly predictable. That doesn't mean that the algorithms of today, with the power of automation and artificial intelligence, etc., do not have certain characteristics that might be new, different scale effects in particular. But I think we've been here before many times.

Patrice Buzzanell 20:27
Before we conclude, what haven't I asked you that you think is important for listeners of this podcast to know?

Homero Gil de Zuniga 20:36
We all do research that is important, and we come from many different backgrounds. ICA, for instance, is a very large association, and we have people interested in many different things. The way I would like people to reflect as they advance their research is the way I've been doing it lately myself, which I think has been very valuable to me. It's understanding that, with everything that happens in social media, and even within algorithms, whatever topic you're tackling, we shouldn't demonize this technology or put it on a pedestal; like any other technology, it has benefits and perils. I think we should tackle the problems by understanding that, in the same way we see deleterious or negative effects of algorithms, there are very many positive effects of algorithms. That happens with practically anything, any challenge, and any effect that we're trying to observe within social media, whether it be algorithms or something else. There are positive effects and negative effects embedded in the system itself. I would encourage everybody who's listening to take that perspective: not everything is positive or negative. Obviously, we won't be able to solve everything or address all the effects at once, but at least consider that, whatever it is that you find, there might be other processes or mechanisms that explain it differently.

Patrice Buzzanell 21:55
Thank you. Ingrid.

Ingrid Bachmann 21:57
I will just add that when we talk about social media and affordances and uses and news, we're talking about very broad umbrella concepts, when you think about it. So I think the challenge, and what we should talk more about in the future, is specific uses: the different contexts, situations, and circumstances that frame the way people engage, for example, with news over social media, and what they get out of it. It's not a phenomenon that is uniform across the board or the same all the time. Those nuances, I think, are what we need to better understand, because that's the way to actually make better use of these technologies, but also to combat misinformation or suggest strategies to work with these things.

Patrice Buzzanell
Thank you. Pablo.

Pablo Boczkowski 22:45
One thing we didn't talk a lot about is the role of affect, emotion, and pleasure in general. We've been discussing these topics with a sort of unstated, more rational, information-processing view of the person. But the reason why there are almost 5 billion people on social media, and why those of us who are on social media spend so much time on it during the day, is in part because we like it one way or the other. We derive some sort of pleasure or some sort of joy; even if it's a guilty pleasure, it's a pleasure nonetheless. I think it's important, as we move the conversation forward in the field, not to demonize these applications, systems, or technologies, because if they were really awful to us, we wouldn't be spending that much time on them. I think it is important as scholars to get into that side of the spectrum too, and to try to understand: why is it that, even though we distrust the information we get on social media, and we say the companies behind them usually are evil, et cetera, we give them so much of our attention, so that they can monetize it to the advertisers? What is it that keeps us connected? It's a dimension of the conversation that many times gets sidetracked or sidelined, and that is, I think, important to keep in mind.

Patrice Buzzanell 24:15
I'd like to thank all of you for such an engaging discussion about these issues. It has been an absolute pleasure to hear about your research, the next steps, the guilty pleasures, certainly, and also just what we could do in terms of nuanced and very contextualized discussions about both the advantages and potential hazards of artificial intelligence, algorithms, different kinds of literacies, and social media across the different areas that we've touched on. So it was great to see all of you, and thank you very much.

Communicating for Impact is a production of the International Communication Association Podcast Network and is sponsored by the College of Arts and Sciences at the University of South Florida, which focuses on the big questions facing all of humanity. By conducting innovative research to address complex problems, we enhance the quality of life for people and communities. Our producer is Jacqueline Colarusso. Our executive producer is DeVante Brown. The theme music is by Ruhan A Paniyavar. Please check the show notes in the episode description to learn more about me, our sponsor, and Communicating for Impact overall. Thanks for listening.

Transcribed by https://otter.ai
