The Authenticity Crisis: Renee DiResta on How Propaganda Is Poisoning Alternative Data

Episode 27 | February 24, 2026 | 00:56:22

The Institutional Edge: Real allocators. Real alpha.

Show Notes

Your data is only as good as its source. What if the source is broken by design?

In this episode of Institutional Edge, host Angelo Calvello is joined by Renee DiResta, Associate Research Professor at Georgetown's McCourt School of Public Policy and former technical research manager at the Stanford Internet Observatory, to discuss how AI-generated content and algorithmic manipulation are creating an authenticity crisis that threatens institutional investors relying on alternative data. DiResta, who spent seven years as an equity derivatives trader at Jane Street, explains how trust has been reallocated from institutions to influencers, niche creators, and AI bots, producing a collision between volume, velocity, and veracity. For asset managers spending over $2 billion annually scraping the web and social media for data, DiResta warns that AI detection methods cannot reliably verify authenticity, adversarial attacks move faster than defenses, and content moderation has collapsed — making data fidelity a governance and fiduciary duty issue, not just a technical problem.

Renee DiResta is Associate Research Professor at Georgetown's McCourt School of Public Policy, where she studies computational propaganda, disinformation, and information warfare. She previously led technical research at the Stanford Internet Observatory and has briefed Congress, the State Department, and world leaders on influence operations. DiResta led the Senate Intelligence Committee's investigation into Russia's Internet Research Agency and testified publicly on its findings. Before her research career, she spent over seven years as an equity derivatives trader and market maker at Jane Street. She is the author of Invisible Rulers.

In This Episode:

(00:00) Renee DiResta and investment intelligence focus

(05:23) Trust reallocation: how credibility shifted from institutions to influencers

(07:48) The attention economy: 90-9-1 dynamics, algorithms, and content incentives

(12:52) Distrust as default: framing wars and real-time propaganda on social platforms

(16:03) AI, deepfakes, and the collapse of content authenticity verification

(25:41) The alt data danger: managers ingesting propaganda and unverified signals

(33:20) Content moderation collapse, the censorship reframe, and platform failures

(39:42) Signal curation, data fidelity, and fiduciary duty for asset managers
Like, subscribe, and share this episode with someone who might be interested, and please take time to leave us a review!

Dr. Angelo Calvello is a serial innovator and co-founder of multiple investment firms, including Rosetta Analytics and Blue Diamond Asset Management. He leverages his extensive professional network and reputation for authentic thought leadership to curate conversations with genuinely innovative allocators.

As the "Dissident" columnist for Institutional Investor and former "Doctor Is In" columnist for Chief Investment Officer (winner of the 2016 Jesse H. Neal Award), Calvello has become a leading voice challenging conventional investment wisdom.

Beyond his professional pursuits, Calvello serves as Chairman of the Maryland State Retirement and Pension System's Climate Advisory Panel, Chairman of the Board of Outreach with Lacrosse and Schools (OWLS Lacrosse), a nonprofit organization creating opportunities for at-risk youths in Chicago, and trustee for a Chicago-area police pension fund. His career-long focus on leveraging innovation to deliver superior client outcomes makes him the ideal host for cutting-edge institutional investing conversations.

Resources:

Renee DiResta — Website/Bio https://www.reneediresta.com/about/

Book: Invisible Rulers: The People Who Turn Lies Into Reality https://www.amazon.com/Invisible-Rulers-People-Turn-Reality/dp/1541703375

Essay: "There Are Bots; Look Around" (2017, Ribbonfarm) https://www.ribbonfarm.com/2017/05/23/there-are-bots-look-around/

Georgetown McCourt School of Public Policy https://mccourt.georgetown.edu/

Stanford Internet Observatory (former affiliation) https://cyber.fsi.stanford.edu/io

Email Angelo: [email protected]
Email Julie: [email protected]
Pensions & Investments
Dr. Angelo Calvello LinkedIn


Episode Transcript

[00:00:00] Speaker A: The thing that you see with AI is two things start to happen. One, it intersects with the trust crisis in a very challenging way. So first there's a question of: when your eyes see it, can you believe it? Right. And that becomes very, very difficult because the videos are increasingly difficult to differentiate from reality. When I went and looked through it, it took me, again, like I said, about three hours to go through that video and to try to figure out: is this real? Is this decontextualized? What is happening here? By the time I put out an analysis, three hours later, that video has been seen by millions of people. [00:00:39] Speaker B: Welcome to the Institutional Edge, a weekly podcast in partnership with Pensions and Investments. I'm your host, Angelo Calvello. In each 30-minute episode, I interview asset owners, the investment professionals deploying capital, who share insights on carefully curated topics. Occasionally, we feature brilliant minds from outside of our industry, driving the conversation forward. No fluff, no vendor pitches, no disguised marketing. Our goal is to challenge conventional thinking, elevate the conversation, and help you make smarter investment decisions, but always with a little edginess along the way. Hi everyone. Welcome to another episode of the Institutional Edge. I'm your host, Angelo Calvello. And today's guest is someone I've been wanting to talk to for a very long time, Renee DiResta. Now, Renee might not be a household name in the investment world, but stick with me, because her work is incredibly relevant to anyone making decisions based on information pulled from the Internet or social media, which, hey, let's be honest, is all of us. So here's her story. Today, Renee is an associate research professor at Georgetown's McCourt School of Public Policy. Before that, she ran technical research at the Stanford Internet Observatory, which at the time studied how bad actors manipulate information online. I mean, think computational propaganda, disinformation campaigns, state-sponsored information warfare, the big stuff. Over the years, she's briefed Congress, the State Department, world leaders, you name it. In 2018, the Senate Intelligence Committee asked her to lead an investigation into Russia's Internet Research Agency and find out how this agency manipulated American society online. And she testified publicly about this. A year later, she led another investigation into the influence capabilities of Russia's GRU, capabilities they used alongside their hack-and-leak operations during the 2016 election. She's also done stints in venture capital and tech startups. But before all that, Renee spent seven years on Wall Street as an equity derivatives trader and a market maker at Jane Street. And hey, no, she didn't know SBF, in case you were wondering. And, yeah, one other thing, since people on the Internet love this detail: yes, she did intern at the CIA as an undergrad. Renee holds degrees in Computer Science and Political Science from Stony Brook University and has been recognized as a Mozilla Fellow, an Emerson Fellow, and a Presidential Leadership Scholar, and these are just some of her affiliations. So the question is, how did I find Renee? Well, I was researching the explosive growth of asset managers scraping the web and other digital sources for alternative data. Our industry spends over $2 billion a year doing this.
And as Bloomberg reports, almost every quant fund uses machine learning somehow to sweep social media, news articles, earnings reports, you name it. But here's the problem. This dependency on digital intelligence exposes managers to a serious risk. And that risk is they could be ingesting and acting upon bad information. Fortuitously, I found Renee's book, Invisible Rulers, in which she discusses the explosion of misleading digital content. And I'll give you a little heads up here. There's a new edition coming out this spring, and I'm thinking she's going to situate her research in our 2025 post-truth environment. I'm telling you, this baby is a must-read. Today we're going to talk about how online propaganda infiltrates the data streams investors rely on and what this means for making sound decisions in a world drowning in information. Renee, welcome to the show. [00:04:47] Speaker A: Thanks for having me. [00:04:48] Speaker B: Cool. You know, as we prepared for today's recording, you and I discussed an essay you wrote in, I'm gonna say, 2017, "There Are Bots. Look Around." And of course, you know, big fan of the book, no question about it. I found your essay from 2017 to be especially prescient because it discussed how algorithmic manipulation and disinformation campaigns influence markets. And we'll put a link to that in the show notes. In the essay, you wrote that markets can't function without trust, and that trust has been eroded. What's caused this erosion? [00:05:23] Speaker A: So I think it's almost more of like a reallocation of trust. It's not that trust is gone. It's that it's moved to places where people increasingly trust different types of creators, different types of authority figures. I would say if you look at the trends over maybe the last 20 years, you've seen a significant decline in trust in government and trust in media. And some of that is deserved. Right? People are now able to much more clearly see areas where their government has failed or where media has not told the truth. Some of it is exacerbation of that by incentivized actors: one political party, for example, will point out why another political party is untrustworthy. Fox News made its entire brand challenging the mainstream media; even after it became one of the largest, if not the largest, nightly news programs, it still, you know, kind of positioned itself as this little David challenging Goliath. Right. Which stopped being true. Right. It became a massive mainstream program, but it still uses that language of we-are-challenging-the-mainstream. Then you saw the rise of social media, where all of a sudden an entire new class of creators emerged, and they spoke very differently. They talked with their audience instead of to their audience. The way we're talking now, where there's a lot of back and forth. And so your audience trusts you because they listen to you. Right. They hear you. They can reach out to you. You probably respond. Right. The New York Times usually doesn't. You know, so there's just a different degree of responsiveness, of relatability. Somebody who seems just like you, talking about the things that you care about. So media really moved much more down into what I call niches, where specialized creators with deep expertise in a particular topic and often a shared identity with their audience began connecting much more deeply on specific topics, even as trust in authority and trust in mass media was declining at the same time.
So you see trust kind of being reallocated to these other types of figures, and one media ecosystem is ascendant as the other starts to decline. [00:07:25] Speaker B: I was hoping you'd go a little deeper into that. [00:07:28] Speaker A: Do you want me to go further? [00:07:29] Speaker B: Yeah, yeah. [00:07:29] Speaker A: I mean, monologue at you. [00:07:31] Speaker B: No, no, it's okay. I mean, people want to hear you. They don't want to hear me. They hear me all the time. So they want to hear you. And maybe go into, I mean, what you broke out in the book and in other publications is kind of these categories of misinformation, disinformation, propaganda. Yeah. [00:07:48] Speaker A: So what you start to see happen is, as this media ecosystem rearranges, I would say the first thing is that it becomes very participatory. So that means that anybody can become a content creator on social media. And there's this thing that we call 90-9-1 dynamics, where 90% of people are just kind of observers. Maybe they hit the like button, the share button sometimes, but they're still sort of at the fringes of the conversation. They're kind of taking it in, and they become amplifiers. Right. They're sharing information. They become conduits for information, but they're not necessarily actively producing content. Then you've got 9% of people who are producing, right? They're making some commentary, they're producing a few tweets, a few posts, a few Instagram reels. Maybe they're out there a little bit. And then you've got the 1%, right? And that is what we think of as, like, the influencers, the creators, the people who become really influential in that space. And so that's how the new media environment begins to take shape on social media. Everybody could have an equal voice, but it's not that everybody has an equal voice. There is still this division: attention flows to that kind of top 1% of creators. And so there's an incentive system there, right? I use the term propaganda in the book a lot. Because if you think about mass media, right, there was this book written in the 1980s by Noam Chomsky. Manufacturing Consent is the name of it, and it describes the role of mass media and how what it produces is incentivized by certain factors. So mass media might choose to cover power deferentially if it's afraid of losing access to power, right? Maybe you have a senator who's a source. You're not going to want to criticize that senator, because then you're going to lose that source. Mass media might not criticize certain industries. Maybe if all of your advertising funding is coming from pharmaceutical companies or finance, you're not going to criticize those industries quite so heavily, because they're going to pull their funding and put it somewhere else, right? So Chomsky writes this book talking about how media is a product of incentives. And what I tried to do in mine was talk about the incentives of this ecosystem. So you have the incentives of the platforms, which are to keep all of those people, the 90, the 9, and the 1, on site, right? Like, actively on those social networks, paying attention to what is happening on their platform. That's how they reach you with advertising. So the social platform has an incentive. That 1%, that creator, has an incentive. Their content isn't going to be seen unless an algorithm serves it up, right? Unless some social media platform decides to push that content out to people.
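DiResta's 90-9-1 split is easy to make concrete. The sketch below is an editor's illustration only, nothing from the episode: the account names, thresholds, and function are all assumptions. Given per-account post counts, it ranks accounts and shows how the top 1% of creators end up carrying most of the content volume.

```python
# A minimal sketch of 90-9-1 participation dynamics (illustrative only;
# the data, thresholds, and function names are hypothetical).

def participation_split(posts_per_account: dict[str, int]) -> dict[str, float]:
    """Report what share of all posts comes from the top 1% of accounts
    (the creators) and the next 9% (the occasional commenters)."""
    total_posts = sum(posts_per_account.values()) or 1
    n = len(posts_per_account)
    # Rank accounts by output, most prolific first.
    ranked = sorted(posts_per_account.values(), reverse=True)
    top_1 = ranked[: max(1, n // 100)]                               # ~1%
    next_9 = ranked[len(top_1): len(top_1) + max(1, 9 * n // 100)]   # ~9%
    return {
        "creator_share": sum(top_1) / total_posts,
        "commenter_share": sum(next_9) / total_posts,
    }

# Toy population: 90 lurkers, 9 occasional posters, 1 prolific influencer.
accounts = {f"lurker{i}": 0 for i in range(90)}
accounts.update({f"poster{i}": 5 for i in range(9)})
accounts["influencer"] = 500
print(participation_split(accounts))
# {'creator_share': 0.917..., 'commenter_share': 0.082...}
```

Even with made-up numbers, the shape is the point: content volume concentrates in the 1%, which is exactly where the incentive pressure she describes next is strongest.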
And so you see them producing content that is a little bit more sensational than what you'd see in mass media. They have to grab the attention of the niche audience. And so they're always making content not only for the human audience, but for the algorithmic audience also, for the algorithm that is going to take their content and boost it. And so you have this really bizarre thing that starts to happen where, if you ever pull up YouTube and go search for something, you'll see a lot of the thumbnails will look the same. It'll be a person with wide eyes, big face, surprised expression, and big letters, right? And that's because the algorithm privileges that. Now, if the algorithm stops privileging that, they will change their thumbnails and they will change their content to whatever it wants. And so you start to see human creators producing content in response to the incentives of a machine, right? Not only what their audience wants to read, but what the machine wants to read. And then you have the incentives of the crowd, right? We open our social platforms, we take out that phone, we want to be entertained. We want to feel like we're informed, even if we're not really becoming informed. We want to feel like we're seeing content that appeals to us. If you get bored, you're going to drop off to a different app, right? So all of the different apps are trying to keep you there with something that is engaging. Increasingly, it doesn't show you content from people you follow. It shows you random content. It's called unconnected content in the tech industry. Unconnected content is things that the algorithm thinks you will like, maybe because a lot of other people like it who are similar to you, or because a lot of other people on the platform are talking about it right now. So there's all these different incentives that govern what gets produced and then what gets seen. And that's how this attention economy works. And so it's not the way mass media worked, where everybody's roughly seeing the same story at the same time, and it roughly has to hew to facts or else they're going to get sued. Instead, you have this very different niche economy where people are seeing things that are very personalized to them, and content is made to appeal to niche audiences of humans and then algorithmic amplifiers. [00:12:33] Speaker B: I think you did a wonderful summary. It's really the influencers, the algos, the crowd. It's that triumvirate. And, you know, I was rereading the book and some of your other writings, and it seems like you come to the position that distrust is the default position now. [00:12:52] Speaker A: Yeah, that's very true. I don't know how much time you spend on social media yourself. I am very online. I have been since I was a kid, actually, since, you know, AOL and, you know, BBSes emerged when I was in middle school. So I've been very, very online for a long time now. But you open up a social platform. I prefer text-based ones still. I spend more of my time on Threads or on Bluesky or, you know, even reading X just to see what's happening in text-based land. I just don't like watching videos. I feel like they take too much time. But a lot of people do, right? That's where audiences have gone. And when an event happens in the real world, you immediately see a war over what just happened, even if there's video. You can see this play out in, you know, there's been a lot of unrest in Minnesota. Right.
There have been two kind of state-involved shootings. Right. Two killings. And there is video. We can see the video there. This has been recorded from multiple angles, a lot of these different officer-involved, you know, ICE-involved shootings. And yet when that happens, there is immediately a rush by the government and then by activists and then by political parties and then by influencers. Everybody rushes to tell the story of what is in those videos, to get what we call the frame in social science. How do you immediately tell your audience what they're seeing? And this is a propaganda tactic. How do you immediately tell them what they're seeing? You saw, maybe, you know, I don't want to be too political on your podcast, but you saw Kristi Noem say Alex Pretti brandished a gun. And there's no brandishing. The gun's in his back pocket. Right. But you see that language, and that language is intended to convey an image. When you hear "brandished a gun," you imagine a man standing up with a gun in his hand, usually, or waving it around. That's not what actually happened. But they're describing a series of events to you, and it doesn't comport with what's in the video. And then you have to kind of wait and see which influencers from the same political party are going to break and say, that's not what's actually in this video. So we're increasingly now in a world where just because there's video doesn't mean that we all agree on what just happened. [00:15:04] Speaker B: I would add something and maybe kind of take it in a slightly different way. You're talking about not just the old Twitter. We're talking about multimodal stuff. [00:15:14] Speaker A: Absolutely. Yeah. It's on every platform. Yeah, I'm talking about videos, too. Right. [00:15:18] Speaker B: So the thing is, now we have this, how would I say, an explosion of AI. And, you know, I've spent a lot of time in AI myself, and I've seen how much has changed just in the last three years. And it would seem that is also contributing to this position of distrust, but also to just the explosion of all the garbage that's out there. [00:15:43] Speaker A: Yeah. [00:15:44] Speaker B: And if I can just quickly add: it's like volume and velocity kick up. I mean, you were a trader. We're talking about gamma here. So, I mean, what are you thinking about AI? Because maybe when you were on it, you know, when you were in middle school, it was a little bit more staid, I guess. Yeah, no, there's just all kinds of crap out there.
So it's no longer a question of do your. So first there's a question of when you're. When your eyes see it, can you believe it? Right. And that becomes very, very difficult because the videos are increasingly difficult to differentiate from reality. So we can stick with the sort of stories of, you know, ice. For example, there was a video that came out several months, maybe two, three months prior, of an officer in uniform taking a baby away from a mom somewhere in what looked like the Bronx or Harlem, so somewhere in upper New York City. And so this video was interesting because I saw it when it was shared on Blue sky by someone on the left. Right. And it was framed as, you know, ISIS said, separating families. And it's an interesting video because this is a thing that does happen. Right. This thing that happens in the real world. But this particular video was. It was very hard to tell first if it was real or not. And so you have this interesting challenge of is this video authentic or not? And this particular one turned out to be. I spent maybe three hours that night going through kind of frame by frame, looking for indications of whether or not it was real. And that's because. That's because we've hit a point where it is very hard to tell. You used to be able to tell people, oh, look at the fingers. That doesn't work anymore. Oh, look at the writing behind you. You know, look at the writing in the background. It used to be these weird Star Trek type symbols. No, it uses Latin Alphabet now. It even spells things correctly. It's very, very difficult to give people guidelines. And when you tell them, look at these signals, and then those signals are correct, right? The people have the right number of fingers, the letters look good. Then they think, oh, this is real. Right? And that's no longer a signal. Right. So you have this situation where it really becomes very, very difficult to tell if the video is real or not. In this particular case, this video was repurposed footage that had been taken, decontextualized. It was from a different type of arrest entirely. It wasn't ice, it was nypd. And this was a family situation. And so this particular video was taken, clipped, decontextualized, reframed, and pushed back out on Instagram by some Latin American news organizations. That was what had happened in that situation. So then you have this phenomenon of, like, real video that's decontextualized versus AI video where you can't tell if it's real or not. So these are things that you are seeing with your eyes, and you don't know what is actually happening there. The frame that the influencer puts on top of it. In this case, it was a large account saying, this is ICE Separating a mother from its baby, like that is. Or separating a baby from its mom. I think I just said that wrong. But this is. This is a thing that happens, right? So people see it and they think, well, this is the thing that happens. This is plausible. And then they go and they reshare it. So that content goes viral. And so a lot of people are just taking their trust cues, assuming that the person who shared it knows what they're talking about and didn't get deceived themselves. So there's that. That trust component. And then the last piece is, you know, when you do have videos that are real, we saw this a lot around October 7, or images that are real. People will say, no, no, no, that's not real. That's AI. 
And they'll use the fact that AI exists to diminish atrocities that really did happen by just saying, no, no, no, that's not real, that's not real. And once again, an influencer saying that gives permission to their audience to believe that it didn't happen. And so much of what is happening is the war for that framing in that initial couple of minutes when the video comes out. So much happens in those few seconds to establish reality for the audience and the people around it. When I went and looked through it, it took me, again, like I said, about three hours to go through that video and to try to figure out: is this real, is this decontextualized, what is happening here? Those of us who work on that, where, you know, you're doing forensic analysis, by the time I put out an analysis, three hours later, that video has been seen by millions of people. And so it becomes a real challenge to address this. Right. Because so much of it is, do you trust the person who shared it? And does the frame seem plausible to you? And that's what's happening over and over and over again now. [00:21:17] Speaker B: Yeah, it's a good example. I was thinking about AI and deepfakes, and they're getting really good, as you point out. The other piece I was thinking of is that with GenAI, these large language models, it's basically zero cost now. Anybody could go in and use a large language model and create either text or multimodal content. And we see this proliferation of websites that are kind of built by LLMs now, manufactured, perhaps by bad actors. But the other thing that caught my attention was these large language models increasingly train more and more on this Internet data. And the Internet data, I call it slop, you know, kind of going back to that term. But there's the risk of model collapse. You know, there's LLM grooming. So, you know, we're even wondering how well the models that people are using, perhaps to deceive us, might break down. I mean, that was an observation. You don't have to say anything, Renee. That was just me thinking that. I worry about that because a lot of people are trying to bring LLMs and multi-agent systems, you know, into commercial settings. [00:22:29] Speaker A: No, it's very real. There's a couple things that are happening there. First, there is the competition to influence the training data. So that happens on a longer time horizon. These models aren't being trained, you know, daily. But there is that question of, can you get your stuff into the training data? So that's a real thing. This is where, you know, the question of what is a good source is becoming a battleground, right? What is a reputable source? What sources should be included in that training data? Because it does then influence what people learn about the world. And this is becoming a political fight, right? So take Wikipedia, for example. You know, models are trained on Wikipedia, right? And Wikipedia is edited by humans. And Wikipedia has sets of domains that it considers to be disreputable. It doesn't mean that you can't use them. It just means that there has to be a justification for why you picked one domain, which is often unreliable, over another domain, which is more frequently reliable. And that is turning into a war.
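Mechanically, the source wars DiResta describes come down to a lookup: somewhere in a corpus-building pipeline, something decides whether a document's domain is reputable enough to keep. A minimal sketch, with made-up ratings and a hypothetical function name, modeled loosely on the perennial-sources idea she mentions:

```python
# Editor's sketch of how a source-reliability list gates a training corpus.
# The domains, ratings, and API are illustrative assumptions, not any
# platform's real list.
from urllib.parse import urlparse

SOURCE_RATINGS = {
    "reuters.com": "generally_reliable",
    "example-blog.net": "no_consensus",
    "deprecated-tabloid.example": "deprecated",
}

def keep_for_training(url: str, justification: str | None = None) -> bool:
    """Keep reliable domains; allow contested ones only with an explicit,
    auditable justification; drop deprecated domains outright."""
    domain = urlparse(url).netloc.removeprefix("www.")
    rating = SOURCE_RATINGS.get(domain, "unknown")
    if rating == "generally_reliable":
        return True
    if rating in ("no_consensus", "unknown"):
        return justification is not None  # must be argued case by case
    return False  # deprecated: excluded no matter what

print(keep_for_training("https://www.reuters.com/markets/story"))     # True
print(keep_for_training("https://deprecated-tabloid.example/story"))  # False
```

Whoever edits that ratings table edits what the model, and everyone downstream of it, learns, which is why the list itself becomes the battleground.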
I mean, even in Congress, you're starting to see senators send letters, you know, accusing Wikipedia. Sorry, they're sending letters to the Wikimedia Foundation, which does not control the content of Wikipedia, but nonetheless asking, why is this happening? Right? So there's no centralized person for them to yell at. So they mostly just kind of yell at the foundation or yell at Jimmy Wales. I did an interview with him for Lawfare. It was a very interesting conversation, because these really are the source wars. The stakes are what gets into the training data for the model. This is why Elon created Grokipedia, which has a very different corpus of approved sources. Which means that, for whatever reason, he's chosen to treat Infowars as a reliable domain. This creates a different problem. I mean, I don't know if you've ever read Infowars, but you work in the markets, right? I did too. I remember on Bloomberg terminals, you had a very approved set of news sources, where if you were going to get your news in a red-headline format, where you're going to actually trade off it or make a decision off it, accuracy is paramount. I mean, this is actually one of the origins of established mass media journalism: people, you know, hundreds of years ago saying, we want accurate information about industry so that we can make investment decisions. That is one of the reasons why you start to see journalism professionalize. And so, you know, imagine Bloomberg deciding to just push you whatever this random person on this random blog wrote, and then, you know, your machine reading it and taking an action based on it. People would be horrified. So in areas where information is still tied to stakes, right, where you're still staking capital on information being accurate, people will absolutely understand why Infowars is not a thing that you want in your reputable domain list. But in the culture wars, it is nonetheless showing up as a thing to fight about. And so you are seeing it show up now in Grokipedia, the competitor to Wikipedia. [00:25:41] Speaker B: The problem that I see from our industry's perspective is you've got more and more managers scraping the web, social media, trying to get some insights. Right? I mean, you know, it's all about edge. And your edge can only come from one of two places: either you have superior information, or you've got a superior process to extract actionable insights from that information. And okay, so we've talked about how there's this erosion of trust. We've talked about how there is slop, propaganda that is becoming hard to distinguish as true or not, to use the deepfake analogy. I mean, the problem for managers is they're pulling this stuff in because they're looking for alternative data. It's different than when you were trading high frequency. Now we're looking for alt data. I'm not sure, you know, how they're able to discern the fidelity of the data they're bringing in. Yes, Bloomberg, that's easy, but everybody has a Bloomberg. So they're looking outside of that. And they're not just looking at kind of like news sources. I mean, it's social media. It just plays such a big part because of sentiment analysis. So this is what I wrote about, that they're facing this problem of truth. And, you know, candidly, again, volume, velocity, too great for a human to try to figure out.
Because you want to move quickly. Because if you do have an edge with information, you're not going to wait 24 hours, you're going to act. And then the second thing is AI. There is no AI detection methodology that's foolproof right now, because it seems like the, I'll call them the bad actors, are moving faster than AI detection. [00:27:23] Speaker A: Yeah, and that's kind of always going to be true. That's the Red Queen problem: you're sort of running in place, trying to keep up with the adversary. But I remember, gosh, what was it? This must have been, was it three years ago now, two years ago? There were a couple of fake accounts on X, one pretending to be a Bloomberg property, if I recall correctly, that tweeted a picture of the Pentagon, but it looked nothing like the Pentagon. Right. And the physics were wrong, the pillars were wrong, it looked nothing like the Pentagon. But nonetheless, some systems traded off that. Do you remember this? [00:27:57] Speaker B: I do, absolutely. Yeah. [00:27:59] Speaker A: And this is one of those moments where I look at it as somebody on the outside thinking, like, well, that's, you know, kind of dumb money right there. But it's an interesting situation, because one second of human review, looking at that photo, makes very clear that the Pentagon was not attacked. That's not the Pentagon on fire. It reminded me a lot of, oh, boy, what would it have been, 2015 or so, I think, when Russia ran a little influence operation pretending that a chemical plant in Louisiana had been attacked, and sort of blew up the Internet with a bunch of allegations, you know, photos of this factory supposedly under attack. And the chemical company CEO came out saying, I don't know what they're talking about. [00:28:45] Speaker B: I remember this. [00:28:46] Speaker A: There was no attack. Right. [00:28:47] Speaker B: Yeah. [00:28:47] Speaker A: So you do start to realize that people will kind of jump to conclusions, and you will see this move things. And unfortunately, AI makes it much more convincing. But in that particular case, the content was so bad, it was more interesting, almost like a pen test of how many machines would kick on in response to it. [00:29:10] Speaker B: Sure. [00:29:10] Speaker A: So I think what it showed was that it's a vulnerability; even in that case, even bad AI and a headline, bad AI and a tweet, were enough to kind of move the market. And it's only going to get better. Because, as you note, the problem with detection, and I don't know how much you want me to go into the science behind detection, but one of the things that is happening is that there are bodies that are trying to come up with standards. One of them is called C2PA, and this is a standards effort that is trying to say: we want to credential content to show where it comes from. It was created on this device, and you have this sort of chain of edits, so you can see if something was manipulated or changed in some way. Another is that some models will indicate that content has been generated with their model. Gemini does this, right? Nano Banana. So there's a couple of them that do this.
And then the idea is that these credentials, whether C2PA or some of the invisible watermarks that the AI generators produce, are then detectable, or readable, I should say, by social media platforms, who then could surface that the content is AI-generated, or can surface the provenance information from the credentials. So if you ever go to a search engine, you'll see sometimes that little "i," you know, with a circle around it, and you can hover your mouse over it. It'll tell you more information about the piece of content. So that's kind of what they're doing. They're trying to say, can you surface more information that a human can go and look at? This is called assertive provenance, where you can go and see more information about the credentials behind the image: where it comes from, where it went, how it was edited. One of the things that started to happen, though, in the early days of content labeling, this was maybe a year and a half ago now: Meta tried this, and I remember they tried it on Threads, where it began to surface AI-generated content. And it said, like, "Imagined with AI," I think, was the label that they put on it. But I remember one incident where it surfaced a photo of Mount Fuji. And this was a photographer who had actually taken that photo, but then he'd gone into Photoshop and he had used one of the AI editing tools to remove some lens flare. You know, so very minor editing that nonetheless led to this photo of Mount Fuji surfacing as "Imagined with AI" content. And this creates kind of an interesting question, because it highlights the challenge, which is: at what point is there a sufficient amount of editing that it should surface this, right? And it's not necessarily volume. What you actually want to know is that the context has been changed. You want to know that you're being manipulated. That's what the user wants. They don't want to see "Imagined with AI, Imagined with AI, Imagined with AI" any time somebody applies a filter to their face in a selfie, right? Because then you're going to have, like, label fatigue, label blindness. You're going to stop paying attention to it. You want to know when somebody has Photoshopped in images of, like, drugs next to a person, right? You want to know when something explicitly manipulative has been done. And it's very hard to do that. So the problem is, what people want labels or watermarks to do and what they can do from a technical perspective are just not matching up. It's not only a problem of the technology; it's also a question of what are you conveying and how much information does the person or the technological system actually need. The question is: am I being manipulated? Has the context been changed? Is this real, or is this true? That's what people are trying to get at. Not just, is this AI-generated? Because "is this AI-generated" is going to return an overwhelming amount of yes as more and more people use it to make, you know, random art and stuff. [00:33:08] Speaker B: You're talking about watermarks. I've read some stuff where technology just gets around that. I mean, the adversarial attacks are fast enough. [00:33:16] Speaker A: And that's the other piece of it. You screenshot the image and you've just stripped out a bunch of stuff.
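Her screenshot point is easy to demonstrate. Provenance that lives in metadata, as C2PA manifests and plain text chunks both do, does not survive a pixels-only re-save. A minimal sketch with Pillow, using an ordinary PNG text chunk as a stand-in for a real credential (this is not the actual C2PA format, just the failure mode):

```python
# Editor's demonstration that metadata-borne provenance dies on re-encode.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create an image and attach a provenance-style metadata chunk.
img = Image.new("RGB", (64, 64), color="gray")
meta = PngInfo()
meta.add_text("provenance", "created-by: some-generator; edits: none")
img.save("credentialed.png", pnginfo=meta)
print(Image.open("credentialed.png").info)  # {'provenance': 'created-by: ...'}

# "Screenshot" it: keep only the pixels and save a fresh file.
pixels_only = Image.open("credentialed.png").copy()
pixels_only.save("screenshot.png")  # no pnginfo passed, so no text chunks

print(Image.open("screenshot.png").info)  # {} -- the credential is gone
```

Invisible watermarks embedded in the pixels are meant to survive this particular attack, which is their whole motivation, but as the exchange above notes, adversarial re-encoding and editing chip away at those as well.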
[00:33:20] Speaker B: But the other piece you haven't talked about is content moderation by the platforms themselves. There's clearly been an erosion in their commitment to content moderation. We did have humans standing on the parapet at one point, and of course they were overwhelmed, again by volume and velocity. And then I'll say, because you mentioned earlier the incentive structure for platforms: I mean, they're not incentivized to try to come up with the truth. I mean, it's all about virality. It's hard for an old Italian guy to say, because it's not a word I grew up with, but virality, am I right? So they have this kind of skewed incentive structure to just keep, you know, putting out what trends, I think. And if I remember, you had a nice saying that I can't quite recall: if it trends, then it's true. Is it something like that? [00:34:10] Speaker A: If you make it trend, you make it true. [00:34:12] Speaker B: Yeah. [00:34:13] Speaker A: That was back in, like, 2015, trying to make that point. Yeah. So content moderation, it became very politicized. That's what happened there. And just to explain: content moderation refers to, roughly speaking, three buckets of interventions. A platform can take something down, right? Where it decides, that's it, this egregiously breaches our policies and we're going to take it down. It can reduce the distribution, where it says, we're not going to push this out to as many people. So, like what we were talking about earlier, where influencers are constantly trying to get their stuff to be seen, the platform can decide, you know what, this content we're not going to recommend. For example, I remember talking about this in the context of, you know, the measles outbreak in California in 2015. Like, why is the platform actively recruiting people into anti-vaccine groups, people who've never gone searching for that content? Like, what are the ethics behind a recommender system? What do we think it should be recommending? Should it be recommending entertainment content? Should it be recommending political content? Should we be giving users a lot more control over that? That's what I believe. But until that happens, what should the platform's ethical system be? I remember also being recommended QAnon content. I remember being recommended Pizzagate content. And, you know, the idea that a platform is actively doing what is essentially digital cult recruitment is also a very bizarre thing. Like, why is it pushing that out? It's fine to have it on the platform. Nobody's saying take it down. But it is a question of what it should be actively pushing out. Where should those lines be? So that's that sort of information throttling or recommending. You can either think about it proactively, in terms of what it curates, or from a moderation standpoint, what it decides to then downrank, right? And those are the two sides of that coin. And then finally there's what the platforms call inform. That's the third bucket of content moderation. And that is them deciding to put a label on something. And the label might be a fact check, right? It might be an explicit fact check, where some fact-checking entity, or now a community note, says: this isn't true; here's the accurate information.
Or it might even be what Twitter and Facebook did a lot during the 2020 election and during COVID, which is just to say: this information is disputed; for more information, go here. And then they might point you to a more authoritative domain. So you see some content, it's making a claim about a COVID vaccine, and then there's a little line that says, this claim is disputed, for more information go to, and then it directs you to the World Health Organization or the CDC or Children's Hospital of Philadelphia or whatever. And so those are the three buckets. That's how content moderation works. But that got reframed. All of it kind of got stuck under the bucket of censorship. And that was a very effective reframing technique by the people who did it. It was actually election deniers who really ran that playbook quite well. And they argued that even the labels, even the labels saying, you know, mail-in ballots generally don't experience very much fraud, for more information about mail-in ballots go here, and you could even link out to, like, Cato or Heritage, which also had really great papers on the safety and efficacy of mail-in ballots. But that was reframed, all of that, all three buckets were reframed as censorship. And once that happened, and then once the House flipped and some of the congressmen, like Jim Jordan, who was an election denier, got his gavel and began to investigate the companies, all of a sudden they were constantly being subpoenaed. And in combination with that, when it became, you know, fairly clear that President Trump had a good chance of being reelected, you started to see them really move backwards and say, we're just going to do nothing, because if we do nothing, then nobody can yell at us. And of course, doing nothing is a choice also, right? When you manage a system that directs the attention of billions of people, you are making a choice by saying, we're just going to have it be a free-for-all and whoever produces the most sensational stuff wins. You've also just made a choice right there. So it's an interesting question. I've advocated for the better part of seven years now that users should have a lot more control, and that it's actually kind of unreasonable that platforms are the ones that steer curation and play the attention game and shape the incentives. But as long as you have centralized platforms, I do think this question of where the ethical lines should be is a critical one. [00:38:51] Speaker B: See, that's where I'm kind of stuck. I'm going to go back and say: I'm an asset manager. I believe I can find an edge somewhere in this information ecosystem. I recognize, because there are surveys, that managers see there is a lot of propaganda out there, but they still want to go for it. They also recognize the adversarial AIs are moving faster than their detection mechanisms. So they've got this combination of humans and AI, and they're trying to sort this out. They don't want to give this up. They cannot count on the social media platforms to self-regulate and say, we're going to be good at this. Government is aware of this, but they're not going to do anything. I mean, I hate to say it that way, but no, they're not. Especially after your comments there about censorship, that's the last thing they want to do. But go ahead. You were going to say something.
[00:39:42] Speaker A: Well, I was going to say, you probably don't want them to do a whole lot. It shouldn't be the business of government to regulate content on a private platform, in my opinion. I mean, they are private companies. We've allowed them to hit this level of being, quote, the public square. I think that by itself was bad. I think we should also be advocating for market reforms that allow for the creation of a lot more companies. Because when you only have three, right, and for a while, particularly back around the 2018 timeframe, before TikTok and some of the others, you really only had three places to go on social media, there was more of an argument that if this platform moderated in a way that you didn't like, you were somehow out of the public conversation. That led to the creation of Truth Social, Parler, Gettr, Gab. Right. Really, the rise, it started on the right, of platforms that explicitly marketed themselves as being for conservatives. Right. We will appeal to conservatives in the following ways. We will moderate in ways that they like. And one of them, Rumble, I love their content policy, they have a rule in there that says, like, no content glorifying antifa. Fine. That's fantastic. Right? This is private markets, man. [00:40:56] Speaker B: Right, right. [00:40:57] Speaker A: Go out there, make your rule. And I mean, Alex Jones has a terms of service for Infowars. Like, you know, again, these are private companies. They are not public squares. And I think that allowing the sort of big three to reframe themselves in that way is actually terrible. That's not what they are. And so creating more opportunities for protocol-based platforms, for smaller platforms, for, you know, rethinking what that ecosystem looks like is actually net beneficial for everybody, because it does reduce that stranglehold that they have on attention and communications. [00:41:30] Speaker B: I mean, that's a nice view. I think there's a lot of friction to get there. [00:41:36] Speaker A: Oh, there is. No, no, I know, I know. But here's the counterpoint to people hoping that the government is going to come in and weigh in on content moderation decisions. I don't know how many people have followed the hypocrisy of the censorship conversation. You know, there were court cases in 2022, 2023 or so; there was one called Murthy v. Missouri, where the Attorney General of Missouri sued the Biden administration, alleging that Biden was trying to suppress COVID content. Right. Jawboning is the term for when the government reaches out to a private company and tries to make the private company do something that it wants, and it's, you know, a violation of the First Amendment. That case was ultimately tossed. There was no evidence that that had actually happened. But again, the framing was very convincing. Now, with the Trump administration in office, you see the Trump administration reaching out, as reported in the New York Times just this past weekend, actively trying to get the names of people who run anonymous accounts criticizing ICE. Right. That is an incredible step beyond what was alleged about the Biden administration. Nobody made the allegation that they were trying to unmask anonymous speech. You also see them trying to get apps taken out of the App Store and groups taken down off Facebook. And more importantly, you see the App Store and you see Facebook complying. Right.
So again, the hypocrisy of this conversation is that once the government weighs in and applies pressure, this is not something that anybody of any particular political persuasion or party should want. And so what you want to see is transparency regulation that says those requests should be transparently disclosed. [00:43:14] Speaker B: I've got two more questions for you, because you've been generous with your time and I want to make sure you can get back to the kids and maybe you've got a day off. So you've described, I think where we kind of started, that distrust is the default position, in many ways because of all of the, I'll just use the big word, propaganda that's out there. It's hard to separate the gold from the dross. Yet you still see managers using this alt data. The problems with volume and velocity; AI detection mechanisms, forget about it, not that good. So what can managers do to ensure the fidelity of the data they're ingesting from this ecosystem? [00:43:52] Speaker A: It's an interesting question. I've wondered about it, you know, since leaving. You know, I was there through the financial crisis and then the European debt crisis, and then I left and went to Silicon Valley and, you know, started a company and did the tech thing for a while. I went into academia late, not until 2018. So I'm sort of like a half academic, you know. It's really hard, I think, to derive good signal. Discernment is really important. I don't know, I'd be very curious to hear from people who still find X useful for signal at this point. Because here, let me connect it to incentives. Right. Once Elon allowed for the monetization of X content and made it so contingent upon blue checks talking to blue checks, right, that was one of the monetization models. I think they changed it about two weeks ago now, but for a while there it was like blue checks replying to you as a blue check, and then, you know, you had to get a certain number of impressions per month. So first you had to be kind of sensational and big, and then you had to produce content that was going to inspire responses. And you would see these accounts that knew nothing about the topic at hand weighing in, being very authoritatively loud and wrong. But they were so incentivized to be first, because that was what was going to draw the conversation and the replies, and that was how they were going to make literally tens of thousands of dollars a month. Just bullshitting. But as long as they were there, that was what was going to shape the conversation about that incident. And real-world events are still what people go to X looking for information on, because no other platform does that as well, right? Threads, the algorithm is weird; it doesn't surface recency very well. Bluesky is small and niche still, comparatively. Instagram and, you know, TikTok, you're very slotted on those platforms. It decides who you are, it's curated in very, very particular ways, and you're going to see a lot of whatever it decides you are interested in. And so X is still really one of the few platforms where you are theoretically getting this breaking news content. The problem is it's gotten increasingly unreliable as a function of the incentives.
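One way to operationalize the curated list she describes next is to score each account's history on accuracy first and speed second, and admit only high scorers into the signal feed. A minimal sketch; the record format, weights, and thresholds are the editor's assumptions, not anything from the episode:

```python
# Editor's sketch of reliability-weighted source curation for a social
# signal feed. Weights and record format are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ClaimRecord:
    was_accurate: bool    # did the claim hold up after verification?
    minutes_early: float  # lead time versus confirmed wire reporting

def account_score(history: list[ClaimRecord],
                  accuracy_weight: float = 0.8,
                  speed_weight: float = 0.2) -> float:
    """Blend accuracy and speed, favoring accuracy: an account that is
    fast but 'authoritatively loud and wrong' should score near zero."""
    if not history:
        return 0.0
    accuracy = sum(r.was_accurate for r in history) / len(history)
    # Cap the speed bonus at a 60-minute lead, normalized into [0, 1].
    speed = sum(min(r.minutes_early, 60) / 60 for r in history) / len(history)
    return accuracy_weight * accuracy + speed_weight * speed

fast_but_wrong = [ClaimRecord(False, 55.0)] * 10
slow_but_right = [ClaimRecord(True, 5.0)] * 10
print(round(account_score(fast_but_wrong), 2))  # 0.18 -- drop from the list
print(round(account_score(slow_but_right), 2))  # 0.82 -- keep on the list
```

The weighting encodes the trade-off she recalls from the Bloomberg-versus-Twitter days below: speed is worth something, but only accuracy makes the speed usable.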
So you'd have to do it by having some kind of really excellent list where you have curated it yourself, I think. And you've decided these are accounts that are both, you know, rapid and reliable, and not getting caught up in the bad incentives of trying to get their stuff seen more by being sensational. [00:46:44] Speaker B: You seem a little uncomfortable with that answer. That may not be the best answer if you were in the chair trading again. [00:46:50] Speaker A: I know. Well, you know, we didn't use Twitter data in 2011 when I was there. I would, like, occasionally have it open. I remember, you know, I had like six monitors or whatever, and I would have a Twitter window open sometimes around breaking news events just to see, you know, when somebody was giving a press conference, which would be faster, actually: would Twitter be faster, or would the Bloomberg red headlines be faster? And it was always Twitter. It was just that the Bloomberg red headlines were generally more reliable. And so there's this trade-off of, is the edge more from the speed or from the reliability? Different strategies would treat that differently. [00:47:26] Speaker B: Yeah, but back then you didn't have [00:47:27] Speaker A: all the slop, you didn't have all the nut jobs. Yeah. [00:47:29] Speaker B: I mean, you didn't have all the slop, you didn't have AI like you do today. And I think the incentives were different for the platforms. [00:47:37] Speaker A: I think the incentive change is the big thing on Twitter, really, like, monetizing the feed. I get why they did it. Like, I believe the creators should be paid, but man, did it make it. [00:47:47] Speaker B: Yep. [00:47:48] Speaker A: Because there's nothing where they say, like, you have been so wrong so many times that we're now going to demonetize you. [00:47:55] Speaker B: Yeah, right. [00:47:57] Speaker A: Because they would call that censorship. That's the thing. Once you allow that word to be applied to anything, any remote, tiny little bit of friction, you kind of box yourself into a corner where you make it impossible to do any basic maintenance of the information environment. And so it becomes useless for people who are prioritizing accuracy and informedness, and it becomes just about expression for the sake of expression. And so, you know, I find the whole thing kind of fascinating on that front. I don't know, we haven't talked about Reddit or any of the ways in which you do see these, like, niche communities. I wrote a little bit about GameStop and the kind of stonks phenomenon, you know, the ways in which you started to see this interesting model of small groups coordinating, basically. Right. Sort of communicating in these groups to do these short squeezes and things like that. That was kind of a fascinating dynamic, watching that shift. Robin Shiller, Robert Shiller, sorry, wrote about it in his book Narrative Economics. I think also this question of perception changes, right? Like, where does that perception shaping happen? And Wall Street Bets and some of these other groups that really kind of came out to try to be centralized hive minds that go and act in unison, that was kind of an interesting thing to see. Also the idea of the online crowd making itself a force in that space. [00:49:21] Speaker B: Well, that takes us into prediction markets in some ways, when you start talking about that.
I did a piece with Robin Hanson. [00:49:27] Speaker A: Wild thing, too. [00:49:28] Speaker B: Yeah, I did a piece with Robin Hanson last week. He's kind of, I'll say, the godfather of prediction markets. [00:49:35] Speaker A: We used to actually have him every summer at Jane Street. He would come and give a talk. And I remember I did the Good Judgment Project back in the day. I was a participant in the early phases of that, doing the prediction markets, right, the sort of superforecaster identification process. I was not a superforecaster. I didn't give it enough time. I was always like, oh shit, I have to get my numbers in. But the people who do, man, it was kind of wild to see. Now I think the challenge is, once you can create prediction markets in everything, you are starting to see the fraud really come in in force, right? Create the prediction market, force the conclusion. And that has been kind of wild to see also. [00:50:17] Speaker B: So one thing before I ask my final question. STONK. Yes, let me just be clear: I hold that ticker. No kidding. I have two tickers that I hold, STONK and LSTM, because when we were doing our deep learning stuff, we used an LSTM. So, I mean, nobody knows what it is, but. Okay, so my perspective here is it really falls to the asset owners, that is, the pensions and endowments from my universe that invest with managers, to push the managers on this point and have the managers explain how they're able to determine the fidelity of the data, and to recognize that there's going to be some crap that gets through no matter what. But it's the allocators that are, at the end of the day, paying those fees to the managers, and they need to know what kind of safeguards are in place. This is no different than any other governance issue that's out there. Just like you have a compliance manual, you have a personal cell phone manual, all that stuff. So I'm going to stop there, but then I'm going to ask you my final question. What was the best trade you ever made? Back in the day when you were on the desk, what was the best trade? [00:51:30] Speaker A: Oh boy, you know. [00:51:33] Speaker B: And it could be one that was the most fun. It doesn't have to be the most profitable. [00:51:37] Speaker A: So, you know, we didn't, it was so fast. I think it was more like index rebalances and things like that were always the, you know, betting on what was going to be, and, you know, yeah. [00:51:48] Speaker B: What they're adding or taking out. [00:51:49] Speaker A: Adding and taking out, yeah, was a big piece of it. So not so much long-term, you know, we didn't hold anything. [00:51:56] Speaker B: No, no, you guys were. [00:51:57] Speaker A: Yeah. I mean, I don't know, things have changed. It was, gosh, 200 and something people when I left. It's huge now. But yeah, I think the index rebalances were always some of the most fun. [00:52:12] Speaker B: Prediction markets today could help you with that decision, because people will bet what's coming in, what's coming out, and you can kind of look at it. And there could be manipulation there as well. [00:52:21] Speaker A: Yeah, yeah. I remember 2008 was wild, because I was trading Brazil at the time, and the limit downs were different, the controls on the real, the Brazilian real, were different.
[00:52:21] Speaker A: Yeah, I remember 2008 was wild, because I was trading Brazil at the time. The limit downs were different, the controls on the Brazilian real were different, and so much would just halt. The ADR would keep moving while the underlying stock halted; things were halting constantly at that point. Bear Stearns was our prime broker, and that was a whole thing. I remember it happening: I was at dinner with some of my colleagues, actually, on a Sunday, and we got a text message saying Bear Stearns is getting taken over for two dollars and something a share, come in to change what we're routing through and everything. Wild times. 2008 was really something, particularly because you also had to be constantly aware of which names were on the short-sale ban lists and the various complexities of certain industries.

[00:53:16] Speaker B: You couldn't short ETFs in some cases. They wouldn't let you short ETFs.

[00:53:20] Speaker A: I was not on the ETF desk at that point; I was still on Latin American equity derivatives in 2008 and didn't move to ETFs until 2009, when I changed desks. But I actually missed the flash crash because I was sitting in a damn airport. I'll never forget that. It was one of the few vacations I took, and I was sitting in the airport when it happened, watching a delayed flight, thinking, man, what a day to miss work.

[00:53:48] Speaker B: Well, I have a similar story. I did my postdoc, came back to Chicago, and was a floor trader, trading the OEX at the time. This is 1987. I took a vacation right there in October of '87; I was pheasant hunting in South Dakota. I got off the plane and saw the headline: Dow down 508. I said, there must be a typo, and walked right by it. I missed the whole thing; I couldn't do anything. That was crazy. Well, Renee, this was great. I appreciate you.

[00:54:20] Speaker A: Thanks for having me on.

[00:54:21] Speaker B: I appreciate you playing along here. This was great.

[00:54:24] Speaker A: In all seriousness, when I wrote that essay in 2017, I was thinking about financial markets: there are these multi-tiered systems of regulation, with different people responsible for different types of risk. There is really nothing commensurate on the tech and information markets; it's just a complete free-for-all, sort of. And it's still like that, eight years or so later.

[00:54:48] Speaker B: Yeah. It's interesting, because it takes us back to prediction markets and the intersection there: there's a lot of AI in those thin markets, and yet we want to see regulation there somehow, because we don't want insider trading and manipulation.
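[Editor's note: one worked example of the ADR dynamic described a few turns above, where the ADR keeps printing while the local line is halted. An ADR's fair value is the local price converted through FX and the shares-per-ADR ratio, so against a stale local print the live ADR quote reads as an implied move. All numbers below are hypothetical.]

```python
# Hypothetical ADR/local fair-value check. With the local line halted, the
# local price is stale, so the gap below reads as the market's implied move.
def adr_fair_value(local_price_brl: float, brl_per_usd: float,
                   local_shares_per_adr: float) -> float:
    """USD fair value of one ADR from the local (BRL) price and FX rate."""
    return (local_price_brl / brl_per_usd) * local_shares_per_adr


if __name__ == "__main__":
    fair = adr_fair_value(local_price_brl=25.0, brl_per_usd=2.0,
                          local_shares_per_adr=1.0)   # $12.50 per ADR
    adr_last = 11.25                                  # ADR kept trading
    implied = adr_last / fair - 1.0
    print(f"fair ${fair:.2f} vs. ADR ${adr_last:.2f}: "
          f"{implied:+.1%} implied move on the halted local line")
```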
Thanks for listening. Be sure to visit P&I's website for outstanding content and to hear previous episodes of the show. You can also find us on P&I's YouTube channel; links are in the Show Notes. If you have any questions or comments on the episode, or suggestions for future topics and guests, we'd love to hear from you. My contact information is also in the Show Notes, and if you haven't already done so, we'd really appreciate an honest review on iTunes. These reviews help us make sure we're delivering the content you need to be successful. To hear more insightful interviews with allocators, be sure to subscribe to the show on the podcast app of your choice. Finally, a special thanks to the Northrup family for providing us with music from the Super Trio. We'll see you next time. Namaste.

[00:55:55] Speaker A: The information presented in this podcast is for educational and informational purposes only. The hosts, guests, and their affiliated organizations are not providing investment, legal, tax, or financial advice. All opinions expressed by the host and guests are solely their own and should not be construed as investment recommendations or advice. Investment strategies discussed may not be suitable for all investors, as individual circumstances vary.
