A Framework for Fiduciary Innovation

August 13, 2025 00:33:27
The Institutional Edge: Real allocators. Real alpha.

Show Notes

What's the biggest mistake asset managers make when implementing AI?

This week on Institutional Edge, host Angelo Calvello sits down with Peter Strikwerda, Global Head of Digitalization and Innovation at APG Asset Management, the Dutch pension giant managing €616 billion in pension assets. Peter shares APG's framework for implementing AI while maintaining fiduciary responsibility: how the firm overcomes status quo bias, manages the reality that four out of five AI experiments fail, and transitions from rule-based to principle-based governance. He offers practical guidance for asset owners navigating the tension between innovation imperatives and fiduciary obligations, emphasizing that AI isn't just another tool but a transformative technology that requires strategic embedding throughout the organization.

In This Episode:
(00:00) Introduction: Peter Strikwerda and APG Asset Management
(02:09) Peter's mandate at APG and AI's role in digitalization
(05:33) Responsible experimentation: Creating structured environments within fiduciary boundaries
(15:44) Governance evolution: From rule-based to principle-based AI oversight
(22:33) Risk-adjusted innovation: Managing experimental failure rates and value assessment
(26:39) Scaling methodology: Progressive frameworks from experiments to enterprise deployment
(29:00) Framework for fiduciary innovation and guidance for peers
(31:09) Worst vendor pitches and enterprise-grade standards
Like, subscribe, and share this episode with someone who might be interested!

Dr. Angelo Calvello is a serial innovator and co-founder of multiple investment firms, including Rosetta Analytics and Blue Diamond Asset Management. He leverages his extensive professional network and reputation for authentic thought leadership to curate conversations with genuinely innovative allocators.

As the "Dissident" columnist for Institutional Investor and former "Doctor Is In" columnist for Chief Investment Officer (winner of the 2016 Jesse H. Neal Award), Calvello has become a leading voice challenging conventional investment wisdom.

Beyond his professional pursuits, Calvello serves as Chairman of the Maryland State Retirement and Pension System's Climate Advisory Panel, Chairman of the Board of Outreach with Lacrosse and Schools (OWLS Lacrosse), a nonprofit organization creating opportunities for at-risk youths in Chicago, and trustee for a Chicago-area police pension fund. His career-long focus on leveraging innovation to deliver superior client outcomes makes him the ideal host for cutting-edge conversations on institutional investing.

Resources:
Peter Strikwerda: https://www.linkedin.com/in/peter-strikwerda-886610/
Email Angelo: [email protected]
Email Julie: [email protected]
Pensions & Investments
Dr. Angelo Calvello LinkedIn

Episode Transcript

[00:00:00] Speaker A: The thing is, so when we compare the computer to the humans, we can apply rigorous processes on both ends, but what we're running into is a status quo bias effect. So we apply higher standards to the AI running an investment process than the humans. We are very concerned about biases and hallucinating AI. Well, humans also make mistakes, right? [00:00:23] Speaker B: Hi everyone, I'm Angelo Calvello. Welcome to The Institutional Edge: Real Allocators, Real Alpha, a podcast and YouTube series in partnership with Pensions & Investments. My guest today is Peter Strikwerda, Global Head of Digitalization and Innovation at APG Asset Management. And Peter is the ideal person to discuss today's topic, a framework for AI-driven fiduciary innovation. For listeners unfamiliar with APG, also known as the All Pensions Group, APG is one of the world's largest pension investors, and it's based in the Netherlands. Peter and his colleagues manage approximately €616 billion, or $700 billion, in pension assets. You can find a link to Peter's LinkedIn profile in the show notes. Peter, welcome to the show. [00:01:12] Speaker A: Thank you, Angelo. Thanks for having me. Pleasure to be here. [00:01:15] Speaker B: I really appreciate it. Let's jump in and get to the basics. What's your mandate at APG and how does AI play a role in your job? [00:01:26] Speaker A: So I'm looking after our digitalization agenda and our innovation agenda, and the two obviously are closely linked, yet a little bit separate in their approach. So on digitalization, it's a large five-year transformation that we go through as an organization, which has a lot to do with becoming highly data-driven, a little bit more traditional change approaches, if you will. On the innovation side, it's more experimental; we take more risks, we try to shift the boundaries a little bit more, and that's where typically we are scaling up AI.
[00:02:04] Speaker B: So how has APG adopted and integrated AI into its investment decision making? And I guess I'll say, you know, what digital tools are you using to create business value? [00:02:16] Speaker A: We are in the process of integrating it and scaling it still. So we are not large-scale users. We are experimenting at scale and on the verge of implementing it at scale. And I'm pretty sure we will get into some of the details there in a minute, why it's going like that. The digital platforms that we're using, we now focus on Copilot. We are a Microsoft shop, and OpenAI in our own instance. [00:02:44] Speaker B: Are you using a lot of third-party technology and are you building stuff internally as well? [00:02:52] Speaker A: Yeah, it's a bit of a mixture in a broader sense. We use a lot of third-party technology, also given our nature, being part of investment chains with the traditional big banks, brokers, custodians, et cetera. So we also use their platforms. For example, when you look at AI, what we build is in the OpenAI sphere. So that's where you build your own models. And that's also the difference with the Copilot applications, where you basically have a standard tool set that you deploy on your processes. So it's a little less built; it's more deploying and embedding. But I think it is important to stress that we as a company, I think you could say we are tech heavy. We're not a tech company, we're tech heavy. Tech and data, it's always been that way in investing, right? It's always been about data and using information wisely. So I think for us, we should not try to be the best developers or builders in the area, but we are the ones that should try to deploy the best business logic to the available technologies. And that's also where we need to partner up with external parties where the supply is. [00:04:05] Speaker B: What's the vetting process like for these third parties? [00:04:08] Speaker A: Oh, it's extensive.
So we have standard AI, sorry, IT policies for anything that we buy. We have third-party management integrated frameworks where our suppliers need to, let's say, comply with a lot of things to be able to work with them. And that varies from obviously cost and functional things, but also to their own responsible behavior, risks, et cetera, et cetera. [00:04:40] Speaker B: Is data privacy top of mind? I was just thinking data privacy, because you've got information on millions of pensioners. How do you deal with that? And, you know, I assume it's a concern. [00:04:50] Speaker A: So data privacy, working with AI, I always say there are two key concerns. So one is about data, like you said. The other one is about black-box effects in the models. And maybe a third one now could be autonomy, with the advent of AI agents. But absolutely, I think our primary concern, looking at AI or in our role managing our business processes, is the data of pension participants, or any other sensitive data for that matter. That could also be on deals in the private markets, et cetera, where you don't want the data room to be exposed. But we are working for pension funds, and pension funds act on behalf of the participants. And obviously their concern is also very much on the participant side. What that means in practice is that we have, let's say, a granular appreciation of risks about data, which varies from very low risk, which could be public data, not even our data, to very high risk. And typically anything about participants is in the highest category, leading to a lot of scrutiny if you want to use it, for example, in AI. [00:06:05] Speaker B: I've talked to some of your peers, and this has been a problem for them in terms of implementing third-party systems. Their concern is these third parties might train on their proprietary data. So it's kind of this tension between do you lease the technology or do you build it?
Because if you build it, you have the ability in some ways to ring-fence the data, although only to a point. Any thoughts on that? [00:06:30] Speaker A: Yeah, absolutely. So some data, you basically do not want to cross the boundaries of your organization. So again, on participant data, we are not going to put any participant data in a public ChatGPT or comparable service. It's just not going to happen, because then it's out of our hands, and then basically we have a data breach or an upcoming data breach. So again, it's very much depending upon the type of data and the risk category. And one of the ways of handling the situations where you might want to use that external software but are running into this data risk is to internalize it. So for example, the OpenAI instance that we're using, that's contained; it's a corporate instance. So the data, and the processing power for that matter, remains within confines that we find acceptable. And it's not the public version. One thing adding to that: I mean, geopolitical tensions also lead to the question about compute sovereignty. Right? So we all know that in Europe, us included, we are very much depending upon US cloud providers, and it'll probably be good, but the level of trust is not as it was two or three years ago. So I think for AI, but also in a broader sense the application of IT and data, compute sovereignty is going to be a big, big theme for the upcoming years. Which basically means: how do you maintain control over your own IT and data? [00:08:12] Speaker B: Yeah, I guess that's probably one of your ongoing projects. I don't see a simple solution right here. I mean, you're right, the hyperscalers tend to be US, and you don't want to use China, I would think, because that also introduces some tension. So that's a good point. I hadn't thought about that sovereignty, because here I am in the US and I have a certain parochial vision. [00:08:34] Speaker A: Yeah, of course.
And we all do and did, right? I mean, like I said, it's a geopolitical concern. I think that needs to be solved on a European level. And that's not something that we as a company are gonna take care of. But we do need to assess the risks here. [00:08:53] Speaker B: That's a good point. Let's go to the two other concerns, I guess, or kind of key issues for you in terms of implementation. You mentioned black-box transparency issues. Tell me about that. I mean, I'm coming from a world where I ran a hedge fund that was a dark black box. So I'm on the other side of the table. But again, I wasn't representing pensioners and educators. So what's the issue? [00:09:17] Speaker A: Yeah, yeah. So I think, I mean, we are a fiduciary investor, and part of the fiduciary obligation is that we have a duty of care. And that includes, I think, being able to be transparent about everything that you do. So short and simple, it's just not acceptable for us to do a trade of 100 million X with us not being able to deduce why we came to that conclusion. So it's just not going to happen; the black box is just not going to happen on our end. [00:09:48] Speaker B: And I think that ties into the other point you raised about autonomy. Again, I come from... our hedge fund was completely autonomous. These big neural nets, reinforcement learning. We had guardrails and everything. But our view was: let the machine run, it's very powerful, put up the guardrails. It wasn't as if we were using this methodology to augment an existing human framework. So I'm on one side of that spectrum of autonomy. It sounds like you're closer to the other side. Am I right? [00:10:25] Speaker A: Yeah, probably. It has a lot to do with what your starting point is. I don't know your hedge fund, but I have an idea, Angelo. And I think the starting point is being highly systematic, highly data-driven from the get-go. Right.
We come from a different starting point, where we are fundamental investors with a lot of, let's say, human touch from the start. So the starting point is quite different. So that's part of, I think, the opposite effect. Another part is that, maybe even on a cultural level, for fiduciary investors there's just this key assumption that human oversight is the best way to ensure good diligence, good decision making, four-eyes principles, et cetera. And I'm intentionally flagging it as an assumption, because it was very normal to do it like that. I think with the advent of AI, or smarter tools, lots of data availability that humans cannot process in any shape or form, that assumption may shift over time. I think so. And it's probably already shifting. And you see that a little bit on our end too. But the situation is still that the human is the one in the end making the decision and doing the true vetting, you know. [00:11:56] Speaker B: Interesting. You're talking about the shift. I've been thinking about this, and I was thinking about writing something, actually. It's almost as if we're moving the acceptance of AI into kind of a pharmaceutical framework, where we do clinical trials at different stages to ensure it's ready to be released. There's no regulator, like here in the US the FDA; this would be a governance issue. But I've often thought about the clinical trials. It has to pass each one of these, and framing it that way may help people understand that, at the end of the day, there has been a rigorous process. And the example I always used when I was thinking about this was Tylenol. Tylenol went through this rigorous process. Eventually it was released, but we have no idea how it works. I mean, there's no causal explanation at this point, so it's kind of like a black box. But it did pass these trials. So that was just kind of an analog I was thinking of. [00:12:59] Speaker A: It's interesting.
So 10 years ago, almost 10 years, eight, nine years ago, I had a chat with the boss of IBM Watson and the director of the hospital, I've forgotten where it was, where they did the experiment with Watson and the oncologists. Watson was the big thing back in the day. Then also the conclusion was that Watson had a better prediction and could make better diagnoses than humans. The thing is, so when we compare the computer to the humans, we can apply rigorous processes on both ends, but what we're running into is a status quo bias effect. And the autonomous driving car is the classical example. I think in this day and age, we know statistically that it's going to cause fewer casualties, and still we find it very hard to accept. So we apply higher standards to the AI running an investment process than the humans. We are very concerned about biases and hallucinating AI, while humans also make mistakes. Right. And I'm not trying to underestimate the issues on AI. But I think rigorous processes, like you said concerning medicine, will help in this phase, especially since there's a lot of concern about the direction AI may take. And that concern is, I think, at our end of the spectrum, being fiduciary investors, even a bit higher, because it's also visible what is happening, and it has high societal impact. So rigorousness helps, but adoption of the technology is what will make the difference, I think, in the end. And adoption also means accepting that there may be a mistake every now and then, like humans make mistakes. [00:15:01] Speaker B: That sounds like an implementation hurdle to me. You're calling it the status quo bias, you know, kind of a recency bias. And the issue of cultural resistance: we've always done it this way. Are you finding those are hurdles you have to overcome? I mean, you're pushing innovation and digitalization.
I mean, do you find you come up against this kind of friction? [00:15:27] Speaker A: Yes, although I would say that it's maybe not that different at this moment versus five or ten years ago. I always like the simple concept of adoption curves on innovation or new products. It's this normal distribution. So you always have people at the front and you always have people at the back. And when I look at our company as is, I think we have an already very large and growing group of, for example, portfolio managers, but also risk managers or finance professionals or client interaction professionals, that basically are like, okay, help us. We want more, and we want more. They really want to use the technology. They're very, very eager to do so. And I think the reason for that is quite simple. It's just abundant, and in their private situation they can already do whatever they want. And it's a very low barrier to use it. Right? You don't need to go learn coding to use ChatGPT. You only need to learn a little prompting. So the adoption, I think, on the professional level is quite good. I think the adoption on a cultural level, especially managing the risks and controlling the risks, is going far slower. That's far more of an implementation hurdle. [00:16:47] Speaker B: You know, you talked about the professionals and their interest in adopting technology. What about from a governance structure? Again, I've talked to some of your peers here in the US, and they're trying to bring AI into their business. But there's a resistance with, I'll say, the trustees, who may not be investment professionals and are often not technological natives. How do you deal with governance around this? Again, part of your fiduciary responsibility. [00:17:19] Speaker A: That is a challenge, I have to say. So in our case, we work for pension funds. And like we just discussed, pension funds are concerned about their data, which has a number of reasons.
Legislation, DORA in Europe, which gives them a high responsibility on anything that happens to their data and digital infrastructure, the advent of AI, et cetera. So what you see there is that their reflex is: show us that you are in control of this. And then you get into a classical reflex. I understand that question and I respect it, I really do. The natural reflex in an environment as ours is to build extensive control frameworks and business processes to control the risks. So what happens in that reflex is that it's overdone. Right? So we get basically too much control to be able to generate any progress. That's what happened in the beginning. So then you loosen up a little bit and you strike a better balance. So there are a few key elements here. One is, I think, on board level, whether that's the trustees of pension funds or our own board, a lot of education still needs to be done, because there are still a lot of people who know a little bit about AI, who read their posts on social media, but it's very hard to make sense of it. And it's also a little bit, maybe it's difficult to be a bit vulnerable about it, if you know what I mean. I mean, in this day and age, saying that you don't know exactly how ChatGPT works is not very comfortable for anybody, probably. Right? So education is one thing that helps, I think, with that discomfort, and also to build trust. And trust feeds into governance. Right? Lack of trust leads to higher governance and more controls. High trust leads to balanced governance and control. Right? That's what this is about. Second thing is that the governance on AI can never only be control frameworks, business process manuals, et cetera. There are a few things. One, it's developing so fast. So that means that anything that you apply rule-based is outdated tomorrow. So you're creating your own workload, which is very, very expensive in the end. So you need more of a principle-based approach here.
I think second is that part of the responsibility shifts to the individual professional, the portfolio manager applying AI and using data, who has a responsibility there. That's not something that only a risk manager or whatever should carry. So that means investing in the education, not only on a skill level, but also on an ethical level, on a risk level, of the professionals that you want to work with. This is very, very important. And then the circle comes round; I think you get to the cultural aspect also. And I would say this needs full coverage. So it's not a selection of professionals, it's all professionals. [00:20:30] Speaker B: So it's systemic in the organization. [00:20:33] Speaker A: Yeah. And that is a little bit challenging, because instead of saying we have all these methodologies and somewhat technocratic control measures, now you say: listen, we're going to trust our people, and we do trust our people, but this is new, et cetera. So that takes some getting used to, and it's a bit of a cultural change. [00:20:55] Speaker B: I think we had talked before that there's another implementation hurdle. And you and I know independently from experience that there are a lot of great ideas in terms of AI applications, but success is only known empirically. And there are costs, in terms of time, human resources, compute, data, to doing this. Scientific uncertainty, if we call it that. I think you said four out of five experiments fail. And that's no reflection on the organization; that's just the nature of the technology. I think people too often think about AI as being some type of alchemy, where I keep telling people, you only know empirically if the experiment will work. I mean, how do you deal with this scientific uncertainty within the organization? It seems like you have to manage expectations in some way. [00:21:50] Speaker A: This is a very good point, Angelo, especially when you're scaling up. Let me explain what I mean by that.
I think when you're early stage still, let's say you have a few dozen small-scale AI experiments going on, trying to figure out some things, investments are not that high yet, and you need, basically, some ground to learn, right? So organizing it at the front is not a smart move, in my experience. But then you get into a phase where you say, we're going to go into targeted scaling, right? That means higher investments, higher risks maybe. So that's where things become a little bit more serious. And there you really need to balance, I think, a clear grip on value, a clear grip on the costs involved and who's paying for them. We typically do that in a stage-gated approach. So what you do is you create short cycles that gradually scale up a certain initiative or kill it. And then you get to the four out of five being killed historically in our environment. So we ran about 70 of these kinds of small and big experiments over the last ten years; also before gen AI, they were highly data-driven or machine-learning-driven, et cetera. And indeed, it's four out of five that get killed. But the good thing is, in those four, if you do it well, you haven't invested too much, and you reserve your bigger room for investment for that one out of five that shows true potential. [00:23:27] Speaker B: I mean, it sounds like what you're giving us is a framework for fiduciary innovation. I mean, you're talking about what I would call responsible experimentation. You have to create these structured environments. You have a scaling methodology. And that scaling methodology is also kind of a risk management function. And then you have a way of assessing: are you going to take it up? I mean, you have to be able to evaluate the efficacy of these experiments. So you have metrics that you build around this. And earlier you talked about governance. There's this evolution of governance, and you're talking not rule-based, but principle-based.
You're dealing with cultural issues in the organization, both the professionals that are using the AI, but also your trustees and your boards, who have to understand that this is being done in a certain way. Is that a fair summary of how you think about this framework? [00:24:29] Speaker A: Yeah, I think so. A lot of good points. And there's only one thing maybe I wanted to add here, Angelo. If you look at this as innovation... I don't really mind if one flags this as innovation or not, but let's say that in an environment as ours, at least for the organization, it's innovative, it's new. So what does not work is producing PowerPoints or having only board-level discussions about governance, et cetera. It's about show, don't tell. I think that is extremely important in early-stage innovation, and also when it comes to AI. So any discomfort about value, about risks, about governance, about capabilities, what have you: the best way to address that is to show some concrete results and insights and then take it from there, and do that in a short-cycle manner. That's the best learning process, at least in my experience. So what I try to do with anything, let's say, that's not at a scaled level is show, don't tell. [00:25:42] Speaker B: That's good advice. And Peter, just speaking of advice, what guidance would you give your peers if they're just starting this process of integrating AI into their business and investment decision making? [00:25:56] Speaker A: Early stage: create a lot of room for experimenting, but set the guardrails straight, and those guardrails especially on the usage of data. Scaling: do it very much value-focused, and make sure that it's then embedded in the strategy of the organization. The width and the growth of the application of AI is going to be at such a level that you cannot just approach it as an add-on to things that are already there. You need to embed it in a strategy.
Thirdly, and that relates to that point: educate the board. Invest in board-level education here too, because you need top-down anchoring for that. Scaling a number of smart whiz kids throughout the organization is not going to cut it on its own if they do not have the board support to go with it. A little bit of a different perspective: let's not all reinvent the wheel by ourselves. So what I've seen in the Netherlands is that a lot of our peers were on comparable processes. One is going a little faster, one is going a little slower, et cetera. But most of the current AI explorations are still efficiency-driven, and that's not typically highly competitive or sensitive. Sometimes it can be, there is a tipping point, but we're not talking about new investment products or private market deals or whatever that are very sensitive by nature. So what we've done, for example, is we've created a cooperation between four leading Dutch financial services institutions and did a Copilot hackathon two weeks ago. And we are going to continue that cooperation, to learn together also. And I think that will help. And that also relates to my last point that I would like to give as advice. Think about what your role is in scaling AI, or technology in a broader sense, versus where you partner up, where you buy things. So what investments do you make in your own organization versus where do you need good partners? [00:28:07] Speaker B: I appreciate that. It's good advice. I mean, you kind of shrug, but, you know, people are not as far as you are in your thinking. And also they often don't have the support of their sponsoring organizations, you know, for headcount or budget, or just: do we really need that kind of approach? So that was great. And you've done all this innovation within a fiduciary framework. Usually we don't hear innovation and fiduciary in the same sentence. There tends to be a disconnect. [00:28:39] Speaker A: I understand that.
May I add one thing there, because you trigger me with one point. So I fully understand that, let's say, that discomfort, or sometimes even disconnect maybe, with the trustees or the funds whose money we are investing. What we've done in that is move closer. So look for a level of cooperation to explore. Like, okay, listen, you as a fund, what are your strategic goals? You want to, in our case, have a very high level of responsible investing? Well, we think that modern technology and data will help get us there. And by the way, that's also including AI. So let's see if we can connect those so that you win as well. Right? I think that's the only way to move forward. And that again circles back to the point that I made about making a strategic connection in the end, if you really want to take things seriously here. And it also goes for the trustees or the funds. [00:29:43] Speaker B: And that kind of goes back to your comment a moment ago. AI is not just an add-on; it's not just another tool. I mean, I think we both share the view that, done properly, it could be transformative, and it could help you reimagine investment outcomes. I know it's a big word, reimagine, but you know what I'm thinking? It's like, don't think about it just like it's ChatGPT. There's a lot more here. Peter, I'm going to say thank you, but before I let you go, there's a question I always ask my guests. And I know you do not deal directly with making investment hires, but I've got to ask: what's the worst pitch you ever heard from a software vendor, or anybody trying to get into the organization? Could you give me an example? [00:30:28] Speaker A: I think I've had my share of, let's say, sales pitches throughout the years, and maybe there's not one in itself that stands out. But what I've seen a lot of times is a lot of... I'm not sure how you say it in English. In Dutch we say "spiegeltjes en kraaltjes", you can forget about that. But it's about bling-bling, right?
Like software or vendors that are like, okay, look at our fantastic suite, and this is what we do. And then you get the most fantastic predetermined demos, et cetera. And our investors would go like, yeah, yeah, I want this. Can I get it tomorrow? And then I would ask, listen, I always have two questions: how many paying customers do you have, and is this a real product, or is this your product roadmap that we're now seeing? And a lot of times, to be honest, it's like, oh no, we don't have paying customers, and yes, what you just saw, we will have two years from now. Which I fully understand. So I'm not trying to sell them short. But that is not a good sales pitch for a company like ours as a fiduciary manager. We have very firm enterprise-grade standards to be able to onboard something like this. So I think if they do their homework well, they know that they're not going to get away with this bling-bling type of effect. [00:31:58] Speaker B: Not with me anyhow, I get it. And I like the point about how you need to have a resilient supply chain; that's what you're talking about. You don't want to take a vendor on, do all the effort, and then they're out of business in four months because they're out of runway. [00:32:11] Speaker A: Exactly. Yeah, exactly. We cannot afford that. [00:32:14] Speaker B: Now I'd like to thank Peter Strikwerda from APG for sharing his thoughts on a fiduciary framework for innovation, particularly with regard to AI and data. Thank you all for listening. If you have any questions or comments on the episode, or if you have suggestions for future topics and guests, we'd love to hear from you. My contact information appears in the Show Notes, and if you haven't done so already, we'd really appreciate an honest review on iTunes. These reviews help us make sure we're delivering the content you need to be successful.
And to hear more insightful interviews with allocators and thought leaders, be sure to subscribe to the show on the podcast app of your choice. We'll see you next time. [00:33:01] Speaker C: The information presented in this podcast is for educational and informational purposes only. The host, guests, and their affiliated organizations are not providing investment, legal, tax, or financial advice. All opinions expressed by the host and guests are solely their own and should not be construed as investment recommendations or advice. Investment strategies discussed may not be suitable for all investors, as individual circumstances vary.
