AI Claims vs. Reality: An Asset Allocator's Due Diligence Framework

September 09, 2025 00:36:11
The Institutional Edge: Real allocators. Real alpha.

Show Notes

How do you distinguish substance from hype when managers claim AI implementation advantages?

In this week's The Institutional Edge, Angelo welcomes Chris Walvoord, recently the Global Head of Liquid Alternatives Research and Portfolio Management at Aon, where he led a team of 14 supporting Advisory and OCIO businesses across hedge funds, private credit, and opportunistic investments. Walvoord provides a comprehensive framework for allocators navigating the critical challenge of assessing AI implementations in investment strategies, emphasizing that allocators don't need machine learning expertise to effectively evaluate AI claims. The conversation reveals practical methods for distinguishing substance from hype, identifying red flags, and conducting thorough due diligence on AI-enhanced strategies in an increasingly complex technological landscape.

In This Episode:

(00:30) Introduction of Chris Walvoord, former Global Head at Aon

(01:15) Hypothetical systematic global macro manager using AI enhancement claims

(04:13) Beyond buzzwords: requiring specificity in AI tool explanations

(05:18) Talent requirements: specialized skills beyond traditional finance backgrounds

(09:41) Data importance: sources, curation, and quality assurance processes

(14:35) AI model types and techniques: understanding specific approaches

(18:45) Testing and validation: building robust models for commercial deployment

(25:10) Red flags in AI due diligence and governance frameworks


Like, subscribe, and share this episode with someone who might be interested!

Dr. Angelo Calvello is a serial innovator and co-founder of multiple investment firms, including Rosetta Analytics and Blue Diamond Asset Management. He leverages his extensive professional network and reputation for authentic thought leadership to curate conversations with genuinely innovative allocators.

As the "Dissident" columnist for Institutional Investor and former "Doctor Is In" columnist for Chief Investment Officer (winner of the 2016 Jesse H. Neal Award), Calvello has become a leading voice challenging conventional investment wisdom.

Beyond his professional pursuits, Calvello serves as Chairman of the Maryland State Retirement and Pension System's Climate Advisory Panel, Chairman of the Board of Outreach with Lacrosse and Schools (OWLS Lacrosse), a nonprofit organization creating opportunities for at-risk youths in Chicago, and trustee for a Chicago-area police pension fund. His career-long focus on leveraging innovation to deliver superior client outcomes makes him the ideal host for cutting-edge institutional investing conversations.

Resources:
https://www.linkedin.com/in/chris-walvoord/
https://www.pionline.com/industry-voices/commentary-nav-lending-surging-popularity-what-could-go-wrong/
Email Angelo: [email protected]
Email Julie: [email protected]
Pensions & Investments
Dr. Angelo Calvello LinkedIn


Episode Transcript

[00:00:00] Speaker A: Today's episode is sponsored by Allocraft AI. What if you could compress weeks of due diligence work into minutes? Allocraft AI makes this possible with the first closed AI-powered platform designed for allocators that automatically generates comprehensive due diligence reports with institutional-grade accuracy. Get investment-grade diligence without the manual grind. Check them out at Allocraft AI. That's A, L, L, O, C, R, A, F. And be sure to tell them Angelo sent you. [00:00:36] Speaker B: You're going to want to do a lot of due diligence on who you're buying it from or renting it from. I mean, if they go out of business, that's a huge problem. And as I mentioned, the data, they should be able to open the kimono and tell you everything about this. You know, maybe they don't show you the code, but they should be able to tell you who they're buying all this stuff from, specifically what models they're buying, what data sets they're buying, how they're accessing those data sets and verifying them and cleaning them and making them useful for the models. You know, all these steps are very important and shouldn't be considered proprietary. [00:01:07] Speaker A: Welcome to the Institutional Edge, a weekly podcast in partnership with Pensions and Investments. I'm your host, Angelo Calvello. In each 30-minute episode I interview asset owners, the investment professionals deploying capital, who share insights on carefully curated topics. Occasionally we feature brilliant minds from outside of our industry, driving the conversation forward. No fluff, no vendor pitches, no disguised marketing. Our goal is to challenge conventional thinking, elevate the conversation and help you make smarter investment decisions. But always with a little edginess along the way. Today we explore another AI-related topic that is at the top of many allocators' minds: performing due diligence on AI managers. A recent study by Mercer found that nine out of 10 managers were either using or planning to use AI in their investment processes. This presents allocators with a critical challenge: how to effectively assess the purpose, the integrity and the actual impact of AI's implementation. My guest today is Mr. Chris Walvoord. Chris was recently the Global Head of Liquid Alternatives Research and Portfolio Management at Aon. Chris brings an invaluable perspective through his extensive writing, his research and his firsthand experience conducting manager due diligence across the ecosystem. As Chris will make clear, allocators don't need to be machine learning experts to effectively assess AI implementation claims. Chris will provide a practical framework for distinguishing substance from hype, identifying critical red flags and tips for conducting due diligence on AI-enhanced strategies. So let's dive in. Let's begin with a hypothetical. Let's say you're doing due diligence on a systematic global macro hedge fund manager and the manager has passed all of your traditional diligence screens, all the ops that you do, and kind of the investment side, but now you have to dig into their claim that they're using AI to enhance their current systematic process. Chris, from where you are, what are the key, and I'll admit high-level questions, and they're high level because this is a hypothetical, so we're not going to get into manager-specific issues. But, you know, what are the key questions you would ask to determine if the manager's AI claims are substantive? And expound upon them if you can.
[00:03:39] Speaker B: I think it's important for allocators to think about these claims of using AI, right? It's on the surface attractive, but it could be highly misleading. And so I think it's best to start at the top and say, what is the investment strategy the manager's using and how do these quantitative tools fit in? If it's a fundamental equity long-short manager or a Tiger Cub or something, that's going to be very different than if it's a highly sophisticated quantitative shop, a Renaissance or a D.E. Shaw or somebody. You'd have different impacts, different expectations in terms of the support structure for these AI tools, organizational commitment, all the above. And so I think you got to kind of start at the top and think about, first off, the overall strategy, and then how do these tools fit in? And then you dig into sort of specifically, how does AI enhance the process, the investment process, the decision-making process, and ultimately the production of returns? That's the real question from an allocator standpoint: is this manager differentiating themselves, generating a differentiated view or a return stream that's somehow more attractive than somebody who doesn't have these capabilities? [00:04:44] Speaker A: Chris, how specific do you want the manager to be? Again, we're at a high level here. [00:04:48] Speaker B: So I think it's fine to start at a high level, but you want to dig in fairly quickly and get down to questions that are, as I said, relevant to the strategy and relevant to the manager. But I think it's reasonable to expect a fair amount of specificity in their answers. And so it's good to ask detailed questions, get explanations of use cases. Is the tool replacing an older tool that maybe didn't use AI, or is it replacing an outsourced capability that they had, replacing a person somehow? The manager should have detailed answers for these things to your level of satisfaction. Because this stuff shouldn't be proprietary, right? They should be able to explain it to investors in a way that the investors can understand, and understand why it gives them an edge and why the manager thinks it's a worthwhile investment, because it is a big investment. As we'll get into in a minute here, in terms of actually building tools that are useful, you know, that's not an easy thing to do, to me. [00:05:41] Speaker A: You're telling me you got to go beyond the buzzwords. It can't just be, oh, we're using machine learning, we're using NLP, we're using GenAI. You want to dig deeper than that. Am I correct? [00:05:51] Speaker B: For sure, absolutely. It's great to say, you know, we're using these sophisticated tools, but the first question you should ask is, okay, why are you using GenAI versus some other tool? Right. Why did you choose that specific tool? How does that help answer the questions that you're trying to ask or trying to answer? Why is that tool more advantageous for your strategy? And then dig into the details. How did you choose that tool? How have you developed that tool? How are you supporting it? It takes a huge amount of expertise to build and maintain these tools. Even if you're buying it from somebody, the basic model, you still have to do a huge amount of customization to make it work for your process as a hedge fund manager versus something else or any kind of investment manager. [00:06:35] Speaker A: Let's pick up on that. That's an interesting point. Whether you buy or lease the tools, the AI, you mentioned you still need a team.
What do you focus on then, from the talent side? Because as I always say, you can't do AI with a CFA. You need special skills. So how do you dig in on that talent side, Chris? [00:06:58] Speaker B: Yeah, for sure, it goes well beyond a CFA or even just a master's in computer science or something like that. I mean, you need real experts in AI and machine learning, in the specific type of tool that you've chosen, and in-house, to help you turn that vended system into a tool that's useful for your investment process. And depending on what you're buying and what you're trying to do with it, it could take anywhere from one person to a whole team of people. You don't want to be real skinny, no matter what, I think. And in addition to very skilled, specific resources, by the way, you should ask to see these people's resumes or understand their backgrounds and their training. And it's not somebody that you moved over from your IT department, and they've been playing with these things for a few years. It takes a lot more expertise than that to actually build and customize the system, but it also takes a larger commitment from the organization, because you're going to have expensive data sets that you're going to need. Buying or renting or vending these systems is expensive. You're going to want to do a lot of due diligence on who you're buying it from or renting it from. I mean, if they go out of business, that's a huge problem. And as I mentioned, the data, that's a whole nother question that needs to be really dug into; you need to get the specifics on it. And again, as an investor, they should be able to open the kimono and tell you everything about this. Maybe they don't show you the code, but they should be able to tell you who they're buying all this stuff from, specifically what models they're buying, what data sets they're buying, how they're accessing those data sets and verifying them and cleaning them and making them useful for the models. All these steps are very, very important and shouldn't be considered proprietary. Yeah, right. Because as an allocator, they could tell me any of those things and I could never go out and recreate it. So it's not like they're giving away trade secrets; it's that they're helping an investor understand, again, how these tools give them an advantage in producing better returns. [00:08:49] Speaker A: Yeah, I mean, fundamentally, by sharing that information, they're building trust. And any allocation I assume you recommended was built on fundamental trust. If you didn't trust the manager, you would hesitate with that decision. Let me just go back to a comment you made about digging into the talent, and then we'll go to the data, because I think you've laid out a number of important points. You mentioned looking at resumes. Would you also look to see if they publish papers outside of our industry in machine learning or whatever area they're focusing on, GenAI, NLP? Would you go that deep? [00:09:21] Speaker B: Oh, for sure. Look at papers, look at conferences they speak at. Any of this stuff is just added confidence that they're, number one, taking it seriously, taking the development of these tools seriously, and number two, hiring the right people to do it. Yeah, it's a new field. It's a rapidly evolving field, a very technically complex field. And so, you know, you would not expect to see a bunch of people from big investment banks. Right.
You'd expect to see people from either academia or from software engineering firms where they have real-life expertise, specifically in AI models. And I think that's vital to building a tool that's actually useful for the investment process. [00:10:01] Speaker A: And Chris, you're talking about what some have called this kind of war for talent. There's this battle to get the talent, because the men and women you're describing could just as easily go work at Sloan Kettering or they could go work at Meta. Would you also look at the compensation structure of this talent? Because how they're compensated kind of gives you a sense of retention, am I right? So they don't jump ship and suddenly go to Sloan Kettering in six weeks. [00:10:26] Speaker B: Yeah, absolutely. And again, it might be hard to get your manager to answer specifically what he's paying X or Y person, but you should get a feel for the budget for the whole project. How much of that breaks down into compensation versus data versus the models? And so you can get an idea of average compensation per employee dedicated to building and maintaining the tool. I think that's a perfectly reasonable question to ask. And I think you should also do a little bit of due diligence on what they could earn at these other places. Dig a little bit. You can find that out and determine: is this investment manager offering something that's competitive, or are they just trying to build it on the cheap? [00:11:03] Speaker A: Let's go to your comments on data. Let's unpack that a little bit. You've talked about the importance of data. I certainly agree. At a high level, data is critical to building good models. Digging deeper, would you ask them, for example, the types of data, would you ask them their inputs? How deep would you go? [00:11:21] Speaker B: Yeah, I think that the data at the end of the day is super important, because in general that's what allows these models to do what they do. But the trick is that, number one, it takes huge amounts of data. And so you need to understand that, both for the initial training and then for the specific training for the strategy of the investment manager. So you want to understand what sorts of data did they use, what were the sources for that data, did the model come pre-trained or did you have to train it yourself? And then as they're building out and adapting and training the model for their specific questions, you want to again dig in a little bit further to the data sources specific to their investment strategy. And then you want to think a little bit more about how the data is pre-processed, how it's cleaned, how you've looked at it for potential biases, and what you've done in order to make it useful to feed into the model to help you arrive at the answers you're looking for. [00:12:18] Speaker A: It's kind of the data curation process. [00:12:21] Speaker B: Exactly. [00:12:22] Speaker A: Yeah, exactly. And I think you're right. There's a certain value in that curation process that, if done correctly, could improve the model's performance. But, Chris, let me ask you a related question. Even if they clean the data, let's assume they've got it in place. They've curated the data set. It's in place now. They've got the pipes hooked up. One of the problems that we encountered early in our days at Rosetta was redundancy. You know, if you're really counting on this data, you need to get it from good sources, and even the good sources could go down or make mistakes with data.
Is that something you look at, redundancy in the, you know, let's call it the data sets? [00:13:06] Speaker B: For sure. I think it's important to understand that data sources in general are not static. Right. They're constantly changing. And so you need a process and a structure and some people in place to keep track of that and to keep on top of it. Is this data stream that you subscribe to or you bought continuing to provide the sort of data that you thought it did, or have they changed it? Have they changed where they're getting this from or what it looks like or any number of things? Because it's an ongoing process. Right. It's not, as I said, a one-time, static thing. And in conjunction with that, you want to archive it so you can compare what you're getting today with what you got from this vendor a year ago and identify for yourself, has something changed? Have they started giving me different information in a different format, or does it have a different bias? And if so, why is that? So that's definitely an important part of the process that the manager should have thought through and have some structure around so that they can address it on an ongoing basis. [00:14:09] Speaker A: I would add one other piece from my experience. You've got this curated data set now, and let's assume there's some redundancy built in, because that would be best practice, I think, given what you've said. But now you have fresh data that comes into a live model. My sense is you need to have some kind of quality assurance check on that data to make sure the data is accurate, because even if it's from the right data source and it's the right specific data in the right format, you want to know that it's actually correct. I mean, would you look at quality assurance checks on the data? [00:14:44] Speaker B: Yeah, for sure. I think when you think about quality assurance, it applies to the whole thing, for sure. You got to make sure that the data you're getting is still accurate, it's still what you think it is. But you also need to think about how the models are using the data, and the models themselves, right? Before you go live with your models and start using them to make investment decisions, how did you test them? How did you validate the models? What back-testing have you done? And then on an ongoing basis, you need some sort of quality check to make sure that the model itself is working the way you think it is, to look for biases or hallucinations or all these other issues that these models can have. Again, the manager needs to have thought about a process to watch for that sort of thing to make sure it doesn't lead them astray. [00:15:31] Speaker A: You mentioned something which is the natural next step. If they've got this data, they've decided they kind of know the research project they're going to work on. As you talked about, what are you looking to enhance and why is AI the right choice, I think, is where you started. And then we talk about the talent that you need to do this. It's somewhat specialized. You talk about the data. Now it's like, okay, so what techniques are you really using? It seems like that's the natural next step. And you mentioned some right there. You talked about LLMs, generative AI. You've talked about NLP. I mean, do you go to that level of specificity, asking the managers exactly what type of AI they use? [00:16:14] Speaker B: Yes, I think you do.
And I think that even if, I mean, I'm certainly not an expert in these things, but that can also work to your advantage. Right? You ask the manager, so why did you choose the model that you chose? They'll say, well, I chose an LLM for these reasons. And I can say, okay, well, so I don't understand. I don't know that much about it. Why don't you explain to me why that's the best model for this application, and what are the positives, what are the limitations, and what have you done to address all that? And allocators should not be afraid to dig into these details and just challenge the manager to explain it to you. You're smart people, you can understand it, and they should have a good explanation as to why they did it. And that's the biggest thing you're looking for: they should have a sensible reason why they chose this model as opposed to some other one. [00:17:00] Speaker A: A quick break to talk about Allocraft AI. I've been following how AI can transform due diligence and find most platforms give allocators more data, not better decisions. Allocraft AI flips the script by providing decision-ready intelligence that actually moves the needle. Instead of drowning in dashboards, you get the tedious work done for you, summarizing documents, flagging risks and writing IC memos, so your team can focus on what matters most: making those critical, high-conviction calls. Transform your pipeline from backlog to competitive advantage at Allocraft AI. Go to Allocraft AI to schedule a demo and discover the AI due diligence platform allocators deserve. Given the complexity of some of these models and also given their kind of novelty in our industry, we haven't yet built a language around these models. If I'm a systematic global macro manager and I tell you I'm using trend following, you know exactly what I'm talking about. And then you'll ask me about look-back periods; there's this common language. But here in the AI space, that language is still forming, it seems, at least in our industry. And your point about asking those basic questions is important because, one, you learn. But two, it forces the manager to give you some information that, as you point out, is critical. And you're not reverse engineering this stuff. If they tell you they're using reinforcement learning with an actor-critic model, I mean, what are you going to do with that? But say they tell you something beyond we're using AI or we're using machine learning or we're using reinforcement learning; they tell you we're using an actor-critic model. And then, as you hinted at, the question is, well, why is that the right model? Am I right? I mean, you don't want to just hear they're using a certain model; tell me why that's the one and not another model. [00:19:04] Speaker B: Exactly. And I think it also helps you as an investor set expectations, right? Similarly, your analogy to trend following is an interesting one, I think, because you ask a trend follower a few questions in terms of their look-back period and smoothing and that sort of thing, and that helps you develop expectations around performance, right? If they say they have a real long look-back period, you say, okay, well, in times of lots of market chop, I expect this sort of response from your model and I'll watch for that. Similarly, within AI, if they tell you the type of model and the sorts of decisions it can help them with, then that helps you verify performance and drivers of that performance when they come back to you with it in the future. Right.
And so, as you say, that's an important language base to build up to help your understanding in terms of when these models should perform and when they might run into trouble. And that's a really valuable thing for both managers and investors to understand. [00:19:59] Speaker A: So they tell you the research problem, they tell you how they're using AI to enhance the solution to that problem, basically their strategy. You talk about the talent, you talk about the data, and then you dig into the types of models they use, and you're saying be specific here. I mean, tell me, educate me. If I'm an investor in your strategy, I'm de facto a partner. So help me understand this. And given those building blocks, it takes us naturally to another point, and that is, okay, you got the data, you got the people, you know the problem. How are you building these models? These are non-trivial projects. So you need to understand, how do they test and train their models? [00:20:43] Speaker B: Absolutely. And how long does it take? What's the development cycle? Do they go through revisions, versions, what version are they on? If it's the first pass, then you probably have a little bit less confidence in how much value it's going to add. And are they handling it accordingly? You don't want to just say, okay, well, we're just going to turn it on and give it all the money and see what it does. [00:21:03] Speaker A: I assume there's a much more rigorous process than that. [00:21:07] Speaker B: Yes, exactly. [00:21:09] Speaker A: We built these models at our old shop and it would take quite a while, many iterations, many experiments to get them to the right point. But we always knew that our validation method, our testing and training methods, were industry best in class. What a manager's doing, for example, with a neural network, building a neural network, should be the same as what Google's doing at DeepMind. I mean, maybe on a grander scale there, but there are certain best practices here. And the question becomes, let's assume that they tell you, you know, the process is robust and they walk you through it. There should be a slide in that deck somewhere, I would hope, saying this is our process. You kind of get to the question, at least in my mind: how do you know it's ready to deploy commercially? I mean, who makes that decision and when do you make it? What do you need to make that decision? And again, we're not being specific here, but there has to be, don't you think, some kind of catalyst that says we're good to go? [00:22:05] Speaker B: Yes, for sure. And some of that depends on how integral the AI model is to the entire process. Again, if it's a tool that the PM uses on the desk to make qualitative decisions, that's one thing. And maybe the PM can make that go/no-go decision. If it's a quantitative investment strategy in and of itself and this model is driving essentially the whole portfolio, that's a much higher-level decision. And the whole organization needs to buy off on that. Right. And be part of that decision and agree that, hey, we're going to turn this on, we're going to make it go, and have a process around understanding the risks, both the risks to the tool itself and the risks that the tool creates in the entire investment portfolio. Those are key inputs to that decision, as you say, the go/no-go decision. [00:22:52] Speaker A: Given the hypothetical we have of a systematic global macro hedge fund manager, assume the manager doesn't use AI for a minute.
Chris, would you ask them if they ever override the model, not counting AI in this, they're just systematic? Would you ask them that question? [00:23:08] Speaker B: Yeah, that's a very important question for all systematic strategies, not just AI. As you say, at what point do you look at your models and say it's giving me results that I'm not going to implement for whatever reason? And that's a tough question, especially if the model hasn't been working, but you can't identify anything wrong with it per se. At what point do you override it? I think, going into the whole thing, you have to set some expectations in terms of range of performance, and it takes a lot of work to set a realistic range for that performance expectation. But if you're invested in a systematic fund that makes teens-type returns, what sort of a drawdown should you expect from that? It could be pretty big; it could be 20% easily to get those sorts of long-run returns. But whatever the numbers are, you should have those a priori. And then as you're approaching those limits, talk to the manager. What are you thinking? What are you doing to convince yourself that this is still within the range of expectations and that the model is fine and we should just let it run? Because that's one of the hardest investment decisions out there, I think, for sure, across all strategies, not just quantitative strategies. But at what point do you pull the plug? That's really tough. [00:24:21] Speaker A: I was even thinking somewhat episodically. Let's assume the model appears to be working, because that's the underlying assumption. But do you ever override the model on a given decision? And again, this doesn't have to be AI. AI, I think, makes it much more complex because often AI is a black box, whereas here, we'll go back to our hypothetical, let's call it a trend follower again, they built that model from the ground up. I mean, they can see all the working parts and the model's working, Chris. Now an exogenous event occurs and the model already has a position on. Do you override it? I mean, that's what I would be asking, but I haven't done the diligence you've done. [00:25:06] Speaker B: Yeah, for sure. And that's a tricky one too, right? Because for any one position, any one decision or output that the model spits out, to override that, I think you need to have a really strong case as to why the model's wrong. If you've done your work and built the system in a sensible manner, it's reasonable to expect it to give you good output over the long run. Investing is a game of making slightly more good decisions than bad decisions. If you can do that, you can win. For a model to spit out investment recommendations that occasionally don't work out is totally to be expected. And so overriding it, certainly if it happens very often, I'd argue, is a bad thing. You don't want to see that frequently, because that's an indication that the model wasn't built the way it should have been and/or it wasn't implemented within the entire system with appropriate expectations. Because nobody bats 100% in the investment universe. [00:26:08] Speaker A: No, I think if you're at 54%, it's a winner. [00:26:12] Speaker B: I mean, that's a good system, for sure. [00:26:14] Speaker A: Yeah, I agree. I agree with that for sure. Okay, so we've gone through a checklist and the last piece we talked about, and we drifted off a bit. But whenever you and I talk, we always tend to drift.
This idea of validating and testing the models, and the fundamental question. Well, there are two in my mind. Like, one, you've said, how do you know the model's still working? And that applies to any systematic manager. But here they may have some specific metrics that relate to the machine learning side, for example, some of the scientific metrics. And then the other question that I think was embedded in your comment about the testing process is, how do you know you didn't overfit? And that's a question you would ask any manager that's using a systematic process. So I don't think it's any different here. It's just, how do you know you didn't overfit the damn thing? [00:27:07] Speaker B: For sure, for sure. Any systematic process that uses historic data always has to answer the overfitting question. That's a tough one, but there are ways to answer that question that make sense. [00:27:19] Speaker A: There better be. [00:27:20] Speaker B: In sample, out of sample. [00:27:21] Speaker A: Yeah. If they shrug, then you're in trouble. [00:27:25] Speaker B: Yes, exactly. That would be a red flag. That's one of your red flags, for sure. Oh, we never thought of that one. Yeah. [00:27:31] Speaker A: And speaking of red flags, any other ones off the top of your head? Because we've kind of hit them. Maybe you go back and kind of take the negative side of some of this. If you could just summarize that, what are the red flags? [00:27:41] Speaker B: Yeah, for sure. I think I'll start at the beginning. There should be a rationale for why you're trying an AI model. Right. It's not just, oh, AI is cool. There should be a specific problem they're trying to address that can't be addressed any other way, because AI is difficult and expensive and new and complicated. And if you can do it a simpler, more straightforward way, you should do that. And then a commitment level that's not commensurate with the importance in the process. So, again, if this AI model is driving your entire portfolio, you better have a lot of resources and have really thought through how you're going to build it out and how you're going to feed it data and how you're going to maintain it. And a willingness to go deep. If they won't give you answers to questions that seem like they should be able to answer, that's definitely a red flag. And then I always look for closing the loop. Right. Have they closed the loop on the process? Are they watching it for drift or changes in the data? Are they watching it for changes in the model's performance? Looking at attribution, is it still doing the thing, contributing the way you thought it should or hoped it would? All those things. Always closing the loop and saying, okay, is this doing what I thought it would do, is key. And if they're not doing that, then that's definitely a red flag too. [00:28:52] Speaker A: It seems like a lot of what you're talking about for doing diligence on managers using AI is an extension of what you've done traditionally with some of these questions, because you certainly would ask about talent. If somebody were doing the systematic global macro, you'd want to know who the team is, who are the research people, and you would dig into them. In this case, it's a slightly different skill set, but you would still dig in. And I guess the other thing with talent, it just comes to mind, is integration. You don't want to find that they hired a couple data scientists and they sit in another building and they only come out for client meetings. I mean, you know, that whole thing.
That's why I remember people would do some diligence on us when I was at Man Group, and they would want to talk to everybody they could, and not just the PM, to find out, do people actually work together here? I mean, I'll summarize and say a lot of this is, you know, kind of what you're doing now, but you're asking more specific questions. And it sounds like your expectation is to go a little bit deeper, because the education process and the technical nature of AI, you know, is certainly one that needs to be expressed to develop comfort and trust. That's my take. [00:30:07] Speaker B: Yeah, I think that's a good summary, and your point's a very good one, that if you talk to as many people as you can, again, another red flag, make sure everybody's giving you the same story. If the data scientists are describing a completely different process than what the PM sitting on the trading desk is describing, that's not a good thing. [00:30:27] Speaker A: No. [00:30:28] Speaker B: Yeah. And so it's another good reason just to talk to as many people as you can. [00:30:31] Speaker A: Yeah. And I guess to get more information. When we were running Rosetta, people would ask us about our AI governance framework, because you've hinted at this already in your comments about data bias, bias in the data. There's also the data privacy issue. Would you dig into the governance, which, again, is a little different than with traditional systematic managers or any managers? [00:30:58] Speaker B: I think you have to, in this case, in part because of all the noise around AI and the data used to train those models in general. There are lawsuits going on, people saying, this data was stolen, this data was misappropriated, I didn't give permission for this. And so you want to make sure that the manager has at least thought about this stuff. The regulatory framework is still developing, but there are regulations out there that they need to comply with, and they should be aware of that. There's a whole kind of compliance, back-office aspect of this that's just as important. And the last thing you want to do is be invested in a fund that gets embroiled in some sort of difficulty because of that sort of thing. [00:31:35] Speaker A: We've seen that recently. I won't name any funds, but the SEC has gone after a number of managers for AI washing. And it's not so much a documentation issue, but it's one where there was more hand-waving than documentation, I think. So it's a good point. [00:31:52] Speaker B: Yeah, yeah. [00:31:53] Speaker A: Of course we could end it here, man. This was quite informative. I appreciate you making the effort here and extending your competency from the traditional space into the AI space. But I ask every guest a question, and I don't really care what your favorite three books are or what's your favorite restaurant. What I want to know is, what's the worst pitch you ever heard? [00:32:13] Speaker B: I think one of the funniest pitches I ever heard was one in asset-based lending. This isn't really related to AI, although I think it's. [00:32:22] Speaker A: It doesn't have to be. No, no. [00:32:23] Speaker B: Yeah. I think it's interesting to think how AI might have been a factor here. So asset-based lending, a strategy where a manager was going to buy old ships, old ocean-going ships, oil tankers or container ships, and scrap them. After 30 or 40 years, they just cut these things up and sell the steel to be melted down. But as you might guess, that's a dirty process. They tear these things apart in third world countries.
They're not doing it in Miami Beach. And so the people who do this need financing. And so the pitch was, well, we'll finance these groups, sort of loosely, groups that tear the ships apart. We'll buy the ship and let them tear it apart and sell the metal for scrap and then get paid back. And it's nice because you have this collateral, you have this ship, so you're fully collateralized. Well, that's very true, but you kind of got to think through the scenario. You're fully collateralized in the beginning, but you get less and less collateralized over time. And by the end they've torn it all apart and sold off all the parts, and there's nothing but some guy in a third world country that has sold all the steel, and now you want him to pay you back. And that's, on occasion, a problem. [00:33:31] Speaker A: I can absolutely see that. I assume you passed on that deal. [00:33:34] Speaker B: Yes, yes, indeed I did. But that's a story where you got to think through a very specific scenario and assess your risk profile as it changes along the way. [00:33:45] Speaker A: Sounded too good to be true, and it turns out it wasn't true. [00:33:48] Speaker B: Yeah, exactly. It sounds great at the beginning; it's fully collateralized. [00:33:53] Speaker A: Well, Chris, thanks for sharing your knowledge on this. I appreciate it. I'm sure our audience is going to learn something from the, you know, kind of the insights, the red flags. And you know, again, maybe it'll get people to think of their own questions, because you got to really dig in here. This is a different type of process on the AI side, and it takes some time. And I think it's important for allocators not to pass on AI managers just because they're using AI and they don't understand it. But they should ask the questions that you laid out here, and their own questions, and expect a sense of openness in the replies. So anything in closing, Chris? [00:34:31] Speaker B: Yeah, no, I agree. I'd say allocators should embrace this as an opportunity to learn, think about new tools, new ways to do what they've always been doing, and it's a tremendous opportunity. [00:34:42] Speaker A: Very cool. [00:34:43] Speaker B: It's always great talking to you, Angelo. I always learn something, so I appreciate it. [00:34:46] Speaker A: I say the same about you, Chris. I always enjoy talking to you, and I appreciate you sharing your writings with me. So keep that up. Thanks for listening. Be sure to visit P&I's website for outstanding content and to hear previous episodes of the show. You can also find us on P&I's YouTube channel. Links are in the show notes. If you have any questions or comments on the episode, or have suggestions for future topics and guests, we'd love to hear from you. My contact information is also in the show notes, and if you haven't already done so, we'd really appreciate an honest review on iTunes. These reviews help us make sure we're delivering the content you need to be successful. To hear more insightful interviews with allocators, be sure to subscribe to the show on the podcast app of your choice. Finally, a special thanks to the Northrup family for providing us with music from the Super Trio. We'll see you next time. Namaste. [00:35:45] Speaker B: The information presented in this podcast is for educational and informational purposes only. The host, guest and their affiliated organizations are not providing investment, legal, tax or financial advice. All opinions expressed by the host and
[00:35:54] Speaker A: guest are solely their own and should not be construed as investment recommendations or advice. [00:35:57] Speaker B: Investment strategies discussed may not be suitable for all investors as individual circumstances vary.

Other Episodes

August 26, 2025 00:34:20

The AI Implementation Gap: What's Stopping Asset Allocators?

"I feel like the adoption of an AI tool could potentially eliminate the need for other existing tools - it might not be a...

August 11, 2025 00:04:39

Trailer: Institutional Edge Podcast Launch: Angelo Calvello Partners with Pensions & Investments for AI Investment Series

Welcome to The Institutional Edge: Real allocators. Real alpha! Host Dr. Angelo Calvello introduces his exciting new podcast partnership with Pensions & Investments, designed...

September 02, 2025 00:34:21

The AI-ESG Paradox: Why Assessing AI's Impact Defies Simple Metrics

What methodological breakthrough is helping institutional investors solve the AI-ESG paradox? This week, host Angelo Calvello interviews Dr. Liming Zhu, Research Director at CSIRO's Data61...
