An Asset Allocator's AI Use Cases, Implementation Strategy, and Wishlist with Mark Steed

August 19, 2025 00:36:07
The Institutional Edge: Real allocators. Real alpha.

Show Notes

What happens when a retirement fund's AI models start outperforming human investment decisions?

In this episode of The Institutional Edge, host Angelo Calvello, CEO of Rosetta Analytics, interviews Mark Steed, Chief Investment Officer of the Arizona Public Safety Personnel Retirement System. Mark discusses his hands-on approach to implementing AI in institutional investing, focusing on two primary use cases: operational efficiency through automated document processing and enhanced decision-making via machine learning. The conversation explores practical challenges including data security, board governance, and talent requirements. Mark shares his vision for AI-powered investment workflows and explains how proper governance frameworks can encourage innovation without compromising oversight in retirement fund management.

In This Episode:
(00:00) Introduction of guest Mark Steed, AI innovation discussion
(03:02) AI use cases: operational efficiency and decision making
(09:30) Manager selection process and generative AI applications
(16:37) Building governance frameworks for AI implementation and oversight
(24:44) Key requirements: talent, data, and compute infrastructure
(29:35) Future wishlist: AI-powered investment workflow automation
(32:36) Multi-agent systems and the future of investing
(33:52) Worst investment pitch: Formula One racetrack story


Like, subscribe, and share this episode with someone who might be interested!

Dr. Angelo Calvello is a serial innovator and co-founder of multiple investment firms, including Rosetta Analytics and Blue Diamond Asset Management. He leverages his extensive professional network and reputation for authentic thought leadership to curate conversations with genuinely innovative allocators.

As the "Dissident" columnist for Institutional Investor and former "Doctor Is In" columnist for Chief Investment Officer (winner of the 2016 Jesse H. Neal Award), Calvello has become a leading voice challenging conventional investment wisdom.

Beyond his professional pursuits, Calvello serves as Chairman of the Maryland State Retirement and Pension System's Climate Advisory Panel, Chairman of the Board of Outreach with Lacrosse and Schools (OWLS Lacrosse), a nonprofit organization creating opportunities for at-risk youths in Chicago, and trustee for a Chicago-area police pension fund. His career-long focus on leveraging innovation to deliver superior client outcomes makes him the ideal host for cutting-edge institutional investing conversations.

Resources:
Mark Steed Bio: https://www.psprs.com/about/psprs-executive-team/
Arizona Public Safety Personnel Retirement System: https://www.psprs.com/
Email Angelo: [email protected]
Email Julie: [email protected]
Pensions & Investments
Dr. Angelo Calvello LinkedIn


Episode Transcript

[00:00:00] Speaker A: In every organization, internally, you've got people who don't really have any knowledge of AI and machine learning, who are in positions of authority. And your board has various levels of sophistication and comfort with, you know, AI and machine learning. And so you're stuck with: we need to innovate here and move with the times, but also we've got to bring people along, but people who have other responsibilities also. And I think you do have to move forward with governance in mind, because one thing that will kill innovation really quickly is just poor governance and poor oversight. [00:00:32] Speaker B: Hey, everyone, I'm Angelo Calvello, host of The Institutional Edge, a podcast in partnership with Pensions and Investments. Thanks for joining us for another episode in our series on artificial intelligence and institutional investing. And I'm pleased to say that my guest today is Mark Steed, CIO of the Arizona Public Safety Personnel Retirement System. I'm excited to have Mark on the episode. Mark is one of the few asset owners with an academic background in predictive analytics and firsthand experience integrating AI into his plan's investment decision making. In this episode, Mark discusses PSPRS's AI use cases, implementation strategy, and future wishlist. Mark, it's great to have you on today's show. [00:01:18] Speaker A: Yeah, I'm happy to be here. This is fun. Anytime we can talk about AI, it's a good time. [00:01:22] Speaker B: You know, I got to say, I'm grateful that our mutual friend Mark Baumgartner, who's also a friend of the show, recommended you as a guest. And I gotta say, you know, after talking to you in our pre-call, I'm surprised our paths really hadn't crossed previously. [00:01:36] Speaker A: Yeah, me too. Yep. [00:01:38] Speaker B: And I'd say, I mean, for me, the surprise is you're one of the few senior asset owners who has firsthand knowledge of and firsthand experience with AI investment use cases. 
And I would have thought our approach at Rosetta Analytics would have caused our paths to cross before, but hey, here we are. And let's jump right into the topic. And the topic today: an asset allocator's AI use cases and wishlist. So let's start with the use cases, man. Give me your top use cases and their benefits, and if you could, you know, kind of take your time and break it down for me. [00:02:13] Speaker A: So I think there are really sort of two main use cases, one of them being just the ability to make things more efficient, so just operational efficiency. The other main benefit is really decision making. So when I think about operational efficiency, that's just using AI, to some extent machine learning, to streamline and automate a lot of the routine tasks that the investment office executes. So that's collecting documents from proprietary data sites, getting around the two-factor authentication. So going in, retrieving documents, downloading those documents, then extracting data from those documents. Because all of us have PDFs, and that's really what we're mostly worried about, just the PDFs. It's unstructured data, so then extracting relevant information from that data. So just to give you an example: I'm looking at a private equity fund. They give me access to their data room. I go in there and they've got a mountain of documents, PPMs, DDQs, whatever, spreadsheets, and I can go in there, automatically download all those documents, and then within those documents start to extract useful information. How many partners, how many partners to each portfolio company, portfolio company operating metrics, things about the track record, right? Fund sizes, vintages, things like that, who the compliance officer is, anything you might want to pull out and stick into a database you can then use to build predictive models to then help inform the front-end decision making. So that's sort of operational efficiency. 
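The extract-and-store step described above can be sketched minimally. The schema and field names here (vintage, fund size, carry, compliance officer) are illustrative assumptions, not PSPRS's actual system, and the rows stand in for whatever the document-extraction step produces:

```python
import sqlite3

# Minimal sketch of the "extract fields, stick them in a database" step.
# The field names below are illustrative assumptions, not a real schema.
conn = sqlite3.connect(":memory:")  # use a file path for a persistent store
conn.execute("""
    CREATE TABLE fund_metadata (
        fund TEXT PRIMARY KEY,
        vintage INTEGER,
        fund_size_mm REAL,
        carry_pct REAL,
        compliance_officer TEXT
    )
""")

# In practice these rows would come from the document-extraction step.
rows = [
    ("Example Fund IV", 2021, 850.0, 20.0, "J. Doe"),
    ("Example Fund V", 2024, 1200.0, 20.0, "J. Doe"),
]
conn.executemany("INSERT INTO fund_metadata VALUES (?, ?, ?, ?, ?)", rows)
conn.commit()

# The searchable database the team can then query or feed into models.
vintages = [v for (v,) in conn.execute(
    "SELECT vintage FROM fund_metadata ORDER BY vintage")]
print(vintages)  # [2021, 2024]
```

Once fields live in a structure like this, they can feed the predictive models discussed later in the episode.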
And there's a lot of other applications: writing investment memos, automatically writing those investment memos once you have a due diligence packet. So lots of ways to use the automation features that AI offers. And then in terms of decision making, there's kind of two components to this. One is just kind of the machine learning component, which isn't necessarily different from traditional kind of statistical techniques in some sense, in that most of the time you've got structured data, meaning you know kind of what your data set looks like, and you want to load it in and you want to figure out, hey, what variables matter? You know, there's an output variable I'm trying to predict, whether it's quartile performance or just absolute performance, and you have all these input variables, and the machine learning techniques can help with that. Machine learning technically is part of AI. And the reason why we have machine learning versus, say, traditional linear regression, which is what most of us are accustomed to, is because a lot of the relationships are nonlinear. And in fact, most of the data we have violates the assumptions of normality, which is sort of what you need to run a lot of the traditional regression models. And so when you look at small sample sizes, and a lot of us have small sample sizes, just in the traditional sense of statistical measures, we don't have enough data to really make high-confidence claims at sort of the 5% level or the 1% level. And a lot of the data is just not normally distributed, or it's not independent. [00:05:15] Speaker B: Right. [00:05:15] Speaker A: So we have all those problems, and I think machine learning helps with that part. But then there's this other avenue of AI, which is deep learning, where you really are kind of in the traditional kind of black box. 
And that's where you're just pointing it in the direction of information, whether that's in a document, or it's unstructured data, or it's a data set of some type. But, you know, it could be spreadsheets, whatever. And you're asking it to tell you what patterns matter, right? And I think that's a tremendously powerful technique that might highlight some patterns that the traditional investment office wouldn't be aware of. So that's kind of where our brain is at. I know that's a long answer to that first question, but I think those are the applications I see. [00:05:52] Speaker B: Let's go back to the efficiency that's gained. What type of AI are you using there? If it's proprietary, Mark, you know, I mean. [00:06:02] Speaker A: No, no. So I can say, like, all of this is just R&D for us. And so we just now got authorization to put the LLMs on our local machines. So part of the problem historically has just been data security, and still is, right, with data security and uploading stuff to ChatGPT and things like that, and what do they do with it? But we just got the authorization to put the large language models on our local machines. So for us, what we're going to start to do, once we point it in the direction of the documents. So you can use these robotic processes to kind of go in and just access the documents, and you can, you know, write those scripts internally to just go in and access documents and pull them down and set them in a specific location on your S drive. But then what we're going to do next is just use the LLMs, point those in the direction of our documents. And we're probably going to start with, there's Gemma, right, which is Google's, or Llama, which is Meta's, that are both fairly robust. 
So I'm just kind of sharing a little bit of our research and development here, and we're going to point one of those in the direction of our documents and see how it does, and start to train it on our own documents locally, because you don't need an Internet connection and your data stays local, to start to build and extract that documentation just for that proof of concept. Because right now it's just manual. Or you can compartmentalize it in your DDQs and just make everything, hey, just tell us who this person is and what this is and what this is, which is what we're currently doing, so that we can then move the information over to a database that people can search. But now we're going to start to automate that with the LLMs, and that'll just be the start of it. And then we'll start to train the LLMs on writing the investment memos and things like that. [00:07:45] Speaker B: So gen AI is going to really be kind of a foundational approach. [00:07:50] Speaker A: Yeah, I hope so. Yeah. And these are like exercises where you can confirm and verify what the machine is doing. So we're not fussing with the black box element yet. We're just saying, hey, you know, did you label the compliance officer as the right person? You know, did you identify carry as 20%? Because some groups call it carry, some will call it the performance bonus or whatever, right? So there's these different nomenclatures for the same thing. And the nice thing about having the machine do a lot of that is just that we can confirm whether it was accurate or not. Yeah. [00:08:24] Speaker B: So using this generative AI, you know, you talked about document analysis. Are you using it also for the manager selection process? I mean, there's a lot of data that you have to be pulling in, and I'm not sure if you use a consultant or not, but either way there's going to be a lot of data. 
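The verification step described above, checking whether the model labeled carry as 20% even though some groups say "carry" and others say "performance bonus," amounts to normalizing nomenclature onto canonical fields before a human checks them against the source document. A minimal sketch, where the synonym table and field names are assumed for illustration:

```python
# Map the varied nomenclature an LLM might return ("carry", "carried
# interest", "performance bonus") onto one canonical field so a human can
# verify it. The synonym table and field names are illustrative assumptions.
SYNONYMS = {
    "carry": "carried_interest_pct",
    "carried interest": "carried_interest_pct",
    "performance bonus": "carried_interest_pct",
    "compliance officer": "compliance_officer",
    "cco": "compliance_officer",
}

def normalize(extracted: dict) -> dict:
    """Canonicalize field names from an LLM's raw extraction output."""
    out = {}
    for key, value in extracted.items():
        canonical = SYNONYMS.get(key.strip().lower())
        if canonical is None:
            continue  # unknown fields would be flagged for manual review
        out[canonical] = value
    return out

raw = {"Performance Bonus": "20%", "CCO": "J. Doe", "Fund Mascot": "owl"}
print(normalize(raw))
# {'carried_interest_pct': '20%', 'compliance_officer': 'J. Doe'}
```

Because the canonical output is small and structured, confirming the machine's accuracy, as described above, is a quick side-by-side check.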
[00:08:40] Speaker A: We do, yeah, we do use a consultant, and we've always just run parallel processes, so we do our own work. And then, you know, hopefully they kind of come to the same conclusion. And I think for the most part, you know, we haven't had any issues with that. So on the screening side, I'd say yes and no at the same time. So when we get inquiries, what we don't have, and I think this is a change that we're making, is the ability for managers to enter information without contacting a staff member, but just enter information, say, through like a web portal or some other location, about their fund. And I imagine what will happen for us is, after we have enough observations, the machine learning AI algorithms can sort of start to say these factors matter, right? So they're sort of like your hunch in terms of what you think matters in terms of performance, but you look for the models to help inform that, and you need a number of observations before you're going to be confident that, hey, this actually might be accurate here. Then what you do is you put that on your web portal and you say, hey, these are the five or six things that we think matter most. Some of them, you might imagine what they are, you know, top performance in prior funds is maybe a good indication that it might continue, or maybe not, but things like that. And you put them on a, you know, a portal so that managers can kind of enter that information, and then you have a really good way to just, you know, scan for new ideas. We don't have that portal set up yet, and so right now what we're doing is, when we have a GP that's contacted us, you know, we're taking their information. Our due diligence is really sort of front-loaded. So we send a spreadsheet with all sorts of quantitative requests, and then they send that back to us, and then we're crunching it through our models. 
So we're not quite at the point yet where we've been able to sort of filter based on three or four criteria that matter. I think we'll probably get there soon, but you just need a number of observations before you have any confidence that the model outputs are accurate. I think when we get that, we'll start to say, okay, here's the filter. It's an 80/20 kind of a thing, so, you know, there might be some false negatives, but we're not too worried about that. That's, you know, the deals we don't do that turn out to be okay. We're only worried about the deals that we do, and we need to make sure that those ones are okay. [00:11:05] Speaker B: It's kind of interesting. You know, a theme that's run throughout your comments is this idea of verification. And the word I'll use is explainability. You want to be able to understand the decisions, the models, you know, the LLMs, or if you're using some kind of NLP, you want to understand, you know, where the decision's coming from. Is that just kind of a bedrock for you, like a ground truth? You need to have that verification and explainability in the process. [00:11:32] Speaker A: I mean, I think you want it if you can get it. You want to be able to explain how things are going, just like we do with kind of human reasoning, right? Where ideally what you want is somebody to kind of explain their view, cogently organized thinking, right, adhering to the best practices of kind of logic. But sometimes that doesn't happen. Sometimes you can't get there. And sometimes you're just left with these arguments where you have to say, I don't know, it's just kind of my gut. I just can't articulate it, you know? And so I think that's kind of an advantage that we have in our organization, where every recommendation that's relevant, we record. And you have to be very specific about what it is you're recommending or what you think will happen. 
So if you're a PM recommending investment in a company or a fund or whatever, you've just got to say, I think this. Here's a real clear definition of success. For example, this fund will outperform the S&P 500 by at least 2% over the next 12 months, and I'm 75% confident. And then we benchmark people, and we ask them to explain why they think that way. So there's always this ability to kind of go back and piece together sort of their logic. And the reason you want to do that is because it's more likely to be repeatable and you're less likely to have surprises. But sometimes you just can't articulate it. And there are times, as the CIO, where you just feel like, this is compelling, I can't quite articulate it, or maybe it violates kind of our traditional rules, but I still feel exercised about it and that it's going to perform. So we write these things down so that over time we can sort of look at these gut decisions, as well as the ones that weren't gut decisions, and start to say, like, well, actually they are pretty accurate, so what's going on here? Maybe we can pull this apart and understand it. And I think that's the same discipline we apply to the models, which is, we want to be able to explain what's going on here. Sometimes we can't, so we're going to benchmark these things. We're going to track them just to see, hey, if it's saying we should be 80% confident here, is it actually right 80% of the times it says that? And that does help us, but it's just a way of managing surprises. And if you can explain it, potentially you have better control over it, and that's why we're interested in it. Although, again, just like with humans, there's an element where sometimes you just can't explain it. I expect that's going to be the case with models, too. [00:13:44] Speaker B: I know that firsthand, given the reinforcement learning model we were using was a dark black box, man. 
[00:13:49] Speaker A: Yeah. [00:13:50] Speaker B: You know, it was a challenge. [00:13:51] Speaker A: Yeah. And ironically, I think some of the more complicated, more accurate models are the hardest to explain. And so if you look at some of the neural networks, there are these devices you can use, whether it's what's called feature attribution or these activation layers. With feature attribution, whether it's LIME or SHAP, you can kind of pull apart how much each of these features is sort of contributing to the output. Or with the activation layers, you can actually see what patterns the neural network is keying in on, for example, shapes. It might be keen on, like, outlines or things like that. So there are things you can do with some of these models. I don't want to say that the entire thing's a black box, right? Decision trees, I feel like, are on the more explainable side of kind of the AI machine learning spectrum, but some of the reinforcement learning, deep learning, generative stuff is really kind of hard to explain at a deep level, just because, I mean, it's effectively like trying to explain the synapses in the human brain that are going on at any point in time. They're just so fast, and so many layers and parameters, it's really, really kind of hard to explain. So I think what's important there is you come up with a governance framework as to how you're going to approach these models and these outputs, and how you benchmark them, and what decisions these models are actually making, and whether they're doing them without human oversight or not, which in our shop isn't happening. [00:15:16] Speaker B: Really, you're building a governance framework around the use of AI, is what I hear you saying. [00:15:22] Speaker A: Yeah, you have to. [00:15:23] Speaker B: And documentation. You've got, you know, documentation to support the structure. Yeah, but go ahead. I mean, you have to. 
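The feature-attribution idea mentioned above can be illustrated with a hand-rolled permutation-importance check, a simpler cousin of LIME and SHAP (not those libraries themselves): shuffle one input at a time and see how much the model's error worsens. The toy data and "fitted model" here are assumptions made for the sketch:

```python
import random

# Hand-rolled permutation importance: shuffle one feature at a time and
# measure how much the model's error degrades. Features the model relies on
# produce large degradations; ignored features produce none.
random.seed(0)

def model(x):
    # Toy "fitted model": depends strongly on feature 0, weakly on
    # feature 1, and not at all on feature 2.
    return 3.0 * x[0] + 0.5 * x[1]

X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]  # targets generated by the model itself

def mse(X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

base = mse(X, y)  # 0.0 by construction

importances = []
for j in range(3):
    col = [x[j] for x in X]
    random.shuffle(col)  # break the link between feature j and the target
    X_perm = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
    importances.append(mse(X_perm, y) - base)

# Feature 0 should dominate; feature 2 should contribute nothing.
print([round(v, 3) for v in importances])
```

SHAP and LIME are more principled versions of this same question: how much does each input actually drive the output?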
I don't know if a lot of people are thinking this way. Some people just use it, and of course they use it within, you know, a compliance framework. You mentioned, you know, you're putting in your own data, but it's not a machine connected to the Internet, et cetera. But talk about that as a governance framework. [00:15:48] Speaker A: Yeah. So I think this is a tough one, because in every organization, you know, you experience this, where internally you've got people who don't really have any knowledge of AI and machine learning, who are in positions of authority. And so internally you have that dynamic going on at your organization. And then for people like me, where we report to a board, your board has various levels of sophistication and comfort with AI and machine learning. And so you're stuck with, you know, this awkward, hey, we need to innovate here and move with the times, but also we've got to bring people along, but people who have other responsibilities also. And so how do we kind of bring these groups together? And I think you do have to move forward with governance in mind, because one thing that will kill innovation really quickly is just poor governance and poor oversight. So oversight is, I think it's kind of paradoxical, but it's as much part of the problem as it is part of the solution. It can be onerous and prevent any growth or innovation from occurring, and it can also be, you know, too lax, or it could be nonexistent. So for us, we've started to say, look, every decision is written down and it's tracked, and that's point number one. And that goes a long way in debiasing people in the discussion, because now you have kind of an objective score and it's not so subjective, and you can start to say, hey, here's this machine's predictions at the 75% threshold, here's its predictions at 70, at 65. 
And we start to track it, and we can say, hey, how many forecasts does it have at 70%? Well, we've done 10 of them. Okay, small sample size, but maybe it's right. You know, we'd expect 7 out of the 10 to be right if it were appropriately calibrated. And that goes a long way. You know, after you've done, say, 50 of these, you can say, look, we've got 50 of these, it's about 70% accurate for 70% confidence, and that gives you some level of comfort. But the other thing you can do, too, is start to just give it simple tasks that, like I said earlier, you can just verify. Right? Is it identifying the right things in documents, running certain analyses, doing value attribution bridges and things like that, say, in private equity, that you've done yourself, and just double-checking its work. And that's one way to build it. But I think along with those hard rules internally, about, hey, we're benchmarking decisions and things like that, I think you also have to educate your constituents, whether that's your board, your executive directors, other people on your investment team, and just create that fluency with the vernacular to start to get them comfortable with it. Because it's like going to another country and you're listening to people talk and you don't understand what they're saying. There's going to be a natural level of distrust. And so if you don't understand the language of AI and machine learning, there's just going to be a natural distrust. So I think that's sort of another prong to the approach. [00:18:44] Speaker B: Do you use, like, workshops to kind of build that educational level, or does that occur, you know, for example, in a board meeting? I mean, how do you get them fluent? [00:18:54] Speaker A: Right now, it's mentioning it on occasion during the board meetings. And, you know, our board meetings are fairly svelte, you know, there's not a lot of Mickey Mousing around. 
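The calibration tracking described above, asking whether forecasts made at 70% confidence actually come true about 70% of the time, reduces to bucketing recorded forecasts by their stated confidence and comparing hit rates. A minimal sketch; the forecast records are made up for illustration:

```python
from collections import defaultdict

# Sketch of the benchmarking discipline: group recorded forecasts by stated
# confidence and compare the observed hit rate to that confidence level.
# These (confidence, outcome) records are illustrative, not real data.
forecasts = [
    (0.70, True), (0.70, True), (0.70, False), (0.70, True), (0.70, True),
    (0.70, False), (0.70, True), (0.70, True), (0.70, False), (0.70, True),
    (0.75, True), (0.75, False), (0.75, True), (0.75, True),
]

def calibration_table(forecasts):
    buckets = defaultdict(list)
    for conf, correct in forecasts:
        buckets[conf].append(correct)
    # For each confidence level: (number of forecasts, observed hit rate)
    return {conf: (len(hits), sum(hits) / len(hits))
            for conf, hits in sorted(buckets.items())}

table = calibration_table(forecasts)
print(table)  # {0.7: (10, 0.7), 0.75: (4, 0.75)}
```

With only 10 or 14 forecasts per bucket the comparison is noisy, which is exactly the small-sample caveat raised above; the table only becomes persuasive as observations accumulate.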
We're talking about performance and, you know, governance. And so on occasion I'll do my best to sort of drop in a nugget about how we're doing things and why we're doing it. And I suspect we've been sort of waiting to kind of get the LLM initiative launched, and once we get there, I think we'll have a lot more to say. And probably what we'll do is some sort of semiannual education with the board, to just sort of formally go through, again, set the stage for the ecosystem and the various aspects of AI and how it's impacting their day-to-day, in ways that they can understand, and then talk specifically about what we're doing with it and what decisions it's making. Because I've got some trustees who are really comfortable and like the idea that we're kind of moving in this direction, and are comfortable with my background and experience in us doing that, and then others for whom cautiously optimistic, I think, would be a generous interpretation, a little more cynical. And, you know, you have articles like the one yesterday in the Wall Street Journal about, you know, AI overriding its code so you can't shut it down, and then it kind of sets you back, you know, Terminator. Yeah, exactly. Yeah. So I suspect, you know, as we broadly roll it out, it'll be under very controlled circumstances. And then when we get to sort of the predictive side of it and letting it sort of make decisions, we'll run those in parallel with the human, you know, and this is just the deep learning side of it. You know, we're already using the machine learning algorithms and things like that, but those are, I think, easy to explain and interpret. It's the deep learning, where you're basically just giving it to a black box and saying, hey, here's a bunch of due diligence material, tell me which fund is going to outperform, say, the other funds, or, you know, what variables matter most. That's where 
I think you have to run those in parallel with the human decisions, over a number of observations, to get any comfort. [00:20:50] Speaker B: Let me go back to use cases for a minute. I hear from managers, and I also hear from allocators, that they're looking at sentiment analysis, trying to scrape the web and then kind of detect sentiment, because sentiment is pre-price. It's there before price is manifest. What do you think about this? Whether you're using it or not, just intellectually, is that a tool that you want to focus on? [00:21:18] Speaker A: Well, I'm a little more cynical, because I don't know that I come from the perspective that the sentiment is sort of pre-trade. I think you can argue that causation could go the other way, that people make the trades and then they've got to talk the book and create sentiment if it's not going the right direction. But anyway, let's just assume that sentiment is sort of pre-trade. I looked into this a number of years ago. It was probably 10 years ago. I mean, someone's probably been doing this for 20 years, but it felt like it really entered the mainstream in the predictive analytics circles about 10 years ago. And back then, everyone was having trouble with, hey, if I'm looking at a review for a vacuum cleaner and it says this vacuum cleaner sucks, you know, is the model interpreting that the right way, as a good review or a bad review? But I think my bias would be to say I'm not so sure that the sentiment is actually being reflected in sort of the aggregate levels of flow. I think there are a lot of regulatory requirements that are also driving allocation decisions, in terms of what you have to rebalance, you know, what you have to buy or sell to rebalance to stay in compliance. 
And, you know, who's got to buy Treasuries because they have to have a certain amount of credits at this level. And so I've always been a little dubious of sentiment analysis, because I just feel like there's too much erosion between the sentiment and then what's actually happening in the trade. I think it's relevant, but I don't know how relevant. [00:22:47] Speaker B: You know, I agree with you. Historically it's been around for a little bit. What's kind of gotten my attention, just as an aside, is the amount of disinformation that's out there. [00:22:57] Speaker A: Right. [00:22:57] Speaker B: And it's very difficult to detect, you know, for a machine to detect the truthfulness. And it's also difficult for humans to do it, especially when you have to do it at a certain velocity. If you're looking at X feeds or Bluesky feeds, there's just so much disinformation. I think it erodes the benefits of any kind of possible sentiment analysis. [00:23:17] Speaker A: Yeah, I think that's right. Yeah, for sure. I mean, the media with, I mean, sort of the highest periodicity, right? It seems like the services that are spitting out information the fastest are also the ones that have the most misinformation. And so I think it does make that job pretty hard. [00:23:36] Speaker B: Just shifting gears for a second. I mean, we've talked a little bit now. If you've got these use cases, and you talked about governance, making sure there's good governance documentation around it, what are the two or three other key features that you need as an asset owner to actually do this stuff? I mean, okay, governance. And I'm going to guess you're going to tell me talent, because you can't do this alone given your full-time job. But what are the few things you need to actually accomplish this? 
[00:24:05] Speaker A: Yeah, you do have the multiple-dependency problem, which makes it hard to get off the ground. So you do need the talent in-house. So we have two data scientists as part of our investment program. One of them was a younger investor turned data scientist who, you know, years ago went back to school to get an advanced degree, but kind of came from the risk standpoint and learned data science, and has been our data scientist for probably seven or eight years at this point. And then another one came from food science and was just a data scientist naturally, and is now learning investments. And so I think it's important to kind of have both of those represented on your team. And I think it is important, and it's okay, to have people on your team at a certain point who don't know what a stock or a bond is, because I think that's part of the advantage. And again, we haven't talked about this, but bias in these models is important. Now, I think you're less likely to have bias with some of the deep learning, bias in the traditional sense of, like, you know, human bias, but certainly with the traditional statistical techniques and machine learning. There's bias even in what information you give the models to look at to start with. So you can't convince me that these models are unbiased. So I think it's important to have that discipline on the team, so you have people that have less of a traditional investment bias in terms of what they think should matter and are just looking at raw data. You also need data, and that's another big problem that we all have. I mentioned the PDFs. Most of what we have is unstructured data locked up in PDFs, and some human's going to have to go through and manually extract information and put it somewhere. So you have unstructured data. You also just don't have a lot of it. 
And so the biggest problem for most of us is with the alternative investments; that's where the information is most difficult. A lot of it's in PDFs, a lot of it's locked down, because your partners aren't going to give you Word docs or Excels that aren't locked down, so you can't change them after they're given to you. So you've got a lot of data problems on that side, and you don't have a large data set by any means. In alternative investments, most of us are looking at a handful of portfolio companies and private equity funds that maybe update quarterly. Like I said at the start, in a statistical sense that's a pretty small sample size. It's not like you've got 50,000 or 100,000 people using credit cards and a huge set you can extract insights from. So you need talent and data, and compute is probably the other one. Once you have the data, you've got to have some pretty serious horsepower. I mentioned our initiative with the LLMs. Take Llama, Meta's open-source LLM, which you can run on your own machine with a tool like Ollama. If you use the 70-billion-parameter version, you probably need about 96 gigs of memory, so that's a pretty robust machine, probably a serious gaming PC, to handle it. And that version has a context window of maybe 128,000 tokens, a token being basically a word or a piece of a word; the context window is how much text the model can ingest and synthesize at once. So if you think about a 10-Q, 100 or 200 pages, that's probably 50 to 75,000 tokens. You'd need the more robust version of the model and a pretty strong computer just to work through a single 10-Q.
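The sizing question above can be sketched as a back-of-envelope check on whether a filing fits in a local model's context window. Note that context windows are measured in tokens (words or pieces of words), not model parameters; the figures for pages, words per page, and tokens per word below are illustrative assumptions, not measurements.

```python
# Rough check: does a document fit in a local LLM's context window?
# All numeric figures here are illustrative rule-of-thumb assumptions.

WORDS_PER_PAGE = 400       # assumed dense filing page
TOKENS_PER_WORD = 1.33     # common rule of thumb (~0.75 words per token)

def estimated_tokens(pages: int, words_per_page: int = WORDS_PER_PAGE) -> int:
    """Estimate the token count of a document of the given page length."""
    return round(pages * words_per_page * TOKENS_PER_WORD)

def fits_context(pages: int, context_window: int = 128_000) -> bool:
    """True if the estimated token count fits in the model's context window."""
    return estimated_tokens(pages) <= context_window

# A ~150-page 10-Q comes out around 80k tokens, inside a 128k window;
# a multi-document data room quickly exceeds it.
print(estimated_tokens(150), fits_context(150), fits_context(500))
```

Under these assumptions a 100-page filing lands near the low end of the 50-75k token range quoted above, which is why single documents are feasible locally while multi-document analysis pushes toward bigger context windows or chunking.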
So if you're thinking about multiple documents, the requirements just extrapolate from there. [00:27:42] Speaker B: You're talking about doing this locally; you're not talking about doing it in the cloud first. [00:27:45] Speaker A: For now. For now, yeah. [00:27:47] Speaker B: That must be a security issue, going into the cloud, I assume. [00:27:51] Speaker A: Yeah. While we investigate cloud security and get our arms around the enterprise solutions, what we're doing right now is basically R&D: can we get proof of concept? It's not going to cost us anything, really; it's open source, put it on your machine. And we're working with one of our former PMs who retired last year, a computer scientist, who said, hey, I think we're onto something here, this is cool, I want to do that full time. So he's probably going to be consulting with us to work on these use cases with the LLMs. And if we get good proof of concept using the LLMs in the ways I explained, then I think we'll go to the enterprise solutions and start to see if we can get our arms around the security there, because we'll probably need more compute. [00:28:35] Speaker B: Yeah. I wanted to ask, then, what's next? You're kind of building this library, and there are different applications, but let's go to the wish list. [00:28:46] Speaker A: Yeah, yeah. [00:28:47] Speaker B: Let's assume you find satisfaction in these tests and early implementations. What would you like to see AI do for you, given you're the CIO of a very large public plan? [00:29:00] Speaker A: I'm certainly interested in the efficiency side of it.
I want my team removed from these high-volume but low-value-add tasks: fetching documents, extracting data from those documents that we just want to use for reporting purposes. A lot of those become variables that we then feed into the predictive models. But at the very least, the efficiency side is crucial. I'm actually surprised, when I have conversations with colleagues, how many of them are hesitant to use AI, because most of us have really spartan staffs. I think there's just a bias against it, but it can save a ton of time. I would think that institutional investors like us would be at the forefront of this, because we all have pretty small budgets. So here's where I see this going: we'll ask a GP for a certain set of information that we know is highly relevant. It'll be a pared-down data request based on the feedback from our models, which will have analyzed all of the funds we've invested in to this point. We'll ask a pared-down set of questions that matter, and they'll answer; they won't be responding to numerous data requests from us. Then we'll ask for access to their data room and get it. The model will pull down all the documents, extract all the relevant data, and conduct all the analyses, and we're doing some of that already. Then it'll write the investment memo, which will be reviewed by the PM and the investment team internally. And then we can just sit around and talk about what the information means, not have to go looking for it, fetching it, editing it, things like that. That's where I see us going. [00:30:52] Speaker B: To me, it sounds like you're building a kind of multi-agentic system for the future.
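The hands-off workflow described above (pull documents from the data room, extract the relevant data, draft a memo for human review) can be sketched as a simple pipeline. Every function name, field, and figure here is a hypothetical placeholder; a real system would use an LLM for the extraction and drafting steps rather than the toy parser shown.

```python
# Minimal sketch of the diligence pipeline: fetch -> extract -> draft memo.
# Names, fields, and numbers are hypothetical placeholders, not a real system.

def fetch_documents(data_room: dict[str, str]) -> list[str]:
    """Stand-in for pulling every document out of a GP's data room."""
    return list(data_room.values())

def extract_fields(doc: str) -> dict[str, float]:
    """Stand-in for LLM extraction of structured data from unstructured text."""
    fields = {}
    for line in doc.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            try:
                fields[key.strip()] = float(value)
            except ValueError:
                pass  # skip narrative lines with no numeric value
    return fields

def draft_memo(fields: dict[str, float]) -> str:
    """Stand-in for memo generation; a PM reviews this output."""
    lines = [f"- {k}: {v}" for k, v in sorted(fields.items())]
    return "DRAFT MEMO (for human review)\n" + "\n".join(lines)

# Toy data room with one quarterly report containing two numeric fields.
data_room = {"q3_report.pdf": "net_irr: 14.2\nnarrative text\ntvpi: 1.8"}
merged: dict[str, float] = {}
for doc in fetch_documents(data_room):
    merged.update(extract_fields(doc))
print(draft_memo(merged))
```

The point of the sketch is the hand-off at the end: the machine does the fetching, extraction, and first draft, and the human review happens on the memo, not the raw documents.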
[00:30:58] Speaker A: Yeah, very much. Could be, yep. [00:31:00] Speaker B: Yeah, I could see where you've got one agent doing the assembly of the information, another agent doing the reading, and then a compliance and governance feature in there. I wrote a piece on this, I think. [00:31:14] Speaker A: Yeah, you wrote about that. I think it's the agent hospital, right? The white paper you wrote about, where you have one agent doing data recon, another one cleaning data, another one writing memos. And you even have a hierarchical system: what they're doing then reports up to the humans. I could totally see that. [00:31:39] Speaker B: Yeah. We're a ways off from that. Conceptually, I love talking about it, but there are so many barriers to get there. [00:31:46] Speaker A: Yeah. [00:31:47] Speaker B: So, Mark, I want to wrap this up and say thank you, but I'm going to try a quick summary. First, your use cases are built around two things: let's call them improved efficiency and improved decision making. Is that a fair summary? [00:32:05] Speaker A: That's fair. [00:32:06] Speaker B: Those are the two big use cases. And underneath them, you've talked about specifics as they relate to analyzing opportunities, structuring unstructured data in a way that can be read, and getting to the point where you have an AI assistant. Now, clearly you've got a governance structure, you've got your own knowledge, you've got some data science talent around you.
You're kind of native in this space, but your colleagues, as you point out, are not quite there yet. And I've always scratched my head wondering about that first piece: you've got limited headcount and a limited budget, so if you could use something within a very constrained environment, it'd be a good thing. [00:32:52] Speaker A: Yeah, that's right. [00:32:53] Speaker B: So. [00:32:53] Speaker A: That's right. Yep, nailed it. [00:32:54] Speaker B: I'm glad I got it. But I've got to ask you my final question for all my guests: what's the worst investment pitch you ever heard? [00:33:02] Speaker A: My glib response is that the worst investment pitch is one that's never given. If you have a chance to take a swing, take a swing. Now, that said, somebody took me up on that offer, and I was part of a pitch that was kind of awkward. This was 15 years ago, and it was an investment into Mexico. The BRICs were kind of hot back then, and I was like, all right, let's hear this. And it wasn't so much an investment in Mexico as it was a Formula One racetrack in Mexico, I think in Monterrey. It was one of those where you kind of want to hear it out, because every idea sounds crazy until some of them are proven and actually aren't that crazy. But this one was crazy. They had offered to put us in a Formula One race car to test it out; I don't know what that had to do with anything. We didn't take them up on that offer, and we said no. But they also asked for the pitch books back, because they were beautiful, really high-gloss covers. They had everything. I mean, they just planned this out.
And when we said no right then and there, I mean, normally we go back and think things through, but I was convinced this was a no. We said, hey, it's not something we do. And they said, okay, thanks for your time. But by the way, can we have the books back? We only have three, and you have our three. So I was like, well, bless their hearts for trying. [00:34:31] Speaker B: It's a tight budget they're on. [00:34:32] Speaker A: Yeah. [00:34:33] Speaker B: A little concerned about how much runway they have. [00:34:35] Speaker A: Yeah, exactly. [00:34:37] Speaker B: Well, this is cool. Thank you very much again, Mark. I enjoyed it. I'm glad that our paths have crossed, and I certainly appreciate you sharing your knowledge and experience. [00:34:45] Speaker A: Yeah, thanks, Angelo. I appreciate the invite. It was fun. [00:34:48] Speaker B: Thanks for listening. Be sure to visit P&I's website for outstanding content and to hear previous episodes of the show. You can also find us on P&I's YouTube channel; links are in the show notes. If you have any questions or comments on the episode, or have suggestions for future topics and guests, we'd love to hear from you. My contact information is also in the show notes, and if you haven't already done so, we'd really appreciate an honest review on iTunes. These reviews help us make sure we're delivering the content you need to be successful. To hear more insightful interviews with allocators, be sure to subscribe to the show on the podcast app of your choice. Finally, a special thanks to the Northrop family for providing us with music from the Super Trio. We'll see you next time. Namaste. [00:35:41] Speaker C: The information presented in this podcast is for educational and informational purposes only. The host, guests and their affiliated organizations are not providing investment, legal, tax or financial advice.
All opinions expressed by the host and guests are solely their own and should not be construed as investment recommendations or advice. Investment strategies discussed may not be suitable for all investors as individual circumstances vary.
