How to Pitch an AI Investment Strategy

The Institutional Edge: Real allocators. Real alpha.
January 13, 2026 | 00:14:49

Show Notes

What if transparency (not secrecy) is actually your competitive advantage when pitching AI strategies?

In this solo episode of The Institutional Edge, host Angelo Calvello, PhD, co-founder of Rosetta Analytics, shares critical lessons from nine years of pitching AI-based investment strategies to institutional allocators. Angelo breaks down four essential lessons for managers: defining AI precisely and building trust through transparency, disclosing model sources and demonstrating technical ownership, emphasizing human-AI collaboration over autonomy, and making explainability non-negotiable. Drawing on research from Gary Marcus and Anthropic, plus real allocator feedback, Angelo reveals why institutional investors aren't ready for fully autonomous AI strategies and provides practical frameworks for successful fundraising conversations.

In This Episode:

(00:00) Introduction to pitching AI strategies and the allocator knowledge gap

(01:08) Lesson 1: Define your AI precisely and build trust through transparency

(04:10) Lesson 2: Disclose model source and demonstrate technical ownership

(07:23) Lesson 3: Emphasize human-AI collaboration, not AI autonomy

(09:02) Lesson 4: Explainability techniques, black box challenges, and closing recap

Send me your ideas on how to pitch an AI strategy!

Like, subscribe, and share this episode with someone who might be interested, and please take time to leave us a review!

Dr. Angelo Calvello is a serial innovator and co-founder of multiple investment firms, including Rosetta Analytics and Blue Diamond Asset Management. He leverages his extensive professional network and reputation for authentic thought leadership to curate conversations with genuinely innovative allocators.

As the "Dissident" columnist for Institutional Investor and former "Doctor Is In" columnist for Chief Investment Officer (winner of the 2016 Jesse H. Neal Award), Calvello has become a leading voice challenging conventional investment wisdom.

Beyond his professional pursuits, Calvello serves as Chairman of the Maryland State Retirement and Pension System's Climate Advisory Panel, Chairman of the Board of Outreach with Lacrosse and Schools (OWLS Lacrosse), a nonprofit organization creating opportunities for at-risk youths in Chicago, and trustee for a Chicago-area police pension fund. His career-long focus on leveraging innovation to deliver superior client outcomes makes him the ideal host for cutting-edge institutional investing conversations.

Resources:

Email Angelo: [email protected]
Email Julie: [email protected]
Pensions & Investments
Dr. Angelo Calvello LinkedIn


Episode Transcript

[00:00:04] Speaker A: Hey, everyone. Welcome to the Institutional Edge, a weekly podcast in partnership with Pensions & Investments that cuts through the noise of institutional investing. I'm Angelo Calvello, and today we're going to do something a little different. No guest, just me. I want to talk about something I've lived through for nine years as co-founder of Rosetta Analytics, and that is pitching AI-based investment strategies to institutional allocators. Look, I've learned some things the hard way. I've learned what works, what doesn't work, and why. That first meeting can make or break your entire fundraising effort. So if you're a manager trying to raise capital with an AI-based strategy, this one's for you. Because what I've seen is that allocators get excited about AI's potential, right? But at the same time, they're suspicious of AI washing. You know, managers who oversell what they can actually do. Most allocators conceptually get the possibilities, but they're still early in developing the technical chops to really evaluate this stuff. So there's this tension, there's this knowledge gap, and it creates both an opportunity and a challenge for you. But here's the thing. If you don't nail that first meeting, you're done. Fundraising efforts are effectively over. Let me share four critical lessons that can help you succeed. And honestly, you won't find these anywhere else on YouTube, especially lesson three. Okay, let's start. Lesson one: define your AI precisely and build trust through transparency. Sounds obvious, right? But I can't tell you how many managers screw this up. You've got to start by explicitly telling them what you mean when you say, "We use AI." And here's why. With traditional investment pitches, everybody's using the same language. You just say momentum strategy, value investing, top-down, bottom-up. Everybody knows what you're talking about. But with AI, there's no shared foundation. You're literally starting from scratch in many cases. So you need to build common ground before you get into your strategy or performance. And it's important not to dumb it down. That's a mistake. Use the technical terms. Explain what type of AI you're using: NLP, random forests, autoregressive models, deep neural nets, and why your choice fits the problem you're trying to solve. Yeah, yeah, yeah, allocators might not know these terms, but here's what a good explanation does. It shows you really understand your models and you're willing to be their technical resource. That builds confidence, not confusion. Get into the details of your implementation. How do your models actually work? What are the design principles? What are the training methodologies? What kind of data are you using, and what are the sources of this data? And importantly, what metrics are you using to make sure the model's solid, and how do you use these metrics to evaluate performance in the real world? And here's something that might surprise you. Don't be afraid of reverse engineering. Don't worry about giving away the secret sauce. Look, AI model development involves hundreds of little decisions that humans make. Every implementation is unique because of that. And candidly, few allocators have the desire or resources to reverse engineer your models. So better to be forthcoming than cagey. Allocators are tired of hearing managers say, hey, if I tell you this, I'll have to kill you. And be honest about what works and what doesn't. Talk about the research projects and experiments that have failed.
Because, you know, AI development is iterative. And honestly, you're going to fail way more than you succeed. Being upfront about that, yeah, it reinforces your credibility. It shows you have realistic expectations about what this technology can and cannot do. Be sure to tell them where the AI initiative came from. Is it a C-level push? Is it from the investment team? It's important allocators know that the firm is actually committed to this, that you've got the resources for the long haul, and that this isn't just some experiment or hobby that somebody's playing with. And always, always frame everything in terms of better investment outcomes for clients. Technology for its own sake, forget about it. That doesn't raise money. You're using AI to achieve better outcomes for clients. Okay, let's go to lesson number two: disclose your model source and demonstrate technical ownership. Be upfront. Did you build your model or are you using third-party providers? Hey, this is going to come out in due diligence anyway, so address it right up front. Using third-party models isn't a problem by itself, but you need to show you understand the risks, risks like data privacy, supply chain issues, model rigidity, and that you've got strategies to deal with these challenges. And hey, if you're using large language models, be specific. Which large language model? Why that one? How are you dealing with the limitations, like hallucinations, potential model collapse, and the fact that they can't really do advanced reasoning? Here's something critical about LLMs. They're great at operational stuff like research summaries, automating tasks, and basically generating reports. But they don't generate reliable, actionable insights for portfolio decisions. Not yet. They fall short every time. And as Gary Marcus, the emeritus professor from NYU, says about LLMs, you can't simply drop ChatGPT or Claude into some complex problem and expect it to work reliably. If you're pitching strategies based solely on LLMs, allocators might think it's AI washing. And if you're using AI agents, which run on LLMs, be ready to explain exactly what they do, how you measure their contribution, and how you're dealing with their limitations. And there are serious limitations: hallucinations, no persistent memory. And this is from Anthropic's recent research, something called agentic misalignment. It's a real risk. We're talking about models doing malicious stuff like blackmail and data leaks. Marcus warns nobody in the industry has a clue how to stop this from happening. If you have built your models in-house, well, then talk about your team. Who are they? Where'd they come from? What's their background? How did you recruit them? How do your AI engineers work with the people who actually understand investing? And bring your engineers to the first meeting. But make sure they're trained to talk to clients. This shows they're doing real commercial work and not academic research. And please skip the, you know, the famous statement that, oh, we have 200 PhDs. Forget about it. Allocators don't want to fund a research lab. They want to see a small, talented, aligned team that's focused on commercial results, people who can actually build and deploy stuff that works. And finally, outline your research agenda. How do you evaluate new AI developments? What are you looking at for the future, for new applications to make things better? This shows you're committed beyond what you're doing right now. All right, let's go to lesson three.
And lesson three means you've got to emphasize human-AI collaboration, not AI autonomy. Make it crystal clear, crystal clear, that your models augment human decision-making. They don't replace it. Map out exactly where AI sits in your investment process and what it's doing at each stage. Talk about the safeguards, your override protocols. Walk them through the manual review process. When can humans intervene, and how? What triggers an intervention? How do you actually make the final investment decisions? And here's the hard truth, and I mean this because I've suffered the slings and arrows: allocators are not ready to invest in fully autonomous AI strategies. Period. Full stop. Doesn't matter how good your team is, doesn't matter how transparent your models are, doesn't matter if your track record is incredible. Josh Adler from Raytheon said it best: it'll take time before we trust AI with investment decisions related to strategy and market insights. So if your strategy is built on fully autonomous, self-learning systems, expect serious fundraising challenges, even if your results are better than your peers. That's just where we're at right now. So let's move to lesson four. And lesson four is that explainability is non-negotiable. Take it from me, allocators will not invest in AI strategies where you cannot explain what the model is doing. Yeah, yeah, yeah, there's irony here because human decision-making is just as opaque, right? But again, that's the standard we're being held to right now. Allocators need explanations for performance. Why did you crush it in 2020? And why did you struggle in 2025? They need these narratives to make an allocation, and CIOs especially need them when defending performance to their boards. As Julia Sommer Lagarde, the head of risk and AI at Industriens Pension Fund in Denmark, put it: investment managers, our CIOs, everyone needs to understand this is not just a black box. Now this is especially tough if you're using deep learning and deep reinforcement learning like we used at Rosetta. The internal mechanics of these systems are inherently opaque. You might be tempted to just use some kind of post hoc interpretability method, maybe LIME or Shapley values, to try to explain what's happening. But you know, the research shows these techniques don't reliably capture the relationship between inputs and outputs. These techniques might even produce misleading explanations. Hey, it's better to just be honest and build trust, rather than overselling your ability to look inside the black box and, in doing so, potentially destroying confidence later. When I was pitching black box strategies, I used a few techniques that I thought worked pretty well. I talked about acetaminophen, Tylenol. Hey, Tylenol's been around for 50 years, and we still only partially understand how it relieves pain and reduces fever. Yet we know it's safe and effective because it's been validated extensively in randomized controlled trials. That's the gold standard for medical interventions, and it should be the same for AI systems. I'd explain that we use the same gold-standard core training and testing principles that all the leading AI companies and researchers use: proper data splits, robust validation protocols, rigorous risk-focused testing. The integrity of your development process, combined with a strong team and live performance. Hey, man, that goes a long way towards addressing allocators' concerns. I also found it useful to point out that the black box is already in their lives.
LLMs, Netflix recommendations, Amazon suggestions, Spotify playlists, Waymo's autonomous vehicles. All of these use deep neural nets, black boxes, in their foundations. And because I'm formally trained as a philosopher, believe it or not, I'd sometimes reference Aristotle. Aristotle wrote in the Nicomachean Ethics that we shouldn't demand the cause in all matters alike; it is enough in some cases that the facts be well established. Well, I mean, maybe the Aristotle thing was a bit too much, but you know, some allocators appreciated the effort. You know, what are you going to do? But look, here's the thing: CIOs have to explain performance to their boards, especially when things go south. And they will go south. Only a few exceptionally trained, forward-thinking allocators are going to risk their reputation on black box strategies. You can't build a viable fundraising strategy by trying to find these unicorns. There you have it. Four lessons from me being in the trenches pitching AI strategies to some of the smartest allocators in the world. My tips: define your AI precisely and build trust through transparency. Disclose the source of your models and demonstrate technical ownership. Emphasize human-AI collaboration, not autonomy. And make explainability non-negotiable. So your next challenge: taking this advice and turning it into a compelling 12-slide pitch deck. Ironically, that may be the perfect time to let AI help you out. I published a version of these ideas in an op-ed in Pensions & Investments in April 2025. We'll put a link in the show notes so you can check that out. I just want to tell you thanks for listening. Thanks for supporting the Institutional Edge. If you found this podcast useful, share it with a manager who's getting ready to pitch their AI strategy. If you have any ideas on how to pitch an AI strategy, please share them in the comments. And if you've got questions or you want to keep the conversation going, DM me on LinkedIn or send me an email. Be sure to visit P&I's website for great content and to hear previous episodes. You can also find us on P&I's YouTube channel. And if you haven't already done so, we'd appreciate you subscribing to our podcast series and giving us an honest review on iTunes or any of the platforms. These reviews help us make sure we're delivering what you need. I just want to say our music, which I so enjoy, is provided by the Super Trio, courtesy of the Northrop Family. Until then, namaste. [00:14:23] Speaker B: The information presented in this podcast is for educational and informational purposes. The host, guests, and their affiliated organizations are not providing investment, legal, tax, or financial advice. All opinions expressed by the host and guests are solely their own and should not be construed as investment recommendations or advice. Investment strategies discussed may not be suitable for all investors, as individual circumstances vary.
