The AI-ESG Paradox: Why Assessing AI's Impact Defies Simple Metrics

September 02, 2025 | 00:34:21
The Institutional Edge: Real allocators. Real alpha.

Show Notes

What methodological breakthrough is helping institutional investors solve the AI-ESG paradox?

This week, host Angelo Calvello interviews Dr. Liming Zhu, Research Director at CSIRO's Data61 and a leading voice in responsible AI development. Dr. Zhu reveals why traditional ESG metrics fall short when measuring AI's complex impacts, from the staggering energy consumption of model training (equivalent to 350,000 households annually for early ChatGPT) to nuanced second-order effects across stakeholder groups. The conversation explores how AI's diverse applications resist standardized assessment, requiring sophisticated frameworks that capture both immediate environmental costs and long-term societal implications. Dr. Zhu presents the practical assessment methodology he developed with CSIRO colleagues, offering asset owners a pathway through the measurement complexity that defines modern AI investing.

In This Episode:
(00:00) Introduction to AI's environmental impact and energy consumption
(02:00) Carbon footprint breakdown: training versus usage phases
(05:08) Energy sources and renewable scheduling for AI operations
(09:22) Grid stress and national energy infrastructure challenges
(10:57) Resource utilization efficiency and baseline comparisons
(14:39) Water usage for cooling and community impact concerns
(19:38) Human labor implications and global south workforce
(26:49) AI's potential to solve climate problems and accelerate science

Like, subscribe, and share this episode with someone who might be interested!

Dr. Angelo Calvello is a serial innovator and co-founder of multiple investment firms, including Rosetta Analytics and Blue Diamond Asset Management. He leverages his extensive professional network and reputation for authentic thought leadership to curate conversations with genuinely innovative allocators.

As the "Dissident" columnist for Institutional Investor and former "Doctor Is In" columnist for Chief Investment Officer (winner of the 2016 Jesse H. Neal Award), Calvello has become a leading voice challenging conventional investment wisdom.

Beyond his professional pursuits, Calvello serves as Chairman of the Maryland State Retirement and Pension System's Climate Advisory Panel, Chairman of the Board of Outreach with Lacrosse and Schools (OWLS Lacrosse), a nonprofit organization creating opportunities for at-risk youths in Chicago, and trustee for a Chicago-area police pension fund. His career-long focus on leveraging innovation to deliver superior client outcomes makes him the ideal host for cutting-edge institutional investing conversations.

Resources:
Dr. Zhu's Bio: https://liming-zhu.org/about-me

Dr. Zhu's research related to the episode:

The framework: https://www.csiro.au/en/research/technology-space/ai/Responsible-AI/RAI-ESG-Framework-for-investors
Responsible AI research for a general audience: https://www.csiro.au/en/research/technology-space/ai/responsible-ai
The 2024-25 GenCost final report: https://www.csiro.au/en/news/All/News/2025/July/2024-25-GenCost-Final-Report
Responsible AI research for a technical audience: https://research.csiro.au/ss/team/se4ai/responsible-ai-engineering/
Two recent books he co-authored:
https://www.amazon.com.au/dp/0138073929
https://www.amazon.com/Engineering-AI-Systems-Architecture-Essentials/dp/0138261415/
Article Angelo co-authored: https://inv.institutionalinvestor.com/article/2em204zws6p4dnrm00npc/opinion/allocators-do-the-benefits-of-ai-really-outweigh-the-costs
Email Angelo: [email protected]
Email Julie: [email protected]
Pensions & Investments
Dr. Angelo Calvello LinkedIn


Episode Transcript

[00:00:00] Speaker A: One statistic, for a very early and not very powerful version of ChatGPT, like 3.5: I remember the figure to be around 350,000 households using electricity for a whole year, that's the energy it took to train one version of a not very powerful model. So that's a lot of energy being used.

[00:00:22] Speaker B: Hi everyone, I'm Angelo Calvello and I'm the host of The Institutional Edge. Thanks for joining us for another episode in our series on artificial intelligence and institutional investing. Today we're going to focus on a topic that is just beginning to get the attention of asset owners around the globe, and that topic is the environmental and social implications of AI. More and more managers are using artificial intelligence in their investment processes and their business operations. If you're an allocator focused on the long-term sustainability of your portfolio and you're allocating to managers using artificial intelligence, then it's important for you to try to assess the environmental and social impacts of the AI those managers are using. I'm fortunate today to have as my guest Dr. Liming Zhu, the research director of software and computational systems at CSIRO's Data61, the data and digital specialist arm of the Australian national science agency. He and his colleagues have developed a framework investors can use to assess the environmental and social impact of AI. We'll include a link to Dr. Zhu's bio in the show notes, and we'll also include links to papers and other publications he and his colleagues have produced in this area. Let's jump in. I want to start by asking: what environmental impact does AI's development and deployment create?

[00:01:52] Speaker A: It actually creates quite a number of environmental impacts across multiple dimensions. Greenhouse gas emissions, of course, are one of them. But there are also a lot of resource efficiency questions: how to better utilize resources, because you are using electronics, computer chips, to train the models; you upgrade them, you throw them away. So the environmental impact of those resources is also a major impact. Then there is the ecosystem impact. When you look at those things, it's not just the electronics and the minerals themselves, but the wider impact on the ecosystem. So when we did our study, within environmental impact we focused on all three of those aspects: carbon emissions, resource efficiency, and ecosystem impact.

[00:02:41] Speaker B: Let's talk about carbon footprint for a minute. I know there are a lot of, how would I say, challenges in terms of measurement and assessment, and we can get into those challenges if you'd like, as a prefatory note. But I'd like to start with the carbon footprint. The greenhouse gas emissions, how are they related to AI?

[00:03:00] Speaker A: It's important to realize that when we talk about AI, it's largely divided into two phases. One is when you create the AI, what we call model training. People have probably read that to train a model like ChatGPT, the most powerful models, you have to consume a lot of energy. They train it over a couple of months, and these training clusters have tens of thousands of GPUs, computer chips, running 24/7 for that time. And people have made comparisons; I'm coming from Australia, down under.
And one statistic, for a very early and not very powerful version of ChatGPT, like 3.5: I remember the figure to be around 350,000 households using electricity for a whole year to train one version of a not very powerful model. So that's a lot of energy being used. And then after a while you have a new version. Sometimes a new version is an improvement on the old version, but other times it's a significant retraining, so you repeat that energy cost again. On the other hand, training efficiency, how much energy you need to train, is also improving, and we have seen AI labs come out to say the latest model now requires much less, maybe one tenth of the energy, to train. That's the training part of the AI model. Then you have to use the model. Every time you type a query into ChatGPT, it also consumes energy, and there were studies showing that the energy consumption for that is sometimes higher than a typical Google search. And of course, if you look something up in a digital dictionary, that's probably even less carbon emissions. But the way people ask ChatGPT, or any AI model, questions is also changing. These days you see a lot of models they call reasoning models: instead of giving you an answer within a single second, the model will say, let me think about it step by step, and it could be thinking for 10 minutes and coming back with answers after using additional tools for search. So a query that used to take one second you now ask it to spend 10 minutes on, because you want a better result, you want to exhaust all the searches. And people may ask the model multiple times to get multiple outputs and compare which is better. So you can quickly end up with a lot of emissions on the usage side. But the usage part is, in a way, the user's and deployer's responsibility, while the training part is really the model developer's, the AI developer's, responsibility.

[00:05:34] Speaker B: And the greenhouse gas emissions are really tied to the energy source. And we're going to talk about data centers, because the hyperscalers and the data centers are by and large using fossil-fuel-based energy. There's a lot of talk about using renewables, but there are challenges, as we know, with renewables. So the greenhouse gas emissions are really tied to the fuel source, the fuel needed to generate the electricity for the GPUs and CPUs, right?

[00:06:05] Speaker A: Yes, and I think people can have some control there. Sometimes we have an oversupply of renewable energy at a particular time of day; in Australia, for example, sunlight can be quite strong, and if you schedule some training, or even usage, at those hours, maybe you can leverage the renewable sources much more. And with training runs, I suppose, there's a lot of control: you could let the training run 24/7 regardless of energy sources, or you could schedule it. So I think it's very important for investors to ask not only how much total energy you are using to train those models, but what the energy sources are, and whether you are proactively scheduling your training runs, or even some of the usage, especially batch usage, which you can batch up at a particular time of day as well. It can be very proactive, very consciously managed.
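Dr. Zhu's scheduling point can be made concrete. The sketch below shows the core of carbon-aware batch scheduling: given an hourly grid carbon-intensity forecast, pick the contiguous window with the lowest total intensity for a deferrable training run. The forecast values are invented for illustration; this is a sketch of the idea, not CSIRO's method.

```python
# Minimal sketch of carbon-aware scheduling for a deferrable batch job.
# The hourly grid carbon-intensity forecast (gCO2/kWh) is invented for
# illustration; a real scheduler would pull live data from a grid operator.

def best_window(hourly_intensity: list[float], run_hours: int) -> int:
    """Return the start hour of the lowest-carbon contiguous window."""
    assert 0 < run_hours <= len(hourly_intensity)
    best_start, best_total = 0, float("inf")
    for start in range(len(hourly_intensity) - run_hours + 1):
        total = sum(hourly_intensity[start:start + run_hours])
        if total < best_total:
            best_start, best_total = start, total
    return best_start

# Hypothetical 24-hour forecast: solar-rich midday hours are cleanest.
forecast = [520, 510, 500, 490, 480, 450, 400, 320, 250, 180, 150, 140,
            130, 140, 170, 230, 320, 420, 480, 510, 530, 540, 535, 525]
print(f"Start the 4-hour run at hour {best_window(forecast, 4)}:00")
```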
[00:06:59] Speaker B: If a manager is using AI and we discover they're not using generative AI or an LLM, but more classical machine learning models, random forest-type models or regressors, would those, just not knowing the specific function, tend to use less energy from a user perspective?

[00:07:19] Speaker A: It's hard to say for all cases, but on average I would argue they probably use less energy. Remember, a modern large language model, a generative AI model, has billions of parameters. Every time you ask it to generate 100 words, the model is exercised roughly 100 times, each pass producing the next token, the next word. Of course, it's not as if billions of parameters are all activated and calculated each time; only some part is calculated, but it's still quite a significant draw on energy. And sometimes, because large language models and these big gen AI models have performed so well on certain tasks, people treat them like a hammer and every task like a nail; they use them for everything. People will even ask for the definition of a word. Of course it will give a more interesting, comprehensive definition, but compared to using a traditional tool, or a traditional machine learning model, it will certainly consume more energy.

[00:08:21] Speaker B: How about deep neural nets, deep learning and reinforcement learning? I know those are undergirding the LLMs and generative AI, but looking at them on a standalone basis, do they consume a lot of energy also?

[00:08:34] Speaker A: It depends on the size and the complexity of the model. Some of the gen AI models are deep neural networks, so it's the same technology, but rather than a few thousand or a few million parameters, you are talking about billions of parameters; now we are even entering the trillion-parameter world. So the size of the model matters. You could harness the same powerful technology using a smaller model, and you do see a few smaller models coming out that are very performant in certain cases. So selecting the model, again with an environment-conscious lens, picking a smaller, more suitable model, or even training your own classical machine learning model, can be the answer. The same goes for reinforcement learning, actually. An interesting recent trend is that the pre-training stage, the AI reading the entire Internet, what we call pre-training, seems to have exhausted its capability to some degree. So the next scaling gain comes from reinforcement learning, which is basically: let's improve the pre-trained model, the model that has read the entire Internet, textbooks, and copyrighted materials, and improve it further at inference time, when it is answering questions. Reinforcement learning has been a very important technique there recently.
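To see why token-by-token generation matters for energy, here is a back-of-envelope estimate in the spirit of the discussion above. Every constant is an invented placeholder, not a measured value; the point is only that reasoning-style usage multiplies the per-query footprint.

```python
# Back-of-envelope inference energy: an autoregressive model runs one
# forward pass per generated token, so energy scales with tokens produced.
# All constants below are hypothetical placeholders for illustration.

JOULES_PER_TOKEN = 2.0       # invented energy cost of one forward pass
TOKENS_PER_ANSWER = 300      # a short chat-style reply
REASONING_MULTIPLIER = 20    # "thinking" models emit many more tokens

plain_wh = JOULES_PER_TOKEN * TOKENS_PER_ANSWER / 3600  # joules -> Wh
reasoning_wh = plain_wh * REASONING_MULTIPLIER
print(f"Plain answer:     ~{plain_wh:.3f} Wh")
print(f"Reasoning answer: ~{reasoning_wh:.3f} Wh")
```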
[00:10:01] Speaker B: Here in the US we don't have a national energy grid; we have, how would I say, a very disjointed grid setup. What we've been talking about in the US is the stress on the energy grid, that these data centers used for training and inference are distressing the grids. Do you have that same thing in Australia, or do you have a national grid?

[00:10:27] Speaker A: Australia is probably closer to a national grid scenario than the US, but on the other hand we're facing the same conundrum. On one hand, more data centers are being built in Australia, for training and especially for inference-time usage of AI every day. And Australia has been using renewable energy quite significantly; its use of renewables has been increasing. There are situations where this competition, or maybe weather events combined with industrial and domestic use, has created some challenges for managing the grid. So there are definitely discussions about how industrial use, including AI data centers, can be not only environment-conscious but community-conscious about when to use them, how to use them, how to manage them. Yeah, it's a challenge.

[00:11:22] Speaker B: And you mentioned resource utilization in your opening remarks. We've talked about greenhouse gas emissions, the environmental footprint, and now grid fragility. But tell me about the resource utilization related to the use of AI.

[00:11:38] Speaker A: Resource utilization is basically, for the same amount of input or the same amount of output, how efficiently you are using resources. These resources start at the hardware layer, your chips, and these days people upgrade chips and throw chips away, creating electronic waste. Further upstream, you have to mine more of the minerals for these electronics, which creates a lot of other environmental and human labor challenges. So looking at input and output, and how you best utilize existing resources, is very important. I think the key thing is that you have to compare with a baseline. Organizations today perform a lot of tasks using traditional means, traditional tools, traditional software; they have computers turned on, consuming energy. Is that efficient for producing the same amount of output? When you then introduce your generative AI, your AI solutions, comparing with that baseline is important, and often the baseline is not there. We can see the absolute emissions of using AI, but compared to what? That's where some of the debate in the community happens. Imagine you have to do a task by keeping your computer on for five hours, and that's a lot of emissions. If you can do the same task within one hour by using AI, arguably it saves a lot of emissions: five hours of computer use and screen time compared to one hour, or even a few seconds if the task is that amenable to AI generation. The emissions are obviously lower. However, you should then ask what you are going to use the rest of the time for. Of course, you're going to use it to do more work, and that work could be very valuable, very justifiable. That's one use case. But other times maybe people are just generating images for fun, or not using the best way to produce something. For example, you want to draw a diagram: you could ask AI to generate it 100 times, wasting time on each attempt telling it what to fix, which generates an enormous amount of emissions from 100 image-generation tasks. On the other hand, you could draw that simple diagram in diagramming software in five minutes. Or you could ask AI to write a small program, a script, to draw the diagram, then modify the script with AI's help; that can take five minutes and would save a lot. So it's really important to compare with alternatives to understand the overall emissions.
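The baseline comparison Dr. Zhu describes reduces to simple arithmetic. A hedged worked example of the five-hours-versus-one-hour task above, with invented power figures:

```python
# Comparing a task's energy against a baseline, per the example above.
# Power draws and inference energy are invented, illustrative figures.

DESKTOP_WATTS = 150        # hypothetical desktop-plus-monitor draw
INFERENCE_WH = 50          # hypothetical datacenter share for the AI queries

baseline_wh = DESKTOP_WATTS * 5                    # 5 hours, traditional workflow
ai_assisted_wh = DESKTOP_WATTS * 1 + INFERENCE_WH  # 1 hour local + inference

print(f"Traditional workflow: {baseline_wh} Wh")
print(f"AI-assisted workflow: {ai_assisted_wh} Wh")
print(f"Apparent saving: {baseline_wh - ai_assisted_wh} Wh")
# The saving only holds if the freed hours aren't spent on, say,
# 100 throwaway image generations, as the discussion notes.
```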
That's why in our report we look at different use cases, different sectors. In some sectors the return on investment is very high, so it's very justifiable to use AI more and more. But in other use cases we find it's not that justifiable; the traditional approach, at least from an emissions and environmental impact point of view, is still very reasonable.

[00:14:47] Speaker B: But it's not just the emissions. You mentioned, for example, other resources. I know a big concern here in the US is water: you need water for cooling and for humidification of the big hyperscaler data centers. We've had a few instances here in the US where hyperscalers have put data centers near good energy sources, but the water scarcity problem for the local community raised a lot of concerns. Can you talk a little bit about water usage, what it's for and why it's important?

[00:15:20] Speaker A: All data centers, when their computer chips run, generate enormous amounts of heat, and you have to use water to cool them. Water cooling is sometimes considered more environmentally friendly than fan cooling and other methods, so there's actually increased use of purely water-cooled data centers and computer systems. But that requires a lot of water coming in, and there is community tension over that. Often downstream benefits are cited to justify it: you have trained a model used by tens of millions of people, and they might benefit from it, whether environmental benefits or other benefits. But the water you are using comes from a local community, and you have tensions there. How do you compensate? It comes back to the value redistribution question. Many of the things we are creating in AI, whether from data, from water, or from other inputs, generate an enormous amount of value and benefit, but that value is probably not sufficiently captured by the contributors, the contributors of data, the contributors of water resources. How do you balance that, I think, is the question. The answer is not necessarily just using less water; it's that the benefits generated are put to use for the local community, or there are other means to help them.

[00:16:42] Speaker B: You're talking about an alternative: instead of using the AI, using it in a different way, or, if you're using it, making sure it has a real benefit that outweighs the impact.

[00:16:52] Speaker A: Exactly. We often see that those who capture the value or the benefits, certain companies or the wider community, are not the same ones who sacrificed or contributed. So it's more about value redistribution. This is most hotly debated in the copyright space, of course, but I think it applies to other environmental inputs, such as water and the local environmental impact of having a data center there. It's not a question of stopping it; it's a question of, if this data center has generated so much value for so many people, or even for private companies, can we redistribute some of that value back to the local community? And from a technical point of view, the question is how to measure that, because AI has a very complex supply chain. Who benefits from it, and in what way? From a technical point of view, it's not easy to measure.
So some of the work at this intersection of ESG and AI is to measure those things across the supply chain, so that we can understand how to minimize the risk and the environmental impact. But at other times it's about redistributing the value captured back to the people who contributed.

[00:18:06] Speaker B: Let me stay with water for a minute. There's been a lot of discussion in these local communities in the US about the use of water and how that water evaporates. It's not as if it's all recycled, so there's some loss of this water. And it's actually caused a few lawsuits in the US already, where people are weighing the local benefits; in one case it was an agricultural community that needs the water for crop production. It's been a pretty heated topic. That's perhaps a bit political and doesn't touch our topic today, but it has certainly been top of mind here in the US. Are you seeing the same thing abroad?

[00:18:46] Speaker A: Definitely. Water usage, especially farming versus other purposes, has been debated in Australia; there are certainly policy discussions on that. I'm not an expert in that at all. But back to the AI implications: the question is whether there are better technologies to manage it, from the physical design of data centers, potentially capturing some of the water loss, to better technology worth investing in. That investment may not always come directly from private companies, because of the mismatch between benefits and gains and the regulatory environment, but it could come from other places. And training efficiency, using less water and less compute, is certainly possible; you have seen dramatic advances in training equally capable models using much less compute. That saves water and saves energy. There are many ways to work on the problem. From a CSIRO point of view, we are a technology and science agency, so we of course try to use technology to solve the problem, and leave the wider debate to the community and the government to ponder.

[00:19:59] Speaker B: You mentioned mining in your general remarks about resource utilization. A certain amount of mining must happen to extract the rare earth minerals used in a lot of advanced electronics, and that's typically a human activity. What can you tell us about AI's social consequences for human labor?

[00:20:19] Speaker A: The consequences for human labor are certainly considerable. You rightly pointed out mineral extraction at the very upstream end of the supply chain: you have to have these minerals extracted. And as I noted earlier, people want to buy the latest chips, just as you want to buy the latest phone, and that drives data center upgrades, with waste in some of the discarded chips. Certainly we can save more, because much of the chip and computing infrastructure can be used for other things; a chip may no longer be competitive for training but still be useful for usage. So asking how these companies are utilizing their infrastructure upgrades, the total cost of ownership over the life cycle, is very important for investors. But AI is largely a digital technology, and in that space we have heard the stories: you need humans to annotate, to label the data.
Somebody out there, typically in a Global South country, will look at a screen eight hours or more a day, looking at a picture: is that a human, is that a traffic light, is this driving correct, does this answer deserve a thumbs up or a thumbs down? People are doing that work, it's quite challenging, and we all know people may be paid wages that are not respectable for it. So how do we solve that? Fundamentally, from an AI researcher's or a national science agency's point of view: is it possible to make the human input more meaningful? If it's just labeling to train the AI, after a while the AI has learned it all, you don't need the labeling, and people lose their jobs; and the labeling itself is such tedious work. Much of the latest AI technology can actually do a lot of automated annotation and labeling, and can generate synthetic datasets, so we don't need human input for that kind of low-level, tedious work and can leave the interesting work for humans, with humans providing more meaningful oversight, meaningful feedback. How technology can keep lifting humans toward more interesting work that improves AI, both from an oversight point of view and from a performance point of view, has been an active research area. One other thing, quickly, that people may not realize: AI generates a lot of content, humans generate a lot of content, and sometimes content needs to be moderated. The individuals looking at harmful content every day to verify whether it's harmful experience significant mental stress, whether in law enforcement cases, where you look at truly horrible images, or in day-to-day online safety. There is research Data61 has done, we called it the data airlock, which lets the AI look at that content and only asks a human to look at special cases, corner cases, for the final decision, so people don't have to filter through thousands upon thousands of images, which creates mental stress. So AI can help in this space, with automated solutions for some of the moderation. AI can be both a risk and a tool to reduce those risks.

[00:23:46] Speaker B: And you mentioned, let's call it the early days of LLMs, when there was a lot of dependency on human decision-making. Most of this was done in the Global South, and clearly, from the research I've seen, the payment structure typically falls below minimum wage, there's a certain insecurity in the job, there's a lack of benefits and paid leave, and there's no unionization. And it's the Global South, so here in the US we tend not to look at it. That's certainly a cost. And there's the cost of mineral extraction, which we mentioned. At the end of the day, this is hard work, often done in the Global South, often under conditions that, at least by our standards in the US, are not at the union level. Some will point to the child labor involved. I'm not arguing yes or no to it; it's just a kind of hidden cost. We think AI basically works autonomously and off it goes, but there's human input here. Any thoughts on mineral extraction?

[00:24:50] Speaker A: That's a significant cost. CSIRO is the national science agency in Australia, and we have a minerals research unit looking at how to make mines safer for humans.
In the distant future it will be a zero-human mine: you will not need humans working in those dangerous conditions. The endpoint is to make sure humans work in very safe and comfortable conditions, with respectable salaries and everything, and that's the endpoint we are working toward right now. Again, AI can help in that space, but a lot of the challenge now is the transition from the current state to the future state. One, it takes time; and the transition path will itself create impacts on certain communities. Some people would argue that even if some of these jobs may not look appealing compared to Western countries, it's still a job they would otherwise not have. Of course those jobs need to respect basic human rights and international and local laws. But if we transition to a less and less human-involved, automated world, and a lot of the discussion on AI is about the impact on the workforce, people losing jobs, then how do we transition to that ideal state? That's the question. So some of the investor-relevant questions in our report and elsewhere look at the transition path. Are you just building the ideal future, a future with less human involvement? That future might be better than the present, but along the way people may lose out, at least temporarily. For people, 10 years is a long time; for technology evolution, looking back 50 or 100 years, we may say, yes, we ended up in a much better place. We no longer have a huge agricultural workforce: perhaps 1% of labor is in agriculture where it used to be 90%. But during that 90-to-1 transition, even if the endpoint is terrific, how do we use AI and other technologies to make the transition easier? I think that's an important point in many of these technology discussions, including our research.

[00:27:10] Speaker B: Yeah, you're getting to the bigger topic I wanted to reach. Eric Schmidt, the former CEO of Google, made a public statement saying: "My own opinion is that we're not going to hit the climate goals anyway. I'd rather bet on AI solving the problem than constraining AI and having the problem." Do you think AI can solve the climate problem?

[00:27:33] Speaker A: Well, it's anybody's bet. AI for science is probably the hottest topic at CSIRO. At the national science agency we have 5,000 scientists working every day across domains, from climate change to agriculture to minerals to AI, and we are developing internal tools and using external tools for AI for science. AI can accelerate scientific discovery significantly, whether more autonomously or with a human in the loop; the exact role of the human scientist in solving these problems is almost beside the point. AI will accelerate, and has accelerated: from inside the national science agency, I can categorically say AI has dramatically accelerated the pace of discovery. Climate change is a very active research area within CSIRO, and it has certainly accelerated. Whether we'll solve it, and "solve" is a very big word, fully solve, there is certainly a lot of risk there. And even if you solve that,
I think the problem is you also drive up demand. If we reduce the environmental impact of training and using AI models 100-fold, people's usage will probably increase 1,000-fold. There is always more demand as you become more efficient, so we are in this paradox. But I do believe we should invest significantly in AI for scientific discovery in its own right. You have seen Google's recent research on AlphaEvolve, where AI can discover new algorithms, and you can see AI co-scientists released by many organizations; at CSIRO, I would say, we have AI co-scientists working with our scientists. So I'm optimistic that AI will dramatically accelerate work on the climate question. But that shouldn't distract us from doing the right thing right now rather than waiting, because the impact is real every day: not only the long-term trend but, as we discussed, everything from human labor to local environmental impact, which is material today and should be on everybody's mind.

[00:29:51] Speaker B: Let's conclude by talking about the framework you've developed. I think you've touched on different pieces of it, but as you and I both know, the consensus is that it's really hard to develop a framework, given the issues you've raised, the types of AI, the frequency of change, et cetera. Tell us a little bit about the framework you put together.

[00:30:10] Speaker A: First, I think ESG issues, environmental, social, and governance, repeat in AI risks as well. In AI risk management and governance there are also environmental and social issues, and at the top level they care about the same things; at the very top level they even use the very same metrics. But how to measure them is really the gist of this report. AI is a new, evolving technology; we don't know how to measure many of these things, or sometimes it's a matter of awareness: people know how to measure but don't measure along the supply chain or look at all these difficult areas. So this piece of work tells you how to measure, and also raises awareness that you should measure these things by combining the two, because a lot of people, probably especially in your investor community, are very familiar with ESG. Then, because there are so many metrics, so many things you can measure, the framework highlights 10 indicators. If you want to focus on just 10 indicators, what should they be? Some concern senior management, like board accountability and capability; some concern sensitive use cases; some concern employee awareness and system integration. So we have 10 indicators to help you start, and then for each area or indicator there are metrics. We give you some pointers: if you could ask one question, or use one metric, what would it be? That gives you some guidance. Everything is aligned with the broader ESG framework; that's why the report is about the intersection of the two. They are not entirely overlapping, especially in measurement techniques and methods, but at the very top there is a lot of overlap. Rather than creating silos, where you have an ESG function reporting on and tracking ESG, a separate one for AI, maybe another for data, another for cybersecurity, there are benefits in streamlining.
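To make the 10-indicator idea tangible, here is a hedged sketch of how an allocator might record such a checklist during due diligence. The indicator names are paraphrased from the conversation, and the 0-3 scoring scale and example values are invented, not part of the CSIRO framework.

```python
# Illustrative encoding of an indicator checklist for manager due diligence.
# Names are paraphrased from the episode; the scale and scores are invented.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    score: int  # 0 = no evidence .. 3 = measured and disclosed (invented scale)

assessment = [
    Indicator("Board-level AI accountability and capability", 2),
    Indicator("Sensitive use-case screening", 1),
    Indicator("Employee AI awareness", 3),
    Indicator("Energy source and training-schedule disclosure", 0),
]

total = sum(i.score for i in assessment)
print(f"Composite: {total}/{3 * len(assessment)}")
for i in assessment:
    print(f"  {i.name}: {i.score}/3")
```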
The reason we wrote this for investors is that investors have the levers to pull to get many companies to do the right thing. And there is a lot of research still to be done as well.

[00:32:24] Speaker B: Yeah, I assume there's going to be a lot of ongoing research, given changes in the technology itself and, as you mentioned earlier, the improvements in the efficiency of LLMs, small language models, et cetera. Well, I want to conclude by saying thank you. Your comments today should help asset owners, allocators, and managers become more aware of the environmental and social impacts of AI. And again, we'll put a link to the framework in the show notes. You've been great; I appreciate it, and I look forward to speaking with you in the future.

[00:33:00] Speaker A: Oh, thanks very much. It's been terrific. Thank you very much.

[00:33:03] Speaker B: Thanks for listening. Be sure to visit P&I's website for outstanding content and to hear previous episodes of the show. You can also find us on P&I's YouTube channel; links are in the show notes. If you have any questions or comments on the episode, or suggestions for future topics and guests, we'd love to hear from you; my contact information is also in the show notes. And if you haven't already done so, we'd really appreciate an honest review on iTunes. These reviews help us make sure we're delivering the content you need to be successful. To hear more insightful interviews with allocators, be sure to subscribe to the show on the podcast app of your choice. Finally, a special thanks to the Northrup family for providing us with music from the Super Trio. We'll see you next time. Namaste.

[00:33:56] Speaker A: The information presented in this podcast is for educational and informational purposes only. The host, guests, and their affiliated organizations are not providing investment, legal, tax, or financial advice. All opinions expressed by the host and guests are solely their own and should not be construed as investment recommendations or advice. Investment strategies discussed may not be suitable for all investors, as individual circumstances vary.

Other Episodes

September 09, 2025 | 00:36:11
AI Claims vs. Reality: An Asset Allocator's Due Diligence Framework
How do you distinguish substance from hype when managers claim AI implementation advantages? In this week's The Institutional Edge, Angelo welcomes Chris Walvoord, recently the...

August 26, 2025 | 00:34:20
The AI Implementation Gap: What's Stopping Asset Allocators?
"I feel like the adoption of an AI tool could potentially eliminate the need for other existing tools - it might not be a...

August 13, 2025 | 00:33:27
A Framework for Fiduciary Innovation
What's the biggest mistake asset managers make when implementing AI? This week on Institutional Edge, host Angelo Calvello sits down with Peter Strikwerda, Global Head...