Artificial intelligence is reshaping how investment professionals generate ideas and analyze investment opportunities. AI can now pass all three CFA exam levels and complete long, complex investment analysis tasks autonomously. Yet, striking as these advances are, a close reading of current research, reinforced by Yann LeCun’s recent testimony to the UK Parliament, points to a more nuanced and more structural shift for professional investors.
Across academic papers, company studies, and regulatory reports, three structural themes recur. Together, they suggest that AI will not simply enhance investor skill. Instead, it will reprice expertise, elevate the importance of process design, and shift competitive advantages toward those who understand AI’s technical, institutional, and cognitive constraints.
This post is the fourth installment in a quarterly series on AI developments relevant to investment management professionals. Drawing on insights from contributors to the bi-monthly newsletter, Augmented Intelligence in Investment Management, it builds on earlier articles to examine AI’s evolving role in the industry.
Capability Is Outpacing Reliability
The first observation is the widening gap between capability and reliability. Recent studies show that frontier reasoning models can clear mock exams for all three CFA levels with exceptionally high scores, undermining the idea that memorization-heavy knowledge confers durable advantage (Columbia University et al., 2025). Similarly, large language models increasingly perform well across benchmarks for reasoning, math, and structured problem solving, as reflected in new cognitive scoring frameworks for AGI (Center for AI Safety et al., 2025).
However, a body of research warns that benchmark success masks fragility in real-world scenarios. OpenAI and Georgia Tech (2025) show that hallucinations reflect a structural trade-off: efforts to reduce false or fabricated responses inherently constrain a model’s ability to answer rare, ambiguous, or under-specified questions. Related work on causal extraction from large language models further indicates that strong performance in symbolic or linguistic reasoning does not translate into robust causal understanding of real-world systems (Adobe Research & UMass Amherst, 2025).
For the investment industry, this distinction is critical. Investment analysis, portfolio construction, and risk management do not operate with stable ground truths. Outcomes are regime-dependent, probabilistic, and highly sensitive to tail risks. In such environments, outputs that appear coherent and authoritative, yet are incorrect, can carry disproportionate consequences.
The implication for investment professionals is that AI risk increasingly resembles model risk. Just as backtests routinely overstate real-world performance, AI benchmarks tend to overstate decision reliability. Firms that deploy AI without adequate validation, grounding, and control frameworks risk embedding latent fragilities directly into their investment processes.
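To see why selecting on a backtest or a benchmark overstates reliability, consider the minimal sketch below (all numbers are synthetic; the strategy count, return distribution, and seed are arbitrary illustrative choices): generate many strategies with zero true edge, pick the in-sample winner, and evaluate it on unseen data.

```python
import numpy as np

# Toy illustration of selection bias: 200 "strategies" with zero true edge,
# each a year of synthetic daily returns. Any in-sample winner is pure luck.
rng = np.random.default_rng(42)
n_strategies, n_days = 200, 252
in_sample = rng.normal(0.0, 0.01, size=(n_strategies, n_days))
out_of_sample = rng.normal(0.0, 0.01, size=(n_strategies, n_days))

def sharpe(daily_returns: np.ndarray) -> float:
    """Annualized Sharpe ratio of a daily return series."""
    return daily_returns.mean() / daily_returns.std() * np.sqrt(252)

# Select the strategy with the best in-sample Sharpe ratio...
best = max(range(n_strategies), key=lambda i: sharpe(in_sample[i]))

# ...then check how that same strategy performs out of sample.
print(f"best in-sample Sharpe:        {sharpe(in_sample[best]):+.2f}")    # typically above 2
print(f"same strategy, out-of-sample: {sharpe(out_of_sample[best]):+.2f}")  # typically near 0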
From Individual Skill to Institutional Decision Quality
The second theme is that AI is commoditizing investment knowledge while increasing the value of the investment decision process. Evidence from AI use in production environments makes this clear. The first large-scale study of AI agents in production finds that successful deployments are simple, tightly constrained, and continuously supervised. In other words, AI agents today are neither autonomous nor causally “intelligent” (UC Berkeley, Stanford, IBM Research, 2025). In regulated workflows, smaller models are often preferred because they are more auditable, predictable, and stable.
Behavioral research reinforces this conclusion. Kellogg School of Management (2025) shows that professionals under-use AI when its use is visible to supervisors, even when it improves accuracy. Gerlich (2025) finds that frequent AI use can reduce critical thinking through cognitive offloading. Left unmanaged, AI therefore introduces a dual risk of under-utilization and over-reliance.
For investment organizations, the lesson is therefore structural: the benefits of AI accrue less to individuals than to investment processes. Leading firms are already embedding AI directly into standardized research templates, monitoring dashboards, and risk workflows. Governance, validation, and documentation increasingly matter more than raw analytical firepower, especially as supervisors adopt AI-enabled oversight themselves (State of SupTech Report, 2025).
In this environment, the traditional notion of the “star analyst” also weakens. Repeatability, auditability, and institutional learning may become the true sources of sustainable investment success. Such an environment requires a distinct shift in how investment processes are designed: in the aftermath of the Global Financial Crisis (GFC), investment processes were largely standardized with a strong focus on compliance.
The emerging environment, however, requires investment processes to be optimized for decision quality. This shift is broad in scope and difficult to achieve, because it depends on managing individual behavioral change as a foundational layer of organizational adaptive capacity. That is something the investment industry has long sought to avoid through impersonal standardization and automation, and is now attempting again through AI integration, mischaracterizing a behavioral challenge as a technological one.
Why AI’s Constraints Determine Who Captures Value
The third theme concerns AI’s constraints, rather than the technological race itself. On the physical side, infrastructure limits are becoming binding. Research highlights that only a small fraction of announced US data center capacity is actually under construction, with grid access, power generation, and transmission timelines measured in years, not quarters (JPMorgan, 2025).
Economic models reinforce why this matters. Restrepo (2025) shows that in an artificial general intelligence (AGI)-driven economy, output becomes linear in compute, not labor. Economic returns therefore accrue to the owners of chips, data centers, and energy. Once labor drops out of the growth equation, compute infrastructure, meaning the chips, data centers, energy, and the platforms that manage their allocation, becomes the controlling factor in capturing value.
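A stylized sketch conveys the intuition (a deliberate simplification, not Restrepo’s actual model; the Cobb-Douglas starting point and the symbols are illustrative). Pre-AGI, output Y depends on capital K and scarce labor L, with labor earning a fixed share of income; if AGI allows compute C to substitute for both, output becomes approximately linear in compute alone:

```latex
\[
Y = A\,K^{\alpha}L^{1-\alpha}
\qquad\longrightarrow\qquad
Y \approx A'\,C
\]
```

In the second regime, growth comes from accumulating C rather than employing people, so income flows to whoever owns the compute stack, which is exactly why the siting, financing, and control of that infrastructure matter.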
Institutional constraints also demand closer attention. Regulators are rapidly expanding their AI capabilities, raising expectations for explainability, traceability, and control in the investment industry’s use of AI (State of SupTech Report, 2025).
Finally, cognitive constraints loom large. As AI-generated research proliferates, consensus forms faster. Chu and Evans (2021) warn that algorithmic systems tend to reinforce dominant paradigms, increasing the risk of intellectual stagnation. When everyone optimizes on similar data and models, differentiation disappears.
For professional investors, widespread AI adoption elevates the value of independent judgment and process diversity by making both increasingly scarce.
Implications for the Investment Industry
AI’s growing role in automating investment workflows clarifies what it cannot remove: uncertainty, judgment, and accountability. Firms that design their organizations around that reality are more likely to remain successful in the decade ahead.
Taken together, the evidence suggests that AI will act as a differentiator rather than a universal uplift, widening the gap between firms that design for reliability, governance, and constraint, and those that do not.
At a deeper level, the research points to a philosophical shift. AI’s greatest value may lie less in prediction than in reflection: challenging assumptions, surfacing disagreement, and forcing better questions rather than simply delivering faster answers.
References
Almog, D., AI Recommendations and Non-instrumental Image Concerns, preliminary working paper, Kellogg School of Management, Northwestern University, April 2025.
di Castri, S., et al., State of SupTech Report 2025, December 2025.
Chu, J. and J. Evans, Slowed canonical progress in large fields of science, PNAS, October 2021.
Gerlich, M., AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking, Center for Strategic Corporate Foresight and Sustainability, 2025.
Hendrycks, D., et al., A Definition of AGI, arXiv:2510.18212, October 2025.
Kalai, A., et al., Why Language Models Hallucinate, OpenAI, arXiv:2509.04664, September 2025.
Mahadevan, S., Large Causal Models from Large Language Models, Adobe Research, arXiv:2512.07796, December 2025.
Patel, J., Reasoning Models Ace the CFA Exams, Columbia University, December 2025.
Restrepo, P., We Won’t Be Missed: Work and Growth in the Era of AGI, NBER Chapters, July 2025.
UC Berkeley, Intesa Sanpaolo, Stanford, and IBM Research, Measuring Agents in Production, arXiv:2512.04123, December 2025.