AI and Active Management

The Financial Times has got it right for all the wrong reasons. Their recent piece warning that AI might render active management obsolete misses the real story entirely. Yes, AI will accelerate the collapse of much of what we call "active management" today. But that's not because machines are replacing human judgment. It's because we're finally being forced to admit that most "active" management was never truly active at all.

This isn't a story about technological convergence or the birth of some hybrid future where active and passive blend into one. It's a definitional cleanup that's been decades in the making, and AI is simply the catalyst that's making the pretence impossible to maintain.


The Industry's Open Secret

For years, the asset management industry has operated under a polite fiction. Large swathes of what we labelled "active management" were really just factor tilts and benchmark-aware processes wrapped in compelling quarterly narratives. Fund managers would tell elaborate stories about their stock selection process, their proprietary research, their unique insights, while running what amounted to systematic exposures with higher fees attached.

True differentiation was vanishingly rare. Instead, the industry perfected the art of closet indexing and career-risk-aware positioning. Portfolio managers learned that straying too far from the benchmark was a career-limiting move, so they hugged it close while charging active fees for the privilege. The stories improved, the PowerPoints became more sophisticated, but the actual value-add remained elusive.

This wasn't sustainable, and investor disillusionment was inevitable. Clients were essentially paying premium prices for systematic exposures dressed up in market commentary and conference calls. They sensed something was wrong, even if they couldn't quite articulate what. The persistent underperformance of active managers wasn't just bad luck or market efficiency. It was the natural outcome of an industry that had forgotten what active management actually meant.

When the Schroders CEO recently suggested that the active-passive divide will fade, he touched on something important, though perhaps not in the way he intended. He's correct that the divide is disappearing, but only because we're finally admitting it was largely artificial to begin with. We're not witnessing the birth of something new; we're watching the industry's taxonomical correction as it's forced to acknowledge what's been true all along.


The Governance Charade

To understand how we got here, you need to understand the governance reality that shapes the industry. Passive strategies have become shields for boards and investment committees who want to avoid the appearance of making discretionary investment decisions. Choose the S&P 500, and you can't be blamed if it goes down. Choose an active manager who underperforms, and suddenly you have a problem. Passive gives committees a shield because choosing passive looks like not choosing.

Consultants learned this game too. Recommending passive reduces their perceived liability. The index can't sue you, and neither can the client when the index does what indices do. Everyone involved is managing risk, but it's not portfolio risk they're worried about. It's career risk, litigation risk, headline risk. The dominant risk being managed in most investment decisions is the risk of being fired.

This reveals an uncomfortable truth: choosing passive is itself an active decision. Every benchmark selection, every allocation choice, every rebalancing schedule represents discretionary judgment. But benchmarks act as governance shields that diffuse accountability. "We just track the market" sounds much safer than "we made a bet that these particular securities would outperform."

The irony is exquisite. In trying to avoid active decisions, the industry created an elaborate framework of pseudo-passive choices that were anything but passive. Meanwhile, the genuinely active managers, the ones actually trying to generate differentiated returns through forward-looking judgment, got lumped in with the closet indexers and the factor tilters.


What Passive Really Is

Here's what most people miss about the passive revolution: passive is only "passive" because it holds the definitional high ground of the benchmark. Strip away the semantics, and what you have is a simple quantitative model. Buy these stocks in these weights, rebalance on this schedule, track this index. It's systematic, rules-based, and entirely mechanical.

Smart beta? Factor investing? Risk parity? These aren't "semi-active" hybrids or a middle ground between active and passive. They're just more sophisticated systematic models. The labels we use are legacy semantics from an era when we needed to maintain the fiction that there was a clear distinction between human judgment and rules-based implementation.

This matters because once you understand that passive is just a simple, systematic strategy that happens to track a popular benchmark, you realise that most of what calls itself active is just running slightly more complex systematic strategies with less transparent rules. The difference isn't fundamental; it's cosmetic.

There is no conceptual difference between smart beta, quant, and AI strategies: they are all systematic rule sets. As implementation standardises on AI, these labels become legacy semantics. They will all simply be 'AI' in practice.
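To see how little is left once the semantics go, here is a minimal sketch, with hypothetical tickers and weights, of what an index tracker reduces to:

```python
# A minimal sketch of "passive" stripped to its mechanics: hold these weights,
# rebalance on this schedule. Tickers and weights are illustrative only.
TARGET_WEIGHTS = {"AAA": 0.40, "BBB": 0.35, "CCC": 0.25}

def rebalance(holdings: dict[str, float], prices: dict[str, float]) -> dict[str, float]:
    """Return the trades (in units) needed to restore the target weights."""
    portfolio_value = sum(units * prices[t] for t, units in holdings.items())
    return {
        t: portfolio_value * w / prices[t] - holdings.get(t, 0.0)
        for t, w in TARGET_WEIGHTS.items()
    }

# Run on the first trading day of each quarter; no step requires judgment.
```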


Enter the Machines

This is where AI changes everything, and not in the way the FT article suggests. AI doesn't threaten active management by replacing human judgment with machine judgment. It threatens the vast middle ground of fake active management by exposing it for what it is: systematic processes that machines can execute better, faster, and cheaper.


Anything that follows a repeatable rulebook will be absorbed by AI. Every screening process, every factor tilt, every momentum strategy, every mean reversion trade that can be codified will be codified. And once it's codified, humans become not just unnecessary but inferior. The machine doesn't get emotional during drawdowns. It doesn't second-guess the model during volatility. It doesn't need coffee breaks or Bloomberg terminals or corporate credit cards.
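As an illustration of how small a "proprietary process" can be once codified, here is a hypothetical momentum screen. A real desk would add liquidity filters, sector caps, and turnover limits, but the rulebook shape is the point:

```python
import pandas as pd

def momentum_screen(prices: pd.DataFrame, lookback: int = 252, top_n: int = 50) -> list[str]:
    """Rank tickers by trailing return and keep the top names.

    `prices` holds daily closes, one column per ticker. Everything here is a
    repeatable rule -- exactly the kind of process AI absorbs first.
    """
    trailing_return = prices.iloc[-1] / prices.iloc[-lookback] - 1.0
    return trailing_return.nlargest(top_n).index.tolist()
```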


But here's where the story gets interesting. AI doesn't think like we do. Humans process information sequentially, following logical chains from premise to conclusion. AI operates non-linearly, exploring thousands of paths simultaneously, existing in what I call a liquid Möbius state where multiple possibilities coexist until the optimal solution crystallises.


Think of AI cognition as a quantum superposition of strategies. It doesn't move from A to B to C like human analysis. It exists simultaneously at A, B, C, and every point in between, collapsing into a specific output only when forced to generate a decision. For best results, you architect processes that let AI work on many interconnected problems simultaneously, searching for the lowest entropy solution across the entire possibility space.
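The metaphor can be made loosely concrete. In this toy sketch, with a stand-in scoring function rather than any real entropy measure, sequential analysis would walk one chain; here the whole candidate space is scored at once and only then "collapses" to a single answer:

```python
from concurrent.futures import ThreadPoolExecutor

def score(candidate: dict) -> float:
    # Stand-in objective: any cost the process is trying to minimise.
    return abs(candidate["risk"] - 0.10) + candidate["cost"]

def collapse(candidates: list[dict]) -> dict:
    """Evaluate every candidate concurrently, then collapse to one output."""
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(score, candidates))
    return candidates[scores.index(min(scores))]

best = collapse([{"risk": 0.12, "cost": 0.01}, {"risk": 0.09, "cost": 0.03}])
```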


This is why forcing AI into traditional investment processes is like asking a fish to climb a ladder. You're not just limiting its potential; you're negating its fundamental advantage. Linear workflows, sequential decision trees, staged approval processes: these are human constructs that make AI worse, not better. Managers keep taking something liquid and quantum, then twisting and forcing it into a linear 2D process that was never designed to contain it. No wonder the agents break and the liquid leaks all over the floor.


The Architecture Problem

When AI implementations fail inside asset management firms, and they fail constantly, the problem is almost never the model. It's the process architecture. Firms take their existing linear workflows, the same ones that produced closet indexing and factor tilts dressed as active management, and try to inject AI into them like it's a performance-enhancing drug.


But AI isn't a drug; it's a different species of intelligence. It needs different structures, different processes, and different frameworks to thrive. The design goal shouldn't be to make AI better at executing existing processes. It should be to create frameworks that enable simultaneous, interconnected exploration with human-defined boundaries and escalation protocols.


This is where the real challenge emerges, and it's not technological. It's organisational and cultural. Incumbents consistently underestimate the redesign required to move from linear workflows to AI-native frameworks. They fund tools, not transformation. They buy licenses for large language models and expect magic to happen. Without new mandates, incentives, and operating models, AI benefits often stall in pilots and proof-of-concepts that never reach production.


Horizontal AI platforms are powerful generalists, but vertical finance is unforgiving. Precision, auditability and domain heuristics matter. That is why deployments that work rely on forward-deployed engineering and service layers that translate generic models into high-precision, domain-constrained systems. The product is never 'just the model'; it is model plus process plus controls.


The cultural friction is more intense than most leaders anticipate. The industry has spent decades building a credentialed priesthood of CFAs, MBAs, and PhDs who've been trained to think in exactly the wrong way for an AI-native world. These professionals resist role redefinition, not out of Luddite fear but because their entire identity is wrapped up in analytical frameworks that AI makes obsolete.


Firms keep buying tools and calling it transformation. Without changes to mandates, incentives and operating model, AI gets stuck in pilots.


The Commoditisation Cascade

The commoditisation is already happening, just unevenly distributed. Earnings call summarisation and sentiment extraction, once the province of armies of junior analysts, are now commodity services offered by multiple vendors with no real differentiation. Large parts of middle-layer research and reporting are following the same path under LLM pressure.


This creates a brutal new reality for asset managers. When everyone has access to the same AI-powered research, the same sentiment analysis, the same pattern recognition, where does differentiation come from? For incumbents competing on near-identical offerings, efficiency gains of even 2 to 5 basis points can decide mandates at scale; on a £10 billion mandate, two basis points is £2 million a year. Operational excellence becomes the new battleground when informational edges evaporate.


But let's be clear about what GenAI won't do. It will not crack the hardest problems in public markets. Stock ranking, timing signals, tail-risk controls and portfolio construction are tabular, high-precision tasks that still rely on traditional machine learning and carefully governed infrastructure. The accuracy gains that matter are measured in low single digits. That is where robust data pipelines, feature engineering, model lifecycle management and battle-tested execution infrastructure remain decisive.


The Irony They Can't See

Here's the delicious irony that fundamental managers miss while watching their quant colleagues struggle. They see quants failing with AI and feel safe, but they're misreading the failure entirely. Quants are not failing because AI is useless. They are failing because they are asking AI to do tabular precision it does not yet excel at, inside linear workflows it cannot breathe in. Meanwhile, the real threat is arriving from teams who build for AI's native strengths and then route around traditional roles entirely. Even if quants solved their architectural problems, LLMs still aren't very good at maths in their default form.


But while fundamental managers take comfort from these visible failures, they're completely blind to what's happening in the shadows. Others are architecting novel frameworks in AI's native dimensions, building systems that will simply appear one day and eat their lunch. They're watching the wrong experiment and drawing the wrong conclusions. The quants' failure isn't proof that AI can't transform investment management. It's proof that most people don't understand how to architect for AI's actual capabilities. The real threat isn't coming from their peers trying to make AI do their existing jobs better. It's coming from people building entirely different frameworks that bypass their jobs altogether.


Nor will GenAI rescue you from geopolitics. Tariffs, sanctions, elections and regulatory shocks are not neatly structured inputs. A model can enumerate scenarios, but a human still has to weigh path dependencies, second-order effects and narrative shifts. That is the judgment layer where true active earns its keep.


I Know This Is Happening Because I Built It

When writing the KEVI investment policy, I built a 4-step, 3-page framework that has the potential to eliminate the entire RI consulting layer for endowments. Traditional responsible investment governance requires armies of consultants managing ESG compliance frameworks, committee choreography, and quarterly reporting cycles that generate bureaucracy without decisions.


The Newgen 4-step Responsible Investment Framework (N4RIF)™, the recursive logic technology underpinning the 3-page framework, is designed to work in 3 dimensions: all actors (trustees, managers, advisors), all governance structures (direct holdings, pooled funds, property), and all asset classes (equities, bonds, real estate, alternatives). It works both "forwards" as a policy development tool, and "backwards" as a reporting and stakeholder communications technology.
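The three-dimensional claim is easiest to see as a data model. This is a hypothetical sketch of the grid the framework covers, not the N4RIF itself; the field names are illustrative:

```python
from dataclasses import dataclass
from itertools import product

ACTORS = ["trustees", "managers", "advisors"]
GOVERNANCE = ["direct holdings", "pooled funds", "property"]
ASSET_CLASSES = ["equities", "bonds", "real estate", "alternatives"]

@dataclass
class PolicyCell:
    """One cell of the 3D grid: guidance for an actor/governance/asset triple."""
    actor: str
    governance: str
    asset_class: str
    guidance: str = ""  # written "forwards" as policy, read "backwards" for reporting

grid = [PolicyCell(a, g, c) for a, g, c in product(ACTORS, GOVERNANCE, ASSET_CLASSES)]
assert len(grid) == 36  # the full space a 3-page document has to cover
```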


The framework is designed like a quantum superposition that collapses into a steady real-world state at its point of highest entropy, lowest energy. This means the rules of thermodynamics, information theory, and topological information conservation are baked in by definition. I'll detail how this works another time, but for now I'll just note that none of these fundamental principles - thermodynamics, information theory, or topological information conservation - appear in any module of the CFA or on the Actuaries' exam list.


Forward: The framework develops policy guidance across actors, governance models, and assets - institutional values become specific industry-recognised investment criteria that any stakeholder can understand and implement across all three dimensions simultaneously.


Backwards, it operates as three integrated tools:

  1. Monitoring Tool: Upload managers' ESG reports and your holdings, plug it into the news, and AI monitors portfolio alignment in real-time. We'd know about values breaches before our asset managers and advisors do.

  2. Stakeholder Reporting Platform: Converts manager case studies into institutional narratives - take any ESG report and automatically generate stakeholder communications that demonstrate organisational values in action, directly linked to the case study in question, with accompanying explanatory narrative.

  3. Values Alignment Assessment: Evaluates existing and potential investments and partners for values alignment. At KEVI Foundation we've already used it to assess a potential investment, with the model identifying that the retailer's 10% of revenue from tobacco sales would potentially breach our Care value.

A use-case within a case-study: potential tobacco retail investment at KEVI Foundation:

I used the GPT framework, and separately the CoISC used the framework manually (on paper) to identify that this characteristic:

  • Step 4: Would be a possible breach of SDG 3: Good Health & Well-Being, and SDG 12: Responsible Consumption & Production.

  • Step 3: Which would fulfil our "Seek to avoid" behaviour: "Investments that undermine public well-being, such as those involved in predatory financial practices, exploitative labour conditions, or industries with significant negative health or social impacts"

  • Step 2: Which would breach our investment value of Ethical and Sustainable, Asset and Investment Management. Translated as: Ensuring responsible stewardship of financial and real estate assets

  • Step 1: Which would breach the Foundation's core Value of Care.

[NB: for those who are interested, in an educational context Care is defined as: "People's wellbeing and growth is at the centre of all we do, and we take seriously our environmental responsibilities. We make a positive contribution to individuals and society."]
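The Step 4 to Step 1 traversal above is mechanical enough to sketch. This is a hypothetical reconstruction grounded in the tobacco example, not the framework's actual rule set:

```python
# Hypothetical reconstruction of the backward mapping in the tobacco example.
SDG_FLAGS = {
    "tobacco_revenue": ["SDG 3: Good Health & Well-Being",
                        "SDG 12: Responsible Consumption & Production"],
}
BEHAVIOUR = {"SDG 3: Good Health & Well-Being":
             "Seek to avoid: significant negative health or social impacts"}
INVESTMENT_VALUE = {"Seek to avoid: significant negative health or social impacts":
                    "Ethical and Sustainable, Asset and Investment Management"}
CORE_VALUE = {"Ethical and Sustainable, Asset and Investment Management": "Care"}

def trace(characteristic: str) -> list[str]:
    """Walk Step 4 -> Step 1 and return the chain of findings."""
    findings = []
    for sdg in SDG_FLAGS.get(characteristic, []):
        behaviour = BEHAVIOUR.get(sdg)
        if behaviour is None:
            continue  # this SDG has no mapped behaviour in the toy rule set
        value = INVESTMENT_VALUE[behaviour]
        findings.append(f"{sdg} -> {behaviour} -> {value} -> core value: {CORE_VALUE[value]}")
    return findings

for finding in trace("tobacco_revenue"):
    print(finding)
```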

The framework can also assess the RI policies of potential partners for compatibility and areas of conflict. I ran another instance of the N4RIF on an LGPS and an LGPS Pool and it immediately produced structured, actionable intelligence.


What traditionally takes:
  • ESG consultants months to design

  • Quarterly committee meetings to review

  • Annual reporting cycles to communicate

  • Ongoing compliance management to maintain

Now happens automatically:

  • Values translate directly to investment guidance across all dimensions

  • Manager reports become stakeholder narratives instantly

  • Portfolio alignment monitored in real-time

  • Partner compatibility assessed immediately

  • Zero ongoing consultant dependency

This isn't theoretical. I've built a systematic architecture that does automatically what takes the RI industry months of process management. Linear thinkers might look at AI and think: how can we optimise or make more efficient the different stages and parts of the Charity Investment Governance Principles (CIGP)? What this framework can do when I've finished the at-scale deployment is make processes like CIGP obsolete in their entirety. Plus it can go significantly further by providing a stakeholder communications and integrated monitoring platform. This is an example of the implications for the wider asset management market - entire categories of systematic advisory work disguised as strategic consulting simply disappear.


The 3D-Printed Future

The future of true active management isn't about humans versus machines. It's about modular hybrid constructs, effectively 3D-printed from human insight and machine execution. Picture a ringmaster in a circus of specialised intelligence. The portfolio lead doesn't personally perform every analytical task. Instead, they orchestrate AI specialists, systems engineers, and domain modules, each optimised for specific types of processing.


In this model, implementation becomes agnostic. The old categories of passive, smart beta, quant, and fundamental lose their meaning. What matters is problem-solving quality: can you generate differentiated insights and implement them efficiently? The specific mix of human and machine intelligence used to achieve this becomes irrelevant to the end client.


This is already happening at the edges of the industry. Agentic AI 1.0, which mainly wrapped linear workflows and Q&A systems, delivered productivity gains but shallow edge. Agentic 1.0: scrape the earnings call, summarise the transcript, tag sentiment, push into a dashboard. Useful, but a commodity.

Now we're seeing Agentic AI 2.0, which requires completely re-architected processes for simultaneous, interconnected tasks where agents coordinate within constraints set by human overseers. Agentic 2.0: on earnings day, orchestrate data ingestion, reconcile fundamentals with alternative data, refit risk and liquidity constraints, simulate cross-portfolio knock-ons, and propose implementation plans that respect mandate and turnover limits, all at once, and within defined guardrails.
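A hypothetical sketch of the difference in shape, using asyncio as a stand-in for an agent runtime; the task names and guardrail values are illustrative, not any vendor's API:

```python
import asyncio

GUARDRAILS = {"max_turnover": 0.05}  # human-set; the agents work inside it

async def ingest_data() -> str:
    return "fundamentals reconciled with alternative data"

async def refit_constraints() -> str:
    return "risk and liquidity limits refreshed"

async def simulate_knock_ons() -> str:
    return "cross-portfolio impacts estimated"

async def earnings_day() -> dict:
    # Agentic 2.0: interconnected tasks run simultaneously, not as handoffs.
    results = await asyncio.gather(
        ingest_data(), refit_constraints(), simulate_knock_ons()
    )
    proposal = {"basis": results, "turnover": 0.03}
    if proposal["turnover"] > GUARDRAILS["max_turnover"]:
        raise RuntimeError("escalate to the human overseer")
    return proposal

print(asyncio.run(earnings_day()))
```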


The new Model Context Protocol (MCP) standardises tool access and reduces friction, but it doesn't confer a competitive advantage by itself. Real edge comes from process architecture and what I call data connoisseurship: the ability to identify, curate, and synthesise information sources that others miss or misinterpret.


The durable moats are private data and private infrastructure. Private data gives you signals others do not have. Private infrastructure lets you clean, align and exploit those signals at speed and with reliability. Prompts do not create edge. Pipelines and platforms do.


As AI drives implementation, we're also seeing pricing models evolve. The pressure pushes providers toward outcome-based pricing rather than classic software-as-a-service or basis point models. Integration and performance accountability tighten. "Tools" become "co-managed outcomes" with shared risk between asset managers and technology providers.


The Human Remainder

Most commentary treats AI as a time-and-cost optimiser on a familiar production line. That is the wrong picture. The future manager is not assembled on a line. The future manager is 3D printed from a template that blends human insight with machine execution in one object. If your organisation is designed around sequential handoffs, you are bending a liquid shape into a flat mould. It will split and leak.


The only durable edge is cognitive. Architect the frame in which machine intelligence explores, constrain it with institutional purpose, and reserve human judgement for the few decisions where context and accountability cannot be automated. Everything else belongs to the machine.


Here's the uncomfortable truth about workforce implications: approximately 80% of current roles in asset management may disappear as AI automates repeatable tasks. But the real tragedy is that the remaining 20% may be ill-suited to the majority of current professionals. The survivors won't be the best analysts or the most experienced portfolio managers. They'll be system architects, integrators, and trust builders, skills that weren't even on the radar when most of today's professionals entered the industry.


Some aspects of workforce change, such as concrete re-skilling pathways or migration routes for roles displaced by automation, are noted but deliberately not fully expanded here. The deeper issue is not tactical redeployment within the old skill frame, but the wholesale shift to an entirely different cognitive domain once AI-native value creation becomes dominant.


We face a massive credentials mismatch. Traditional qualifications (CFA, MBA, actuarial science) were built for a rules-based, sequential-analysis economy that will no longer define the frontier of value creation. The challenge for both holders and accrediting bodies is one of adaptation. For holders, it means using their domain expertise in ways that will feel alien, requiring multi-disciplinary skills in addition to their subject excellence. For accrediting bodies, the question becomes how to impart domain excellence without embedding the crystallised linear thinking that makes graduates unsuitable for AI-native environments.


The skills gap is not just about learning new tools. Systems design, hypothesis engineering, and live model governance are to traditional CFA and actuarial skillsets what Python is to French - a fundamentally different cognitive domain, not just a new tool in the same toolkit. This makes clear that retrofitting existing professionals is less about "upskilling" in their own discipline and more about translating them into an entirely new mental operating system.


The CFA curriculum, with its emphasis on sequential analysis and crystallised knowledge, is particularly vulnerable. But this vulnerability isn't about the knowledge itself - it's about the thinking patterns it embeds. Unless these programs radically reimagine their pedagogy around systems thinking and AI-era practice, they risk producing graduates optimised for a world that no longer exists.


Think of it as a two-speed labour market, like a Formula 1 pit crew surrounding layers of automation. Small, elite human teams operate where they add unique value: strategy and governance, client-facing trust roles, technology architecture, and model operation. But this isn't just reskilling. It's an identity shift that requires professionals to fundamentally reconceive their role within hybrid systems.


What Actually Stays Human

Several domains will remain distinctly human, not because machines can't do them but because we won't let them. Strategy and governance, including mission, values, and decision rights, stay with us because they're about human purposes and human accountability. Client-facing trust roles involving fiduciary communication and mandate stewardship remain human because trust, ultimately, is a human-to-human phenomenon. Not because machines cannot, but because we will not grant them the agency to do so.


Technology and systems architecture, paradoxically, stay human because someone needs to build the rails on which AI runs. Model operators who steer and intervene in live, self-learning systems remain essential because AI, for all its capabilities, lacks the contextual awareness to know when it's going off the rails.

A new role is emerging that exemplifies this shift: the Risk Overseer. This isn't traditional risk management with its VaR models and stress tests. It's about designing operating boundaries for AI systems: defining permissible search spaces, setting error tolerances, creating escalation protocols, and embedding ethical constraints. It's a creative and strategic role closer to product design than compliance policing, requiring a combination of technical fluency, strategic thinking, and philosophical grounding that no current certification provides.
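What "designing operating boundaries" might look like in practice is easiest to show as configuration. A hypothetical sketch, with illustrative names and thresholds:

```python
from dataclasses import dataclass, field

@dataclass
class OperatingBoundary:
    """Hypothetical guardrail spec a Risk Overseer might own for one AI system."""
    permissible_universe: list[str]   # where the system may search
    max_position_error: float         # drift tolerance before flagging
    escalation: dict[str, str]        # who gets alerted, and when
    ethical_exclusions: list[str] = field(default_factory=list)

boundary = OperatingBoundary(
    permissible_universe=["developed-market equities", "investment-grade credit"],
    max_position_error=0.002,  # 20bp drift tolerance before escalation
    escalation={"breach": "page the model operator", "repeat_breach": "halt trading"},
    ethical_exclusions=["tobacco", "controversial weapons"],
)
```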


The high-value skills of tomorrow look nothing like today's curriculum. Non-linear thinking and mental model design across domains. System design and orchestration of hybrid workflows. Interdisciplinary synthesis combining complexity science, behavioural economics, network theory, geopolitics, and technology. The ability to frame human intuition for scaling, codifying tacit knowledge so machines can amplify it without distortion.


Current LLMs are strong in language but weaker in mathematics, though this is changing rapidly. Near-term commoditisation begins with narrative parsing and surface-level reasoning. Those who understand these limitations and design around them will thrive. Those who don't will find themselves competing against machines in games that the machines were designed to win.


Career Migration Paths

The transition paths are starting to emerge. Analysts need to move toward data ontology design and prompt engineering. Operations professionals must shift to model validation and real-time control layers. Compliance evolves into AI policy, assurance, and scenario monitoring. Portfolio managers become framework architects and risk-domain designers.


But this isn't just about new skills. It's about a fundamental reconception of value. The premium shifts from execution to orchestration, from analysis to synthesis, from following processes to designing systems. The professionals who survive won't be those who learn to use AI tools. They'll be those who learn to think in AI-native ways.


Cross-Industry Lessons

Finance isn't the first industry to face this transition. CAD in engineering shifted the focus from drafting execution to constraint-guided design and orchestration. Engineers stopped drawing lines and started defining parameters. Aviation autopilot evolved pilots from manual operators into systems managers. They still fly the plane, but mostly they manage the systems that fly the plane.


Industrial automation moved the skill premium from craft execution to machine orchestration and exception handling. The best factory workers today aren't the ones who can operate a lathe most precisely; they're the ones who can optimise entire production systems and intervene when automation fails.


Finance is following the same trajectory, just decades later. The patterns are predictable if you know where to look. First, the routine tasks get automated. Then the complicated-but-systematic tasks. Finally, everything that can be expressed as rules or patterns gets absorbed by machines. What remains is the genuinely complex: strategy, synthesis, and judgment calls that require understanding context machines can't access.


The Implementation-Agnostic Endgame

We're heading toward an implementation-agnostic future where labels become legacy semantics. The only meaningful question will be how effectively a process combines human insight and machine capability to solve the investor's problem. Boards and clients will judge on outcome quality, resilience, and explainability rather than style-box categories that lost their meaning years ago.


Consider how we talk about technology today. No one debates whether a phone is "mobile" anymore because mobility has become the default assumption. The distinction became so universal that it became invisible. The old active-passive debate will feel just as obsolete. Implementation is agnostic, and labels are meaningless when every strategy is a hybrid of human and machine intelligence.


This isn't a convergence of active and passive into some middle ground. It's a recognition that the taxonomy was wrong from the start. Most "active" was systematic. Most "passive" was active choices wrapped in benchmark clothing. The real distinction was never about active versus passive. It was about systematic versus judgmental, scalable versus artisanal, commoditisable versus differentiated.


McKinsey Gets Closer, Still Misses

A recent McKinsey report (July 2025) on "How AI could reshape the economics of the asset management industry" gets tantalisingly close to understanding what's coming. They correctly identify that firms spend 60-80% of their technology budgets on maintaining legacy systems rather than transformation. They document the complete failure of technology spending to improve productivity, finding "virtually no meaningful relationship between spend and productivity" despite an 8.9% CAGR in tech investment.

They even advocate for "domain-level reimagination" rather than isolated use cases. The domain-level reimagining they're arguing for is the right concept, but it's trapped in too few dimensions. They want firms to be bolder in re-engineering their existing linear processes, to think across entire domains rather than in silos. But they're still thinking in 2-D when AI operates in 4-D.


It's like the classic fish-in-water analogy. McKinsey has progressed from organising things in their own corner to considering how to arrange everything across the entire floor of the fish tank - not in silos, but holistically. But just as fish cannot conceive of the water they swim in, still less (strange as degrees of inconceivability sound) can they conceive that the water itself is in a bowl, and that someone outside controls the entire container. While McKinsey meticulously documents how to optimise movement across the gravel, they miss that someone could simply create a new fish tank that would redefine the boundaries of their universe entirely.


McKinsey sees the symptoms: massive tech spend with no productivity gain, the need for domain transformation, and workforce disruption. But they can't conceive the cure because they're still thinking linearly about non-linear intelligence. They're documenting the failure of linear thinking while thinking linearly about the solution.


The report perfectly demonstrates why even smart, well-resourced analysis misses what's coming. They can measure that the old ways aren't working, but can't imagine thinking in the dimensions where the new ways operate.


The Question That Matters

The FT article asks whether AI will kill active management, but that's the wrong question entirely. The right question is whether your process design ever supported real active management in the first place. Most of the industry has been running systematic processes with narrative wrappers for decades, charging active fees for what amounts to complicated beta.


AI isn't destroying active management; it's forcing us to admit how little of it actually existed. It's stripping away the comfortable fiction that running screens and tilting factors while telling stories about stock selection constitutes active investing. It's revealing that most of what we called active was just systematic processes that machines can do better.


The firms that survive won't be the ones that resist this change or the ones that blindly embrace every AI tool that comes to market. They'll be the ones who understand the difference between genuine human judgment and repeatable processes, then architect systems that amplify the former while automating the latter.


But here's where the FT got it wrong in ways they can't possibly conceive, at a scale nobody wants to admit. They're thinking about AI-enhancing or replacing existing processes, when what's actually coming is the complete rearchitecture of what asset management firms will be.


The FT imagines machines doing what portfolio managers do today. They can't conceive of asset management firms that look nothing like today's org charts, where the few humans who remain are doing jobs that don't currently exist, with skills that today's professionals don't possess and probably can't develop.


The fund manager of the future might indeed be a machine for everything that can be systematised. But the real disruption isn't job replacement. It's organisational reconstruction. The surviving firms won't be today's asset managers with AI bolted on. They'll be entirely new structures designed around AI's non-linear, liquid intelligence, with human roles that have no current analogue.


The FT worried about machines replacing humans in existing jobs. They can't think liquidly enough to imagine that the entire concept of an asset management firm is about to be redesigned from scratch. The real story isn't that most humans were already doing machine work. We've built an entire industry architecture around sequential, linear thinking, which AI now makes obsolete.


When the rebuilding is done, the few humans left standing won't be doing better versions of today's jobs. They'll be doing jobs we don't have names for yet, in firms that look nothing like anything that exists today.

However, humans will remain essential for true active management.


The human brain has 10^1,000,000 potential configurations, which is more potential classical computing power than the universe itself. But the human mind doesn't just follow paths - it creates wormholes between ideas that have no business being connected.


AI can operate brilliantly within bounded universes, but only humans can make the wild conceptual leaps that link and create new universes. AI doesn't make these connections. It can understand and follow once a human has made them, but it can't originate the connection itself.


This is why AI will not replace true active management. To borrow from Edison's formula that genius is 1% inspiration and 99% perspiration: AI can handle the 99% perspiration, but it cannot produce the 1% inspiration. And in that lies all the difference.
