LITERATURE REVIEW - January 2026
- Jenni Pignatelli

- Jan 27
REFRAMING THE FUTURE OF WORK: FROM INDUSTRIALISATION TO INFRASTRUCTURAL TRANSFORMATION
Introduction: Why the Future of Work (“FoW”) needs reframing
The future of work, or FoW, is now a vast academic field spanning economics, organisation studies, information systems, sociology, management and more. There is an abundance of study out there, but more often than not the field has a one-size-fits-all concern: will technology replace human jobs? This labour-centric lens is analytically convenient, since jobs, occupations, and employment statistics can be measured at scale. It is also politically salient, because job loss is visible, emotionally charged, and easily mobilised in public debate.
Yet the labour-centric frame is increasingly ill-fitted to an era of infrastructural technological change, in which intelligent systems introduce large-scale adaptive cognition. FoW debates frequently emphasise task exposure and employment shifts, while overlooking that value creation occurs mainly within firms, supply chains, and technical stacks (Barley, 2020). The substitution narrative persists in part because “the popular press tends to associate artificial intelligence (AI) and robotics with substitution, in part because of an assumption that productivity gains are at the expense of labour” (Raj & Seamans, 2019).
This article advances a capacity-centric reframing: the economic focus is not the substitution of labour but the expansion and reconfiguration of Total Productive Capacity (TPC). For the purposes of this research, TPC is defined as the output potential of a particular socio-technical production system, including human judgement, supervision, and coordination; digital cognition and automation; organisational design; and infrastructure, data, and institutional context. In this view, productivity is the realised (utilised) part of capacity. As the empirical studies of Robert M. Solow and Paul A. David show, capacity can grow well in advance of being operationalised, monetised, or statistically observable.
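The distinction between capacity and realised productivity can be made concrete in a minimal numerical sketch. Everything below is an illustrative assumption, not a measurement drawn from the cited studies:

```python
from dataclasses import dataclass

@dataclass
class ProductionSystem:
    """Toy socio-technical production system (illustrative only)."""
    human_capacity: float    # output potential of human judgement and coordination
    digital_capacity: float  # output potential of digital cognition and automation
    utilisation: float       # share of capacity actually operationalised, 0..1

    @property
    def tpc(self) -> float:
        """Total Productive Capacity: output potential of the whole system."""
        return self.human_capacity + self.digital_capacity

    @property
    def realised_productivity(self) -> float:
        """Productivity as the realised (utilised) part of capacity."""
        return self.tpc * self.utilisation

# Capacity can grow well ahead of measured productivity: digital capacity
# triples here, but utilisation lags, so realised output barely moves.
before = ProductionSystem(human_capacity=100.0, digital_capacity=20.0, utilisation=0.8)
after = ProductionSystem(human_capacity=100.0, digital_capacity=60.0, utilisation=0.65)
```

On these assumed figures, TPC rises by a third (120 to 160 units) while realised productivity moves only from 96 to about 104: the Solow/David lag expressed arithmetically.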
The evolution of industrialisation and productive systems
Industrial-era theories of work and productivity emerged in a context where productive effort and coordination were predominantly human. Frederick Winslow Taylor's scientific management saw work as decomposable tasks that could be optimised through standardisation, measurement, and managerial control (Taylor, 1911). Under this vision, productivity improvement comes from changing how humans perform work, and labour is the main input from which optimisation is derived.
Coase's theory of the firm likewise placed coordination at the centre, though still, implicitly, as a human activity. Firms exist because hierarchical coordination can sometimes be more efficient than market exchange, particularly when transaction costs are high (Coase, 1937).
What changes with AI, however, is not merely an improvement in the efficiency of individual tasks but a transformation in how coordination itself is organised. As Barley observes, “the problem with infrastructural technological change is that people often view these technologies as another instance of substitutional change” (Barley, 2020, p.10). AI extends coordination by enabling cognition to be replicated, scaled, and redistributed across agents and systems, thereby altering the relative costs of hierarchy, markets, and platforms.
Meanwhile, management accounting and costing systems evolved to favour direct labour and direct production inputs. According to Kaplan’s history of management accounting (1984), costing systems structured managerial attention and reinforced labour-centric concepts of performance. These assumptions remain institutionally entrenched: the human full-time equivalent (FTE) is still the dominant unit for headcount, cost, and productivity comparisons even where systems engage in continuous labour-like work.
Infrastructural technology revolutions and economic reconfiguration
Growth theory explains why labour-centric metrics fail when technological change is infrastructural. Solow demonstrated that measured growth could not be adequately explained by labour and capital inputs alone, leaving a residual attributed to technological change (Solow, 1957). But even when the residual is recognised, economists and managers find it hard to track the organisational paths through which it forms.
From a production-function perspective, this reflects a deeper limitation: standard production function formulations such as Cobb–Douglas implicitly treat productive inputs as homogeneous and predominantly labour-bound, making them ill-equipped to represent heterogeneous, non-human, and coordination-intensive forms of capacity. This research therefore builds toward an extended formalisation of productive capacity that relaxes these assumptions, developed in Article 2.
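The limitation can be stated directly in the standard formulation (the notation below is the conventional one, not the article's own extended formalisation):

```latex
% Standard Cobb--Douglas production function
Y = A \, K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1

% Growth accounting: the Solow residual is the growth inputs cannot explain
\frac{\dot{A}}{A} \;=\; \frac{\dot{Y}}{Y} \;-\; \alpha \frac{\dot{K}}{K} \;-\; (1-\alpha)\frac{\dot{L}}{L}
```

Labour $L$ enters as a single homogeneous quantity, so heterogeneous, non-human, coordination-intensive capacity has nowhere to appear except inside the residual $A$, which is precisely where it becomes organisationally invisible.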
The productivity paradox (the delay between investment in a new technology, such as AI, and the expected increase in measured output) reflects this lag from adoption to realised productivity. Solow's observation that “you can see the computer age everywhere but in the productivity statistics” (Solow, 1987) anticipated the AI productivity paradox: widespread AI diffusion with no significant growth in aggregate productivity (Brynjolfsson, Rock & Syverson, 2019; Gordon, 2016). Comparisons with historical general purpose technologies (e.g. steam, electricity, computing) show that the expansion of capacity usually precedes measurable productivity gains: the productivity benefits of electricity materialised only when factories were reorganised around its flexibility and distributed power (David, 1990).
Consistent with Stephen R. Barley’s rejection of “Fourth Industrial Revolution” rhetoric, this article conceptualises AI as an infrastructural innovation emerging from the computing-based control revolution—one that extends the system’s capacity to coordinate, predict, decide, and scale. Barley argues that technologically induced change unfolds through “a series of interlinked reverberations across levels of analysis that eventually alter the organization’s structure and culture,” rather than through immediate efficiency gains (Barley, 2020, p. 26). The primary effect of AI is therefore not labour productivity per se, but economic reconfiguration. Accordingly, the unit of analysis shifts from tasks to stacks: layered socio-technical architectures in which value creation propagates vertically — across infrastructural, coordination, and cognitive layers — and horizontally, across functions, firms, and industries. This stack framing is intended as an analytical heuristic rather than a rigid or exhaustive taxonomy of organisational systems.
Another limitation of the extant literature is the tendency to analyse the effects of AI at highly aggregated levels (occupations, industries, national labour markets) while abstracting away from firm-level organisation and decision-making. As Raj and Seamans suggest, an excessive focus on these aggregate approaches obscures how firms adopt, deploy and reorganise themselves around AI, restricting explanations of divergent productivity, employment and value outcomes. Without firm-level context, technological change is too often read as a uniform process of substitution rather than as organisational reconfiguration shaped by managerial choices, coordination structures and complementary investments. This gap motivates a shift in the unit of analysis from tasks or occupations to productive capacity at the firm level.
Innovation as a spawned, not substitutional, effect
A common organisational tendency observed in practice is to frame AI primarily in terms of automation and the replaceability of jobs. Asking only which tasks can be automated prematurely narrows a general-purpose capacity into small, captive cost-saving initiatives. Organisations ask themselves “What tasks can AI replace?” instead of the more consequential question, “What new productive capacity does AI enable?”.
With a capacity-centric lens, innovation is understood as a consequence of infrastructural change rather than a substitutional effect of individual technologies. As Barley argues, technological revolutions are better conceived as evolutionary processes “punctuated by phases of intensification,” during which infrastructural change gives rise to a “swarming” of complementary innovations rather than discrete acts of replacement (Barley, 2020). New capacity generates new coordination opportunities, which in turn generate new types of work: oversight, exception handling, integration, auditing and governance. This aligns with role-systems conceptions of work as something to be recomposed, not abolished (Barley, 2020). It is also in keeping with the literature on algorithm-augmented work, where performance is contingent on workflow redesign and task-instance allocation rather than tool use alone (Jarrahi, 2018; Raisch & Krakowski, 2021).
The stack of productive capacity provides an analytical bridge from macro narratives to firm-level mechanisms. At the base of the stack is infrastructure: data architectures, compute, connectivity, standards. Above it sits a coordination layer: platforms, algorithms, process orchestration, governance mechanisms. Above that resides the cognitive layer: delegation, trust calibration, judgement allocation, escalation and exception design. Outputs and productivity metrics sit at the top. The central claim is that the fundamental value-creating changes happen deep in the stack, where existing labour-centric measures are least sensitive.
Skill reconfiguration across industries within an economy
If AI increases total productive capacity (TPC) through stack reconfiguration, skills reconfigure across industries within an economy as capacity diffuses. Task-based exposure models (Frey and Osborne, 2017) infer displacement risk by mapping machine capabilities onto occupations, implicitly treating productivity as the sum of task substitutions. However, firm-level outcomes depend on orchestration capabilities: the capacity to allocate task instances between human and machine, to design governance and control mechanisms, and to reconfigure workflows across the stack. These orchestration capabilities are themselves a form of productive capacity, as they determine whether and how technical potential is translated into realised output. Consequently, the same AI capability can yield divergent productivity outcomes across firms, not because algorithms differ, but because organisational systems vary in their ability to assemble, coordinate, and activate productive capacity.
The conditional nature of augmentation is increasingly appreciated within the FoW literature. For instance, in studies of human-AI collaboration, it is evident that performance depends on delegation and metaknowledge, and that miscalibration may lead to over-reliance, deskilling, or inefficiencies (Fügener et al., 2022; Raisch & Krakowski, 2021). Other studies demonstrate that technology adoption is shaped by identity, jurisdictions, and power, and that FoW narratives are politically constructed rather than neutral forecasts (Pfeiffer, 2017; Kelan, 2023). These insights imply that the central question is not aggregate displacement, but how firms deliberately design, negotiate, and govern new configurations of work.
Empirically, the reconfiguration is evident where AI performs continuous cognition at scale. Autonomous driving systems such as Waymo perform perception and decision-making trained on millions of annotations (Sun et al., 2020). Predictive maintenance systems for onshore wind turbines continuously analyse operational data to prevent failures and optimise output beyond human monitoring capacity (Enercon GmbH, 2024). Hybrid human–AI teams can outperform either humans or AI alone on tasks involving high uncertainty, interdependence, and judgement (Vaccaro, Almaatouq, and Malone, 2024). Yet in all cases, current accounting classifications systematically fail to capitalise, attribute, or performance-link digital and AI systems to productive output, implying value leakage when these systems perform labour-like or coordination-intensive work (Kaplan, 1984).
Productive capacity over time: Adjustment, diffusion and lag
A capacity lens must allow for diffusion, adjustment, and lag. Increased capacity is not immediately monetised: it requires complementary investments (data quality, workflow redesign, training), institutional adaptation (governance, regulation), and learning effects. This logic accords with the historical experience of electrification and computing (David, 1990; Brynjolfsson & Hitt, 2000).
‘Black swan’ events such as the 2008 financial crisis and COVID-19 are treated in FoW scholarship as accelerators: they compress timelines of experimentation, and organisations accept risks they would normally avoid by deploying immature systems, bypassing governance, and fast-tracking AI (Coombs, 2020). But acceleration is no panacea for the reconfiguration problem; it can amplify it by forcing rapid re-composition under uncertainty and by generating extra, often invisible, human labour in training, monitoring, and correction.
The main measurement implication is that aggregate employment estimates over-represent displacement and churn while failing to capture deep, stack-level capacity transformations. This creates a systematic bias in public and academic narratives: displacement is concrete; capacity reconfiguration is latent. As a result, the absence of effective firm-level TPC measures allows substitution narratives to be repeated even in stark contrast to the empirical reality of re-composition.
Representation of dynamic skill and capacity reconfiguration
As AI expands TPC through stack reconfiguration, skills necessarily reconfigure across industries as capacity diffuses, recombines, and is redeployed. Influential task-based exposure models, most notably Frey and Osborne (2017), estimate displacement risk by mapping advances in machine capabilities onto occupational task bundles, implicitly representing productivity change as the aggregation of task-level substitution effects. While analytically powerful in highlighting differential exposure across occupations, such models are largely static and abstract from firm-level organisation, coordination, and complementary investment decisions. As a result, they offer limited insight into how productive capacity is actively assembled, orchestrated, and reconfigured within firms over time.
More fundamentally, these approaches struggle to capture the temporal dynamics through which skills and responsibilities evolve as infrastructures mature and organisational designs adapt. In particular, they provide little visibility into how human and non-human contributors are continuously assembled, coordinated, and re-orchestrated across organisational layers as productive capacity expands.
Summary: From labour-centric narratives to capacity-centric logic
Moving toward capacity-centric thinking requires a shift in how work is narrated: from labour as the primary unit of analysis to productive capacity as an organising principle in its own right. AI is best conceptualised as an infrastructural technology that expands the total productive capacity of production systems by reshaping the capacity stack—spanning infrastructure, coordination layers, and cognitive orchestration. From this perspective, innovation does not arise primarily through task replacement, but through the expansion of coordination possibilities that enable new configurations of work, roles, and value creation.
Scholarly work on the FoW exhibits different narrative framings—from pessimistic fears about job loss and deskilling, to hopeful expectations of new jobs and meaningful work, to sceptical and nuanced accounts that highlight contingent outcomes shaped by institutional, organisational and social conditions (Sarala, Post, Doh and Muzio, 2025). Economic studies, meanwhile, distinguish three primary mechanisms through which AI influences labour markets and productivity: the substitution (displacement) effect, through which technology directly replaces human jobs and tasks; the productivity (demand) effect, through which technological change lowers costs and increases output; and the skills complementarity effect, whereby new technologies raise demand for the skills that complement them, thereby augmenting human labour (Ernst, 2019; Acemoglu and Restrepo, 2018). These three dimensions go a long way toward explaining the polarised debates around automation fears and augmentation hopes, and the framing of AI as complementing, supplementing or substituting human labour rather than entailing inevitable macro-level displacement.
With the unit of analysis translated to TPC, the core research question becomes how firms can measure, organise, and govern this heterogeneous productive capacity, across human and non-human contributors, so that capacity converts into sustained value creation rather than collapsing prematurely into short-term efficiency gains.
Building on the arguments developed in this paper, Article 2 extends the capacity-centric logic by deconstructing workforce composition beyond the human full-time equivalent (FTE), while Article 3 introduces the Pignatelli Framework (PF) as an organisational measurement system designed to operationalise total productive capacity (TPC) and mitigate value leakage arising from misaligned accounting and governance practices.
DECONSTRUCTING WORKFORCE COMPOSITION BEYOND THE HUMAN FTE
Introduction: The human FTE as an organisational construct
Emerging in the 1960s in the United States as a standardised method for organisations to measure labour input and personnel costs, and to compare full-time and part-time staff on an equal basis, the full-time equivalent (FTE) is one of today's most enduring management concepts. It distils human productivity into a time-expressed, unitised dimension that can be costed, compared, allocated, and forecast.
In modern organisations, this abstraction increasingly mis-calibrates productive reality. Artificial intelligence (AI) systems now perform labour-like cognitive tasks such as prediction, classification, optimisation, generation, and continuous monitoring. Yet workforce measures, costing systems and performance dashboards still recognise only human contributors. This article argues that continued reliance on the human FTE reflects institutional path dependence rather than empirical necessity, leading to systematic misallocation of value and persistent measurement challenges.
In a widely cited Wall Street Journal analysis, McGrath (2025) documents how professional-services firms adopting AI most effectively face declining revenues under hourly billing, despite delivering superior outcomes. Because value is tied to time spent rather than results achieved, organisations are structurally disincentivised from activating productive capacity that reduces labour hours: “the economic absurdity becomes clear when firms adopting AI most successfully would paradoxically see revenue collapse under hourly billing, even as they deliver superior results more efficiently” (McGrath, 2025).
In this context, productive capacity is not merely unmeasured — it is actively punished.
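The hourly-billing disincentive reduces to simple arithmetic. The rate and hours below are hypothetical, chosen only to show the direction of the effect:

```python
# Hourly billing ties revenue to time spent, not to outcomes delivered.
hourly_rate = 300.0        # hypothetical billing rate
hours_without_ai = 100.0   # hours to deliver an engagement manually
hours_with_ai = 40.0       # hours after AI removes routine effort

revenue_without_ai = hourly_rate * hours_without_ai  # 30000.0
revenue_with_ai = hourly_rate * hours_with_ai        # 12000.0

# Same (or better) outcome, 60% less revenue: under this pricing model,
# activating productive capacity is punished rather than rewarded.
revenue_lost = revenue_without_ai - revenue_with_ai  # 18000.0
```

Only a shift from pricing hours billed to pricing results achieved removes the structural penalty.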
The economic case for a productive capacity strategy
The Strategic Problem: Organisations Cannot See Their Productive Capacity
Even as investment in artificial intelligence accelerates, there is an enduring observation across both empirical and managerial literature that many organisations find it difficult to adequately articulate where productive capacity resides, how it is configured, or how it should evolve over time. Research on AI adoption demonstrates that efforts are often framed as discrete automation projects, experimental pilots, or technology-led deployments that can be justified in terms of short-term efficiency gains or narrow return-on-investment criteria (Brynjolfsson, Rock & Syverson, 2019; Davenport & Ronanki, 2018).
At Amazon, AI is deeply embedded in demand forecasting, inventory optimisation, and logistics planning across its global network. However, realising these capabilities requires redesigning workflows and coordination processes; capacity grows on paper, but output gains depend on organisational integration and governance of these systems (Frazer, 2025). Robotic systems further enhance fulfilment throughput, yet their value is realised only when paired with redesigns of human workflows and decision rights (Exotec, 2024).
This organisational confusion reflects a deeper economic problem. Dominant managerial and economic models remain anchored to labour-centric representations of production, in which output is primarily constrained by human effort measured through headcount or time. As a result, AI-driven changes in productive capacity appear fragmented, delayed, or misattributed. Motivated by the need to resolve this strategic blind spot, the Pignatelli Framework seeks to reconnect organisational decision-making with the underlying economics of AI-enabled production.
Why AI Breaks Labour-Centric Economic Assumptions
Classical growth theory analyses output as a function of labour and capital and treats technological change as an exogenous residual (Solow, 1957). At the organisational level, this logic is typically translated into production functions such as Cobb–Douglas, which treat labour as a uniform, time-bound contribution (Cobb & Douglas, 1928). These formulations implicitly assume that labour is human, that productive input scales with hours worked, and that labour is the main binding constraint on output.
These premises are challenged by the AI literature. In contrast with earlier models of mechanisation, which tended to emphasise physical operations, AI systems carry out cognitive, predictive and coordination-centric processes that were once embedded within professional and managerial roles (Autor, Levy & Murnane, 2003; Brynjolfsson & McAfee, 2014; Acemoglu & Restrepo, 2020). AI thus changes the nature of production, not merely the efficiency of labour within it.
Economically, AI displays the features of non-rival, intangible capital, unlike conventional assets: high fixed development costs, near-zero marginal replication costs, and strong scale effects (Brynjolfsson, Rock & Syverson, 2019). AI-infused cognition can be replicated across functions, processes and organisational units without corresponding increases in labour input. Productive capacity can therefore grow independently of human labour well before output or productivity statistics reflect it.
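This cost structure can be sketched in a few lines. The figures are illustrative assumptions, not estimates from the cited studies:

```python
def average_cost_per_deployment(fixed_dev_cost: float,
                                marginal_replication_cost: float,
                                deployments: int) -> float:
    """Average cost of one deployment of a non-rival asset.

    A high fixed development cost is amortised across deployments;
    near-zero marginal replication cost drives the average toward zero.
    """
    total = fixed_dev_cost + marginal_replication_cost * deployments
    return total / deployments

# Hypothetical AI system: 1,000,000 to develop, 10 per additional deployment.
first_unit = average_cost_per_deployment(1_000_000, 10, 1)      # 1000010.0
at_scale = average_cost_per_deployment(1_000_000, 10, 100_000)  # 20.0
```

The five-orders-of-magnitude fall in average cost is what labour-centric accounting, which expects input cost to scale with hours, cannot represent.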
In these circumstances, analytical models which primarily depict AI as labour substitution or labour augmentation systematically mischaracterise its economic role.
Elasticities Reinterpreted Through a Capacity Lens
The Pignatelli Framework rests on how AI reconfigures three elasticities fundamental to the economics of technological change: capital elasticity, labour elasticity, and substitution elasticity.
Capital elasticity
Under classical production theory, capital deepening (i.e. the process of increasing the amount of capital per worker) increases output subject to diminishing returns (Cobb & Douglas, 1928). AI capital displays high elasticity in deployment but low elasticity in ownership. Data dependence, intellectual property protection, and platform-level scale advantages lead to rapid internal scaling and concentrated ownership and control across firms and markets (Aghion et al., 2019; Brynjolfsson et al., 2019). Labour-centric perspectives cannot account for why productive capacity scales quickly within firms yet remains highly concentrated across the economy.
Labour elasticity and skills complementarity
Task-based and skill-biased models predict that technological change raises demand for certain skills while displacing others (Autor et al., 2003). However, research on digitalisation and global value chains demonstrates that AI generates skills complementarity effects, recomposing rather than uniformly displacing labour (Ernst, 2019; Ernst & Merola, 2021). AI embeds expertise into systems while shifting human roles toward orchestration, integration, exception handling, and judgement. Labour thus transitions from being a direct factor of production to a capacity-orchestrating input—a shift that cannot be captured through headcount or FTE-based measures.
Substitution elasticity
Empirical evidence reveals that substitution effects are local and task-specific, while system-level productive capacity expands (Autor, 2015; Acemoglu & Restrepo, 2020). Tasks are automated, recombined, or newly created, but aggregate employment effects remain muted. Models that focus narrowly on displacement misinterpret these dynamics because they abstract from organisational re-composition and capacity expansion.
The Productivity Paradox as a Measurement Failure
The coexistence of rapid digital adoption and weak productivity growth is known as the productivity paradox. Though often blamed on adjustment lags, substantial research argues that the paradox reflects a systematic failure to measure digital and intangible capital (Brynjolfsson & Hitt, 2000; Syverson, 2017; Brynjolfsson et al., 2019).
Productivity refers to realised output per unit of labour (Solow, 1957), whereas AI expands potential output, optionality and organisational capacity. Capacity can therefore grow without being fully utilised, monetised, or statistically detectable (Brynjolfsson & Hitt, 2000; Brynjolfsson et al., 2019). This is why AI investment is empirically associated with stable employment, delayed productivity gains, and increasing dispersion in firm-level performance (Ernst, 2017; Acemoglu & Restrepo, 2020; Autor et al., 2020).
The Total Productive Capacity method makes this distinction explicit by separating capacity from utilisation. The Pignatelli Framework builds on this understanding and poses the organisational question that productivity metrics cannot address: how productive capacity should be intentionally designed, directed, deployed, governed, and evolved over time.
Inequality as a Capacity-Rent Problem
The distributional effects of AI adoption also underscore the importance of a capacity-based approach. AI amplifies scale effects, first-mover advantages, market concentration, and rent capture by firms able to control scalable productive capacity, rather than by labour (Autor et al., 2020; Acemoglu & Restrepo, 2020).
Ernst’s study of the digital transformation of global value chains shows that inequality stems from unequal access to and control over productive capabilities more than from large-scale displacement of human labour (Ernst, 2019; Ernst & Merola, 2021). Employment may be stable, yet value accrues disproportionately to firms that reconfigure and govern capacity.
In this light, inequality is not merely a wage or employment issue; it is a problem of capacity orchestration and governance. Organisational models based on skills supply or labour markets alone are therefore inadequate. By making productive capacity explicit and decomposable, the Pignatelli Framework provides an organisational foundation for addressing these distributional dynamics.
Why a Strategy Framework Is Economically Necessary
Collectively, these economic arguments suggest that AI adoption is not a series of simple, locally optimised investment decisions. It requires a deliberate productive capacity strategy. The Pignatelli Framework is proposed on precisely these economic grounds. It:
Links organisational decision-making to the true locus of value creation (Solow, 1957; Brynjolfsson et al., 2019);
Accommodates altered capital–labour relationships by reconceptualising the full-time equivalent as a family of capacity units—human (hFTE), machine (mFTE), algorithmic (aFTE), and digital system (dFTE)—reflecting the heterogeneous sources of productive capacity in AI-augmented production systems (Autor et al., 2003; Acemoglu & Restrepo, 2020);
Accounts for delayed productivity without invoking technological failure (Syverson, 2017);
Rethinks inequality as a capacity-rent governance challenge (Ernst, 2019).
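The capacity-unit family above can be sketched as a simple workforce portfolio. The unit names (hFTE, mFTE, aFTE, dFTE) follow the article; the weights and arithmetic are illustrative assumptions rather than part of the framework's formal specification:

```python
# A hypothetical workforce portfolio expressed in capacity units.
portfolio = {
    "hFTE": 40.0,  # human full-time equivalents
    "mFTE": 12.0,  # machine (robotic) capacity units
    "aFTE": 25.0,  # algorithmic capacity units
    "dFTE": 8.0,   # digital-system capacity units
}

def total_capacity_units(p: dict) -> float:
    """Total productive capacity expressed in a common capacity unit."""
    return sum(p.values())

def composition(p: dict) -> dict:
    """Share of total capacity held by each contributor type."""
    total = total_capacity_units(p)
    return {unit: round(value / total, 3) for unit, value in p.items()}

# On these assumed weights, humans hold under half of total capacity,
# a fact a human-only FTE headcount cannot represent.
```

A conventional headcount would report 40 FTEs; the portfolio view reports 85 capacity units with humans at roughly 47 per cent of the whole.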
From jobs to tasks to capabilities – and the limits of each
Labour economics turned to task analysis when confronted with automation, accepting that technologies replace specific tasks rather than entire professions (Autor, Levy, & Murnane, 2003). Task-based models improved realism by decomposing labour input into bundles with varying degrees of susceptibility to automation.
From an economic perspective, however, task-based approaches still treat AI as an exogenous shock to labour demand rather than an endogenous contributor to the production system. Tasks may be redistributed, but the production process remains labour-centric. AI is judged in terms of exposure, similarity, or displacement risk, rather than as a productive input in its own right.
Capability-based perspectives go further, recognising AI's capacity to perform more advanced cognitive processes such as prediction and optimisation (Huang & Rust, 2021), and acknowledging that managerial work can be partially delegated or automated (van Quaquebeke & Gerpott, 2023). Even here, though, AI is presented as an adjunct to human labour rather than a contributor requiring representation in workforce accounting. The human FTE continues to serve as the implicit unit of value (Susskind, 2023).
Collaboration, governance and the persistent valuation void
Managerial discourse often positions AI adoption as a trade-off between automation and augmentation (Daugherty & Wilson, 2018; Raisch & Krakowski, 2021). Descriptively useful as such framing may be, it is analytically weak from an economic standpoint because it does not identify what unit of input is being augmented or automated. To address the informational gaps and ethical risks posed by algorithms, a substantial governance literature has emerged, focused on trust, explainability, accountability, and oversight (Zhang et al., 2020; Weibel et al., 2025).
But governance regimes control behaviour; they do not solve the valuation problem. Humans are managed and priced as labour; AI systems are regulated as risk-bearing assets but are not folded into workforce or productivity indicators.
Within the National Health Service (NHS) in the United Kingdom, predictive analytics and digital health innovations have been widely deployed, but institutional fragmentation, governance barriers, and workflow misalignment have limited their adoption and impact, illustrating how capacity can remain under-utilised despite availability (Asthana, Jones, Sheaff, 2019).
Economically, the result is a valuation gap: productive contribution is regulated but not represented. In the absence of a unit capturing non-human cognitive contribution, organisations cannot attribute output, costs or responsibility coherently within hybrid systems.
Value leakage as a measurement failure
Mis-specified workforce models produce predictable economic and behavioural effects. AI-driven increases in organisational productivity appear as unexplained residuals or are misattributed to human contributions, skewing analyses of performance, return on investment, and capacity utilisation.
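The misattribution mechanism is itself arithmetic. All figures below are hypothetical, chosen only to show how a human-only denominator inflates apparent productivity:

```python
# When accounting recognises only human FTEs, output produced jointly with
# AI systems is credited entirely to humans, hiding the AI contribution.
output_units = 1000.0
human_fte = 8.0
ai_capacity_units = 4.0  # labour-like AI contribution, invisible to the ledger

naive_output_per_human_fte = output_units / human_fte  # 125.0, all credited to humans
true_output_per_capacity_unit = output_units / (human_fte + ai_capacity_units)
```

The gap between 125 units per human FTE and roughly 83 units per capacity unit is exactly the unexplained residual described above: performance analysis built on the first figure overstates human productivity and renders the AI contribution unpriceable.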
At the behavioural level, employees react logically to this ambiguity. When automation is presented as replacement rather than re-composition of productive contribution, workers display resistance, disengagement, distrust and knowledge-hiding under technological turbulence (Peng et al., 2023). These responses reflect uncertainty over how value is generated and rewarded within hybrid systems.
Generative AI intensifies the problem by collapsing task boundaries. When non-human systems create text, code, designs, analyses, and strategic options, the line between worker and tool blurs at the level of outcomes even though moral agency remains human (Susskind, 2023). Because organisational measurement systems remain anchored in human FTE-based labour accounting, the result is systematic value leakage.
Synthesis: Beyond the human FTE toward workforce portfolios
This article has argued that the human FTE is no longer an adequate economic or organisational unit for capturing productive contribution. The abstraction endures not because it reflects production accurately but because organisational accounting systems lack alternative units capable of representing heterogeneous contributors without attributing human characteristics and behaviours to technology.
The appropriate response is not to treat AI as labour but to re-conceptualise the workforce as a portfolio of deployed productive contributors. This portfolio lens retains human accountability and judgement while making visible how productive capacity is distributed across human and non-human systems.
In the wider context of the trilogy, Article 1 framed the correct analytic lens: total productive capacity. This article shows why current workforce constructs prevent organisations from operationalising that lens. Article 3, in consequence, presents the Pignatelli Framework (PF) as a structural and managerial architecture describing hybrid productive capacity in a form that is measurable, governable and decision-relevant.
Without such an architecture, organisations will continue to enjoy AI-driven output gains without the capacity to explain, attribute or manage them. The result is not so much technological failure as persistent economic mismeasurement.
THE PIGNATELLI FRAMEWORK (PF): OPERATIONALISING TOTAL PRODUCTIVE CAPACITY
Introduction: Completing the capacity-centric pivot
Article 1 redirected the Future of Work from narratives of labour substitution toward TPC as the unit of analysis appropriate to infrastructural technological change. Productivity was redefined as the realised component of capacity, explaining observed lags, dispersion, and paradoxes in AI-intensive contexts. Article 2 then examined the persistence of the human full-time equivalent (FTE) as an organisational artefact whose empirical robustness is waning, and showed how labour-centric measurement systems obscure non-human contributors performing continuous cognitive and coordination work.
This article closes the trilogy by addressing the remaining void: the organisational operationalisation of TPC in a meaningful, measurable, governable, and decision-relevant manner. It does so by extending classical production theory through a capacity-based formulation and by introducing the Pignatelli Framework (PF) as an organisational architecture for managing hybrid productive capacity. The article is explicitly forward-looking and proposal-oriented, detailing a research programme to validate the framework empirically with industry partners.
Limits of labour-centric production functions under AI
Classical growth theory models output as a function of labour, capital, and an exogenous technology residual. The Cobb–Douglas production function formalises this:

Y = A · L^α · K^β

Where:
Y = Total output
A = Total factor productivity (the residual: the part of output growth that cannot be explained by increases in capital or labour alone, commonly read as technology)
L = Human labour input
K = Capital
α, β = Output elasticities (the percentage change in output resulting from a 1% change in the corresponding input)
In this function, labour is assumed to be homogeneous, time-bound, and the primary scalable input. Solow’s residual captures technological change but offers no organisational mechanism for how new productive potential is assembled, coordinated, or activated.
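A minimal numeric sketch (with purely illustrative input values, not calibrated to any dataset) shows how the elasticity parameters in this function behave:

```python
# Cobb-Douglas production function: Y = A * L^alpha * K^beta
def cobb_douglas(A, L, K, alpha, beta):
    """Total output given TFP (A), labour (L), capital (K),
    and the output elasticities alpha and beta."""
    return A * (L ** alpha) * (K ** beta)

# Illustrative (hypothetical) inputs
Y = cobb_douglas(A=1.0, L=100, K=50, alpha=0.7, beta=0.3)

# The elasticity interpretation: a 1% increase in labour raises
# output by roughly alpha percent, holding A and K fixed.
Y_more_labour = cobb_douglas(A=1.0, L=101, K=50, alpha=0.7, beta=0.3)
print(round(Y_more_labour / Y - 1, 4))  # ~0.007, i.e. about 0.7%
```

Note that in this specification any AI-driven gain not booked as L or K can only surface through the residual A, which is precisely the limitation the article goes on to address.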
As established in Articles 1 and 2, these assumptions break down under AI. Intelligent systems perform prediction, optimisation, classification, and continuous monitoring at scale, often independently of human working time. Their contribution is not adequately represented as labour augmentation or as capital deepening in the traditional sense. As a result, AI-driven capacity expansion appears statistically delayed, fragmented, or misattributed, giving rise to the productivity paradox and persistent confusion at the firm level.
From labour to total productive capacity
In order to adapt to these realities, this paper proposes replacing labour (L) with Total Productive Capacity (TPC) as the relevant organisational productive input. In its simplest form:

Y = A · TPC^α · K^β
TPC is defined as the output potential of a particular socio-technical production system, including human judgement, supervision, and coordination; digital cognition and automation; organisational design; infrastructure, data, and institutional context. This formulation preserves the core analytical intuition of classical growth theory—namely, that output depends on scalable productive inputs—while relaxing the restrictive assumption that productive capacity must be human, time-bound, or labour-augmenting.
Crucially, TPC is conceptually distinct from productivity. Productive capacity can expand in advance of its utilisation, monetisation, or statistical visibility. This distinction provides a coherent explanation for empirically observed patterns in AI-intensive settings, including delayed productivity gains, stable employment alongside rising investment, and increasing dispersion in firm-level performance. Technological change is thus treated as endogenous to capacity formation rather than as an external efficiency residual.
Why total productive capacity replaces total factor productivity
In classical growth theory, technological change enters the production function as Total Factor Productivity (TFP), represented by the residual term A. Although analytically helpful at the macroeconomic level, TFP is a statistical remainder: it captures increases in output that cannot be accounted for by measured labour or capital inputs, but whose organisational origins, composition, or governance remain unclear. TFP is not a direct measure of technology but a residual that absorbs all improvements in output not attributable to measured inputs (Hulten, 2001).
The Pignatelli Framework (PF) departs from this logic by replacing the residual with a full account of productive capacity. Rather than reducing technological, organisational, and coordination effects to a scalar efficiency term, PF decomposes Total Productive Capacity into a series of capacity-equivalent units: human (hFTE), machine (mFTE), algorithmic (aFTE), and digital system (dFTE). Each unit is therefore a separate source of productive potential. Where TFP can only be observed ex post, once output has materialised, productive capacity under PF is visible ex ante, before it is fully utilised or monetised. Technological change is therefore treated as endogenous to organisational capacity formation rather than as an unexplained efficiency residual.
Decomposing TPC into capacity-equivalent units
Building directly on Article 2, TPC is operationalised as a portfolio of heterogeneous capacity-equivalent units, rather than a single labour aggregate:

TPC = Σᵢ Cᵢ

Where:
Cᵢ = an individual capacity-equivalent unit (C = Capacity)
For tractability, this article proposes four primary capacity (C) equivalents as defined in the Pignatelli Framework:
𝑻𝑷𝑪=𝒉𝑭𝑻𝑬+𝒎𝑭𝑻𝑬+𝒂𝑭𝑻𝑬+𝒅𝑭𝑻𝑬
These are not anthropomorphised forms of labour, but measurement units representing distinct sources of productive capacity.
Human FTE (hFTE): Human judgement, accountability, ethical responsibility, and exception handling. Examples: engineers, pilots, clinicians, managers.
Machine FTE (mFTE): Physical automation capacity performing repetitive or precision tasks. Examples: robotics, CNC machinery, automated assembly systems.
Algorithmic FTE (aFTE): Continuous cognitive capacity embedded in algorithms performing prediction, optimisation, classification, or generation. Examples: demand forecasting models, route optimisation systems, generative AI models.
Digital System FTE (dFTE): Orchestration and integration capacity coordinating humans, machines, and algorithms across workflows. Examples: platforms, digital twins, workflow engines, learning management systems.
Fig. 1: The Pignatelli Framework for Total Productive Capacity

This decomposition makes visible productive contributions that are currently governed but not represented in workforce, costing, or productivity systems, directly addressing the valuation gap identified in Article 2.
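The four-unit decomposition can be sketched as a simple portfolio structure. The `CapacityUnit` class and all quantities below are illustrative assumptions, not part of the framework's formal specification:

```python
from dataclasses import dataclass

@dataclass
class CapacityUnit:
    """One capacity-equivalent unit in the TPC portfolio."""
    kind: str      # 'hFTE', 'mFTE', 'aFTE' or 'dFTE'
    amount: float  # capacity in FTE-equivalent units (illustrative)

def total_productive_capacity(portfolio):
    """TPC = hFTE + mFTE + aFTE + dFTE (sum over all units)."""
    return sum(u.amount for u in portfolio)

# Hypothetical workforce portfolio
portfolio = [
    CapacityUnit('hFTE', 40.0),  # human judgement, exception handling
    CapacityUnit('mFTE', 15.0),  # physical automation
    CapacityUnit('aFTE', 25.0),  # algorithmic prediction/optimisation
    CapacityUnit('dFTE', 10.0),  # orchestration platforms
]
print(total_productive_capacity(portfolio))  # 90.0
```

The point of the structure is simply that non-human units sit alongside human ones in the same accounting object, rather than vanishing into a residual.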
Capacity, utilisation, and the productivity paradox
With this formulation, productivity (P) can be expressed as:

P = Y / TPC
Under AI adoption, TPC often expands rapidly through aFTE and dFTE growth, while output Y increases more slowly due to complementary investments, learning effects, and organisational redesign. The result is stable employment, delayed productivity gains, and increasing dispersion across firms—outcomes widely observed in the empirical literature but poorly explained by labour-centric models.
This distinction formalises the intuition that productivity is not a measure of technological potential but of activated capacity, and that misinterpreting capacity growth as inefficiency leads to systematic analytical and managerial error.
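A stylised sketch (all figures hypothetical) makes this concrete: when TPC grows faster than output Y, measured productivity dips even though no capacity has been lost:

```python
def productivity(output, tpc):
    """Productivity as the realised share of capacity: P = Y / TPC."""
    return output / tpc

# Stylised adoption path: aFTE/dFTE capacity arrives before
# complementary redesign lets output catch up (a J-curve pattern).
years = [2023, 2024, 2025]
tpc_path = [100.0, 140.0, 150.0]     # capacity expands quickly
output_path = [100.0, 110.0, 155.0]  # output lags, then catches up

for year, y, c in zip(years, output_path, tpc_path):
    print(year, round(productivity(y, c), 3))
# Measured productivity dips (1.0 -> ~0.786) before recovering
# (~1.033), even though underlying capacity never fell.
```

Read through a labour-centric lens, the middle year looks like inefficiency; read through a capacity lens, it is under-utilised capacity awaiting activation.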
A transitional framework for moving toward TPC
Recognising that organisations may not adopt capacity-centric thinking instantaneously, this article proposes a four-stage TPC transition framework.

The Pignatelli Framework for Total Productive Capacity
The Pignatelli Framework (PF) for Total Productive Capacity is proposed as the organisational architecture enabling Stage 3 and Stage 4 transitions. PF:
Identifies capacity units,
Assigns cost and governance structures,
Tracks utilisation separately from creation,
Prevents value leakage caused by misattribution.
Rather than evaluating isolated AI projects, PF evaluates configured capacity stacks, aligning investment decisions with the true locus of value creation.
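As an illustration of what such an architecture might record, the sketch below (field names and figures are assumptions, not the framework's specification) tracks utilisation and cost separately from capacity creation:

```python
# Illustrative capacity ledger: each record tracks a unit's cost and
# its utilisation separately from the capacity it creates.
ledger = [
    # kind,  capacity created, capacity utilised, annual cost
    ("hFTE", 40.0,             36.0,              2_400_000),
    ("aFTE", 25.0,             10.0,                300_000),
    ("dFTE", 10.0,              8.0,                150_000),
]

for kind, created, utilised, cost in ledger:
    utilisation = utilised / created   # share of capacity activated
    cost_per_unit = cost / created     # cost attributed to capacity, not headcount
    print(kind, f"{utilisation:.0%}", round(cost_per_unit))
# Low aFTE utilisation (40%) flags capacity created but not yet
# activated: a gap labour-centric accounting would miss entirely.
```

Separating creation from utilisation is what lets misattribution, and hence value leakage, become visible as a line item rather than a residual.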
Empirical research programme
This article proposes a multi-case empirical programme to validate the TPC method and PF.
Proposed primary research partners
Robinson Helicopter: AI-augmented manufacturing, certification, and safety systems.
Kallidus: digital learning systems as dFTE capacity and skill reconfiguration mechanisms.
Enercon: predictive maintenance and optimisation in wind energy infrastructure.
Secondary literature-based cases
Amazon: algorithmic logistics, demand forecasting, and platform-scale orchestration, illustrating how algorithmic and digital system capacity (aFTEs and dFTEs) enable continuous coordination across fulfilment, inventory, and delivery networks.
Waymo: vertically integrated autonomous driving systems, demonstrating large-scale deployment of non-human cognitive capacity through perception, prediction, and decision-making models trained on extensive real-world data. The case highlights how productive capacity can expand independently of human labour while remaining contingent on orchestration, governance, and regulatory context.
These cases enable comparative analysis of capacity creation, orchestration, utilisation, and value capture across sectors.
Conclusion
This article argues that the primary problem of AI-enabled production is not technological capacity but measurement architecture. By extending classical production theory to Total Productive Capacity and decomposing labour into its capacity-equivalent units, the resulting production model and the Pignatelli Framework provide a coherent, testable alternative to labour-centric models.
Together with Articles 1 and 2, this work contributes to a capacity-centric economics of AI, providing organisations with avenues to transform technological promise into enduring value creation rather than unexplained residuals.
REFERENCES
Acemoglu, D. and Restrepo, P. (2018) ‘Artificial Intelligence, Automation and Work’. In: Agrawal, A., Gans, J. and Goldfarb, A. (eds.) The Economics of Artificial Intelligence: An Agenda. Chicago: University of Chicago Press.
Acemoglu, D. and Restrepo, P. (2020) ‘Robots and Jobs: Evidence from US Labor Markets’ (and related AI/automation papers). Journal/working paper series (various).
Aghion, P. et al. (2019) AI, innovation and scale effects (AI as intangible/non-rival capital). Working paper / journal article (various).
Arias-Pérez, J, Vélez-Jaramillo, J (2021) Understanding knowledge hiding under technological turbulence caused by artificial intelligence and robotics
Armstrong, M. and Taylor, S. (2023) Armstrong’s Handbook of Human Resource Management Practice (16th ed.). London: Kogan Page.
Asthana, S., Jones, R. and Sheaff, R. (2019) ‘Why does the NHS struggle to adopt eHealth innovations? A review of macro, meso and micro factors’. BMC Health Services Research. Available at: PubMed record and full text via Plymouth/PEARL.
Autor, D.H. (2015) ‘Why Are There Still So Many Jobs? The History and Future of Workplace Automation’. Journal of Economic Perspectives.
Autor, D.H. (2019) The Work of the Past and Work of the Future
Autor, D.H., Levy, F. and Murnane, R.J. (2003) ‘The Skill Content of Recent Technological Change: An Empirical Exploration’. Quarterly Journal of Economics.
Autor, D. et al. (2020) Market power, superstar firms, and distributional effects of technology. Working paper / journal article (various).
Barley, S.R. (2020) Work and Technological Change. Oxford: Oxford University Press (and/or related essays on infrastructural change).
Beier, G, Niehoff, S , Ziems, T, and Xue, B (2017) Sustainability Aspects of a Digitalized Industry
Brynjolfsson, E. and Hitt, L.M. (2000) ‘Beyond Computation: Information Technology, Organizational Transformation and Business Performance’. Journal of Economic Perspectives / related IT productivity literature.
Brynjolfsson, E. and McAfee, A. (2014) The Second Machine Age. New York: W.W. Norton.
Brynjolfsson, E., Rock, D. and Syverson, C. (2019) ‘Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics’.
Brynjolfsson, E, Rock, D, Syverson, C (2020) The Productivity J-Curve: How Intangibles Complement General Purpose Technologies
Goldfarb, A. (eds.) The Economics of Artificial Intelligence: An Agenda. Chicago: University of Chicago Press.
Brynjolfsson, E. et al. (2019) Intangible capital, mismeasurement and productivity (selected papers). Journal/working paper series (various).
ChatGPT– format and paragraph editing
Choudhury, P, Starr, E, Agarwal, R (2020) Machine learning and human capital complementarities Experimental evidence on bias mitigation.pdf.
Coase, R.H. (1937) ‘The Nature of the Firm’. Economica, 4(16), pp. 386–405.
Cobb, C.W. and Douglas, P.H. (1928) ‘A Theory of Production’. American Economic Review, 18(Supplement), pp. 139–165.
Coombs, C. (2020) COVID-19 as accelerator of digital transformation and experimentation. Journal article (various).
Dabic, M, Maley, J. F, Jadranka, S, Pocek, J (2023) Future of digital work: Challenges for sustainable human resource
Daugherty, P.R. and Wilson, H.J. (2018) Human + Machine. Boston: Harvard Business Review Press.
Davenport, T.H. and Ronanki, R. (2018) ‘Artificial Intelligence for the Real World’. Harvard Business Review.
David, P.A. (1990) ‘The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox’. American Economic Review, 80(2), pp. 355–361.
Dixon, J , Hong, B, Wu, L (2025) The robot revolution managerial and employment consequences for firms
Dranove, D, Garthwaite, C (2022) Artificial Intelligence, The Evolution of the Healthcare Value Chain and the Future of the Physician
Drury, C. (2020) Management and Cost Accounting (11th ed.). Boston: Cengage Learning.
Editorial, International Journal of Information Management (2024) The impending disruption of creative industries by generative AI Opportunities, challenges, and research agenda
Edwards, H, Edwards, D (2022) There are 170,000 fewer retail jobs in 2017—and 75,000 more Amazon robots
Enercon GmbH (2024) Sustainability and Innovation Report 2024: Intelligent Turbine Monitoring and Digital Wind Farm Management. Aurich, Germany: Enercon GmbH.
Ernst, E. (2017) AI, productivity and firm dispersion (selected works). ILO / journal / working paper (various).
Ernst, E. (2019) Digital transformations of global value chains, capabilities and inequality. ILO working paper / report (and related publications).
Ernst, E. and Merola, R. (2021) Digitalisation and global value chains (capability distribution and inequality). Report / working paper.
Evans, A.J, (2025) Methodological implications of using machine learning to estimate the impact of AI on the workforce
Exotec (2024) Warehouse robotics / fulfilment throughput case materials. Industry report / vendor documentation.
FAA (2023) Rotorcraft Accident and Safety Data Summary: R22, R44, R66 Series. Washington, DC: Federal Aviation Administration.
Frey, C.B. and Osborne, M.A. (2017) ‘The Future of Employment: How Susceptible Are Jobs to Computerisation?’. Technological Forecasting and Social Change, 114, pp. 254–280.
Fügener, A. et al. (2022) Miscalibration, delegation and performance in human–AI collaboration. Management Science / related journal article.
Furr, N, Shipilov, A (2025) Beware the AI Experimentation Trap
Gandel, S (2021) Zillow, facing big losses, quits flipping houses and will lay off a quarter of its staff. - The New York Times
Goller, D., Gschwendt, C. and Wolter, S.C. (2025) ‘This time it's different – Generative artificial intelligence and occupational choice’.
Gordon, R.J. (2016) The Rise and Fall of American Growth. Princeton: Princeton University Press.
Grahl, J, Gupta, A, Ketter, W (2021) Will humans-in-the-loop become borgs
Grossman, M (2022) IBM Sells Watson Health Assets to Investment Firm
Gruetzemacher, R, Paradice, D, Bok, L.K. (2020) Forecasting extreme labor displacement A survey of AI practitioners
Hanna, A, Nye, C.D, Samo, A, Chu, C, Hoff K.A, Rounds, J, Oswald, F.L, (2024) Interests of the future: An integrative review and research agenda
House of Commons UK (2023) NAO - Digital Transformation of the NHS
Huang, M.-H. and Rust, R.T. (2020) Engaged to a Robot? The Role of AI in Service
Huang, M.-H. and Rust, R.T. (2021) ‘Artificial Intelligence in Service’. Journal of Service Research.
Hulten, C.R. (2001) ‘Total Factor Productivity: A Short Biography’. In: Hulten, C.R., Dean, E.R. and Harper, M.J. (eds.) New Developments in Productivity Analysis. Chicago: University of Chicago Press.
IMF (2025) The Global Impact of AI – Mind the Gap. Washington, DC: International Monetary Fund.
Jarrahi, M.H. (2018) ‘Artificial intelligence and the future of work: Human–AI symbiosis in organisational decision making’. Business Horizons / related venue.
Kallidus (2024) Learning, Talent and Experience Platform Overview. Cirencester, UK: Kallidus.
Kaplan, R.S. (1984) ‘The Rise and Fall of Management Accounting’. Accounting Review / related publication.
Karakilic, E (2022) Why Do Humans Remain Central to the Knowledge Work in the Age of Robots
Keding, C (2020) Understanding the interplay of artificial intelligence and strategic management four decades of research in review.
Kelan, E.K. (2023) Technology, gender and organisational narratives (selected works). Journal/book (various).
Kolade, O, Owoseni, A (2022) Employment 5.0: The work of the future and the future of work
Kong, D. (2023) ‘Employee–AI collaboration scale’. Journal of Applied Psychology, 108(3), pp. 415–430.
Langer, M, König, C.J, Back, C, Hemsing, V (2022) Trust in Artificial Intelligence Comparing Trust Processes Between Human and Automated Trustees in Light of Unfair Bias
Le, K.B.Q., Sajtos, L., Kunz, W.H. and Fernandez, K.V. (2025) ‘The Future of Work: Understanding the Effectiveness of Collaboration Between Human and Digital Employees in Service’.
Lundstrom, T., Baqersad, J. and Niezrecki, C. (2022) ‘Using high-speed stereophotogrammetry to collect operating data on a Robinson R44 helicopter’. Aircraft Engineering and Aerospace Technology, 94(8), pp. 1017–1032.
Mayer, H., Yee, L., Chui, M. and Roberts, R. (2025) ‘Superagency in the workplace: empowering people to unlock AI's full potential’.
McElheran, K., Yang, M.-J., Kroff, Z. and Brynjolfsson, E. (2025) The Rise of Industrial AI in America: Microfoundations of the Productivity J-curve(s). U.S. Census CES Working Paper 25-27.
McGrath, R.G. (2025) ‘Say Goodbye to the Billable Hour, Thanks to AI’. The Wall Street Journal, 4 December.
McKinsey (2025) What is Productivity
Minor, L.B, (2025) AI Alone Won’t Transform US Healthcare
Mintzberg, H. (1979) The Structuring of Organizations. Englewood Cliffs, NJ: Prentice-Hall.
Moldoveanu, M (2019) Why AI Underperforms and What Companies Can Do About It
Nazareno, L. and Schiff, D.S. (2021) ‘The impact of automation and AI on worker well-being’.
Nilsson, K.H. et al. (2025) Algorithmic management and occupational health. Safety Science.
NHS England (2024) Planning and implementing real-world artificial intelligence (AI) evaluations_ lessons from the AI in Health and Care Award
Nonaka, I. and Takeuchi, H. (1995) The Knowledge-Creating Company. New York: Oxford University Press.
OECD (2015) Oslo Manual: Guidelines for Collecting and Interpreting Innovation Data. Paris: OECD Publishing.
Ore, O, Sposato,M (2021) Opportunities and risks of artificial intelligence in recruitment and selection
Constantinides, P., Monteiro, E. and Mathiassen, L. (2024) ‘Human-AI joint task performance: Learning from uncertainty in autonomous driving systems’.
Peng, H. et al. (2023) Knowledge hiding and resistance under technological turbulence. Journal article (various).
Pfeiffer, S. (2017) Technology narratives, power and work (selected works). Journal/book (various).
Pfeiffer, S (2017) The Vision of Industrie 4.0 in the Making—a Case of Future Told, Tamed, and Traded
Pignatelli, J. (2025) The Pignatelli Costing Framework (PCF): Redefining Productivity and Costing in the Age of AI. Unpublished manuscript / working paper.
Piller, F.T. (2010) Forecasting Next Generation Manufacturing
PWC (2020) Why Most Organizations’ Investments in AI Fall Flat
van Quaquebeke, N. and Gerpott, F.H. (2023) ‘The Now, New, and Next of Digital Leadership: How Artificial Intelligence (AI) Will Take Over and Change Leadership as We Know It’.
Raj, M. and Seamans, R. (2019) ‘Primer on Artificial Intelligence and Robotics’. Journal of Organization Design, 8(1). Available at: SpringerLink.
Raisch, S. and Krakowski, S. (2021) ‘Artificial intelligence and management: The automation–augmentation paradox’. Academy of Management Review / related venue.
Reis, J, Melao, N, Salvadorinho, J, Soares, B, Rosete, A (2020) Service robots in the hospitality industry The case of Henn-na hotel, Japan
Robinson Helicopter Company (2024) Company Profile and Aircraft Specifications. Torrance, CA: Robinson Helicopter Company. Available at: Robinson official website.
Ross, C, Swetlitz, I (2017) IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close
Santana, M. and Cobo, M.J. (2020) ‘What is the future of work? A science mapping analysis’.
Schumpeter, J.A. (1942) Capitalism, Socialism and Democracy. New York: Harper & Brothers.
Shaw, M.J, Menon, U Knowledge-based manufacturing quality management A qualitative reasoning approach
Solow, R.M. (1957) ‘Technical Change and the Aggregate Production Function’. Review of Economics and Statistics, 39(3), pp. 312–320.
Solow, R.M. (1987) ‘You can see the computer age everywhere but in the productivity statistics’ (remark). Popular quotation; discussed in productivity-paradox literature.
Sun, P. et al. (2020) ‘Scalability in Perception for Autonomous Driving: Waymo Open Dataset’. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2446–2454.
Susskind, D. (2023) Growth: A Reckoning (and related work on work and technology). London: Allen Lane / Penguin.
Scarbrough, H., Chen, Y. and Patriotta, G. (2025) ‘The AI of the Beholder: Intra-Professional Sensemaking of an Epistemic Technology’.
Seenisamy, V (2025) What Walmart’s AI Architecture Reveals About the Future of Work
Syverson, C. (2017) ‘Challenges to Mismeasurement Explanations for the U.S. Productivity Slowdown’. Journal of Economic Perspectives, 31(2), pp. 165–186.
Taylor, F.W. (1911) The Principles of Scientific Management. New York: Harper & Brothers.
UNCTAD (2025) Technology and Innovation Report 2025. Geneva: UN Trade and Development (UNCTAD).
Vaccaro, M., Almaatouq, A. and Malone, T. (2024) When combinations of humans and AI are useful. Nature Human Behaviour (preprint / forthcoming).
Weibel, A., Schafheitle, S. and van der Werff, L. (2025) ‘Smart Tech is all Around us – Bridging Employee Vulnerability with Organizational Active Trust-Building’.
West, D.W. The future of employment
WTO (2025) AI risks widening global wealth gap (news on report). Geneva: World Trade Organization (news).
Varsha, P.S., (2023) How can we manage biases in artificial intelligence systems – A systematic literature review
Verganti, R., Vendraminelli, L. and Iansiti, M. (2020) ‘Innovation and Design in the Age of Artificial Intelligence’.
Viswanathan, V (2026) What OpenAI and Anthropic’s Healthcare Launches Actually Mean for the Industry
Zhang, Y., Liao, Q.V. and Bellamy, R.K.E. (2020) ‘Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making for an automated world of work’.
Zillow-Group (2021) Q3'21-Shareholder-Letter