Aslı Şimşek

Aslı is the founder of Higher Narrative and an interdisciplinary advisor working at the intersection of technology, culture, and human–AI futures. An industrial engineer with a master’s in sociology, she brings 15+ years of experience, including co-leading a digital media agency from the ground up and delivering over 100 projects across sectors such as energy, tech, finance, and hospitality. Her work combines strategic thinking with data and operational experience to support clarity and adaptive decision-making. She translates complex contexts into practical roadmaps, workflows, and frameworks. She works with organizations at inflection points, helping align strategic direction with the realities of technological and cultural change.
Abstract: Artificial intelligence (AI) remains at an early stage of integration into organizational life. Yet even in this formative period, it is already shaping imaginaries, influencing strategies, and amplifying emerging systemic dynamics. One of these is what scholars describe as technofeudalism (Durand, 2023), an evolving political-economic order in which digital platforms consolidate control, extract data as rent, and enclose value creation. AI both depends on and reinforces this order, centralizing access to data, infrastructure, and cultural narratives of efficiency. This article employs Causal Layered Analysis (CLA) (Inayatullah, 1998) to examine AI’s role across litany, systemic causes, worldview, and myths/metaphors. The analysis shows how AI narratives privilege efficiency and disembodied knowledge while obscuring embodied experience, ethical imagination, and collective resilience. To address these challenges, the paper brings CLA into dialogue with Shrii P. R. Sarkar’s concept of the sadvipra (Inayatullah, 2013) and the ethical vision of neohumanism (Sarkar, 2011), extending this lineage into the context of AI and technofeudalism. In doing so, it outlines a model of leadership that is layered, reflexive, and ethically engaged. The article argues that the future of organizations hinges not only on responding to technological disruption but also on interrogating the structural and cultural forces that frame AI within a broader technofeudal order, highlighting the dual risks of escalating dependence on platforms and the neglect of embodied human realities in organizational life.
Keywords: AI, Technofeudalism, Neohumanism, CLA, Organizational Futures.
Introduction: AI at the Threshold, Technofeudalism, and the Leadership Gap
Artificial intelligence is not yet fully embedded in organizational or social life. Its applications remain partial, experimental, and uneven across contexts. Yet its symbolic weight is outsized: AI dominates public discourse, generates waves of expectation and anxiety, and is increasingly positioned as a driver of future economic and cultural arrangements. At this threshold stage, AI’s influence is less about full-scale integration and more about the futures it prefigures.

One of the broader systemic shifts visible through AI is the rise of what has been described as technofeudalism (Durand, 2023; Varoufakis, 2023). This emerging order is not reducible to AI alone; rather, it reflects the consolidation of power by digital platforms that act as new landlords of the economy. By enclosing creativity, labor, and knowledge within proprietary infrastructures, they extract value as rent rather than exchange (Varoufakis, 2023). AI both draws on and amplifies this feudal logic: it relies on massive proprietary datasets and cloud infrastructures, and its value chains often return benefits to platforms rather than to users.
Causal Layered Analysis (CLA) offers a way to move beyond surface-level hype cycles or deterministic forecasts. By examining AI across the litany, systemic, worldview, and myth/metaphor layers, CLA makes visible how technofeudal structures intertwine with cultural imaginaries. At present, AI functions as much as a mythic construct as a technical system, alternately imagined as savior, threat, or mirror. These layered narratives shape organizational choices well before integration is complete.
“The question is what kind of leadership can navigate AI dynamics with wisdom and ethical clarity before they harden into irreversible structures.”
But diagnosis is not enough. The question is what kind of leadership can navigate these dynamics with wisdom and ethical clarity before they harden into irreversible structures. Here, Sarkar’s neohumanism (2011) and the archetype of the sadvipra provide resources for reimagining futures. The sadvipra is envisioned as a leader who can perceive across layers, resist capture by systemic or ideological interests, and re-narrate myths toward collective well-being (Inayatullah, 2013). Neohumanism emphasizes interconnectedness (between inner and outer life, human and ecological systems, material progress and cultural depth), offering an ethical horizon against which technofeudal tendencies can be both understood and countered.
This paper therefore explores the intersection of CLA, technofeudalism, and neohumanism as complementary frames for analyzing AI futures. It argues that organizational futures cannot be understood through disruption narratives alone. While disruption highlights turbulence, it obscures the deeper systemic dependencies, cultural imaginaries, and embodied realities that shape how AI enters organizational life. Situated within technofeudal dynamics, organizations face a dual risk: growing dependence on platform infrastructures and the neglect of embodied knowledge and human grounding. By drawing on Sarkar’s concepts of the sadvipra and neohumanism, the paper proposes a form of leadership that is layered, reflexive, and ethically engaged, capable of resisting inevitability narratives while navigating AI’s uncertainties with greater autonomy and grounding.
Methodology: CLA and Layered Leadership
This paper employs Causal Layered Analysis (CLA) as both method and orientation. Rather than reiterating the method’s structure, the paper uses CLA to explore AI at a threshold moment: where public discourse (litany), emerging systemic dynamics such as technofeudalism, epistemic orientations, and cultural myths are beginning to consolidate. The emphasis is less on forecasting adoption curves and more on interrogating the imaginaries and narratives that shape organizational and societal responses before integration is complete.
“The sadvipra is not tied to a single ideology or interest group but can balance competing pressures with ethical orientation and long-term vision. Applied to organizations, this becomes a form of layered literacy: engaging immediate disruptions, confronting structural dependencies, questioning cultural assumptions, and reshaping guiding narratives.”
CLA itself was influenced by Sarkar’s theory of the mind and the archetype of the sadvipra, which highlight the layered nature of change and the role of leadership in navigating it (Inayatullah, 1999). In this sense, CLA does not merely align with neohumanist thought; it is already situated within its intellectual lineage (Inayatullah, 2013). Bringing CLA into dialogue with neohumanism therefore makes visible the ethical and visionary dimensions of futures methods, demonstrating how they can diagnose structures of power while also inspiring imaginative reconstruction (Inayatullah, 1999).
Litany: AI as Disruption and Spectacle
At the litany level, artificial intelligence is framed largely as disruption and spectacle. Public discourse emphasizes predictions of mass job losses, exponential productivity gains, and existential risks posed by superintelligent systems. Headlines oscillate between utopian promises of efficiency and innovation, and dystopian fears of displacement and loss of control. This oscillation positions AI as both inevitable and beyond ordinary governance, reinforcing a sense of urgency without necessarily deepening understanding.
Much of this discourse is amplified through the narratives of major technology firms, which dominate the media space and strategically shape expectations. Press releases and keynote presentations present AI as a transformative solution, while simultaneously obscuring its partial and experimental integration into everyday organizational life. The gap between promise and practice contributes to a discourse environment saturated with hype, in which speculative futures often overshadow present realities.
Even at this surface layer, the outlines of a technofeudal order are discernible. The most prominent narratives around AI originate from a small set of platforms, which act as both developers and landlords of the infrastructure on which AI depends. Their discourse constructs AI as a natural progression of technological advancement, while positioning themselves as indispensable mediators of that future. As a result, the litany of disruption not only reflects societal hopes and anxieties but also legitimizes the concentration of power in platforms that frame the horizon of possibility.
Systemic Causes: Platform Consolidation and Technofeudal Dynamics
Beneath the surface narratives of disruption, AI is embedded within the structural dynamics of a political-economic order increasingly described as technofeudalism. This term captures a shift in which digital platforms function less as competitive firms in a capitalist market and more as landlords of infrastructural estates (Varoufakis, 2023). Rather than generating value through exchange, they increasingly extract value as rent, by enclosing data, mediating access to digital infrastructures, and controlling the ecosystems within which organizations and individuals operate (Durand, 2023).
Artificial intelligence both relies on and reinforces this feudal logic. Large-scale AI systems are dependent on massive proprietary datasets, advanced computational resources, and cloud infrastructures that are concentrated in the hands of a few corporations. Access to these infrastructures determines who can meaningfully participate in AI development and deployment, creating a growing asymmetry between platform owners and dependent users. Organizations that wish to experiment with AI often find themselves as tenants of these infrastructures, locked into proprietary ecosystems that limit autonomy.
“AI and its technofeudal systemics are legitimized by deeper cultural logics, the central one being the privileging of efficiency as a universal good.”
While many AI companies currently operate at a loss, burning capital to train and deploy frontier models, the structural logic is nonetheless feudal. By consolidating proprietary datasets, compute infrastructures, and API access, platforms position themselves as indispensable intermediaries. Creative labor, organizational processes, and cultural knowledge are increasingly absorbed into these systems, establishing conditions for future rent extraction. Even if profitability remains elusive in the short term, the dynamic intensifies inequalities by consolidating control and narrowing the channels through which value can flow, positioning AI less as a democratizing force than as a mechanism of enclosure.
The systemic framing of AI within technofeudalism also highlights how governance and regulation struggle to keep pace. Most policies address AI at the level of ethics, safety, or risk, but rarely confront the deeper political-economic dynamics of enclosure and rent extraction. This leaves the technofeudal order largely unchecked, allowing platforms to consolidate control over infrastructures and narratives. As a result, organizational futures are increasingly shaped by dependencies that limit agency, restrict alternatives, and entrench asymmetries of power.
Worldview: Efficiency, Dataism, and Disembodied Knowledge
At the worldview level, AI and its systemic embedding within technofeudal structures are legitimized by deeper cultural logics. Central among these is the privileging of efficiency as a universal good. Within this frame, technological progress is equated with optimization: faster decisions, leaner organizations, predictive precision. Efficiency is not treated as one value among many, but as the central criterion by which systems are judged. This worldview narrows the horizon of organizational futures by subordinating questions of justice, resilience, or meaning to the imperatives of speed and productivity.
Closely aligned with this is what has been described as dataism (Harari, 2016, p. 428), the belief that data flows constitute the ultimate truth of social and organizational life. Within this epistemology, knowledge is reduced to quantifiable patterns, and decision-making is increasingly trusted to algorithmic systems that can detect correlations beyond human perception. While dataism presents itself as objective, it obscures the embodied, contextual, and interpretive dimensions of knowledge. It also legitimizes the massive extraction and enclosure of data as both necessary and desirable.
This worldview is further shaped by a disembodied rationalism that separates cognition from experience and positions technological mediation as superior to human judgment, reflecting what Harnad (1990, p. 335) called the “symbol grounding problem.” AI is celebrated as a more rational actor, capable of overcoming human bias and error, even as it encodes and amplifies existing inequities. The resulting epistemic orientation privileges abstraction over embodiment, efficiency over care, and prediction over participation.
This privileging of efficiency reflects not only a systemic preference for quantification but also a cultural logic that detaches knowledge from presence. As Benjamin (1968, p. 3) argued, mechanical reproduction diminishes the “aura” of art by severing it from its unique context in time and space. In parallel, AI reproduces knowledge in ways that strip it of situated, embodied resonance, flattening learning into data points that circulate without that grounding. This erosion of the aura of knowledge sets the stage for further abstraction, where Merleau-Ponty’s (2012) reminder of embodied perception becomes crucial for restoring depth to organizational futures.
These cultural logics not only sustain technofeudal dynamics but also restrict the kinds of futures that can be imagined. If efficiency, dataism, and disembodied knowledge remain unquestioned, then the consolidation of power by platforms appears not as a political choice but as a natural trajectory. It is at this layer that alternative worldviews must be articulated, ones that foreground interconnectedness, ethical responsibility, and the plural dimensions of human flourishing.
Myths and Metaphors: AI as God, Monster, and Mirror
At the deepest layer, artificial intelligence is carried by myths and metaphors that extend far beyond its technical capabilities. These narratives invest AI with archetypal significance, shaping imagination in ways that both reveal and obscure the dynamics of technofeudalism.
“Moving the sadvipra from a spiritual ideal to a functional leadership model operationalizes it as a leader capable of dismantling technofeudal logic. By arming this ethical leader with Taoist tactics (humility against the ‘God’ myth, balance against the ‘Monster,’ and clarity against the ‘Mirror’), we move beyond abstract theory.”
One powerful myth positions AI as a godlike force, a framing analyzed in Harari’s discussion of Dataism (2016) and in Bostrom’s exploration of superintelligence (2014). It is imagined as omniscient, capable of answering any question, predicting any outcome, and guiding societies toward a higher order of efficiency and control. This recalls Baudrillard’s (1994) notion of the simulation, where signs take on a reality of their own and technological systems are endowed with transcendental authority. The god-metaphor legitimizes dependence on platforms that mediate access to this supposed divinity, naturalizing the concentration of infrastructural power.
A countervailing myth casts AI as a monster, echoing archetypal patterns identified by Jung (1969). Here AI becomes the shadow of human creativity: a hubristic construction that threatens to overwhelm its maker. Like Frankenstein or Prometheus unbound, the monster narrative dramatizes human fear of losing control. While critical in tone, this metaphor also magnifies AI’s symbolic power, positioning it as a singular threat and obscuring the more mundane realities of rent extraction and systemic dependency.
A third metaphor frames AI as a mirror, reflecting human biases, aspirations, and contradictions. This resonates with Merleau-Ponty’s (2012) emphasis on embodiment and perception: what the mirror reveals is not an objective truth but the cultural and bodily conditions of human experience inscribed into code. The mirror metaphor has critical potential, since it makes visible the encodings of inequality and exclusion. Yet it can also normalize technofeudal dynamics, suggesting that concentration of power is simply a reflection of society rather than a political choice.
Threaded through these myths is the narrative of inevitability, the idea that AI represents an unstoppable progression of technological destiny. This myth of inevitability forecloses alternative futures, positioning resistance as futile and reinforcing the authority of platforms as custodians of progress.
Taken together, these myths and metaphors construct AI as more than a tool: they render it sacred, monstrous, reflective, and inevitable. By doing so, they both dramatize and conceal the structural realities of technofeudalism, narrowing the space for critical engagement. Unpacking these cultural imaginaries, through the lenses of Baudrillard’s (1994) simulation, Jungian (1969) archetypes, and Merleau-Ponty’s (2012) phenomenology, is therefore essential to opening pathways toward futures that resist enclosure and affirm more interconnected and humane possibilities.
Implications for the Future of Organizations
The layered analysis of AI within the context of technofeudalism carries direct implications for how organizations orient themselves toward the future. If AI is narrated primarily through myths of salvation, doom, and inevitability, then organizational strategy cannot remain a neutral exercise in efficiency metrics or scenario planning. The challenge is not only to anticipate technological disruption, but to interrogate the narratives and structures that make certain futures appear natural while foreclosing alternatives.
First, this calls for a shift from hype to layered understanding. Public discourse on AI tends to remain at the litany level, focused on surface crises or promises. Futures practice must engage across layers, examining how systemic dependencies, cultural worldviews, and deep metaphors shape both technological development and its imagination. As noted in The Bodies of Emerging Futures (Mozzini-Alister & Inayatullah, 2025), such practice must also re-center the role of the body and somatic knowledge in grounding foresight, countering the disembodied abstractions that dominate AI narratives. Causal Layered Analysis provides a method for this, but institutional uptake requires treating meaning-making as central to strategic work.
Second, organizations need to move from neutrality to ethical positioning. Narratives of inevitability naturalize the consolidation of power by platforms. If leaders remain agnostic, they risk reinforcing these enclosures. Organizational foresight must therefore function as an ethical intervention: clarifying where political choice is disguised as technological destiny and illuminating alternative pathways that align with human and ecological well-being.
Third, the analysis highlights a transition from prediction to meaning reconstruction. AI’s dominant myths are less about accuracy than about shaping imagination. For organizations, resilience and innovation do not come from adjudicating between utopia and dystopia but from cultivating new metaphors, symbols, and worldviews that expand the horizon of possibility. This includes designing processes that re-embody knowledge (Merleau-Ponty), surface shadow archetypes (Jung), and challenge simulations that detach signs from lived experience (Baudrillard).
Finally, organizations must grapple with their own systemic entanglement. Most rely on the very platforms that structure technofeudal power, embedding themselves in infrastructures they cannot fully control. Developing futures literacy in this environment requires reflexivity: an awareness of how tools, data, and narratives shape the imaginaries within which strategic choices are made.
Taken together, these implications suggest that organizations of the future must themselves become multi-layered and reflexive. They must function not only as instruments of efficiency or anticipation but as agents of cultural and systemic stewardship: confronting the myths of inevitability, resisting enclosure, and reconstructing meaning. This sets the stage for envisioning forms of leadership, such as Sarkar’s sadvipra, that can hold these complexities and guide organizations toward more ethical, resilient, and humane futures.
Sadvipra and Neohumanism: Leadership Beyond the Layers
The layered analysis of AI highlights not only systemic dependencies but also the tendency toward abstraction, where decisions are driven by data and efficiency metrics while neglecting lived experience. For organizations, this imbalance creates risks: strategies may optimize processes but erode trust, resilience, and human capacity. Leadership that matters in this context must be able to move across layers of analysis and remain attentive to the embodied realities of organizational life.
Sarkar’s concept of the sadvipra is instructive here. The sadvipra is not tied to a single ideology or interest group but can balance competing pressures with ethical orientation and long-term vision (Sarkar, 2011). Applied to organizations, this becomes a form of layered literacy: engaging immediate disruptions, confronting structural dependencies, questioning cultural assumptions, and reshaping guiding narratives.
At the same time, insights from phenomenology and somatic intelligence remind us that futures are not only imagined but lived. Bodies register stress, overload, and disconnection before they appear in metrics. They also carry tacit knowledge (patterns of trust, rhythm, and resilience) that shape organizational adaptability. Ignoring these embodied dimensions risks reinforcing the very disembodiment that fuels technofeudal dependency.
Here, neohumanism provides a practical counterbalance. It reframes efficiency not as an end but as one value among others, alongside dignity, meaning, and ecological care. By reconnecting abstract strategy with embodied and relational realities, neohumanism helps organizations resist the pull of inevitability narratives and cultivate cultures that can sustain innovation without eroding their human foundations.
Taken together, sadvipra leadership and neohumanist grounding point toward a model of futures-oriented leadership that is both systemic and embodied: capable of diagnosing structural risks while staying rooted in the human and relational dynamics that determine whether organizations can adapt, collaborate, and thrive.
Conclusion: From Technofeudal Enclosure to Taoist Stewardship
Discussions on artificial intelligence often remain trapped between immediate technological disruptions and speculative long-term futures. However, the real challenge lies in the deep narratives and structures that shape how organizations relate to technology. The Causal Layered Analysis presented in this study reveals how AI is currently embedded within technofeudal dependencies, legitimized by myths of efficiency, and animated by a sense of inevitability.
Against this backdrop, a critical question emerges: If AI possesses the all-knowing power attributed to it by the “God” myth, is AI itself the sadvipra, the enlightened leader of the future?
This paper argues that the answer is no. AI operates on a logic of accumulation, processing past data to predict the future, which merely reproduces the static, rent-seeking order of technofeudalism. Sarkar’s sadvipra, conversely, represents a logic of transformation driven by ethical vision. Yet the question of how the sadvipra concretely achieves this transformation must be answered. To address this, this study brings together two distinct traditions that have historically stood apart (Taoism and Neohumanism), synthesizing them into a novel strategic framework.
In the synthesis proposed by this paper, the sadvipra utilizes Taoist wisdom as a practical method to transform the dominant AI myths:
Countering the “God” Myth with Wu Wei: Technofeudalism presents algorithmic outputs as a divine authority (“God”) requiring submission. In this model, the sadvipra responds not with submission, but with the Taoist tool of wu wei (non-forcing action). This is not passive acceptance, but a conscious humility before complexity. The leader positions AI as a map, not the master; using technology not to force human nature into algorithmic boxes, but to exercise stewardship that flows in harmony with societal well-being.
Countering the “Monster” Myth with Balance: The fear of the “Monster” (Frankenstein) arises when external technological power outpaces our internal ethical capacity. By integrating the Taoist principle of balance (Yin-Yang) into the sadvipra framework, this paper offers a path to tame the monster. The leader ensures that the speed of technological adoption (outer) never exceeds the organization’s ethical maturity (inner). Thus, technology ceases to be an out-of-control threat and becomes a balanced instrument serving human purpose.
Countering the “Mirror” Myth with the Clear Mind: AI acts as a mirror reflecting existing social inequalities. Leaders trapped in the rentier mindset accept this reflection as unchangeable reality. This study proposes that the sadvipra applies the Taoist metaphor of the “clear mind” to this problem: by polishing the mirror of their own perception through discipline, the leader can see beyond the data-driven “is” to the ethical “ought”. They do not merely accept the reflection; they exert the will to transform the reality being reflected.
Ultimately, this paper moves the sadvipra from a spiritual ideal to a functional leadership model capable of dismantling technofeudal logic. By arming this ethical leader with Taoist tactics (humility against the “God” myth, balance against the “Monster,” and clarity against the “Mirror”), the synthesis operationalizes the sadvipra and moves beyond abstract theory. This approach offers a concrete alternative in which technology is guided not by the rigid extraction of platforms, but by a flexible, compassionate stewardship committed to human and planetary well-being.
References
Baudrillard, J. (1994). Simulacra and simulation (S. F. Glaser, Trans.). University of Michigan Press. (Original work published 1981)
Benjamin, W. (1968). The work of art in the age of mechanical reproduction (H. Zohn, Trans.). In H. Arendt (Ed.), Illuminations (pp. 217–251). Schocken Books. (Original work published 1935)
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Durand, C. (2023). Technofeudalism: Critique of the digital economy (D. Broder, Trans.). Verso.
Harari, Y. N. (2016). Homo deus: A brief history of tomorrow. Harper.
Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1–3), 335–346. https://doi.org/10.1016/0167-2789(90)90087-6
Inayatullah, S. (1998). Causal layered analysis: Poststructuralism as method. Futures, 30(8), 815–829. https://doi.org/10.1016/S0016-3287(98)00086-X
Inayatullah, S. (1999). Situating Sarkar: Historical, comparative and poststructural inquiries. Gurukula Press / Proutist Universal.
Inayatullah, S. (2013). The Sarkar game in action. Journal of Futures Studies, 18(1), 1–14. https://jfsdigital.org/wp-content/uploads/2013/10/181-A01.pdf
Jung, C. G. (1969). The archetypes and the collective unconscious (R. F. C. Hull, Trans., 2nd ed.). Princeton University Press.
Merleau-Ponty, M. (2012). Phenomenology of perception (D. A. Landes, Trans.). Routledge. (Original work published 1945)
Mitchell, S. (Trans.). (2015). Tao Te Ching (Lao Tzu). Frances Lincoln Limited. (Original work published ca. 6th century BCE)
Mozzini-Alister, C., & Inayatullah, S. (2025). The bodies of emerging futures. Journal of Futures Studies, 29(3). https://jfsdigital.org/the-bodies-of-emerging-futures/
Sarkar, P. R. (2011). Neo-humanism: Principles and cardinal values, sentimentality to spirituality, human society [PDF]. Ananda Marga Publications.
Varoufakis, Y. (2023). Technofeudalism: What killed capitalism. Bodley Head.
