The Fragmentation of Systems Thinking
How Institutional Forces Dismantled Bertalanffy's Unified Vision
Ludwig von Bertalanffy’s General Systems Theory, conceived as a unified scientific framework revealing universal principles across all domains, fragmented into specialized silos within three decades of its founding. This transformation occurred not because the intellectual vision failed, but because powerful institutional, economic, and structural forces systematically rewarded narrow specialization over interdisciplinary unity. By the 1980s, the field had splintered into at least five major specializations—cybernetics, system dynamics, organizational theory, autopoiesis, and complexity science—each with separate societies, journals, and methodologies. Fragmentation accelerated as the movement confronted Cold War funding priorities, university departmental structures, tenure systems that penalized boundary-crossing work, and consulting markets that valued proprietary expertise. What began as an attempt to develop “generalized ears” for recognizing patterns across sciences devolved into what Kenneth Boulding warned against: “an assemblage of walled-in hermits, each mumbling to himself words in a private language that only he can understand.”
Bertalanffy’s ambitious synthesis aimed to unify all sciences
Between 1945 and 1968, Ludwig von Bertalanffy developed General Systems Theory as nothing less than a new mathesis universalis—a logico-mathematical discipline that would formulate principles valid for systems in general, regardless of their specific domain. His 1950 paper “An Outline of General System Theory” in the British Journal for the Philosophy of Science articulated the revolutionary proposition: “There exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relation or ‘forces’ between them.”
This was not mere interdisciplinary cooperation. Bertalanffy’s vision, rooted in his organismic biology work from the 1920s-1930s, sought to discover isomorphisms—structural similarities in how systems across radically different domains behave. He documented how the same exponential law described radioactive decay in physics, bacterial death rates in microbiology, and population decline in demography. The logistic equation appeared identically in autocatalytic chemical reactions, organic growth patterns, and the spread of technological innovations like railway systems. Allometric growth laws governing how organs scale with body size in biology followed the identical mathematical form as Pareto’s law of income distribution in economics.
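The isomorphisms Bertalanffy catalogued are concrete enough to state in code. The sketch below is illustrative only: the functions are the standard exponential and logistic laws named above, but every parameter value is an assumption chosen for demonstration, not fitted to any historical dataset.

```python
import math

def logistic(t, K, r, t0):
    """Logistic curve N(t) = K / (1 + exp(-r (t - t0))).
    Bertalanffy's point: this single form fits autocatalytic
    reactions, organic growth, and innovation diffusion alike;
    only the parameters (K, r, t0) change by domain."""
    return K / (1 + math.exp(-r * (t - t0)))

def exponential_decay(t, N0, k):
    """N(t) = N0 exp(-k t): radioactive decay, bacterial death
    rates, and population decline share this one law."""
    return N0 * math.exp(-k * t)

# Illustrative parameters, not fitted to real data:
bacterial_growth = [logistic(t, K=1e9, r=0.8, t0=12) for t in range(25)]
isotope_decay = [exponential_decay(t, N0=1000.0, k=0.05) for t in range(25)]
```

The isomorphism claim is precisely that only the parameter values differ between domains; the functional form, and hence the qualitative dynamics, is shared.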
These weren’t analogies or metaphors. Bertalanffy argued these patterns revealed fundamental truths about how reality is organized. He proposed a hierarchical view of nature—from physical and chemical systems through organisms to social systems—where each level maintained autonomy and specific laws while displaying structural uniformities with other levels. His philosophical stance, which he called “Perspectivism,” was designed to legitimize comparing phenomena across organizational levels without falling into naive reductionism or ungrounded holism.
When Bertalanffy joined with economist Kenneth Boulding, physiologist Ralph Gerard, and mathematician Anatol Rapoport to found the Society for General Systems Research in 1954 at Stanford’s Center for Advanced Study in the Behavioral Sciences, they articulated four objectives: investigate isomorphies across fields to facilitate useful transfers between them; encourage theoretical model development in fields lacking them; minimize duplication of theoretical effort; and promote unity of science through improved communication among specialists. The society’s 1956 General Systems yearbook became the vehicle for this unification project.
Yet even at this founding moment, compromise was necessary. The society’s manifesto defined GST merely as “any theoretical system of interest to more than one discipline”—far less ambitious than Bertalanffy’s vision of discovering laws for systems in general. Historian David Pouvreau notes this represented a fundamental retreat from unified understanding to interdisciplinary cooperation. The fragmentation was already beginning.
Parallel systems movements immediately diverged along methodological lines
The unified vision faced competition from its inception. Norbert Wiener’s 1948 Cybernetics: Or Control and Communication in the Animal and the Machine emerged from wartime research on antiaircraft fire control and established an alternative framework emphasizing control, communication, and feedback. While Wiener’s original conception was remarkably broad—spanning biological homeostasis, nervous systems, servo mechanisms, and social systems—his work arose from engineering contexts and attracted a different community than Bertalanffy’s biologically-rooted approach.
The Macy Conferences (1946-1953) brought mathematicians, neurologists, engineers, and social scientists together to explore cybernetics, creating an intellectual ferment that paralleled GST’s development. But from the start, cybernetics and GST represented distinct methodological orientations: cybernetics emphasized mathematical formalization, feedback control, and engineering applications, while GST emphasized conceptual frameworks, biological metaphors, and philosophical foundations. Gordon Pask later observed that cyberneticians and systems theorists argued endlessly over boundaries without resolution.
By 1956, the first major breakaway specialization emerged. Jay Forrester joined MIT’s Sloan School of Management after work on the SAGE air defense system and began developing what he called “Industrial Dynamics.” Forrester created a computer simulation-based approach using stock-and-flow diagrams and feedback loops to model corporate and organizational dynamics. His 1961 book Industrial Dynamics established system dynamics as a distinct field with operational methodology—far narrower in scope than GST but far more practical for managers seeking tools rather than philosophical frameworks.
Forrester’s approach succeeded precisely because it specialized. He developed specific software tools, clear modeling standards, and demonstrable results for business problems. By 1969-1971, he expanded applications to urban systems and global modeling, with his students producing The Limits to Growth in 1972. The System Dynamics Society, founded in 1983 with Forrester as first president, formalized this separation with its own journal (System Dynamics Review), annual conferences, and educational programs. System dynamics had become a thriving specialization entirely separate from general systems theory.
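The stock-and-flow idiom is simple enough to sketch. The toy model below is a hypothetical inventory loop, not one of Forrester's published models: one stock (inventory), an outflow (shipments), and a production inflow that closes a negative feedback loop by adjusting toward a goal, integrated with Euler steps as early system dynamics software did.

```python
def simulate_inventory(steps=120, dt=0.25, desired=100.0,
                       adj_time=4.0, shipments=10.0):
    """Minimal stock-and-flow model in the spirit of Industrial
    Dynamics: production is shipments plus a correction term that
    closes the inventory gap over adj_time (negative feedback)."""
    inventory = 50.0
    history = []
    for _ in range(steps):
        production = shipments + (desired - inventory) / adj_time  # feedback
        inventory += (production - shipments) * dt                 # Euler step
        history.append(inventory)
    return history
```

In this minimal form the stock approaches its goal exponentially; adding delays or second-order stocks to such loops is what produces the oscillatory behavior Forrester studied in corporate models.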
The 1970s witnessed explosive theoretical diversification
Between 1970 and 1984, the systems movement fragmented further as new theoretical frameworks emerged, each carving out specialized domains. Chilean biologists Humberto Maturana and Francisco Varela developed autopoiesis theory around 1970, publishing De Máquinas y Seres Vivos in 1972. Their theory described living systems as self-producing networks of processes—fundamentally different from both cybernetics’ emphasis on control and GST’s search for isomorphisms. When the English edition Autopoiesis and Cognition appeared in 1980, it was quickly adopted by German sociologist Niklas Luhmann for social systems theory, extending well beyond its biological origins into sociology and law.
Cybernetics itself underwent internal differentiation. At a 1974 American Society for Cybernetics meeting in Philadelphia, Heinz von Foerster introduced “second-order cybernetics”—cybernetics of cybernetics, or cybernetics of observing systems. This reflexive, constructivist epistemology argued that observers cannot be separated from their observations. Von Foerster’s innovation represented an evolution away from first-order cybernetics’ objective stance toward radical constructivism, further fragmenting the field.
Meanwhile, organizational systems approaches proliferated. Russell Ackoff moved from operations research toward systems thinking, arguing in the 1970s that society had transitioned from a “Machine Age” to a “Systems Age” requiring new conceptual tools. He introduced “interactive planning” in 1974 for addressing organizational “messes”—interconnected problem complexes. Stafford Beer developed the Viable System Model in the 1950s-1960s, publishing Brain of the Firm in 1972 to establish management cybernetics as a distinct specialization. His most dramatic application came in 1971-1973 with Project Cybersyn in Chile under Salvador Allende, applying cybernetic principles to national economic coordination.
Peter Checkland at Lancaster University grew frustrated with “hard systems” engineering approaches and developed Soft Systems Methodology (SSM) for ill-defined organizational problems. His 1972 paper “Towards a Systems-Based Methodology for Real-World Problem Solving” and 1981 book Systems Thinking, Systems Practice established SSM as an alternative paradigm. Checkland explicitly distinguished his approach from systems engineering, representing a philosophical split about whether systems could be “engineered” or only “learned about.”
By the mid-1970s, even the American Society for Cybernetics collapsed into crisis. No ASC conferences occurred between 1974 and 1980 following disputes over journal control. The field fractured between engineering and social science orientations, with many cyberneticians attending Society for General Systems Research meetings instead. When ASC reorganized in 1980, it emerged as a smaller, more specialized organization focused on second-order cybernetics and constructivist epistemology. The broader cybernetics movement of the Macy Conference era had fragmented beyond recovery.
Academic structures systematically penalized interdisciplinary work
These intellectual divergences occurred within institutional contexts that powerfully reinforced specialization. The fundamental organizational principle of modern universities—disciplinary departments controlling faculty hiring, curriculum, resources, and degree programs—created structural barriers to systems thinking that proved decisive.
University departments emerged in the 18th-19th centuries as necessary responses to exploding scientific knowledge. But as SEBoK’s history notes, “The creation of educational structures to pass on this knowledge to the next generation of specialists perpetuated the fragmentation of knowledge.” Each department developed distinct methodologies, terminologies, evaluation standards, and professional identities. Systems research requiring movement across biology, engineering, social sciences, and mathematics confronted multiple incompatible intellectual cultures.
Resource competition intensified these boundaries. Interdisciplinary initiatives competed with established departments for scarce resources, and during budgetary contractions, departments naturally served their primary constituencies—students majoring in traditional disciplines—making resources scarce for teaching and research far from disciplinary centers. Faculty conducting interdisciplinary work struggled to establish professional identity, facing challenges when seeking promotion and tenure from evaluators lacking commitment to interdisciplinarity.
The tenure and promotion system created asymmetric career risks that rationally steered early-career scholars toward specialization. Multiple studies documented these pressures. The “up or out” system, which allows only 4-10 years before a tenure decision, discouraged participation in long-term interdisciplinary projects. Quality and quantity of publications in recognized journals remained the primary tenure criteria, but interdisciplinary journals were newer, less prestigious, and lower in impact factor than established disciplinary journals. One Computing Research Association study found “the single greatest difficulty is that faculty tend to judge other faculty according to the norms and criteria of their own discipline, and often departments tend to believe that their approach to research is the best one.”
Co-authorship presented particular problems. Interdisciplinary work typically involved multiple authors from different fields, but tenure committees worried when candidates were “always third or fourth author”—a common position in collaborative teams. External reviewers for tenure cases were typically drawn from established disciplines and often didn’t value or understand interdisciplinary contributions. Data from 2025 show that faculty newly hired by top-ranked universities tend to be less interdisciplinary in their PhD research, particularly when they come from top universities and remain in their PhD field—evidence that career incentives continue to favor specialization.
A 2005 National Academies report acknowledged that “many institutions are impeded by traditions and policies that govern hiring, promotion, tenure, and resource allocation” and recommended institutions “develop new and strengthen existing policies and practices that lower or remove barriers to interdisciplinary research.” Yet these recommendations have proven difficult to implement against entrenched structures.
Funding mechanisms reinforced disciplinary boundaries through peer review
The post-World War II expansion of federal science funding created powerful mechanisms that systematically favored disciplinary specialization over interdisciplinary integration. The National Science Foundation, established to support and sustain scholarly disciplines, organized its structure around these disciplines with program managers drawn from the fields they administered—half career civil servants, half on 1-3 year contracts from universities.
A comprehensive MIT study of NSF and DARPA as research funding models revealed how this structure biased outcomes. NSF’s peer review process drew panels from within each discipline to rank proposals by disciplinary standards. The result, as one MIT report documented, was that “NSF has a reputation for being extremely conservative with an overwhelming bias in favor of proposals which hover very close to the center of the discipline.” Faculty reported that “so much emphasis was placed on feasibility at NSF that you actually had to have done the research (or a good part of it) before you submitted the proposal”—creating a catch-22 for interdisciplinary work requiring new methodologies.
The MIT study explained this outcome through NSF’s self-conception: “The way in which the Agency conceives of its mission...is to sustain the country’s scientific capability through education and research, a capability which is in turn embedded in the academic disciplines.” Program managers share disciplinary biases because they are selected from and return to those disciplines. This created what Thomas Kuhn called “normal science”—progress within disciplinary boundaries through adherence to community standards—which systematically excluded work challenging disciplinary assumptions as GST inherently did.
The Defense Advanced Research Projects Agency, created in 1958 following Sputnik, was explicitly designed to counter NSF’s conservative bias. DARPA program managers had wide discretion to fund radical, interdisciplinary work without peer review requirements, using contracts with performance requirements rather than grants. This structure enabled breakthrough projects by creating temporary research communities across disciplines.
However, DARPA’s success remained confined to military applications and required clear defense missions to justify unconventional approaches. Its model depended on substantial budget authority outside normal peer review, short-term project focus, and military urgency that overrode institutional resistance to boundary-crossing. This exception proved the rule: interdisciplinary systems work could succeed only when tied to missions (military defense) powerful enough to override disciplinary boundaries. Without such extraordinary justification, interdisciplinary work faced systematic institutional resistance.
Between 1992 and 1999, only 23% of NIH Requests for Applications addressed interdisciplinary research, with the rest focusing on disciplinary specialization. Interdisciplinary funding remained marginal despite rhetorical support. The journal peer review system created parallel barriers, as reviewers specialized by discipline struggled to evaluate work spanning multiple fields. Research from 2016 documented how “technical skills, understanding and experience required to operate within a domain can be opaque or intractable to non-specialists,” making it “very hard for outsiders to understand” and “difficult to recruit reviewers who are qualified to assess all facets of a manuscript.”
Cold War priorities channeled systems thinking into mission-specific applications
The Cold War context fundamentally shaped which systems approaches received support and how they developed. Operational Research, born from World War II military needs, demonstrated how scientific methods could address strategic and tactical operations. By V-E Day, 365 scientists were engaged in OR for the British Army; by V-J Day, 26 OR groups operated at U.S. Army Air Forces headquarters, with average teams of 10 analysts. In 1942, General Arnold required all Army Air Forces generals to include OR teams.
When the war ended, OR practitioners moved to civilian industry—nationalized coal, electricity, and transport in Britain; manufacturing and logistics in the U.S.—but retained their applied, problem-specific focus rather than pursuing general theory. The success pattern was clear: narrow technical expertise applied to defined problems produced demonstrable results and institutional support.
The RAND Corporation, created 1946-1948 by the U.S. Army Air Forces and Douglas Aircraft to “connect military planning with research and development decisions,” became the powerhouse for systems analysis. RAND attracted extraordinary talent—32 Nobel Prize winners eventually associated with the organization—and pioneered the field during the Cold War. Charles Hitch and Alain Enthoven developed frameworks for complex decision-making; Herman Kahn, Thomas Schelling, and Bernard Brodie developed nuclear deterrence theory; RAND created Planning-Programming-Budgeting systems adopted by McNamara’s Pentagon.
RAND’s impact on systems thinking was profound and distorting. Its success came from applying systems analysis to specific military problems: nuclear strategy, bomber survivability, ICBM deployment. This created a powerful model that systems thinking required defined missions rather than general theory. RAND emphasized mathematical modeling, operations research, game theory, and cost-benefit analysis—reinforcing the view that “systems” meant quantitative optimization rather than Bertalanffy’s organismic, holistic vision.
The methods spread to specific domains: defense strategic studies and weapons analysis, policy cost-effectiveness studies, operations logistics and resource allocation. By the late 1960s, more of RAND’s budget went to domestic research applying Cold War methods to urban problems, education, and healthcare—but always within specific problem domains, not as general systems theory. The Cold War incentive structure tied success metrics to demonstrable defense applications, not theoretical unification, creating powerful incentives for specialized applications over general theory.
Systems engineering followed a parallel trajectory. The term emerged at Bell Telephone Laboratories in the 1940s for telephone network design. In the 1950s, General Bernard Schriever and Simon Ramo developed SE for ICBM programs, and by the 1960s it became standard for defense contractors—TRW, Lockheed, Martin Marietta, Boeing. Systems engineering emerged as Kenneth Schlager noted in 1956 in “custom design and development industries, in which contracting plays a major role,” responding to “increased complexity in the fields of communications, instruments, computation, and control.”
SE practitioners explicitly distinguished themselves from GST. One operational researcher noted systems engineering was “not the making of a better gadget, but the better using of an existing gadget”—practical application, not theoretical unification. Companies developed proprietary expertise in systems management, creating competitive advantages that discouraged sharing methods or developing general theory. Commercial pressures required SE consultants to demonstrate expertise in specific industries—aerospace, telecommunications, defense—not generalist systems expertise, which had no market value.
Commercial pressures demanded narrow specialization and proprietary methods
The consulting market created economic incentives that systematically rewarded fragmentation. Organizational Development emerged in the 1950s-60s as consultants applied behavioral science to planned organization-wide interventions. Richard Beckhard’s definition emphasized “using behavioral-science knowledge” for organizational processes. The most in-demand services—executive coaching, team building, change management, leadership development—were all specific applications requiring demonstrable results.
Commercial success demanded specialized expertise by industry, organization type, or intervention method. Clients paid premium rates for sector-specific knowledge and measurable return on investment in specific domains, not general theory. McKinsey, BCG, and other firms developed branded, proprietary frameworks that became competitive advantages. The market structure made generalist systems consultation economically nonviable.
System dynamics succeeded commercially by offering operational tools. Software like STELLA and Vensim created technical communities around specific modeling practices. Jay Forrester’s group at MIT developed educational programs training practitioners in defined methodologies. The field provided clear value propositions to clients: computer simulations modeling business dynamics, urban systems, environmental problems. This specialization generated revenue, established careers, and built institutional infrastructure—all unavailable to generalist systems theorists.
The consulting dynamics reinforced fragmentation through multiple mechanisms. Specialized expertise commanded higher fees with clearer value propositions than generalist approaches. Proprietary methods created competitive advantage and protectable intellectual property. Measurable results were easier to demonstrate in narrow domains than across broad systems. Client expectations shaped toward specific solutions rather than holistic understanding. The economic structure made unification intellectually appealing but commercially impractical.
The cybernetics case reveals how feedback concepts narrowed from holistic to technical
The evolution of cybernetics and feedback theory provides the clearest example of how broad systems concepts became narrowly technical. Wiener’s original 1948 vision unified control and communication theory across biological and mechanical systems, named from the Greek kybernētēs (steersman). His framework encompassed homeostasis, nervous system behavior, servomechanisms, information theory, and circular causality across biological, mechanical, cognitive, and social domains. The work was simultaneously philosophical and technical, with the first chapter devoted to Newtonian and Bergsonian conceptions of time.
The Macy Conferences (1946-1953) brought together mathematicians like von Neumann and Shannon, neurologists like McCulloch, engineers like von Foerster, and social scientists like Bateson and Mead to explore these ideas across disciplines. Wiener’s concept of feedback as messages sent and responded to, with information quality determining system functionality and noise corrupting messages preventing homeostasis, offered a genuinely unifying principle.
But control engineering adopted cybernetics in ways that dramatically narrowed its scope. While feedback mechanisms actually predated Wiener—James Watt’s steam engine governor (1788), Maxwell’s mathematical analysis of governors (1868), Black and Nyquist’s work on telephone repeater amplifiers at Bell Labs (1930s)—Wiener’s synthesis provided conceptual legitimacy for the engineering focus. The engineering formalization emphasized explicit measurement requirements: feedback signals must be measurable quantities (voltage, position, temperature, pressure) that sensors convert into electrical or mechanical signals comparable to reference inputs.
This led to rigorous mathematical frameworks—transfer functions, Laplace transforms, frequency domain analysis, stability criteria from Nyquist and Bode, optimization objectives minimizing overshoot, settling time, and steady-state error. The canonical architecture became: sensor measures output, comparator compares desired versus actual state, error signal quantifies difference, controller processes error signal, actuator applies correction, feedback loop closes the cycle.
Thermostats and servomechanisms became paradigmatic examples. A thermostat’s temperature sensor provides voltage proportional to temperature, compared to setpoint voltage, with error signal turning heater on/off—explicit measurement converting temperature into voltage with quantifiable values and clear signal paths. Servomechanisms for position control, using encoders or resolvers to measure shaft position for continuous comparison and correction, exemplified explicit feedback: angular position converted to digital or analog signals for applications in radar antennas, robotics, and CNC machinery.
By mid-20th century, control engineering textbooks formally defined feedback as “a control system in which the output is measured by sensors and compared with the reference input, generating an error signal used to adjust the control input.” The characteristics became: explicit (dedicated sensor producing measurable signal), quantifiable (numerical values processed mathematically), separable (feedback path distinct from forward path), designed (intentionally architected components), and optimizable (subject to mathematical analysis and tuning).
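That textbook loop can be rendered as a short simulation. This is a sketch with illustrative values (the gain, loss rate, and setpoint are assumptions, not drawn from any cited source), wiring sensor, comparator, controller, actuator, and plant into one explicit feedback path:

```python
def closed_loop_temperature(setpoint=20.0, ambient=5.0, steps=600, dt=0.1,
                            gain=2.0, loss=0.1):
    """Explicit engineering feedback in miniature: each of the five
    canonical components appears as a separate, quantifiable step."""
    temp = ambient
    for _ in range(steps):
        measured = temp                    # sensor: explicit, measurable signal
        error = setpoint - measured        # comparator: error signal
        heat = max(0.0, gain * error)      # controller + actuator (heater power)
        temp += (heat - loss * (temp - ambient)) * dt  # plant closes the loop
    return temp
```

Note the residual steady-state error: the room settles near 19.3 degrees, not 20, because pure proportional control exhibits exactly the kind of quantifiable imperfection (overshoot, settling time, steady-state error) the optimization criteria above were built to analyze.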
C.S. Holling’s distinction between engineering resilience and ecological resilience illuminated what was lost. Engineering resilience emphasized return time to equilibrium, speed of recovery, maintaining efficiency of function, and assumed single stable states. Ecological resilience emphasized amount of disturbance before system state change, maintaining existence of function, and acknowledged multiple stable states. The engineering formalization embraced the former while losing sight of the latter.
This narrowing excluded entire categories of feedback that Wiener’s original vision encompassed. Implicit feedback systems—where state information guides behavior without explicit measurement or dedicated sensors—became marginalized. A mechanic hearing engine sounds receives acoustic patterns directly indicating combustion quality, timing, and bearing wear without explicit sensors converting sound to comparable signals. The feedback is diffuse, holistic, integrated with perception—the system state itself is the feedback.
Similarly, a chef smelling and tasting during cooking receives olfactory and gustatory information about food’s chemical state, adjusting heat, ingredients, and timing through sensory integration without temperature probes or chemical analyzers. The feedback is embedded in ecological interaction, with skilled perception replacing measurement. Other examples abound: sailors feeling wind and wave patterns to adjust sail trim, craftspeople sensing material resistance while working, musicians hearing ensemble dynamics to adjust timing, organisms maintaining homeostasis through distributed sensing.
These implicit feedback systems fundamentally differ from explicit engineering feedback. Where engineering uses dedicated sensors, implicit systems use distributed sensing. Where engineering requires measurable signals, implicit systems use perceptual patterns. Engineering demands quantification; implicit systems employ qualitative assessment. Engineering creates designed architectures; implicit systems display emergent coupling. Engineering separates feedback paths; implicit systems integrate interactions. Engineering enables mathematical optimization; implicit systems support skilled adaptation. Engineering solutions are universal and transferable; implicit knowledge is context-specific and embodied.
The engineering formalization excluded perceptual feedback (direct coupling without measurement), ecological coupling (organism-environment mutual adaptation), skilled practice (embodied know-how), emergent patterns (self-organizing feedback), multiple simultaneous feedbacks (holistic integration), and qualitative information (non-quantifiable yet functional). The entire domain of tacit, skilled, embodied feedback—which includes vast ranges of human expertise and biological adaptation—fell outside engineering cybernetics’ purview.
Gregory Bateson and Ross Ashby attempted to preserve broader conceptions. Ashby defined cybernetics as “the study of all possible systems”—not just engineered ones—and focused on self-organization, adaptation, black box methodology, variety and constraint, and homeostasis across domains. While mathematically rigorous, Ashby maintained that biological and social applications were central, not peripheral. His Law of Requisite Variety (“only variety can absorb variety”) and the Good Regulator theorem (“every good regulator must be a model of that system”) applied to management, game theory, and computing, not just mechanical control.
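A common formalization of the Law of Requisite Variety (a standard textbook rendering, not a quotation from Ashby) measures variety as the logarithm of the number of distinguishable states:

```latex
V(E) \;\geq\; V(D) - V(R),
\qquad V(X) := \log_2 \bigl(\text{number of distinguishable states of } X\bigr)
```

In words: the variety of outcomes E can be forced below the variety of disturbances D only by a regulator R possessing at least matching variety of its own.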
Bateson brought cybernetics into anthropology, ecology, psychology, and communication theory, maintaining the broadest conception. His revolutionary ideas included defining information as “difference that makes a difference” (functional and relational, not Shannon’s statistical measure), conceiving mind as immanent in systems (distributed in organism-environment circuits, not located in brains), emphasizing circular causality as essential to all mental process, developing schismogenesis theory (runaway positive feedback in social systems like arms races), formulating double bind theory (contradictory feedback patterns causing pathology), and proposing an “ecology of mind” where consciousness is a cybernetic system embedded in larger systems.
Bateson’s cybernetic epistemology recognized that feedback operates at multiple hierarchical levels, pathology occurs when feedback fails (addiction, arms races, ecological destruction), and systemic wisdom recognizing circular causality should replace conscious purpose divorced from systemic understanding. In Steps to an Ecology of Mind, he critiqued engineering cybernetics for focusing on negative feedback (homeostasis, control) while ignoring positive feedback (growth, creativity, evolution), emphasizing conscious purpose over unconscious systemic processes, reducing mind to computation rather than relational patterns, and losing holistic, ecological understanding.
Heinz von Foerster’s 1974 introduction of “second-order cybernetics” attempted to recover what first-order had lost. Second-order cybernetics—cybernetics of observing systems—included the observer in the system, embraced reflexivity and radical constructivism, and introduced an ethical imperative: “Act always so as to increase the number of choices.” This addressed first-order’s limitations: ignoring the observer’s role in constructing reality, assuming objective measurement was possible, neglecting reflexive feedback, missing ethical dimensions, and overlooking social construction of knowledge.
Yet second-order cybernetics itself became a specialization, appealing primarily to philosophers and social scientists while remaining marginal to engineering applications. The broader cybernetics movement never recovered. The American Society for Cybernetics collapsed between 1974 and 1980, and when reorganized, it emerged as a smaller organization focused on constructivist epistemology rather than Wiener’s original unified vision. Cybernetics journals and conferences fragmented between engineering applications and philosophical reflections, with minimal communication between these communities.
The irony is profound: Wiener explicitly warned against this narrowing. In The Human Use of Human Beings (1950), he cautioned against automation displacing human judgment, feedback systems serving power rather than people, noise corrupting communication, and “gadget worshipers” reducing systems thinking to technical tools. Yet engineering cybernetics became precisely what Wiener feared—a technical toolbox divorced from ethical reflection, serving military and industrial power, optimizing control rather than cultivating wisdom.
By the 1980s, institutional crystallization made fragmentation irreversible
The period 1974-1988 witnessed the formal institutionalization of fragmentation through organizational changes that recognized the unified field no longer existed. The American Society for Cybernetics’ six-year conference hiatus (1974-1980) following journal disputes fragmented cybernetics between engineering and social science orientations. Heinz von Foerster retired from the University of Illinois in 1976, the Biological Computer Laboratory closed, and government funding cuts reduced campus research. When ASC reorganized in 1980, it emerged smaller and more specialized, never regaining the unified prominence of the Macy Conference era.
The System Dynamics Society, founded in 1983 with Jay Forrester as its first president, formalized system dynamics as a distinct profession separate from the general systems community, with its own journal (System Dynamics Review), annual conferences, software tools (STELLA, Vensim, Powersim), and educational programs at MIT and worldwide. Clear methodological standards for modeling and validation created professional coherence—but as a specialization, not integration.
In 1988, the Society for General Systems Research renamed itself the International Society for the Systems Sciences (ISSS). The name change signaled “broadening scope” but actually reflected loss of unified vision. By 1988, multiple specialized societies already existed: systems dynamics had its own society (1983), cybernetics had ASC (reformed 1980), operational research had INFORMS, and complexity science was emerging at Santa Fe (founded 1984). ISSS became an umbrella organization attempting to maintain bridges between fields but unable to unite specializations that had fundamentally diverged.
The Santa Fe Institute, founded in 1984 by George Cowan, Murray Gell-Mann, and David Pines from Los Alamos, represented a new paradigm. Originally called the Rio Grande Institute, SFI focused on complex adaptive systems, emergence, and self-organization using computational approaches. The institute’s 1987 workshop “The Economy as an Evolving Complex System,” funded by Citicorp, attracted Nobel laureates including Kenneth Arrow, Philip Anderson, and Murray Gell-Mann. Complexity science distinguished itself from GST through computational modeling rather than conceptual frameworks, emphasis on emergence and adaptation, focus on “edge of chaos” dynamics, and less concern with unifying all sciences—another specialization rather than the integration GST envisioned.
Journal proliferation by the 1990s confirmed the fragmented landscape. Cybernetics had Cybernetics and Human Knowing (1992) for second-order cybernetics plus various engineering journals. System dynamics had System Dynamics Review (1985). General systems maintained Systems Research and Behavioral Science, Systemic Practice and Action Research, and the International Journal of General Systems. Complexity had Complexity (1995) and the Santa Fe working papers. No single “systems” journal existed; the publishing landscape was thoroughly fragmented.
The unified vision’s loss created profound intellectual and practical deficits
Kenneth Boulding’s 1956 warning proved prophetic. His call for “generalized ears” capable of recognizing when knowledge from one field is relevant to another articulated precisely what fragmentation destroyed. The Society for General Systems Research aimed to develop these generalized ears through recognition of isomorphisms, common theoretical frameworks, and shared vocabulary, but Boulding foresaw the danger: “The more science breaks into sub-groups, and the less communication is possible among the disciplines, however, the greater chance there is that the total growth of knowledge is being slowed down by the loss of relevant communications.”
The capabilities lost include cross-domain pattern recognition—the ability to see that population crashes in ecology and market crashes in economics both involve positive feedback loops, that traffic flow in cities and network congestion in computers both display self-organizing behavior with emergent properties, that psychological trauma responses and ecosystem regime shifts both exhibit threshold effects and hysteresis. These connections go unrecognized because specialists lack frameworks for identifying isomorphisms.
Holistic understanding of feedback loops was lost. Research on wicked problems shows that humans are “not very good at understanding behavior with long delays between event and response” and that applied solutions lead to unintended consequences because feedback loops and cascading effects go unrecognized by reductionist analysis. The Green Revolution increased food production but led to groundwater depletion, soil degradation, and social inequality through feedback loops not anticipated. Social media connected people globally but created filter bubbles, polarization, and mental health crises through emergent properties not predicted. Antibiotic use cured infections but created resistance crises through evolutionary system dynamics that were ignored.
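The delay problem can be made concrete with a minimal stock-adjustment sketch (a generic illustration with made-up parameters, not a model from the cited research): a controller corrects a stock toward a target, but bases each correction on the state as it was several steps ago. The identical rule that converges smoothly on current information overshoots and oscillates on stale information.

```python
# A stock is adjusted toward a target, but each correction is based on
# the stock's value `delay` steps in the past -- decisions made on stale
# information. Parameters are illustrative, not empirical.

def adjust_with_delay(target=100.0, gain=0.25, delay=0, steps=80):
    history = [0.0]  # the stock over time, starting empty
    for t in range(steps):
        perceived = history[max(0, t - delay)]  # stale observation
        history.append(history[-1] + gain * (target - perceived))
    return history

smooth = adjust_with_delay(delay=0)   # monotonic approach to the target
delayed = adjust_with_delay(delay=5)  # same rule, five-step delay
```

With no delay the stock approaches 100 from below and never overshoots; with a five-step delay the very same gain overshoots far past the target and then oscillates back through it, the signature behavior the wicked-problems research says people reliably misjudge.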
Multi-scale dynamics became invisible to specialists. Panarchy theory’s research shows small-scale events can trigger large-scale crises (“revolt” connections) while large-scale patterns constrain local reorganization (“remember” connections). The Bristol Bay, Alaska, wild sockeye salmon fisheries were well managed locally but entered crisis due to competition from globalized salmon farms—a cross-scale interaction missed by local management. Conventional disciplines specialize at different levels, creating barriers to investigating the multi-level interactions that are fundamental to system behavior.
The loss of leverage point identification is critical. Donella Meadows’ systems thinking framework ranks intervention points by effectiveness: highest leverage comes from changing paradigms and system goals, medium leverage from altering feedback loop structure and information flows, and lowest leverage from adjusting parameters and constants—yet most policies target low-leverage parameters because specialists lack holistic views necessary to see high-leverage structural interventions. Climate policy focusing on carbon prices (parameter adjustment) misses the need to restructure the economic growth paradigm itself (system goal change).
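Meadows’ ranking can be illustrated with a toy renewable-resource model (all coefficients invented for illustration): harvesting effort compounds in a reinforcing loop, and a low-leverage intervention (slowing the parameter that drives effort growth) is compared with a high-leverage one (changing the system’s goal so harvest never exceeds regeneration).

```python
# Toy renewable-resource model contrasting Meadows' leverage levels.
# Low leverage: tuning a parameter (slower effort growth) merely delays
# collapse. High leverage: changing the system's goal (harvest no more
# than the resource regenerates) prevents collapse entirely.
# All numbers are illustrative assumptions, not calibrated data.

def run(effort_growth=0.05, sustainable_goal=False,
        r=0.2, K=1000.0, steps=300):
    resource, effort, trajectory = 0.9 * K, 1.0, []
    for _ in range(steps):
        regen = r * resource * (1 - resource / K)   # logistic regrowth
        harvest = min(0.01 * effort * resource, resource)
        if sustainable_goal:
            harvest = min(harvest, regen)  # goal change: sustainability
        resource = max(resource + regen - harvest, 0.0)
        effort *= 1 + effort_growth        # reinforcing loop on effort
        trajectory.append(resource)
    return trajectory

def collapse_step(trajectory, K=1000.0):
    """First step at which the resource falls below 10% of capacity."""
    return next((t for t, R in enumerate(trajectory) if R < 0.1 * K),
                None)

baseline = collapse_step(run(effort_growth=0.05))        # collapses
parameter_fix = collapse_step(run(effort_growth=0.02))   # collapses later
goal_change = collapse_step(run(sustainable_goal=True))  # never collapses
```

The parameter adjustment changes when collapse arrives; only the goal change alters whether it arrives, which is the distinction Meadows’ hierarchy formalizes.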
Complex problems increasingly demonstrate the costs of fragmentation. The 2008 financial crisis occurred because specialists missed positive feedback loops between housing prices, mortgage-backed securities, credit ratings, and bank lending. Economists modeling individual firm behavior without system-level feedback couldn’t see the reinforcing loops driving exponential growth then collapse. Climate change requires integrating atmospheric physics, ocean dynamics, ecology, economics, political science, psychology, and engineering, but narrow technical solutions like carbon capture ignore social feedback loops and political constraints, missing the cross-scale interactions from individual behavior to global systems.
Healthcare systems revealed fragmentation’s failures during COVID-19. The fallibility of reductionist models became immediately apparent as specialists failed to understand the feedback between disease spread and behavioral adaptation, cross-scale dynamics from molecular to societal levels, and emergent system behaviors such as supply chain collapse and cascading healthcare worker burnout. Environmental management fails repeatedly when narrow technical fixes such as liming acidified lakes change pH levels but do not restore ecological function, because they cannot recreate the cross-scale interactions crucial for ecological resilience: mitigation rather than restoration.
Contemporary attempts at revival show the unified vision persists but faces structural barriers. The Santa Fe Institute achieved methodological unification through agent-based modeling and network science while retreating from unified theoretical frameworks. When challenged about developing a unified complexity theory, SFI researcher Melanie Mitchell responded: “I don’t even know what that would mean... I don’t think that will be very useful.” This skepticism, even within an institution founded for synthesis, reveals how deeply fragmentation is entrenched.
Complexity economics recovered non-equilibrium thinking, increasing returns and positive feedback loops, path dependence, and emergent patterns—all elements classical economics lost through specialization. Brian Arthur’s work shows that “lock-in and dominance of one or a few players... can’t be done by equilibrium economics—it’s not an equilibrium phenomenon,” revealing how disciplinary boundaries create blind spots. Resilience theory and panarchy explicitly address cross-scale interactions that specialization obscures, but these remain marginal to mainstream ecology and policy.
Assembly theory, published in Nature in October 2023 by Sara Walker, Lee Cronin, and Christopher Kempes, represents the most recent unification attempt—“a completely new lens for looking at physics, chemistry, and biology as different perspectives of the same underlying reality” aiming to “close the gap between reductionist physics and Darwinian evolution.” Yet even such ambitious efforts struggle against institutional structures, funding mechanisms, and professional incentives that systematically reward specialization.
Structural forces overwhelmed intellectual vision through mutually reinforcing mechanisms
The fragmentation of General Systems Theory was not inevitable intellectually but was institutionally overdetermined. Academic structures organized around disciplines controlled hiring, tenure, and resources. Funding mechanisms, especially NSF peer review, reinforced disciplinary boundaries and conservative methodological choices. Tenure systems created career risks for interdisciplinary work that outweighed potential rewards. Peer review processes empowered disciplinary gatekeepers judging work by narrow standards. Military and think tank priorities channeled systems thinking into mission-specific applications at RAND, in operational research, and through systems engineering. Commercial pressures rewarded specialized consulting expertise over general theoretical integration. The Cold War context concentrated resources on defense applications, creating path dependencies persisting today.
These forces were mutually reinforcing: disciplinary organization shaped funding structures, which shaped career incentives, which shaped publication venues, which reinforced disciplinary organization. Breaking this cycle required coordinated change across multiple institutional levels—a collective action problem no individual researcher or institution could solve. The compression from Bertalanffy’s ambitious vision to the Society for General Systems Research’s compromised manifesto defining GST merely as “any theoretical system of interest to more than one discipline” reflected this reality.
The critical acceleration occurred during the 1970s-1980s when intellectual divergence met institutional crystallization. The ASC collapse in 1974, the System Dynamics Society founding in 1983, the Santa Fe Institute establishment in 1984, and the ISSS name change in 1988 were symptoms of underlying structural forces, not causes. Bertalanffy and colleagues recognized these barriers early but lacked power to restructure universities, funding agencies, journals, and markets to support unified science.
The question Bertalanffy posed in 1950 remains unanswered: Can science develop principles valid for systems in general, or must each domain develop its own specialized approaches? His vision that GST might “play a role similar to that of Aristotelian logic in the science of antiquity” as a unifying framework has not been realized. Instead, we have walled-in hermits speaking private languages—specialists in systems dynamics, cyberneticians, complexity scientists, organizational theorists, each with distinct communities, methodologies, and institutions, minimally communicating across boundaries.
The fragmentation matters because the most pressing problems of the 21st century—climate change, pandemics, financial instability, ecological collapse, social polarization—are precisely the wicked problems requiring cross-domain integration that specialization prevents. The capabilities lost with GST’s fragmentation—recognizing patterns across domains, understanding holistic system behaviors, identifying feedback loops and leverage points, anticipating emergent properties and unintended consequences—are exactly what is needed but systematically unavailable. Recovering these capabilities requires not just intellectual synthesis but institutional restructuring to align academic, funding, and professional structures with integrative rather than fragmentary incentives. Until such restructuring occurs, Bertalanffy’s vision of unified systems science will remain what it has been for seventy years: intellectually compelling but institutionally impossible.
References
Primary Sources
Ashby, W. Ross. An Introduction to Cybernetics. London: Chapman & Hall, 1956.
Bateson, Gregory. Steps to an Ecology of Mind. New York: Ballantine Books, 1972.
Beer, Stafford. Brain of the Firm: The Managerial Cybernetics of Organization. London: Allen Lane, 1972.
Bertalanffy, Ludwig von. “An Outline of General System Theory.” British Journal for the Philosophy of Science 1, no. 2 (1950): 134-165. Available at: https://www.isnature.org/Events/2009/Summer/r/Bertalanffy1950-GST_Outline_SELECT.pdf
Bertalanffy, Ludwig von. General System Theory: Foundations, Development, Applications. New York: George Braziller, 1968. Available at: https://www.panarchy.org/vonbertalanffy/systems.1968.html
Boulding, Kenneth E. “General Systems Theory—The Skeleton of Science.” Management Science 2, no. 3 (1956): 197-208. Available at: https://www.panarchy.org/boulding/systems.1956.html
Checkland, Peter. Systems Thinking, Systems Practice. Chichester: John Wiley & Sons, 1981.
Forrester, Jay W. Industrial Dynamics. Cambridge, MA: MIT Press, 1961.
Maturana, Humberto R., and Francisco J. Varela. Autopoiesis and Cognition: The Realization of the Living. Dordrecht: D. Reidel, 1980.
von Foerster, Heinz. “Ethics and Second-Order Cybernetics.” 1991. Available at: https://www.pangaro.com/hciiseminar2019/Heinz_von_Foerster-Ethics_and_Second-order_Cybernetics.pdf
Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press, 1948. Available at: https://direct.mit.edu/books/oa-monograph/4581/Cybernetics-or-Control-and-Communication-in-the
Historical and Institutional Studies
Arthur, W. Brian. “Complexity Economics.” Santa Fe Institute. Available at: https://sites.santafe.edu/~wbarthur/complexityeconomics.htm
Computing Research Association. “Promotion and Tenure of Interdisciplinary Faculty.” Available at: https://cra.org/resources/best-practice-memos/promotion-and-tenure-of-interdisciplinary-faculty/
Drack, Manfred. “Ludwig von Bertalanffy’s Early System Approach.” Systems Research and Behavioral Science 26, no. 5 (2009): 563-572. https://onlinelibrary.wiley.com/doi/abs/10.1002/sres.992
Holling, C.S. “Engineering Resilience versus Ecological Resilience.” In Engineering Within Ecological Constraints, edited by Peter Schulze. Washington, DC: National Academy Press, 1996. https://nap.nationalacademies.org/read/4919/chapter/4
International Society for the Systems Sciences. “History.” Available at: https://www.isss.org/history/
MIT Industrial Performance Center. “NSF and DARPA as Models for Research Funding: An Institutional Analysis.” 2023. Available at: https://ipc.mit.edu/wp-content/uploads/2023/07/NSF-and-DARPA-as-Models-for-Research-Funding-An-Institutional-Analysis.pdf
National Academies of Sciences, Engineering, and Medicine. “Barriers to Interdisciplinary Research and Training.” In Bridging Disciplines in the Brain, Behavioral, and Clinical Sciences. Washington, DC: The National Academies Press, 2005. https://www.ncbi.nlm.nih.gov/books/NBK44876/
Pouvreau, David. “On the History of Ludwig von Bertalanffy’s ‘General Systemology’, and on Its Relationship to Cybernetics – Part I: Elements on the Origins and Genesis of Ludwig von Bertalanffy’s ‘General Systemology’.” International Journal of General Systems 43, no. 2 (2014): 172-245. https://www.researchgate.net/publication/266997998_On_the_History_of_Ludwig_von_Bertalanffy’s_’General_Systemology’_and_on_Its_Relationship_to_Cybernetics_-_Part_I_Elements_on_the_Origins_and_Genesis_of_Ludwig_von_Bertalanffy’s_’General_Systemology‘
Santa Fe Institute. “About.” Available at: https://www.santafe.edu/about/overview
Santa Fe Institute. “History.” Available at: https://www.santafe.edu/about/history
Systems Engineering Body of Knowledge (SEBoK). “History of Systems Science.” Available at: https://sebokwiki.org/wiki/History_of_Systems_Science
Cybernetics and Feedback Theory
Massachusetts Institute of Technology. “Cybernetics: Knowledge Domains in Engineering Systems.” Available at: https://web.mit.edu/esd.83/www/notebook/Cybernetics.PDF
McGill University. “Introduction to Feedback Control Systems.” Available at: https://cim.mcgill.ca/~ialab/ev/Intro_control1.pdf
MIT Science, Technology & Society Program. “Between Human and Machine: Feedback, Control, and Computing before Cybernetics.” Available at: https://sts-program.mit.edu/book/human-machine-feedback-control-computing-cybernetics/
University of Illinois Archives. “W. Ross Ashby – The Cybernetics Thought Collective.” Available at: https://archives.library.illinois.edu/thought-collective/cyberneticians/w-ross-ashby/
Contemporary Systems Research
ADVANCED Motion Controls. “What is Servomechanism: Servo System Definition, History, Components & Applications.” Available at: https://www.a-m-c.com/servomechanism/
American Society for Cybernetics. “Journals.” Available at: https://asc-cybernetics.org/journals/
Complex Systems Theory. “Santa Fe Institute.” Available at: https://complexsystemstheory.net/santa-fe-institute/
Control Engineering. “Efficient Controls Require Feedback.” Available at: https://www.controleng.com/efficient-controls-require-feedback/
INFORMS. “The Origins of OR.” Available at: https://www.informs.org/Explore/History-of-O.R.-Excellence/Bibliographies/The-Origins-of-OR
International Council for Systems Engineering (INCOSE). “History of Systems Engineering.” Available at: https://www.incose.org/about-systems-engineering/history-of-systems-engineering
McKinsey & Company. “The Beginning of System Dynamics.” Available at: https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/the-beginning-of-system-dynamics
MIT Sloan School of Management. “Professor Emeritus Jay W. Forrester, Digital Computing and System Dynamics Pioneer, Dies at 98.” Available at: https://mitsloan.mit.edu/ideas-made-to-matter/professor-emeritus-jay-w-forrester-digital-computing-and-system-dynamics-pioneer-dies-98
MIT System Design and Management. “The Evolution of Systems Engineering in the US Department of Defense.” Available at: https://sdm.mit.edu/the-evolution-of-systems-engineering-in-the-us-department-of-defense/
Oxford Bibliographies. “Feedback Dynamics – Environmental Science.” Available at: https://www.oxfordbibliographies.com/display/document/obo-9780199363445/obo-9780199363445-0091.xml
RAND Corporation. “The Cold War, RAND, and the Generation of Knowledge, 1946-1962.” Available at: https://www.rand.org/pubs/reprints/RP729.html
Systems Thinking Alliance. “A Brief History of Systems Thinking.” Available at: https://systemsthinkingalliance.org/brief-history-of-systems-thinking/
Systems Thinking Alliance. “Russell Ackoff: A Visionary in Systems Thinking History.” Available at: https://systemsthinkingalliance.org/russell-ackoff/
Systems Thinking Alliance. “Stafford Beer, The Father of Management Cybernetics.” Available at: https://systemsthinkingalliance.org/stafford-beer-the-father-of-management-cybernetics/
Recent Research on Interdisciplinarity
arXiv. “Interdisciplinary PhDs Face Barriers to Top University Placement Within Their Disciplines.” Available at: https://arxiv.org/abs/2503.21912
PubMed Central. “The Present and Future of Peer Review: Ideas, Interventions, and Evidence.” Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC11804526/
Proceedings of the National Academy of Sciences. “The Present and Future of Peer Review: Ideas, Interventions, and Evidence.” Available at: https://www.pnas.org/doi/10.1073/pnas.2401232121
ResearchGate. “Overcoming Obstacles to Interdisciplinary Research.” Available at: https://www.researchgate.net/publication/220041522_Overcoming_Obstacles_to_Interdisciplinary_Research
ResearchGate. “The Conflict Between Complex Systems and Reductionism.” Available at: https://www.researchgate.net/publication/23291908_The_Conflict_Between_Complex_Systems_and_Reductionism
Reference Works and Encyclopedias
Bertalanffy Center for the Study of Systems Science. “Legacy.” Available at: https://www.bcsss.org/legacy/
Britannica. “Gregory Bateson.” Available at: https://www.britannica.com/biography/Gregory-Bateson
EBSCO Research. “General Systems Theory.” Available at: https://www.ebsco.com/research-starters/history/general-systems-theory
New World Encyclopedia. “Gregory Bateson.” Available at: https://www.newworldencyclopedia.org/entry/Gregory_Bateson
ScienceDirect Topics. “Autopoiesis – An Overview.” Available at: https://www.sciencedirect.com/topics/social-sciences/autopoiesis
ScienceDirect Topics. “Bertalanffy – An Overview.” Available at: https://www.sciencedirect.com/topics/mathematics/bertalanffy
ScienceDirect Topics. “General System Theory – An Overview.” Available at: https://www.sciencedirect.com/topics/computer-science/general-system-theory
Wikipedia. “Autopoiesis.” Available at: https://en.wikipedia.org/wiki/Autopoiesis
Wikipedia. “Cybernetics.” Available at: https://en.wikipedia.org/wiki/Cybernetics
Wikipedia. “Cybernetics: Or Control and Communication in the Animal and the Machine.” Available at: https://en.wikipedia.org/wiki/Cybernetics:_Or_Control_and_Communication_in_the_Animal_and_the_Machine
Wikipedia. “Gregory Bateson.” Available at: https://en.wikipedia.org/wiki/Gregory_Bateson
Wikipedia. “International Society for the Systems Sciences.” Available at: https://en.wikipedia.org/wiki/International_Society_for_the_Systems_Sciences
Wikipedia. “Interdisciplinarity.” Available at: https://en.wikipedia.org/wiki/Interdisciplinarity
Wikipedia. “Jay Wright Forrester.” Available at: https://en.wikipedia.org/wiki/Jay_Wright_Forrester
Wikipedia. “Ludwig von Bertalanffy.” Available at: https://en.wikipedia.org/wiki/Ludwig_von_Bertalanffy
Wikipedia. “RAND Corporation.” Available at: https://en.wikipedia.org/wiki/RAND_Corporation
Wikipedia. “Russell L. Ackoff.” Available at: https://en.wikipedia.org/wiki/Russell_L._Ackoff
Wikipedia. “Santa Fe Institute.” Available at: https://en.wikipedia.org/wiki/Santa_Fe_Institute
Wikipedia. “Second-order Cybernetics.” Available at: https://en.wikipedia.org/wiki/Second-order_cybernetics
Wikipedia. “Soft Systems Methodology.” Available at: https://en.wikipedia.org/wiki/Soft_systems_methodology
Wikipedia. “Stafford Beer.” Available at: https://en.wikipedia.org/wiki/Stafford_Beer
Wikipedia. “Systems Philosophy.” Available at: https://en.wikipedia.org/wiki/Systems_philosophy
Wikipedia. “Systems Theory.” Available at: https://en.wikipedia.org/wiki/Systems_theory
Wikipedia. “W. Brian Arthur.” Available at: https://en.wikipedia.org/wiki/W._Brian_Arthur
Wikipedia. “W. Ross Ashby.” Available at: https://en.wikipedia.org/wiki/W._Ross_Ashby