The Core of Intelligence


DeafBlind Inner “Speech” as a Biological Blueprint for Non-Symbolic Intelligence


1. Introduction: Why DeafBlind Cognition Reveals the Core of Intelligence

DeafBlind cognition is usually approached as a problem to be compensated for. This perspective misses its deeper value. When vision and hearing are absent, the brain is deprived not of intelligence, but of shortcuts—symbols, labels, and culturally pre-structured abstractions. What remains visible is the fundamental mechanism by which any nervous system learns to understand its world.

In this stripped-down condition, cognition proceeds without language, without visual metaphors, and without symbolic reference. Yet meaning, planning, anticipation, and learning still emerge. This makes DeafBlind inner cognition a uniquely revealing case: it exposes how ontologies form before and beneath language.

This essay develops that observation into a more explicit framework for non-symbolic, embodied intelligence, applicable not only to biological development but also to artificial systems.


2. Inner “Speech” Reframed

The term inner speech is one of the most persistent misnomers in cognitive science. It suggests that thinking consists of silent talking—words replayed internally without sound. This intuition is powerful because, for language users, verbal labels often accompany thought. Yet this accompaniment is mistaken for the mechanism itself.

A closer examination shows that inner “speech” is not linguistic in origin. It is the sequential activation of predictive structures that guide action and evaluation. Language may annotate this sequence, but it does not generate it.


2.1 Why inner speech feels linguistic

For hearing individuals, language is learned early and used constantly. Words become tightly coupled to frequently traversed predictive constraints. As a result, when a constraint is activated, its linguistic label is often activated as well.

This temporal coincidence creates the illusion that words are driving thought. In reality, the activation order is reversed:

  1. A predictive constraint becomes dominant.
  2. The system begins to simulate its consequences.
  3. A linguistic label may be attached to this traversal.

The feeling of “talking to oneself” is therefore an epiphenomenon of labeling, not the source of cognition.
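The three-step activation order can be made concrete with a minimal sketch. The toy structures below are assumptions for illustration only: a "constraint" is reduced to a transition function over states, and linguistic labels live in an optional lookup table consulted only after the traversal has run.

```python
# Sketch of the activation order: a dominant constraint is simulated
# first; a label, if one exists, is attached only afterwards.

def traverse(constraint, state, steps, labels=None):
    """Simulate a dominant constraint's consequences; label afterwards."""
    trajectory = [state]
    for _ in range(steps):                              # 2. simulate consequences
        state = constraint(state)
        trajectory.append(state)
    label = labels.get(constraint) if labels else None  # 3. optional labeling
    return trajectory, label

# Hypothetical constraint: "pressing increases contact pressure".
press = lambda pressure: pressure + 1

trajectory, label = traverse(press, 0, steps=3, labels={press: "press"})
```

Note that removing the `labels` table changes nothing about the trajectory: the simulation runs identically whether or not a word is ever attached, which is the point of the reframing above.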


2.2 Inner “speech” as controlled sequencing

What actually unfolds during thinking is a controlled sequence. Multiple predictive constraints cannot guide behavior simultaneously without conflict. Prediction Feedback (PF) therefore enforces serial dominance: one constraint becomes active while others are suppressed.

This sequencing is not arbitrary. It is governed by PF relevance—how effectively a constraint is expected to reduce uncertainty. The system moves step by step through its ontology, evaluating possible futures before acting.

This controlled traversal gives rise to the experience of a thought stream.
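Serial dominance can be sketched as a winner-take-all selection, with expected uncertainty reduction standing in for PF relevance. The candidate names and scores below are fabricated for illustration.

```python
# Toy illustration of serial dominance: candidates are scored by expected
# uncertainty reduction (a stand-in for PF relevance); only the top-scoring
# constraint becomes active, and the rest are suppressed.

def select_dominant(candidates):
    """candidates: {name: expected uncertainty reduction}."""
    dominant = max(candidates, key=candidates.get)
    suppressed = [name for name in candidates if name != dominant]
    return dominant, suppressed

dominant, suppressed = select_dominant(
    {"reach": 0.7, "withdraw": 0.2, "hold": 0.4})
# "reach" becomes active; "withdraw" and "hold" are suppressed
```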


2.3 Inner simulation without execution

Each step in this traversal is a simulation, not an action. The system propagates expected sensory and bodily states forward in time without producing external movement.

In DeafBlind cognition, these simulations are tactile, proprioceptive, and postural. In hearing cognition, they may be accompanied by auditory imagery. In both cases, the functional role is the same: to test predictions without risk.

This capacity for internal rehearsal is what enables planning, foresight, and restraint.


2.4 Evidence from DeafBlind cognition

DeafBlind individuals provide decisive evidence against a language-based account of thought. Despite the absence of auditory or visual linguistic input, they:

  • plan actions,
  • anticipate outcomes,
  • reflect on past interactions,
  • and engage in complex reasoning.

Their inner cognition consists of sequenced sensorimotor simulations, not verbal propositions. This demonstrates that language is not a prerequisite for structured thought.


2.5 Relation to neural sequencing mechanisms

Neurophysiologically, inner “speech” aligns with known sequencing mechanisms in the brain. Thalamo–cortical loops regulate which motor or cognitive pattern gains temporary dominance. Competing patterns are inhibited to prevent interference.

These same loops operate in overt movement and covert planning. Inner “speech” is therefore best understood as covert action sequencing, using the same control architecture that governs behavior.


2.6 Why this reframing matters

Misinterpreting inner cognition as linguistic leads to several errors:

  • overestimating the role of language in intelligence,
  • underestimating non-verbal cognition,
  • and designing artificial systems that confuse symbolic manipulation with understanding.

Reframing inner “speech” as predictive traversal restores coherence across development, neuroscience, and AI.


3. The Primitive Substrate: Accumulating Sensory Entities

3.1 Fewer channels, clearer structure

When sensory input is limited primarily to touch, proprioception, smell, and taste, the brain receives fewer streams of data, but each interaction carries strong regulatory significance. Every encounter produces a sensory entity—a repeatable internal configuration with temporal structure and bodily consequence.

These entities are not representations and do not yet have meaning. They are simply stored possibilities: patterns that may later be useful for prediction.


3.2 Meaning does not arise from storage

At this stage, nothing is interpreted. Entities accumulate silently. Meaning does not arise because something is perceived often, but because perception fails to predict what happens next. Only when expectation breaks down does learning begin.


4. Prediction Feedback (PF) as the Gating Mechanism

Learning is not a continuous background process. It is punctuated.

4.1 Prediction error as a state change

When Prediction Feedback (PF) detects a mismatch between expected and actual state, the system undergoes a qualitative shift. What had been stable becomes uncertain. Known motor schemas lose dominance, and exploration becomes necessary.

Biologically, this corresponds to transient neuromodulatory changes—phasic dopamine, norepinephrine, acetylcholine—that temporarily alter plasticity and attentional scope. The system is no longer exploiting what it knows; it is searching for a better model.
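The punctuated character of this learning can be sketched as a gated update rule, in which the model changes only when prediction error crosses a threshold. The threshold and gain values are illustrative assumptions, not claims about neural parameters.

```python
# PF-gated plasticity sketch: no learning in stable regimes; an update
# proportional to the surprise when prediction error exceeds a threshold
# (the gain crudely stands in for transient neuromodulatory change).

def pf_gated_update(prediction, observation, threshold=0.1, gain=0.5):
    error = observation - prediction
    if abs(error) <= threshold:
        return prediction              # stable regime: model resists change
    return prediction + gain * error   # mismatch: transient plasticity

p = 0.0
for obs in [0.05, 0.05, 1.0, 1.0]:     # a sudden change in the regularity
    p = pf_gated_update(p, obs)        # updates occur only after the change
```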


4.2 Why uncertainty reduction replaces reward

This framing differs fundamentally from reinforcement learning. There is no external reward signal and no predefined goal. The system learns because its predictions fail, not because it is incentivized to succeed.

The “reward” is implicit: a reduction of uncertainty and a return to regulatory stability. This makes learning intrinsic, continuous across contexts, and robust against reward mis-specification.


5. From Predictive Patterns to Stable Structures

5.1 Local success

When a particular sensorimotor pattern succeeds in reducing PF deviation, it is reused. Initially, this success is fragile and local. It works here, now, under these exact conditions.


5.2 Success under variation

Over time, the same relational structure may succeed again under slightly different conditions—different timing, location, or intensity. PF detects that the sensory details vary, but the outcome remains predictable.

At this moment, the system begins to compress experience. What matters is no longer how the pattern feels, but what it reliably does.
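This compression step can be rendered as a small sketch: episodes whose sensory details vary but whose outcome is constant are collapsed into one invariant. The episode format and support threshold are assumptions made for illustration.

```python
# Invariance extraction sketch: when enough episodes share an outcome
# despite differing sensory detail, the outcome is kept and the detail
# is discarded ("what it reliably does", not "how it feels").

def extract_invariant(episodes, min_support=3):
    """episodes: list of (sensory_detail, outcome) pairs."""
    outcomes = [outcome for _, outcome in episodes]
    if len(episodes) >= min_support and len(set(outcomes)) == 1:
        return outcomes[0]      # details vary, outcome is invariant
    return None                 # outcome still unpredictable: no compression

episodes = [("slow grip", "lifts"), ("fast grip", "lifts"),
            ("left hand", "lifts")]
invariant = extract_invariant(episodes)
```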


6. Ontologies as Predictive Constraints

Here the decisive transition occurs.

An ontology is not a name or a symbol. It is a predictive constraint—a stabilized invariant that limits how future states can unfold.

In biological terms, this mirrors how the parietal and premotor systems encode the world. A cup is not a visual object; it is a stable potential for grasping, lifting, and tilting. Its identity lies in what remains invariant across interaction.

Meaning, in this sense, is not reference.
Meaning is commitment: the brain commits future regulation to a constraint that has proven reliable.


7. The Order of Ontology Accumulation

Ontology formation is not an abstract classification process. It is a progressive stabilization of predictive constraints under the pressure of Prediction Feedback (PF). The order in which ontologies emerge is dictated by regulatory necessity: what must be predicted first in order for the system to remain viable.

Each layer builds on the previous one. Higher ontologies cannot stabilize unless lower ones already constrain uncertainty sufficiently.


7.1 Regulatory Ontologies — Stability Before Structure

The first ontologies to form are those that directly regulate internal state. At this stage, the system is not learning about the world; it is learning how not to destabilize itself.

These ontologies encode invariants such as:

  • comfort versus discomfort,
  • pressure thresholds,
  • thermal or postural balance,
  • continuation versus interruption of stabilizing input.

From a PF perspective, these are high-gain constraints. Prediction error here has immediate consequences for internal stability, so any pattern that reliably reduces PF deviation is rapidly stabilized.

Importantly, these ontologies are non-objective. They do not describe the world. They constrain interaction in ways that preserve viability. Without them, no further learning is possible, because the system would remain in continuous prediction error.

Regulatory ontologies are therefore foundational. They define what counts as acceptable future states before anything can count as meaningful.


7.2 Temporal Ontologies — Ordering Without Objects

Once basic regulation is stabilized, uncertainty shifts from whether stabilization occurs to when it occurs. Temporal structure becomes predictable before spatial or object structure.

Temporal ontologies encode invariants such as:

  • sequence (this tends to follow that),
  • rhythm (regularity of repetition),
  • delay (expected waiting time),
  • interruption (absence where continuation was predicted).

These ontologies allow the system to anticipate transitions rather than react to surprises. Prediction error is now generated not by instability itself, but by mistimed stability.

Crucially, temporal ontologies arise without objects. There is no need to know what is happening—only when something tends to happen. This explains why infants and DeafBlind learners develop strong expectations about routines long before they can conceptualize entities.

Temporal ontologies provide the scaffold upon which more complex structures can be built. Without time, nothing can persist.


7.3 Agent Ontologies — Predictable Sources of Regulation

Only after regulation and timing are predictable does the system begin to differentiate sources of stabilization. At this stage, other organisms enter the ontology.

Agent ontologies do not encode identity, intention, or social roles. They encode predictive profiles:

  • distinctive touch patterns,
  • characteristic timing of response,
  • reliable modulation of PF deviation.

An agent is defined as that which responds in a consistent, modelable way. From the system’s perspective, an agent is not “someone” but a dynamic regulator whose behavior can be anticipated.

This explains why early social cognition is functional rather than representational. The system does not infer mental states; it learns which interactions reliably restore stability.

Agent ontologies are therefore grounded in interaction, not recognition.


7.4 Object and Action Ontologies — Persistence Under Intervention

Only once time and agents are predictable does the system stabilize objects.

Objects are not learned as static entities. They are learned as persistent affordances: patterns of resistance, compliance, and transformation under action.

Object and action ontologies encode invariants such as:

  • graspable versus non-graspable,
  • deformable versus rigid,
  • movable versus fixed,
  • reversible versus irreversible change.

An object exists, ontologically, when its behavior under action is predictable across contexts. Its “identity” is the invariant relationship between intervention and outcome.

This is why object concepts develop later than temporal and social expectations. Objects require:

  • stable timing,
  • controlled action,
  • and predictable interaction outcomes.

Without these, persistence cannot be detected.


7.5 Relational Ontologies — Abstraction from Constraint Networks

The final layer consists of relational ontologies: higher-order regularities extracted from networks of already stabilized constraints.

These include:

  • causality (this reliably brings about that),
  • ownership (persistent coupling between agent and object),
  • intention-like regularities (patterns of goal-directed action),
  • norms and expectations.

Relational ontologies are not primitive insights. They are compressed summaries of repeated constraint interactions. PF stabilizes them only when doing so further reduces uncertainty beyond what lower-level ontologies already provide.

At this stage, the system can support:

  • counterfactual reasoning,
  • internal simulation of alternatives,
  • and eventually symbolic labeling.

Language, if it emerges, attaches here—after the ontological groundwork is complete.


8. Serial Traversal as Non-Linguistic Inner “Speech”

Once ontological structures have stabilized, cognition no longer consists of raw reaction or diffuse association. It becomes sequential. The system begins to traverse its ontologies one at a time, selecting, simulating, and evaluating them under the continuous modulation of Prediction Feedback (PF). This serial traversal is what is commonly experienced as “inner speech,” although nothing in the process is inherently linguistic.

8.1 Why traversal must be serial

Although the brain is massively parallel in its microstructure, decision and planning cannot be. Multiple competing predictions cannot be acted upon simultaneously without destabilizing control. PF therefore enforces temporal dominance: one predictive constraint must take precedence at any given moment.

Serial traversal emerges as a necessity, not as a design choice. It allows the system to:

  • suppress incompatible alternatives,
  • focus computational resources,
  • and evaluate consequences in a controlled manner.

This enforced seriality is what produces the subjective sense of a “stream” of thought.


8.2 Traversal as internal simulation

Each step in the traversal is not a static recall but a simulation. The system activates an ontological constraint and propagates its expected consequences forward in time, without executing them in the external world.

In DeafBlind cognition, this simulation unfolds as anticipated tactile, proprioceptive, and postural states. In hearing individuals, it may be accompanied by auditory imagery. In both cases, the underlying operation is identical: the system is rehearsing what would happen if.

This internal rehearsal allows planning without risk, conserving energy and avoiding destabilizing actions.


8.3 PF as the traversal governor

Prediction Feedback does not disappear once ontologies are formed. It continues to govern traversal by:

  • selecting which ontology becomes active,
  • determining how long it remains dominant,
  • and deciding when to switch to an alternative.

When a simulated sequence reduces expected PF deviation, traversal continues along that path. When uncertainty increases, PF destabilizes the current constraint, triggering a transition to another.

In this way, PF functions as a control signal for thought itself, not just for learning.
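One way to render this governor as a control loop is sketched below. It is an interpretive sketch only: traversal continues along the active constraint while simulated PF deviation falls, and switches when deviation rises. The deviation sequences are fabricated inputs.

```python
# Traversal-governor sketch: PF keeps a constraint dominant while its
# simulated deviation decreases, and destabilizes it (triggering a switch)
# as soon as deviation rises again.

def traverse_under_pf(paths):
    """paths: {name: sequence of simulated PF deviations}."""
    order, last = [], float("inf")
    for name, deviations in paths.items():
        for d in deviations:
            order.append(name)
            if d >= last:              # uncertainty rising: switch paths
                last = float("inf")
                break
            last = d                   # uncertainty falling: stay dominant
    return order

order = traverse_under_pf({"reach": [0.9, 0.5, 0.6], "probe": [0.4, 0.2]})
```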


8.4 Inner “speech” without symbols

Because traversal is sequential and evaluative, it is often misinterpreted as language. Words seem to line up one after another, giving the impression that thinking is speaking internally.

DeafBlind cognition reveals that this impression is incidental. The tokens being traversed are not words, but ontological constraints—embodied expectations about how interaction will unfold.

Language, where present, merely overlays symbolic labels onto this traversal. It provides a compressed handle, not the content.


8.5 Relation to thalamo-cortical sequencing

Neurophysiologically, serial traversal aligns with known thalamo-cortical loops that:

  • gate motor plans,
  • sequence actions,
  • and suppress competing patterns.

These loops are not limited to overt movement. They also sequence covert action—simulation, planning, and evaluation. Inner “speech” is thus best understood as covert action sequencing, not as silent dialogue.

This explains why thinking feels effortful, directional, and time-bound, even though the brain operates in parallel at lower levels.


8.6 Why traversal constitutes reasoning

Reasoning does not require propositions or symbols. It requires:

  • selection among alternatives,
  • anticipation of outcomes,
  • evaluation of consequences,
  • and suppression of inferior paths.

Serial traversal satisfies all these conditions. Each ontological constraint competes for dominance based on its predicted ability to reduce PF deviation. The “best” path is the one that stabilizes expectation most effectively.

From this perspective, reasoning is simply controlled traversal of predictive constraints under PF regulation.


8.7 Transition to linguistic thought

In systems that acquire language, symbolic tokens attach to stabilized traversal paths. Words become shorthand for frequently traversed ontologies and sequences.

However, the attachment is late and optional. The traversal mechanism is already in place. Language accelerates communication and social coordination, but it does not create thinking.

This explains why:

  • planning can precede articulation,
  • thinking can continue when language is suppressed (e.g., deep focus, meditation),
  • and inner speech varies widely across individuals and cultures.


9. Sensory Richness and Learning Speed

The difference between DeafBlind cognition and typical human cognition is not a difference in mechanism, but in dimensionality. The same PF-driven process of ontology formation operates in both cases. What changes is the number and diversity of sensory channels feeding the system, and therefore the size and structure of the associative space.

Sensory richness accelerates learning, but it does so by altering the statistics of prediction, not by changing the logic of cognition.


9.1 Expansion of the associative state space

Each additional sensory modality introduces:

  • new types of sensory entities,
  • new temporal correlations,
  • new opportunities for cross-modal alignment.

Vision alone adds orders of magnitude more potential entities than touch or proprioception. When combined with sound, the associative state space expands combinatorially. The system now encounters many more candidate patterns that could, in principle, reduce PF deviation.

This expansion increases the chance that a predictive pattern will be discovered quickly. The system does not become smarter; it becomes better sampled.


9.2 Cross-modal confirmation and rapid invariance detection

Sensory richness enables cross-modal confirmation, which is one of the most powerful accelerators of ontology formation.

When the same predictive structure is supported simultaneously by:

  • visual continuity,
  • tactile feedback,
  • auditory timing,
  • proprioceptive alignment,

PF deviation collapses rapidly. Invariance is detected sooner because uncertainty is reduced across multiple independent channels at once.

This is why objects, agents, and causal relations appear to “snap into place” in typical development. Redundancy removes ambiguity. Ontologies stabilize with fewer repetitions.
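The claim that redundancy collapses uncertainty faster can be illustrated with the standard statistics of combining independent estimates: pooling n channels of equal variance cuts the variance of the combined estimate by a factor of n. The channel values below are illustrative.

```python
# Inverse-variance combination of independent sensory channels: each
# additional channel reduces the variance (uncertainty) of the combined
# estimate, which is why multimodal invariants stabilize sooner.

def combined_variance(channel_variances):
    """Variance of the optimal combination of independent channels."""
    return 1.0 / sum(1.0 / v for v in channel_variances)

touch_only = combined_variance([1.0])             # a single channel
multimodal = combined_variance([1.0, 1.0, 1.0, 1.0])  # four channels
```

With four equally reliable channels, the combined uncertainty is a quarter of the single-channel value, which is the statistical face of "snapping into place".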


9.3 Faster learning, earlier commitment

Acceleration, however, has consequences. Because PF deviation is reduced quickly, the system commits to ontologies earlier. Predictive constraints harden sooner.

Early commitment is efficient, but it is also risky:

  • spurious correlations may stabilize,
  • culturally imposed patterns may override bodily grounding,
  • symbolic labels may attach before constraints are fully tested.

In DeafBlind cognition, slower learning forces prolonged interaction before commitment. Ontologies tend to be more tightly coupled to action and regulation, though fewer in number.

Speed and robustness are therefore in tension.


9.4 Sensory richness and premature abstraction

With vision and language, abstraction can occur before full grounding. A visually defined object can be named and categorized long before its affordances are deeply explored.

This leads to a reversal of the natural order:

  • symbols precede constraints,
  • labels precede interaction,
  • descriptions precede prediction.

From the PF perspective, this is a shortcut that works most of the time—but it can produce ontologies that are brittle, shallow, or culturally biased.

DeafBlind development cannot rely on this shortcut. Ontologies must earn their stability through repeated regulatory success.


9.5 Learning speed is not learning depth

The critical distinction, therefore, is between speed and depth.

Sensory richness increases:

  • the rate of ontology accumulation,
  • the density of the ontology network,
  • the ease of recombination.

It does not guarantee:

  • correctness,
  • grounding,
  • or long-term adaptability.

A fast learner may stabilize many weak constraints. A slow learner may stabilize fewer but stronger ones.

PF does not optimize for speed. It optimizes for stability under prediction.


9.6 Implications for artificial systems

For Embodied AI, this distinction is crucial. Adding more sensors does not automatically produce better intelligence. It produces faster convergence—sometimes toward the wrong invariants.

An AI system modeled on DeafBlind-style learning would:

  • accept slower initial learning,
  • require repeated interaction for stabilization,
  • produce ontologies tightly bound to action and control.

A sensory-rich AI system must therefore include safeguards against premature stabilization, such as:

  • delayed consolidation thresholds,
  • prolonged exploration under low PF deviation,
  • and resistance to early symbolic compression.


10. Language Beyond Compression

Language is a structured, symbolic, socially shared system that enables the transmission and coordination of meaning. It does not generate ontologies, but it powerfully shapes, stabilizes, and propagates them once they exist. Language is not merely compression—it is a collective ontology management system layered atop embodied predictive cognition.

Language allows humans to:

  • share complex ideas, emotions, intentions, and knowledge,
  • synchronize internal world-models across a population,
  • preserve and accumulate ontologies beyond individual experience,
  • construct stable social, cultural, and institutional structures.

In this sense, language is not just a cognitive optimization—it is a collective infrastructure for meaning.

However, within the present framework, language still does not originate meaning. Its symbols do not create ontological structure; they bind, stabilize, and distribute ontological structures that have already emerged through predictive interaction with the world.

Grammar does not generate thought.
It organizes the exchange of already-formed predictive constraints.


Language as a Shared Ontology Interface

Language functions as a public interface to private, embodied ontologies:

  • Words act as indices into dense networks of predictive constraints.
  • Grammar specifies how those indices may be combined and sequenced.
  • Communication aligns PF expectations between agents, reducing social uncertainty.

This makes language uniquely powerful: it allows one nervous system to import stabilized ontologies from another without re-deriving them through direct experience.

Yet this power is also its limitation.


The Structural Asymmetry of Language

Language enables complex society precisely because it:

  • abstracts away bodily grounding,
  • tolerates ambiguity,
  • prioritizes coherence over predictive fidelity.

As a result:

  • linguistic consistency can mask weak or incorrect ontologies,
  • shared meaning can exist without shared experience,
  • symbolic agreement can suppress productive prediction error.

Thus, language makes civilization possible—but also enables large-scale misalignment between words and world.


11. Implications for Embodied Artificial Intelligence

If the preceding analysis is correct, then much of contemporary AI development is organized around the wrong abstraction layer. Most current systems begin with symbols—tokens, words, labels, categories—and attempt to infer world structure from statistical regularities in those symbols. The biological evidence, especially as revealed by DeafBlind cognition, suggests the opposite order.

World-models do not emerge from symbols.
Symbols emerge from already stabilized world-models.

This inversion has profound implications for how embodied artificial intelligence should be designed.


11.1 Learning must be gated, not rewarded

Standard AI architectures rely on continuous optimization signals:

  • reward functions,
  • loss minimization,
  • gradient descent applied uniformly across experience.

In biological systems, learning is selective and gated. Prediction Feedback (PF) does not constantly update the model. It intervenes when prediction fails in a way that threatens stability.

For embodied AI, this implies:

  • learning should be event-driven, not continuous,
  • plasticity should increase transiently under prediction error,
  • stable regimes should resist unnecessary change.

This avoids overfitting, reward hacking, and catastrophic forgetting. The system learns when it needs to, not because it is told to optimize constantly.
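The three bullets above can be sketched as a training-loop policy. The class, constants, and dynamics here are assumptions for illustration: plasticity spikes when prediction error is large and then decays, so stable regimes resist change by default.

```python
# Event-driven plasticity sketch: the learning rate is zero in stable
# regimes, spikes transiently on large prediction error, and decays back
# toward zero while predictions hold.

class GatedPlasticity:
    def __init__(self, threshold=0.2, spike=1.0, decay=0.5):
        self.threshold, self.spike, self.decay = threshold, spike, decay
        self.rate = 0.0                    # no learning until surprise

    def step(self, error):
        if abs(error) > self.threshold:
            self.rate = self.spike         # event: plasticity increases
        else:
            self.rate *= self.decay        # stability: plasticity relaxes
        return self.rate

g = GatedPlasticity()
rates = [g.step(e) for e in [0.0, 1.0, 0.1, 0.1]]
```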


11.2 Ontologies should emerge as predictive constraints, not object labels

Most AI world-models still discretize the world into:

  • object slots,
  • semantic labels,
  • predefined categories.

The present framework implies that this is premature. An embodied AI should not begin with “objects” at all. It should begin with predictive constraints that stabilize interaction.

In practice, this means:

  • representing entities as invariant relations between action and outcome,
  • encoding persistence as resistance to intervention,
  • treating identity as stability under transformation.

Only after such constraints are stabilized does it become meaningful to attach labels—if labels are needed at all.

This approach eliminates the Symbol Grounding Problem by construction, because nothing symbolic is primary.
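One possible encoding of this idea, under assumed toy types, is sketched below: an entity is nothing but a mapping from actions to outcomes, and identity is tested as stability of that mapping across contexts. No label appears anywhere in the representation.

```python
# Entity-as-constraint sketch: identity is defined as an invariant
# action -> outcome mapping, tested across contexts rather than declared.

def same_entity(profile_a, profile_b):
    """Identity = agreement of action->outcome mappings on shared actions."""
    shared = profile_a.keys() & profile_b.keys()
    return bool(shared) and all(profile_a[a] == profile_b[a] for a in shared)

# Hypothetical interaction profiles of "the same cup" in two contexts.
cup_here = {"grasp": "resists", "tilt": "pours", "push": "slides"}
cup_there = {"grasp": "resists", "tilt": "pours"}
same = same_entity(cup_here, cup_there)    # invariant under context change
```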


11.3 Reasoning without language is not a limitation

Many AI systems treat language as the substrate of reasoning: chains of thought, planners operating over tokens, or large language models simulating deliberation through text.

The present model shows that reasoning does not require symbols. It requires:

  • selection among alternatives,
  • anticipation of outcomes,
  • suppression of inferior trajectories.

Serial traversal of ontological constraints under PF regulation already performs these functions. Language merely externalizes this traversal for communication and social coordination.

For embodied AI, this means:

  • planning can occur entirely in state-space,
  • “inner speech” can be implemented as trajectory simulation,
  • linguistic explanation can be added later, as an interface.

Reasoning-first, language-second is not a handicap; it is biologically normal.
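A minimal state-space planner in this spirit is sketched below. It is illustrative, not a claim about any specific system: each candidate action sequence is rolled out covertly, and the one whose simulated end state minimizes PF deviation is selected. No tokens appear anywhere.

```python
# State-space planning sketch: covert rollout of candidate action
# sequences, selection by minimal simulated PF deviation at the end state.

def plan(state, candidates, step, deviation):
    """Pick the action sequence minimizing simulated terminal deviation."""
    def rollout(actions):
        s = state
        for a in actions:          # covert simulation: nothing is executed
            s = step(s, a)
        return deviation(s)
    return min(candidates, key=rollout)

# Toy world (assumed for illustration): state is a position, and zero
# deviation means reaching position 3.
step = lambda s, a: s + a
deviation = lambda s: abs(s - 3)
best = plan(0, [(1, 1), (1, 1, 1), (2, 2)], step, deviation)
```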


11.4 Language should be introduced late and carefully

Given language’s power to stabilize and transmit ontologies, it must be handled with caution in artificial systems.

If introduced too early:

  • symbols may freeze weak or incorrect constraints,
  • exploration may be prematurely suppressed,
  • the system may optimize linguistic coherence rather than predictive accuracy.

A biologically aligned architecture would therefore:

  • require extensive embodied interaction before language exposure,
  • bind words only to well-tested predictive constraints,
  • treat language as reversible and revisable, not authoritative.

This mirrors human development: language accelerates learning only after the world is already partially understood.


11.5 Collective intelligence vs individual intelligence

Language enables something individual cognition cannot: collective ontology management.

Through language, societies:

  • share stabilized constraints,
  • coordinate expectations,
  • accumulate knowledge across generations.

However, this introduces a structural asymmetry:

  • individuals predict the world through embodied interaction,
  • collectives predict through shared symbols.

For AI, this distinction matters. A system trained primarily on language is being trained on collective abstractions, not on the world itself. It inherits consensus without grounding.

An embodied AI grounded in PF-regulated interaction, by contrast, can:

  • evaluate linguistic input against its own predictions,
  • reject inconsistent symbols,
  • remain anchored in reality rather than consensus.

11.6 Alignment and robustness emerge naturally

One of the most pressing concerns in AI is alignment. In this framework, alignment is not imposed externally; it emerges structurally.

Because the system optimizes for:

  • prediction accuracy,
  • stability under interaction,
  • reduction of uncertainty,

it remains tied to the real consequences of its actions. There is no incentive to exploit reward loopholes or symbolic inconsistencies.

Misalignment becomes prediction error, not success.

This offers a fundamentally different route to robustness than rule-based alignment or instruction tuning.


11.7 A different development path for AI

Taken together, these implications suggest a radically different development path:

  1. Start with embodied interaction, not data ingestion
  2. Use PF deviation to gate learning
  3. Allow ontologies to emerge as constraints
  4. Enable serial traversal for planning
  5. Introduce language late, as a shared interface
  6. Preserve the ability to revise symbols based on experience

This path is slower than text-first AI—but it produces systems that know what their symbols mean, because meaning is grounded in prediction and action.


12. Conclusion: Intelligence Before Language

This work began with a simple but easily overlooked observation: when vision and hearing are absent, intelligence does not disappear. What disappears are symbols. What remains is the mechanism by which meaning, thought, and world-models are formed in the first place.

DeafBlind inner cognition exposes this mechanism with unusual clarity. Learning begins not with words or representations, but with prediction failure. When expectation breaks down, the system is forced to reorganize itself. Prediction Feedback (PF) gates this reorganization, enabling exploration, association, and eventual stabilization. What stabilizes are not symbols, but constraints—reliable invariants that reduce uncertainty across interaction.

These stabilized constraints are ontologies. They are not descriptions of the world; they are commitments about how the world can be expected to behave under action. Meaning arises at the moment of commitment, when future regulation is entrusted to a predictive structure that has proven itself across contexts.

Thought, in this framework, is not language. It is the serial traversal of ontological constraints under PF regulation. This traversal allows the system to simulate futures, evaluate alternatives, and guide action without execution. Language, when present, follows this process. It labels, compresses, and externalizes thought, but it does not generate it.

This ordering—ontology before symbol, prediction before language—is not a philosophical preference. It is imposed by biology and revealed by development. Language becomes powerful only because it attaches to a pre-existing structure of embodied meaning. Without that structure, symbols remain hollow.

The implications extend beyond neuroscience. They challenge the dominant trajectory of artificial intelligence. Systems trained primarily on language inherit collective abstractions without grounding. They may reproduce meaning, but they do not form it. A biologically aligned artificial intelligence must therefore begin where biological intelligence begins: with embodied prediction, failure, and stabilization.

Such systems will learn more slowly. They will resist premature abstraction. They will form fewer ontologies—but those ontologies will be real, in the only sense that matters: they will constrain action reliably in the world.

Language can then be added—not as a substrate of intelligence, but as a shared interface between intelligences. In this role, language regains its full importance: not as the origin of thought, but as the infrastructure of society.

The deeper conclusion is therefore not about DeafBlindness, language, or AI individually. It is about the nature of intelligence itself.

Intelligence is not the manipulation of symbols.
It is the stabilization of prediction under interaction.
Language makes intelligence visible—but it is not where intelligence begins.

This is intelligence before language—and it is the only foundation on which truly grounded artificial intelligence can be built.
