Structured Representations — Thoughts Without Content Are Empty
Mental acrobats — people who can memorise a shuffled deck of 52 cards in seconds and recall it days later — demonstrate, in extreme form, abilities that all of us use every day. Long before writing was invented, metre, rhyme, and melody served as memory aids. Mnemonic techniques, like the memory palace, let us store and retrieve information by mapping it onto imagined spaces. This raises a deeper question: is information stored in our minds not arbitrarily, but translated into decodable structures? In other words, do we create retrievable mental images of objects and concepts that enable us to remember and act? This seminar paper, submitted for the Seminar Neuronale Netze — Symbolverarbeitung — Kognition, addresses two core questions: what do we mean by a mind, and what do we mean by a representation?
The Problem of the Mind
Competing theories try to explain what a mind is and whether it can be replicated. Eliminative materialists hold that all mental phenomena reduce to physical processes in the brain; there is no "mind" as a separate phenomenon to explain. Non-reductionists, by contrast, argue either that complex mental behaviour emerges from the right physical organisation (emergentism), or that mental states are defined by their functional role, independent of whether they run on neurons, silicon, or — as Searle memorably put it — alien goo (functionalism). The paper works primarily within the non-reductionist tradition. A recurring challenge is the problem of intentionality: the capacity of a mental state to be about something. Searle's Chinese Room and Dreyfus's critique of AI both argue that formal, purely syntactic systems cannot, by themselves, give rise to genuine meaning. This problem remains unsolved and casts a long shadow over every theory of mental representation.
Symbols and Meaning
A naive view of symbols holds that a sign simply stands for something else — the scholastic formula aliquid stat pro aliquo. But this misses a crucial point: it is not the sign itself that carries meaning; meaning requires an interpreting subject and a context. Peirce defined a sign as a three-place relation: something that stands for something, for someone, in some respect. Shannon's information theory, powerful as it is, deliberately brackets semantics. Philosophers like Dretske tried to build a bridge from Shannon's notion of information to mental content — but as the paper argues, causal or informational relationships alone are not enough. Tree rings, smoke, and thermometers all carry information about their causes, but they do not represent in the full sense. Representation requires a cognitive system that processes that information in an intentional, meaning-bearing state.
A Deflationary Account of Mental Representation
A central focus of the paper is Frances Egan's Deflating Mental Representation (MIT Press, 2025), which challenges the dominant robust realist view that mental states have an intrinsic, objective content — a substantial "representation relation" binding them to the world. Egan's Deflationary Account of Mental Representations (DAMR) makes three key moves:
- No representation relation needed. Interpreting a mental state as a representation does not presuppose a special metaphysical link between that state and what it is about. Representation is a matter of interpretation, not of ontology.
- Content is not essential. The same type of mental state could, in principle, have had different content — or no content at all.
- Content attribution is pragmatic. When we ascribe content to a mental state, we do so for explanatory purposes. Content is a gloss — a useful interpretive overlay on processes that are not themselves intentional.

To explain how this gloss is assigned, Egan proposes external sortalism: we use the external world as a scaffold. We sort and understand internal states by relating them to external objects and categories. This mapping works because our model of the outer world (S1) and our perceptual experiences (S2) share a similar structure — a correspondence built up through years of sensorimotor interaction with the environment, beginning in early childhood.

Egan's position is carefully modest. She acknowledges that DAMR is a variant of representationalism that keeps what is useful and relaxes the problematic commitments. Yet questions remain open: how exactly is a particular gloss determined, and by whom?
Structured Representation
Gary Marcus argues that any adequate theory of the mind must satisfy three demands simultaneously:
- The mind represents abstract relationships between variables.
- The mind has a system of recursively structured representations.
- The mind distinguishes between representations of individuals and representations of kinds.

Marcus finds that existing connectionist proposals (e.g. Ramsey, Stich, and Garon) fail to meet these criteria. Semantic networks fare better, since they can in principle support arbitrarily recursive representations — what Marcus calls productivity of thought — as well as systematicity: if a mind can think "John loves Mary", it must also be able to think "Mary loves John". His own proposal, Treelets, uses directed graphs inspired by semantic networks to realise recursive representations in neural networks. But open questions persist: how are the concepts filling the tree nodes chosen? And on what principle are the empty slots populated? Unsupervised learning methods offer a natural answer — an approach also pursued by Dittadi and by Kumar & Schrater — though introducing inductive biases during training raises further questions about what is being assumed in advance.
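To make the three demands concrete, here is a minimal sketch of a treelet-like recursive structure. This is an illustration under assumed names (`Node`, `render`), not Marcus's own implementation: a node binds a relation to ordered role slots, and because slots can hold further nodes, the same machinery yields both systematicity and productivity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """A treelet-like node: a head concept plus ordered role fillers.

    Fillers are either atoms (strings) or further Nodes, so the
    structure is recursive by construction. Names are illustrative.
    """
    head: str
    slots: tuple = ()

def render(x) -> str:
    """Flatten a representation back into a readable string."""
    if isinstance(x, Node):
        return f"{x.head}({', '.join(render(s) for s in x.slots)})"
    return x

# Systematicity: the same combinatorial machinery that builds
# "John loves Mary" also builds "Mary loves John".
p1 = Node("loves", ("John", "Mary"))
p2 = Node("loves", ("Mary", "John"))

# Productivity: a whole proposition can fill a slot, recursively
# and without an in-principle limit on depth.
p3 = Node("believes", ("Bill", p1))

print(render(p1))  # loves(John, Mary)
print(render(p2))  # loves(Mary, John)
print(render(p3))  # believes(Bill, loves(John, Mary))
```

The open questions from the text show up directly in this sketch: nothing in the structure itself says which concepts should occupy `head` or how empty `slots` get filled — that is exactly the gap unsupervised learning is proposed to close.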
Closing Thoughts
The debate around mental and structured representations is less a technical problem than a conceptual one. Neither strict materialism nor a purely deflationary account fully captures the phenomenon. Approaches like Embodied Cognition suggest that representation and meaning are not located inside the individual mind alone, but emerge from the interplay of body, environment, and social action.
Thoughts without content are empty, intuitions without concepts are blind. — Kant, Critique of Pure Reason, 1787

Whether "thinking" can be adequately described through representations at all remains open. Representation may itself be only a heuristic metaphor — one that helps us organise the complexity of consciousness conceptually, but which future research in philosophy of mind and artificial intelligence may ultimately need to replace. The full seminar paper can be downloaded here.