Psyche and Praxis: Extended Cognition and the Austrian School

    Alexander J. Malt
    Durham University
    Email: a.j.malt@durham.ac.uk


    Abstract: Neoclassical economics assumes agents have a perfect internal decision-making procedure that operates on a set of complete, consistent, and transitive preferences (which might be viewed as ‘mental representations’). An alternative approach to cognition emphasises the coupling of an agent with environmental structures. I outline this notion of ‘ecological control’ and Clark and Chalmers’ ‘extended mind hypothesis’, suggesting its relevance to the Austrian school and illustrating its differences to the neoclassical model with examples from robotics. I then introduce the concept of ‘cognitive technology’ – external structures augmenting an agent’s mental capacities. I suggest money is such a technology and that treating it as such allows a response to the ‘expectational objection’ to Austrian business cycle theory.

    Keywords: Extended Cognition; Calculation; Cognitive Technology


    ‘Rational behaviour’, according to neoclassical theory (see, e.g. Nicholson: 2005), is assumed to be the maximisation of utility given a preference set that is complete, transitive, and continuous. The implication appears to be that agents gather all relevant information, compute the optimal course of action given their fully ordered and consistent set of preferences, and then execute that action – a process easily translated into an algorithm, specifying steps sequentially but not in real time (the sequence is important to the explanation, but not the time-frame in which the sequence is performed). Behavioural economists, to the extent that they dispute that we are fully rational, appear to evaluate us against this same definition.
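
    To make the algorithmic reading concrete, the following is a minimal sketch – my own illustration, not drawn from Nicholson or any cited model – of the procedure just described: a hypothetical agent with a fixed utility function enumerates affordable bundles, computes the maximising choice, and executes it. The Cobb-Douglas utility function, the prices, and the budget are purely illustrative assumptions.

        # Minimal sketch of the neoclassical decision procedure: gather the
        # alternatives, rank them by a fixed utility function, act on the maximum.
        # Utility function, prices, and budget are purely illustrative.

        def utility(x, y):
            # A Cobb-Douglas function standing in for a complete, transitive,
            # continuous preference ordering.
            return (x ** 0.5) * (y ** 0.5)

        def choose(prices, budget, grid=50):
            px, py = prices
            best, best_u = None, float("-inf")
            # Steps 1-2: enumerate ('gather') the affordable bundles on a coarse
            # grid and compute the optimum with respect to preferences and budget.
            for i in range(grid + 1):
                x = (budget / px) * i / grid
                y = (budget - px * x) / py
                u = utility(x, y)
                if u > best_u:
                    best, best_u = (x, y), u
            # Step 3: 'execute' the chosen action (here, simply return it).
            return best

        print(choose(prices=(2.0, 1.0), budget=10.0))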

    I suggest here that an alternative view of mind – ‘extended cognition’ – might be very fruitful to the Austrian school.

    The Extended Mind

    Clark and Chalmers’ seminal paper argues that cognition extends beyond the “boundary of brain and skull” and that the environment itself plays an “active role… in driving cognitive processes” (1998, 7). This argument is made on the basis of functional equivalence: “If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process. Cognitive processes ain’t (all) in the head!” (1998, 8) Hence: “the actual local operations that realise certain forms of human cognising include inextricable tangles of feedback, feed-forward, and feed-around loops: loops that promiscuously criss-cross the boundaries of brain, body, and world… Cognition leaks out into body and world” (Clark: 2011, xxviii).

    Extended cognition has been inspired by, amongst other things, the mobile robots of Rodney Brooks, which also illustrate how behaviour may be systematically determined by such complex loops. Brooks has built robots according to a ‘subsumption architecture’ (Brooks: 1985), and he contrasts this approach with the ‘sense-model-plan-act’ (SMPA) framework (Brooks: 1991). SMPA involves: generating representations on the basis of sensory data; determining the optimal action given those representations; executing that action. The neoclassical conception of rational economic decision-making can be characterised analogously: collecting relevant information; generating a complete, transitive, and continuous preference-set; computing the optimal decision with respect to preferences and budget; executing that decision.
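
    The parallel can be made explicit with a schematic SMPA loop; this is my own toy rendering rather than Brooks’ code, and every function below is a placeholder.

        # Schematic SMPA control loop: sense, build a world model, plan the optimal
        # action against that model, then act. All functions are placeholders.

        def sense():
            # Gather raw sensory data (cf. 'collect relevant information').
            return {"obstacle_distance": 1.2}

        def model(percepts):
            # Build an internal representation (cf. 'generate the preference-set').
            return {"clear_ahead": percepts["obstacle_distance"] > 0.5}

        def plan(world_model):
            # Compute the optimal action given the model (cf. 'compute the decision').
            return "forward" if world_model["clear_ahead"] else "turn"

        def act(action):
            # Execute the chosen action (cf. 'execute the decision').
            print("executing:", action)

        # The cycle is strictly sequential: nothing happens until it completes.
        for _ in range(3):
            act(plan(model(sense())))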

    Subsumption architectures stand in stark contrast. First, robots are situated within the world, dealing with it directly rather than via representational models – sensors automatically produce output signals given appropriate environmental ‘triggers’ (generating computable representations is unnecessary). Second, robots are (non-trivially) embodied, i.e. their physical components all perform given functions upon appropriate environmental ‘triggers’, with some components able to override – or ‘subsume’ – others, allowing flexible and fluid behaviour in a dynamic environment (e.g. a sensor detecting an object in front of a moving robot will override the component producing forward movement, allowing collision to be prevented). Third, robots manifest ‘intelligence’ but, unlike SMPA, this intelligence is not strictly determined by on-board computational processes – rather, intelligent behaviour emerges from the interaction between the layered physical components and the world (i.e. often there is no single component identifiable as the ‘cause’ of a certain action). Brooks’ robots therefore make use of what Clark has referred to as ‘ecological control’: “goals are not achieved by micromanaging every detail of the desired action or response but by making the most of robust, reliable sources of relevant order in the bodily or worldly environment of the controller” (Clark: 2011, 5).
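
    By way of contrast, a toy subsumption controller might look like the following sketch, which assumes just two layers for simplicity: a lower ‘wander’ layer proposes forward movement by default, and a higher ‘avoid’ layer subsumes it whenever a (hypothetical) proximity sensor fires. No world model is built; behaviour falls out of whichever layer wins at each tick.

        # Toy two-layer subsumption controller. Higher layers may 'subsume'
        # (override) the outputs of lower layers; no world model is built.
        import random

        def wander_layer(sensors):
            # Lowest layer: always propose moving forward.
            return "forward"

        def avoid_layer(sensors):
            # Higher layer: fires only when the proximity sensor is triggered,
            # overriding whatever the lower layer proposed.
            return "turn_left" if sensors["proximity"] < 0.3 else None

        def controller(sensors):
            # The higher-priority layer subsumes the lower one when it fires.
            command = wander_layer(sensors)
            override = avoid_layer(sensors)
            return override if override is not None else command

        for tick in range(5):
            sensors = {"proximity": random.random()}   # environmental 'trigger'
            print(tick, round(sensors["proximity"], 2), controller(sensors))

    The point of the sketch is that the ‘decision’ is nowhere computed against a model; it is simply whichever layer the environment happens to trigger.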

    In contrast, then, to an internal, brain-bound thinker/computer concerned with generating and manipulating representations algorithmically (the Cartesian ‘cogito’), mind is instead conceived as non-trivially situated within and acting upon the environment in real time (Heideggerian ‘Dasein’) – organisms are coupled to the world, and this two-way interaction is considered “a cognitive system in its own right” (Clark and Chalmers: 1998, 8). Clark formulates the “first moral of [extended] cognition” as avoiding “excessive world-modelling”, confining such modelling “to the demands of real-time, behaviour-producing systems” (Clark: 1998, 23); he dubs this move away from rich inner symbols ‘inner symbol flight’ (2001, 5).

    The themes of coupling and inner symbol flight have led some theorists to reject symbolic, algorithmic, computational strategies in favour of using dynamical systems theory (DST) to model cognitive processes geometrically (van Gelder: 1998; 1995; Beer: 2003). “Cognitive processes”, on this view, “may be state-space evolution within these very different types of systems” (van Gelder: 1995, 346). Hence, the dynamical hypothesis claims “cognitive agents are dynamical systems” (van Gelder: 1998, 615), where a dynamical system has one or more of the following characteristics: it operates on numerical values; time itself is a variable such that “amounts of change in state are systematically related to amounts of lapsed time” (ibid. 618); both changes in values and the rate of those changes are explanatory factors (implying cognition is best modelled by differential equations – ibid. 619). DST explanations require no symbols in the classical sense.
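
    A minimal illustration of what a dynamical, rather than symbolic, model looks like: two continuous state variables – one notionally for the agent, one for the environment – coupled by differential equations and integrated through time. The equations and coefficients are arbitrary placeholders, chosen only to exhibit van Gelder’s three features (numerical state, time as a genuine variable, rates of change doing the explanatory work).

        # Toy coupled agent-environment dynamical system:
        #   da/dt = -a + e          (agent state is pulled toward the environment)
        #   de/dt = -0.5*e + 0.2*a  (environment is gently reshaped by the agent)
        # Behaviour is a trajectory through state space, not a sequence of symbol
        # manipulations. Equations and coefficients are placeholders.

        def simulate(a=1.0, e=0.0, dt=0.01, steps=500):
            trajectory = []
            for step in range(steps):
                da = (-a + e) * dt              # change scales with elapsed time dt
                de = (-0.5 * e + 0.2 * a) * dt
                a, e = a + da, e + de
                trajectory.append((step * dt, a, e))
            return trajectory

        for t, a, e in simulate()[::100]:       # sample every 100th state
            print(f"t={t:.2f}  agent={a:.3f}  environment={e:.3f}")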

    Cognitive Technology

    Clark nonetheless cautions against the wholesale abandonment of complex mediating inner states in favour of, say, approaches based on DST: “they [dynamical approaches] should be treated as complementary to the search for computational and representational understandings” (Clark: 1998, 102). One phenomenon which appears to be irreducibly symbolic is language[1] – however, to acknowledge this is not necessarily to revert to a Cartesian view of words as merely an arbitrary pairing of sound/sign and concept, i.e. as linking external form with internal representation. Rather, Clark hypothesises that words are a kind of ‘scaffold’, i.e. tools which augment our cognitive processes and thereby allow us to solve a range of problems unsolvable by the ‘naked brain’ (ibid. 195). Actions are ‘scaffolded’ if their successful execution requires external support, e.g. an adult’s hands provide a ‘scaffold’ allowing a young infant to walk (an ability otherwise beyond them). If language is such a scaffold, then words might be considered tools that “squeeze maximum coherence and utility from fundamentally short-sighted, special-purpose, internally fragmented minds” (ibid. 33).

    Scaffolding is explicitly linked by Clark to Soviet psychology. Vygotsky argued that the “sign acts as an instrument of psychological activity in a manner analogous to the role of a tool in labour” (Vygotsky: 1978, 52). In his Thought and Language, Vygotsky describes how children are able to accomplish certain tasks by using egocentric speech that serves as “mental orientation” (1993, 228). Taking up such themes, Clark cites studies where training with symbols allowed chimpanzees to solve the reverse reward contingency task (where apes were presented with a choice between a large array of food and a small array, and given the non-selected array, i.e. if the ape chooses plate A they receive B, and vice versa). Although initially unable to obtain the larger array by selecting the smaller, training and use of Arabic numerals allowed the apes to overcome the problem and gain the larger array (see Boysen et al.: 1996). Clark concludes: “the act of labelling creates a new realm of perceptible objects upon which to target basic [cognitive] capacities” (Clark: 2011, 45). That is, words might be considered cognitive technology: “The computational value of a public system of essentially context-free, arbitrary symbols, lies… in the way such a system can push, pull, tweak, cajole, and eventually cooperate with various non-arbitrary, modality-rich, context-sensitive forms of biologically basic encoding” (ibid. 47).

    Monetary Scaffolds?

    Clark has applied extended cognition to new institutionalist economics (1998; 1997): institutions function as scaffolds. Human psychology, first, drives the creation and evolution of such scaffolds and, second, is constrained by those scaffolds (1997, 287). Hence Clark conjectures “the frontiers of institutional economics may turn out to border rather closely on those of cognitive psychology, cognitive science, and the theory of complex nonlinear systems and neural networks” (1997, 288). From the Austrian perspective, I suggest money is such a scaffold.

    Particularly relevant here are ‘stigmergic’ algorithms which allow indirect communication between agents via perception and modification of the local environment. Actions reshape the environment, and the resultant environmental structures influence actions. Both trail recruitment in ants and nest building in termites have been treated as stigmergic, and the principle has been utilised – in conjunction with subsumption architectures – in ‘collective robotics’ (see Beckers et al.: 1994). Stigmergic algorithms dictate actions in accordance with structures that are themselves the objects of those actions. Clark highlights the following advantages of stigmergy (1998, 76): no internal encoding or decoding is required; no load is placed on memory; environmental signals persist if an actor fails or moves to another area.[2] Stigmergy is therefore an elegant account of how “computational power and expertise is spread across a heterogeneous assembly of brains, bodies, artefacts, and other external structures” (ibid. 77). Clark postulates that economic institutions are stigmergic and their creation and evolution is driven by the profit and loss mechanism (ibid. 191).
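
    A stripped-down stigmergic loop, assuming nothing beyond the features just listed, might run as follows: agents deposit ‘marks’ at sites they visit, the marks decay, and each agent’s next move is biased by the marks it currently perceives. No agent stores a map or sends a message; the coordination lives entirely in the modified environment.

        # Minimal stigmergy sketch: agents communicate only by depositing and
        # sensing marks in a shared environment. No internal memory or messages.
        import random

        SITES = 10
        marks = [0.0] * SITES          # the shared, modifiable environment

        def step(position):
            # Perceive the neighbouring sites and prefer the more strongly marked one.
            left, right = (position - 1) % SITES, (position + 1) % SITES
            weights = [1.0 + marks[left], 1.0 + marks[right]]
            new_pos = random.choices([left, right], weights=weights)[0]
            marks[new_pos] += 1.0      # the action reshapes the environment...
            return new_pos             # ...and the environment shapes later actions

        agents = [random.randrange(SITES) for _ in range(5)]
        for _ in range(200):
            marks[:] = [m * 0.95 for m in marks]        # marks decay over time
            agents = [step(p) for p in agents]

        print([round(m, 1) for m in marks])  # marks concentrate where agents congregate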

    Money, if stigmergic, is stigmergic at a ‘deeper’ level – profit and loss presuppose money rather than explain it. Nonetheless, stigmergy is implied in Hayek’s description of the price system’s coordinating function:

    We must look at the price system as… a mechanism for communicating information if we want to understand its real function… The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement. (Hayek: 2009, 86-7 italics mine)

    Ecological control is also strongly hinted at, as economic activity is coordinated on the basis of information continuously transmitted in real-time via price fluctuations. However, I wish to focus on Hayek’s observation that money is symbolic and suggest it functions as cognitive technology. Here I briefly cite two possible sources of evidence for this contention: first, capuchin monkeys trained to use tokens; second, a description of the indigenous Brazilian Pirahã tribe’s trade.

    Chen et al. (2006) trained capuchins to use a ‘fiat currency’ – tokens exchangeable for food – and found that the capuchins’ behaviour conformed to the generalised axiom of revealed preference in response to price and wealth shocks, and that it exhibited reference dependence and loss aversion. Capuchins, however, not only understand that tokens have value, but can also use those tokens as a scaffold. Addessi and Rossi (2011) found that tokens improved the performance of capuchin monkeys in reverse-reward contingency tasks: “tokens allowed capuchins to achieve psychological distancing from the incentive features of food, leading them to avoid impulsive choices in favour of more advantageous alternatives” (2011, 853; see also Anderson et al.: 2008).

    Another, perhaps stronger, source of evidence comes from the Pirahã – an indigenous Brazilian tribe. The Pirahã have no number words in their language, and appear to have little or no success at tasks involving matching exact quantities (see Everett: 2005; see also Gordon: 2004). Hence, Frank et al. (2008b) argue that number words function as cognitive technology (interestingly, English speakers perform similarly when given a verbal task to perform whilst attempting the matching task, implying that the ability to complete the latter requires the linguistic system – see Frank et al.: 2008a). In this regard, it is interesting to note Everett’s description of the Pirahã’s trade with Brazilians:

    Riverboats come regularly to the Pirahã villages during the Brazil nut season. This contact has probably been going on for more than 200 years. Pirahã men collect Brazil nuts and store them around their village for trade… They will point at goods on the boat until the owner says that they have been paid in full. They will remember the items they received (but not exact quantities) and tell me and other Pirahã what transpired, looking for confirmation that they got a good deal. There is little connection, however, between the amount they bring to trade and the amount they ask for. (Everett: 2005, 626)

    Aside from implying that money might be path-dependent on number, the description notes that prices do not seem to emerge from Pirahã trading practices, and that the Pirahã therefore have no ‘benchmark’ allowing them to judge the relative worth of their transactions (beyond direct comparison with the trades of their peers).

    Money as cognitive technology would allow a symbol – manipulable via our basic cognitive capacities – to encapsulate information generated by trading activities. If so, money would be characterised by ‘dynamical-computational complementarity’ (Clark: 2011, 27): as a symbolic vehicle, it is manipulated like other symbols (i.e. computationally); its content (value) is formed by a complex stigmergic process which coordinates supply and demand and is in turn determined by those very market forces (i.e. dynamically[3]). An adequate account of both aspects and their relation is a task for future research.
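
    The two aspects can be caricatured in a few lines, with everything below assumed purely for illustration: the ‘dynamical’ side is a tâtonnement-style price adjustment driven by excess demand, while the ‘computational’ side is an agent doing ordinary symbolic arithmetic with whatever price that process has produced.

        # Caricature of 'dynamical-computational complementarity' for money.
        # Dynamical side: the price evolves under excess demand, a market-level
        # process no individual computes. Computational side: an agent uses the
        # resulting number in plain symbolic arithmetic. All functional forms
        # and numbers are illustrative assumptions.

        def excess_demand(price):
            return 10.0 - 2.0 * price               # placeholder demand minus supply

        def settle_price(price=1.0, dt=0.1, steps=100):
            for _ in range(steps):
                price += dt * excess_demand(price)  # dp/dt proportional to excess demand
                price = max(price, 0.01)
            return price

        price = settle_price()                      # value formed dynamically
        budget = 30.0
        affordable_units = budget // price          # value used computationally
        print(f"price={price:.2f}, affordable units={affordable_units:.0f}")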

    ‘Entrepreneurial Stupidity’: An Outline of a Response

    Criticisms of the Austrian business cycle theory (ABCT) by, amongst others, Caplan (a; b; c) and Tullock (1988) are often based on rational expectations. Such criticisms ask why it is that, according to the Austrian theory, entrepreneurs, knowing that the interest rate has been artificially lowered (below the natural rate), do not anticipate rising interest rates in their calculations – in short, why do entrepreneurs have irrational expectations? In Caplan’s words:

    Given that interest rates are artificially and unsustainably low, why would any businessman make his profitability calculations based on the assumption that the low interest rates will prevail indefinitely? No, what would happen is that entrepreneurs would realize that interest rates are only temporarily low, and take this into account… In short, the Austrians are assuming that entrepreneurs have strange irrational expectations. (Caplan: a)

    And:

    “I can’t figure out why Rothbard thinks businessmen are so incompetent at forecasting government policy. He credits them with entrepreneurial foresight about all market-generated conditions, but curiously finds them unable to forecast government policy, or even to avoid falling prey to simple accounting illusions generated by inflation and deflation.” (Caplan: b)

    Hence, Caplan maintains:

    “The ABC requires bizarre assumptions about entrepreneurial stupidity in order to work: in particular, it must assume that businesspeople blindly use current interest rates to make investment decisions.” (Caplan: a)

    The point, I think, is repeated by Tullock, whose second ‘nit’ with the Austrian theory concerns “Rothbard’s apparent belief that business people never learn. One would think that business people might be misled in the first couple of runs of the Rothbard cycle and not anticipate that the low interest rate will later be raised. That they would continue unable to figure this out, however, seems unlikely” (1988, 73). What entrepreneurs should do, says Caplan, is “make investments which will be profitable when interest rates will later rise” and “refrain from making investments which would be profitable only on the assumption that interest rates will not later rise” (Caplan: a).
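
    The calculational point at issue can be put in a toy discounting example; the cash flows and rates below are invented solely to illustrate Caplan’s claim that a project may look profitable when appraised at the artificially low current rate, yet unprofitable if later cash flows are discounted at the higher rate expected to prevail.

        # Toy net-present-value comparison: the same long-lived project appraised
        # (a) naively at the current, artificially low interest rate, and
        # (b) on the expectation that rates revert upward after year 2.
        # Cash flows and rates are invented for illustration only.

        outlay = 100.0
        cash_flows = [12.0] * 10                    # ten years of returns

        def npv(rates):
            total, discount = -outlay, 1.0
            for cf, r in zip(cash_flows, rates):
                discount /= (1.0 + r)               # compound the discount factor yearly
                total += cf * discount
            return total

        naive = npv([0.02] * 10)                    # assume the low rate prevails indefinitely
        foresighted = npv([0.02] * 2 + [0.08] * 8)  # expect rates to rise after year 2

        print(f"NPV at permanently low rates: {naive:6.2f}")
        print(f"NPV with anticipated rise:    {foresighted:6.2f}")

    On these (invented) numbers the first appraisal is positive and the second negative – the sort of divergence Caplan claims foresighted entrepreneurs would take into account.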

    If money is considered a cognitive technology then – on the psychological level – entrepreneurial calculation may require a monetary scaffold. An (initial, highly tentative) outline of a response to the ‘expectational objection’ suggests itself: money is a cognitive technology that makes calculation possible; accurate calculation becomes impossible, however, when the technology that makes it possible is undermined.

    References

    Addessi, Elsa, and Sabrina Rossi. 2011. ‘Tokens Improve Capuchin Performance in the Reverse-Reward Contingency Task’ in Proceedings of the Royal Society B: Biological Sciences, Vol. 278, pp. 849-854

    Anderson, James R., Yuko Hattori, and Kazuo Fujita. 2008. ‘Quality Before Quantity: Rapid Learning of Reverse-Reward Contingency by Capuchin Monkeys (Cebus apella)’ in Journal of Comparative Psychology, Vol. 122 (4), pp. 445-8

    Beckers, R., O. Holland, and J. Deneubourg. 1994. ‘From Local Actions to Global Tasks: Stigmergy and Collective Robotics’ in Artificial Life, Vol. 4

    Beer, Randall D. 2003. ‘The Dynamics of Active Categorical Perception in an Evolved Model Agent’ in Adaptive Behaviour, Vol. 11, pp. 209-43

    Boeckx, Cedric. 2006. Linguistic Minimalism: Origins, Concepts, Methods, and Aims [Oxford, Oxford University Press]

    Boysen, S. T., G. G. Berntson, M. B. Hannan, and J. T. Cacioppo. 1996. ‘Quantity-Based Interference and Symbolic Representations in Chimpanzees (Pan troglodytes)’ in Journal of Experimental Psychology: Animal Behavior Processes, Vol. 22 (1), pp. 76-86

    Brooks, Rodney. 1985. ‘A Robust Layered Control System for a Mobile Robot’ in Massachusetts Institute of Technology Artificial Intelligence Laboratory A.I. Memo No. 864

    Brooks, Rodney. 1991. ‘Intelligence Without Reason’ in Massachusetts Institute of Technology Artificial Intelligence Laboratory A.I. Memo No. 1213

    Caplan, Bryan. a. ‘Why I am not an Austrian Economist’ [Accessed: 09/11/2013, http://econfaculty.gmu.edu/bcaplan/whyaust.htm]

    Caplan, Bryan. b. ‘Comments on Austrian Business Cycle Theory and Rothbard’s America’s Great Depression’ [Accessed: 09/11/2013, http://econfaculty.gmu.edu/bcaplan/whyaust.htm]

    Caplan, Bryan. c. ‘Rejoinder to my Critics on Austrian Business Cycle Theory’ [Accessed: 09/11/2013, http://econfaculty.gmu.edu/bcaplan/aust3]

    Chen, Keith M., Venkat Lakshminarayanan, and Laurie R. Santos. 2006. ‘How Basic are Behavioural Biases? Evidence from Capuchin Monkey Trading Behaviour’ in Journal of Political Economy, Vol. 114, pp. 517-537

    Chomsky, Noam. 1957a. ‘Three Models for the Description of Language’ in I. R. E. Transactions on Information Theory, Vol. IT-2, Proceedings of the Symposium on Information Theory

    Chomsky, Noam. 1957b. Syntactic Structures [New York, Mouton de Gruyter]

    Clark, Andy. 2011. Supersizing the Mind: Embodiment, Action, and Cognitive Extension [Oxford, Oxford University Press]

    Clark, Andy. 2001. Mindware: An Introduction to the Philosophy of Cognitive Science [Oxford, Oxford University Press]

    Clark, Andy. 1998. Being There: Putting Brain, Body, and World Together Again [Cambridge, MIT Press]

    Clark, Andy. 1997. ‘Economic Reason: The Interplay of Individual Learning and External Structure’ in The Frontiers of New Institutional Economics, edited by J. Drobak and J. Nye [San Diego, Academic Press]

    Clark, Andy, and David Chalmers. 1998. ‘The Extended Mind’ in Analysis, Vol. 58, pp. 7-19

    Elman, J. 2004. ‘Language as a Dynamical System’ in Mind as Motion, edited by R. Port and T. Van Gelder [Cambridge MA, MIT Press]

    Everett, Daniel L. 2005. ‘Cultural Constraints on Grammar and Cognition in Pirahã: Another Look at the Design Features of Human Language’ in Current Anthropology, Vol. 46, pp. 621-646

    Fama, Eugene. 2010. ‘Interview with Eugene Fama’ in The New Yorker, Jan. 13. Accessed: 09/11/2013, http://www.newyorker.com/online/blogs/johncassidy/2010/01/interview-with-eugene-fama.html

    Frank, Michael C., Evelina Fedorenko, and Edward Gibson. 2008a. ‘Language as a Cognitive Technology: English-Speakers Match Like Pirahã when you Don’t let them Count’ in 30th Annual Meeting of the Cognitive Science Society in Washington DC

    Frank, Michael C., Daniel L. Everett, Evelina Fedorenko, and Edward Gibson. 2008b. ‘Number as a Cognitive Technology: Evidence from Pirahã Language and Cognition’ in Cognition

    Gleick, James. 1998. Chaos: Making a New Science [London, Vintage]

    Gordon, Peter. 2004. ‘Numerical Cognition Without Words: Evidence from Amazonia’ in Science, Vol. 306, pp. 496-499

    Hayek, Friedrich August. 2009 [1945]. ‘The Use of Knowledge in Society’ in American Economic Review, Vol. XXXV, pp. 519-30; reprinted in Individualism and Economic Order [Auburn, Ludwig von Mises Institute]

    Jackendoff, Ray. 2007. Language, Consciousness, Culture: Essays on Mental Structure [London, MIT Press]

    Nicholson, Walter. 2005. Microeconomic Theory: Basic Principles and Extensions [United States, Thomson South-Western]

    Ormerod, Paul. 2000. Butterfly Economics: A New General Theory of Social and Economic Behaviour [New York, Perseus Books]

    Pinker, Steven. 1994. The Language Instinct: The New Science of Language and Mind [London, Penguin]

    Tullock, Gordon. 1988. ‘Why the Austrians are Wrong about Depressions’ in The Review of Austrian Economics, Vol. 2, pp. 73-78

    Van Gelder, Tim. 1998. ‘The Dynamical Hypothesis in Cognitive Science’ in Behavioural and Brain Sciences, Vol. 21, pp. 615-65

    Van Gelder, Tim. 1995. ‘What Might Cognition be, if not Computation?’ in Journal of Philosophy, Vol. 92, pp. 345-81

    Vygotsky, Lev S. 1978. Mind in Society: The Development of Higher Psychological Processes, edited by Michael Cole, Vera John-Steiner, Sylvia Scribner, and Ellen Souberman [London, Harvard University Press]

    Vygotsky, Lev S. 1993. Thought and Language, translated by Alex Kozulin [London, MIT Press]

     


    [1] Models of ‘higher’ cognitive abilities – such as language – based on non-symbolic, pattern-completing connectionist networks or dynamical systems face a formidable difficulty. Such models (see, e.g., Elman: 2004), as Jackendoff points out (2007, 28), appear to be variants of finite state Markov processes and, consequently, vulnerable to Chomsky’s criticisms of these systems’ suitability for modelling language (1957a and 1957b, §3.1-3.3; for introductory formulations of Chomsky’s critique see Pinker: 1994, 89-97 and Boeckx: 2006, §2.4.1.1).

    [2] Roboticists have noted the robustness of stigmergic strategies: “a stopped robot simply becomes a static obstacle; other robots avoid it and any [objects] it was carrying are soon scavenged” (Beckers et al.: 1994).

    [3] This dynamical aspect bears on criticisms of ABCT based on ‘prediction’. Caplan, responding to Sorens, writes: “I don’t know what you mean when you say that business cycles are ‘continuous, institutionalized, and regular.’ Can you predict when the next downturn will be, how severe it will be, and how long it will last? Clearly they aren’t regular in that sense. What sense do you mean?” (Caplan: c). An alternative formulation of this point is Tullock’s ‘third nit’: “Rothbard’s apparent belief that the depression and booms are cyclical. There are statistical tests that will detect cycles if they exist and these have been applied to the historic data. The result… is a random walk rather than a cycle” (1988, 74). A more extreme variant of this is given by Fama in a 2010 interview: “I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning” (Fama: 2010, italics mine). He then clarified: “[Bubbles] have to be predictable phenomena. I don’t think any of this was particularly predictable” (ibid. italics mine). Fama’s remarks come close to endorsing positivism: terms are meaningful insofar as they refer to objects; theories are meaningful insofar as they are verifiable/falsifiable. Both Caplan and Tullock suggest that the unpredictability of the cycle’s inflection points speaks against cyclicity itself (in favour of a ‘random walk’). Such remarks overlook a critically important point – tangentially related to the psychology-economics connection, hence only a footnote here – concerning nonlinear systems and, in particular, chaotic systems. Such systems are unpredictable but determinate – in Gleick’s words, they are “order masquerading as randomness” (1998, 22). If the economy is an instance of a chaotic system, then its illusory randomness precludes the naïve positivist approach of identifying business cycles by looking for regular intervals between the cycles’ inflection points. Precisely because this randomness is illusory, the possibility of cyclicity is preserved. Ormerod, who (I believe) embraces chaos theory from a post-Keynesian perspective, therefore writes: “in the longer run, there is considerable regularity of behaviour. The often unpredictable interactions between individuals lead to a certain kind of self-regulation in the behaviour of the system as a whole. We cannot say exactly where the system will be at any point in time, but we can often set bounds around the areas in which it will move” (2000, xi – the title of Ormerod’s book, Butterfly Economics, is in part intended to evoke one of the more famous images of chaos theory: the Lorenz attractor).
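
    The deterministic-yet-unpredictable point can be illustrated with the standard logistic map, x_{n+1} = r·x_n·(1 - x_n), here at r = 4 – a textbook chaotic regime, not anything drawn from the economic literature. Two trajectories starting a hair’s breadth apart soon diverge completely, yet both are generated by the same simple rule and remain bounded within [0, 1].

        # Deterministic but unpredictable: the logistic map at r = 4.
        # Two trajectories with nearly identical initial conditions soon diverge
        # (sensitive dependence), yet both stay bounded and obey the same rule.

        def logistic_trajectory(x0, r=4.0, steps=30):
            xs = [x0]
            for _ in range(steps):
                xs.append(r * xs[-1] * (1.0 - xs[-1]))
            return xs

        a = logistic_trajectory(0.2000000)
        b = logistic_trajectory(0.2000001)   # differs by one part in two million

        for n in (0, 10, 20, 30):
            print(f"n={n:2d}  a={a[n]:.6f}  b={b[n]:.6f}  |a-b|={abs(a[n] - b[n]):.6f}")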