The ``connectionist'' (or ``neural network'', or ``parallel distributed processing'') approach to artificial intelligence and computational cognitive science can be seen as one way for a system to (appear to) behave intelligently without being a ``symbol system'', while still being computational. On this approach, large numbers of very simple processors (``nodes'') are connected in multiple ways by communication links of varying strengths. Input nodes receive information from the external world. The information is propagated along the links to and among intermediate (or ``hidden'') nodes, finally reaching output nodes. If the output is not what was expected (e.g., if it does not match a ``training set'' of sample input-output pairs), the strengths of the links are adjusted by any of a variety of automatic techniques, such as ``back propagation'' and ``simulated annealing''. This process is repeated until the system ``settles down'' into a stable configuration that exhibits the desired (cognitive) behavior.

Connectionist systems and techniques have been developed for learning features of natural language, for aspects of visual perception, and for a number of other cognitive (as well as non-cognitive) phenomena. There is a wide range of connectionist methods, many of which are, in fact, highly representational, but most of which are ``distributively representational'': the kind of information that a symbolic artificial-intelligence program would represent using explicit knowledge-representation techniques is, instead, ``represented'' by the strengths and connectivity patterns of the links. Rather than having intelligence ``programmed'' into the system using explicit rules and representations, intelligence is sometimes held to ``emerge'' from the organization of the nodes and links. (Good surveys of connectionism are Graubard 1988; Cognitive Science, Vol. 9, No. 1 (1985); and--from a critical standpoint--Pinker & Mehler 1988; a useful tutorial is Knight 1989.)
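To make the scheme concrete, here is a minimal sketch of the kind of system described above: a small feed-forward network of simple nodes joined by weighted links, whose link strengths are adjusted by back propagation until its outputs match a training set. The task (XOR), the network size, and all numerical settings are illustrative assumptions, not drawn from the text; simulated annealing, also mentioned above, is a different adjustment technique and is not shown.

    # A toy connectionist system: input, hidden, and output nodes connected
    # by links of varying strengths, trained by back propagation.
    # Hypothetical example for illustration only (XOR task, 4 hidden nodes).
    import numpy as np

    rng = np.random.default_rng(0)

    # Training set of sample input-output pairs (here, the XOR relation).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Link strengths (weights), initialized to small random values.
    W1 = rng.normal(0.0, 1.0, (2, 4))   # input  -> hidden links
    b1 = np.zeros((1, 4))
    W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output links
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    learning_rate = 1.0
    for epoch in range(20000):
        # Forward pass: activation propagates from input to hidden to output nodes.
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Compare actual output with the expected output from the training set.
        error = output - y

        # Backward pass: adjust each link strength in proportion to its
        # contribution to the error (back propagation of the error signal).
        d_output = error * output * (1 - output)
        d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
        W2 -= learning_rate * hidden.T @ d_output
        b2 -= learning_rate * d_output.sum(axis=0, keepdims=True)
        W1 -= learning_rate * X.T @ d_hidden
        b1 -= learning_rate * d_hidden.sum(axis=0, keepdims=True)

    # After training, the network has ``settled'' into a stable configuration:
    # the XOR relation is represented distributively, in the pattern of link
    # strengths, rather than by any explicit symbolic rule.
    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

Nothing in this sketch contains a rule of the form ``if the inputs differ, output 1''; that behavior is carried entirely by the learned weights, which is the sense of ``distributed representation'' and ``emergence'' discussed above.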