- The view that one of the most impressive and plausible ways of modelling cognitive processes is by means of a connectionist or parallel distributed processing computer architecture. In such a system, data are fed into a number of cells at an input level. These are each connected to a middle layer of cells, or hidden units, which in turn deliver an output. Such a system can be ‘trained’ by adjusting the weights a hidden unit accords to each signal from an earlier cell. The training is accomplished by ‘back-propagation of error’: if the output is incorrect, the network makes the minimum adjustment needed to correct it. Such systems prove capable of producing differentiated responses of great subtlety. For example, a system may be able to take written English as input and deliver phonetically accurate speech as output. Proponents of the approach also point out that networks bear a certain resemblance to the layers of cells that make up a human brain, and that, like us but unlike conventional computer programs, networks degrade gracefully: with local damage their performance goes blurry rather than crashing altogether. Current controversy concerns the extent to which the differentiated responses made by networks deserve to be called recognition, and the extent to which non-recognitional cognitive functions, including linguistic and computational ones, are well approached in these terms.
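The architecture the entry describes — an input layer connected to a middle layer of hidden units, with weights adjusted by back-propagation of error — can be sketched in a few dozen lines. This is a minimal illustrative example, not from the entry itself: the XOR task, the number of hidden units, the learning rate, and the epoch count are all assumptions chosen for brevity.

```python
import math
import random

random.seed(0)

HIDDEN = 4   # number of hidden units (illustrative assumption)
LR = 0.5     # learning rate (illustrative assumption)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Each hidden unit accords a weight to each input signal (plus a bias);
# the output unit likewise weights each hidden-unit signal.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)]
w_o = [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]

def forward(x):
    # Input cells feed the hidden layer, which in turn delivers an output.
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[i] * h[i] for i in range(HIDDEN)) + w_o[HIDDEN])
    return h, o

def total_error(data):
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

# XOR: a differentiated response a single-layer system cannot learn.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

initial_err = total_error(data)
for _ in range(10000):
    for x, t in data:
        h, o = forward(x)
        # Back-propagation of error: the output error is propagated back,
        # and each weight is nudged in proportion to its contribution.
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(HIDDEN)]
        for i in range(HIDDEN):
            w_o[i] -= LR * d_o * h[i]
        w_o[HIDDEN] -= LR * d_o
        for i in range(HIDDEN):
            for j in range(2):
                w_h[i][j] -= LR * d_h[i] * x[j]
            w_h[i][2] -= LR * d_h[i]
final_err = total_error(data)

print(initial_err, final_err)
```

Running the sketch shows the squared error over the four training cases falling as the weights are repeatedly adjusted — the sense in which such a system is ‘trained’ rather than programmed.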
Philosophy dictionary. Academic. 2011.