Perceptual Learning

Learning is undoubtedly one of the most important functions of the brain. If a task must be performed often to improve the chances of survival and reproduction, then learning to do it well is important. Although one often associates learning with high-level abilities, such as mathematics, animals display learning at multiple levels, including in motor and perceptual tasks. Perceptual learning can aid an animal in many ways, such as detecting prey or predators and avoiding obstacles while running. In human activities, learning can help with sports, performing surgery, piloting fast planes, and many other functions. Learning can also help on the biomedical front, as we begin to help people regain sensory function through prostheses or transplantation. Techniques of perceptual learning can be used to teach patients how best to use their regained sensory abilities.

In our laboratory, we focus on perceptual learning in the visual domain. Psychophysical experiments show that humans have powerful visual perceptual learning, with many surprising properties. For instance, if one has to tell whether two vertical lines are aligned, the threshold for correct responses decreases with practice. Suppose that one trains to perform the task with the lower line displaced to the right. Then, if after learning the line is presented displaced to the left, the threshold is high again. In other words, for this line-alignment task, learning in one condition (in this case, position) does not transfer to another. This lack of transfer occurs in many (but not all) perceptual-learning tasks, including discrimination of orientation and of direction of motion. Many investigators have suggested that the lack of transfer indicates the fine-tuning of a low-level neural mechanism, such as the tuning of orientation-selective cells. However, even if this explanation holds, it is not the full story, because other tasks do show transfer. The tendency to transfer increases with the easiness of the task, suggesting a more complex mechanism.

We have developed a general probabilistic control-theory framework to model perceptual learning. This framework extends statistical learning theory to the Bayesian domain. In this framework, learning may improve different operational processes, such as the fine-tuning of neural mechanisms (as postulated to explain the lack of transfer), the acquisition of prior distributions, or the learning of optimal decision parameters. Simulations of a variety of visual tasks show that this framework accounts for different kinds of learning. These different forms of learning can occur because, as pointed out above, different operational processes can be improved.
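As a rough illustration of what we mean by separate operational processes (a sketch under simplifying assumptions, not our actual model), the Python fragment below casts one detection trial as a Bayesian decision in which the prior, the internal transduction gain, and the decision criterion are each parameters that learning could adjust independently. All names and numerical values here are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy Bayesian observer for yes/no contrast detection. Three components
    # that learning could, in principle, improve separately:
    #   prior_signal - the observer's prior probability that a signal is present
    #   gain         - fine-tuning of the internal (neural) transduction
    #   criterion    - the decision parameter applied to the posterior
    def run_trial(signal_present, contrast=1.0, prior_signal=0.5,
                  gain=1.0, criterion=0.5, noise_sd=1.0):
        # Internal response: transduced contrast plus neural noise.
        response = gain * contrast * signal_present + rng.normal(0.0, noise_sd)

        # Gaussian likelihoods of that response under the two hypotheses.
        def likelihood(mean):
            return np.exp(-0.5 * ((response - mean) / noise_sd) ** 2)

        l_signal = likelihood(gain * contrast)  # expected response if signal shown
        l_noise = likelihood(0.0)               # expected response if noise only

        # Posterior probability of "signal", combining likelihood and prior.
        posterior = (l_signal * prior_signal /
                     (l_signal * prior_signal + l_noise * (1.0 - prior_signal)))

        # Decision: report "signal" when the posterior exceeds the criterion.
        return posterior > criterion

    # Example: improving only the internal gain (one operational process)
    # already raises the proportion of correct responses.
    trials = [rng.random() < 0.5 for _ in range(2000)]
    for g in (0.5, 1.5):
        correct = [run_trial(s, gain=g) == s for s in trials]
        print(f"gain = {g}: proportion correct = {np.mean(correct):.2f}")

In this toy observer, improving the gain alone already raises the proportion of correct responses, which is the sense in which fine-tuning a neural mechanism can produce learning without any change to the prior or to the decision parameter.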

figure_5

Figure 5: Simulations of Contrast-detection Learning with Our Bayesian Framework. The prior distribution for contrast detection is exponential (top-left panel). Learning is performed by repetition of the task, eventually reducing the number of errors (top-right panel). In this particular simulation, one of the operational processes that improve is the internal neural mechanism (as illustrated by the changes in the input-output response functions in the middle panels). This improvement causes the system's responses to change, resulting in a better decision process (bottom panels). (The decision process is the selection indicated by the red line, which divides "signal" decisions from "noise" decisions.)
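The following sketch mimics the logic of this simulation under simplifying assumptions (it is not the simulation that produced Figure 5): contrasts are drawn from an exponential prior, repetition of the task gradually improves the internal gain through a hypothetical learning rule, and the boundary separating "signal" from "noise" responses is updated accordingly, so the error rate falls across blocks. The learning rule and all numbers are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    mean_contrast = 1.0        # mean of the exponential prior on contrast
    noise_sd = 1.0             # sd of the internal neural noise
    gain = 0.4                 # initial (pre-learning) transduction gain
    target_gain = 2.0          # asymptotic gain reached through practice
    learning_rate = 0.1
    trials_per_block = 1000

    for block in range(8):
        # Boundary midway between the expected "noise" response (zero) and
        # the expected "signal" response (gain times the mean contrast).
        boundary = 0.5 * gain * mean_contrast
        errors = 0
        for _ in range(trials_per_block):
            signal_present = rng.random() < 0.5
            contrast = rng.exponential(mean_contrast) if signal_present else 0.0
            response = gain * contrast + rng.normal(0.0, noise_sd)
            if (response > boundary) != signal_present:
                errors += 1
        print(f"block {block}: error rate = {errors / trials_per_block:.2f}")
        gain += learning_rate * (target_gain - gain)  # practice improves the gain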


Although the new framework allows for the learning of different operational processes, the brain may not learn all of them. And if it does, it may learn them in isolation or in combination. We are performing a series of psychophysical experiments to test these different forms of learning. In particular, we have been using a scene-segmentation task based on differences between the statistics of motion signals in different sub-regions. For example, we found that if a scene has too many sub-regions, learning does not occur even after two weeks of training. However, after training with such hard scenes, one performs the same task better on scenes with fewer, larger regions. In other words, one learns without noticing it. This suggests that one can learn without improving the decision rule, because in the hard task one has no feedback on one's own decisions.

figure_6

Figure 6: Examples of the Visual Stimuli Used in Our Perceptual-learning Experiments. We present two displays, signal (left panels) and reference (right panels), in random order and ask subjects to report which one is spatially inhomogeneous. The reference contains the same vectors as the signal, but spatially scrambled. The task can be easy (top panels) or hard (bottom panels), depending on the size and number of the inhomogeneous regions.
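For concreteness, the sketch below shows one hypothetical way to build such a signal/reference pair (an assumption-laden illustration, not our actual stimulus code): the signal display is a grid of motion directions with one sub-region drawn from a different distribution, and the reference display contains exactly the same directions with their positions scrambled. A harder version would use more, smaller sub-regions.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical construction of one signal/reference display pair in the
    # spirit of Figure 6. Each display is a grid of motion directions (radians).
    # In the signal display, one rectangular sub-region has a different mean
    # direction; the reference display contains the same directions, spatially
    # scrambled, so it is statistically homogeneous. Grid size, region size,
    # and direction spreads are illustrative assumptions.
    def make_displays(grid=20, region=6, spread=0.8):
        # Background directions: drawn around 0 radians.
        signal = rng.normal(0.0, spread, size=(grid, grid))

        # Inhomogeneous sub-region: directions drawn around pi/2 radians.
        top = rng.integers(0, grid - region)
        left = rng.integers(0, grid - region)
        signal[top:top + region, left:left + region] = rng.normal(
            np.pi / 2, spread, size=(region, region))

        # Reference: the same vectors, but with their positions scrambled.
        reference = rng.permutation(signal.ravel()).reshape(grid, grid)
        return signal, reference

    signal, reference = make_displays()
    # A simple check that the two displays share the same vectors overall.
    print(np.allclose(np.sort(signal.ravel()), np.sort(reference.ravel())))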