[Extracted from Rushall, B. S., & Siedentop, D. (1972). The development and control of behavior in sport and physical education. Philadelphia, PA: Lea & Febiger. (pp. 204-207).]

A golfer finds that a putt breaks sharply to the right as it approaches the hole. A college freshman participating in a psychology experiment attempts to draw an 18-inch line and is told by the experimenter that it was three inches short. A laboratory animal is given a food pellet for pressing a lever only when a certain amount of time has elapsed since it last pressed the lever. A driver glances at the speedometer and finds that the car is traveling 12 miles per hour over the speed limit. These are four examples of the results of responses: a missed putt, a line three inches short, a food pellet, and a speedometer reading 12 miles per hour over the speed limit. In each case, it is reasonable to assume that more trials under similar conditions would result in performance changes that could be described as learning. The golfer learns to "read" the green; the psychology student learns to draw an 18-inch line; the animal learns to respond at a low rate (DRL); and the driver learns either the feel of traveling at 35 miles per hour or to watch the speedometer more closely.
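The laboratory example above describes a differential-reinforcement-of-low-rates (DRL) contingency: a lever press earns a pellet only if enough time has passed since the previous press. The following sketch is illustrative only (the function name, times, and interval are assumptions, not from the text), but it captures the rule as stated.

```python
# Illustrative sketch of a DRL (differential reinforcement of low rates)
# contingency: a press is reinforced only when at least `interval`
# seconds have elapsed since the previous press. All values are
# hypothetical, chosen for illustration.

def drl_schedule(press_times, interval):
    """Return, for each press time, whether that press earns a pellet."""
    reinforced = []
    last_press = None
    for t in press_times:
        earns_pellet = last_press is None or (t - last_press) >= interval
        reinforced.append(earns_pellet)
        last_press = t  # every press resets the timer, reinforced or not
    return reinforced

# Presses at 0, 2, 15, 18, and 40 seconds under a DRL 10-second schedule:
print(drl_schedule([0, 2, 15, 18, 40], interval=10))
# -> [True, False, True, False, True]
```

Note that responding too quickly costs the animal the pellet, which is why the schedule shapes a low rate of responding.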

How can these behavioral processes be explained? The results obviously play the crucial role (see the treatment of "Performance Information" in Chapter 4), but are they more profitably viewed as feedback or reinforcement? Are the two separable? Most practitioners and researchers in motor learning would favor a feedback view of human behavior. Like John Annett, in Feedback and Human Behavior, they would support the view that a feedback model is more directly applicable and relevant to motor skills training than is a reinforcement model.

Research in motor learning has a long and substantial history in this century (see Chapter 2). Some of the research was generated from a Hullian drive-reduction reinforcement model, but recently the shift in theoretical emphasis has been clearly toward an information model of human behavior in which feedback is the central experimental variable. Feedback is most often defined as the error detected in a comparison between a response (R1) and a standard. Feedback becomes input for the next response (R2), and R2 is modified on the basis of the feedback from R1.
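The R1-to-R2 cycle just described can be sketched as a simple error-correction loop, using the line-drawing example from the opening paragraph: the learner compares each attempt against the 18-inch standard and adjusts the next attempt by some fraction of the detected error. The update rule and the gain value are illustrative assumptions, not a model proposed in the text.

```python
# Sketch of the feedback cycle: the error detected in comparing
# response R1 with a standard becomes input that modifies response R2.
# The `gain` parameter is an assumption chosen for illustration.

def feedback_trials(standard, first_response, gain, n_trials):
    """Each trial corrects by `gain` times the previous trial's error."""
    responses = [first_response]
    for _ in range(n_trials - 1):
        error = standard - responses[-1]                # compare R1 with the standard
        responses.append(responses[-1] + gain * error)  # R2 incorporates the feedback
    return responses

# A learner aiming at an 18-inch line, starting three inches short:
for r in feedback_trials(standard=18.0, first_response=15.0, gain=0.5, n_trials=5):
    print(round(r, 2))
```

Successive responses close on the standard without ever needing a reinforcer in the model, which is precisely why the motivational question discussed below remains open for feedback theory.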

During the past 20 years feedback theory and operant conditioning have flourished side by side with little contact between the two. Recently, operant conditioning has generated a significant technology that can account for behavior change in a number of varied human situations. While there are some interesting, if primarily academic, differences between the models, the central issue seems to be the question of motivation.

The feedback and reinforcement literature itself shows that the issue was bound to be raised. E. A. Bilodeau (1966) suggested that theories of motor performance could not survive forever without an anchor in motivational theory and research. However, he also spoke of the lack of popularity among motor learning theorists of conditioning analyses of verbal and motor skills learning. More recently Skinner (1969) suggested that feedback had been widely misused as a synonym for operant reinforcement. At other times, however, Skinner has talked about reinforcement as control over changes in the environment and made it sound very much like a feedback variable.

Children play for hours with mechanical toys, paints, scissors and paper, noise-makers, puzzles -- in short, with almost anything which feeds back significant changes in the environment and is reasonably free of aversive properties. The sheer control of nature is in itself reinforcing (Skinner, 1968, p. 20).

It is tempting simply to suggest that feedback is a secondary reinforcer. There is no doubt that feedback does act as a secondary reinforcer, and this is essentially the position adopted earlier in this text. However, the two constructs -- feedback and reinforcement -- have developed from entirely different theoretical frameworks, and it would be a mistake at this point to argue that they are synonyms. It is also going too far, as Sage (1971) recently suggested, to call Skinner a feedback theorist. Current levels of investigation allow one to say no more than that events normally described as feedback also possess reinforcing qualities, and events normally described as reinforcers also possess informational qualities. This is the position adopted by Ammons (1956) and Holding (1965).

Feedback theorists most often point to two types of research results which seem to be in conflict with operant conditioning theory. The first is the scheduling of feedback where research indicates rather clearly that performance changes are a function of the absolute rather than the relative frequency of IF (Bilodeau & Bilodeau, 1958; Larre, 1961). Such results seem to conflict with the effects generally attributed to intermittent reinforcement. A second area of research that conflicts with operant psychology is in the delay of IF. Again, there appears to be ample evidence to suggest that delays in IF do not hinder the learning process (Bilodeau & Bilodeau, 1958; 1969), whereas delays in reinforcement are considered to detract from the possibility of behavior change.

Some of the conflict in research results between the two concepts is due to the inappropriateness of applying a variable from one model in an experimental design generated from the other model. In operant psychology rate of emission of behavior is almost always the dependent variable. Feedback research, on the other hand, uses magnitude of response as its dependent variable. Schedules of reinforcement are studied in terms of their effects on the response pattern of already learned behavior. Feedback research has tried to use scheduling as an independent variable in behavior acquisition studies. This is not to suggest that there are no differences. Future research, if designed to investigate the differences rather than to prove one position or the other, should clarify the issue.

Performance information does have secondary reinforcing power, and the strength of IF as a reinforcer depends upon:

  1. the number of reinforcers with which it has been paired, e.g., peer approval, parental affection, etc.;
  2. the number of pairings, for example, how important it is that the individual improve his performance; and
  3. the strength of the individual reinforcers with which it has been paired; for example, if peer approval is a very strong reinforcer for a given individual and it has been paired with improvement in a sport skill, then the IF from the sport skill will be a fairly strong secondary reinforcer.

To view IF as possessing secondary reinforcing power allows one to utilize the entire operant framework to understand motivation in the learning and performing of sport skills. Without the operant framework, however, the motivational question in learning motor skill remains unanswered.

Annett (1969) suggested that motivation is "feedback in action." He dichotomizes motivation into energizing and directional components and suggests that the "power," "standard," and "error signal" are all necessary for motivated performance. Annett takes the position that pay for piecework in a factory is feedback, but as feedback its primary role is to release corrective action, and "it may not matter very much if this information is signaled in pennies, dollars, pounds, or 'grubs'" (Annett, 1969, p. 121). One wonders how an automobile manufacturer would fare by providing "feedback" to his workers with a weekly "grubcheck" instead of a paycheck. It is at this point that feedback theorists have a difficult time explaining motivation. Within an operant framework the analysis would, of course, be quite direct and straightforward.

Adams (1971), in a more recent feedback theory of motor behavior, suggested that error is motivating. This may be true in some instances, but the analysis does not go far enough. Why is error, a missed shot for a basketball player for example, more motivating for one person than for another? Why do some people persist longer in the face of performance errors? From an operant framework the answers are easily drawn in terms of the strength and scheduling of the secondary reinforcer, but from a feedback point of view they are not so easily found. One must also recognize that too great an error, or too many errors, would no doubt act as a punisher and decrease responding in the particular situation within which the errors were made. One must conclude that Bilodeau's (1966) earlier concern has not yet been fully addressed and that feedback theory is still a long way from being able to answer problems in motivation.


  1. Adams, J. (1971). A closed-loop theory of motor learning. Journal of Motor Behavior, 3, 111-150.
  2. Ammons, R. B. (1956). Effects of knowledge of performance: A survey and tentative theoretical formulation. Journal of General Psychology, 54, 279.
  3. Annett, J. (1969). Feedback and human behavior. Baltimore, MD: Penguin.
  4. Bilodeau, E. A. (1966). Supplementary feedback and instructions. In E. A. Bilodeau (Ed.), Acquisition of skill. New York, NY: Academic.
  5. Bilodeau, E. A., & Bilodeau, I. McD. (1958). Variable frequency of knowledge of results and the learning of a simple skill. Journal of Experimental Psychology, 55, 379-383.
  6. Bilodeau, E. A., & Bilodeau, I. McD. (1969). Principles of skill acquisition. New York, NY: Academic.
  7. Holding, D. H. (1965). Principles of training. Oxford, England: Pergamon.
  8. Larre, E. E. (1961). Interpolated activity before and after knowledge of results. Unpublished doctoral dissertation, Tulane University.
  9. Sage, G. (1971). Introduction to motor behavior: A neurophysiological approach. Reading, MA: Addison-Wesley.
  10. Skinner, B. F. (1968). The technology of teaching. New York, NY: Appleton-Century-Crofts.
  11. Skinner, B. F. (1969). Contingencies of reinforcement. New York, NY: Appleton-Century-Crofts.
