Contingency is a fundamental concept in associative learning, but it has not been defined in a way that allows it to be measured in most conditioning paradigms, particularly operant paradigms. A simple information-theoretic measure of contingency may be applied to most classical and operant associative learning paradigms. In applying it to assess the role of contingency in maintaining responding on variable interval schedules of reinforcement, we distinguish between prospective contingency—the extent to which one event (e.g., a response) predicts another (e.g., a reinforcement)—and retrospective contingency—the extent to which one event (e.g., a reinforcement) retrodicts another (e.g., a response). We find that the prospective contingency between response and reinforcement is unmeasurably small; that is, the probability of reinforcement at any latency following a response does not differ from the probability of reinforcement following a randomly chosen moment in time. By contrast, the retrospective contingency is perfect. Degrading the retrospective contingency in two different ways, by delay of reinforcement or by partial non-contingent reinforcement, suggests that reinforcement is effective only when it falls within a critical time window, which implies that retrospective temporal pairing, not retrospective contingency, is what is critical.
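The asymmetry between prospective and retrospective contingency can be illustrated with a minimal simulation, not taken from the paper itself. The sketch below models a discrete-time variable-interval schedule under assumed, arbitrary parameters (`P_ARM`, `P_RESP`, `WINDOW` are illustrative choices): the schedule "arms" a reinforcer at random, and the next response collects it. Looking forward from a response, reinforcement is about as likely as it is looking forward from a random moment; looking backward from a reinforcement, a response is always found at zero latency.

```python
import random
from bisect import bisect_right

# Hypothetical discrete-time simulation of a variable-interval (VI) schedule.
# P_ARM, P_RESP, T, and WINDOW are arbitrary illustrative parameters.
random.seed(1)
P_ARM, P_RESP, T, WINDOW = 0.01, 0.5, 200_000, 100

armed = False
responses, reinforcers = [], []
for t in range(T):
    if not armed and random.random() < P_ARM:
        armed = True                      # the schedule arms a reinforcer
    if random.random() < P_RESP:
        responses.append(t)
        if armed:                         # a response collects the armed reinforcer
            reinforcers.append(t)
            armed = False

def p_reinforced_within(starts, window):
    """Fraction of start times followed by a reinforcer within `window` steps."""
    hits = 0
    for s in starts:
        i = bisect_right(reinforcers, s)  # first reinforcer strictly after s
        hits += i < len(reinforcers) and reinforcers[i] <= s + window
    return hits / len(starts)

# Prospective contingency: reinforcement is about as likely at any latency
# after a response as after a randomly chosen moment in time.
after_resp = p_reinforced_within(random.sample(responses, 5000), WINDOW)
after_rand = p_reinforced_within(random.sample(range(T), 5000), WINDOW)

# Retrospective contingency: every reinforcer coincides with a response,
# so the backward latency from reinforcement to the nearest response is 0.
back_latencies = [r - responses[bisect_right(responses, r) - 1] for r in reinforcers]

print(f"P(reinf within {WINDOW} | response)      = {after_resp:.3f}")
print(f"P(reinf within {WINDOW} | random moment) = {after_rand:.3f}")
print(f"max retrospective latency = {max(back_latencies)}")
```

Under these assumptions the two prospective probabilities converge as the simulation lengthens, while the retrospective latency is zero by construction, mirroring the contrast described in the abstract.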