Theories, Data, and Communication Research

Robert N. Bostrom

University of Kentucky


           In communication, as in many other disciplines, it is common to make distinctions between interpretive and empirical research. One argument for these distinctions is the belief that a theory-data interaction invalidates empirical research. This view holds that because objectivity is impossible, those studying social phenomena should adopt phenomenological positions and explore individualistic accounts of reality. Though many speak of a theory-data interaction as if it were a general principle, there are actually many different types of interactions between theories and data. Whether or not these interactions result in invalid observations, however, is less clear. Because interpretive thinkers have cited no specific evidence of distorted data leading to incorrect theories, it is more accurate to say that research data have been accurately reported in spite of this interaction. The ubiquity of the theory-data interaction, together with the acknowledgment of objectivity in the reporting of data, has a number of implications for communication study. One is that empirical researchers need to acknowledge subjective involvement in their research, rather than maintaining a pose of objectivity in theory as well as method. Because theory-data interactions have been cited as a foundational issue in paradigmatic thought, it might well be time to rethink paradigmatic distinctions in search of broader and more useful research. As Toulmin put it, we might well profit from a broader mode of expression (Toulmin, 2001).

Theories, Data, and Communication Research

           There is general agreement that the primary goal in studying communication theory is the discovery of common patterns of thought that will provide broader and more useful theoretical structures. On occasion, however, the study of theory has assumed a more eristic tone, taking the form of arguments for polarized paradigmatic positions. Researchers using different methods and studying different kinds of questions often attribute their differences to fundamental divisions about how rationality and science are viewed (Toulmin, 2001). Often one group of workers finds the methods and assumptions of another group disturbing and irrational (Hunt, 1999), and justifies its beliefs by invoking fundamental philosophical positions.

           Foundational differences are a mainstay of philosophy. Most historians of philosophy agree with Bertrand Russell (1945) that Western thought has long been divided into two basic groups: those inspired by the nature of mathematics, and those who rely instead on empirical principles (Russell, 1945, p. 828). Russell placed Plato, Thomas Aquinas, Spinoza, and Kant in the former group, because they relied on a priori principles derivable from language as the source of fundamental knowledge. In the other group he placed Democritus, Aristotle, and empiricists from Locke onward. Today these distinctions are manifested in our approaches to epistemology and how we choose to define reality. Some hold that unobserved theoretical entities exist, and others profess agnosticism about unobservable constructs such as hadrons in physics, black holes in astronomy, or attitudes in social science (Miller, 1987).

           If reality is only to be found in observable phenomena, then the manner in which observation is conducted becomes the central issue in the study of knowledge. On the other hand, if mental or linguistic structures interact with the process in important ways, then questions of epistemology need to include the nature of these mental or linguistic structures. An important aspect of these structures has been described as the theory-data interaction, and defines a foundational issue, not only to the philosophy of science but also to the nature of communication theory and research.

           In social science in general, and communication in particular, these fundamental distinctions are most visible as a basic division between perspectivist (or hermeneutic) and empirical (or objectivist) thought. Because the approaches are fundamental, it is common to speak of them as paradigms. The empiricist finds reality in observable behavior; the perspectivist finds it in culture and thought. In communication study the labels used to describe paradigmatic differences have been highly varied. Traditionally, empiricism has been associated with covering-law explanations and has followed traditional behavioristic agendas. Perspectivism has taken many forms, such as rules theory, constructivism, qualitative analysis, conversational analysis, and others. Other, less popular labels include "contextualist" and "generalist" for interpretive and empirical (Tehranian, 1991); Pearce (1991) would substitute "relativist" for "contextual," and so on. Pavitt (1999) notes that none of these labels is truly defensible and proposes a third alternative, scientific realism. The range of these frameworks is broad, but on one point there is general agreement: the most important basis for choosing an objectivist or an interpretivist orientation is how one approaches the theory-data interaction (Kuhn, 1970; Suppe, 1977). Acceptance of an extreme form of this interaction would indicate that objectivity is impossible and that reality lies in the constructions of logic, culture, and symbol. Some interpretivists go farther, implying that empirical science provides only a set of culturally constituted narratives which, more often than not, support legitimate and illegitimate power structures under the guise of reason and objectivity.

           Given its fundamental importance, the theory-data interaction merits a good deal of attention from communication theorists. The opposite has been true: perspectivists typically treat it as a given without exploring its nature, and empiricists have ignored it. In this essay the theory-data interaction is examined in detail, and some of its implications for research in communication are explored. The position taken is that theory-data interactions are no impediment to objectivity, and that communication theory probably suffers from too little interaction between theory and data rather than too much.

           The Theory-Data Interaction. A basic tenet of empiricism has been that events can be observed, recorded, and compared in sensible ways. That is, the senses are reliable indexes of reality and various procedures can be applied to remove the subjectivity inherent in human information processing. If theory produces and influences data, however, then scientific knowledge is humanly created and is subject to all of the other frailties of other human creations--subjectivity, context, and culture. An extreme interpretation of this position asserts that reality is socially constructed, and that the nature of science changes as the prejudices and world-views of scientists change.

             Most empiricists believe, along with Hanna (1991), that methodology is ideally value-free. But others agree with Hesse (1980), who contrasts the human sciences with the physical sciences, arguing that in the human sciences data are not detachable from theory. From this point of view, science is no more than another form of argument, little different from political or aesthetic attempts to win belief. Adherents of this position have generally asserted that not only do theories interact with data, but also that this interaction is a pervasive one, underlying epistemology in general. Close examination, however, shows that there are many types of theory-data interactions, each of which has little in common with the others. They may affect different analytical processes and have different dynamics. The presence or absence of one theory-data interaction may have little or no effect on the presence or absence, or, indeed, the relevance of the others.  

            Communication Theory and Theory-Data Interactions. The theory-data interaction is particularly relevant for communication study, if for no other reason than that it has served as a primary foundation for hermeneutic theoretical accounts (Anderson, 1996; Guba & Lincoln, 1994; Lannaman, 1991; Pearce, Cronen & Harris, 1982; Yerby, 1995). These theorists usually speak of a theory-data interaction as a general principle, without distinguishing among the varying types (Anderson, 1996; Delia, O'Keefe, & O'Keefe, 1982; Lannaman, 1991; Pearce, Cronen & Harris, 1982; Yerby, 1995). The interpretivist argument parallels the issues concerning the varying roles of thought and perception in the construction of theory, and is a prominently cited principle in justificatory arguments for the validity of interpretive science. Basically its adherents assert that researchers' theories have undue influence on the process of data collection, and that this influence makes scientific objectivity impossible. A theory-data interaction is also the foundation of the argument that organizational structure creates knowledge claims (Browning & Hawes, 1991).

           The interaction of theory and data has provided basic support to critics of the positivist position in communication theory. O'Keefe (1975) centers his argument on the logic of statements.

Of all the lines of attack on the positivistic view that have been considered here, this is the most important, for it strikes at the heart of the logical empiricist program: at the assumption that there is a special observational vocabulary which is suitable for all scientific theories, which is neutral with respect to competing theories, and which forms the rock-bottom certain foundations upon which scientific knowledge can be erected. Thus the denial of the theoretical-observational distinction has implications for the positivistic treatment of scientific knowledge and scientific progress. The positivist's claim that knowledge-claims are justified by reference to the foundational observation-statements can no longer be allowed to stand (p. 178).

Other descriptions of the theory-data interaction refer to cognitive processes. Lannaman (1991) asserts, "There may be an objective world, but, because of the self-organizing nature of perception and consciousness, knowledge of this world is determined by the co-ordinations of the knower and not by the characteristics of the known" (p. 181). Pearce, et al. (1982) echo this feeling, writing, "Just as data are selected and sometimes created by research methods, so research methods depend on the implicit theories of the researcher, and these theories derive from an array of unstated assumptions about the nature of reality" (pp. 2-3).

The Nature of Theory-Data Interactions

           Although it is common for theorists to speak of a theory-data interaction in general, a brief examination shows that there are many approaches to the phenomenon. The interaction of theory and data in quantum physics has often been cited as foundational proof that all knowledge is relative. The uncertainty principle was an early contribution to this line of thought, asserting that no particle of quantum size can be observed without changing it (Feynman, 1985). Light has energy, and because light is what people most often use to perceive events, one must manipulate it to make the event observable. The uncertainty principle is most often used to demonstrate that there can be no absolute truth in empirical science, and therefore absolute truth must be sought elsewhere. At the very least, quantum theory places limits on what can be called objective (Toulmin, 2001). Whether or not this principle generalizes to events in social science and communication theory is less clear.

           A second kind of theory-data interaction is rooted in a basic phenomenological argument and depends for its validity on the existence of phenomena as defined by Kant. According to Kant, one part of knowledge is created by events occurring outside of humans, which he called "noumena," and another part, which he called "phenomena," by implication occurs inside of humans. Phenomena are created by the processes implied in perception. When one sees a house, one also sees a roof, windows, doors, and a lawn. Whether seen from the side or the front, it remains a house, even though from the side the door is not seen and only part of the roof is viewed. The boards, shingles, and cement that are perceived are noumena; entities like the house are phenomena, and they can differ widely given circumstances and training. Hanson (1965) contends that noumena and phenomena are simply two different kinds of seeing, one essentially theory-free and the other theory-laden. Hamilton (1994) asserts that this view of perception "opens the door to subjectivism, idealism, perspectivism, and relativism" (p. 63). This logical leap is unwarranted, especially in view of Hanson's demonstration that phenomenal seeing is not necessarily subjective (Hanson, 1965, p. 64).

            The third way that theory and data can be said to interact is linguistic in nature and focuses on the nature of sentences in theory-building. Theoretical statements are the basis from which researchers construct data statements. Therefore, it is argued, data produced by this process would not have existed if the process did not exist, and the process would not have existed had the theory not existed. To say that the theory created the process is certainly accurate. There are a number of questions at each stage of this process, not the least being the confusion of theory with theorist. This argument is very closely related to the Lebenswelt concept (Schutz, 1967), and in many ways resembles argument by definition.

            The fourth form of a theory-data interaction is cognitively based, addressing the nature of perception and awareness; it is sometimes called the false-consciousness principle. This assumption is foundational to the argument that it is impossible to distinguish between ideology and theory (Lannaman, 1991). Many support this contention by referring to the perception literature (Brown, 1977). This body of research generally has shown that attitudes, expectations, and situations distort what is perceived. If true, then a social scientist who believes in a particular theory will tend to perceive data that support the theory and fail to perceive data that do not. Nor is the effect confined to researchers. Participants (subjects) in social science research may also bias their responses toward what they think the experimenter wants (Rossiter, 1976).

           The existence of these theory-data interactions might well be taken as evidence for the existence of theory-data interactions in general. The existence of such a general principle and its subsumption into all systems of thought has typically been used as an argument for the superiority of perspectivist approaches to knowledge and the futility of applying "science" to human affairs. Further, its existence would clearly deny the possibility of objectivity in the study of human behavior in general and communication in particular. Each interaction warrants more intense examination and analysis. Because the relativity argument derived from particle physics is the most basic, it will be examined first.

           Physics and Philosophy. The existence of a theory-data interaction can be inferred from particle physics. The uncertainty principle was an early contribution of this research, and while modern physicists have modified the principle somewhat (Feynman, 1985), the uncertainty principle (along with Einstein's relativity argument) is often cited as evidence that knowledge of the outside world is a human construction. Applying this principle to everyday events is a dubious process. If one watches Northwestern's Flight 400 depart from Newark, does one believe that the act of observation will affect this giant airliner as it climbs? It is probably true that the light reflected from the airliner (used to perceive its existence) has some effect on its trajectory. But the mass of the airliner is so large that it is doubtful the effect could be measured in any meaningful way. To think that the light (and an observer) affected the trajectory of this giant airplane stretches a point a great deal and strains credulity. Overextension of the uncertainty principle to everyday events may be why Heisenberg (1958) warned social scientists to beware of using concepts from quantum mechanics to interpret ordinary events. Yet at the same time, there is an element of uncertainty in Flight 400. How can the existence of the uncertainty principle be reconciled with the need for sensible knowledge in the ordinary world?

           Pavitt (1999) elucidates the problem well by dividing phenomena into realms. Realm 1 is the world of easily observed phenomena, such as speech acts and behavioral consequences. Realm 2 is the world of phenomena observable only with technology that may be possessed only in the future, such as attitudes and intentions. Realm 3 is the world of objects forever beyond observation, such as subatomic particles. Realm 3 events may affect the other realms in systematic ways, such as the readings on an instrument or the effect of radiation on the body. But to expect a direct, one-to-one correspondence between these realms is unrealistic (Pavitt, 1999). Particle physics acknowledges an uncertainty principle at the quantum level and nowhere else. The world of particle physics is immensely different from our perceptual world, and it is described meaningfully by quantum mechanics, not classical physics. Clearly the application of an uncertainty principle from this setting to a human science is specious.

           Sensation, Perception, and Phenomena. The second approach to a theory-data interaction is found in phenomenology. Often in communication theory the phenomenological argument is discussed as if it were a basic philosophical principle. The issue involves whether or not nature speaks to us directly through perception. Berkeley (1710/1957) describes the problem well.

As the mind frames to itself ideas of abstract qualities or modes, so does it, by the same precision or mental separation, attain abstract ideas of the more compounded beings which include several coexistent qualities. For example, the mind, having observed that Peter, James, and John resemble each other in certain common agreements of shape and other qualities leaves out of the complex, or compounded idea it has of Peter, James, and any other particular man that which is peculiar to each, retaining only what is common to all, and so makes an abstract idea wherein all the particulars equally partake--abstracting entirely from and cutting off all those circumstances and differences which might determine it to any particular existence. After this manner it is said that we come by the abstract idea of man, or if you please, humanity or human nature (p. 8).

Berkeley's mistrust of empiricism was so extreme that he was led to believe that matter could not exist. Although his arguments are refutable, involving as they do the mixing of logical and empirical forms, he is still considered an important philosopher and represents one point of view in the development of empiricism.

             The movement from sensation to perception involves generalization, as Berkeley noted--moving from symbols like "Peter," "John," and "James" to "humanity." Although a certain amount of generalization is inevitable, it has probably been overdone in the development of philosophical argument. The internal processes described by Berkeley and his successors have been the subject of intense analysis in cognitive psychology (Gibson, 1979; Michaels & Carello, 1981). Research of this type shows that there is no compelling need to refer to metaphysical entities to account for the processes of perception. Indeed, Pinker (1997) has gone even farther, using the tools of cognitive science to construct a coherent model of the mind.

           Philosophical systems based on the contributions of cognitive science are not only possible but very persuasive. Lakoff and Johnson (1999) note that there are basic physiological processes that serve as root metaphors for most of our language and, by extension, our perception--altering our approach to phenomena and their reality. In addition, they propose an alternative to Kantian phenomena by asking one to visualize concepts. One can visualize a house but not real estate. Lakoff and Johnson would conclude that real estate is a concept of an entirely different type, which should be examined quite differently. To apply the term "phenomenon" to both fails to recognize important differences. Lakoff and Johnson's view is not inconsistent with Hanson's definitions of seeing (1965), but merely adds another level of abstraction. More recently, Dawkins (1998) strongly argued that the various types of perception from incoming visual data can be explained in terms of neurological activity, without reference to noumena or other imaginary entities.

           Abstraction can lead to extreme generalization, which is useful, but can lead to equivocation. This is so prevalent in philosophical writing that readers have become inured to it. For example, Ayer (1956) finds it troubling that the word "knowledge" may at one time refer to the ability to perform an effective response, and at another time, the possession of particular sentences describing states of affairs (p.12). It is puzzling why Ayer did not consider that the term "knowing" is used in two different ways in his two different examples, and therefore must have at least two different referents. The struggle to give it one general usage is not helpful. In other words, the single word "knowing" is an inadequate descriptor for the cognitive activities associated with the use of information by humans, and the attempt to overgeneralize the process is a harmful one. This habit is common with words like "reality," "meaning," and the like.

           The phenomenological argument emphasizes how careful one needs to be in choosing language as well as beliefs about what is true. Many attempts have been made to refine language so that it can carry more precision and avoid the distressing fuzziness seen in trying to pin down what is meant by "house." This problem led to Carnap's (1956) proposal for a method of name relation that would apply to artificial and precise languages. This system would presuppose that any term could have one and only one referent, and that two terms with the same referent could be substituted in a sentence without changing the truth of the sentence.

           This proposal is foundational to the positivist framework: salt becomes NaCl, which is the only allowable term. Not only is this prescription unworkable for social science, it is basically illogical (Mackenzie, 1997). For example, one might suppose that the terms "Benjamin Britten" and "the composer of the Eve of St. Agnes" have the identity characteristic that Carnap proposed. If one composed the sentence "Ann Landers wishes to know if Benjamin Britten was the composer of the Eve of St. Agnes," one might think it perfectly sensible, but the positivist principle would allow us to say "Ann Landers wants to know if Benjamin Britten was Benjamin Britten" (Mackenzie, 1997, pp. 72-73). In short, any instance of language replacing metalanguage calls for a different set of rules for what is believed to be true. In other words, the sentence "Ann Landers wishes to know if Benjamin Britten was the composer of the Eve of St. Agnes" is a mixture of language and metalanguage, and the principles operating at one level may not necessarily operate at another.

            The confusion between logic and metalogic, theory and metatheory, and perception and metaperception is prevalent in communication theory, social science, and philosophy. This error arises when statements are assumed to contain truth value in and of themselves, as in the classic "All statements in this box are false." In theoretical writing it is more subtle. Here is a recent example. Anderson (1996) cites a colleague who says: "I don't make truth claims, I only report what I observe." Anderson views this claim as "an incontrovertible naive empiricism." He points out that when she says "I only report what I observe" she is making a truth claim. But her description of her analysis of her research is metacommunicative; it may or may not correspond to the rules and contexts of the communication behavior being described. Separating her language from her metalanguage, one can see that her truth claim refers to her behavior and not the objects of her behavior. The mixing of the levels leads to strange logical constructions (Bostrom & Donohew, 1992). Some of the problems resulting from this habit have been under continuous study in the philosophy of science (Hubner, 1985; Reichenbach, 1938, 1959).

           Theoretical Statements and Data Statements. The nature of statements is one of the most fundamental issues in philosophy. Although statements of value and statements of fact can take the same form, it is clear that they are quite different and should be treated differently. What qualifies as a scientific statement remains problematical, and is the special study of linguistic philosophy. While positivism had its logical problems, one advantage that it offered was to insist on a verification theory of meaning, asserting that statements or propositions are meaningful only if they can be verified empirically. The result, for positivists, was the rejection of metaphysical statements masquerading as fact.

           Edidin (1983) notes that testing any theory is inextricably involved with the nature of statements, requiring many types of theoretical sentences. An observational sentence gives way to a theoretical sentence, which then creates hypotheses. Edidin, along with Glymour (1980), points out that these sentences are made possible only by other assumptions, such as those inherent in Glymour's bootstrapping procedures. Edidin describes this process as it underlies the confirmation of a data statement: the theoretical statements generate a series of derivational statements which are confirmed, usually through creating an apparatus that will instantiate a condition exhibiting the characteristics (or lack of them) that the statements demand. The data produced by this process would not have existed if the process did not exist, and the process would not have existed had the theory not existed. To say that the theory created the process is certainly accurate. It is in this sense that Hanson (1965) meant that statements were theory-laden.

            The use of the words "cause" and "create" in this process may need a little more examination. Without the test there would have been no data, but without the physical world, there would have been no data either. So, if theory is made up of theoretical sentences and data are defined as data sentences, by definition there is a strong theory-data interaction, in that they are both sentences. Nonetheless, there will be different types of theory-data interactions depending on the nature of the phenomena described and the degree of abstraction inherent in their use.

           The science of statements is complicated. For example, it is possible to distinguish between natural languages and more formal constructions. Barwise and Etchemendy (1989) note that the task of exploring the relationships between language and mind is threefold: first, to understand lexical semantics, the relationships between language and object; second, to understand compositional semantics, the manner in which varying combinations of lexical elements can be understood; and third, to explain how knowledge of these principles comes to be understood by its users. This task requires a well-developed understanding of the role of metalanguage in theoretical study.

           Mackenzie (1997) notes that the way statements are constructed is always crucial, and goes on to prove that many unsolvable issues in philosophy are quite often merely problems in language. Most spectacularly, he shows that Wittgenstein's paradox, a thorny problem for many years, can be resolved by slight shifts in the way the sentences are composed (p. 15). Whatever else may be true of the theory-data interaction, the science of statements shows that theory statements and data statements are inextricably linked.

           Perception and Data. The perceptionist form of theory-data interaction is in many ways the most telling. It is sometimes called the false-consciousness principle, and asserts that it is impossible to distinguish between ideology and theory (Lannaman, 1991). Lannaman asserts that the false-consciousness principle is based on "the shaky assumption that the theorist is immune to the forces that determine the actions of those about whom he or she theorizes" (p. 181). Support for this assertion is typically drawn from the perception literature (Brown, 1977). Pearce, et al. (1982) extend this argument to separate cultures, noting that quantification is foreign to many cultures and that scientific experimentation would seem obscene to primitive societies.

           To begin with, one must examine the presuppositions involved in the perception literature that are central to this argument. First, there are different kinds of perceptual distortions. The most fundamental is the kind summarized by Day (1972), who demonstrated how visual illusions can be explained by an information-processing principle that guides perceivers in striving to achieve consistency of perceptions in order to enhance the information-carrying capacity of the stimuli--bringing knowledge of size, orientation, and movement as well as the basic perception of the object. A second kind of perceptual distortion occurs when attitudes influence perceptions. For example, in a classic experiment, economically deprived children perceived stimuli associated with money as being much larger than more neutral stimuli (Bruner & Goodman, 1947). Perception is also influenced by the judgment of others, as demonstrated by Sherif (1958) and Asch (1956). Normative influence of this type continues to be of interest to communication researchers (Lee & Nass, 2002). In a comprehensive analysis, Nisbett and Ross (1980) showed how generally unreliable perceptions are in almost every phase of human judgment, presenting detailed evidence that people make inferences based on perceptions or data that they know to be flawed. Perceptual distortion is ubiquitous, and it can be harmful, as the common misperception of body size by young women illustrates (Smythe, 1995).

           In sum, there is ample evidence from the perception literature that predispositions influence perceptions. But there is a logical problem associated with using these basic studies as evidence to argue that scientific studies of behavior are flawed. If the principal findings of these studies are true, then the persons conducting the studies should also have experienced perceptual distortion. It is also possible that the respondents in these experiments either did not distort their perceptions or distorted them even more severely than was reported. Thus, to contend that theories interact with data, and to support this contention with data that are automatically suspect because of the contention itself, is a fatal logical error (Bostrom & Donohew, 1992).

           Given the distinction between logic and metalogic, it can be seen that both contentions may be true, but at different levels. One way to think of the problem is to remember that the perception literature was the result of perceiving persons perceiving, and to recognize that perceptions could be distorted at one level but not necessarily at another. This notion is consistent with Hubner's distinctions about data statements and is useful. In other words, communication theorists might best be seen as meta-perceivers. The logical problem can only be resolved by separating the various levels of perception and explanation that are used to formulate theoretical statements. Nisbett and Ross suggest that becoming aware of inferences about inferences is a powerful first step toward better thinking.

           Examination of these four forms of theory-data interaction confirms suspicions that they are different. The uncertainty principle applies to subatomic events and not to ordinary experience, and distinguished physicists have enjoined us not to apply physics to philosophy (Dawkins, 1998; Feynman, 1985; Heisenberg, 1958). The phenomenological approach to meaning is not a given but rather one of many points of view in the broad spectrum of philosophical thought. This point of view asserts that phenomenology is a useful way of viewing (or defining) reality, and if it is embraced, a theory-data interaction is involved. The logic of statements implies that theory and data statements are of necessity similar and therefore closely related, but this position is not an argument for distortion of the real world by our theories; it is only a technical philosophical argument. The fourth type of theory-data interaction illustrates two commonly occurring logical errors in communication theory: equivocation and confusion between logic and metalogic (theory-metatheory, language-metalanguage, or perception-metaperception). It implies that sometimes a previously held theory might affect data statements, and sometimes not.

Theory and Data in Communication Study

           Theories interact with data, but it is also clear that the terms "theory" and "data" are highly equivocal. Communication researchers use the word "theory" in interesting ways. Many scholars who examine communication phenomena in a particular way label the approach a theory. A popular theory textbook lists at least 26 different theories (Griffin, 2000). If a theory is a collection of statements used in explaining, evaluating, and predicting nature, perhaps Griffin's use of the word is justified, but there are problems with the word "theory." For example, any analytic process can be said to contain a "theory" of some kind. To use a thermometer as a gauge of temperature is to use a theory of thermometers. Hubner (1985) points out that to use the word "theory" indiscriminately for all statements is to reduce it to the point of meaninglessness.

           Similar problems exist in the use of the word "data." Jacoby (1991) noted that human responses have traditionally been described in terms of stimulus comparisons, single stimuli, similarities, and preferential choices (pp. 17-21). Each could conceivably require different methods of analysis, and different kinds of interactions with theory. Jacoby offers a dimensionality model that integrates observations in a more powerful way, and argues for the development of a data theory (pp. 23-26).

           Clearly there are many ways in which the words "theory" and "data" are used, and great care must be taken to avoid equivocation. It is also clear, however, that theories, attitudes, organizational structure, and world-view interact with the process of gathering data. It would be easy to conclude that the validity of the "facts" reported by researchers is doubtful.

             The Validity of Empirical Research. Each type of theory-data interaction reviewed here shows that theory and data do indeed interact, even though in some cases the interaction is less important than in others. Even resorting to meta-perception as an explanation only shows that there is no logical necessity for an interaction, rather than proving that no interaction exists. For many, the existence of theory-data interactions casts doubt on the validity of empirical research. Among those who believe that any empirical research is tainted, there are many whose belief borders on the extreme. Here is a representative example:

Any one who claims to have objective knowledge about anything is trying to control and dominate the rest of us.... There are no objective “facts.” All supposed “facts” are contaminated with theories, and theories are infested with moral and political doctrines. ... Therefore, when some guy in a lab coat tells you that such and such is an objective fact, he must have a political agenda up his starched white sleeve. (Cartmill, 1998, p. 80).

Statements of this kind are common in a variety of disciplines.

           Defenders of empirical research typically cite the use of self-correcting methods, such as replication, blind coding, and careful design, as evidence for the validity of their findings. Resolving the implications of a theory-data interaction on theoretical grounds alone might be impossible. If empirical study is flawed, one ought to be able to find instances of it in previous research. Though advocates of interpretive theory seldom cite specific instances of theoretical mishaps (such as the failure of the theory of cognitive dissonance), such instances might provide powerful evidence to show whether or not researchers have been careless about the data gathered and their implications.

           One does not have to look far to discover instances of unsupportable theory. One prominent example is the work of Margaret Mead. This influential anthropologist published accounts of Samoan life that were built on systematic misinformation told to her as a joke by two mischievous Samoan girls (Freeman, 1998). Mead did not even speak Samoan (Dawkins, 1998). Nonetheless, her recommendations about society and child-rearing had wide acceptance for years (Freeman, 1998).

           There are also problems with theory in communication research. Three prominent examples concern the sleeper effect in persuasion, the relative importance of verbal and nonverbal messages in interpersonal communication, and congruity theory.

           The sleeper effect in persuasion has had wide acceptance among communication scholars for a number of years. Despite its acceptance, this effect has been described as "nonexistent" (Greenwald, Pratkanis, Leippe, & Baumgardner, 1986). In the original research describing the sleeper effect (Hovland & Weiss, 1951), the data indicated that the groups hearing messages from low-credibility sources showed less decay than those hearing messages from high-credibility sources. Greenwald et al. asserted that the "sleeper effect" is actually a kind of "decay" effect, but Allen and Stiff (1989) note that there are actually three forms of sleeper effects, and that only the associational model can be supported. Eagly and Chaiken (1993) agree with Allen and Stiff and conclude that the sleeper effect is far from a general principle and takes place only in a narrow range of conditions.

           Nonetheless, the difficulties in this theory do not seem to have had much effect on writing about communication. Here are two instances of how it has been described:

Specifically, the sleeper effect posits that a message from a low credibility source may increase in persuasiveness as time passes, as compared to a message from a high credibility source. Sound unlikely? It is, to some extent. Yet the sleeper effect has been documented by researchers, dating back almost 50 years (Gass & Seiter, 1999, p.84).


... although the sleeper effect is of considerable conceptual importance, obtaining it may require conditions that are infrequently present in the "real world," or in persuasion research (Petty & Cacioppo, 1986, p. 183).

           Another instance of theory gone awry concerns the relative effect of nonverbal messages in the study of interpersonal communication. It is commonly believed that nonverbal communication is more powerful than verbal communication (Tubbs & Moss, 1987; Verderber & Verderber, 1980). A book called Silent Messages by Albert Mehrabian (1972) is often cited as the source of support for these statements. In this book Mehrabian asserts that from 70% to 90% of the meaning in a message comes from the nonverbal aspects of the message, not the verbal. Mehrabian offers two research reports (Mehrabian & Ferriss, 1967; Mehrabian & Wiener, 1967) as evidence for this astonishing claim. In these two studies, however, it is difficult to find evidence for those percentages. The studies examined one-word messages as presented in a number of contexts. The vocal tone of the presentations did indeed affect reactions to the messages, but the astounding percentages claimed by Mehrabian are sheer fantasy. It is interesting to note that when a carefully designed study was performed to compare the relative importance of verbal and nonverbal messages in a more realistic setting (actual messages, not just one word), the verbal messages were more powerful (Motley, 1993). If nothing else, Motley's study should cast a great deal of doubt on the conventional wisdom about the force of nonverbal messages.

            Another widely accepted principle in communication is congruity theory (Osgood & Tannenbaum, 1955). These authors cited Tannenbaum's dissertation data (Tannenbaum, 1953) in support of the theory. When Tannenbaum's actual data are examined, however, it is clear that congruity theory as envisioned by these authors could not be supported (Bostrom, 1982). Negatively evaluated sources did not produce negative attitude change as predicted--only positive sources did. Nonetheless, many textbooks in communication theory describe congruity theory as if it were supported by the bulk of the data.

           These three examples are certainly not isolated instances; a number of less prominent theories are similarly flawed. These examples might seem to demonstrate that the false consciousness principle inevitably contaminates social science. It might indeed seem so, were it not for the fact that the flaws in these theories were discovered in an interesting way: They appeared in the data reported by the researchers themselves. In examining the sleeper effect, Greenwald et al. examined the original research by Hovland and Weiss (1951). Mehrabian's original articles contain an accurate description of the messages and the proportions obtained originally. Tannenbaum's dissertation shows clearly that negatively evaluated sources had no effect on subsequent attitudes, rather than the effect predicted by the theory. In other words, the data were reported accurately; the distortion took place in the claims of confirmation drawn from the data. These examples of theoretical distortion are known to us primarily because of the existence of data that demonstrated the distortion. If there were truly a theory-data interaction that contaminated the research process, it would be expected to manifest itself in the opposite direction--these researchers might have been expected to report data that supported their theories. They did not. The data were not affected by theory, but the claims made for them were.

           Another way of looking at the problem is that although many theorists have expressed concern about the effects of theory upon data, few have shown much interest in the effects (or lack of effect) of data on theory. Although a data-theory interaction (or lack of it) may be a problem, the theory-data interactions do not seem to be. It is probably safe to say that much of communication theory has suffered from too much emphasis on theory and too little on data. Berger's (1991) assertion that communication theory has a methodological fixation may be true, but there does not seem to be a corresponding fixation about the results of such methods.

           If the theory-data interaction truly has deleterious effects on the validity of research findings, one would predict confirmatory data from these three lines of research, because their theories were stated very clearly. Instead, one finds that they reported data that showed their theories to be false. This fact is evidence that theory had little effect on data. In the more traditional sciences the discovery of cheating has been so rare that it is considered news when it is discovered (Burrell, 1994). If theory-data interactions are a real problem in communication research, those who support this view ought to show instances of where and how they have appeared, and how they have created problems for communication study. The question could become an area of study in its own right.

           Some data statements are indeed affected by theory and some are not. The basic observations in an empirical study are indeed susceptible to distortion, although double-blind techniques and reliability checks in coding procedures usually minimize it. For many, Popper (1968, 1972) solved the problem of objectivity by insisting on disconfirmation as a basic tool, but Putnam (1981) has shown that almost everyone has read Popper's recommendations as another form of confirmation. It is possible that objectivity can be obtained, even though a researcher is highly involved. Moreover, with the lack of specific instances to the contrary, it is highly likely that empirical communication studies have generally reported data that are basically free from serious bias.

           Implications for Communication Theory. If there is no compelling philosophical reason why communication study should reject empirical data, then theoretical accounts that ignore such knowledge are incomplete at best. Further, the data base for communication theory ought to be an inclusive one, involving more broadly based notions about human behavior.

           Furthermore, communication study has generally overlooked the profound effect that our environment and our biology have on our behavior and our thinking. Physical aspects of the human condition have marked effects on communication behaviors (Beatty & McCroskey, 1997). Lakoff and Johnson's demonstration of the biological nature of metaphor (Lakoff & Johnson, 1999) is convincing enough in and of itself. The influence of genetic makeup on many communication behaviors indicates that inherited tendencies should be an important part of interpersonal theories (Beatty, Heisel, Hall, Levine, & LaFrance, 2002), and Pinker's engaging description of how the mind works demonstrates how cognitive psychology might answer many theoretical questions posed in communication research (Pinker, 1997). It seems sensible to construct theories of communication that are consistent with available data of all kinds. This statement implies that communication theories ought to consider cognitive psychology as well as broader knowledge from the other sciences, especially biology. A data-oriented knowledge base for communication theory, however, has been considered objectionable because it is a reductionistic approach.

           Anderson articulates the widely held view that "objective empiricism is usually coupled with reductionism" (1996, p. 15). Other theorists find reductionism offensive (e.g., Craig, 1989). Reductionism, as it is usually defined, is the belief that at base there is a simple knowledge model (usually physics) to which all other knowledge bases reduce. A typical reductionist would argue that all the arts and professions reduce to sociology, sociology reduces to biology, biology reduces to chemistry, and chemistry reduces to physics. Anderson goes on to say that "reductionism is the foundation of the 'unity of science' hypothesis" (p. 15). This is probably correct, in that many find a good deal of appeal in emulating the physical sciences.

           Nonetheless, there are important differences between physics and biology. The biological sciences generally take a very different approach to theory. Statistical laws replace absolutes, classification is much more important, and experiment is not as prominent as a method. Contingencies are stressed, with a real interest in interactive principles. Where physical scientists generalize to the entire universe, biological scientists confine themselves to our earth.

           Comparatively, the behavioral (social) sciences are truly different. The physical sciences explain, predict, and control physical events; the biological sciences explain, predict, and control the activities of plants and animals; and the social sciences explain, predict, and control the activities of human beings. There is indeed some subsumption in these disciplines. The laws of physics provide a foundation for chemistry; the laws of chemistry provide a foundation for biology; and the laws of biology provide a foundation for social science. But it is difficult to contend that the laws of biology form the only basis for communication theory--simply distinguishing between software and hardware demonstrates the difference. Social animals will not operate independently of the principles of mass and velocity, but to say that their behavior reduces to physics is ridiculous. Many prominent scientists do indeed believe in the subsumption principle, which was the basis for the beginnings of sociobiology. Wilson (1994) described the unity of science as consilience, and his viewpoint seems a powerful one until one discovers that he defines communication for ants in the same terms that he does for humans, i.e., based on the sense of smell and pheromones (Holldobler & Wilson, 1990).

           We know that there is certainly some subsumption in theories. Communication theory will not contain statements that affect the laws of physics, no matter how much any theorist might wish to include levitation as a basic principle. But a substantial number of philosophers have denied subsumption of any kind, steadfastly refusing to see any biological basis for human action. Rorty (1987), for example, advises us to avoid "reductionism--the idea that biology can somehow overrule culture" (p. 41). Yet there are many instances in which biology has had a profound effect on culture. AIDS has created important cultural differences everywhere, and culture alone has not sufficed to eliminate AIDS; rather, aspects of culture clearly contribute to its spread (Cupp, 2002).

             The existence of self-awareness is assumed by some philosophers to erase other motivational forces, a notion lacking face validity. In fact, self-awareness (conscious experience) may well have a physiological origin (Crick & Koch, 1995). Taken as a whole, the arguments about subsumption are often irrelevant and do not constitute a basis for the existence of paradigmatic differences.

Theory, Data, and Commonality in Research

           Thirty years ago, Gerald Miller made a persuasive case for what he called "rapprochement" between humanistic and scientific approaches to communication (Miller, 1975). The interaction of theories and data supports Miller's contention in interesting ways. One involves the role of subjectivity in research.

            Subjectivity. There is something inherently unpleasant about treating human beings as objects or numbers. Objectivity takes many forms, however, and a strong interest in humanity and the meaningfulness of human behavior does not imply that one makes up data and creates contaminated data sets. It is certainly possible to be objective in method and technique but not in involvement. Objectivity, as it is usually defined in social science, means that researchers attempt to overcome their own prejudices and subject their preconceived notions to truth tests. No one believes that researchers should have little personal involvement in the meaning and implications of their study. Another way of putting it is that objectivity is the use of the most careful methods possible.

           It is past time to drop the pretense of complete objectivity in communication research. If one cares about a subject enough to initiate a research project on that topic, one is not going to be objective. Effective empirical researchers frequently care deeply about the object of their research. In 1861 Charles Darwin wrote to a friend:

About thirty years ago there was much talk that geologists ought only to observe and not to theorize; and I well remember that at this rate a man might well go into a gravel pit and count the pebbles and describe the colors. How odd it is that anyone should not see that all observation must be for or against some view if it is to be of any service? (Shermer, 2001).

           Subjective decisions are an inherent aspect of the use of statistics. Most researchers who label themselves empiricists use statistical methods that imply assessment of Type I and Type II errors in their design. Should alpha = .05 and gamma = .7? Conventional methodological thinking requires each instance to be evaluated separately, based on the social, political, or personal consequences of the two kinds of error. This judgment involves assessing the harm involved in being wrong in either of these ways, as well as the importance of the research. Put differently, statistical tests have hermeneutic/qualitative/interpretive decisions built into them. Concerns with statistical tests, however, are usually concentrated on the technical aspects of these statistics (Levine & Banas, 2002) and not on the consequences of the tests. It is remarkable how little attention is paid to the notion of significance, as if a probability (or actually, improbability) of one in twenty were a fixed characteristic and not subject to discussion. It is as if the phrase "at the .05 level" had some mystical value. Hacking (1990) notes that the law of large numbers became what he called a "metaphysical truth" in the nineteenth century, and that Poisson's reasoning about chance became less important than its application. Hacking describes the transformation this way: "Thanks to superstition, laziness, equivocation, befuddlement with tables of numbers, dreams of social control, and propaganda from utilitarians, the law of large numbers ... became a synthetic a priori truth" (1990, p. 104). Levine and Banas suggest that mindless use of significance levels is probably of less value than reporting effect sizes, implying a more careful analysis in every study (2002, p. 141). Obviously, theory is important. But few researchers (and editors) recognize the inherent subjectivity in the choice of a particular level of statistical significance. Surely the social significance of a set of observations might be even more important.
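The arbitrariness of the conventional cutoff can be made concrete with a small computational sketch. The counts below are hypothetical, invented purely for illustration (they come from no study cited here); the test itself is a standard two-proportion z-test built from first principles, with the normal CDF obtained from the error function:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: returns (z, two-sided p-value)."""
    # Pooled proportion under the null hypothesis of no difference
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical counts: 30 of 100 coded stories show a pattern,
# against a base rate of 18 of 100 in comparison data.
z, p = two_proportion_z(0.30, 100, 0.18, 100)

# The same data yield opposite verdicts under two conventional alphas:
for alpha in (0.05, 0.01):
    verdict = "reject H0" if p < alpha else "fail to reject H0"
    print(f"alpha = {alpha}: p = {p:.4f} -> {verdict}")
```

With these invented counts the p-value lands between .01 and .05, so the very same observations are "significant" to a researcher who chose alpha = .05 and not to one who chose .01. Nothing in the data dictates which verdict is correct; the choice of alpha is the subjective, interpretive decision discussed above.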

           Consider an example: Jamison (2000) and her associates studied local television news stories featuring intergroup violence (white vs. nonwhite victims and perpetrators). For the stories with a white perpetrator, only 22 percent featured a person of color as a victim; for the stories with a nonwhite perpetrator, 42 percent featured a white person as a victim. Although the FBI reported that the actual rate of crimes in this area featuring a nonwhite perpetrator and a white victim was 10 percent, the television stations reported these crimes at a rate of 42 percent (Jamison, 2000, pp. 169-185)! Few of us can read Jamison's account without sensing the indignation that naturally results from the discovery of a disgusting practice. Jamison reports that her frequencies were significant at the .05 level (p. 178).

           How important is her finding? Would we want to know about this practice if the results came out with only a .07 probability? I should think so. At the same time, station owners and news directors would want the frequencies to be even less attributable to chance, perhaps even as low as .01. These decisions are wholly subjective. Having said so, does anyone contend that Jamison and her associates fudged the data in these studies? Nevertheless, the researchers were highly involved.

           Theory, Data and "Paradigms." Perhaps, as Toulmin has observed, "the price of intellectualism has been too great, and we are now having to work our way back to broader modes of self expression" (Toulmin, 2001, p. 13). The theory-data interaction may be one of these prices of intellectualism, because it has been cited extensively as a foundational philosophical principle that helps define interpretive social science as a separate paradigm. Theories and data interact in empirical work, and subjectivity is present as well. Empirical researchers do indeed examine issues that are politically and socially important, such as drug prevention (Donohew, Lorch, & Palmgreen, 1998) and the prevention of AIDS (Cupp, 2002). The most fundamental elements of statistical method involve subjective judgments. In the light of these facts, one is justified in questioning whether the various forms of interpretive science qualify as separate paradigms.

           Defining personal preferences as paradigmatic differentiation is an effective argument in that it elevates one's attitudes to philosophical status. Paradigmatic differentiation awards meaningfulness to research that might otherwise seem irrelevant. Unfortunately, to believe that reality is socially constructed is to make it impossible to test ideas for usefulness and good sense. The acceptance of paradigmatic distinctions has meant that empirical-objective researchers pretend to be unconcerned with issues of value and culture, that critical-humanists can ignore well-established facts about biology and genetics, and that other philosophers can ignore the basic nature of thought and logic. In spite of the many calls for interparadigmatic dialogue (Martin & Flores, 1998), little effort has been expended in bringing about such activity. As Miller (1975) argued, it might be more useful to recognize that no one form of investigation is inherently better or superior to another, but that they have different functions.

           Communicative phenomena need to be approached in different ways. Some require quantitative methods; some do not. A fundamental method of qualitative research, according to Erickson (1982), involves “narrative descriptions ... in which the immediate (often intuitive) meanings of action to the actors involved is of central interest" ( p.119). Sometimes it might be necessary to investigate the mindstate of the responder. But to assume that these mindstates can only be inferred by in-depth investigation and comparison to the researcher’s own mindstates is shortsighted.

            Some responses require introspection and some do not. Morris (1981) suggests that the presence or absence of this requirement is an excellent guide for distinguishing qualitative research from behavioral research. For communication theory to be practical, concepts require wide acceptance: information should not mean one thing in Miami and another thing in Minneapolis. Moreover, if communication science is to be cumulative, then a certain historical stability is required; involvement should mean approximately the same thing in 1992 as it did in 1962. At the same time, from a practical point of view, there is no need for statements to be absolutely true. They need only be close enough approximations to be useful in particular contexts. Communication aims at change, and the nature of these changes can be dramatic or minuscule. Rorty (1987) adopts a pragmatism that aims at deciding "whether we ought to keep our present values, theories, and practices, or try to replace them with others" (1987, p. 47). But how is one to know whether these values, theories, and practices are the same, changed, or made irrelevant? If Jamison's effort to eliminate racism in television news is judged to be successful, the term "racism" must be specific enough for one to conclude that change has indeed taken place.

           Knowledge, embodied in communication theory, is a body of principles that describes why people do what they do and what part communication has to play in it. Communication theory develops by trial and error, by watching others, and by experience. But examining examples of typical communicative attempts, one can see that trial and error, watching others, and personal experience are no guarantee that any particular communication theory will be useful or effective. Individual experience--no matter how extensive--is likely to be provincial and flawed. Given these difficulties, communication theories must be useful, sensible, and soundly grounded in objective observations.

 Author Note

Robert N. Bostrom (Ph. D., Iowa, 1961) is Professor Emeritus in the Department of Communication at the University of Kentucky. An earlier version of this manuscript was presented at the Graduate Student Research Symposium at the University of Kentucky, April 2000. For correspondence, contact the author at 2295 Hifner Road, Versailles, KY, 40383 or The author thanks Nancy Harrington and two anonymous reviewers for many helpful suggestions.




References
Allen, M., & Stiff, J. H. (1989). Testing three models for the sleeper effect. Western Journal of Speech Communication, 53, 411-426.

Anderson, J. A. (1987). Communication research: Issues and methods. New York: McGraw-Hill.

Anderson, J. A. (1996). Communication theory: Epistemological foundations. New York: Guilford.

Asch, S. (1956). Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs, 70, (Whole No. 416).

Ayer, A. J. (1956). The problem of knowledge. Baltimore, MD: Penguin Books.

Barwise, J. & Etchemendy, J. (1989). Model-theoretic semantics. In M. J. Posner, (Ed.), Foundations of cognitive science (pp. 201-242). Cambridge, MA: MIT Press.

Beatty, M., Heisel, A., Hall, A., Levine, T., & LaFrance, B. (2002). What can we learn from the study of twins about genetic and environmental influences on interpersonal affiliation, aggressiveness, and social anxiety? A meta-analytic study. Communication Monographs, 69, 6-29.

Beatty, M. J., & McCroskey, J. C. (1997). It's our nature: Verbal aggressiveness as temperamental expression. Communication Quarterly, 45, 446-460.

Berger, C. (1991). Communication theories and other curios. Communication Monographs, 58, 101-113.

Berger, C. R., & Jordan, J. J. (1992). Planning sources, planning difficulty, and verbal fluency. Communication Monographs, 59, 130-149.

Berkeley, G. (1710/1957). A treatise concerning the principles of human knowledge. Indianapolis, IN: Bobbs-Merrill.

Bostrom, R. N. (1982). Theoretical interactions among sources, receivers, and attitude objects: RSO theory. In M. Burgoon (Ed.), Communication Yearbook Five (pp. 834-855). New Brunswick, NJ: Transaction Books.

Bostrom, R. N. (1983). Persuasion. Englewood Cliffs, NJ: Prentice-Hall.

Bostrom, R. N., & Donohew, R. L. (1992). The case for empiricism: Clarifying fundamental issues in communication theory. Communication Monographs, 59, 109-128.

Brown, H. (1977). Perception, theory, and commitment. Chicago, IL: University of Chicago Press.

Browning, L. D., & Hawes, L. C. (1991). Style, context, surface: Consulting as postmodern art. Journal of Applied Communication Research, 19, 32-54.

Bruner, J., & Goodman, C. (1947). Value and need as organizing factors in perception. Journal of Abnormal and Social Psychology, 42, 33-44.

Burrell, C. (1994, November 26). MIT researcher faked study of antibodies. Savannah News-Press, p. 11a.

Carnap, R. (1956). Meaning and necessity. (2nd ed.). Chicago, IL: University of Chicago Press.

Cartmill, M. (1998, March). Oppressed by evolution. Discover, 78-85.

Craig, R. T. (1989). Communication as a practical discipline. In B. Dervin, B. J. Grossberg, & E. Wartella (Eds.), Rethinking communication (pp. 97-122). Newbury Park, CA: Sage.

Crick, F., & Koch, C. (1995, December). Why neuroscience may be able to explain consciousness. Scientific American, 273 (6), 84-85.

Cupp, P. (2002). Elements of curriculum, sensation seeking, and messages in intentions to engage in prevention behavior. Unpublished doctoral dissertation, University of Kentucky.

Dawkins, R. (1998). Unweaving the rainbow: Science, delusion, and the appetite for wonder. New York: Houghton-Mifflin.

Day, R. (1972). The interpretation of visual stimuli. Science, 175, 1335-1340.

Delia, J. G., O'Keefe, B. J., & O'Keefe, D. J. (1982). The constructivist approach to communication. In F. E. X. Dance (Ed.), Human communication theory (pp. 147-191). New York: Harper & Row.

Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. New York: Harcourt, Brace Jovanovich.

Edidin, A. (1983). Bootstrapping without bootstraps. In J. Earman (Ed.), Testing scientific theories (pp. 43-54). Minneapolis, MN: University of Minnesota Press.

Erickson, F. (1982). Qualitative methods in research on teaching. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 119-161). New York: Macmillan.

Feynman, R. P. (1985). QED: The strange theory of light and matter. Princeton, NJ: Princeton University Press.

Freeman, D. (1998). The fateful hoaxing of Margaret Mead: An historical analysis of her Samoan researches. Boulder, CO: Westview Press.

Gass, R. H., & Seiter, J. S. (1999). Persuasion, social influence, and compliance gaining. Boston, MA: Allyn & Bacon.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston, MA: Houghton Mifflin.

Glymour, C. (1980). Theory and evidence. Princeton, NJ: Princeton University Press.

Gödel, K. (1962). On formally undecidable propositions (R. B. Braithwaite, Trans.). New York: Basic Books. (Original work published 1931).

Greenwald, A., Pratkanis, A., Leippe, M., & Baumgardner, M. (1986). Under what conditions does theory obstruct research progress? Psychological Review, 93, 216-229.

Griffin, E. (2000). A first look at communication theory. (3rd ed.) New York: McGraw-Hill.

Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 105-117). Thousand Oaks, CA: Sage.

Hacking, I. (1990). The taming of chance. Cambridge, England: Cambridge University Press.

Hamilton, D. (1994). Traditions, preferences, and postures in applied qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 84-115). Thousand Oaks, CA: Sage.

Hanna, J. F. (1991). Critical theory and the politicization of science. Communication Monographs, 58, 202-212.

Hanson, N. R. (1965). Patterns of discovery. London: Cambridge University Press.

Heisenberg, W. (1958). Physics and philosophy. New York: Harper and Row.

Hesse, M. (1980). Revolutions and reconstructions in the philosophy of science. Brighton, England: Harvester Press.

Hofstadter, D. (1979). Gödel, Escher, Bach: An eternal golden braid. New York: Basic Books.

Hölldobler, B., & Wilson, E. O. (1990). The ants. Cambridge, MA: Harvard University Press.

Hovland, C. I., & Weiss, W. (1951). The influence of source credibility on communication effectiveness. Public Opinion Quarterly, 15, 635-650.

Hübner, K. (1985). Critique of scientific reason (P. Dixon & H. Dixon, Trans.). Chicago: University of Chicago Press. (Original work published 1982).

Hunt, M. (1999). The new know-nothings. New Brunswick, NJ: Transaction Publishers.

Jacobs, S. (1990). On the especially nice fit between qualitative analysis and the known properties of conversation. Communication Monographs, 57, 241-249.

Jacoby, W. G. (1991). Data theory and dimensional analysis. Newbury Park, CA: Sage.

Jamieson, K. H. (2000). Everything you think you know about politics...and why you're wrong. New York: Basic Books.

Kuhn, T. S. (1970). The structure of scientific revolutions. (2nd ed.) Chicago: University of Chicago Press.

Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh. New York: Basic Books.

Lannaman, J. W. (1991). Interpersonal communication research as ideological practice. Communication Theory, 1, 179-203.

Lee, E. & Nass, C. (2002). Experimental tests of normative group influence and representation effects in computer-mediated communication. Human Communication Research, 28, 349-381.

Levine, T. R., & Banas, J. (2002). One-tailed F-tests in communication research. Communication Monographs, 69, 132-143.

Mackenzie, I. E. (1997). Introduction to linguistic philosophy. Thousand Oaks, CA: Sage.

Martin, J. L., & Flores, L. A. (1998). Challenges in contemporary culture and communication research. Human Communication Research, 25, 293-299.

Mehrabian, A. (1972). Silent messages. Belmont, CA: Wadsworth.

Mehrabian, A., & Ferris, S. R. (1967). Inference of attitudes from nonverbal communication in two channels. Journal of Consulting Psychology, 31 (3), 248-252.

Mehrabian, A., & Wiener, M. (1967). Decoding of inconsistent communications. Journal of Personality and Social Psychology, 6 (1), 109-114.

Michaels, C. F., & Carello, C. (1981). Direct perception. Englewood Cliffs, NJ: Prentice-Hall.

Miller, G. R. (1975). Humanistic and scientific approaches to speech communication inquiry: Rivalry, redundancy, or rapprochement. Western Speech Communication, 39, 230-239.

Miller, R. W. (1987). Fact and method. Princeton, NJ: Princeton University Press.

Morris, P. (1981). The cognitive psychology of self-reports. In C. Antaki (Ed.), The psychology of ordinary explanations of social behaviour (pp. 183-203). London: Academic Press.

Motley, M. T. (1993). Facial and verbal contexts in conversation: Facial expression as interjection. Human Communication Research, 20, 3-40.

Nagel, E., & Newman, J. R. (1956). Gödel's proof. In J. R. Newman (Ed.), The world of mathematics (Vol. 3, pp. 1684-1992). New York: Simon and Schuster.

Nisbett, R., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.

O'Keefe, D. J. (1975). Logical empiricism and the study of human communication. Speech Monographs, 42, 169-183.

Osgood, C., & Tannenbaum, P. (1955). The principle of congruity in the prediction of attitude change. Psychological Review, 62, 42-55.

Pavitt, C. (1999). The third way: Scientific realism and communication theory. Communication Theory, 9, 162-188.

Pearce, W. B. (1991). On comparing theories: Treating theories as commensurate or incommensurate. Communication Theory, 1, 159-165.

Pearce, W. B., Cronen, V. T., & Harris, L. G. (1982). Methodological considerations in building human communication theory. In F. E. X. Dance (Ed.), Human communication theory (pp. 1-43). New York: Harper & Row.

Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral routes to attitude change. New York: Springer-Verlag.

Pinker, S. (1997). How the mind works. New York: W. W. Norton.

Popper, K. R. (1968). The logic of scientific discovery. New York: Harper.

Popper, K. R. (1972). Objective knowledge. Oxford: Clarendon.

Putnam, H. (1981). The "corroboration" of theories. In I. Hacking (Ed.), Scientific revolutions (pp. 60-70). New York: Oxford University Press.

Reardon, K. K., & Rogers, E. M. (1988). Interpersonal versus mass media communication: A false dichotomy. Human Communication Research, 15, 284-303.

Reichenbach, H. (1938). Experience and prediction. Chicago, IL: University of Chicago Press.

Reichenbach, H. (1959). The rise of scientific philosophy. Berkeley, CA: University of California Press.

Rorty, R. (1987). Science as solidarity. In J. Nelson et al. (Eds.), The rhetoric of the human sciences (pp. 38-52). Madison, WI: University of Wisconsin Press.

Rosenthal, A. (2002). Report of the Hope College Conference on designing the undergraduate curriculum in communication. Communication Education, 51, 19-25.

Rossiter, C. (1976). The validity of communication experiments using human subjects: A review. Human Communication Research, 2, 197-206.

Russell, B. (1945). A history of Western philosophy. New York: Simon and Schuster.

Schutz, A. (1967). Collected papers I: The problem of social reality (M. Natanson, Ed.). The Hague: Martinus Nijhoff.

Sherif, M. (1958). Group influences in the formation of norms and attitudes. In E. Maccoby, T. Newcomb, & E. Hartley (Eds.), Readings in social psychology (pp. 215-228). New York: Henry Holt and Company.

Shermer, M. (2001). Colorful pebbles and Darwin's dictum. Scientific American, 284 (4), 38.

Smythe, M. J. (1995). Talking bodies: Body talk at bodyworks. Communication Studies, 46, 201-222.

Suppe, F. (1977). The search for philosophic understanding of scientific theories. In F. Suppe (Ed.), The structure of scientific theories (pp. 109-121). Urbana, IL: University of Illinois Press.

Tannenbaum, P. (1953). Attitudes toward source and concept as factors in attitude change through communication. Unpublished doctoral dissertation, University of Illinois.

Tehranian, M. (1991). Is comparative communication theory possible? Communication Theory, 1, 44-58.

Toulmin, S. (2001). Return to reason. Cambridge, MA: Harvard University Press.

Tubbs, S., & Moss, S. (1987). Human communication. New York: Random House.

Verderber, K. S., & Verderber, R. F. (1980). Interact: Using interpersonal communication skills (4th ed.). Belmont, CA: Wadsworth.

Wilson, E. O. (1994). Naturalist. Washington, DC: Island Press.

Yerby, J. (1995). Family systems theory reconsidered: Integrating social construction theory and dialectical processes. Communication Theory, 5, 339-365.