
An Analogy to a Competitive Intelligence Program:
Role of Measurement in Organizational Research

© Copyright, 1993, Yogesh Malhotra, Ph.D., @BRINT Institute, All Rights Reserved

Reference citation for this document is given below:
Malhotra, Yogesh. (1993). An Analogy to a Competitive Intelligence Program: Role of Measurement in Organizational Research [WWW document]. URL http://www.brint.com/papers/compint.htm
This working paper may be printed as a paper copy for non-profit, non-commercial, academic or educational use provided no alterations are made and the copyright notice is maintained intact.

Abstract

This paper uses Competitive Intelligence Program as an analogy for explaining the critical aspects of measurement in organizational research. A conceptual model of Competitive Intelligence Program is developed based on extant practitioner literature. Key aspects of this model are then used for defining the 'critical ingredients' of measurement in organizational research: constructs, validity, and reliability, as well as their interrelationships.

1. Introduction

Measurement may be defined as the process of determining the value or level, either qualitative or quantitative, of a particular attribute for a particular unit of analysis. In organizational research, the unit of analysis may be the individual, the group, or the organization itself. The measurement process is an integral part of organizational research. Brilliant theories and research studies may be perfectly constructed in every other detail, yet organizational research will be a failure unless we can adequately measure our concepts (Bailey, 1987). Measurement generally entails the assignment of numbers to concepts or variables. 'Net sales' and 'stock price' are examples of organizational measurements that most of us are familiar with. Attributes that are measured in numerical terms are called quantitative attributes or variables. Other organizational concepts, particularly attitudes, may be much more difficult to measure. For instance, a concept such as 'authoritarianism' may not be directly observable, although its effects may be, and it may involve more than one dimension. Such concepts are theoretically important for organizational research but may pose significant measurement problems.

This article is an attempt at delineating a conceptual model for developing valid instruments for measurement of organizational constructs. The measurement process and valid instrument development are explained by considering an analogy to a Competitive Intelligence Program, or CIP. The 'critical ingredients' of the measurement process, such as constructs, validity, and reliability, are explained using this model of CIP. The role of measurement in organizational research and its 'fit' in the process of experimentation and observation is briefly explained in section 2. Section 3 introduces the specifics of CIP with the help of the conceptual model. Section 4 discusses the major concepts of the measurement process and their interrelationships using the analogy to CIP. Section 5 presents the benefits and limitations of the conceptual model in understanding the organizational measurement process and suggests additional lines of inquiry for further study.

The discussion in this paper draws extensively upon the compilations of Jackson and Messick (1967), Judd, Smith and Kidder (1991) and Lindzey and Aronson (1968); specific references will be provided only where deemed necessary. The interpretation of validity used in this paper is largely based upon Loevinger's (1967) explanation of construct validity.

2. Role of Measurement in Organizational Research

Organizational researchers develop theories to understand and predict organizational phenomena. Hall and Lindzey (1957) suggested that the function of theory "is that of preventing the observer from being dazzled by the full-blown complexity of natural or concrete events." Theory may be defined as a statement of relationships between units observed or approximated in the empirical world. Approximated units are constructs - such as centralization, satisfaction, and authoritarianism - which by their very nature cannot be observed directly. These constructs are operationalized empirically by measurement into variables, which are the observed units. Operational definitions help the researcher specify the rules for assigning numbers. Thus, theory may be viewed as a system of constructs and variables - bounded by the theorist's assumptions - in which constructs are related to each other by propositions, and the variables are related to each other by hypotheses (Bacharach, 1989).
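
To make the operationalization step concrete, the following minimal Python sketch (an illustration added here, not part of the original paper) represents a hypothetical construct and the observed indicator variables assumed to approximate it; the construct name, items, response scale, and mean-scoring rule are all illustrative assumptions.

```python
# Illustrative sketch only: a hypothetical construct ('job satisfaction') and the
# observed indicator variables that operationalize it. Names and the simple
# mean-scoring rule are assumptions for illustration, not from the paper.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Construct:
    name: str                                             # the abstract, unobservable concept
    indicators: list[str] = field(default_factory=list)   # observable variables (items)

    def score(self, responses: dict[str, float]) -> float:
        """Operationalize the construct as the mean of its indicator items."""
        return mean(responses[item] for item in self.indicators)

satisfaction = Construct(
    name="job satisfaction",
    indicators=["pay_satisfaction", "coworker_relations", "task_variety"],
)

# One respondent's answers on a hypothetical 1-5 scale.
respondent = {"pay_satisfaction": 4, "coworker_relations": 5, "task_variety": 3}
print(satisfaction.score(respondent))  # 4 -- a measured variable, not the construct itself
```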

Measurement is the procedure that links theoretical constructs with empirical research and is therefore the means by which such constructs are rendered researchable. Organizational constructs constitute the 'linchpin' of the research process in that much organizational research entails attempts to establish connections between measures which are taken to be indicative of underlying constructs (Bryman, 1989).

In organizational research, the researcher may have to study complex constructs such as job satisfaction, morale and attitudes, measurement of which may be a very demanding task. The researcher would need to use instruments (such as scales and indexes) to determine the degree of the presence or absence of such constructs. The researcher must ensure that the measuring instrument is actually measuring the concept in question and not some other concept, and that the concept is being measured accurately (Bailey, 1987). In other words, the validity of the instrument must be confirmed. The different types of validity that a researcher must establish are discussed in section 4. In addition, the researcher must verify the reliability, or consistency, of the measuring instrument.

3. The 'Macro-Picture': Conceptual Model and Competitive Intelligence Program

The survival and growth of an organization often depend upon ensuring that it has accurate, current information about its competitors and a plan for using that information to its advantage (McGonagle & Vella, 1990). This objective can be achieved by using a Competitive Intelligence Program (CIP). Prescott and Gibbons (1993) have defined CIP as "a formalized, yet continuously evolving process by which the management team assesses the evolution of its industry and the capabilities and behavior of its current and potential competitors to assist in maintaining or developing a competitive advantage." CIP uses public sources to find and develop information on competition, competitors, and the market environment (Vella & McGonagle, 1987). Unlike business espionage, which develops information by illegal means like "hacking," CIP uses public information - all information that can be legally and ethically identified and accessed. The proliferation of computers and online databases over the last decade has resulted in an 'explosion' of data that can be accessed from public sources. CIP is based upon the conversion of data into information that can be evaluated for accuracy and reliability. The competitive intelligence information obtained using CIP can be used in programs that supplement planning, mergers and acquisitions, restructuring, marketing, pricing, advertising, and R&D activities. A conceptual model of the Competitive Intelligence Program is given in figure 1. The concepts of CIP discussed in this section will be utilized to elucidate the analogous process of developing valid instruments for measuring organizational constructs.

The purpose of CIP is to gather accurate and reliable information. The groundwork for the CIP is done through an [internal] Competitive Intelligence (CI) audit, which is primarily a review of the organization's operations to determine what is actually known about the competitors and their operations. The CI audit helps in pinpointing the CI needs. Based upon the CI needs, relevant data can be gathered from the organization's own sales force, customers, industry periodicals, the competitor's promotional materials, the organization's own marketing research staff, analysis of the competitor's products, the competitor's annual reports, trade shows, and distributors. Specific CIP techniques include querying government resources and online databases, selective surveys of consumers and distributors about the competitor's products, on-site observations of the competitor's plant or headquarters, "shadowing" the markets, conducting defensive CI, competitive benchmarking, and reverse engineering of the competitor's products and services.

Not all types of tools and techniques are suitable for all kinds of CIPs. Specific tools and techniques are chosen depending upon various factors such as CI needs, time constraints, financial constraints, staffing limitations, likelihood of obtaining the data, relative priorities of data, sequencing of raw data, etc. (McGonagle & Vella, 1990). While government sources have the advantage of low cost, online databases are preferable for faster turnaround time. Whereas surveys may provide enormous data about products and competitors, interviews would be preferred for getting a more in-depth perspective from a limited sample. Therefore, human judgement is an essential element of the decision regarding which CI techniques to deploy in a specific situation.

Evaluation and analysis of raw data are critical steps of the CIP. Data that lacks accuracy and reliability may be marginally correct data, a concoction of good and bad data, or even disinformation. All data is produced or released for some purpose. In CIP, reliability of data implies the reliability of the ultimate source of the data, based upon its past performance. In CIP, accuracy of data implies the [relative] degree of 'correctness' of data based upon factors such as whether it is confirmed by data from a reliable source as well as the reliability of the original source of data. Evaluation of CI data is done as the facts are collected, and unreliable or irrelevant data is eliminated. Analysis of the remaining facts includes 'sifting' out disinformation, studying patterns in the competitor's strategies, and checking for competitor's moves that mask its 'real' intentions (McGonagle & Vella, 1990). The resulting CI information is integrated into the company's internal planning and operations for developing alternative competitive scenarios, structuring attack plans, and evaluating potential competitive moves.

Competitor's Defense Against the Organization's Competitive Intelligence Program

Very likely, the target competitor would be aware of the organization's CI moves and could make all possible efforts to thwart or jeopardize the organization's CIP. The competitor may have its own CI activities targeted at the organization. Or it might intentionally generate disinformation to mislead the organization's efforts. In fact, the organization's CI activities may find data which the competitor has 'planted' to keep the organization "preoccupied" and "off-balance" (McGonagle & Vella, 1990). The competitor could also create the problem of false confirmation by releasing similar, but misleading (or incomplete), facts to different media sources. The competitor may also use common ploys to pump information from the organization's employees. Such ploys include "the phantom interview", "the false flag job seeker", "the seduction," and "the nonsale sale." In the phantom interview, the competitor, posing as a potential employer, inquires from the organization's employees about their duties and responsibilities. The false flag job seeker is a competitor's trusted employee who, in the guise of a potential job seeker, tries to learn about the organization in the course of the employment process. The seduction involves flattery of the organization's employees to encourage disclosure of important facts. In the nonsale sale technique, the competitor pursues the organization's nonemployee associates, such as distributors and suppliers, to elicit information about the organization's pricing structure, customer service, etc. Almost all of these actions could be considered responses to the stimuli generated by the organization's CIP. Effectively, in the process of 'observation' of the 'subjects,' the organization's CIP interacts with them (Weick, 1968). These concepts of the Competitive Intelligence Program model are used in the following section to explain the process of developing valid instruments for measurement of organizational constructs.

4. Discussion: Measurement Process and the Competitive Intelligence Program

As stated earlier, measurement is the process that links theoretical constructs with empirical research and is therefore the means by which such constructs are rendered researchable. The 'correctness' of the measurement process therefore depends upon several factors, such as accurate assessment of the relationships of the construct under observation with its related constructs, development of a valid instrument for measuring the construct, accurate decoding of the data gathered through the instrument, and correct analysis and evaluation of the data. The selected instrument must satisfy the criteria of validity and reliability.

Various authors (Bechtoldt, 1967; Campbell, 1967; Campbell & Fiske, 1967; Cannell & Kahn, 1968; Cronbach & Meehl, 1967; Holsti, 1968; Loevinger, 1967) have defined the terms 'validity' and 'reliability' differently. Validity of the instrument ensures that the instrument is actually measuring the construct under observation, and not some other construct, and that the construct is being measured accurately. Reliability implies consistency of the measuring instrument. An instrument can fail to be valid and still be reliable (consistently inaccurate), but the converse is not true. By definition, if a measure is valid it will be accurate every time, and thus must be reliable as well (Bailey, 1987). These concepts are explained in the following discussion by using the analogy of the Competitive Intelligence Program. The model in figure 2 delineates the concepts of validity and reliability as applied to the development of valid instruments for measuring organizational constructs.
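
The statement that an instrument can be reliable yet not valid (consistently inaccurate) can be illustrated with a small simulation sketch; the 'true' score, the constant error, and the noise levels below are invented solely for illustration and are not drawn from the paper.

```python
# Illustrative simulation (all values invented): a measure can be reliable
# (repeated readings agree closely) yet not valid (readings carry a constant error).
import random

random.seed(0)
TRUE_SCORE = 50.0  # the 'real' level of the attribute being measured

def biased_but_consistent() -> float:
    # constant error of +10 with little variable error: reliable, not valid
    return TRUE_SCORE + 10.0 + random.gauss(0, 0.5)

def unbiased_but_noisy() -> float:
    # no constant error but large variable error: unbiased on average, unreliable
    return TRUE_SCORE + random.gauss(0, 8.0)

for label, instrument in [("biased/consistent", biased_but_consistent),
                          ("unbiased/noisy", unbiased_but_noisy)]:
    readings = [round(instrument(), 1) for _ in range(5)]
    print(label, readings)
```

The first instrument returns nearly identical readings that are all about ten units too high, while the second scatters widely around the true value, which mirrors the distinction drawn above between constant and variable errors.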

Theory

In CIP, as a starting point for obtaining CI data, the organization generally has some knowledge of its competitors and its own CI needs. In the absence of a definition of its information needs, the organization may not be able to deploy its resources effectively. To avoid such a scenario, an organization may conduct a CI audit, which is effectively a review of its current operations to determine what is actually known about the competitors and their operations. The CI audit helps in pinpointing the CI needs and is analogous to an exploratory study in organizational research.

The researcher conducts exploratory research to gain a better understanding of the dimensions of the problem. By using exploratory research, one attempts to utilize more readily available sources of information before proceeding to more expensive and detailed surveys. After identifying and clarifying the problem, the researcher generates a formal statement of the problem (the research question) and the research objectives. The exploratory study provides the base for developing a theory, which is essentially a statement of relations among concepts within a set of boundary assumptions and constraints (Bacharach, 1989). The relationships at the abstract level of concepts are defined by propositions, while relationships at the empirical level of variables are defined by hypotheses (Zikmund, 1991).

When the organization has some knowledge about its competitors and its own CI needs, it proceeds to the stage of gathering CI data. Raw data is evaluated and analyzed for accuracy and reliability. Every attempt is made to eliminate false confirmations and disinformation, and to check for omissions and anomalies. Omission, which is the seeming lack of cause for a business decision, raises a question to be answered by a plausible response. Anomalies (data that do not fit) ask for a reassessment of the working assumptions (McGonagle & Vella, 1990). While the conclusions one draws from the data must be based on that data, one should never be reluctant to test, modify, and even reject one's basic working hypotheses. The failure to test and reject what others regard as an established truth can be a major source of error (Vella & McGonagle, 1987).

Similarly, in organizational research the objective of the researcher is to evaluate a given theory in terms of its 'fit' with other preexisting and apparently related theories. If a theory is to be properly used or tested, the theorist's implicit assumptions which form the boundaries of the theory must be understood. If new data contradicts an existing theory, the researcher needs to determine the reliability and accuracy of the data. Beyond that, one may need to check if the working assumptions, which define the boundaries of the existing theory, need to be changed. New theories may be connective or transformational. A given theory is said to be connective if it bridges the gap between two or more existing theories. A theory is said to be transformational if it results in a reassessment of preexisting theories in a new perspective (Bacharach, 1989).

Construct

Effective implementation of an organization's CIP requires not only information about the competitors, but also information on other environmental factors such as industry trends, legal and regulatory trends, international trends, technology developments, political developments, and economic conditions. The relative strength of the competitor can be judged accurately only by assessing it with respect to the factors listed above. In the increasingly complex and uncertain business environment, the external [environmental] factors are assuming greater importance in effecting organizational change. Therefore, the determination of CI information needs is based upon the firm's relative competitive advantage over the competitor assessed within the 'network' of 'environmental' factors.

Judd et al. (1991) stated that the theoretical construct is the starting point of all measurement [in organizational research]. The researcher defines the construct under observation in terms of its relationships with other theoretical constructs. This definition forms what is called the construct's nomological network: "the set of construct-to-construct relationships derived from the relevant theory and stated at an abstract, theoretical level." The construct's nomological net thus becomes the starting point for the operationalization of a construct [that is abstract] into variables [that are measurable]. In terms of the CIP analogy, the researcher (organization) defines the measurement (CI information needs) based upon the theoretical definition of the construct (competitor) within its nomological network (environmental network).

Researcher

Despite the increasing sophistication of CI tools and techniques, the most important role in a CIP remains that of the organization or its internal CI-unit [if it has one]. Once the CI needs have been defined, the CI-unit is responsible for collection, evaluation and analysis of raw data, and preparation, presentation, and dissemination of CI. The CI-unit may handle all the activities itself, or it may assign some tasks to an outside contractor. Often, decisions have to be made about assigning the tasks of data collection, and of data analysis and evaluation. This process can be compared to that of experimental research in which the researcher assigns different tasks to experimenters and observers.

The CI-unit has to decide upon the choice of sources of raw data. Should it use government sources or online databases, interviews or surveys, drive-bys or on-site observations? It also has to decide if and when to deploy 'shadowing' and defensive CI. Other decisions may involve the choice of specialized interest groups (such as academics, trade associations, consumer groups), private sector sources (such as competitors, suppliers, distributors, customers), or media (such as journals, wire services, newspapers, financial reports) as the sources of information. Very frequently, such issues involve balancing various constraints, such as those of time, finances, staffing, etc., and therefore are based upon individual judgement.

The researcher's role in organizational research is analogous to that of the CI-unit in the CIP model and is of primary importance in the conduct of the research. Based upon a previously existing theory or a new (connective or transformational) theory, the researcher develops a set of propositions that define the relationships among the concepts of interest. The concepts (constructs) are operationalized into variables that are measurable. The relationships among variables are defined in terms of hypotheses. Then, the researcher designs the research methodology to test the hypotheses by means of acquisition, analysis, and interpretation of meaningful data (Sekaran, 1992). The operationalization of constructs (abstractions) into variables [that are partial representations of the construct] is a process that is dependent upon the subjective judgement of the researcher (Bacharach, 1989).

Just as the CI-unit tries to establish the accuracy and validity of the data gathered in the CIP, the researcher in organizational research tries to ensure the reliability and validity of the measurements. Analogous to the CIP, in which all attempts are made to eliminate the various sources of misinformation such as false confirmations, disinformation, omissions, and anomalies, in organizational research the researcher tries to eliminate various sources of error. Measurement, the process of comparison, estimation, or judgement (Lorge, 1967), is also subject to various types of errors such as interpretive errors, variable errors, personal errors, and constant errors (Mursell, 1947). Considering the analogy to CIP, the researcher in organizational research tries to eliminate such measurement errors (Helmstadter, 1964).

Instrument

Different types of CI tools and techniques are available for different requirements of the CIP. Contacting government agencies can yield valuable data for the CIP, but may often require excessive lead time. Searching online databases is a faster method of finding competitive information, although it is more expensive. With increasing sophistication and affordability of information technology, this technique is expected to become less expensive. Database searches do not provide information that has not been released to the public or that has not yet been collected. Some types of data that are not widely available from databases can be procured by contacting the corporation itself or from investment community sources. Surveys can yield plenty of data about competitors and products, while interviews can provide more in-depth perspectives from a limited sample. Drive-by and on-site observations of the competitor's [full or empty] parking spaces, new construction-in-progress, customer service at retail outlets, volume and pattern of [suppliers' or customers'] trucks, etc. can yield useful CI information about the state of the competitor's business. Competitive benchmarking is used for comparing the organization's operations against those of the competitor. Defensive CI involves monitoring and analyzing one's own business activities as the competitors and outsiders see them. Reverse engineering of the competitor's products and services may yield important CI information about their quality and costs. In conclusion, not all CIP tools and techniques are suitable for all CI objectives; the CI-unit has to use judgement in determining the relevant CI needs and the most appropriate tools and techniques.

Selecting instruments from the thousands of existing scales (Stevens, 1946) and indexes for measuring social variables facilitates replication and accumulation of research findings (Miller, 1991). When an appropriate instrument is not available, the researcher has to modify an existing instrument or design a new instrument for the study. The researcher needs to ensure that the instrument satisfies the criteria of validity, reliability, standardization, objectivity, and precision (Helmstadter, 1964; Zikmund, 1991). Validity refers to the absence of constant errors, i.e., it is the degree to which an instrument and the rules for its use in fact measure what they purport to measure (Cannell & Kahn, 1968). Reliability implies the absence of variable errors, i.e., it is the consistency of the instrument in measuring a variable. Standardization defines the degree to which interpretive errors have been eliminated. Objectivity refers to the absence of personal bias on the part of the one who is doing the scoring. Sensitivity (or precision) of the instrument is its ability to accurately record variability in stimuli or responses. Intervening factors like social desirability and halo bias also need to be accounted for. Not all instruments are suited for all situations; the selection of the 'right' instrument for a specific situation is a very important judgmental choice. The researcher needs to be aware of the use [and disuse] of double-barreled questions, ambiguous questions, forced-choice questions, and open- and close-ended questions that go into questionnaires. For data generation in situations that involve issues of a sensitive nature or nonverbal behavior, telephonic or personal interviews (Cannell & Kahn, 1968) may be preferred over questionnaires. Certain situations may require the use of systematic observational methods (Weick, 1968) in which observations may be taken in the "fly on the wall" mode. Whatever the instrument of choice for a given study, the importance of pretesting cannot be overemphasized; pretesting may help early detection of problems and thus avoid undue expenditure of resources.

Reliability

In CI, reliability of data implies the 'believability' of the source of data based on its past performance. By analogy, in organizational research, reliability of the measuring instrument simply means the consistency of the measurement. An instrument may fail to be valid and still be reliable (consistently inaccurate), but the converse is not true. Reliability encompasses two separate concepts, homogeneity and stability (Loevinger, 1967). Stability of the measure is its ability to maintain its consistency over time under varying test conditions. Homogeneity of the items in the instrument that tap the same construct is representative of the internal consistency of the items - the items should be capable of independently measuring the same construct. The stability of an instrument may be tested by test-retest reliability or parallel-form reliability, while the internal consistency may be tested by interitem consistency reliability or split-half reliability. Another concept that is extensively used in the measurement of qualitative attributes is that of interjudge reliability, the consistency of the specific measurement between different individuals.
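
As a computational illustration of the internal-consistency estimates mentioned above, the following sketch computes Cronbach's alpha (a common interitem consistency coefficient, not named explicitly in this paper) and a Spearman-Brown corrected split-half coefficient for an invented set of responses to a hypothetical four-item scale.

```python
# Illustrative sketch (data invented): two internal-consistency estimates for a
# hypothetical four-item scale assumed to tap a single construct.
import numpy as np

# Rows = respondents, columns = items (1-5 ratings); values are assumptions.
items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
], dtype=float)

def cronbach_alpha(x: np.ndarray) -> float:
    """Interitem consistency: alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def split_half(x: np.ndarray) -> float:
    """Split-half reliability: correlate odd- and even-item half scores, then apply Spearman-Brown."""
    r = np.corrcoef(x[:, ::2].sum(axis=1), x[:, 1::2].sum(axis=1))[0, 1]
    return 2 * r / (1 + r)

print(f"alpha = {cronbach_alpha(items):.2f}, split-half = {split_half(items):.2f}")
```

Test-retest and interjudge reliability would follow the same correlational logic, applied to repeated administrations of the instrument or to scores assigned by different judges rather than to halves of a single administration.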

Validity

The objective of the Competitive Intelligence Program is to gather relevant information that is valid and accurate. Incomplete or inaccurate information may jeopardize the organization's CI efforts. There might be instances of false confirmation in which one source of data appears to confirm the data obtained from another source. In reality, there is no confirmation because one source may have obtained its data from the second source, or both sources may have received their data from a third common source. Or the data generated may be flawed because of disinformation, which is incomplete or inaccurate information designed to mislead the organization's CI efforts. Blowback may occur when the company's disinformation or misinformation that is directed at the competitor contaminates its own intelligence channels or information. In all such cases, the information gathered may be inaccurate or incomplete. The issue of validity in organizational measurement is discussed in the following paragraphs by using the analogy of the CIP.

In organizational measurement research, validity is defined as the extent to which an instrument and the rules for its use in fact measure what they purport to measure (Cannell & Kahn, 1968). The interpretation of validity used in this paper is largely based upon Loevinger's (1967) classic explanation of construct validity as the "measure of real traits."

Construct Validity

Construct validity is established by the degree to which the measure confirms a network of related hypotheses generated from a theory based on the concepts. If the measure behaves the way it is supposed to, in a pattern of intercorrelation with a variety of other variables, there is evidence of construct validity (Zikmund, 1991). Considering the analogy of the CIP, construct validity is analogous to what we called 'accuracy' - the extent to which the gathered information is complete and accurate so that it reflects the 'desired' information about the competitor.

Loevinger (1967) defined construct validity as the "whole of validity from a scientific point of view." Campbell (1967) distinguished between trait validity and nomological validity as the two types of construct validity. While nomological validity attempts to confirm predictions from the viewpoint of a formal theoretical network containing the concept of interest, trait validity considers theory only at the level of the single trait and does not deal with the interrelationships of constructs within the nomological network. Trait validity can be further defined in terms of convergent validity and discriminant validity. The variance in measurement is a result of interaction between the trait variance and method variance. High convergent validity is achieved if there is a high correlation between the results of measurement of the construct using different instruments. High discriminant validity is achieved if there is minimal correlation between the results of measurement of different constructs using the same instrument. Campbell and Fiske (1967) have suggested trait validity as a prerequisite for establishing construct validity. These concepts can be illustrated by using the analogy of the CIP.

For an effective implementation of its CIP, the organization requires not only information about the competitors, but also information on other environmental factors such as industry trends, legal and regulatory trends, international trends, technology developments, political developments, and economic conditions. The relative strength of the competitor can be judged accurately only by assessing it with respect to the factors mentioned above. The process of increasing the validity of the gathered information by determining CI information needs within the 'network' of 'environmental' factors is analogous to ensuring nomological validity in organizational measurement. CI information gathered from government sources, online databases, media, surveys, on-site observations, etc. should provide consistent and coherent information about a specific competitor. This is analogous to convergent validity in the organizational measurement model. Moreover, CI information gathered from (say) a specific government source should be clearly distinguishable for different competitors; otherwise, ambiguity (overlap) of information for different competitors may result in erroneous conclusions. This is analogous to discriminant validity in the measurement process.
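
The statistical logic behind convergent and discriminant validity can also be sketched computationally. The following minimal simulation, in the spirit of Campbell and Fiske's (1967) multitrait-multimethod comparison, uses invented latent traits and measurement noise to show the expected pattern: a high correlation between two different measures of the same construct (convergent validity) and a low correlation between measures of different constructs obtained by the same method (discriminant validity).

```python
# Illustrative simulation (all values invented), in the spirit of Campbell and
# Fiske's multitrait-multimethod comparison of convergent and discriminant validity.
import numpy as np

rng = np.random.default_rng(1)
n = 200
trait_a = rng.normal(size=n)   # latent construct A (e.g. a hypothetical 'satisfaction')
trait_b = rng.normal(size=n)   # a different, independent latent construct B

# Two hypothetical instruments ('methods') measuring each construct with noise.
a_method1 = trait_a + rng.normal(0.0, 0.4, n)
a_method2 = trait_a + rng.normal(0.0, 0.4, n)
b_method1 = trait_b + rng.normal(0.0, 0.4, n)

# Convergent validity: same construct measured by different instruments -> high correlation.
convergent_r = np.corrcoef(a_method1, a_method2)[0, 1]
# Discriminant validity: different constructs measured by the same instrument -> low correlation.
discriminant_r = np.corrcoef(a_method1, b_method1)[0, 1]

print(f"convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")
```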

The CIP analogy can also be used to explain Loevinger's (1967) classification of construct validity into three distinct components - the substantive component, the structural component, and the external component. Substantive validity may be defined as the extent to which the content of the items included in the instrument can be accounted for in terms of the trait believed to be measured and the context of measurement (Loevinger, 1967). Substantive validity is composed of two elements - content validity and empirical keying. Content validity of an instrument is the extent to which it provides adequate coverage of the construct under study. This implies that all aspects of the attribute being measured are considered by the instrument. This is analogous to gathering CI information on all critical aspects of the competitor's activities - finance, marketing, research, etc. Empirical keying implies that the pool of items selected to 'tap' the construct also contains items that lie near the boundary of the construct, besides including items that lie within the domain of the construct. This is analogous to gathering CI information not only on the competitor, but also on its related subsidiaries or divisions. Structural validity refers to the extent to which structural relations between items on the instrument parallel the structural relations of other "manifestations" of the construct being measured (Loevinger, 1967). As mentioned earlier, different CI sources are used for generating different types of CI information about the competitor. The specific data source should 'match' the specific CI information being sought. The substantive and structural components together comprise internal validity, which encompasses issues related to individual items on the instrument. External validity deals primarily with the correlation of the item responses to the total score, the relation with other test scores and non-test behavior, and distortions and biases. Considering the CIP analogy, the CI data gathered by using various CI tools and techniques need to be integrated into consistent and coherent CI information.

5. Conclusion

This paper delineates a conceptual model of the Competitive Intelligence Program to facilitate the comprehension of the critical issues relevant to the development of valid instruments for measuring organizational constructs. The reliability and validity of the information gathered about the measured 'construct' are the 'connecting themes' between the CIP model and the organizational measurement process. Just as the organization cannot achieve an accurate understanding of its competitive strengths and weaknesses (against the competitor's) unless it gives due importance to the reliability and validity issues, the researcher cannot gain an accurate understanding of the construct or the related theory unless he utilizes appropriate instruments that are valid as well as reliable. Human judgement plays an important role in both processes - just as the organization needs to choose the 'correct' CI tools and techniques for its specific CI needs, the researcher should select the appropriate instrument for the specific measurement. In either process, the actor - the organization in the CIP model and the researcher in the organizational measurement process - operates under constraints of time, finances, staffing, etc.

A compromise between the resource expenditure and the 'completeness' of desired information is a judgmental choice of the actor. The interaction of the organization's CIP with the competitor is analogous to the interaction of the researcher's instruments with the 'concept' being measured. A complete understanding of the competitor can be achieved within the 'network' of environmental factors. By analogy, an accurate and complete understanding of the construct can only be gained by 'measuring' it within its nomological network. The dynamics and the interrelationships of the CIP model facilitate an integrated understanding of the concepts that comprise the process of developing valid instruments for organizational measurement.

The conceptual model uses the analogy of a Competitive Intelligence Program to capture most of the critical issues relevant to the measurement of organizational constructs. An elaborate explanation of concepts such as pretesting, debriefing, distortions and biases, and normative and ipsative measurements was forgone because of the limited scope of the analogy. Moreover, an elaborate explanation of statistical issues was considered beyond the scope of the model. The model can be refined further by depicting the relationship of the three aspects of construct validity (Loevinger, 1967) with the corresponding stages in the instrument development process.

The integrated framework of validity and reliability issues proposed by this model could be useful for future research in the measurement of organizational constructs. Just as the competitor's competitive advantage may change due to the impact of environmental factors, organizational constructs may also evolve with time. Therefore, the measurement of organizational constructs needs to be a 'dynamic' and ongoing process to provide the correct [temporal] picture of the construct. The proposed model could provide a basis for studying the refinement and modification of instruments and measurements as the corresponding organizational constructs evolve. Firstly, there is a need to continuously assess the validity of the instruments and measurements as business conditions change. Secondly, since the definition of the 'competitor' may change over time due to environmental factors like globalization, the evolving definition of the construct requires a continual assessment of existing instruments and the invention of new instruments to tap the relevant constructs as they evolve. In other words, measurement of organizational constructs, just like CIP, is a "continuously evolving process" (Prescott and Gibbons, 1993). Thirdly, a systemic approach to organizational measurements needs to be developed. How the dynamic measurements of disparate constructs such as competitors and environmental factors, and the effects of their interactions with each other and with the organization, can be integrated into a comprehensive "continuously evolving" measurement process remains an important and interesting issue.

References

Bacharach, S.B. (1989). Organizational Theories: Some Criteria for Evaluation. Academy of Management Review, 14(4), 496-515.

Bailey, K.D. (1987). Methods of Social Research, 3rd ed. New York: The Free Press.

Bechtoldt, H.P. (1967). Construct Validity: A Critique. In D.N. Jackson & S. Messick (Eds.), Problems in Human Assessment. New York: McGraw Hill.

Bryman, A. (1989). Research Methods and Organization Studies, London: Unwin Hyman.

Campbell, D.T. (1967). Recommendations for APA Test Standards regarding Construct, Trait, or Discriminant Validity. In D.N. Jackson & S. Messick (Eds.), Problems in Human Assessment. New York: McGraw Hill.

Campbell, D.T. & Fiske, D.W. (1967). Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix. In D.N. Jackson & S. Messick (Eds.), Problems in Human Assessment. New York: McGraw Hill.

Cannell, C.F. & Kahn, R.L. (1968). Experimentation in social psychology. In G. Lindzey and E. Aronson (Eds.). Handbook of Social Psychology, Vol. 2, Reading, MA: Addison Wesley.

Cronbach, L.J. & Meehl, P.E. (1967). Construct Validity in Psychological Tests. In D.N. Jackson & S. Messick (Eds.), Problems in Human Assessment. New York: McGraw Hill.

Hall, C.S. & Lindzey, G. (1957). Theories of Personality, New York: Wiley.

Helmstadter, G.C. (1964). Principles of Psychological Measurement. N.J.: Prentice-Hall.

Holsti, O.R. (1968). Content Analysis. In G. Lindzey and E. Aronson (Eds.). Handbook of Social Psychology, Vol. 2, Reading, MA: Addison Wesley.

Jackson, D.N. & Messick, S. (1967). Problems in Human Assessment. New York: McGraw Hill.

Judd, C.M., Smith, E.R., & Kidder, L.H. (1991). Research methods in social relations, 6th ed. Orlando, FL: Holt, Rinehart and Winston.

Lindzey, G. & Aronson, E. (1968). Handbook of Social Psychology, Vol. 2. Reading, MA: Addison Wesley.

Loevinger, J. (1967). Objective Tests as Instruments of Psychological Theory. In D.N. Jackson & S. Messick (Eds.), Problems in Human Assessment. New York: McGraw Hill.

Lorge, I. (1967). The Fundamental Nature of Measurement. In D.N. Jackson & S. Messick (Eds.), Problems in Human Assessment. New York: McGraw Hill.

Malhotra, Y. (1993). Competitive Intelligence Programs: An Overview.

McGonagle, J.J. & Vella, C.M. (1990). Outsmarting the Competition: Practical Approaches to Finding and Using Competitive Information. Naperville, IL: Sourcebooks.

Miller, D.C. (1991). Selected Sociometric Scales and Indexes. In Handbook of Research Design and Social Measurement. 5th ed. Newbury Park, CA: Sage.

Mursell, J. (1947). Psychological Testing. New York: David McKay.

Prescott, J.E. & Gibbons, P.T. (1993). Global Competitive Intelligence: An Overview. In J.E. Prescott, & P.T. Gibbons (Eds.), Global Perspectives on Competitive Intelligence. Alexandria, VA: Society of Competitive Intelligence Professionals.

Sekaran, U. (1992). Research Methods for Business: A Skill-Building Approach, 2nd ed. New York: John Wiley.

Stevens, S.S. (1946). On the Theory of Scales of Measurement. Science, 103, 677-680.

Vella, C.M. & McGonagle, J.J. (1987). Competitive Intelligence in the Computer Age. New York: Quorum Books.

Weick, K.E. (1968). Systematic Observational Methods. In G. Lindzey and E. Aronson (Eds.). Handbook of Social Psychology, Vol. 2, Reading, MA: Addison Wesley.

Zikmund, W.G. (1991). Business Research Methods, 3rd ed. Orlando, FL: Dryden Press.



