Open Access Repository for Measurement Instruments


The Survey Attitude Scale

  • Authors: de Leeuw, E., Hox, J., Silber, H., Struminskaya, B., Vis, C.
  • In ZIS since: 2022
  • DOI: https://doi.org/10.6102/zis325_exz
  • Abstract:

    Declining response rates worldwide have stimulated interest in understanding what may be influencing this decline and how it varies across countries and survey populations. In this paper, we describe the development and validation of a short 9-item survey attitude scale that measures three important constructs, thought by many scholars to be related to decisions to participate in surveys, that is, survey enjoyment, survey value, and survey burden. The survey attitude scale is based on a literature review of earlier work by multiple authors. Our overarching goal with this study is to develop and validate a concise and effective measure of how individuals feel about responding to surveys that can be implemented in surveys and panels to understand the willingness to participate in surveys and improve survey effectiveness. The research questions relate to factor structure, measurement equivalence, reliability, and predictive validity of the survey attitude scale. The data came from three probability-based panels: the German GESIS and PPSM panels and the Dutch LISS panel. The survey attitude scale proved to have a replicable three-dimensional factor structure (survey enjoyment, survey value, and survey burden). Partial scalar measurement equivalence was established across three panels that employed two languages (German and Dutch) and three measurement modes (web, telephone, and paper mail). For all three dimensions of the survey attitude scale, the reliability of the corresponding subscales (enjoyment, value, and burden) was satisfactory. Furthermore, the scales correlated with survey response in the expected directions, indicating predictive validity.

  • Language of documentation: English
  • Language of items: English, Dutch, German
  • Number of items: 9
  • Survey mode: CATI, CASI (web), paper-pencil (mail)
  • Completion time: duration in the web mode, including the time for the instruction: Mean = 1 minute and 21 seconds, Median = 1 minute and 11 seconds, 25th percentile = 55 seconds, 75th percentile = 1 minute and 36 seconds.
  • Reliability: Cronbach’s alpha = .54 to .80; McDonald’s omega = .54 to .80
  • Validity: evidence for construct and criterion validity
  • Construct: international survey attitude scale (survey enjoyment, survey value, survey burden)
  • Keywords: survey value, survey burden, survey enjoyment, online panel
  • Item(s) used in population surveys: yes
  • Website URL: statistical syntax: https://doi.org/10.5281/zenodo.7111310
  • Data archive URL: GESIS panel: https://doi.org/10.4232/1.12658; LISS panel: https://www.dataarchive.lissdata.nl/study_units/view/15; PPSM panel: https://doi.org/10.5281/zenodo.7111310
  • Development status: validated, standardized
  • Original publication: https://doi.org/10.1186/s42409-019-0012-x
    • Instruction:

      Specific Instruction Survey Attitude Scale:

      We would like to ask you some questions about surveys in general.

      Below are nine general statements regarding what people may like or dislike about surveys. To what extent do you agree or disagree with each of these statements?

      Items

      Table 1

      English, Dutch, and German Items of the International Survey Attitude Scale

      | No. | Item | Polarity | Subscale |
      |-----|------|----------|----------|
      | *English* | | | |
      | E1 | I really enjoy responding to questionnaires through the mail or Internet. | + | Survey enjoyment |
      | E2 | I really enjoy being interviewed for a survey. | + | Survey enjoyment |
      | E3 | Surveys are interesting in themselves. | + | Survey enjoyment |
      | V1 | Surveys are important for society. | + | Survey value |
      | V2 | A lot can be learned from information collected through surveys. | + | Survey value |
      | V3 | Completing surveys is a waste of time. | – | Survey value |
      | B1 | I receive far too many requests to participate in surveys. | + | Survey burden |
      | B2 | Opinion polls are an invasion of privacy. | + | Survey burden |
      | B3 | It is exhausting to answer so many questions in a survey. | + | Survey burden |
      | *Dutch* | | | |
      | E1 | Ik vind het echt leuk om vragenlijsten te beantwoorden, schriftelijk of via internet. | + | Survey enjoyment |
      | E2 | Ik vind het echt leuk om geïnterviewd te worden voor een onderzoek. | + | Survey enjoyment |
      | E3 | Vragenlijstonderzoek op zich is interessant. | + | Survey enjoyment |
      | V1 | Vragenlijstonderzoek is belangrijk voor de maatschappij. | + | Survey value |
      | V2 | Met gegevens uit vragenlijstonderzoek kan men veel wijzer worden. | + | Survey value |
      | V3 | Vragenlijsten invullen voor onderzoek is tijdverspilling. | – | Survey value |
      | B1 | Ik krijg veel te veel verzoeken om deel te nemen aan enquêtes. | + | Survey burden |
      | B2 | Opiniepeilingen zijn een schending van de privacy. | + | Survey burden |
      | B3 | Het is vermoeiend om veel vragen te beantwoorden bij een enquête. | + | Survey burden |
      | *German* | | | |
      | E1 | Es macht mir Spaß, Fragebögen zu beantworten, die per Post oder Internet zugeschickt werden. (4) | + | Survey enjoyment |
      | E2 | Es macht mir Spaß, für Umfragen interviewt zu werden. (6) | + | Survey enjoyment |
      | E3 | Ich finde Umfragen an sich interessant. (7) | + | Survey enjoyment |
      | V1 | Ich bin der Meinung, dass Umfragen für die Gesellschaft wichtig sind. (1) | + | Survey value |
      | V2 | Ich finde, aus Umfragen können wichtige Erkenntnisse gewonnen werden. (2) | + | Survey value |
      | V3 | Meiner Meinung nach ist die Teilnahme an Umfragen Zeitverschwendung. (3) | – | Survey value |
      | B1 | Ich werde viel zu oft darum gebeten, an Umfragen teilzunehmen. (8) | + | Survey burden |
      | B2 | Ich empfinde Meinungsumfragen als einen Eingriff in meine Privatsphäre. (5) | + | Survey burden |
      | B3 | Ich finde es anstrengend bei einer Befragung viele Fragen zu beantworten. (9) | + | Survey burden |

      Note. The GESIS panel used a slightly different order for the German-language version, indicated in parentheses. V3 is negatively formulated (see Scoring).

       

      Response specifications

      A 7-point response scale was used; this scale was endpoint labelled (English: 1: totally disagree, 7: totally agree; Dutch: 1: helemaal mee oneens, 7: helemaal mee eens; German: 1: Stimme überhaupt nicht zu, 7: Stimme voll und ganz zu).

       

      Scoring

      The survey attitude scale consists of three subscales: enjoyment, value, and burden. One question in the value scale (V3, waste of time) is negatively formulated. The responses to this question were recoded (i.e., 1 = 7, 2 = 6, 3 = 5, 4 = 4, 5 = 3, 6 = 2, 7 = 1), so a high score on V3 now indicates a positive attitude toward value. A high value on the final subscales enjoyment and value is an indicator of a positive survey attitude, while a high value on the subscale burden indicates a negative attitude.

      The subscales are based on simple sum scores. For handling missing data, ideally full information maximum likelihood estimation should be used. However, users who do not have access to advanced statistical programs can use item mean substitution. This works as follows: if only one of the three items in a subscale has a missing value, replace it with the mean of the other two items. If there are more missing values (i.e., 2 or 3 in a subscale), the whole subscale value is set to missing.

      Also, a global attitude scale can be calculated over all nine questions. For this global attitude scale, the responses to the three burden questions were recoded, resulting in a scale where a high score indicates a generally positive attitude toward surveys. In general, we do not recommend this procedure, and we advise using the three subscales as this provides more detailed insights. For the global attitude scale, a maximum of two items were allowed to be missing.
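The scoring rules above (recoding V3, simple sum scores per subscale, and single-item mean substitution) can be sketched in code. The function and item names below are illustrative; the released statistical syntax (see the Zenodo link above) remains authoritative.

```python
def score_subscale(items):
    """Sum-score a 3-item subscale with single-item mean substitution.

    `items` holds the three responses (1-7); None marks a missing value.
    One missing item is replaced by the mean of the other two; with two
    or more missing items the subscale is set to missing (None).
    """
    missing = items.count(None)
    if missing >= 2:
        return None
    observed = [x for x in items if x is not None]
    if missing == 1:
        observed.append(sum(observed) / len(observed))
    return sum(observed)


def score_survey_attitude(resp):
    """Score enjoyment, value, and burden from a dict of item responses.

    Keys E1..B3 are illustrative variable names. V3 ("waste of time") is
    negatively formulated and recoded as 8 - x before summing, so a high
    value score indicates a positive attitude.
    """
    v3 = None if resp["V3"] is None else 8 - resp["V3"]
    return {
        "enjoyment": score_subscale([resp["E1"], resp["E2"], resp["E3"]]),
        "value": score_subscale([resp["V1"], resp["V2"], v3]),
        "burden": score_subscale([resp["B1"], resp["B2"], resp["B3"]]),
    }


scores = score_survey_attitude(
    {"E1": 5, "E2": 4, "E3": 6, "V1": 7, "V2": 6, "V3": 2,
     "B1": 3, "B2": None, "B3": 2}
)
# value: 7 + 6 + (8 - 2) = 19; burden: B2 imputed as (3 + 2) / 2 = 2.5
```

Note that recoding V3 happens before the missing-data step, matching the order described above.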

       

      Application field

      In times of declining response rates and decreasing trust in survey results, it is especially important to have a well-tested, documented, and validated measure of attitudes toward surveys. This instrument should be short to make it easy to implement in ongoing surveys. Using data from two countries, this documentation describes the development and validation of the 9-item survey attitude scale, which covers three dimensions of survey attitude: survey enjoyment (3 items), survey value (3 items), and survey burden (3 items). The survey attitude scale is a valid, reliable, and easy-to-implement tool for measuring attitudes towards surveys that can be used to investigate constructs such as survey climate, panel attrition, and survey fatigue.

      The survey attitude scale was administered in web-based (online CASI), CATI, and paper-pencil (mail) modes. The first (Dutch) version of the survey attitude scale was cognitively pretested, using a paper version and 5 respondents who varied in age and education. All respondents understood the questionnaire and instruction. They also understood the end-point labelling and showed no clear preference for fully labelled over end-point labelled scales. As end-point labelling has advantages when a scale is used in multiple modes (e.g., telephone and web), we decided to use end-point labelling.

      Unfortunately, no paradata are available for the CATI and paper-pencil (mail) versions. However, using the time-stamp data of the GESIS online panel, we could estimate the overall time it took respondents to answer the nine-item survey attitude scale. The introduction to the survey attitude scale and all nine items were on one single page. The mean duration for completing this page (intro plus all nine items) is 1 minute and 21 seconds, the median is 1 minute and 11 seconds, the 25th percentile is 55 seconds, and the 75th percentile is 1 minute and 36 seconds.[1]



      [1] Outliers who took more than 5 minutes to answer the page were excluded.
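A summary of this kind can be reproduced from page-duration paradata. The sketch below uses Python's statistics module, the same five-minute outlier cutoff described in the footnote, and an illustrative (not the panel's) duration list.

```python
import statistics


def duration_summary(durations_sec, cutoff=300):
    """Summarize page durations in seconds, dropping outliers > cutoff.

    Returns the mean, median, and 25th/75th percentiles, i.e. the
    statistics reported for the GESIS online mode.
    """
    kept = [d for d in durations_sec if d <= cutoff]
    # quantiles(n=4) returns the three quartile cut points.
    q1, q2, q3 = statistics.quantiles(kept, n=4)
    return {
        "mean": statistics.fmean(kept),
        "median": q2,
        "p25": q1,
        "p75": q3,
    }


# 600 s exceeds the 300 s cutoff and is excluded before summarizing.
print(duration_summary([55, 60, 71, 81, 96, 120, 600]))
```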

    In psychology, the theory of reasoned action links attitudes to behaviour. According to this theory, action is guided by behavioural intention, which is influenced by perceived norms and subjective attitudes (Ajzen and Fishbein, 1980). In turn, attitudes are considered evaluative beliefs about an attitude object. Consistent with this background, and in contrast to existing longer instruments that concentrate on measuring a general survey attitude (e.g., Hox, de Leeuw, and Vorst, 2015; Stocké and Langfeldt, 2004), we aimed at a multidimensional measurement instrument.

    An international literature search on empirical studies that investigated general attitudes and opinions on surveys resulted in three clear theoretical dimensions: two positive and one negative dimension could be distinguished that have recognizable roots in the survey methodology literature (Dillman, Smyth, and Christian, 2014; Groves, 1989; Groves and Couper, 1998; Stoop et al., 2010). The first and second dimensions describe attitudes that guide the behavioural intentions of potential respondents positively (Cialdini, 1984). The first dimension reflects the individual perception of surveys as a positive experience: survey enjoyment, as discussed by Cialdini (1984) and reflected in the work of Puleston (2012) on gamification to increase the enjoyment of the survey experience. The second dimension points to a positive survey climate and emphasizes the subjective importance and value of surveys, as discussed by Rogelberg, Fisher, Maynard, Hakel, and Horvath (2001). The third dimension indicates a negative survey climate: surveys are perceived by respondents as a burden, which has a negative influence on motivation and participation (Goyder, 1986; Schleifer, 1986). Survey designers and methodologists have to try to counteract this negative attitude by decreasing the perceived burden (Dillman, 1978; Puleston, 2012).

    These three dimensions are fundamental building blocks in theories on survey participation and nonresponse and are seen as important indicators of a deteriorating survey climate (Barbier, Loosveldt, and Carton, 2016; Loosveldt and Joye, 2016; Singer, van Hoewyk, and Maher, 1998). For instance, both social exchange theory (Dillman, 1978) and leverage-saliency theory (Groves, Singer, and Corning, 2000) emphasize that people are more willing to participate in a survey when the positive aspects of the survey are maximized and the negative aspects are minimized (Dillman et al., 2014). These theories emphasize that for a positive decision to cooperate in a survey, the perceived benefits should outweigh the perceived costs. This is achieved when a survey is seen as pleasant and fun (survey enjoyment) and useful (survey value), and when it is associated with minimal costs (low survey burden).

    Previous research that investigated attitudes toward surveys used one-dimensional to five-dimensional scales when measuring survey attitudes (Hox et al., 1995; Loosveldt and Storms, 2008; Rogelberg et al., 2001; Stocké and Langfeldt, 2004; Stocké, 2006, 2014). Hox et al. (1995) proposed a one-dimensional general attitude towards surveys, based on eight items. Stocké and Langfeldt (2004) and Stocké (2006) used a one-dimensional measure of general survey attitude, based on 16 items. Later, Stocké (2014) proposed a three-dimensional survey attitude measure with scales measuring survey value, survey reliability, and survey burden. Rogelberg et al. (2001) discerned two dimensions: survey enjoyment and survey value, based on 6 items. Finally, Loosveldt and Storms (2008) suggested five dimensions (survey value, survey cost, survey enjoyment, survey reliability, and survey privacy) based on a survey attitude questionnaire with nineteen items.

    All studies on survey attitudes involved the positive dimension “survey value”, while the importance of “survey enjoyment” was noted by Rogelberg et al. (1997) and Loosveldt and Storms (2008). The concept “survey burden” mentioned by Stocké (2014) was referred to as “survey costs” in the work of Loosveldt and Storms (2008). These three common dimensions, survey enjoyment, survey value, and survey burden, are also important concepts in theories on survey participation and nonresponse. Therefore, they were chosen as the three main constructs in the survey attitude scale.

    Item generation and selection

    For each construct in the survey attitude scale (i.e., enjoyment, value, and burden), we selected three questions that performed well in previous empirical research. Three questions per construct were selected because this is the minimum needed to identify a construct in a confirmatory factor model (Bollen, 1989, p. 244) and to establish measurement equivalence over countries and modes. As the survey attitude scale was developed for regular use in both single-mode and mixed-mode surveys, we followed the recommendations for mixed-mode questionnaire construction (Dillman et al., 2014; Dillman and Edwards, 2016) and used a seven-point disagree/agree response scale with end-point labelling.

    Survey enjoyment

    In studies on nonresponse and survey attitudes, statements referring to enjoyment, such as “I really enjoy responding”, are frequently posed (Cialdini, Braver, and Wolf, 1991; Hox et al., 1995; Loosveldt and Storms, 2008; Rogelberg et al., 2001). As our goal was to develop a general survey attitude scale that could also be used in mixed-mode studies, we included two questions on enjoyment (one referring to mail and online questionnaires, and one referring to interviews). Besides direct emotional enjoyment, need for cognition can act as intrinsic motivation (Stocké, 2006). Thus, we added Stocké’s question on interest in surveys to the subscale on survey enjoyment. A similar question on survey interest was used by Hox et al. (1995) and Loosveldt and Storms (2008).

    Survey value

    Salience, relevance, and usefulness are all important for survey participation, and emphasizing these aspects plays an important role in theories of persuasion (Cialdini, 1984; Cialdini et al., 1991; Dillman, 1978; Groves, Cialdini, and Couper, 1992; Groves et al., 2000). From the literature on survey attitudes, we therefore selected a question on the importance of surveys for society that was used by multiple researchers in this field (i.e., Cialdini et al., 1991; Hox et al., 1995; Stocké, 2006) and a second question on the usefulness of the information gathered by surveys from Singer et al. (1998), which was also used by Rogelberg et al. (2001) and Loosveldt and Storms (2008). We also added a negatively formulated question on surveys as “a waste of time,” as an indicator of survey relevance. This question was based on the work of Rogelberg et al. (2001), Schleifer (1986), and Singer et al. (1998); a similar question was also used by Hox et al. (1995) and Loosveldt and Storms (2008).

    Survey burden

    According to Roper (1986) and Cialdini et al. (1991), an important aspect of the perceived survey burden is the number of requests to participate that a person receives. Thus, we included a question on receiving too many requests in the subscale survey burden. This question was used in previous research on survey attitudes by Cialdini et al. (1991) and Hox et al. (1995). In addition, Stocké (2006) emphasized survey length as an indicator of burden, and we added a question on this. Finally, Schleifer (1986) and Goyder (1986) pointed out the importance of privacy concerns; thus, we included a question on the invasion of privacy. Loosveldt and Storms (2008) used three slightly different questions to tap privacy as a sub-dimension. As our goal was to construct a brief survey attitude scale, we followed Schleifer (1986) and Goyder (1986) and used only one question on the invasion of privacy as part of the subscale survey burden.

     

    Translation

    The master questionnaire was developed in English; for the full text of the nine questions and references see Table 2. This master questionnaire was translated into Dutch and German. The translations were done by bilingual survey experts and checked with the original developer of the English master questionnaire and with senior staff of online panels in the Netherlands and Germany.

    Table 2

    English version of Survey Attitude questions followed by references to publications on which the respective question was based

    | No. | Item | Source |
    |-----|------|--------|
    | 1 | I really enjoy responding to questionnaires through the mail or Internet. | Cialdini et al. (1991); Hox et al. (1995); Rogelberg et al. (2001) |
    | 2 | I really enjoy being interviewed for a survey. | Cialdini et al. (1991); Hox et al. (1995); Rogelberg et al. (2001) |
    | 3 | Surveys are interesting in themselves. | Hox et al. (1995); Loosveldt and Storms (2008); Stocké (2006) |
    | 4 | Surveys are important for society. | Cialdini et al. (1991); Hox et al. (1995); Stocké (2006) |
    | 5 | A lot can be learned from information collected through surveys. | Rogelberg et al. (2001); Singer et al. (1998) |
    | 6 | Completing surveys is a waste of time. | Hox et al. (1995); Loosveldt and Storms (2008); Rogelberg et al. (2001); Schleifer (1986); Singer et al. (1998) |
    | 7 | I receive far too many requests to participate in surveys. | Cialdini et al. (1991); Hox et al. (1995) |
    | 8 | Opinion polls are an invasion of privacy. | Goyder (1986); Loosveldt and Storms (2008); Schleifer (1986) |
    | 9 | It is exhausting to answer so many questions in a survey. | Stocké (2006) |

     

    Samples

    For the Netherlands, the data were collected online in the then newly established LISS panel from May to August 2008. The LISS panel is a probability-based online panel of approximately 4,500 Dutch households (7,000 individuals). It was established in autumn 2007 at Tilburg University and funded by a grant from the Dutch Science Foundation (NWO). The original recruitment was based on a random nationwide sample of addresses drawn from the community registers by Statistics Netherlands. The response to the recruitment interview was 75%, of which 84% (63% of the gross sample) were willing to take part in the panel; finally, 48% of the gross sample became active panel members (Scherpenzeel and Das, 2011). Households with and without Internet access were recruited; those without were provided with a ‘simple PC’ and an Internet connection. Panel members complete a questionnaire of 15–30 minutes each month. Every year, they also complete a ‘core’ questionnaire. This longitudinal CORE study provides a wide range of data on the panel members that can be combined with the data of the individual ad-hoc studies. The survey attitude scale was part of the core questionnaire from 2008 to 2011. Panel members are paid for completing a questionnaire, based on the estimated average completion time (7.50 euro for 30 minutes). The present study used the survey attitude scale from the first wave of the core questionnaire; data were collected from 6,808 individuals (wave response 78.1%). For more information on the LISS panel, see https://www.lissdata.nl/about-panel.

    For Germany, data were collected in spring 2009 during recruitment interviews for the probability-based mixed-mode PPSM panel. Panel members could be contacted through different modes: telephone only (4,900), telephone and online (1,600), or online only (100). The panel was established in spring/summer 2009 and consisted of approximately 6,600 persons in Germany. It was part of the Priority Program on Survey Methodology (PPSM) at the University of Bremen and was funded by the German Research Foundation (see also http://www.survey-methodology.de). The original recruitment was based on a random nationwide sample of telephone numbers (random last digit, both landline and mobile). Selection criteria were being 18 or older and entitled to vote in Germany. The response (completed interviews) was 13.6%, a typical response rate for telephone surveys in Germany at the time; partial interviews added another 0.9% (Engel, 2015). Panel members completed questionnaires regularly every 4 months and did not receive any payment. The study was longitudinal in character and served a scientific purpose. The survey attitude scale was part of the recruitment interview, which was administered by telephone (CATI) and took 20 minutes on average. In total, data were collected from 6,200 individuals during that interview.

    The second Germany-based data collection took place in 2014 in the GESIS panel. The GESIS panel is a probability-based mixed-mode (online and postal mail) panel of the general population in Germany, consisting of German-speaking individuals living in private households who were aged 18–70 at the time of recruitment. The GESIS panel was recruited in 2013 via face-to-face interviews, based on a sample drawn from the German federal states’ central population registers. The response to the face-to-face recruitment interview was 35.5%; 81.7% of those who responded to the recruitment interview were willing to participate in the first self-administered welcome survey, of whom 79.5% became active panel members (Bosnjak et al., 2018). Thus, the active panel at the time of recruitment consisted of 4,938 respondents. Since then, the panel has undergone two refreshments (in 2016 and in 2018) and in 2018 consisted of about 4,400 respondents. About 67% of respondents complete the surveys online, while about 33% respond via mail. The field period for each wave is set at two months, with six panel waves per year. The questionnaires take about 20 minutes to complete. Every bimonthly questionnaire contains questions from the longitudinal core study that are repeated every year. The survey attitude scale has been part of the core-study module “Panel survey participation evaluation, survey mode preferences”, implemented in the last wave of each year since 2014, the first year of the panel’s operation. In total, 4,344 respondents were invited, of whom 3,775 completed the survey attitude scale (wave response: 88.7%) (Struminskaya, Schaurer, and Enderle, 2015). For more information about the GESIS panel, see http://www.gesis-panel.org.

     

    Item analyses

    As the samples in this study are large, model fit was evaluated by three established fit indices: CFI, TLI, and RMSEA. Generally recognized criteria are that for the CFI and TLI a value of 0.90 indicates acceptable fit and values of 0.95 and higher indicate good fit; for the RMSEA, values below 0.08 indicate acceptable fit and values below 0.05 indicate good fit (Kline, 2016).

    The basic theoretical model is a confirmatory factor model with three factors (enjoyment, value, and burden), with questions loading only on their intended factor. In a preliminary analysis, we checked whether a single factor indicating a general survey attitude would suffice. We used Mplus 8.2 with robust maximum likelihood estimation (Muthén and Muthén, 2017). The single-factor model was clearly rejected in all three samples; the fit indices were far from their acceptable values. Next, the theoretical model was estimated separately in all three samples. The theoretical three-factor model fitted moderately well. Fit indices were: for the GESIS data, χ2 (df = 24) = 653.3, CFI = .92, TLI = .88, RMSEA = .08; for the LISS data, χ2 (df = 24) = 1381.8, CFI = .91, TLI = .84, RMSEA = .10; and for the PPSM data, χ2 (df = 24) = 1255.3, CFI = .90, TLI = .86, RMSEA = .09. In all three analyses, modification indices suggested the same two additional loadings: enjoyment question 3 (surveys are interesting) received an additional loading on the value factor, and value question 3 (surveys are a waste of time) received an additional loading on the burden factor. This model fitted very well in all three panels: for the GESIS panel data, χ2 (df = 22) = 102.8, CFI = .99, TLI = .98, RMSEA = .03; for the LISS panel data, χ2 (df = 22) = 350.4, CFI = .99, TLI = .98, RMSEA = .03; and for the PPSM panel data, χ2 (df = 22) = 137.1, CFI = .99, TLI = .99, RMSEA = .03. Figure 1 depicts the modified model.
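For readers who want to check such values, the three indices follow standard formulas (Kline, 2016) based on the model chi-square, the baseline (independence) model chi-square, and the sample size. The baseline chi-squares are not reported here, so the baseline figures in the sketch below are hypothetical.

```python
from math import sqrt


def fit_indices(chi2, df, chi2_base, df_base, n):
    """CFI, TLI, and RMSEA from chi-square statistics (standard formulas)."""
    # RMSEA: misfit per degree of freedom, scaled by sample size.
    rmsea = sqrt(max(chi2 / df - 1, 0) / (n - 1))
    # CFI: improvement of the model over the baseline model.
    cfi = 1 - max(chi2 - df, 0) / max(chi2_base - df_base, chi2 - df, 0)
    # TLI: baseline comparison that penalizes model complexity.
    tli = ((chi2_base / df_base) - (chi2 / df)) / ((chi2_base / df_base) - 1)
    return cfi, tli, rmsea


# GESIS modified model (chi2 = 102.8, df = 22, listwise N = 3,655);
# the baseline chi2 = 10000 with df = 36 is a hypothetical placeholder.
cfi, tli, rmsea = fit_indices(102.8, 22, 10000.0, 36, 3655)
# rmsea reproduces the reported .03
```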

     

    Figure 1. Final factor model for the survey attitude scale

     

    Thus, the factor structure of the survey attitude scale was established using data from three probability-based panels in two countries. In the analyses reported here, there were two cross-loadings. One enjoyment question (surveys are interesting) also had a loading on the value factor, and one value question (surveys are a waste of time) had a loading on the burden factor. These double loadings make sense: when a survey is evaluated as “interesting,” it is usually also perceived to be valuable, and when a survey is evaluated as “a waste of time,” it can be perceived as burdensome. This factor structure was replicated in all of the three panels, GESIS, LISS, and PPSM.

     

    Item parameters

    Below we present the item characteristics for each panel in separate tables (3A–C). Reported are the mean, standard deviation, skewness, kurtosis, and item-total correlation (selectivity) for each of the nine questions. The item-total correlation (selectivity) was calculated for each subscale (i.e., enjoyment, value, and burden) separately: for questions E1 to E3 the item-total correlation was calculated with enjoyment, for questions V1 to V3 with value, and for questions B1 to B3 with burden.

    Table 3A GESIS Panel

    Means, Standard Deviations, Skewness, Kurtosis, and Selectivity (item-total correlations) of the Manifest Items

    | Item | Mean | Standard Deviation | Skewness | Kurtosis | Selectivity |
    |------|------|--------------------|----------|----------|-------------|
    | E1 | 4.50 | 1.58 | -0.24 | -0.63 | .64 |
    | E2 | 3.86 | 1.69 | 0.09 | -0.86 | .61 |
    | E3 | 4.99 | 1.39 | -0.54 | -0.20 | .61 |
    | V1 | 5.40 | 1.29 | -0.64 | 0.03 | .75 |
    | V2 | 5.55 | 1.20 | -0.79 | 0.53 | .74 |
    | V3 | 2.12 | 1.25 | 1.46 | 2.18 | .46 |
    | B1 | 2.84 | 1.58 | 0.75 | -0.15 | .39 |
    | B2 | 2.47 | 1.42 | 0.97 | 0.41 | .39 |
    | B3 | 2.85 | 1.63 | 0.66 | -0.52 | .40 |

    Note. Scale ranging from 1 to 7, listwise N = 3,655.

     

    Table 3B LISS Panel

    Means, Standard Deviations, Skewness, Kurtosis, and Selectivity (item-total correlations) of the Manifest Items

    | Item | Mean | Standard Deviation | Skewness | Kurtosis | Selectivity |
    |------|------|--------------------|----------|----------|-------------|
    | E1 | 4.87 | 1.39 | -0.36 | -0.24 | .71 |
    | E2 | 4.14 | 1.57 | -0.14 | -0.61 | .60 |
    | E3 | 5.06 | 1.27 | -0.50 | 0.08 | .66 |
    | V1 | 5.44 | 1.16 | -0.70 | 0.53 | .70 |
    | V2 | 4.42 | 1.14 | -0.72 | 0.61 | .70 |
    | V3 | 2.30 | 1.33 | 1.24 | 1.26 | .48 |
    | B1 | 3.28 | 1.67 | 0.38 | -0.68 | .36 |
    | B2 | 2.58 | 2.58 | 0.84 | 0.00 | .33 |
    | B3 | 3.44 | 3.44 | 1.66 | -0.86 | .37 |

    Note. Scale ranging from 1 to 7, listwise N = 6,802.

     

    Table 3C PPSM Panel

    Means, Standard Deviations, Skewness, Kurtosis, and Selectivity (item-total correlations) of the Manifest Items

    | Item | Mean | Standard Deviation | Skewness | Kurtosis | Selectivity |
    |------|------|--------------------|----------|----------|-------------|
    | E1 | 2.38 | 1.65 | 1.05 | 0.16 | .52 |
    | E2 | 3.22 | 1.66 | 0.29 | -0.79 | .64 |
    | E3 | 4.35 | 1.75 | -0.32 | -0.76 | .57 |
    | V1 | 4.92 | 1.65 | -0.63 | -0.29 | .65 |
    | V2 | 5.12 | 1.60 | -0.76 | -0.06 | .65 |
    | V3 | 2.87 | 1.73 | 0.66 | -0.50 | .35 |
    | B1 | 2.72 | 1.89 | 0.90 | -0.33 | .33 |
    | B2 | 2.62 | 1.75 | 0.87 | -0.27 | .45 |
    | B3 | 3.12 | 1.93 | 0.51 | -0.94 | .42 |

    Note. Scale ranging from 1 to 7, listwise N = 5,838.
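Statistics of this kind can be computed from raw item scores as sketched below. Moment-based skewness and excess kurtosis are assumed, and selectivity is read here as the item's correlation with its subscale sum; whether the published values use the corrected (item-rest) variant is not stated above.

```python
from math import sqrt


def moments(x):
    """Mean, SD, skewness, and excess kurtosis of a list of scores."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    var = sum(d * d for d in dev) / n            # population variance
    sd = sqrt(var)
    skew = (sum(d ** 3 for d in dev) / n) / sd ** 3
    kurt = (sum(d ** 4 for d in dev) / n) / var ** 2 - 3  # excess kurtosis
    return mean, sd, skew, kurt


def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)


def selectivity(item, others):
    """Item-total correlation of `item` with its subscale sum
    (the total includes the item itself, one plausible reading)."""
    totals = [i + sum(rest) for i, rest in zip(item, zip(*others))]
    return pearson(item, totals)
```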

    Objectivity

    The Survey Attitude Scale was developed for scientific research into survey climate and nonresponse, not for individual diagnostic purposes. It is not a personality test or any other diagnostic instrument. Hence, objectivity in the evaluation and interpretation of individual scores, and norm tables, do not apply. Our overarching goal was to develop and validate a concise and effective measure of feelings about surveys and about responding to surveys that can be successfully implemented in cross-sectional surveys, longitudinal studies, and respondent panels to investigate survey climate and the willingness to participate in surveys, in order to improve survey effectiveness.

    There are several factors supporting objectivity. First of all, a standardized questionnaire format, including response categories and a brief instruction, is used for the Survey Attitude Scale. Second, the questionnaire was developed to be mode-independent to ensure successful implementation in face-to-face, telephone, paper mail, and web survey modes. We followed the rules for uni(fied) or omni mode design as described by Dillman, Smyth, and Christian (2014, chapter 11) and by Schouten, van den Brakel, Buelens, Giesen, Luiten, and Meertens (2022, section 6.6). Third, a cognitive pretest of the Dutch version was conducted, which indicated that respondents understood the instruction, question text, and response format as intended (see also section Application field). Furthermore, statistical analysis of the GESIS data, in which both the paper mail (offline) and web (online) modes were used, showed scalar measurement equivalence, which means that the survey mode (online vs. offline) did not affect the measurement model. Finally, it was investigated whether the Dutch and German versions of the Survey Attitude Scale had measurement equivalence. Multigroup confirmatory factor analysis showed that measurement equivalence indeed held cross-culturally between the Netherlands and Germany (see section Further quality criteria). In sum, the Survey Attitude Scale has objectivity for mode and cross-cultural implementation. The scores can be calculated directly and objectively; no interpretation from the investigator is needed.

     

    Reliability

    As an indicator of reliability, we calculated McDonald’s coefficient Omega (McDonald, 1999, p. 89) for each subscale and for the total scale using the software Factor (Lorenzo-Seva and Ferrando, 2013). Coefficient Omega gives a lower bound for the reliability and can be interpreted as the proportion of “true” score variance in the observed scores. It is similar to Cronbach’s coefficient Alpha, but requires weaker assumptions. If the assumptions for coefficient Alpha are met, Omega and Alpha are equal. Table 4 presents the coefficient omega for all subscales and the total scale, with coefficient Alpha in parentheses.

    Table 4

    Reliability of Survey Attitude (Sub-)Scales: Coefficient Omega (Coefficient Alpha)

     

    |                        | LISS      | PPSM      | GESIS     |
    |------------------------|-----------|-----------|-----------|
    | Enjoyment              | .82 (.80) | .76 (.75) | .79 (.78) |
    | Value                  | .81 (.78) | .77 (.72) | .83 (.79) |
    | Burden                 | .54 (.54) | .60 (.59) | .59 (.58) |
    | Global survey attitude | .81 (.80) | .78 (.78) | .81 (.80) |

    Note. Reliability indicated by McDonald's Omega, with Cronbach's Alpha in parentheses.

     Four main conclusions can be drawn from Table 4. Firstly, the two reliability coefficients are highly similar across the three panels. Secondly, two of the three subscales have good reliability for such short scales; only the subscale “burden” has a relatively low reliability. Thirdly, combining the three subscales into one global attitude scale is not worthwhile: the reliability does not increase and using the subscales as separate predictors in further analyses is more informative. Finally, the estimates for coefficient Omega and Alpha are very close, which implies that the assumptions underlying the use of coefficient Alpha are met. This is important since this justifies using simple sum scores for the scales.
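    The close agreement between coefficient Omega and coefficient Alpha can be illustrated numerically. The sketch below uses made-up loadings for a hypothetical 3-item subscale (not the published panel estimates): it computes Omega from standardized factor loadings and Alpha from the model-implied covariance matrix, and shows that the two coincide exactly when all loadings are equal (tau-equivalence).

```python
import numpy as np

def mcdonald_omega(loadings):
    """Omega for a one-factor model with standardized loadings:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam**2            # unique variances under standardization
    common = lam.sum()**2
    return common / (common + errors.sum())

def cronbach_alpha(cov):
    """Alpha from an item covariance matrix: k/(k-1) * (1 - trace/total)."""
    cov = np.asarray(cov, dtype=float)
    k = cov.shape[0]
    return k / (k - 1) * (1.0 - np.trace(cov) / cov.sum())

# Illustrative loadings for a 3-item subscale (assumed, not from the paper)
lam = np.array([0.8, 0.7, 0.6])
implied_cov = np.outer(lam, lam)     # model-implied item covariances
np.fill_diagonal(implied_cov, 1.0)   # standardized item variances

omega = mcdonald_omega(lam)          # ~ .745
alpha = cronbach_alpha(implied_cov)  # ~ .740, slightly below omega

# With equal loadings (tau-equivalence), omega and alpha coincide
lam_eq = np.array([0.7, 0.7, 0.7])
cov_eq = np.outer(lam_eq, lam_eq)
np.fill_diagonal(cov_eq, 1.0)
```

    When loadings are unequal, Alpha is a lower bound to Omega; near-equal values, as observed in Table 4, therefore suggest that the tau-equivalence assumption is reasonable and simple sum scores are justified.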

    In sum, the anticipated three-factor structure fitted the data well across the three panels and the reliability of the three subscales was sufficient.

     

    Validity

    Construct validity

    There are indications for the construct validity of the survey attitude scale. During the recruitment interview for the PPSM panel, respondents were asked about their past survey behaviour and the reason why they had cooperated. Potential reasons for cooperation were rated on a 7-point scale. The correlations between the survey attitude subscales and the reason for cooperation are summarized in Table 5.

    Table 5

    Correlations between Survey Attitude Scales and Reasons for Previous Survey Participation Questions: PPSM Panel

     

    |                                    | Enjoyment | Value | Burden  |
    |------------------------------------|-----------|-------|---------|
    | General willingness                | .58       | .41   | -.25    |
    | Interesting topic                  | .25       | .25   | -.12    |
    | Have something to say on the topic | .28       | .28   | -.13    |
    | Cannot say no to request           | -.19      | -.15  | .15     |
    | Survey is scientific               | .09       | .17   | -.02 ns |
    | Want to help                       | .09       | .16   | -.02 ns |

    Note. All correlations significant at p < .05 unless marked ns.

    The correlations were in the expected directions. For instance, persons who scored high on general willingness to cooperate also scored high on survey enjoyment (r = .58), relatively high but slightly lower on survey value (r = .41), and clearly did not see surveys as a burden (r = -.25). Similar patterns were seen for persons who thought the topic was interesting and felt they had something to say on the topic, while persons who said that they just could not say "no" to a request scored low on survey enjoyment (r = -.19), low on survey value (r = -.15), and high on survey burden (r = .15). Finally, persons who emphasized the scientific nature of the survey as a reason to cooperate, or who were more altruistic, scored high only on survey value (r = .17 and r = .16, respectively).

    All three panels asked the same three evaluation questions about the survey; for the LISS and the GESIS panel, these were asked at the end of the welcome survey, for the PPSM panel at the end of the recruitment interview. The questions were based on the standard evaluation questions at the end of each LISS questionnaire: respondents were asked whether they thought the topic was interesting, which gives an indication of saliency; whether the questions were difficult to answer, where a negative evaluation indicates burden; and whether the questionnaire got them thinking about things, which can be viewed as a generally positive evaluation of the survey (Schonlau, 2015). The correlations between these survey evaluation questions and the survey attitude subscales for the three panels are presented in Table 6.

    Table 6

    Correlations Between Survey Attitude Scales and Survey Evaluation Questions for Three Panels: GESIS, LISS and PPSM Panel

     

    |                        | Enjoyment | Value | Burden  |
    |------------------------|-----------|-------|---------|
    | Interesting (GESIS)    | .28       | .24   | -.13    |
    | Interesting (LISS)     | .38       | .32   | -.16    |
    | Interesting (PPSM)     | .27       | .29   | -.14    |
    | Difficult (GESIS)      | -.08      | -.09  | .17     |
    | Difficult (LISS)       | -.12      | -.07  | .11     |
    | Difficult (PPSM)       | -.03 ns   | -.09  | .10     |
    | Makes me think (GESIS) | .15       | .16   | -.02 ns |
    | Makes me think (LISS)  | .26       | .22   | -.05    |
    | Makes me think (PPSM)  | .16       | .16   | -.03 ns |

    Note. All correlations significant at p < .05 unless marked ns.

    Although the absolute values of the correlations differ, all three panels showed the same pattern in the correlation matrix. The correlations between the survey attitude subscales and the evaluation of the survey were in the expected directions for all three panels. Respondents who scored high on survey enjoyment and value and did not see surveys as a burden rated the topic of the survey as interesting. Conversely, respondents who scored high on survey burden and did not value or enjoy surveys rated the questions as difficult. Finally, respondents who scored high on survey enjoyment and value more often stated that the questionnaire got them thinking about things, while there was no clear relation with survey burden.

    In sum, there are indications for construct validity. The survey attitude scales were related both to reasons why one had cooperated in previous research and to survey evaluation.

     

    Predictive validity

    There are indications for the predictive validity of the survey attitude scale. A previous study involving the Dutch CentER panel, an online panel established in 1991, used logistic regression to predict nonresponse from March 2007 until August 2008 (de Leeuw et al., 2010). Survey enjoyment, value, and burden all predicted panel nonresponse. The effects were small but significant and in the expected direction, with survey enjoyment as the strongest predictor (B = -.13 for enjoyment, B = -.02 for value, and B = .06 for burden).

    During the recruitment interview for the LISS panel, one question from the survey value subscale was asked: “V1: Surveys are important for society.” At the end of the recruitment interview, respondents were asked if they were willing to become a panel member. The correlation between this question on survey value and the stated willingness to participate in the panel is .24. The correlation between survey value and active panel membership (defined as completing the first self-administered online panel questionnaire) was slightly lower: r = .18. Both correlations were significant at p < .01 (de Leeuw, Hox, Scherpenzeel, and Vis, 2008).

    At the end of the recruitment interview for the PPSM panel, respondents were asked if they were willing to be surveyed again. The correlations between willingness and the three survey attitude subscales were all significant (p < .01) and in the expected direction: .31 between survey enjoyment and willingness to participate, .24 between survey value, and willingness, and -.20 between survey burden and willingness.

    Finally, for the GESIS panel, the correlations between the survey attitude subscales and participation in the very next panel wave were low but significant and in the expected direction: .04 for survey enjoyment, .05 for survey value, and -.05 for survey burden (all p < .01).

    Summing up, the three subscales predicted stated willingness to participate and actual participation consistently, which is in line with the findings of Rogelberg et al. (2001), who reported that indicators for survey enjoyment and survey value were both positively related to stated willingness to complete telephone, in-person, and mail surveys.

     

    Descriptive statistics and Scaling

    Scores for the subscales are computed as the mean of the available (recoded) items; note that V3 ("surveys are a waste of time") must be recoded. When calculating the mean, at least two items must have valid values; otherwise, the subscale score is set to missing.
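    As a minimal sketch, the scoring rule can be implemented as follows. The item labels (E1-E3, V1-V3, B1-B3), the reverse-coding of V3, the 1-7 response range, and the two-valid-items rule follow the documentation; the function name and the dictionary input format are illustrative assumptions.

```python
# Subscale scoring for the Survey Attitude Scale: mean of available items,
# with V3 ("surveys are a waste of time") reverse-coded on the 1-7 scale.
SUBSCALES = {
    "enjoyment": ["E1", "E2", "E3"],
    "value": ["V1", "V2", "V3"],
    "burden": ["B1", "B2", "B3"],
}
REVERSED = {"V3"}  # recoded as 8 - x on a 7-point scale
SCALE_MAX = 7

def subscale_score(responses, subscale):
    """Mean of valid (recoded) items; None (missing) if fewer than 2 valid."""
    values = []
    for item in SUBSCALES[subscale]:
        x = responses.get(item)
        if x is None:
            continue  # treat absent/None responses as missing values
        values.append((SCALE_MAX + 1 - x) if item in REVERSED else x)
    if len(values) < 2:
        return None  # at least two valid items required for a scale score
    return sum(values) / len(values)

# Example: V3 = 6 is recoded to 2, so value = (5 + 4 + 2) / 3
person = {"V1": 5, "V2": 4, "V3": 6, "E1": 3}
value_score = subscale_score(person, "value")        # 11/3
enjoy_score = subscale_score(person, "enjoyment")    # None: one valid item
```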

    Table 7 provides the means, standard deviations, skewness, and kurtosis for the three subscales, for each of the panels. The survey attitude scales are not intended for diagnostic purposes at the individual (person) level; therefore, there are no standardized or normed scores.

    Table 7

    Means, Standard Deviations, Skewness, and Kurtosis of the Scale Scores

    | Panel             | Subscale  | Mean | SD   | Skewness | Kurtosis |
    |-------------------|-----------|------|------|----------|----------|
    | GESIS (N = 3,847) | Enjoyment | 3.83 | 0.97 | -0.03    | -0.47    |
    |                   | Value     | 4.70 | 0.79 | -0.69    | 0.34     |
    |                   | Burden    | 2.54 | 0.85 | 0.85     | 0.09     |
    | LISS (N = 6,803)  | Enjoyment | 4.69 | 1.20 | -0.20    | -0.19    |
    |                   | Value     | 5.52 | 1.01 | -0.73    | 0.75     |
    |                   | Burden    | 3.10 | 1.16 | 0.22     | -0.19    |
    | PPSM (N = 6,080)  | Enjoyment | 3.32 | 1.38 | 0.28     | -0.48    |
    |                   | Value     | 5.05 | 1.34 | -0.56    | 0.08     |
    |                   | Burden    | 2.82 | 1.38 | 0.60     | -0.17    |

    Note. Scale scores range from 1 to 7.
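    The skewness and kurtosis values reported in Table 7 follow the usual moment-based definitions, with kurtosis as excess kurtosis (zero for a normal distribution, matching the near-zero values in the table). A minimal sketch of these estimators, on illustrative data rather than the panel responses:

```python
import numpy as np

def describe(x):
    """Mean, SD, skewness, and excess kurtosis (moment-based estimators)."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std()                      # population SD (divides by n)
    z = (x - m) / s                  # standardized scores
    skewness = (z**3).mean()         # third standardized moment
    kurtosis = (z**4).mean() - 3.0   # excess kurtosis: 0 for a normal curve
    return m, s, skewness, kurtosis

# A symmetric, flat example: zero skewness, negative excess kurtosis
m, s, sk, ku = describe([1, 2, 3, 4, 5, 6, 7])
```

    Note that statistical packages often apply small-sample bias corrections to these moments, so their output can differ slightly from this plain-moment version.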

     

     Further quality criteria

    Since there is a Dutch and a German version, it is important to investigate whether there is measurement equivalence between these two versions. We used Multigroup Confirmatory Factor Analysis (MG-CFA) to test hypotheses concerning measurement equivalence between groups. If the factor loadings are invariant across all groups, there is metric equivalence (Vandenberg and Lance, 2000). If, in addition, all intercepts are invariant, there is scalar equivalence. Although the ideal situation is achieving complete scalar measurement invariance across all groups, in practice a small amount of variation is acceptable, which leads to partial measurement invariance (Byrne, Shavelson, and Muthén, 1989; Steenkamp and Baumgartner, 1998).

    It should be noted that the GESIS panel uses two modes: online and offline (paper mail). Prior to comparing the panels, a MG-CFA with two groups was used to test if there is measurement equivalence between the two modes. Specifying full scalar measurement equivalence led to an excellent model fit (χ2 (df = 58) = 169.3, CFI = .99, TLI = .98, RMSEA = .03). Thus, the survey mode (online vs. offline) did not affect the measurement model.
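    The reported fit can be spot-checked from the chi-square. The sketch below uses one common RMSEA formula (Steiger's point estimate, with the square-root-of-number-of-groups scaling used by several SEM packages) and assumes N of roughly 3,847, the GESIS sample size reported in Table 7; this is an illustrative recomputation under those assumptions, not the authors' exact software output.

```python
from math import sqrt

def rmsea(chi2, df, n, n_groups=1):
    """Root Mean Square Error of Approximation from a model chi-square.
    Point estimate sqrt(max(chi2 - df, 0) / (df * n)), scaled by
    sqrt(n_groups) as in the multigroup convention of several SEM packages
    (conventions differ slightly across software; this is an assumption)."""
    return sqrt(n_groups) * sqrt(max(chi2 - df, 0.0) / (df * n))

# Two-group (online vs. offline) GESIS model: chi2(df = 58) = 169.3
fit = rmsea(169.3, 58, 3847, n_groups=2)   # ~ .03, matching the report
```

    A model whose chi-square does not exceed its degrees of freedom yields an RMSEA of exactly zero, which is why the max(., 0) truncation appears in the formula.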

    Measurement equivalence testing using MG-CFA with three groups (GESIS, LISS, and PPSM) revealed partial scalar equivalence. All loadings could be constrained equal across all three panels. There was complete scalar equivalence between the GESIS and the LISS panel, which are both self-administered. In the PPSM model, the intercepts of E1 and V3 had to be estimated separately, indicating partial scalar equivalence for the PPSM, where the data for the survey attitude scale were collected by telephone interviews. With these two modifications, the model fitted well (χ2 (df = 92) = 1590.2, CFI = .96, TLI = .95, RMSEA = .05).

    Table 8 presents the unstandardized factor loadings for the GESIS, LISS, and PPSM panels, with values constrained equal across the three panels (measurement equivalence model). A second-order model with a general factor underlying the factors enjoyment, value, and burden, specifying full scalar equivalence for the second-order general factor, fitted less well (χ2 (df = 98) = 2119.8, CFI = .94, TLI = .94, RMSEA = .06), but was still acceptable. A model that constrained the variances and covariances to be equal across all three panels also fitted less well (χ2 (df = 104) = 2287.3, CFI = .94, TLI = .94, RMSEA = .06), but was still acceptable. The constrained model permits estimating a single set of correlations between the factors. These correlations were .59 between enjoyment and value, -.44 between enjoyment and burden, and -.36 between value and burden. They indicate sufficient discrimination between the three factors, which makes it inadvisable to combine the three subscales into a single summated score.

    Table 8

    Factor Loadings (Unstandardized) of the Survey Attitude Scale for the GESIS, LISS, and PPSM Panels (Measurement Equivalence Model with Factor Loadings Constrained Equal)

    |    | Enjoyment    | Value        | Burden       |
    |----|--------------|--------------|--------------|
    | E1 | 1.00 (fixed) |              |              |
    | E2 | 1.00         |              |              |
    | E3 | 0.62         | 0.42         |              |
    | V1 |              | 1.00 (fixed) |              |
    | V2 |              | 0.88         |              |
    | V3 |              | -0.36        | 0.76         |
    | B1 |              |              | 1.00 (fixed) |
    | B2 |              |              | 1.12         |
    | B3 |              |              | 1.22         |

    Note. The values are based on the unstandardized solution.

    In sum, measurement equivalence was found cross-culturally between the Netherlands and Germany. Furthermore, for the German GESIS panel measurement equivalence was also established between the online mode and the paper mail mode.

     

    Further literature

    The scale was first published in the journal Measurement Instruments for the Social Sciences (2019): https://measurementinstrumentssocialscience.biomedcentral.com/articles/10.1186/s42409-019-0012-x

     

    Acknowledgement

    The authors thank Remco Feskens for his help in developing the German version of the survey attitude scale; Simone Bartsch, Uwe Engel, Anja Göritz, and Helen Lauff for implementing the first German version; Annette Scherpenzeel for implementing the Dutch version; and Peter Lugtig and Don Dillman, the editors, and two anonymous reviewers of Measurement Instruments for the Social Sciences for their helpful comments. We also thank Katherina Groskurth from ZIS, GESIS, for her work and help in making this excerpt.