Open Access Repository for Measurement Instruments

German Version of the Extended DCA-Transparency Scale

  • Author: Hossiep, C. R., Märtins, J. & Schewe, G.
  • In ZIS since: 2022
  • DOI: https://doi.org/10.6102/zis314
  • Abstract: Transparency describes the quality of information perceived by an information recipient. The construct is increasingly gaining attention in scientific and practical contexts. The dimensions Disclosure, Clarity, and Accuracy as well as Timeliness and Relevance form individuals’ transparency perceptions. Based on these dimensions, a German questionnaire with 20 items (four items per dimension) is presented. This five-dimensional scale shows high internal consistency (> .88), and indicators for objectivity, reliability, and validity (content, criterion, and construct validity) are available. The validation study indicates that the transparency dimensions differ with regard to their relationship to trustworthiness, trust, and job satisfaction. In addition, first investigations show that the scale can also be applied in contexts outside the organization.
  • Language Documentation: English
  • Language Items: German
  • Number of Items: 20
  • Survey Mode: CASI
  • Processing Time: approximately 2 minutes (based on two waves of Sample 5)
  • Reliability: McDonald’s omega >= .87
  • Validity: content, criterion, and construct validity
  • Construct: organizational transparency
  • Catchwords: information quality, disclosure, clarity, accuracy, timeliness, relevance
  • Item(s) used in Representative Survey: yes
  • URL Data archive: https://doi.org/10.1026/0932-4089/a000360
  • Status of Development: validated
    • Instruction

      Respondents are given the following instructions in German:

      Bitte beziehen Sie die folgenden Aussagen auf Ihre aktuelle berufliche Tätigkeit und geben Sie an, inwiefern Sie den folgenden Aussagen zustimmen. Es gibt keine richtigen oder falschen Antworten.

      (Please relate the following statements to your current occupation and indicate the extent to which you agree with them. There are no right or wrong answers.)

       

      Items

      Table 1

      Items of the Extended DCA-Transparency Scale

      No. | Item | Dimension
      D1 | Die Informationen, die ich von meiner Führungskraft erhalte, umfassen vollständig das, was ich wissen möchte. (The information I receive from my supervisor fully encompasses what I want to know about.) | Disclosure
      D2 | Die Informationen, die ich von meiner Führungskraft erhalte, decken alle Themen ab, über die ich etwas wissen möchte. (The information I receive from my supervisor covers all the topics I want to know about.) | Disclosure
      D3 | Ich habe alle Informationen, die ich von meiner Führungskraft brauche. (I have all the information I need from my supervisor.) | Disclosure
      D4 | Eine ausreichende Menge an Informationen wird von meiner Führungskraft vorgelegt. (A sufficient amount of information is presented by my supervisor.) | Disclosure
      C1 | Die von meiner Führungskraft bereitgestellten Informationen sind verständlich. (The information presented by my supervisor is understandable.) | Clarity
      C2 | Die Informationen von meiner Führungskraft sind eindeutig. (The information from my supervisor is clear.) | Clarity
      C3 | Die Informationen von meiner Führungskraft sind nachvollziehbar. (The information from my supervisor is comprehensible.) | Clarity
      C4 | Die Informationen von meiner Führungskraft sind für mich sprachlich verständlich ausgedrückt. (The information from my supervisor is presented in a language I understand.) | Clarity
      A1 | Die Informationen von meiner Führungskraft scheinen wahr zu sein. (The information from my supervisor appears to be true.) | Accuracy
      A2 | Die Informationen von meiner Führungskraft erscheinen korrekt. (The information from my supervisor appears correct.) | Accuracy
      A3 | Die Informationen von meiner Führungskraft erscheinen zutreffend. (The information from my supervisor appears accurate.) | Accuracy
      A4 | Die Informationen von meiner Führungskraft erscheinen richtig. (The information from my supervisor appears right.) | Accuracy
      T1 | Die Informationen von meiner Führungskraft kommen mit ausreichend Vorlauf. (The information from my supervisor is provided ahead of time.) | Timeliness
      T2 | Die Informationen von meiner Führungskraft sind rechtzeitig. (The information from my supervisor is timely.) | Timeliness
      T3 | Die Informationen von meiner Führungskraft kommen nicht kurz vor knapp. (The information from my supervisor is not provided at the last minute.) | Timeliness
      T4 | Die Informationen von meiner Führungskraft kommen zeitlich angemessen. (The information from my supervisor comes at an appropriate time.) | Timeliness
      R1 | Das Thema, über das mich meine Führungskraft informiert, hat eine hohe Bedeutung für mich. (The topic my supervisor informs me about is of great importance to me.) | Relevance
      R2 | Das Thema, über das mich meine Führungskraft informiert, ist mir wichtig. (I care about the topic my supervisor informs me about.) | Relevance
      R3 | Das Thema, über das mich meine Führungskraft informiert, ist für mich relevant. (The topic my supervisor informs me about is relevant for me.) | Relevance
      R4 | Das Thema, über das mich meine Führungskraft informiert, liegt mir am Herzen. (The topic my supervisor informs me about is close to my heart.) | Relevance

      Note. In the surveys of the validation study, transparency was measured with regard to the interviewees’ supervisor. However, other information senders can also be used, as shown by initial applications such as in the context of technology acceptance (Oldeweme et al., 2021).

       

      Response specifications

      Respondents rate the items on a 5-point Likert scale on which only the two poles are labelled with 1 = "strongly disagree" and 5 = "strongly agree".

       

      Scoring

      All items are positively worded. It is recommended to treat organizational transparency as a higher-order construct; accordingly, the dimensions should also be analysed separately. Unweighted means are calculated for the individual dimensions. In the case of missing values, participants who responded to at least three of the four items of a dimension can be included; if fewer items were answered, the corresponding cases should be excluded list-wise. Furthermore, dimensions that have been answered completely can still be evaluated even if other dimensions cannot be analysed. However, it must be taken into account that facets of transparency are neglected in this way.
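      The scoring rule can be illustrated with a short R sketch. The data frame df and the item column names D1–R4 are hypothetical and not part of the original documentation; dimension scores are unweighted item means, computed only for respondents who answered at least three of the four items of a dimension.

```r
# Minimal scoring sketch (hypothetical item names D1-D4, C1-C4, A1-A4, T1-T4, R1-R4).
dims <- list(
  disclosure = c("D1", "D2", "D3", "D4"),
  clarity    = c("C1", "C2", "C3", "C4"),
  accuracy   = c("A1", "A2", "A3", "A4"),
  timeliness = c("T1", "T2", "T3", "T4"),
  relevance  = c("R1", "R2", "R3", "R4")
)

score_dimension <- function(data, items, min_valid = 3) {
  x <- data[, items]
  n_valid <- rowSums(!is.na(x))
  score <- rowMeans(x, na.rm = TRUE)
  score[n_valid < min_valid] <- NA   # exclude respondents with fewer than three answered items
  score
}

scores <- as.data.frame(lapply(dims, score_dimension, data = df))
# Overall transparency as the unweighted mean of the five dimension scores
scores$overall <- rowMeans(scores, na.rm = FALSE)
```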

       

      Application field

      The developed instrument measures an individual’s perception of the quality of a sender’s shared information in organizational settings (Schnackenberg et al., 2020). Nevertheless, the deliberately open wording of the items makes it possible to adapt them to other fields in which individuals’ perception of the quality of information shared by a sender is to be measured. As specified in the note of Table 1, the information sender can be replaced depending on the context of application and might be an organization, a group or, for example, a specific technology (Oldeweme et al., 2021). Depending on the area of application and the accessibility of the target group, the measurement instrument can be used as a paper-and-pencil or online questionnaire. In the validation study, online questionnaires were used to collect the data. The average processing time of the instrument is about two minutes (based on two waves of Sample 5).

    In early transparency research, no consensus was reached on the conceptualization of the construct, which is why inconsistent operationalizations were used. The understanding of transparency was usually limited to the amount of shared information and was thus predominantly measured with unidimensional scales referring mainly to the quantity of shared information. Nevertheless, these scales differed markedly in terms of the number of items, the dimensionality (if any), and the focus or specificity of the scales (Schnackenberg & Tomlinson, 2016). The use of these non-standardized scales can be seen as one reason for the previously unclear empirical results regarding assumed relationships between transparency and theoretically well-established outcomes such as trust and job satisfaction. To address this issue, Schnackenberg and Tomlinson (2016) elaborated a definition and dimensions of the construct based on an extensive literature review. Following this widely adopted definition, transparency “is the perceived quality of intentionally shared information from a sender” (Schnackenberg & Tomlinson, 2016, p. 1788).

    The construct and the corresponding scale encompass the dimensions of disclosure, clarity, and accuracy (Schnackenberg et al., 2020) as well as the facets of timeliness and relevance of shared information (Hossiep et al., 2021). The transparency scale presented here thus reflects the dimensions of disclosure, clarity, accuracy, timeliness, and relevance of shared information (Hossiep et al., 2021; Schnackenberg et al., 2020). Disclosure refers to the perception that sufficient information is shared by a sender (Schnackenberg & Tomlinson, 2016). Clarity indicates whether the shared information is perceived as comprehensible and can be understood as a measure of the congruence between the intended and the understood meaning of shared information (Schnackenberg & Tomlinson, 2016). Accuracy refers to the degree to which the shared information is perceived to be correct and unbiased at the moment of information sharing (Schnackenberg & Tomlinson, 2016). Timeliness concerns the right moment of information sharing; information shared too early or too late loses perceived value. For example, very accurate information that is shared too late might be useless for the information receiver. Finally, relevance is the degree of personal importance (professional and/or private) of the disclosed information (Hossiep et al., 2021).

    The five dimensions can be perceived differently by different information receivers. For example, employees may be more or less familiar with a technical language, so perceptions of the clarity of shared information may differ between employees. Thus, transparency is about the degree of perceived transparency rather than its mere existence (Albu & Flyverbom, 2019). The multidimensional view of transparency enables researchers to gain a deeper understanding of the effects of transparency and offers practitioners the possibility to derive well-founded recommendations for action in order to strengthen the perception of transparency, for instance within an organization. For example, disclosure, timeliness, and relevance were found to have a positive effect on job satisfaction, while the other dimensions showed no significant effect.

    Item generation and selection

    The items of the dimensions disclosure, clarity, and accuracy were developed by Schnackenberg et al. (2020) and translated into German by Hossiep et al. (2021). We chose a team translation approach (Harkness, 2003): the translation was carried out independently by three persons and discussed afterwards. All three were native German speakers and fluent in English, and two of them were experienced in organizational research. Most of the item translations did not differ substantially; in particular, there were no meaningful differences in content. The joint discussion therefore focused on the consistent use of tenses and terms. The final selection of the item translations was based on the understandability of the items and their linguistic fit to each other; the selection was made in consensus by all persons involved. The selected items were then presented to three uninvolved persons to check the comprehensibility of the questionnaire; no changes were made. In addition, the German translation was compared with the English original by a bilingual person; again, no changes were necessary. Furthermore, items were developed for the two new dimensions timeliness and relevance. These items were derived from the corresponding definitions of the dimensions. Special attention was paid to the wording of the items so that they cover all relevant aspects of the dimensions and at the same time match the existing items in terms of phrasing, length, and polarity. These items were also checked to ensure that they could be understood by uninvolved persons.

     

    Samples

    The expanded and translated DCA-Transparency Scale was tested using six samples. Five samples were collected in 2019 and one additional sample at the beginning of 2021. All participants spoke German at a native level and were in non-self-employed occupation. This is important because participants needed to fully understand the items and required relevant professional experience to assess their supervisor’s transparency. The first (N = 325) and second sample (N = 322) served to test the factorial structure of the translated and extended items. Participants were recruited through personal contacts and participation was not compensated. The third (N = 133) and fourth sample (N = 120) were used to quantitatively test the distinctiveness from orbiting scales and the distinctiveness between the transparency dimensions for content validity. Both samples were recruited via the online crowd-sourcing platform Prolific, participants were selected randomly, and participation was financially rewarded. A longitudinal research design was used for the fifth sample (N = 376), in which participants were surveyed in three waves: 540 people took part in the first wave, 447 in the second, and 376 in the final wave. The sample was representative of German employees in terms of age, gender, and gender distribution by scope of employment. The data collection of this sample was supported by the Leibniz Institute of Psychology in Trier, Germany. Participants were recruited by a suitable panel provider and were financially rewarded. To ensure representativeness, quotas were used and participants were rejected if the quotas were already met. Finally, Sample 6 (N = 114) was used to compare the transparency dimensions to existing, related scales. Again, participants were recruited on the online platform Prolific and were financially compensated; they were selected in an ad-hoc fashion. In Samples 1 and 2, missing values were possible and such cases were excluded list-wise; in all other samples respondents were not able to skip items. To ensure data quality, attention checks were used in all surveys, and further indicators such as processing time and answer patterns were considered. The characteristics of the samples are shown in Table 2.

     

    Table 2

    Characteristics of Samples

     

    Characteristic | Sample 1 (Factorial Structure) | Sample 2 (Factorial Structure) | Sample 3 (Content Validation) | Sample 4 (Content Validation) | Sample 5 (Criterion Validation) | Sample 6 (Construct Validation)
    Year | 2019 | 2019 | 2019 | 2019 | 2019 | 2021
    Size (N) | 325 | 322 | 133 | 120 | 376* | 114
    Mode | Online survey | Online survey | Online survey | Online survey | Online survey | Online survey
    Participant language | German | German | German | German | German | German
    Participant incentive | None | None | Financial | Financial | Financial | Financial
    Gender [% female] | 54 | 72 | 45 | 49 | 48 | 33
    Age [Mean] | 31 | 32 | 29 | 30 | 43 | 29

    Note. * Sample 5 represents a longitudinal research design consisting of three measurement points with t0 = 540, t1 = 447, and t2 = 376 participants.

     

    Item analyses

    To test whether the added items for timeliness and relevance form separate factors and whether the translated items show the same structure as the English original version, an exploratory factor analysis (maximum likelihood) was performed using Sample 1. A maximum-likelihood approach was chosen because, in contrast to other extraction methods, it is best suited to account for the common variance of the factors (Worthington & Whittaker, 2006). In addition, an oblique factor rotation (oblimin) was applied.

    The statistical software IBM SPSS 25 was used. Since one of the accuracy items (Accuracy 3) did not load on the accuracy factor, the survey was repeated with an alternative translation that had already been discussed during the translation process (Sample 2). For factor extraction, parallel analysis (Horn, 1965) was used and indicated five factors, which together explain 77% of the variance. Except for the corresponding accuracy item, no substantial differences between the samples emerged. The new translation of the accuracy item was then used in all further samples (Samples 2-6). Table 3 shows the pattern matrix. In addition to the exploratory factor analysis, a confirmatory factor analysis was performed with the R-based software JASP version 13, using the “auto” estimator, which yields results similar to the maximum likelihood estimator. The confirmatory factor analysis was based on a different sample (Sample 5). The model fit indicators confirm the suitability of the five transparency dimensions: χ2/df = 2.79, CFI = .971, TLI = .967, SRMR = .041, RMSEA = .058. The factor loadings of the confirmatory factor analysis are presented in Table 4.
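    The analysis steps described above can be approximated with the following R sketch using the psych and lavaan packages. This is an illustration only, not the authors’ original SPSS/JASP syntax; the data frame df and the item column names D1–R4 are hypothetical.

```r
# Illustrative re-analysis sketch (the original analyses were run in SPSS and JASP).
library(psych)   # parallel analysis and EFA
library(lavaan)  # CFA

items <- df[, c(paste0("D", 1:4), paste0("C", 1:4), paste0("A", 1:4),
                paste0("T", 1:4), paste0("R", 1:4))]

# Parallel analysis (Horn, 1965) to determine the number of factors
fa.parallel(items, fm = "ml", fa = "fa")

# Exploratory factor analysis: maximum likelihood, oblique (oblimin) rotation
# (oblimin rotation requires the GPArotation package)
efa <- fa(items, nfactors = 5, fm = "ml", rotate = "oblimin")
print(efa$loadings, cutoff = .2)

# Confirmatory factor analysis of the five-dimensional structure
model <- '
  disclosure =~ D1 + D2 + D3 + D4
  clarity    =~ C1 + C2 + C3 + C4
  accuracy   =~ A1 + A2 + A3 + A4
  timeliness =~ T1 + T2 + T3 + T4
  relevance  =~ R1 + R2 + R3 + R4
'
fit <- cfa(model, data = df)
fitMeasures(fit, c("chisq", "df", "cfi", "tli", "srmr", "rmsea"))
```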

     

    Table 3

    Pattern Matrix of the Exploratory Factor Analysis

     

    A priori interpretation | Factor 1: Accuracy | Factor 2: Timeliness | Factor 3: Relevance | Factor 4: Disclosure | Factor 5: Clarity
    Accuracy 2 | .917 (.961) | | | |
    Accuracy 1 | .855 (.791) | | | |
    Accuracy 4 | .788 (.915) | | | |
    Timeliness 4 | | -.910 (.891) | | |
    Timeliness 1 | | -.895 (.901) | | |
    Timeliness 2 | | -.878 (.902) | | |
    Timeliness 3 | | -.803 (.802) | | |
    Relevance 2 | | | .934 (.943) | |
    Relevance 4 | | | .796 (.812) | |
    Relevance 1 | | | .732 (.724) | |
    Relevance 3 | | | .705 (.520) | |
    Disclosure 2 | | | | .829 (.879) |
    Disclosure 1 | | | | .772 (.898) |
    Disclosure 3 | | -.213 | | .573 (.781) |
    Disclosure 4 | | | | .484 (.571) | .246
    Clarity 1 | | | | | .856 (.933)
    Clarity 2 | | | | | .739 (.591)
    Clarity 4 | | | | -.205 | .644 (.714)
    Clarity 3 | | | | | .485 (.565)
    Accuracy 3 (new) | .208 (.788) | | | | .446

    Note. Factor order taken from the first sample; factor loadings in parentheses are from the second sample; factor loadings smaller than .2 are not shown.

     

    Table 4

    Unstandardized and Standardized Factor Loadings of Confirmatory Factor Analysis

     

    Item | Unstandardized Factor Loading | Standardized Factor Loading
    Disclosure
    D1 | 1.048 | .924
    D2 | 0.985 | .887
    D3 | 0.986 | .893
    D4 | 0.942 | .873
    Clarity
    C1 | 0.801 | .857
    C2 | 0.829 | .819
    C3 | 0.796 | .812
    C4 | 0.643 | .735
    Accuracy
    A1 | 0.822 | .854
    A2 | 0.851 | .918
    A3 | 0.822 | .883
    A4 | 0.838 | .902
    Timeliness
    T1 | 1.026 | .905
    T2 | 0.975 | .910
    T3 | 0.929 | .817
    T4 | 1.007 | .907
    Relevance
    R1 | 0.835 | .845
    R2 | 0.934 | .931
    R3 | 0.726 | .827
    R4 | 0.932 | .833

    Note. The first wave of Sample 5 was used to perform the CFA.

     

    Finally, the correlations of the manifest transparency dimensions based on Sample 5 are shown in Table 5. The factor correlations are generally large, ranging from r = .521 to r = .758. The relevance factor correlates less strongly with the other factors (r = .521 to r = .597) than the remaining factors do with each other.

     

    Table 5

    Descriptives and Correlations of Transparency Dimensions

     

    Scale | Mean | SD | Skewness | Kurtosis | Disclosure | Clarity | Accuracy | Timeliness | Relevance
    Disclosure | 3.62 | 1.02 | -0.41 | -0.64 | .941 | | | |
    Clarity | 4.03 | 0.82 | -0.63 | -0.20 | .711*** | .880 | | |
    Accuracy | 4.02 | 0.86 | -0.72 | 0.11 | .682*** | .758*** | .938 | |
    Timeliness | 3.48 | 1.02 | -0.21 | -0.73 | .743*** | .658*** | .644*** | .935 |
    Relevance | 3.82 | 0.89 | -0.57 | -0.01 | .539*** | .597*** | .587*** | .521*** | .915
    Overall | 3.79 | 0.78 | -0.36 | -0.37 | | | | |

    Note. *** = p < .001; Cronbach’s alpha on the diagonal. The first wave of Sample 5 (N = 540) was used.

     

    Item parameters

    Table 6 reports the items’ means, standard deviations, skewness, and kurtosis. The items were measured on 5-point Likert scales.
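    Item-level descriptives such as those in Table 6 below could, for example, be reproduced with the psych package; the data frame df and the item column names are hypothetical.

```r
# Possible way to obtain item-level means, SDs, skewness, and kurtosis
# for a data frame of item responses (column names are hypothetical).
library(psych)
describe(df[, c(paste0("D", 1:4), paste0("C", 1:4), paste0("A", 1:4),
                paste0("T", 1:4), paste0("R", 1:4))])
```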

     

    Table 6

    Means, Standard Deviations, Skewness and Kurtosis of Items

     

    Item | Mean | Standard Deviation | Skewness | Kurtosis
    Disclosure 1 | 3.53 | 1.058 | -.295 | -.608
    Disclosure 2 | 3.59 | 1.011 | -.264 | -.770
    Disclosure 3 | 3.57 | 1.033 | -.325 | -.565
    Disclosure 4 | 3.63 | 1.009 | -.360 | -.580
    Clarity 1 | 3.95 | .944 | -.703 | .163
    Clarity 2 | 3.77 | 1.068 | -.633 | -.292
    Clarity 3 | 3.81 | 1.015 | -.686 | .076
    Clarity 4 | 4.11 | .883 | -.857 | .418
    Accuracy 1 | 3.96 | .944 | -.684 | -.048
    Accuracy 2 | 3.91 | .971 | -.638 | -.159
    Accuracy 3 | 3.92 | .938 | -.582 | -.252
    Accuracy 4 | 3.88 | .978 | -.571 | -.358
    Timeliness 1 | 3.35 | 1.109 | -.184 | -.702
    Timeliness 2 | 3.47 | 1.088 | -.255 | -.765
    Timeliness 3 | 3.34 | 1.129 | -.239 | -.731
    Timeliness 4 | 3.48 | 1.083 | -.383 | -.543
    Relevance 1 | 3.61 | 1.073 | -.428 | -.381
    Relevance 2 | 3.71 | 1.022 | -.391 | -.530
    Relevance 3 | 3.81 | .990 | -.575 | -.180
    Relevance 4 | 3.52 | 1.110 | -.360 | -.498

    Note. Scale ranging from 1 (strongly disagree) to 5 (strongly agree), N = 376 (Sample 5).

     

    Objectivity

    Objectivity of implementation is ensured by standardized instructions and the common 5-point Likert scales. Since, depending on the survey context, responses to the items may be sensitive, participants must be assured of anonymity. This means that no information may be collected that would allow direct conclusions to be drawn about an individual person. This is particularly important if, for example, the information behaviour of one’s own supervisor is to be examined. If anonymity is not given, socially desirable responses may occur in order to prevent potential negative consequences for oneself (Krumpal, 2013), which could distort the results. It is therefore necessary to mention in the instructions that there are no right or wrong answers and that participants should rely on their own perceptions. Objectivity of interpretation is ensured by the quantitative response options. For the evaluation, mean values of the individual dimensions and of overall transparency are formed. The perceived transparency of an information sender should be interpreted at the level of the individual dimensions, as this makes it possible to identify deviations from expectations. Regression analyses, structural equation models, or mean comparisons are conceivable for more complex analyses; here, the use of suitable statistical software is recommended. Depending on the field of application and the intended conclusions, a sufficiently large sample is required.

     

    Reliability

    Table 7 presents McDonald’s omega for the five transparency dimensions across three different samples. McDonald’s omega indicates high to very high reliability (conventional cut-off criterion > .70) for each subscale and is comparable across the samples used (Hayes & Coutts, 2020). This shows that the items of each dimension form internally consistent subscales.
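    As an illustration, McDonald’s omega per four-item subscale could be estimated along the following lines with the psych package; this is not the authors’ original analysis, and the data frame df with item columns D1–R4 is hypothetical.

```r
# One way to estimate McDonald's omega (omega total) for each four-item subscale.
library(psych)
dims <- list(
  disclosure = c("D1", "D2", "D3", "D4"),
  clarity    = c("C1", "C2", "C3", "C4"),
  accuracy   = c("A1", "A2", "A3", "A4"),
  timeliness = c("T1", "T2", "T3", "T4"),
  relevance  = c("R1", "R2", "R3", "R4")
)
sapply(dims, function(items) omega(df[, items], nfactors = 1, plot = FALSE)$omega.tot)
```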

     

    Table 7

    McDonald’s Omega of Organizational Transparency Dimensions

     

    Dimension | McDonald’s Omega Sample 1 | McDonald’s Omega Sample 2 | McDonald’s Omega Sample 5 (1st wave)
    Disclosure | .890 | .908 | .941
    Clarity | .865 | .875 | .882
    Accuracy | .896 | .946 | .938
    Timeliness | .877 | .940 | .935
    Relevance | .901 | .870 | .919

     

    Validity

    The focus in validating the translated and expanded scale was to replicate and extend the findings of the original study. The quantitative assessment of content validity in particular is often neglected in scale validation (Colquitt et al., 2019); it was therefore intended to add to the insights of Schnackenberg et al. (2020). To do so, an approach developed by Hinkin and Tracey (1999) and refined by Colquitt and colleagues (2019) was chosen. In this approach, individuals who were not involved in the process of scale development (i.e., naive judges) rate the fit of the items of the focal scale (transparency) and of orbiting scales to the definitions of all constructs on 7-point Likert scales. Based on these ratings, mean values of the focal items are calculated, and statistical mean comparisons are used to examine whether the items obtain their significantly highest mean value on their respective definition. If this is the case, it can be interpreted as an indication of content validity (Hinkin & Tracey, 1999; Colquitt et al., 2019). Colquitt and colleagues (2019) further developed indices to qualitatively interpret the degree of definitional distinctiveness. First, the definitional distinctiveness of transparency from orbiting scales was investigated (Sample 3). We chose “informational justice” (Colquitt, 2001; Maier et al., 2007) and the “communication quality between employees and managers” (Mohr et al., 2014), since both constructs are at a similar conceptual level as the focal construct and measure comparable perceptions. Following Colquitt et al. (2019), the distinctiveness of organizational transparency from these constructs can be interpreted as moderate. In addition, the distinctiveness of the transparency dimensions from each other was examined (Sample 4). This is especially important since two dimensions were added, which might already have been covered by the items of the existing ones. Again, we followed the recommendations of Colquitt and colleagues (2019) and found a very strong definitional distinctiveness.
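    The logic of this definitional-correspondence procedure can be sketched as follows. The sketch is a simplified illustration of the general idea and does not compute the exact indices of Colquitt et al. (2019); the data frame ratings and its columns are hypothetical.

```r
# Simplified illustration of the definitional-correspondence logic (Hinkin & Tracey, 1999):
# 'ratings' is a hypothetical long-format data frame with one row per judge x item x definition,
# containing the columns judge, item, definition, and rating (1-7).

# Mean fit of each item to each construct definition
item_means <- aggregate(rating ~ item + definition, data = ratings, FUN = mean)

# For one focal transparency item, compare its ratings on the intended definition
# with its ratings on an orbiting definition (e.g., informational justice);
# both vectors are assumed to be ordered by judge.
focal    <- subset(ratings, item == "D1" & definition == "disclosure")$rating
orbiting <- subset(ratings, item == "D1" & definition == "informational_justice")$rating
t.test(focal, orbiting, paired = TRUE)   # items should score significantly higher on their own definition
```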

     

    To further examine construct validity of the presented transparency scale, it was investigated whether transparency correlates relatively highly with theoretically related constructs and is uncorrelated or only weakly correlated with presumably unrelated constructs (Sample 6). As related constructs we chose a scale of informational justice (Maier et al., 2007), a measure of communication quality (Mohr et al., 2014), and the questionnaire for assessing communication in organizations (KomminO; Sperka, 1997). These scales address perceptions of information sharing within organizations and of the communication between supervisors and employees; they thus measure perceptions related to those captured by the extended DCA-Transparency Scale and should correlate strongly with the focal construct. In contrast, we chose a scale of well-being (Brähler et al., 2007) and an instrument assessing need for cognition (Beißert et al., 2014) as unrelated constructs, as these describe a psychological state rather than an information-related perception. The correlations presented in Table 8 provide further evidence of validity, as the pattern of correlations of transparency with related and unrelated constructs is consistent with the predictions.

     

    Table 8

    Correlations to Related and Unrelated Constructs

     

    Scale | Communication Quality | Informational Justice | KomminO | Well-Being | Need-for-Cognition
    Overall Transparency | .650** | .861** | .572** | .164** | .067
    Disclosure | .616** | .767** | .514** | .170** | .124*
    Clarity | .604** | .746** | .473** | .084 | -.029
    Accuracy | .557** | .750** | .405** | .041 | -.13
    Timeliness | .538** | .744** | .376** | .195** | .040
    Relevance | .453** | .629** | .372** | .119* | .122*

    Note. * p < .05, ** p < .01; Communication Quality scale of Mohr et al. (2014); Informational Justice scale of Maier et al. (2007); KomminO scale of Sperka (1997); Well-Being scale of Brähler et al. (2007); Need-for-Cognition scale of Beißert et al. (2014).

     

    Criterion validity was further tested by investigating the relationship of transparency with expected outcomes in a longitudinal research design using the software R (Sample 5). To test the effects, multiple (stepwise) regression was used. The integrative model of organizational trust by Mayer et al. (1995) was used to assess the relationship of the transparency dimensions to supervisor trustworthiness and employees’ trust in their supervisor. According to this model, individuals form perceptions of a person’s ability, benevolence, and integrity. These perceptions constitute the trustworthiness of a trustee and are thus an important predictor of the formation of trust in the corresponding person (Mayer et al., 1995). Following Schnackenberg and Tomlinson (2016), the transparency dimensions in turn affect individuals’ perceptions of the information sender’s ability, benevolence, and integrity, and thus influence individuals’ perception of trustworthiness and ultimately the formation of trust. Trustworthiness can therefore be understood as a mediator in the transparency–trust relationship. The results indicated that all transparency dimensions affected employees’ trust in their supervisors either directly or indirectly. The findings thus provide further evidence for the distinctiveness of the transparency dimensions and support criterion validity. Besides trustworthiness and trust, employees’ job satisfaction was used as another outcome; again, certain transparency dimensions (disclosure, timeliness, and relevance) were shown to be antecedents of employees’ job satisfaction. The results are shown in Table 9. For further details, see Hossiep et al. (2021).
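    The ΔR2 reported in Table 9 below compares a model containing only the original DCA dimensions with the full five-dimension model. A minimal sketch of this hierarchical regression logic in R, assuming a hypothetical data frame df with dimension scores and an outcome variable job_satisfaction, could look as follows; it is not the authors’ original analysis script.

```r
# Sketch of the hierarchical regression logic behind the delta-R2 in Table 9
# (variable names are hypothetical).
m_dca  <- lm(job_satisfaction ~ disclosure + clarity + accuracy, data = df)
m_full <- lm(job_satisfaction ~ disclosure + clarity + accuracy + timeliness + relevance, data = df)

summary(m_full)                                         # coefficients of the five transparency dimensions
anova(m_dca, m_full)                                    # F-test of the increment from timeliness and relevance
summary(m_full)$r.squared - summary(m_dca)$r.squared    # delta R2
```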

     

     

    Table 9

    Examination of Criterion Validity (Main Effects)

    Variable | Ability beta (se) | Benevolence beta (se) | Integrity beta (se) | Trust beta (se) | Job Satisfaction beta (se)
    Overall Transparency | 0.70*** (0.05) | 0.76*** (0.05) | 0.81*** (0.05) | 0.25*** (0.06) | 0.55*** (0.06)
    Disclosure | 0.17* (0.07) | 0.25*** (0.07) | 0.16* (0.07) | -0.05 (0.09) | 0.26** (0.08)
    Clarity | 0.19* (0.08) | 0.02 (0.09) | 0.22** (0.08) | 0.25* (0.11) | 0.12 (0.10)
    Accuracy | 0.12 (0.07) | 0.15 (0.08) | 0.17* (0.07) | 0.27** (0.10) | 0.02 (0.09)
    Timeliness | 0.09 (0.06) | 0.11 (0.07) | 0.17** (0.06) | -0.08 (0.08) | 0.20** (0.07)
    Relevance | 0.14* (0.05) | 0.22*** (0.06) | 0.17** (0.05) | -0.08 (0.07) | 0.20** (0.06)
    Adj. R2 | .34 | .32 | .39 | .07 | .19
    ΔR2 a | .02** | .03*** | .02*** | .01 | .02*

    Note. * p < .05, ** p < .01, *** p < .001; N = 376 (Sample 5); a ΔR2 refers to the difference in explained variance between DCA-transparency (disclosure, clarity, accuracy) and full transparency including timeliness and relevance as additional factors.

     

    Descriptive statistics (scaling)

    We included descriptive statistics (i.e., means, standard deviations, skewness, and kurtosis) for each subscale in Table 5.

     

     

    Further literature

    The underlying English scale consisting of the dimensions Disclosure, Clarity, and Accuracy was developed by Schnackenberg et al. (2020). The article on the translation and extension of this scale is by Hossiep et al. (2021).

     

    Hossiep, C. R., Märtins, J., & Schewe, G. (2021). DCA-Transparency: Validation and Extension of a German Scale. German Journal of Work and Organizational Psychology, 65(3), 165-178. https://doi.org/10.1026/0932-4089/a000360

    Schnackenberg, A. K., Tomlinson, E., & Coen, C. (2020). The dimensional structure of transparency: a construct validation of transparency as disclosure, clarity, and accuracy in organizations. Human Relations, 74(10), 1628-1660. https://doi.org/10.1177/0018726720933317

     

    • C. Richard Hossiep, Center for Management, Chair of Business Administration – Organization, Human Resource Management and Innovation, University of Muenster, Universitätsstr. 14-16, 48143 Münster, Germany, E-Mail: Richard.Hossiep@wiwi.uni-muenster.de
    • Julian Märtins, Center for Management, Chair of Business Administration – Organization, Human Resource Management and Innovation, University of Muenster, Universitätsstr. 14-16, 48143 Münster, Germany, E-Mail: Julian.Maertins@wiwi.uni-muenster.de
    • Gerhard Schewe, Center for Management, Chair of Business Administration – Organization, Human Resource Management and Innovation, University of Muenster, Universitätsstr. 14-16, 48143 Münster, Germany, E-Mail: Gerhard.Schewe@wiwi.uni-muenster.de

    The data sets of Sample 5 are available at PsychArchives of the Leibniz Institute of Psychology (https://doi.org/10.1026/0932-4089/a000360).