Validation of the Italian version of the Davos Assessment of Cognitive Biases Scale (DACOBS) in a sample of schizophrenia spectrum disorder patients and healthy controls

Valentina Pugliese1, Matteo Aloi1, Davide Maestri2, Renato de Filippis1, Raffaele Gaetano1, Lorenzo Pelizza2, Cristina Segura-Garcia3, Pasquale De Fazio1

1Psychiatry Unit, Department of Health Sciences, University Mater Domini, Catanzaro, Italy; 2Parma Department of Mental Health and Pathological Addiction, AUSL of Parma, Parma, Italy; 3Psychiatry Unit, Department of Medical and Surgical Sciences, University Mater Domini, Catanzaro, Italy.

Summary. Purpose. Recently, two instruments have been developed to address the study of cognitive biases in schizophrenia spectrum disorders (SSD): the Cognitive Biases Questionnaire for Psychosis (CBQ-P) and the Davos Assessment of Cognitive Biases Scale (DACOBS). The aim of this study was to validate the Italian version of the DACOBS. Methods. We investigated the factor structure, reliability, discriminative validity and convergent validity of the instrument by comparison with the CBQ-P in an Italian sample of 102 patients diagnosed with SSD and 330 healthy controls (HC), matched by age, education and gender. Results. The second-order seven-factor solution provided the best results among the four models tested. Reliability proved to be very satisfactory, with ω coefficients ranging from 0.75 for Jumping to conclusions to 0.89 for Safety behaviors. The Italian version of the DACOBS discriminated patients with psychosis from HC (Wilks’ Lambda=0.64, F=34.284, p<0.001; η²=0.364). All seven DACOBS subscales were significantly correlated with the CBQ-P subscales (total sample: r=0.331-0.707; SSD group: r=0.424-0.735; HC group: r=0.177-0.460). Conclusions. The Italian version of the DACOBS is a valid instrument for measuring cognitive biases in patients with psychosis, confirming previous results regarding the psychometric properties of the tool.

Key words. Assessment, cognitive bias, psychometric properties, schizophrenia spectrum disorders, validation.

Validazione della versione italiana della Davos Assessment of Cognitive Biases Scale (DACOBS) in un campione di pazienti con disturbo dello spettro della schizofrenia e controlli sani.

Riassunto. Scopo. Recentemente sono stati sviluppati due strumenti per la valutazione dei bias cognitivi nei disturbi dello spettro della schizofrenia (DSS): il Cognitive Biases Questionnaire for Psychosis (CBQ-P) e la Davos Assessment of Cognitive Biases Scale (DACOBS). Scopo di questo studio è stato quello di validare la versione italiana della DACOBS. Metodi. È stata realizzata l’analisi fattoriale confermatoria e sono state valutate l’affidabilità e la validità discriminativa e convergente dello strumento confrontandole con il CBQ-P in un campione italiano di 102 pazienti con diagnosi di DSS e 330 controlli sani, appaiati per età, istruzione e genere. Risultati. La soluzione a sette fattori di secondo ordine ha fornito i migliori risultati tra i quattro modelli testati. L’affidabilità si è rivelata molto soddisfacente, con un coefficiente ω compreso tra 0,75 per “Jumping to conclusions” e 0,89 per “Safety behaviors”. La versione italiana della DACOBS discrimina tra pazienti con psicosi e controlli sani (Lambda di Wilks=0,64; F=34,284; p<0,001; η²=0,364). Tutte e sette le sottoscale della DACOBS erano significativamente correlate con le sottoscale del CBQ-P (campione totale: r=0,331-0,707; pazienti psicotici: r=0,424-0,735; controlli sani: r=0,177-0,460). Conclusioni. La versione italiana della DACOBS è uno strumento valido per misurare i bias cognitivi nei pazienti con psicosi, confermando i risultati precedenti relativi alle proprietà psicometriche del test.

Parole chiave. Assessment, bias cognitivi, disturbi dello spettro della schizofrenia, proprietà psicometriche, validazione.

Introduction

Schizophrenia is mainly characterized by positive (e.g., hallucinations and delusions) and negative (e.g., avolition, blunted affect and anhedonia) symptoms in addition to behavioral disorganization and persistent cognitive impairments1. Indeed, individuals with psychosis show pervasive cognitive biases, described as systematic errors in both cognitive processing and content meaning across specific situations2.

Cognitive biases are conceptualized as a systematic tendency to appraise, process, select and remember specific information3. Moreover, cognitive biases influence several cognitive domains, such as attention, decision-making/reasoning, memory recall, motivation and style of attribution of meaning4. Examples include jumping to conclusions5, confirmation bias or bias against disconfirmatory evidence6, belief inflexibility7, and attributional biases8.

Recently, two assessment tools were developed to address the study of the cognitive biases prevalent in psychosis: the Cognitive Biases Questionnaire for Psychosis (CBQ-P)9 and the Davos Assessment of Cognitive Biases Scale (DACOBS)10.

The DACOBS was developed by van der Gaag et al.10 and consists of 42 items grouped into seven subscales of 6 items each. To date, the ability of the DACOBS to discriminate between patients with psychosis and the healthy population has been little investigated and, to the best of our knowledge, the DACOBS has been translated and validated only in Polish11, Dutch12 and Flemish13 populations. Furthermore, in the Italian context only a validated version of the CBQ-P is available14, and assessment tools able to evaluate and measure the specific cognitive biases in psychosis that the DACOBS overlaps with and extends are still lacking.

The aim of this study was to validate the Italian version of the DACOBS by examining: 1) its factor structure; 2) the reliability of the questionnaire; 3) its discriminative validity in differentiating patients from non-clinical subjects; and 4) its convergent validity with the CBQ-P.

Materials and methods

Participants and procedure

Participants were outpatients aged 18-65 years attending the Psychiatry Unit of the University of Catanzaro (Italy) between April 2019 and August 2020. We included all patients who had been followed at the unit for at least twelve months and had a diagnosis of Schizophrenia Spectrum Disorder (SSD) according to DSM-5 criteria (American Psychiatric Association, 2013), formulated through the Structured Clinical Interview for DSM-5 (SCID-5-CV)15 by experienced psychiatrists trained in the administration of neuropsychiatric tests who used these tools in their daily clinical practice.

A control group was recruited from the local community via Internet advertisements and from the local university staff, and was chosen to match the patient group by age, education and gender. Prior to the assessment, all controls were interviewed about the lifetime presence of a schizophrenia spectrum disorder and were excluded if it was present. All participants had to be aged between 18 and 65 years and competent in the Italian language.

A total of 432 participants were included in this study: 330 healthy controls (HC) and 102 patients diagnosed with SSD.

The study was carried out in accordance with the latest version of the Declaration of Helsinki16 and was approved by the local research ethics committee. All patients and controls provided written informed consent according to the Ethics Committee’s guidelines before any data were collected.

A bilingual Dutch researcher translated the original DACOBS from Dutch into Italian. Subsequently, a bilingual Italian researcher, blind to the original Dutch version, back-translated the test from Italian to Dutch. A third bilingual researcher verified the accuracy of both translations. After verifying the equivalence with the original test, the DACOBS was administered to a small group of 20 volunteers who evaluated the comprehensibility of the items. All raters considered it clear and easy to rate (see appendix 1 online at www.rivistadipsichiatria.it).

Measures

Davos Assessment of Cognitive Biases Scale (DACOBS)10: it consists of 42 Likert-type items, rated from 1 (strongly disagree) to 7 (strongly agree) and referring to the last two weeks. This self-report questionnaire specifically aims to measure four cognitive biases (Jumping to conclusions bias, Belief inflexibility bias, Attention to threat bias, External attribution bias), two cognitive limitations (Social cognition problems, Subjective cognitive problems) and avoidance behavior (Safety behaviors). The DACOBS has demonstrated good reliability, with internal consistency ranging from 0.64 to 0.90, and discriminates satisfactorily between SSD and HC samples10. Regarding convergent validity, five of the seven subscales showed significant associations with the corresponding validation measures, ranging from 0.36 to 0.63: Jumping to conclusions bias with the Beads task17, Belief inflexibility bias with the Dogmatism Scale (DOG scale)18, Attention to threat bias and External attribution bias with the Green Paranoid Thoughts Scale (GPTS)19, and Safety behaviors with the Safety Behaviors Questionnaire-Paranoid Delusions (SBQ-PD)20.
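
For readers who score the scale in scripted analyses, the following minimal Python sketch sums item responses into the seven subscale scores. The item-to-subscale assignment shown (consecutive blocks of six items) and the function name are illustrative only; the actual scoring key is the one defined in the original publication10.

```python
from typing import Dict, List

# Illustrative DACOBS scoring sketch. The item-to-subscale assignment below is a
# placeholder (items 1-6, 7-12, ... in questionnaire order); the published
# scoring key by van der Gaag et al. should be used in real analyses.
SUBSCALES = [
    "Jumping to conclusions", "Belief inflexibility", "Attention to threat",
    "External attribution", "Social cognition problems",
    "Subjective cognitive problems", "Safety behaviors",
]

def score_dacobs(responses: List[int]) -> Dict[str, int]:
    """Sum the 42 Likert responses (1-7) into seven 6-item subscale scores."""
    if len(responses) != 42 or not all(1 <= r <= 7 for r in responses):
        raise ValueError("Expected 42 item responses, each rated from 1 to 7.")
    scores = {name: sum(responses[i * 6:(i + 1) * 6])
              for i, name in enumerate(SUBSCALES)}
    scores["Total"] = sum(responses)  # total score over all 42 items
    return scores
```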

Cognitive Biases Questionnaire for Psychosis (CBQ-P)9: it was developed from the Cognitive Style Test (CST)21, which is made up of 30 vignettes describing ordinary life events (half pleasant and half unpleasant). In the CST, respondents imagine themselves in each of the proposed scenarios and must choose 1 of 4 possible cognitive responses to the situation, which represent general depressive cognitive distortions such as selective abstraction and overgeneralization. These vignettes were adapted to psychosis in the CBQ-P by creating new scenarios covering 2 themes of particular relevance to psychosis: “Anomalous Perceptions” (AP) and “Threatening Events” (TE). Each scenario includes a forced-choice answer among 3 options, indicating absence of bias (score of 1), possible presence of bias (score of 2), and likely presence of bias (score of 3).

Scores can range between 30 and 90 points (15-45 for each theme and 6-18 for each thinking bias). To reduce potential response biases, the order of the response options was randomized across items. In the present study, we used the Italian validation of the CBQ-P14. The McDonald’s ω coefficient in our sample was 0.912.
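
A corresponding minimal scoring sketch for the CBQ-P is shown below; the split of the 30 vignettes into the two themes (first 15 = AP, last 15 = TE) is an illustrative assumption, since the real key is defined by Peters et al.9.

```python
# Hypothetical CBQ-P scoring sketch: the assignment of items to themes is
# illustrative, not the published key.
def score_cbqp(responses):
    """Sum 30 forced-choice answers coded 1 (no bias) to 3 (likely bias)."""
    assert len(responses) == 30 and all(r in (1, 2, 3) for r in responses)
    return {
        "Anomalous Perceptions": sum(responses[:15]),   # range 15-45
        "Threatening Events": sum(responses[15:]),      # range 15-45
        "Total": sum(responses),                        # range 30-90
    }
```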

Statistical analysis

Different confirmatory factor analyses (CFA) were conducted using the open-source software JASP (Version 0.13.1, University of Amsterdam, The Netherlands) to examine the best latent structure of the Italian version of the DACOBS. We tested a one-factor, a three-factor, and a seven-factor model based on van der Gaag et al.10, as well as a second-order seven-factor solution. The diagonally weighted least squares (DWLS) estimator with a polychoric correlation matrix was used to estimate the parameters, because it is the best option for modelling ordered data22.
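
A scripted equivalent of this CFA can be sketched in Python. The snippet below is an illustration only: the analyses reported here were run in JASP, the semopy package with lavaan-style model syntax is assumed (including its DWLS objective and calc_stats helper), and the item and factor names are placeholders.

```python
# Sketch of the second-order seven-factor DACOBS model, fitted with semopy as
# an illustration (JASP was used for the published analyses; DWLS support and
# the exact semopy function names are assumptions).
import pandas as pd
import semopy

MODEL_DESC = """
# First-order factors: 6 items each (item names d1-d42 are placeholders)
JTC  =~ d1 + d2 + d3 + d4 + d5 + d6
BI   =~ d7 + d8 + d9 + d10 + d11 + d12
AT   =~ d13 + d14 + d15 + d16 + d17 + d18
EA   =~ d19 + d20 + d21 + d22 + d23 + d24
SC   =~ d25 + d26 + d27 + d28 + d29 + d30
SUBJ =~ d31 + d32 + d33 + d34 + d35 + d36
SB   =~ d37 + d38 + d39 + d40 + d41 + d42
# Second-order general factor loading on the seven first-order factors
DACOBS =~ JTC + BI + AT + EA + SC + SUBJ + SB
"""

data = pd.read_csv("dacobs_items.csv")     # one column per item, ordinal 1-7
model = semopy.Model(MODEL_DESC)
model.fit(data, obj="DWLS")                # diagonally weighted least squares
print(semopy.calc_stats(model).T)          # chi2, CFI, TLI, RMSEA, etc.
```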

The Tucker-Lewis Index (TLI), the Comparative Fit Index (CFI), the Root Mean Square Error of Approximation (RMSEA), the Standardized Root Mean Squared Residual (SRMR) and the relative chi-square (χ²/df) were used to assess the goodness of fit of the data to each proposed model. For TLI and CFI, values of 0.90 and above were considered adequate, whereas values of 0.95 or above were considered very good; for RMSEA, values of 0.08 and below were considered adequate and 0.05 or less very good; for SRMR, a cut-off value close to 0.08 was considered adequate. Values of χ²/df <3.0 are good and those <2.0 are very good. The levels of these indices were evaluated according to the recommendations of Hu and Bentler23. McDonald’s ω reliability coefficient was calculated. Correlations between the DACOBS and the CBQ-P were calculated to measure convergent validity, considering that correlation coefficients greater than 0.30 are recommended24.
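
For reference, most of the indices above can be computed from the model and baseline chi-square statistics with the standard formulas, as in this minimal Python sketch (SRMR is omitted because it requires the residual correlation matrix):

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Common fit indices from model (m) and baseline (b) chi-square statistics.

    chi2_m, df_m: chi-square and degrees of freedom of the tested model
    chi2_b, df_b: chi-square and degrees of freedom of the baseline (null) model
    n: sample size
    """
    d_m = max(chi2_m - df_m, 0.0)                 # model non-centrality
    d_b = max(chi2_b - df_b, 0.0)                 # baseline non-centrality
    cfi = 1.0 - d_m / max(d_b, d_m, 1e-12)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return {"CFI": cfi, "TLI": tli, "RMSEA": rmsea, "chi2/df": chi2_m / df_m}
```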

A multivariate analysis of covariance (MANCOVA) was performed to explore whether the DACOBS was able to differentiate between SSD patients and healthy controls, controlling for age, sex and years of education. A p<0.05 was considered statistically significant.
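
As an illustration of this analysis, the sketch below runs the same kind of multivariate model with the Python package statsmodels; the input file, the group indicator and the column names are assumptions made for the example. The multivariate effect size reported below can be approximated from Wilks’ Λ as η² = 1 − Λ.

```python
# Minimal MANCOVA sketch with statsmodels (column names are placeholders for
# the seven DACOBS subscales, the group indicator and the covariates).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("dacobs_scores.csv")   # assumed file, one row per participant
formula = ("jtc + bi + at + ea + sc + subj + sb ~ "
           "group + age + gender + education")
fit = MANOVA.from_formula(formula, data=df)
print(fit.mv_test())   # Wilks' lambda, F and p for group and each covariate

# Approximate multivariate effect size: partial eta squared = 1 - Wilks' lambda,
# e.g. 1 - 0.64 = 0.36 for the group effect reported in this study.
```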

Results

Demographics

The sociodemographic characteristics of the sample are shown in table 1.




No differences were evident between the groups for age, gender or education level. The groups differed significantly in marital status, as SSD patients were more frequently single.

Confirmatory factor analysis

According to the fit indices of the four CFA models (table 2), the second-order seven-factor solution (figure 1) was the best, demonstrating a very good model fit.







Reliability of the scores

The McDonald’s ω coefficients of the seven DACOBS subscales indicated high score reliability: 0.75 for Jumping to conclusions bias, 0.80 for Belief inflexibility bias, 0.76 for Attention to threat bias, 0.81 for External attribution bias, 0.83 for Social cognition problems, 0.85 for Subjective cognitive problems and 0.89 for Safety behaviors. The ω coefficient for the DACOBS total score was 0.96, corresponding to excellent reliability.

Discriminant validity

We performed a MANCOVA using status (SSD vs HC) as the independent variable; age, gender and educational level were entered as covariates.

Overall, a significant main effect of case-control status emerged (Wilks’ Lambda=0.64, F=34.284, p<0.001; η²=0.364), but not of age. Gender and educational level were also significantly related to the DACOBS subscales (gender: Wilks’ Lambda=0.963, F=2.276, p=0.028, η²=0.037; educational level: Wilks’ Lambda=0.939, F=3.871, p<0.001, η²=0.061).

More specifically, male gender was positively associated with the DACOBS Jumping to conclusions subscale (F=7.622, p=0.006; η²=0.018) and educational level was positively associated with all the DACOBS scales (p<0.05) except Jumping to conclusions.

Regarding the main effect of status (table 3), SSD patients scored significantly higher than healthy controls on all seven DACOBS scales.




Convergent validity

As displayed in table 4, all seven DACOBS scales were significantly correlated with the CBQ-P subscales in the total sample (r=0.331-0.707) as well as in the patient group (r=0.424-0.735) and the healthy control group (r=0.177-0.460) separately.
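
The correlational analysis summarized above can be reproduced with a few lines of Python; the input file, the group variable and the column names below are placeholders used only for illustration.

```python
# Sketch of the convergent-validity correlations between DACOBS and CBQ-P
# scores, overall and by group (column names are assumed).
import pandas as pd

df = pd.read_csv("dacobs_cbqp_scores.csv")       # assumed combined score file
dacobs_cols = ["jtc", "bi", "at", "ea", "sc", "subj", "sb"]
cbqp_cols = ["cbqp_ap", "cbqp_te", "cbqp_total"]

def cross_correlations(frame):
    """Pearson correlations between DACOBS and CBQ-P subscale scores."""
    return frame[dacobs_cols + cbqp_cols].corr().loc[dacobs_cols, cbqp_cols]

print(cross_correlations(df))                    # total sample
for group, sub in df.groupby("group"):           # SSD and HC separately
    print(group, cross_correlations(sub), sep="\n")
```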




Discussion

The aim of the present study was to validate the Italian version of the DACOBS. To the best of our knowledge, no other study has investigated the psychometric properties of this tool in an Italian sample.

Regarding the factor structure, van der Gaag et al.10 described a seven-factor model as the best fit for their data. In our study, the one-, three- and seven-factor models showed poor fit indices, whereas a second-order seven-factor model emerged as the best solution among those tested for the DACOBS, indicating that the total score adequately summarizes the characteristics of all seven factors.

Reliability also proved to be very satisfactory, with ω coefficients ranging from 0.75 to 0.89, which indicate adequate levels of reliability for clinical decisions25,26. The results from van der Gaag et al.10 also showed good internal consistency, although they used Cronbach’s alpha, which ranged from 0.64 to 0.90. For multidimensional constructs, the omega coefficient has the advantage of considering the strength of the association between items and constructs as well as item-specific measurement errors. Thus, omega provides more realistic estimates of the true reliability of the scale27,28.
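
To make the difference between the two coefficients concrete, the following Python sketch computes Cronbach’s α directly from the item scores and McDonald’s ω from the factor loadings and residual variances of a one-factor model of a subscale, using the standard formulas (function names and inputs are illustrative):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha; items is a subjects x items matrix for one subscale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def mcdonald_omega(loadings, residual_vars) -> float:
    """McDonald's omega from per-item loadings and residual variances of a
    one-factor model: (sum of loadings)^2 / ((sum of loadings)^2 + sum of
    residual variances)."""
    lam_sq = np.sum(loadings) ** 2
    return lam_sq / (lam_sq + np.sum(residual_vars))
```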

Regarding discriminant power, in accordance with the results of van der Gaag et al.10, all seven DACOBS subscales clearly differentiated patients with SSD from healthy controls in the current Italian sample.

Several studies have confirmed the association between cognitive biases and psychosis. Bastiaens et al.12 reported that cognitive biases were equally present in patients diagnosed with non-psychotic disorders and in SSD patients. Moreover, a meta-analysis established that the Jumping to conclusions bias was more robust in patients diagnosed with psychotic disorders than in healthy individuals and patients with non-psychotic disorders, such as depression, obsessive-compulsive disorder and anxiety disorders29.

On the other hand, positive psychotic-like experiences (e.g., perceptual abnormalities, delusional thoughts) have been related to some cognitive biases, such as Attention to threat, Externalizing, Belief inflexibility and Jumping to conclusions, in both healthy and Ultra High Risk (UHR) individuals30. Jumping to conclusions and alterations of neuropsychological domains occur during the early stages of psychotic illness31, and there is evidence supporting the role of cognitive biases in the onset and maintenance of psychotic symptoms32.

Regarding convergent validity, significant correlations between the DACOBS and CBQ-P scales were found, both in the total sample and in the patient and healthy control groups separately, in line with the findings of Bastiaens et al.13; thus, the DACOBS proved to be appropriate for measuring cognitive biases.

However, the results of this study should be evaluated in the context of some potential limitations. The first limitation is the use of self-report questionnaires, which are exposed to the risk of concealment, social desirability, and misunderstanding; task-based tests are certainly more adequate in providing evidence for the presence or absence of a specific cognitive bias. The second weakness is that the protocol did not include a test-retest reliability assessment. However, reproducibility over time (test-retest) is only one of several ways to classify and measure reliability, which also include internal consistency. Internal consistency measures how well the individual item scores correlate with each other33, and in the current study it proved to be satisfactory. The last limitation is that the clinical data on healthy controls were partial: subjects with a current or previous clinical diagnosis of psychosis were excluded, but the data did not include a measurement of the “at-risk mental state” (ARMS)34. ARMS individuals are commonly identified using cognitive basic symptoms or ‘ultra-high-risk’ (UHR) criteria35; therefore, the sample could theoretically include high-risk subjects.

On the other hand, our study has some strengths. First, the validation was performed in a clinical and community sample, comprising both patients with psychosis and healthy participants; second, the sample size was large. Indeed, recommendations for the sample size used to validate a scale range from 2 to 20 subjects per item36, with an absolute minimum of 100 to 250 subjects37. Moreover, Comrey and Lee38 provided the following guidance: 100 subjects = poor, 200 = fair, 300 = good, 500 = very good, ≥1000 = excellent. Therefore, according to all these recommendations, our sample can be considered more than good.

Conclusions

Self-report instruments assessing cognitive biases, such as the DACOBS, are simple to use and could be adopted in routine clinical practice and research to facilitate early recognition. Our findings suggest that the DACOBS is a valid instrument for measuring cognitive biases and limitations in psychosis among Italian speakers, confirming previous results regarding the psychometric properties of the tool.

Since cognitive biases are an important target of clinical intervention, an effective tool is needed for their accurate detection and measurement in clinical settings.

Acknowledgments: the authors are grateful to all participants who accepted to contribute to this research. The authors are grateful to Dr. Deborah Lauria and Dr. Eric Ettema for their valuable contribution in translating the DACOBS. Finally, the authors thank Prof. Mark van der Gaag for his valuable suggestions during the writing of the manuscript.

Conflict of interests: no potential competing interest was reported by the authors.

References

1. Yoon JH, Minzenberg MJ, Ursu S, et al. Association of dorsolateral prefrontal cortex dysfunction with disrupted coordinated brain activity in schizophrenia: relationship with impaired cognition, behavioral disorganization, and global function. Am J Psychiatry 2008; 165: 1006-14.

2. Garety PA, Kuipers E, Fowler D, Freeman D, Bebbington PE. A cognitive model of the positive symptoms of psychosis. Psychol Med 2001; 31: 189-95.

3. Grisham JR, Becker L, Williams AD, Whitton AE, Makkar SR. Using cognitive bias modification to deflate responsibility in compulsive checkers. Cognit Ther Res 2014; 38: 505-17.

4. Beck A, Rector N, Stolar N, Grant P. Schizophrenia: cognitive theory, research, and therapy. New York, NY: Guilford Press, 2011.

5. Garety P, Freeman D, Jolley S, Ross K, Waller H, Dunn G. Jumping to conclusions: the psychology of delusional reasoning. Adv Psychiatr Treat 2011; 17: 332-9.

6. Moritz S, Woodward TS. A generalized bias against disconfirmatory evidence in schizophrenia. Psychiatry Res 2006; 142: 157-65.

7. Garety PA, Freeman D, Jolley S, et al. Reasoning, emotions, and delusional conviction in psychosis. J Abnorm Psychol 2005; 114: 373-84.

8. Bentall RP, Corcoran R, Howard R, Blackwood N, Kinderman P. Persecutory delusions: a review and theoretical integration. Clin Psychol Rev 2001; 21: 1143-92.

9. Peters ER, Moritz S, Schwannauer M, et al. Cognitive biases questionnaire for psychosis. Schizophr Bull 2014; 40: 300-13.

10. van der Gaag M, Schütz C, ten Napel A, et al. Development of the Davos Assessment of Cognitive Biases Scale (DACOBS). Schizophr Res 2013; 144: 63-71.

11. Gawęda Ł, Prochwicz K, Krężołek M, Kłosowska J, Staszkiewicz M, Moritz S. Self-reported cognitive distortions in the psychosis continuum: a Polish 18-item version of the Davos Assessment of Cognitive Biases Scale (DACOBS-18). Schizophr Res 2018; 192: 317-26.

12. Bastiaens T, Claes L, Smits D, Vanwalleghem D, De Hert M. Self-reported cognitive biases are equally present in patients diagnosed with psychotic versus non psychotic disorders. J Nerv Ment Dis 2018; 206: 122-9.

13. Bastiaens T, Claes L, Smits D, De Wachter D, van der Gaag M, De Hert M. The Cognitive Biases Questionnaire for Psychosis (CBQ-P) and the Davos Assessment of Cognitive Biases (DACOBS): validation in a Flemish sample of psychotic patients and healthy controls. Schizophr Res 2013; 147: 310-4.

14. Pozza A, Dèttore D. The CBQ-p: a confirmatory study on factor structure and convergent validity with psychotic-like experiences and cognitions in adolescents and young adults. Appl Psychol Bull 2017; 280: 58-69.

15. First MB. SCID-5-CV: structured clinical interview for DSM-5 disorders, clinician version. Washington, DC: American Psychiatric Association, 2016.

16. World Medical Association. World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA 2013; 310: 2191-4.

17. Phillips LD, Edwards W. Conservatism in a simple probability inference task. J Exp Psychol 1966; 72: 346-54.

18. Altemeyer B. Dogmatic behavior among students: testing a new measure of dogmatism. J Soc Psychol 2002; 142: 713-21.

19. Green CEL, Freeman D, Kuipers E, et al. Measuring ideas of persecution and social reference: the Green et al. Paranoid Thought Scales (GPTS). Psychol Med 2008; 38: 101-11.

20. Freeman D, Garety PA, Kuipers E. Persecutory delusions: developing the understanding of belief maintenance and emotional distress. Psychol Med 2001; 31: 1293-306.

21. Blackburn IM, Jones S, Lewin RJP. Cognitive style in depression. Br J Clin Psychol 1986; 25: 241-51.

22. Brown T. Confirmatory factor analysis for applied research. New York, NY: Guilford Press, 2006.

23. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Structural Equation Modeling 1999; 6: 1-55.

24. McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychol Methods 1996; 1: 30-46.

25. Reise SP. The rediscovery of bifactor measurement models. Multivariate Behav Res 2012; 47: 667-96.

26. Reise SP, Bonifay WE, Haviland MG. Scoring and modeling psychological measures in the presence of multidimensionality. J Pers Assess 2013; 95: 129-40.

27. Dunn TJ, Baguley T, Brunsden V. From alpha to omega: a practical solution to the pervasive problem of internal consistency estimation. Br J Psychol 2014; 105: 399-412.

28. Schweizer K. On the changing role of Cronbach’s α in the evaluation of the quality of a measure. Eur J Psychol Assess 2011; 27: 143-4.

29. So SH, Siu NY, Wong H, Chan W, Garety PA. ‘Jumping to conclusions’ data-gathering bias in psychosis and other psychiatric disorders. Two meta-analyses of comparisons between patients and healthy individuals. Clin Psychol Rev 2016; 46: 151-67.

30. Livet A, Navarri X, Potvin S, Conrod P. Cognitive biases in individuals with psychotic-like experiences: a systematic review and a meta-analysis. Schizophr Res 2020; 222: 10-22.

31. González LE, López-Carrilero R, Barrigón ML, et al. Neuropsychological functioning and jumping to conclusions in recent onset psychosis patients. Schizophr Res 2018; 195: 366-71.

32. Pot-Kolder R, Veling W, Counotte J, van der Gaag M. Self-reported cognitive biases moderate the associations between social stress and paranoid ideation in a virtual reality experimental study. Schizophr Bull 2017; 44: 749-56.

33. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med 2006; 119: 166.e7-166.e16.

34. Bonnett LJ, Varese F, Smith CT, Flores A, Yung AR. Individualised prediction of psychosis in individuals meeting at-risk mental state (ARMS) criteria: protocol for a systematic review of clinical prediction models. Diagnostic Progn Res 2019; 3: 21.

35. Fusar-Poli P, Borgwardt S, Bechdolf A, et al. The psychosis high-risk state. JAMA Psychiatry 2013; 70: 107.

36. Hair J, Anderson R, Tatham R, Black W. Multivariate data analysis. New York, NY: Pearson College Division, 1995.

37. Anthoine E, Moret L, Regnault A, Sébille V, Hardouin J-B. Sample size used to validate a scale: a review of publications on newly-developed patient reported outcomes measures. Health Qual Life Outcomes 2014; 12: 2.

38. Comrey AL, Lee HB. A first course in factor analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum, 1992.