This is the reproducible report for the paper “Social and moral psychology of COVID-19 across 69 countries,” whose data repository is at https://osf.io/tfsza/.


Please cite it as:

Azevedo, F., Pavlović, T., Rêgo, G. G. d., Ay, F. C., Gjoneska, B., Etienne, T., … Sampaio, W. M. (2022, May 18). Social and moral psychology of COVID-19 across 69 countries. https://doi.org/10.31234/osf.io/a3562

You can also download the BibTeX entry here.



Abstract

The COVID-19 pandemic has affected all domains of human life, including the economic and social fabric of societies. One of the central strategies for managing public health throughout the pandemic has been persuasive messaging and collective behavior change. To help scholars better understand the social and moral psychology behind public health behavior, we present a dataset comprising 51,404 individuals from 69 countries. This dataset was collected for the International Collaboration on Social & Moral Psychology of COVID-19 project (ICSMP COVID-19). This social science survey invited participants around the world to complete a series of individual-difference measures and public health attitudes concerning COVID-19 during an early phase of the COVID-19 pandemic (between April and June 2020). The survey included seven broad categories of questions: COVID-19 beliefs and compliance behaviours; identity and social attitudes; ideology; health and well-being; moral beliefs and motivation; personality traits; and demographic variables. We report both raw and cleaned data, along with all survey materials, data visualisations, and psychometric evaluations of key variables.

Check out the paper’s Shiny app here and our website for more information. Our preprint can be found here, and our data repository is here.


***

Analyses reported in the Scientific Data paper

library(tidyverse)

# "not in" operator, used throughout for filtering
'%!in%' <- function(x, y) !('%in%'(x, y))

### Header/footer commands for longtable output in LaTeX format (used via add.to.row below)
addtorow          <- list()
addtorow$pos      <- list()
addtorow$pos[[1]] <- c(0)
addtorow$command  <- c(paste(
 "\\hline \n",
 "\\endhead \n",
 "\\hline \n",
 "{\\footnotesize Continued on next page} \n",
 "\\endfoot \n",
 "\\endlastfoot \n",
 sep=""))


###### helpers to test colours and palette ranges for the maps
#####  Wes Anderson palettes, sample-size world map #####
ssize_fun_wa <- function(name_palette = "palette", 
                         n_colors, 
                         legend_name = "Legend\nName\nNULL*") 
{
    palette_wa <- wes_palette(name_palette, n_colors, 
                              type = "continuous")
    ggplot(sample_size_n) +
        geom_polygon(data = world_map, aes(x = long, y = lat, group = group), 
                     color="#ffffff00",size=0.05, fill = "gray90") +
        geom_map(aes(map_id = ISO3, fill = n), map = world_map, 
                 color="#ffffff", size=0.05) +
        expand_limits(x = world_map$long, y = world_map$lat) +
        coord_fixed()+ 
        scale_x_discrete(labels = NULL, breaks = NULL) + 
        scale_y_discrete(labels = NULL, breaks = NULL) +
        scale_fill_gradientn(colours = palette_wa,
                             limits = c(0, 2500),
                             breaks = c(0, 500, 1000, 1500, 2000, 2500),
                             labels = c(0, 500, 1000, 1500, 2000, 2500),
                             name = legend_name,
                             guide = guide_colorbar(
                                 direction = "vertical",
                                 barheight = unit(20, units = "mm"),
                                 barwidth = unit(3, units = "mm"),
                                 draw.ulim = F,
                                 title.position = 'top',
                                 # some shifting around
                                 title.hjust = 0,
                                 label.hjust = 0)) +
        labs(x=NULL, y=NULL)+ 
        theme_map() +
        theme(text = element_text(family = "Georgia"),
              legend.position = c(0.05, 0.01),legend.text.align = 0,
              legend.background = element_rect(fill = alpha('white', 0.0)),
              legend.text = element_text(size = 6, hjust = 0, 
                                         color = "#4e4d47"),
              plot.tag.position = c(0.1, 0),
              plot.tag = element_text(size = 10, hjust = 0.1, vjust = 2),
              plot.title = element_text(color = "#4e4d47", 
                                        size=12, face = "bold",
                                        vjust = 3),
              plot.subtitle = element_text(color = "#4e4d47", vjust = 2,
                                           margin = margin(b = -0.00001, 
                                                           t = 0.05, 
                                                           l = 2, 
                                                           unit = "cm"), debug = F),
              legend.title = element_text(size = 7),
              plot.margin = unit(c(.1,.1,.1,.1), "cm"),
              panel.spacing = unit(c(-.1,0.2,.2,0.2), "cm"),
              panel.border = element_blank(),
              plot.caption = element_text(size=8,
                                          family="Georgia",color="gray35"))
}
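
# A minimal sketch of how ssize_fun_wa() could be used (not part of the original
# script): it assumes the wesanderson, ggthemes and ggplot2 packages are loaded
# and that `sample_size_n` (with an ISO3 column and a count column n) and
# `world_map` exist, as they are built in the Figure 1 chunk further below.
# The palette name must be one shipped with wesanderson, e.g. "Zissou1".
# ssize_fun_wa(name_palette = "Zissou1", n_colors = 5,
#              legend_name = "Sample\nsize")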



RevCode <- function (x) 
{
    if (is.factor(x)) {
        levels(x) <- rev(levels(x))
        res <- factor(x, levels = rev(levels(x)))
    }
    else if (is.numeric(x)) {
        res <- (min(x, na.rm=TRUE) + max(x, na.rm=TRUE) - x)
    }
    else if (is.logical(x)) {
        res <- as.logical(1 - x)
    }
    else {
        res <- NA
    }
    return(res)
}
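
# A quick illustration of RevCode() on a hypothetical 1-7 Likert item (not part
# of the original script): numeric values are mirrored around the observed
# minimum and maximum, so with min = 1 and max = 7,
# RevCode(c(1, 2, 7, NA, 4))  returns  7 6 1 NA 4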
# read the cleaned ICSMP dataset (adjust the path to your local copy of ICSMP_cleaned_data.sav)
data <- haven::read_sav("C:/Users/falaf/Dropbox/Shared/Projects/Current/ICMPS Nature SciData/ICSMP Scientific Data/Data/Final/ICSMP_cleaned_data.sav", encoding = "latin1")

Descriptives

Number of cases

nrow(data)
## [1] 51404

Number of variables

ncol(data)
## [1] 99

Number of countries

sum(table(unique(data$ISO3)))
## [1] 69
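
An equivalent, more direct count (a small aside, not part of the original script):

length(unique(data$ISO3))
## [1] 69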

Sample size by country

table(sjlabelled::as_label(data$ISO3))
## 
##     United Arab Emirates                Argentina                Australia 
##                      313                      721                     2161 
##                  Austria                  Belgium               Bangladesh 
##                     1605                     1159                      596 
##                 Bulgaria                  Bolivia                   Brazil 
##                      666                       29                     2268 
##                   Canada              Switzerland                    Chile 
##                      963                     1056                       97 
##                    China                 Colombia               Costa Rica 
##                     1030                     1277                       25 
##                     Cuba                  Germany                  Denmark 
##                       43                     1587                      566 
##       Dominican Republic                  Ecuador                    Spain 
##                       36                      148                     1090 
##                  Finland                   France           United Kingdom 
##                      698                     1119                      550 
##                    Ghana                   Greece                Guatemala 
##                      390                      640                       48 
##                 Honduras                  Croatia                  Hungary 
##                       24                      515                      506 
##                    India                  Ireland                     Iraq 
##                      741                      785                     1142 
##                   Israel                    Italy                    Japan 
##                     1253                     1282                     1239 
##                    Korea                   Latvia                  Morocco 
##                      555                     1008                      812 
##                   Mexico                Macedonia                  Nigeria 
##                     1311                      726                      608 
##                Nicaragua              Netherlands                   Norway 
##                       16                     1297                      532 
##                    Nepal              New Zealand                 Pakistan 
##                      563                      510                      565 
##                   Panama                     Peru              Philippines 
##                       18                       91                      524 
##                   Poland              Puerto Rico                 Paraguay 
##                     1817                        2                       16 
##                  Romania       Russian Federation                  Senegal 
##                     1005                      558                      552 
##                Singapore              El Salvador                   Serbia 
##                      564                       28                     1070 
##                 Slovakia                   Sweden                   Turkey 
##                     1265                     1568                     1455 
##                   Taiwan                  Ukraine                  Uruguay 
##                      833                      577                       49 
## United States of America                Venezuela             South Africa 
##                     1506                       96                      939

Number of samples

sum(table(unique(data$country)))
## [1] 77

Sample size by sample

table(sjlabelled::as_label(data$country))
## 
##     United Arab Emirates                Argentina                  Austria 
##                      313                      721                     1605 
##                Australia               Bangladesh                  Belgium 
##                     2161                      596                     1159 
##                 Bulgaria                  Bolivia                   Brazil 
##                      666                       29                      961 
##                 Brazil_2                 Brazil_3           Canada_english 
##                     1301                        6                      792 
##            Canada_french              Switzerland                    Chile 
##                      171                     1056                       97 
##                    China                 Colombia               Colombia_2 
##                     1030                      731                      546 
##               Costa Rica                     Cuba                  Germany 
##                       25                       43                     1587 
##                  Denmark       Dominican Republic                  Ecuador 
##                      566                       36                      148 
##                    Spain                  Finland                   France 
##                     1090                      698                     1119 
##           United Kingdom                    Ghana                   Greece 
##                      550                      390                      640 
##                Guatemala                 Honduras                  Croatia 
##                       48                       24                      515 
##                  Hungary                  Ireland                   Israel 
##                      506                      785                     1253 
##                    India                  India_2                     Iraq 
##                      312                      429                     1142 
##                    Italy                  Italy_2                    Japan 
##                      998                      284                     1239 
##                    Korea                   Latvia                  Morocco 
##                      555                     1008                      812 
##                Macedonia                   Mexico                 Mexico_2 
##                      726                      804                      507 
##                  Nigeria                Nicaragua              Netherlands 
##                      608                       16                     1297 
##                   Norway                    Nepal              New Zealand 
##                      532                      563                      510 
##                   Panama                     Peru              Philippines 
##                       18                       91                      524 
##                 Pakistan                   Poland              Puerto Rico 
##                      565                     1817                        2 
##                 Paraguay                  Romania                Romania_2 
##                       16                      500                      505 
##                   Serbia       Russian Federation                   Sweden 
##                     1070                      558                     1568 
##                Singapore                 Slovakia                  Senegal 
##                      564                     1265                      552 
##              El Salvador                   Turkey                   Taiwan 
##                       28                     1455                      833 
##                  Ukraine United States of America                  Uruguay 
##                      577                     1506                       49 
##                Venezuela             South Africa 
##                       96                      939

Data Collection period

dt.noNA <- data[data$start_date %!in% NA,]  # drop rows with missing start_date
#
#as.Date(as.POSIXct(dt.noNA$start_date, format = "%Y-%m-%d"))
min(dt.noNA$end_date)
## [1] "2020-04-22"
max(dt.noNA$end_date) # latest collection: the Ghana sample (n = 390), ending 2020-06-03
## [1] "2020-06-03"

Sample types

data$sample_coding_labelled <- sjlabelled::as_label(data$sample_coding)
data$country_labelled       <- sjlabelled::as_label(data$country)

# counts
table(data$sample_coding_labelled)
## 
## Quota-based nationally representative                      Post-hoc weights 
##                                 26173                                  6703 
##                           Convenience                     unknown/undecided 
##                                 18528                                     0
# proportion
table(data$sample_coding_labelled)/nrow(data)
## 
## Quota-based nationally representative                      Post-hoc weights 
##                             0.5091627                             0.1303984 
##                           Convenience                     unknown/undecided 
##                             0.3604389                             0.0000000
# per country
library(tidyverse)
data %>% 
  group_by(country_labelled, sample_coding_labelled, ISO3, country) %>% 
  summarize(n = n()) -> dt.sample.type

# display table
library(DT)
DT::datatable(dt.sample.type, 
              class = 'cell-border stripe',
              filter = 'top',
              escape = F,
              caption = htmltools::tags$caption(style = 'caption-side: bottom; text-align: left;','Table 0: ', htmltools::em('Sample types per samples.')),
    options = list(initComplete = JS(
    "function(settings, json) {",
    "$(this.api().table().header()).css({'background-color': '#2b2b2b', 'color': '#f3f4f5'});",
    "}"))
    ) 

Tables

Table 1 - Sample size, average proportion of valid answers, age of respondents and the number of data collections in 69 countries

sss <- data

# number of missing (NA) answers per respondent, excluding the free-text
# employ_status2/sex2 variables
sss$valid <- rowSums(is.na(sss[, -grep("employ_status2|sex2", names(sss))]))

# share of valid (non-NA) answers per respondent, plus indicators for reaching
# the 50% and 90% completeness thresholds
sss$prop_valid <- 1 - sss$valid/ncol(sss)
sss$Prop_50    <- ifelse(sss$prop_valid < .50, 0, 1)
sss$Prop_90    <- ifelse(sss$prop_valid < .90, 0, 1)

sss$male    <- ifelse(is.na(sss$sex1), 0, ifelse(sss$sex1 == 1, 1, 0))
sss$female  <- ifelse(is.na(sss$sex1), 0, ifelse(sss$sex1 == 2, 1, 0))
sss$other   <- ifelse(is.na(sss$sex1), 0, ifelse(sss$sex1 == 3, 1, 0))
sss$gendna  <- ifelse(is.na(sss$sex1), 1, 0)

sss$emp_ft  <- ifelse(is.na(sss$employ_status1), 0, ifelse(sss$employ_status1 == 1, 1, 0))
sss$emp_pt  <- ifelse(is.na(sss$employ_status1), 0, ifelse(sss$employ_status1 == 2, 1, 0))
sss$unemp   <- ifelse(is.na(sss$employ_status1), 0, ifelse(sss$employ_status1 == 3, 1, 0))
sss$stud    <- ifelse(is.na(sss$employ_status1), 0, ifelse(sss$employ_status1 == 4, 1, 0))
sss$ret     <- ifelse(is.na(sss$employ_status1), 0, ifelse(sss$employ_status1 == 5, 1, 0))
sss$eother  <- ifelse(is.na(sss$employ_status1), 0, ifelse(sss$employ_status1 == 6, 1, 0))
sss$empna   <- ifelse(is.na(sss$employ_status1), 1, 0)

sss$single       <- ifelse(is.na(sss$marital1), 0, ifelse(sss$marital1 == 1, 1, 0))
sss$relationship <- ifelse(is.na(sss$marital1), 0, ifelse(sss$marital1 == 2, 1, 0))
sss$married      <- ifelse(is.na(sss$marital1), 0, ifelse(sss$marital1 == 3, 1, 0))
sss$marna        <- ifelse(is.na(sss$marital1), 1, 0)

sss$child0      <- ifelse(is.na(sss$children), 0, ifelse(sss$children == 0, 1, 0))
sss$child1      <- ifelse(is.na(sss$children), 0, ifelse(sss$children == 1, 1, 0))
sss$child2      <- ifelse(is.na(sss$children), 0, ifelse(sss$children == 2, 1, 0))
sss$child3      <- ifelse(is.na(sss$children), 0, ifelse(sss$children == 3, 1, 0))
sss$child4      <- ifelse(is.na(sss$children), 0, ifelse(sss$children == 4, 1, 0))
sss$child5      <- ifelse(is.na(sss$children), 0, ifelse(sss$children == 5, 1, 0))
sss$child6      <- ifelse(is.na(sss$children), 0, ifelse(sss$children == 6, 1, 0))
sss$child7      <- ifelse(is.na(sss$children), 0, ifelse(sss$children == 7, 1, 0))
sss$child8      <- ifelse(is.na(sss$children), 0, ifelse(sss$children == 8, 1, 0))
sss$child9      <- ifelse(is.na(sss$children), 0, ifelse(sss$children == 9, 1, 0))
sss$child10plus <- ifelse(is.na(sss$children), 0, ifelse(sss$children == 10, 1, 0))
sss$childna     <- ifelse(is.na(sss$children), 1, 0)
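
# The indicator variables above all follow the same pattern; a small helper
# could generate them (a sketch only, not used below; `dummy_code` is a
# hypothetical name):
# dummy_code <- function(x, value) ifelse(is.na(x), 0, ifelse(x == value, 1, 0))
# e.g. sss$male   <- dummy_code(sss$sex1, 1)
#      sss$child0 <- dummy_code(sss$children, 0)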

sss %>% group_by(country) %>% summarize(
  n = n(), 
  Prop_50 = round(mean(Prop_50), 3), 
  Prop_90 = round(mean(Prop_90), 3), 
  M_age = round(mean(age, na.rm = T), 3), 
  SD_age = round(sd(age, na.rm = T), 3), 
  sample = DescTools::Mode(country)) %>%  
  mutate_if(haven::is.labelled, as_factor) %>% 
  arrange(as.character(sample)) -> table1

dt.sample.type %>%
  group_by(ISO3) %>%
  mutate(Country_frequency = n()) %>%
  ungroup() -> dt.sample.type.Country_frequency

names(dt.sample.type.Country_frequency)[4] <- "sample"

table1 <- merge(table1, dt.sample.type.Country_frequency, by = "sample")

table1 <- table1[order(as.character(table1$country)),]

table1 <- table1[,c("sample", "country", "n.x", "Prop_50", "Prop_90", "M_age", "SD_age", "Country_frequency")]

names(table1)[3] <- "N"

table1[, 4:7] <- round(table1[,4:7], 2)

# means to report in the paper
round(1-mean(sss$prop_valid),3)*100      #[1] 6.1
## [1] 6.1
round(1-mean(table1$Prop_50),3)*100      #[1] 4.4
## [1] 4.4
round(1-mean(table1$Prop_90),3)*100      #[1] 7.1
## [1] 7.1
round(sum(sss$valid==0)/nrow(sss),3)*100 #[1] 24.8
## [1] 24.8
round(prop.table(table(sss$att_check_nobots, useNA = "always")), 3)*100 #1.0 90.9  8.0 
## 
##    0    1 <NA> 
##  1.0 90.9  8.0
DT::datatable(table1,
              class = 'cell-border stripe',
              filter = 'top',
              escape = F,
              caption = htmltools::tags$caption(style = 'caption-side: bottom; text-align: left;','Table 1: ', htmltools::em('Note: Country = country names in accordance with ISO3 codes, N = number of respondents in each country, Prop_50 and Prop_90 = proportion of respondents with at least 50% and 90% valid (non-NA) answers, M_age = mean age, SD_age = standard deviation of age, Country_frequency = number of data collections (samples) in the country.')),
    options = list(initComplete = JS(
    "function(settings, json) {",
    "$(this.api().table().header()).css({'background-color': '#2b2b2b', 'color': '#f3f4f5'});",
    "}"))
    )

Table 2 - Distribution of sex in 69 countries

table2 <- sss %>% 
  group_by(country) %>% 
  summarize(Prop_female = round(mean(female, na.rm = T), 3), 
            Prop_male   = round(mean(male,   na.rm = T), 2), 
            Prop_other  = round(mean(other,  na.rm = T), 2), 
            Prop_NA     = round(mean(gendna, na.rm = T), 2)) %>% 
  mutate_if(haven::is.labelled, as_factor) %>% 
  arrange(as.character(country))

DT::datatable(table2,
              class = 'cell-border stripe',
              filter = 'top',
              escape = F,
              caption = htmltools::tags$caption(style = 'caption-side: bottom; text-align: left;','Table 2: ', htmltools::em('Note: Country = country names in accordance with ISO3 codes, % Female = proportion of female respondents in the country, % Male = proportion of male respondents, % Other = proportion of non-binary respondents and % NA = proportion of respondents with unreported sex.')),
    options = list(initComplete = JS(
    "function(settings, json) {",
    "$(this.api().table().header()).css({'background-color': '#2b2b2b', 'color': '#f3f4f5'});",
    "}"))
    ) 

Table 3 - Distribution of employment status in 69 countries

table3 <- sss %>% 
  group_by(country) %>% 
  summarize(Prop_ft_employed = mean(emp_ft, na.rm = T), 
            Prop_pt_employed = mean(emp_pt, na.rm = T), 
            Prop_unemployed  = mean(unemp,  na.rm = T), 
            Prop_student     = mean(stud,   na.rm = T), 
            Prop_retired     = mean(ret,    na.rm = T), 
            Prop_other       = mean(eother, na.rm = T), 
            Prop_NA          = mean(empna,  na.rm = T)) %>% 
  mutate_if(haven::is.labelled, as_factor) %>% 
  arrange(as.character(country))
table3[, 2:8] <- round(table3[, 2:8], 2)

DT::datatable(table3,
              class = 'cell-border stripe',
              filter = 'top',
              escape = F,
              caption = htmltools::tags$caption(style = 'caption-side: bottom; text-align: left;','Table 3: ', htmltools::em('Note: Country = country names in accordance with ISO3 codes, % Full = proportion of full-time workers, % Part = proportion of part-time workers, % Unemp. = proportion of unemployed respondents, % Student = proportion of students, % Retired = proportion of retirees, % Other = proportion of respondents who do not fit the mentioned categories and % NA = proportion of respondents with unreported employment status.')),
    options = list(initComplete = JS(
    "function(settings, json) {",
    "$(this.api().table().header()).css({'background-color': '#2b2b2b', 'color': '#f3f4f5'});",
    "}"))
    ) 

Table 4 - Marital status and number of children in 69 countries

table4 <- sss %>% 
  group_by(country) %>% 
  summarize(Prop_single       = mean(single, na.rm = T), 
            Prop_relationship = mean(relationship, na.rm = T), 
            Prop_married      = mean(married, na.rm = T), 
            Prop_married_NA   = mean(marna, na.rm = T), 
            Prop_child_0      = mean(child0, na.rm = T), 
            Prop_child_1      = mean(child1, na.rm = T), 
            Prop_child_2      = mean(child2, na.rm = T), 
            Prop_child_3      = mean(child3, na.rm = T), 
            Prop_child_4      = mean(child4, na.rm = T), 
            Prop_child_5      = mean(child5, na.rm = T), 
            Prop_child_6      = mean(child6, na.rm = T), 
            Prop_child_7      = mean(child7, na.rm = T), 
            Prop_child_8      = mean(child8, na.rm = T), 
            Prop_child_9      = mean(child9, na.rm = T), 
            Prop_child_10plus = mean(child10plus, na.rm = T), 
            Prop_NA           = mean(childna, na.rm = T)) %>% 
  mutate_if(haven::is.labelled, as_factor) %>% 
  arrange(as.character(country))

# collapse the categories for five or more children into a single column
t4.pap <- table4
t4.pap$over_4 <- t4.pap$Prop_child_5 + t4.pap$Prop_child_6 + t4.pap$Prop_child_7 + 
  t4.pap$Prop_child_8 + t4.pap$Prop_child_9 + t4.pap$Prop_child_10plus
t4.pap$Prop_child_5      <- NULL
t4.pap$Prop_child_6      <- NULL
t4.pap$Prop_child_7      <- NULL
t4.pap$Prop_child_8      <- NULL
t4.pap$Prop_child_9      <- NULL
t4.pap$Prop_child_10plus <- NULL
t4.pap <- t4.pap[, c("country", "Prop_single", "Prop_relationship", "Prop_married", "Prop_married_NA", 
                     "Prop_child_0", "Prop_child_1", "Prop_child_2", "Prop_child_3", "Prop_child_4", 
                     "over_4", "Prop_NA")]

table4 <- t4.pap

table4[, 2:12] <- round(table4[, 2:12], 2)


DT::datatable(table4,
              class = 'cell-border stripe',
              filter = 'top',
              escape = F,
              caption = htmltools::tags$caption(style = 'caption-side: bottom; text-align: left;','Table 4: ', htmltools::em('Note: Country = country names in accordance with ISO3 codes, columns 2-5 show the proportion of respondents in each marital status (NA(MS) = unreported marital status), columns 6-11 show the proportion of respondents by number of children (five or more collapsed into a single column) and NA(Child.) = proportion of respondents with an unreported number of children.')),
    options = list(initComplete = JS(
    "function(settings, json) {",
    "$(this.api().table().header()).css({'background-color': '#2b2b2b', 'color': '#f3f4f5'});",
    "}"))
    ) 

Table 5 - Sample types (sampling strategies)

data %>%
  group_by(sample_coding) %>%
  dplyr::summarise(Samples       = paste0(unique(country), collapse = ", "),
                   N_samples     = n_distinct(country),
                   N_respondents = n()) %>%
  dplyr::mutate(Prop_countries   = round(N_samples / sum(N_samples), 2),
                Prop_respondents = round(N_respondents / sum(N_respondents), 2)) -> table5

# label sample coding types
table5$sample_coding <- as.character(sjlabelled::as_label(table5$sample_coding))

table5[4,] <- list("Total", "", 
                   sum(table5$N_samples),
                   sum(table5$N_respondents),
                   sum(table5$Prop_countries),
                   sum(table5$Prop_respondents))

DT::datatable(table5,
              class = 'cell-border stripe',
              filter = 'top',
              escape = F,
              caption = htmltools::tags$caption(style = 'caption-side: bottom; text-align: left;','Table 5: ', htmltools::em('Note: Sample types (sampling strategies), the samples collected under each type, and the corresponding numbers and proportions of samples and respondents.')),
    options = list(initComplete = JS(
    "function(settings, json) {",
    "$(this.api().table().header()).css({'background-color': '#2b2b2b', 'color': '#f3f4f5'});",
    "}"))
    ) 

Table 6 - Internal consistency of six key constructs by country

## Creating mean scores
#--------------------------
#
# 1. conspiracy theories
ctheory = c("ctheory1","ctheory2","ctheory3","ctheory4")
data$ctheory_index = rowMeans(data[,names(data) %in% ctheory], na.rm = F)

# 2. morality as cooperation
#data$mcoop6_rev  = RevCode(data$mcoop6)
#data$mcoop7_rev  = RevCode(data$mcoop7)
#mcoop = c("mcoop1","mcoop2","mcoop3","mcoop4","mcoop5","mcoop6_rev","mcoop7_rev")
mcoop = c("mcoop1","mcoop2","mcoop3","mcoop4","mcoop5","mcoop6","mcoop7")
data$mcoop_index = rowMeans(data[,names(data) %in% mcoop], na.rm = F)

# 3. spatial distancing 
data$contact2_rev  = RevCode(data$contact2)
contact = c("contact1","contact2_rev","contact3","contact4","contact5")
data$contact_index = rowMeans(data[,names(data) %in% contact], na.rm = F)

# 4. collective narcissism
cnarc = c("cnarc1","cnarc2","cnarc3")
data$cnarc_index = rowMeans(data[,names(data) %in% cnarc], na.rm = T)

# 5. national identity | only two items
nidentity = c("nidentity1","nidentity2")
data$nidentity_index = rowMeans(data[,names(data) %in% nidentity], na.rm = F)

# 6. policy support
psupport = c("psupport1","psupport2","psupport3","psupport4","psupport5")
data$psupport_index = rowMeans(data[,names(data) %in% psupport], na.rm = F)

# 7. hygiene
hygiene = c("hygiene1","hygiene2","hygiene3","hygiene4","hygiene5")
data$hygiene_index = rowMeans(data[,names(data) %in% hygiene], na.rm = F)

# 8. moral identity
data$moralid4_rev = RevCode(data$moralid4)
data$moralid7_rev = RevCode(data$moralid7)
moralid = c("moralid1","moralid10","moralid2","moralid3","moralid4_rev","moralid5","moralid6","moralid7_rev","moralid8","moralid9")
data$moralid_index = rowMeans(data[,names(data) %in% moralid], na.rm = F)


# 9. narcissism
narc = c("narc1","narc2","narc3","narc4","narc5","narc6")
data$narc_index = rowMeans(data[,names(data) %in% narc], na.rm = F)

# 10. open mindedness
data$omind1_rev = RevCode(data$omind1)
data$omind5_rev = RevCode(data$omind5)
data$omind6_rev = RevCode(data$omind6)
omind = c("omind1_rev","omind2","omind3","omind4","omind5_rev","omind6_rev")
data$omind_index = rowMeans(data[,names(data) %in% omind], na.rm = F)

# 11. optimism
optim = c("optim1","optim2")
data$optim_index = rowMeans(data[,names(data) %in% optim], na.rm = F)

# 12. risk perception
riskperc = c("riskperc1","riskperc2")
data$riskperc_index = rowMeans(data[,names(data) %in% riskperc], na.rm = F)

# 13. social belonging
sbelong = c("sbelong1","sbelong2","sbelong3","sbelong4")
data$sbelong_index = rowMeans(data[,names(data) %in% sbelong], na.rm = F)

# 14. self-control
data$slfcont3_rev = RevCode(data$slfcont3)
data$slfcont4_rev = RevCode(data$slfcont4)
slfcont = c("slfcont1","slfcont2","slfcont3_rev","slfcont4_rev")
data$slfcont_index = rowMeans(data[,names(data) %in% slfcont], na.rm = F)
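
# The mean scores above repeat the same rowMeans() pattern; they could also be
# built in one pass from a named list of item sets (a sketch only, not run here;
# note that cnarc_index above uses na.rm = TRUE, unlike the other indices):
# item_sets <- list(ctheory = ctheory, mcoop = mcoop, contact = contact,
#                   cnarc = cnarc, nidentity = nidentity, psupport = psupport,
#                   hygiene = hygiene, moralid = moralid, narc = narc,
#                   omind = omind, optim = optim, riskperc = riskperc,
#                   sbelong = sbelong, slfcont = slfcont)
# for (nm in names(item_sets)) {
#   data[[paste0(nm, "_index")]] <- rowMeans(data[, names(data) %in% item_sets[[nm]]],
#                                            na.rm = FALSE)
# }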
#
#
# To calculate reliability indices, two conditions need to be satisfied 
#
# 1. Enough records per country
#-----------------------------------
# This paper (https://www.sciencedirect.com/science/article/abs/pii/S0092656619300297) suggests n > 490 as a minimum for correlation analysis, depending on the population correlation and internal consistency. The average omega consistency for six of our constructs ranges between 0.74 and 0.90, and the correlations between all 14 constructs range between 0.003 and 0.48, averaging 0.17. We therefore use n = 490 as the minimum country sample size here.

min_n = 490

ISO3_n = as.data.frame(table(data$ISO3))
small_n_ISO = as.vector(ISO3_n$Var1[ISO3_n$Freq<min_n])

inclusion2 = (data$ISO3 %!in% small_n_ISO)
# sum(inclusion2, na.rm = T) # 49935

# 2. Passed attention check
#----------------------------------
## data$att_check_nobots==1
inclusion3 = (data$att_check_nobots==1 & !is.na(data$att_check_nobots))
#sum(inclusion3, na.rm = T) # 46745
#
#
# Apply criteria and subset data
#--------------------------------
# which of the following inclusion criteria should be used; T if used, F if not used
# (inclusion1, the criterion tied to prop75, is not constructed in this section
#  and is only evaluated if prop75 is set to TRUE)
prop75     = F
large_n    = T
attn_check = T

prop75vec = if(prop75)    {inclusion1}else{rep(T, nrow(data))}
largenvec = if(large_n)   {inclusion2}else{rep(T, nrow(data))}
attnvec   = if(attn_check){inclusion3}else{rep(T, nrow(data))}

inclusion_corr = prop75vec & largenvec & attnvec

# sum(inclusion_corr) # how many records are retained: 45792
data_cleaned = data[inclusion_corr,]
#
#
#
indices6 = c("ctheory_index", "mcoop_index", "contact_index", "cnarc_index", "nidentity_index", "psupport_index")

# create empty df
consistency = data.frame(ISO3 = rep(unique(data_cleaned$ISO3), each=4),
                         measure = rep(c("alpha","omega","guttman","variance")),
                         measure_label = rep(c("Cronbach's alpha","Omega","Guttman's split-half coefficient","Proportion of variance explained")),
                         ctheory = NA,
                         mcoop = NA,
                         contact = NA,
                         cnarc = NA,
                         nidentity = NA,
                         psupport = NA)

library(psych)

for(i in 1:length(unique(data_cleaned$ISO3))){
  # select country data
  country = unique(data_cleaned$ISO3)[i]
  data_selection = data_cleaned[data_cleaned$ISO3 == country,]

  # ctheory ####
    # alpha
    alpha_pos = which(consistency$ISO3 == country & consistency$measure == "alpha")
    alpha = round(psych::alpha(data_selection[,names(data_selection) %in% ctheory])$total$std.alpha, 2)
    
    # omega
    omega_pos = which(consistency$ISO3 == country & consistency$measure == "omega")
    omega = round(psych::omega(data_selection[,names(data_selection) %in% ctheory],nfactors = 1)$omega_h, 2)
    
    # guttman split half
    split_pos = which(consistency$ISO3 == country & consistency$measure == "guttman")
    split = round(psych::splitHalf(data_selection[,names(data_selection) %in% ctheory])$maxrb, 2)
  
    # fa variance
    fa_var_pos = which(consistency$ISO3 == country & consistency$measure == "variance")
    fa_var = round(psych::fa(data_selection[,names(data_selection) %in% ctheory], fa=1)$Vaccounted[2], 2)
  
    # fill in
    consistency$ctheory[alpha_pos] = alpha
    consistency$ctheory[omega_pos] = omega
    consistency$ctheory[split_pos] = split
    consistency$ctheory[fa_var_pos] = fa_var

  # mcoop ####
    # alpha
    alpha_pos = which(consistency$ISO3 == country & consistency$measure == "alpha")
    alpha = round(psych::alpha(data_selection[,names(data_selection) %in% mcoop])$total$std.alpha, 2)
    
    # omega
    omega_pos = which(consistency$ISO3 == country & consistency$measure == "omega")
    omega = round(psych::omega(data_selection[,names(data_selection) %in% mcoop],nfactors = 1)$omega_h, 2)
    
    # guttman split half
    split_pos = which(consistency$ISO3 == country & consistency$measure == "guttman")
    split = round(psych::splitHalf(data_selection[,names(data_selection) %in% mcoop])$maxrb, 2)
  
    # fa variance
    fa_var_pos = which(consistency$ISO3 == country & consistency$measure == "variance")
    fa_var = round(psych::fa(data_selection[,names(data_selection) %in% mcoop], fa=1)$Vaccounted[2], 2)
  
    # fill in
    consistency$mcoop[alpha_pos] = alpha
    consistency$mcoop[omega_pos] = omega
    consistency$mcoop[split_pos] = split
    consistency$mcoop[fa_var_pos] = fa_var
    
  # contact ####
    # alpha
    alpha_pos = which(consistency$ISO3 == country & consistency$measure == "alpha")
    alpha = round(psych::alpha(data_selection[,names(data_selection) %in% contact])$total$std.alpha, 2)
    
    # omega
    omega_pos = which(consistency$ISO3 == country & consistency$measure == "omega")
    omega = round(psych::omega(data_selection[,names(data_selection) %in% contact],nfactors = 1)$omega_h, 2)
    
    # guttman split half
    split_pos = which(consistency$ISO3 == country & consistency$measure == "guttman")
    split = round(psych::splitHalf(data_selection[,names(data_selection) %in% contact])$maxrb, 2)
  
    # fa variance
    fa_var_pos = which(consistency$ISO3 == country & consistency$measure == "variance")
    fa_var = round(psych::fa(data_selection[,names(data_selection) %in% contact], fa=1)$Vaccounted[2], 2)
  
    # fill in
    consistency$contact[alpha_pos] = alpha
    consistency$contact[omega_pos] = omega
    consistency$contact[split_pos] = split
    consistency$contact[fa_var_pos] = fa_var
    
  # cnarc ####
    # alpha
    alpha_pos = which(consistency$ISO3 == country & consistency$measure == "alpha")
    alpha = round(psych::alpha(data_selection[,names(data_selection) %in% cnarc])$total$std.alpha, 2)
    
    # omega
    omega_pos = which(consistency$ISO3 == country & consistency$measure == "omega")
    omega = round(psych::omega(data_selection[,names(data_selection) %in% cnarc],nfactors = 1)$omega_h, 2)
    
    # guttman split half
    split_pos = which(consistency$ISO3 == country & consistency$measure == "guttman")
    split = round(psych::splitHalf(data_selection[,names(data_selection) %in% cnarc])$maxrb, 2)
  
    # fa variance
    fa_var_pos = which(consistency$ISO3 == country & consistency$measure == "variance")
    fa_var = round(psych::fa(data_selection[,names(data_selection) %in% cnarc], fa=1)$Vaccounted[2], 2)
  
    # fill in
    consistency$cnarc[alpha_pos] = alpha
    consistency$cnarc[omega_pos] = omega
    consistency$cnarc[split_pos] = split
    consistency$cnarc[fa_var_pos] = fa_var
    
  
  # nidentity ####
    # alpha
    alpha_pos = which(consistency$ISO3 == country & consistency$measure == "alpha")
    alpha = round(psych::alpha(data_selection[,names(data_selection) %in% nidentity])$total$std.alpha, 2)
    
    # omega
    omega_pos = which(consistency$ISO3 == country & consistency$measure == "omega")
    #omega = round(psych::omega(data_selection[,names(data_selection) %in% nidentity],nfactors = 1)$omega_h, 2)
    
    # guttman split half
    split_pos = which(consistency$ISO3 == country & consistency$measure == "guttman")
    split = round(psych::splitHalf(data_selection[,names(data_selection) %in% nidentity])$maxrb, 2)
  
    # fa variance
    fa_var_pos = which(consistency$ISO3 == country & consistency$measure == "variance")
    fa_var = round(psych::fa(data_selection[,names(data_selection) %in% nidentity], fa=1)$Vaccounted[2], 2)
  
    # fill in
    consistency$nidentity[alpha_pos] = alpha
    #consistency$nidentity[omega_pos] = omega
    consistency$nidentity[split_pos] = split
    consistency$nidentity[fa_var_pos] = fa_var
    
  # psupport ####
    # alpha
    alpha_pos = which(consistency$ISO3 == country & consistency$measure == "alpha")
    alpha = round(psych::alpha(data_selection[,names(data_selection) %in% psupport])$total$std.alpha, 2)
    
    # omega
    omega_pos = which(consistency$ISO3 == country & consistency$measure == "omega")
    omega = round(psych::omega(data_selection[,names(data_selection) %in% psupport],nfactors = 1)$omega_h, 2)
    
    # guttman split half
    split_pos = which(consistency$ISO3 == country & consistency$measure == "guttman")
      split = round(psych::splitHalf(data_selection[,names(data_selection) %in% psupport])$maxrb, 2)
  
    # fa variance
    fa_var_pos = which(consistency$ISO3 == country & consistency$measure == "variance")
    fa_var = round(psych::fa(data_selection[,names(data_selection) %in% psupport], fa=1)$Vaccounted[2], 2)
  
    # fill in
    consistency$psupport[alpha_pos] = alpha
    consistency$psupport[omega_pos] = omega
    consistency$psupport[split_pos] = split
    consistency$psupport[fa_var_pos] = fa_var
}
## Some items ( mcoop5 mcoop6 mcoop7 ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( contact2_rev ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( mcoop7 ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( mcoop5 mcoop6 mcoop7 ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( contact2_rev ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( contact2_rev ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( mcoop6 mcoop7 ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( mcoop7 ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( mcoop7 ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( mcoop6 mcoop7 ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( mcoop6 mcoop7 ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( mcoop6 mcoop7 ) were negatively correlated with the total scale and probably should be reversed. To do this, run the function again with the 'check.keys=TRUE' option
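
# The six per-construct blocks inside the loop above are identical apart from
# the item set; a helper along these lines could replace them (a sketch only;
# `reliability_row` is a hypothetical name, and for the two-item national
# identity scale the omega step would still need to be skipped, as above):
# reliability_row <- function(df, items) {
#   x <- df[, names(df) %in% items]
#   c(alpha    = round(psych::alpha(x)$total$std.alpha, 2),
#     omega    = round(psych::omega(x, nfactors = 1)$omega_h, 2),
#     guttman  = round(psych::splitHalf(x)$maxrb, 2),
#     variance = round(psych::fa(x, nfactors = 1)$Vaccounted[2], 2))
# }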
#
#
#
table6 <- consistency

table6$Country <- as.character(sjlabelled::as_label(table6$ISO3))

table6 <- table6[,c("Country","measure_label", "ctheory", "mcoop", "contact", "cnarc", "nidentity", "psupport")]

names(table6) <- c("Country","measure_label", "Conspiracy beliefs","Morality as cooperation","Spatial distancing","Collective narcissism","National identity","Policy support")

DT::datatable(table6,
              class = 'cell-border stripe',
              filter = 'top',
              escape = F,
              caption = htmltools::tags$caption(style = 'caption-side: bottom; text-align: left;','Table 6: ', htmltools::em('Note: Per-country internal consistency of six key constructs: standardised alpha, omega, the Guttman split-half coefficient and the proportion of variance explained by a single factor; omega is not reported for the two-item national identity measure.')),
    options = list(initComplete = JS(
    "function(settings, json) {",
    "$(this.api().table().header()).css({'background-color': '#2b2b2b', 'color': '#f3f4f5'});",
    "}"))
    ) 
save.image("temp.RData")

Downloadable tables

xlsx::write.xlsx(table1, "table1.xlsx")
xlsx::write.xlsx(table2, "table2.xlsx")
xlsx::write.xlsx(table3, "table3.xlsx")
xlsx::write.xlsx(table4, "table4.xlsx")
xlsx::write.xlsx(table5, "table5.xlsx")
xlsx::write.xlsx(table6, "table6.xlsx")
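
The xlsx package relies on a working Java installation via rJava; if that is a problem, openxlsx offers a drop-in replacement for these calls (an alternative sketch, not used for the files embedded below):

# openxlsx::write.xlsx(table1, "table1.xlsx")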

xfun::embed_file("table1.xlsx")

Download table1.xlsx

xfun::embed_file("table2.xlsx")

Download table2.xlsx

xfun::embed_file("table3.xlsx")

Download table3.xlsx

xfun::embed_file("table4.xlsx")

Download table4.xlsx

xfun::embed_file("table5.xlsx")

Download table5.xlsx

xfun::embed_file("table6.xlsx")

Download table6.xlsx

Saving tables in LaTeX format

table1.table <- xtable::xtable(table1)
print(table1.table, tabular.environment="longtable", floating=FALSE, include.rownames=FALSE, add.to.row = addtorow, hline.after=c(-1), file="table_latex_1.txt")

table2.table <- xtable::xtable(table2)
print(table2.table, tabular.environment="longtable", floating=FALSE, include.rownames=FALSE, add.to.row = addtorow, hline.after=c(-1), file="table_latex_2.txt")

table3.table <- xtable::xtable(table3)
print(table3.table, tabular.environment="longtable", floating=FALSE, include.rownames=FALSE, add.to.row = addtorow, hline.after=c(-1), file="table_latex_3.txt")

table4.table <- xtable::xtable(table4)
print(table4.table, tabular.environment="longtable", floating=FALSE, include.rownames=FALSE, add.to.row = addtorow, hline.after=c(-1), file="table_latex_4.txt")

table5.table <- xtable::xtable(table5)
print(table5.table, tabular.environment="longtable", floating=FALSE, include.rownames=FALSE, add.to.row = addtorow, hline.after=c(-1), file="table_latex_5.txt")

table6.table <- xtable::xtable(table6)
print(table6.table, tabular.environment="longtable", floating=FALSE, include.rownames=FALSE, add.to.row = addtorow, hline.after=c(-1), file="table_latex_6.txt")
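
The six export calls above differ only in the table object and the output file; the same files can be produced in a single loop (a compact equivalent under the same settings):

for (i in 1:6) {
  print(xtable::xtable(get(paste0("table", i))),
        tabular.environment = "longtable", floating = FALSE,
        include.rownames = FALSE, add.to.row = addtorow,
        hline.after = c(-1), file = paste0("table_latex_", i, ".txt"))
}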












Visualizations

Figure 1 (Sample Size)

# Calculate the share of missing (NA) answers per country
clean_data_na <- data %>%
  group_by(country) %>%
  summarise_all(list(na = ~mean(is.na(.))))

# overall missingness per country: mean of the per-variable NA shares in columns 10-99
clean_data_na <- clean_data_na %>% mutate(na_over = rowMeans(.[, 10:99]))

clean_data_na <- clean_data_na %>% select("country", "na_over")

clean_data <- merge(data, clean_data_na, by = "country")

# express as a percentage
clean_data$na_over <- 100*clean_data$na_over

# defining shape map
world_map<-map_data("world")

# world_map ordering and removing Antarctica
world_map<-world_map[order(world_map$group,world_map$order),] %>% filter(region != "Antarctica") 
clean_data$country2 <- if_else(clean_data$country %in% "US", "USA",
                        if_else(clean_data$country == "RU", "Russia",
                          if_else(clean_data$country == "KR", "South Korea",
                            if_else(clean_data$country == "GB", "UK",
                                    as.character(sjlabelled::as_label(clean_data$ISO3))))))

# sample size and proportion of the total sample per country
sample_size_n <- clean_data %>% group_by(country2) %>% tally() %>% mutate(prop = (n/nrow(clean_data))*100)

# standardizing country names to match the region names used in world_map
sample_size_n$country2 <- if_else(sample_size_n$country2 == "United States of America", "USA",
                            if_else(sample_size_n$country2 == "Russian Federation", "Russia",
                              if_else(sample_size_n$country2 == "Korea", "South Korea",
                                if_else(sample_size_n$country2 == "United Kingdom", "UK",
                                        as.character(sample_size_n$country2)))))
library(wesanderson)
library(rworldmap)
library(ggthemes)
library(extrafont)
library(ggplot2)

palette_wa <- wes_palette("Zissou1", 5, type = "continuous")

ggplot(sample_size_n) +
        geom_polygon(data = world_map, aes(x = long, y = lat, group = group), 
                     color="#ffffff00",size=0.05, fill = "gray90") +
        geom_map(aes(map_id = country2, fill = n), map = world_map, 
                 color="#ffffff", size=0.05) +
        expand_limits(x = world_map$long, y = world_map$lat) +
        coord_fixed() + 
        scale_x_discrete(labels = NULL, breaks = NULL) + 
        scale_y_discrete(labels = NULL, breaks = NULL) +
        scale_fill_gradientn(colours = palette_wa,
                             limits = c(0, 2500),
                             breaks = c(0, 500, 1000, 1500, 2000, 2500),
                             labels = c(0, 500, 1000, 1500, 2000, 2500),
                             name = "Sample\nsize",
                             guide = guide_colorbar(
                                 direction = "vertical",
                                 barheight = unit(20, units = "mm"),
                                 barwidth = unit(3, units = "mm"),
                                 draw.ulim = F,
                                 title.position = 'top',
                                 # some shifting around
                                 title.hjust = 0,
                                 label.hjust = 0)) +
        labs(x=NULL, y=NULL) + 
        theme_map() +
        theme(text = element_text(family = "Georgia"),
              legend.position = c(0.05, 0.01), legend.text.align = 0,
              legend.background = element_rect(fill = 'white'),
              legend.text = element_text(size = 6, hjust = 0, 
                                         color = "#4e4d47"),
              plot.tag.position = c(0.1, 0),
              plot.tag = element_text(size = 10, hjust = 0.1, vjust = 2),
              plot.title = element_text(color = "#4e4d47", 
                                        size=12, face = "bold",
                                        vjust = 3),
              plot.subtitle = element_text(color = "#4e4d47", vjust = 2,
                                           margin = margin(b = -0.00001, 
                                                           t = 0.05, 
                                                           l = 2, 
                                                           unit = "cm"), debug = F),
              legend.title = element_text(size = 7),
              plot.margin = unit(c(.1,.1,.1,.1), "cm"),
              panel.spacing = unit(c(-.1,0.2,.2,0.2), "cm"),
              panel.border = element_blank(),
              plot.caption = element_text(size=8,
                                          family="Georgia",color="gray35")) +
    labs(title="Sample sizes across 69 countries",
         subtitle="Heat map showing the number of respondents from each country",
         caption = "International Collaboration on the Social & Moral Psychology of COVID-19\nhttps://icsmp-covid19.netlify.app"
         #tag = expression(paste(bold("Fig 1. "), "A world map visualizing the number of participants in each surveyed country."))
         ) +
    theme(
    plot.title = element_text(size = 12, family="Georgia", face = "plain", color="black"),
    plot.caption = element_text(color = "#383838", face = "italic", size = 7))
Figure 1.a. A world map visualizing the number of participants in each surveyed country.



library(colorBlindness)
library(wesanderson)
library(rworldmap)
library(ggthemes)
library(extrafont)
library(ggplot2)


palette_dar <- wes_palette("Darjeeling1", 5, type = "continuous")

ggplot(sample_size_n) +
        geom_polygon(data = world_map, aes(x = long, y = lat, group = group), 
                     color="#ffffff00",size=0.05, fill = "gray90") +
        geom_map(aes(map_id = country2, fill = prop), map = world_map, 
                 color="#ffffff", size=0.05) +
        expand_limits(x = world_map$long, y = world_map$lat) +
        coord_fixed() + 
        scale_x_discrete(labels = NULL, breaks = NULL) + 
        scale_y_discrete(labels = NULL, breaks = NULL) +
        scale_fill_gradientn(colours = palette_dar,
                             limits = c(0,5),
                             breaks = c(0,1,2,3,4,5),
                             labels = c('0%','1%','2%','3%','4%','5%'),
                             name = "Sample\nproportion",
                             guide = guide_colorbar(
                                 direction = "vertical",
                                 barheight = unit(20, units = "mm"),
                                 barwidth = unit(3, units = "mm"),
                                 draw.ulim = F,
                                 title.position = 'top',
                                 # some shifting around
                                 title.hjust = 0,
                                 label.hjust = 0)) +
        # scale_fill_gradientn(colours = colorBlindness::LightBlue2DarkBlue10Steps[5:10],
        #                      limits = c(0,5),
        #                      breaks = c(0,1,2,3,4,5),
        #                      labels = c('0%','1%','2%','3%','4%','5%'),
        #                      name = "Sample\nproportion",
        #                      guide = guide_colorbar(
        #                          direction = "vertical",
        #                          barheight = unit(20, units = "mm"),
        #                          barwidth = unit(3, units = "mm"),
        #                          draw.ulim = F,
        #                          title.position = 'top',
        #                          # some shifting around
        #                          title.hjust = 0,
        #                          label.hjust = 0)) +
        labs(x=NULL, y=NULL) + 
        theme_map() +
        theme(text = element_text(family = "Georgia"),
              legend.position = c(0.05, 0.01), legend.text.align = 0,
              legend.background = element_rect(fill = 'white'),
              legend.text = element_text(size = 6, hjust = 0, 
                                         color = "#4e4d47"),
              plot.tag.position = c(0.1, 0),
              plot.tag = element_text(size = 10, hjust = 0.1, vjust = 2),
              plot.title = element_text(color = "#4e4d47", 
                                        size=12, face = "bold",
                                        vjust = 3),
              plot.subtitle = element_text(color = "#4e4d47", vjust = 2,
                                           margin = margin(b = -0.00001, 
                                                           t = 0.05, 
                                                           l = 2, 
                                                           unit = "cm"), debug = F),
              legend.title = element_text(size = 7),
              plot.margin = unit(c(.1,.1,.1,.1), "cm"),
              panel.spacing = unit(c(-.1,0.2,.2,0.2), "cm"),
              panel.border = element_blank(),
              plot.caption = element_text(size=8,
                                          family="Georgia",color="gray35")) +
    labs(title="Sample proportion across 69 countries",
         subtitle="Heat map showing the proportion of respondents (i.e., country sample/total sample) from each country",
         caption = "International Collaboration on the Social & Moral Psychology of COVID-19\nhttps://icsmp-covid19.netlify.app"
         #tag = expression(paste(bold("Fig 1. "), "A world map visualizing the number of participants in each surveyed country."))
         ) +
    theme(
    plot.title = element_text(size = 12, family="Georgia", face = "plain", color="black"),
    plot.caption = element_text(color = "#383838", face = "italic", size = 7))
Figure 1.b. A world map visualizing the proportion of participants in each surveyed country.





Figure 2 (Gantt Chart)

# `final_data` (containing country_name, continent and the collection dates) is
# prepared in an earlier step that is not shown in this section
final_data <- final_data %>%
  arrange(as.character(continent), as.character(country_name)) %>%
  dplyr::mutate(sorted_country_name = factor(country_name, unique(country_name)))

gantt<-ggplot() +
    geom_line(data=final_data, 
              mapping=aes(x=sorted_country_name, 
                          y=date,
                          color=continent),
              size=2, alpha = 0.8) +
    coord_flip() +
    scale_y_date(date_breaks = "4 day") +
    labs(x="", y="", title="Data collection periods across 69 countries", 
         caption = "International Collaboration on the Social & Moral Psychology of COVID-19\nhttps://icsmp-covid19.netlify.app") +
    theme_minimal() +
    theme(
        strip.background = element_rect(fill="#e9e9e9"),
        panel.border = element_blank(),
        panel.grid.minor.x = element_blank(),
        panel.grid.major.y = element_blank(),
        axis.text.x = element_text(angle = 45, hjust=1,vjust = 1),
        #axis.text.y = element_text(face="italic"),
        text=element_text(size=7, 
                          family="Georgia", 
                          color = "#383838"), 
        plot.title = element_text(size = 10, 
                                  family="Georgia", 
                                  hjust = 0, 
                                  color = "#383838"),
        plot.subtitle = element_text(size = 10, 
                                     family="Georgia"),
        legend.title=element_blank(),
        #legend.title = element_text(size = 8),
        #legend.position="none",
        legend.text = element_text(size = 8, 
                                   hjust = 0, 
                                   family="Georgia",
                                   color = "#383838",),
        legend.position='bottom', 
        plot.caption = element_text(size=7,
                                    family="Georgia", 
                                    color = "#383838", 
                                    face = "italic",
                                    vjust = 2),
        plot.margin = unit(c(.1,.1,.1,.1), "cm"),
        panel.spacing = unit(c(-.1,0.2,.2,0.2), "cm")) +
    scale_color_manual(values= c("#02401B", "#E2D200", "#46ACC8" ,"#E58601", "#B40F20")) + 
    scale_x_discrete(limits=rev(levels(final_data$sorted_country_name)))
gantt
Figure 2. Gantt Chart illustrating the data collection periods for each surveyed country.





Figure 3 (Constructs)

knitr::include_graphics("1.png")
Figure 3.a. International Collaboration on the Social and Moral Psychology of COVID-19: investigated constructs, items, and variables.


knitr::include_graphics("2.png")
Figure 3.b. International Collaboration on the Social and Moral Psychology of COVID-19: investigated constructs, items, and variables.





Figure 4 (Missing Data & Attention Check)

palette_wa <- wes_palette("Zissou1", 
                          5, 
                          type = "continuous")

Zissou1_NAs2 <- 
  ggplot(clean_data) +
    geom_polygon(data = world_map, aes(x = long, y = lat, group = group), 
                 color="#ffffff00",size=0.05, fill = "gray90") +
    geom_map(aes(map_id = country2, fill = na_over), map = world_map, 
             color="#ffffff", size=0.05) +
    expand_limits(x = world_map$long, y = world_map$lat) +
    coord_fixed()+ 
    scale_x_discrete(labels = NULL, breaks = NULL) + 
    scale_y_discrete(labels = NULL, breaks = NULL) +
    scale_fill_gradientn(colours = palette_wa,
                         limits = c(0, 50),
                         breaks = c(0, 10, 20, 30, 40, 50),
                         labels = c("0%", "10%", "20%", "30%", "40%", "50%"), 
                         name = "Missing\ndata",
                         guide = guide_colorbar(
                             direction = "vertical",
                             barheight = unit(20, units = "mm"),
                             barwidth = unit(3, units = "mm"),
                             draw.ulim = F,
                             title.position = 'top',
                             # some shifting around
                             title.hjust = 0,
                             label.hjust = 0)) +
    labs(x=NULL, y=NULL, 
         title="Overall percentages of missing data by country", caption = "\n\n") +
    theme_map() +
    theme(text = element_text(family = "Georgia"),
          legend.position = c(0.05, 0.01),legend.text.align = 0,
          legend.background = element_rect(fill = 'white'),
          legend.text = element_text(size = 6, hjust = 0, 
                                     color = "#4e4d47"),
          plot.tag.position = c(0.1, 0),
          plot.tag = element_text(size = 10, hjust = 0.1, vjust = 2),
          plot.title = element_text(size = 12, family="Georgia", 
                                    face = "plain", color="black"),
          plot.subtitle = element_text(color = "#4e4d47", vjust = 2,
                                       margin = margin(b = -0.00001, t = 0.05, 
                                                       l = 2, unit = "cm"), debug = F),
          legend.title = element_text(size = 7),
          plot.margin = unit(c(.1,.1,.1,.1), "cm"),
          panel.spacing = unit(c(-.1,0.2,.2,0.2), "cm"),
          panel.border = element_blank(),
          plot.caption = element_text(size=8, family="Georgia",color="gray35"))



# setting up plot 2: per-country attention-check failure rates
att_check_fail<-as.data.frame(clean_data %>% select(country2, att_check_nobots))
att_check_fail<-filter(att_check_fail, !is.na(att_check_nobots))
# att_check_nobots is coded 1 = passed, 0 = failed, so the country mean is the pass rate
att_check_fail <- as.data.frame(aggregate(att_check_fail[, 2], list(att_check_fail$country2), mean))
names(att_check_fail) <- c("country", "fail")
att_check_fail$fail<-1-att_check_fail$fail # convert pass rate into failure rate
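# Equivalent dplyr pipeline (sketch; stored in a separate object so the steps above are unchanged):
att_check_fail_sketch <- clean_data %>%
    filter(!is.na(att_check_nobots)) %>%
    group_by(country2) %>%
    summarise(fail = 1 - mean(att_check_nobots)) %>%
    rename(country = country2)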



# plotting 
Zissou1_check2 <- 
    ggplot(att_check_fail) +
    geom_polygon(data = world_map, aes(x = long, y = lat, group = group), 
                 color="#ffffff00",size=0.05, fill = "gray90") +
    geom_map(aes(map_id = country, fill = fail*100), map = world_map, 
             color="#ffffff", size=0.05) +
    expand_limits(x = world_map$long, y = world_map$lat) +
    coord_fixed()+ 
    scale_x_discrete(labels = NULL, breaks = NULL) + 
    scale_y_discrete(labels = NULL, breaks = NULL) +
    scale_fill_gradientn(colours = palette_wa,
                         limits = c(0, 20),
                         breaks = c(0, 5, 10, 15, 20),
                         labels = c("0%", "5%", "10%", "15%", "20%"),
                         name = "Attention\nCheck Fails\n",
                         guide = guide_colorbar(
                             direction = "vertical",
                             barheight = unit(20, units = "mm"),
                             barwidth = unit(3, units = "mm"),
                             draw.ulim = F,
                             title.position = 'top',
                             # some shifting around
                             title.hjust = 0,
                             label.hjust = 0)) +
    labs(x=NULL, y=NULL, title="Rate of attention-check fails across 69 countries",
         caption = "\nNote: The percentage of missing data considered all the questions in the survey (i.e., all sociodemographic and psychological scales)") + 
    theme_map() +
        theme(text = element_text(family = "Georgia"),
              legend.position = c(0.05, 0.01),legend.text.align = 0,
              legend.background = element_rect(fill = 'white'),
              legend.text = element_text(size = 6, hjust = 0, 
                                         color = "#4e4d47"),
              plot.tag.position = c(0.1, 0),
              plot.tag = element_text(size = 10, hjust = 0.1, vjust = 2),
              plot.title = element_text(size = 12, family="Georgia", 
                                    face = "plain", color="black"),
              plot.subtitle = element_text(color = "#4e4d47", vjust = 2,
                                           margin = margin(b = -0.00001, 
                                                           t = 0.05, 
                                                           l = 2, 
                                                           unit = "cm"), debug = F),
              legend.title = element_text(size = 7),
              plot.margin = unit(c(.1,.1,.1,.1), "cm"),
              panel.spacing = unit(c(-.1,0.2,.2,0.2), "cm"),
              panel.border = element_blank(),
              plot.caption = element_text(size=7,
                                        family="Georgia",color="#383838", hjust = 0))

# setting up multiple plots
library(patchwork)
com_plots <- (Zissou1_NAs2/Zissou1_check2) +
    plot_annotation(caption = "\nInternational Collaboration on the Social & Moral Psychology of COVID-19\nhttps://icsmp-covid19.netlify.app") & 
    theme(plot.caption = element_text(color = "#383838", face = "italic", size = 7, family="Georgia"))
#
com_plots
Figure 4. Data quality indicators for each surveyed country.

#
#ggpubr::ggarrange(Zissou1_NAs2, Zissou1_check2,ncol = 1, nrow = 2)
#
#cowplot::plot_grid(Zissou1_NAs2, Zissou1_check2, labels = c('A', 'B'), ncol = 1)





Figure 5 (Major Constructs)

##### arranging info for constructs: pcontact; hygiene; psupport
fig_5_contact<-as.data.frame(clean_data %>% select(ISO3, contact1, contact2,contact3,contact4,contact5))
fig_5_contact<-reshape2::melt(fig_5_contact, id="ISO3")
fig_5_contact<-filter(fig_5_contact, !is.na(fig_5_contact$value))
fig_5_contact$value<-if_else(fig_5_contact$value=="Strongly Agree",10,
                             if_else(fig_5_contact$value=="Strongly Disagree", 0,
                                     if_else(fig_5_contact$value=="Neither agree nor disagree",5,
                                             as.numeric(fig_5_contact$value))))
contact<-as.data.frame(aggregate(fig_5_contact[, 3], list(fig_5_contact$ISO3), mean))
#
#
fig_5_support<-as.data.frame(clean_data %>% select(ISO3, psupport1, psupport2,psupport3,psupport4,psupport5))
fig_5_support<-reshape2::melt(fig_5_support, id="ISO3")
fig_5_support<-filter(fig_5_support, !is.na(fig_5_support$value))
fig_5_support$value<-if_else(fig_5_support$value=="Strongly Agree",10,
                             if_else(fig_5_support$value=="Strongly Disagree", 0,
                                     if_else(fig_5_support$value=="Neither agree nor disagree",5,
                                             as.numeric(fig_5_support$value))))
support<-as.data.frame(aggregate(fig_5_support[, 3], list(fig_5_support$ISO3), mean))
fig_5_hygiene<-as.data.frame(clean_data %>% select(ISO3, hygiene1, hygiene2,hygiene3,hygiene4,hygiene5))
fig_5_hygiene<-reshape2::melt(fig_5_hygiene, id="ISO3")
fig_5_hygiene<-filter(fig_5_hygiene,!is.na(fig_5_hygiene$value))
fig_5_hygiene$value<-if_else(fig_5_hygiene$value=="Strongly Agree",10,
                             if_else(fig_5_hygiene$value=="Strongly Disagree", 0,
                                     if_else(fig_5_hygiene$value=="Neither agree nor disagree",5,
                                             as.numeric(fig_5_hygiene$value))))
hygiene<-as.data.frame(aggregate(fig_5_hygiene[, 3], list(fig_5_hygiene$ISO3), mean))
hygiene$country<-hygiene$Group.1
contact$country<-contact$Group.1
support$country<-support$Group.1
##### arranging data conspiracy 
fig_5_cons<-as.data.frame(clean_data %>% select(ISO3, ctheory1, ctheory2, ctheory3, ctheory4))
fig_5_cons<-reshape2::melt(fig_5_cons, id="ISO3")
fig_5_cons<-filter(fig_5_cons,!is.na(fig_5_cons$value))
fig_5_cons$value<-if_else(fig_5_cons$value=="Strongly Agree",10,
                          if_else(fig_5_cons$value=="Strongly Disagree", 0,
                                  if_else(fig_5_cons$value=="Neither agree nor disagree",5,
                                          as.numeric(fig_5_cons$value))))
cons<-as.data.frame(aggregate(fig_5_cons[, 3], list(fig_5_cons$ISO3), mean))
cons$country<-cons$Group.1

##### arranging data national identity #####
fig_5_national_ident<-as.data.frame(clean_data %>% select(ISO3, nidentity1, nidentity2))
fig_5_national_ident<-reshape2::melt(fig_5_national_ident, id="ISO3")
fig_5_national_ident<-filter(fig_5_national_ident,!is.na(fig_5_national_ident$value))
fig_5_national_ident$value<-if_else(fig_5_national_ident$value=="Strongly Agree",10,
                                    if_else(fig_5_national_ident$value=="Strongly Disagree", 0,
                                            if_else(fig_5_national_ident$value=="Neither agree nor disagree",5,
                                                    as.numeric(fig_5_national_ident$value))))
national_ident<-as.data.frame(aggregate(fig_5_national_ident[, 3], list(fig_5_national_ident$ISO3), mean))
national_ident$country<-national_ident$Group.1

##### arranging data national narcissism #####
fig_5_national_narc<-as.data.frame(clean_data %>% select(ISO3, cnarc1, cnarc2, cnarc3))
fig_5_national_narc<-reshape2::melt(fig_5_national_narc, id="ISO3")
fig_5_national_narc<-filter(fig_5_national_narc, !is.na(fig_5_national_narc$value))
fig_5_national_narc$value<-if_else(fig_5_national_narc$value=="Strongly Agree",10,
                                   if_else(fig_5_national_narc$value=="Strongly Disagree", 0,
                                           if_else(fig_5_national_narc$value=="Neither agree nor disagree",5,
                                                   as.numeric(fig_5_national_narc$value))))
national_narc<-as.data.frame(aggregate(fig_5_national_narc[, 3], list(fig_5_national_narc$ISO3), mean))
national_narc$country<-national_narc$Group.1


##### arranging data social belonging #####
fig_5_soc_belong<-as.data.frame(clean_data %>% select(ISO3, sbelong1, sbelong2, sbelong3,sbelong4))
fig_5_soc_belong<-reshape2::melt(fig_5_soc_belong, id="ISO3")
fig_5_soc_belong<-filter(fig_5_soc_belong,!is.na(fig_5_soc_belong$value))
fig_5_soc_belong$value<-if_else(fig_5_soc_belong$value=="Strongly Agree",10,
                                if_else(fig_5_soc_belong$value=="Strongly Disagree", 0,
                                        if_else(fig_5_soc_belong$value=="Neither agree nor disagree",5,
                                                as.numeric(fig_5_soc_belong$value))))
soc_belong<-as.data.frame(aggregate(fig_5_soc_belong[, 3], list(fig_5_soc_belong$ISO3), mean))
soc_belong$country<-soc_belong$Group.1
#This construct differs from the others: it is measured as a percentage, so no Likert recoding is applied below.
##### arranging data risk perception #####
fig_5_risk_perc<-as.data.frame(clean_data %>% select(ISO3, riskperc1, riskperc2))
fig_5_risk_perc<-reshape2::melt(fig_5_risk_perc, id="ISO3")
fig_5_risk_perc<-filter(fig_5_risk_perc,!is.na(fig_5_risk_perc$value))
risk_perc<-as.data.frame(aggregate(fig_5_risk_perc[, 3], list(fig_5_risk_perc$ISO3), mean))
risk_perc$country<-risk_perc$Group.1
## Political ideology construct 
### arranging data political ideology
pol_ide <- clean_data %>%
    select(ISO3, political_ideology) %>% 
    filter(., !is.na(political_ideology)) %>% 
    group_by(ISO3) %>% 
    summarise_all(funs(mean(., na.rm=TRUE))) %>% 
    ungroup() %>% 
    as.data.frame() %>% dplyr::rename(x=political_ideology)
pol_ide$Group.1 <- countrycode::countrycode(pol_ide$ISO3, "iso3c", "country.name")
pol_ide$country <- pol_ide$Group.1
#
##### arranging data COOPERATION #####
fig_5_coop<-as.data.frame(clean_data %>% select(ISO3, mcoop1, mcoop2, mcoop3,mcoop4, mcoop5,mcoop6,mcoop7))
fig_5_coop<-reshape2::melt(fig_5_coop, id="ISO3")
fig_5_coop<-filter(fig_5_coop, !is.na(fig_5_coop$value))
fig_5_coop$value<-if_else(fig_5_coop$value=="Strongly Agree",10,
                          if_else(fig_5_coop$value=="Strongly Disagree", 0,
                                  if_else(fig_5_coop$value=="Neither agree nor disagree",5,
                                          as.numeric(fig_5_coop$value))))
coop<-as.data.frame(aggregate(fig_5_coop[, 3], list(fig_5_coop$ISO3), mean))
coop$country<-coop$Group.1
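# Optional refactor (sketch, not used below): the recode-and-aggregate pattern above is
# repeated for every construct; a small helper keeps it in one place and returns the same
# structure (Group.1, x, country) as the aggregate() calls above.
recode_and_average <- function(df, items, id = "ISO3") {
    long <- reshape2::melt(as.data.frame(df[, c(id, items)]), id = id)
    long <- filter(long, !is.na(value))
    long$value <- if_else(long$value == "Strongly Agree", 10,
                          if_else(long$value == "Strongly Disagree", 0,
                                  if_else(long$value == "Neither agree nor disagree", 5,
                                          as.numeric(long$value))))
    out <- aggregate(long$value, list(long[[id]]), mean)
    out$country <- out$Group.1
    out
}
# e.g., coop above could equivalently be obtained as:
# recode_and_average(clean_data, items = paste0("mcoop", 1:7))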
#library(tidyverse)
require(dplyr)

risk_perc$risk_perception<-risk_perc$x
cons$conspiracy<-cons$x
national_ident$national_identity<-national_ident$x
national_narc$narcissism<-national_narc$x
soc_belong$social_belonging<-soc_belong$x
coop$cooperation<-coop$x
support$policy_support<-support$x
contact$distancing<-contact$x
hygiene$personal_hygiene<-hygiene$x

constructs <- risk_perc %>% 
    left_join(hygiene, by = "country") %>% 
    left_join(support, by = "country") %>% 
    left_join(contact, by = "country") %>% 
    left_join(cons, by = "country") %>% 
    left_join(national_ident, by = "country") %>% 
    left_join(national_narc, by = "country") %>% 
    left_join(soc_belong, by = "country") %>% 
    left_join(coop, by = "country") %>% select(country,
                                               risk_perception,
                                               personal_hygiene, 
                                               distancing, 
                                               policy_support,
                                               conspiracy,
                                               national_identity,
                                               narcissism,
                                               social_belonging,
                                               cooperation)

constructs <- constructs %>% gather(., key = "variable", value = "value", -country)
facet_const_WA_fun <- function(construct_db, 
                            var = "variable",
                            vect_construct, 
                            facet_name,
                            name_palette = "palette",
                            n_colors,
                            numcol = 3, 
                            numrow = 1, 
                            label_name,
                            Lim,
                            bre,
                            legend_name = "Legend\nName\nNULL*") {
    require(wesanderson)
    construct <- construct_db[construct_db[[var]] %in% vect_construct, ]
    palette_wa <- wes_palette(name_palette, n_colors, 
                              type = "continuous")
    ggplot(subset(construct, country2!=0)) +
        geom_polygon(data = world_map, aes(x = long, y = lat, group = group), 
                     color="#ffffff00",size=0.05, fill = "gray90") +
        geom_map(aes(map_id = country2, fill = value, group=1), 
                 map = world_map, color="#ffffff", size=0.05) +
        expand_limits(x = world_map$long, y = world_map$lat)+ 
        coord_fixed()+
        labs(x=NULL, y=NULL)+
        facet_wrap(.~variable, nrow = numrow, ncol = numcol, 
                   labeller = as_labeller(facet_name))+
        scale_x_discrete(expand = c(0, 0.5)) +
        scale_fill_gradientn(colors= palette_wa, na.value='#f5f5f2',
                             labels = label_name,
                             limits = Lim,
                             breaks = bre) +
                             #breaks = c(min(contructs_other$value), 3, 5, 7, 
                             #           max(contructs_other$value)),
                             #labels = c((round(min(contructs_other$value))), 
                             #          3, 5, 7, round(max(contructs_other$value))))+
        theme_mapX()+
        guides(fill = guide_colourbar(title=legend_name, ##rename default legend
                                      # title.position='top',
                                      ticks.colour='#f5f5f2',
                                      ticks.linewidth=2,
                                      barwidth = 6,#0.3,
                                      barheight = 0.3))#5))
}

###### functions to test colors and range the palette FOR FACET_WRAP MAPS #####
##### Function THEME_MAPX #####
theme_mapX <- function(...) {
    theme_minimal() +
        theme(panel.spacing.x = unit(0, "cm"),
              panel.spacing.y = unit(0, "cm"),
            axis.line = element_blank(),
            axis.text.x = element_blank(),
            axis.text.y = element_blank(),
            axis.ticks = element_blank(),
            axis.title.x = element_blank(),
            axis.title.y = element_blank(),
            panel.grid.major = element_blank(),
            panel.grid.minor = element_blank(),
            plot.title = element_text(size = 10, face = "bold", vjust = 2,
                            hjust=0.1,family="Georgia",color="gray35"),
            plot.subtitle = element_text(size = 10,hjust=0.1,
                                         family="Georgia"),
            plot.caption = element_text(size=8,
                                        family="Georgia",color="gray35"),
            strip.text.x = element_text(size=7,hjust=0,vjust=0,family="Georgia"),
            #plot.background = element_rect(fill = "#f5f5f2", color = NA),
            plot.margin = unit(c(.1,.1,.1,.1), "cm"),
            #panel.background = element_rect(fill = "#f5f5f2", color =NA),
            panel.border = element_blank(),
            legend.position = 'right',
            #legend.background = element_rect(fill = "#f5f5f2", color = NA),
            legend.title = element_text(size = 8,family="Georgia"),
            legend.text = element_text(size = 7, family="Georgia"),
            legend.key = element_rect(),
            plot.tag.position = c(0.1, 0),
            plot.tag = element_text(size = 10, hjust = 0.1, vjust = 2),
        )}


vect_constructs_1 <- c("distancing", "policy_support",
                         "conspiracy", "national_identity", 
                         "narcissism", "cooperation")
facet_names_1 <- c(`distancing` = "Spatial Distancing",
                  `policy_support` = "Policy Support",
                  `conspiracy` = "Conspiracy Beliefs",
                  `national_identity` = "National identity",
                  `narcissism` = "Collective Narcissism",
                  `cooperation` = "Morality as Cooperation")


constructs$country2 <- sjlabelled::as_label(constructs$country)


constructs$country2 <- if_else(constructs$country2=="United States of America",
                         "USA",
                         if_else(constructs$country2=="Russian Federation",
                                 "Russia",
                                 if_else(constructs$country2=="Korea",
                                         "South Korea",
                                         if_else(constructs$country2=="United Kingdom",                                   "UK", as.character(constructs$country2)))))


Zissou1_ALL_grid_2 <- facet_const_WA_fun(constructs, 
                                      var = "variable",
                                      vect_constructs_1,
                                      facet_names_1,
                                      name_palette = "Zissou1",
                                      n_colors = 4,
                                      numcol = 2, 
                                      numrow = 3,
                                      Lim = c(0, 10),
                                      bre = c(0, 10), 
                                      label_name = c("Lower\nlevels", "Higher\nlevels"),
                                      legend_name = "") + 
    labs(title="Cross-cultural differences in Social & Moral Psychology of COVID-19"
         #subtitle="across 69 countries",
         #caption="International Collaboration on the Social & Moral Psychology of COVID-19\nhttps://icsmp-covid19.netlify.app/\nScale = Strongly Agree or higher level (10), Neither agree nor disagree (5), Strongly Disagree or lower level (0)"
        # caption = c("Note: Each world map in the figure shows the averaged score by country of specific scales adopted in the survey. We described each scale in detail in figures 3a and 3b. Legend: Conspiracy Beliefs - participant's \nbeliefs in conspiracy theories regarding COVID-19; Morality as Cooperation - participant's moral concerns based on the morality-as-cooperation theory; Spatial Distancing - participant's support for spatial distancing \nas a strategy against COVID-19; Collective Narcissism - participant's narcissism, i.e., an inflated view regarding their ingroup (in the present data we focused on nationality narcissism dimension); National Identity - \nparticipant's identity attached to belonging to a nation; Policy Support - participant's support to public policies (e.g., closing parks or schools) as a strategy against COVID-19.", "\n\n\n\n\n\nInternational Collaboration on the Social & Moral Psychology of COVID-19\nhttps://icsmp-covid19.netlify.app")
         ) +
  theme(plot.title = element_text(size = 13, family="Georgia", face = "plain", color="black"),
        #axis.text.x.left = element_blank(),
        legend.position = "bottom", #c(0.535, 0.475),
        legend.text=element_text(size=8),
        strip.text.x = element_text(size = 11, color="black" , family="Georgia", hjust=0.5),
        plot.caption = element_text(color = "#383838", face = "italic", size = 7, family="Georgia", hjust = c(0,1)))

Zissou1_ALL_grid_2
Figure 5. Cross-cultural differences in Social & Moral Psychology of COVID-19 across 69 countries.





Figure 6 (Construct associations)

# classification of the demographics of interest ####
data$age_cat =  ifelse(data$age %in% 18:24, "18 to 24 years old",
                ifelse(data$age %in% 25:34, "25 to 34 years old",
                ifelse(data$age %in% 35:44, "35 to 44 years old",
                ifelse(data$age %in% 45:54, "45 to 54 years old",
                ifelse(data$age %in% 55:64, "55 to 64 years old",
                ifelse(data$age %in% 65:100,"65  years old and up",NA)))))) 


data$sex_cat =  ifelse(data$sex1 == 1, "Male",
                ifelse(data$sex1 == 2, "Female",NA)) 


data$pol_cat =  ifelse(data$political_ideology %in% 0:4,  "1. Left",
                ifelse(data$political_ideology %in% 5,    "2. Center",
                ifelse(data$political_ideology %in% 6:10, "3. Right", NA)))
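# Equivalent age banding (sketch) with base::cut, kept as a free-standing factor so the
# character columns above stay unchanged; intervals are right-closed, matching 18:24 etc.
age_cat_sketch <- cut(data$age,
                      breaks = c(17, 24, 34, 44, 54, 64, 100),
                      labels = c("18 to 24 years old", "25 to 34 years old",
                                 "35 to 44 years old", "45 to 54 years old",
                                 "55 to 64 years old", "65 years old and up"))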

# data selection ####

## attention check

data_cleaned = data[data$att_check_nobots==1 & !is.na(data$att_check_nobots),]


## minimum n

# This paper (https://www.sciencedirect.com/science/article/abs/pii/S0092656619300297) suggests a minimum of n = 490 for correlation analyses. Although the following analyses are not conducted per country, this minimum is nevertheless enforced at the country level; a minimum n is also required for the internal-consistency statistics calculated later.

"%!in%" = Negate("%in%")
min_n   = 490

ISO3_n      = as.data.frame(table(data_cleaned$ISO3))
small_n_ISO = as.vector(ISO3_n$Var1[ISO3_n$Freq<min_n])

data_cleaned = data_cleaned[data_cleaned$ISO3 %!in% small_n_ISO,]


# define index selections ####
indices = c("ctheory_index","mcoop_index","contact_index","cnarc_index","nidentity_index","psupport_index","hygiene_index","moralid_index","narc_index","omind_index","optim_index","riskperc_index","sbelong_index","slfcont_index")
labels  = c("conspiracy beliefs (1)", "morality as cooperation (2)", "spatial distancing (3)", "collective narcissism (4)", "national identity (5)", "policy support (6)", "physical hygiene (7)", "moral identity (8)", "narcissism (9)", "open mindedness (10)", "optimism (11)", "risk perception (12)", "social belonging (13)", "self-control (14)")
dem_cat = c("age_cat","sex_cat","pol_cat")

# calculate correlations between indices ####
index_data   = data_cleaned[,names(data_cleaned) %in% c(indices)]
correlations = cor(index_data, use = "complete.obs")

# summary stats correlations ####
levels <- colnames(correlations)
corr_summary = correlations %>%
      data.frame() %>% 
      mutate(row = factor(rownames(.), levels = levels)) %>%
      pivot_longer(-c(row), names_to = "col") %>%
      mutate(col = factor(col, levels = levels)) 
corr_summary = corr_summary[duplicated(corr_summary$value),]
corr_summary = corr_summary[corr_summary$value!=1         ,]
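# The duplicated()-based filter above assumes no two distinct pairs share exactly the same
# coefficient; an explicit alternative (sketch) keeps the lower triangle of the matrix:
corr_pairs_sketch <- data.frame(
    row   = rownames(correlations)[row(correlations)[lower.tri(correlations)]],
    col   = colnames(correlations)[col(correlations)[lower.tri(correlations)]],
    value = correlations[lower.tri(correlations)])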

# Descriptive Statistics
min(abs(corr_summary$value))
## [1] 0.003114215
max(abs(corr_summary$value))
## [1] 0.484048
mean(abs(corr_summary$value))
## [1] 0.1684443
# plots ####
# With great help from Stefan and others at StackOverflow (https://stackoverflow.com/questions/71890644/how-to-replicate-correlation-plot-with-greyscale-coefficients-in-the-lower-half), I managed to create these beautiful correlation plots.

fullcorr = data.frame(matrix(ncol = 7, nrow = 0))
colnames(fullcorr) = c("row","rowid","col","value","colid","dem","demval")

for(i in 1:length(dem_cat)){
  dem = dem_cat[i]
  unique_vals = unique(data_cleaned[,names(data_cleaned) %in% dem_cat[i]])
  unique_vals = unique_vals[!is.na(unique_vals)]

  for(j in 1:length(unique_vals)){
    data_selection = data_cleaned[data_cleaned[,dem]        == unique_vals[j] &
                                 !is.na(data_cleaned[,dem] == unique_vals[j]),
                                 names(data_cleaned) %in% indices]

    correlations = as.data.frame(cor(data_selection, use="complete.obs"))
    
    levels <- colnames(correlations)
    
    corr_long <- correlations %>%
      data.frame() %>% 
      mutate(row = factor(rownames(.), levels = levels), 
             rowid = as.numeric(row)) %>%
      pivot_longer(-c(row, rowid), names_to = "col") %>%
      mutate(col = factor(col, levels = levels),
             colid = as.numeric(col))
    
    corr_long$dem = dem
    corr_long$demval = unique_vals[j]
    
    fullcorr = rbind(fullcorr, corr_long)
  }
}

# age plot ####
age_data = fullcorr[fullcorr$dem == "age_cat",]

corrplot_age_facets = 
  ggplot(age_data, aes(col, row)) +
    geom_point(aes(size = abs(value), 
                   fill = value, stroke = 0.1,
                   alpha= abs(value)),
               data = ~filter(.x, rowid > colid), shape = 21) +
    geom_text(aes(label = scales::number(value, accuracy = .01), 
                  color = abs(value), 
                  family = "Georgia"), 
              data = ~filter(.x, rowid < colid), size = 8 / .pt) +
    scale_x_discrete(labels = 1:14, drop = FALSE) +
    scale_y_discrete(labels = rev(labels), drop = FALSE, limits=rev) +
    scale_fill_viridis_c(limits = c(-.6, .6)) +
    scale_color_gradient(low = grey(.985), high = grey(.015)) +
    coord_equal() +
    guides(size = "none", color = "none", alpha = "none") +
    theme_minimal() +
    theme(text = element_text(family = "Georgia"),
          legend.position = "none", 
          panel.grid = element_blank(),
          plot.margin=unit(c(-2,1,1,1), "cm"),
          strip.background=element_rect(colour="white", fill="#f6f6f6"),
          axis.ticks = element_blank()) +
    labs(title = "Construct associations by age",
         x = NULL, y = NULL, fill = NULL) + 
    facet_wrap(~demval, nrow=2)

# png(file = "./Plots/corrplot_age_facets.png", 
#     width = 1500, # The width of the plot 
#     height = 1000) # The height of the plot
# corrplot_age_facets
# dev.off()


# sex plot ####
sex_data = fullcorr[fullcorr$dem == "sex_cat",]

corrplot_sex_facets = 
  ggplot(sex_data, aes(col, row)) +
    geom_point(aes(size = abs(value), 
                   fill = value, stroke = 0.1, 
                   alpha= abs(value)),
               data = ~filter(.x, rowid > colid), shape = 21) +
    geom_text(aes(label = scales::number(value, accuracy = .01), 
                  color = abs(value), 
                  family = "Georgia"), 
              data = ~filter(.x, rowid < colid), size = 8 / .pt) +
    scale_x_discrete(labels = 1:14, drop = FALSE) +
    scale_y_discrete(labels = rev(labels), drop = FALSE, limits=rev) +
    scale_fill_viridis_c(limits = c(-.65, .65), name="Person's\nCorrelation\nCoefficient") +
    scale_color_gradient(low = grey(.985), high = grey(.015)) +
    coord_equal() +
    guides(size = "none", color = "none", alpha = "none") +
    theme_minimal() +
    theme(text = element_text(family = "Georgia"),
          #legend.position = "none",
          plot.margin=unit(c(1,4,-2,4), "cm"),
          panel.grid = element_blank(),
          legend.position=c(1.2,0.5),
          legend.text=element_text(size=8),
          legend.title=element_text(size=8),
          strip.background=element_rect(colour="white", fill="#f6f6f6"),
          axis.ticks = element_blank()) +
    labs(title = "Construct associations by sex",
         x = NULL, y = NULL, fill = NULL) + 
    facet_wrap(~demval, nrow=1)

# png(file = "./Plots/corrplot_sex_facets.png", 
#     width = 1000, # The width of the plot 
#     height = 500) # The height of the plot
#   corrplot_sex_facets
# dev.off()


# pol plot ####
pol_data = fullcorr[fullcorr$dem == "pol_cat",]

corrplot_pol_facets = 
  ggplot(pol_data, aes(col, row)) +
    geom_point(aes(size = abs(value), 
                   fill = value, stroke = 0.1, 
                   alpha= abs(value)),
               data = ~filter(.x, rowid > colid), shape = 21) +
    geom_text(aes(label = scales::number(value, accuracy = .01), 
                  color = abs(value), 
                  family = "Georgia"), 
              data = ~filter(.x, rowid < colid), size = 8 / .pt) +
    scale_x_discrete(labels = 1:14, drop = FALSE) +
    scale_y_discrete(labels = rev(labels), drop = FALSE, limits=rev) +
    scale_fill_viridis_c(limits = c(-.6, .6)) +
    scale_color_gradient(low = grey(.985), high = grey(.015)) +
    coord_equal() +
    guides(size = "none", color = "none", alpha = "none") +
    theme_minimal() + 
    theme(text = element_text(family = "Georgia"),
          legend.position = "none", 
          plot.margin=unit(c(-2,1,-2,1), "cm"),
          panel.grid = element_blank(),         
          strip.background=element_rect(colour="white", fill="#f6f6f6"),
          axis.ticks = element_blank()) +
    labs(title = "Construct associations by ideology",
         x = NULL, y = NULL, fill = NULL, alpha = NULL) + 
    facet_wrap(~demval, ncol=3)

# png(file = "./Plots/corrplot_pol_facets.png", 
#     width = 1500, # The width of the plot 
#     height = 500) # The height of the plot
#   corrplot_pol_facets
# dev.off()



#corrplot_sex_facets
#corrplot_age_facets
#corrplot_pol_facets

# com_plots2 <- (corrplot_sex_facets/corrplot_pol_facets) 
# 
# #com_plots2 /  corrplot_age_facets + plot_layout(heights = unit(c(12, 6, 6), c('cm', 'null')))
# 
# wrap_plots(com_plots2, corrplot_age_facets, nrow=2) + 
#     plot_annotation(caption = "\nInternational Collaboration on the Social & Moral Psychology of COVID-19\nhttps://icsmp-covid19.netlify.app") & 
#     theme(plot.caption = element_text(color = "#383838", face = "italic", size = 7, family="Georgia"))

library(gridExtra)

#figure 6a
grid.arrange(corrplot_sex_facets,
             corrplot_pol_facets, 
             ncol = 1,
             heights = c(1, 1.4, 2))
Figure 6.a. Cross-cultural differences in associations of Social & Moral Psychology of COVID-19 across Sociodemographics in 69 countries.



#figure 6b
corrplot_age_facets
Figure 6.b. Cross-cultural differences in associations of Social & Moral Psychology of COVID-19 across Sociodemographics in 69 countries.



Figure 7 (Internal consistency maps)

# notes: first create the full dataframe with empty values, then cycle through to fill it in. The omega consistency value cannot be calculated for constructs that rely on only two items (here, national identity). Average split-half reliability (meanr) yielded values exceeding 1, so we opted for maximum split-half reliability (Guttman's lambda 4, maxrb) instead. Also note that omega produces warnings that may need to be considered.
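# A minimal, self-contained illustration (not the pipeline that builds the table6 object
# used below) of how such coefficients can be obtained with the psych package, shown on
# simulated numeric items:
set.seed(2020)
latent     <- rnorm(200)
demo_items <- data.frame(i1 = latent + rnorm(200), i2 = latent + rnorm(200),
                         i3 = latent + rnorm(200), i4 = latent + rnorm(200))
psych::alpha(demo_items)$total$raw_alpha  # Cronbach's alpha
psych::splitHalf(demo_items)$maxrb        # maximum split-half reliability (Guttman's lambda 4)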

library(countrycode)

palette = wes_palette("Zissou1", 100, type = "continuous")

# index vectors
measures     = c("alpha","omega","guttman","variance")
indices6     = c("ctheory","mcoop","contact","cnarc","nidentity","psupport")
index_labels = c("Conspiracy beliefs","Morality as cooperation","Spatial distancing","Collective narcissism","National identity","Policy support")
index_labs   = c("\nNote: Conspiracy Beliefs refers to the participant's beliefs in conspiracy theories regarding COVID-19.", "\nNote: Morality as Cooperation refers to the participant’s moral concern based on the morality-as-cooperation theory.", "\nNote: Spatial Distancing refers to the participant's support for spatial distancing as a strategy against COVID-19.","\nNote: National Narcissism refers to the participant's narcissism, i.e., an inflated view regarding their nationality.", "\nNote: National Identity refers to the participant's identity attached to belonging to a nation.", "\nNote: Policy Support refers to participant's support to public policies (e.g., closing parks or schools) as a strategy against COVID-19.")

# create world data
world_map<-map_data("world")
world_map<-world_map[order(world_map$group,world_map$order),] %>% filter(region != "Antarctica") 

theme_map <- function(...) {
  theme_minimal() +
  theme(panel.spacing.x = unit(0, "cm"),
        panel.spacing.y = unit(0, "cm"),
        axis.line = element_blank(),
        axis.text.x = element_blank(),
        axis.text.y = element_blank(),
        axis.ticks = element_blank(),
        axis.title.x = element_blank(),
        axis.title.y = element_blank(),
        panel.grid.major = element_blank(),
        panel.grid.minor = element_blank(),
        plot.title = element_text(size = 10,
                                  face = "bold",
                                  vjust = 2,
                                  hjust=0,
                                  family="Georgia",
                                  color="gray35"),
        plot.subtitle = element_text(size = 10,
                                     hjust=0,
                                     family="Georgia"),
        plot.caption = element_text(size=8,
                                    family="Georgia",
                                    color="gray35"),
        strip.text.x = element_text(size=7,
                                    hjust=0,
                                    vjust=0,
                                    family="Georgia"),
        #plot.background = element_rect(fill = "#f5f5f2", color = NA),
        plot.margin = unit(c(.1,.1,.1,.1), "cm"),
        #panel.background = element_rect(fill = "#f5f5f2", color =NA),
        panel.border = element_blank(),
        legend.position = 'right',
        #legend.background = element_rect(fill = "#f5f5f2", color = NA),
        legend.title = element_text(size = 6,family="Georgia"),
        legend.text = element_text(size = 5, family="Georgia"),
        legend.key = element_rect(),
        plot.tag.position = c(0.1, 0),
        plot.tag = element_text(size = 10, hjust = 0.1, vjust = 2))
}

# define mapping function
mapping_f = function(name_palette = "palette", 
                     n_colors, 
                     legeng_name = "Legend\nName\nNULL*"){
  ggplot(data_selection) +
    # data input
    geom_polygon(data = world_map, 
                 aes(x = long, y = lat, group = group), 
                 color="#ffffff00", size=0.05, fill = "gray90") +
    geom_map(aes(map_id = ISO3, fill = value), 
             map = world_map, 
             color="#ffffff", size=0.05) +
    facet_wrap(~measure) +
    # some other settings
    expand_limits(x = world_map$long, y = world_map$lat) +
    coord_fixed() + 
    # axes settings
    scale_x_discrete(labels = NULL, breaks = NULL) + 
    scale_y_discrete(labels = NULL, breaks = NULL) +
    scale_fill_gradientn("Coefficient\nValue",
      colours = palette,
                         limits = c(.2, 1),
                         breaks = seq(.2,1,0.1),
                         labels = seq(.2,1,0.1),
                         #name = legend_name,
                         guide = guide_colorbar(
                             direction = "vertical",
                             barheight = unit(15, units = "mm"),
                             barwidth = unit(3, units = "mm"),
                             draw.ulim = F,
                             title.position = 'top',
                             # some shifting around
                             title.hjust = 0,
                             label.hjust = 0)) +
    labs(x=NULL, y=NULL) + 
    # theme settings
    theme_map() +
    theme(text = element_text(family = "Georgia"),
          legend.position = c(0.05, 0.1), 
          legend.text.align = 0,
          
          legend.background = element_rect(color = NA), # changed from original code
          legend.text = element_text(size = 4, 
                                     hjust = 0, 
                                     color = "#4e4d47"),
          plot.tag.position = c(0.1, 0),
          plot.tag = element_text(size = 9, hjust = 0.1, vjust = 2),
          plot.title = element_text(color = "#4e4d47", 
                                    size=12, face = "bold",
                                    vjust = 3,
                                    hjust = 0), # changed from original code
          plot.subtitle = element_text(color = "#4e4d47", 
                                       vjust = 2,
                                       hjust = 0,
                                       margin = margin(b = -0.00001, 
                                                       t = 0.05, 
                                                       l = 2, 
                                                       unit = "cm"), debug = F),
          legend.title = element_text(size = 6),
          plot.margin = unit(c(.1,.1,.1,.1), "cm"),
          panel.spacing = unit(c(-.1,0.2,.2,0.2), "cm"),
          panel.border = element_blank(),
          plot.caption = element_text(size=8,
                                      family="Georgia",color="gray35", hjust = c(0,1))) 
}

# loop through constructs and make map for each
for(i in 1:length(indices6)){
  # select data per construct
  data_selection = table6[,names(table6) %in% c("Country","measure_label", index_labels[i])] # select necessary data
  names(data_selection) = c("ISO3","measure","value") # give uniform names
  data_selection$ISO3 =  ifelse(data_selection$ISO3=="United States of America", "USA",
                            ifelse(data_selection$ISO3=="United Kingdom", "UK",
                            ifelse(data_selection$ISO3=="Russian Federation", "Russia",
                            ifelse(data_selection$ISO3=="Korea", "South Korea",
                                   data_selection$ISO3)))) # adjust a few country names

  # plot
  # png(file = paste0("./Plots/map_consistency_",indices6[i],".png"), 
  #     width = 800, # The width of the plot 
  #     height = 1000) # The height of the plot 
  
  plot(assign(paste("map",indices6[i], sep = "_"), 
    mapping_f(name_palette = "Zissou1", 
              n_colors = 5, 
              legeng_name = "Sample\nsize") + 
        labs(title=index_labels[i], 
             subtitle="Heat map showing consistency measures for each country",
             caption = c(index_labs[i], "\n\nInternational Collaboration on the Social & Moral Psychology of COVID-19\nhttps://icsmp-covid19.netlify.app")) +
        theme(plot.title = element_text(size = 12, family="Georgia", face = "plain", color="black"),
              plot.caption = element_text(color = "#383838", face = "italic", size = 7))))
        
  #dev.off()
}
Figure 7. Cross-cultural differences in Internal Consistency Coefficients (Cronbach’s alpha, McDonald’s Omega, Guttman Split-Half) and variance explained of Social & Moral Psychology Constructs in 69 countries.

Saving ‘no bots’ data

# select data
data_nobots = data

# count missing answers per respondent (excluding the employ_status2 and sex2 columns)
data_nobots$valid <- rowSums(is.na(data_nobots[, -grep("employ_status2|sex2", names(data_nobots))]))
# proportion of answered items and a flag for at least 50% completion
data_nobots$prop_valid <- 1-data_nobots$valid/ncol(data_nobots)
data_nobots$Prop_50 <- ifelse(data_nobots$prop_valid < .50, 0, 1)

# keep respondents who passed the attention check and completed at least half of the survey
data_nobots = data_nobots %>%
filter(att_check_nobots == 1 & Prop_50 == 1)

# write csv and spss files
write.csv(data_nobots,
"ICSMP_cleaned_data_nobots.csv",
#row.names = F
)

haven::write_sav(data_nobots,
"ICSMP_cleaned_data_nobots.sav")

Descriptives of ‘no bots’ data

Number of cases

nrow(data_nobots)
## [1] 46479

Number of variables

ncol(data_nobots)
## [1] 129

Number of countries

sum(table(unique(data_nobots$ISO3)))
## [1] 68

Sample size by country

table(sjlabelled::as_label(data_nobots$ISO3))
## 
##     United Arab Emirates                Argentina                Australia 
##                      187                      721                     2133 
##                  Austria                  Belgium               Bangladesh 
##                     1386                     1152                      382 
##                 Bulgaria                  Bolivia                   Brazil 
##                      644                       29                     1778 
##                   Canada              Switzerland                    Chile 
##                      963                     1053                       97 
##                    China                 Colombia               Costa Rica 
##                     1030                     1227                       25 
##                     Cuba                  Germany                  Denmark 
##                       43                     1587                      553 
##       Dominican Republic                  Ecuador                    Spain 
##                       36                      148                     1090 
##                  Finland                   France           United Kingdom 
##                      677                     1119                      547 
##                    Ghana                   Greece                Guatemala 
##                        0                      635                       48 
##                 Honduras                  Croatia                  Hungary 
##                       24                      511                      506 
##                    India                  Ireland                     Iraq 
##                      624                      735                      549 
##                   Israel                    Italy                    Japan 
##                     1245                     1261                     1156 
##                    Korea                   Latvia                  Morocco 
##                      491                      998                      654 
##                   Mexico                Macedonia                  Nigeria 
##                     1247                      694                      523 
##                Nicaragua              Netherlands                   Norway 
##                       16                     1297                      527 
##                    Nepal              New Zealand                 Pakistan 
##                      345                      510                      472 
##                   Panama                     Peru              Philippines 
##                       18                       91                      504 
##                   Poland              Puerto Rico                 Paraguay 
##                     1800                        2                       16 
##                  Romania       Russian Federation                  Senegal 
##                      502                      507                      340 
##                Singapore              El Salvador                   Serbia 
##                      518                       28                      733 
##                 Slovakia                   Sweden                   Turkey 
##                     1099                     1566                     1435 
##                   Taiwan                  Ukraine                  Uruguay 
##                      833                      577                       49 
## United States of America                Venezuela             South Africa 
##                     1470                       96                      620

Number of samples

sum(table(unique(data_nobots$country)))
## [1] 75

Sample size by sample

table(sjlabelled::as_label(data_nobots$country))
## 
##     United Arab Emirates                Argentina                  Austria 
##                      187                      721                     1386 
##                Australia               Bangladesh                  Belgium 
##                     2133                      382                     1152 
##                 Bulgaria                  Bolivia                   Brazil 
##                      644                       29                      910 
##                 Brazil_2                 Brazil_3           Canada_english 
##                      862                        6                      792 
##            Canada_french              Switzerland                    Chile 
##                      171                     1053                       97 
##                    China                 Colombia               Colombia_2 
##                     1030                      681                      546 
##               Costa Rica                     Cuba                  Germany 
##                       25                       43                     1587 
##                  Denmark       Dominican Republic                  Ecuador 
##                      553                       36                      148 
##                    Spain                  Finland                   France 
##                     1090                      677                     1119 
##           United Kingdom                    Ghana                   Greece 
##                      547                        0                      635 
##                Guatemala                 Honduras                  Croatia 
##                       48                       24                      511 
##                  Hungary                  Ireland                   Israel 
##                      506                      735                     1245 
##                    India                  India_2                     Iraq 
##                      243                      381                      549 
##                    Italy                  Italy_2                    Japan 
##                      977                      284                     1156 
##                    Korea                   Latvia                  Morocco 
##                      491                      998                      654 
##                Macedonia                   Mexico                 Mexico_2 
##                      694                      740                      507 
##                  Nigeria                Nicaragua              Netherlands 
##                      523                       16                     1297 
##                   Norway                    Nepal              New Zealand 
##                      527                      345                      510 
##                   Panama                     Peru              Philippines 
##                       18                       91                      504 
##                 Pakistan                   Poland              Puerto Rico 
##                      472                     1800                        2 
##                 Paraguay                  Romania                Romania_2 
##                       16                        0                      502 
##                   Serbia       Russian Federation                   Sweden 
##                      733                      507                     1566 
##                Singapore                 Slovakia                  Senegal 
##                      518                     1099                      340 
##              El Salvador                   Turkey                   Taiwan 
##                       28                     1435                      833 
##                  Ukraine United States of America                  Uruguay 
##                      577                     1470                       49 
##                Venezuela             South Africa 
##                       96                      620






Saving all data and its outputs for reproducibility

save.image(file = "SciData ICSMP.RData")

Session Info

utils::sessionInfo()
## R version 3.6.0 (2019-04-26)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 19041)
## 
## Matrix products: default
## 
## locale:
## [1] LC_COLLATE=English_United States.1252 
## [2] LC_CTYPE=English_United States.1252   
## [3] LC_MONETARY=English_United States.1252
## [4] LC_NUMERIC=C                          
## [5] LC_TIME=English_United States.1252    
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] countrycode_1.3.0    gridExtra_2.3        patchwork_1.1.1     
##  [4] lubridate_1.7.10     tidytext_0.2.2       colorBlindness_0.1.9
##  [7] extrafont_0.17       ggthemes_4.2.0       rworldmap_1.3-6     
## [10] sp_1.3-1             wesanderson_0.3.6    psych_2.1.6         
## [13] DT_0.16              forcats_0.5.1        stringr_1.4.0       
## [16] dplyr_1.0.6          purrr_0.3.4          readr_1.4.0         
## [19] tidyr_1.1.3          tibble_3.1.1         ggplot2_3.3.5       
## [22] tidyverse_1.3.1     
## 
## loaded via a namespace (and not attached):
##  [1] colorspace_2.0-2      ellipsis_0.3.2        class_7.3-15         
##  [4] sjlabelled_1.1.6      snakecase_0.11.0      fs_1.5.0             
##  [7] gld_2.6.2             rstudioapi_0.13       farver_2.1.0         
## [10] SnowballC_0.7.0       fansi_0.5.0           mvtnorm_1.1-2        
## [13] xml2_1.3.2            codetools_0.2-16      mnormt_1.5-6         
## [16] rootSolve_1.8.2.1     knitr_1.33            spam_2.3-0           
## [19] jsonlite_1.7.2        rJava_0.9-12          broom_0.7.6          
## [22] Rttf2pt1_1.3.8        dbplyr_2.1.1          compiler_3.6.0       
## [25] httr_1.4.2            backports_1.2.0       assertthat_0.2.1     
## [28] Matrix_1.2-17         cli_2.5.0             htmltools_0.5.1.1    
## [31] tools_3.6.0           dotCall64_1.0-0       gtable_0.3.0         
## [34] glue_1.4.2            lmom_2.8              reshape2_1.4.4       
## [37] maps_3.3.0            Rcpp_1.0.7            cellranger_1.1.0     
## [40] vctrs_0.3.8           nlme_3.1-139          extrafontdb_1.0      
## [43] crosstalk_1.1.0.1     insight_0.14.5        xfun_0.22            
## [46] xlsxjars_0.6.1        rvest_1.0.0           mime_0.11            
## [49] lifecycle_1.0.0       xlsx_0.6.5            MASS_7.3-51.4        
## [52] scales_1.1.1          hms_1.1.0             parallel_3.6.0       
## [55] expm_0.999-4          fields_9.9            yaml_2.2.1           
## [58] Exact_2.1             stringi_1.7.4         highr_0.9            
## [61] maptools_1.1-1        tokenizers_0.2.1      e1071_1.7-2          
## [64] boot_1.3-22           rlang_0.4.11          pkgconfig_2.0.3      
## [67] evaluate_0.14         lattice_0.20-38       labeling_0.4.2       
## [70] htmlwidgets_1.5.3     cowplot_1.1.0         tidyselect_1.1.1     
## [73] plyr_1.8.4            magrittr_2.0.1        R6_2.5.1             
## [76] DescTools_0.99.38     generics_0.1.0        DBI_1.0.0            
## [79] pillar_1.6.2          haven_2.4.1           foreign_0.8-71       
## [82] withr_2.4.2           janeaustenr_0.1.5     modelr_0.1.8         
## [85] crayon_1.4.1          utf8_1.2.2            rmarkdown_2.7        
## [88] grid_3.6.0            readxl_1.3.1          reprex_2.0.0         
## [91] digest_0.6.27         xtable_1.8-4          GPArotation_2014.11-1
## [94] gridGraphics_0.5-1    munsell_0.5.0         viridisLite_0.4.0