⛏️ARCHEOLOGISTS⛏️ 💞EMPATHS💞 1

Author

Jenny Gutsell & other ARCHEOLOGISTS

Published

January 18, 2024

Introduction & setup

The preregistration for this document is at .

This is EMPATHS-1 (EMPATHS: Machine-readable Publications to Analyze, Teach, Hypothesize, and Synthesize). EMPATHS-1 is an ARCHEOLOGISTS project (see https://archeologists.opens.science).

Here is the Codeberg repo for this project, here is the URL to the rendered version of this R Markdown file at Codeberg Pages, and here is the URL to the Open Science Framework project. The main Google Docs file for this project is here.

Note: this file was based on NITRO, the Narrated Illustration of a Transparent Review Outline, which accompanies the SysRevving book and the metabefor package. Throughout this file, links to the corresponding SysRevving chapters will be provided. For general reference, you may want to keep the SysRevving glossary ready.

Setup

Here we check for the required packages (without loading them into R’s search path with library() or require(), to safeguard against accidentally forgetting to use the package::function() syntax), specify the paths, and set script-wide settings.

###-----------------------------------------------------------------------------
### Packages
###-----------------------------------------------------------------------------

if ((!(requireNamespace("metabefor", quietly = TRUE))) ||
      (packageVersion("metabefor") < "0.3")) {
  stop("You need to have at least version 0.3 of the `metabefor` package installed; ",
       "install it with:\n\ninstall.packages('metabefor');");
}
Registered S3 method overwritten by 'metabefor':
  method               from
  as.data.frame.person base
metabefor::checkPkgs(
  "here",               ### For easy access to files using 'relative paths'
  "preregr",            ### For specifying (pre)registrations
  "synthesisr",         ### For importing and deduplicating search results
  "ggplot2"             ### For plotting
);

### Potentially update to the development version of some packages
# ufs::quietGitLabUpdate("r-packages/preregr@dev", quiet = FALSE);
# ufs::quietGitLabUpdate("r-packages/rock@dev", quiet = FALSE);
# ufs::quietGitLabUpdate("r-packages/metabefor", quiet = FALSE);
# devtools::load_all("B:/git/R/metabefor");
# ufs::quietRemotesInstall("rmetaverse/synthesisr",
#                          func = "install_github", quiet = FALSE);

###-----------------------------------------------------------------------------
### Paths
###-----------------------------------------------------------------------------

basePath <- here::here();
preregPath <- file.path(basePath, "prereg");
scriptPath <- file.path(basePath, "scripts");
searchPath <- file.path(basePath, "search");
screeningPath <- file.path(basePath, "screening");
extractionPath <- file.path(basePath, "extraction");
rxsSpecPath <- file.path(basePath, "extraction-Rxs-spec");
outputPath <- file.path(basePath, "output");
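```r
### Hedged addition (not in the original script): ensure these directories
### exist before later chunks write output to them; metabefor may create
### some of these itself.
if (!exists("basePath")) basePath <- tempdir();  ### fallback when run standalone
projectPaths <- file.path(basePath,
                          c("prereg", "scripts", "search", "screening",
                            "extraction", "extraction-Rxs-spec", "output"));
for (p in projectPaths) {
  if (!dir.exists(p)) {
    dir.create(p, recursive = TRUE);
  }
}
```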

###-----------------------------------------------------------------------------
### Settings
###-----------------------------------------------------------------------------

knitr::opts_chunk$set(
  echo = TRUE,
  comment = ""
);

###-----------------------------------------------------------------------------
### Extraction script Google sheets URL
###-----------------------------------------------------------------------------

rxsSpec_googleSheetsURL <-
  paste0("https://docs.google.com/spreadsheets/d/",
         "1hNu8IC1Y8bIXq-Bjgm5VFfNOiTEx1OunO5Cp3rmSO6g");
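For reference, a sheet ID like the one above can also be turned into a direct CSV export link. This is a sketch only: it relies on the generic Google Sheets `/export?format=csv` endpoint and assumes the sheet is publicly viewable (metabefor may well use its own import routine), and the `rxsSpec_csvURL` name is hypothetical.

```r
### Fallback definition so this snippet is self-contained; in the script
### above, rxsSpec_googleSheetsURL is already defined.
if (!exists("rxsSpec_googleSheetsURL")) {
  rxsSpec_googleSheetsURL <-
    paste0("https://docs.google.com/spreadsheets/d/",
           "1hNu8IC1Y8bIXq-Bjgm5VFfNOiTEx1OunO5Cp3rmSO6g");
}
### The '/export?format=csv' suffix is generic Google Sheets behavior
rxsSpec_csvURL <- paste0(rxsSpec_googleSheetsURL, "/export?format=csv");
### rxsSpec_df <- read.csv(rxsSpec_csvURL);   ### requires network access
```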

Planning

Research Question

(link to corresponding SysRevving chapter)

Example: The research question is whether the exponential explosion of the scientific literature is also reflected in a growing evidence base for health promotion interventions targeting recreational substance use.

Planning: Synthesis

(link to corresponding SysRevving chapter)

Example: To answer the research question, our synthesis will consist of a plot with years on the X axis, cumulative number of publications on the Y axis, and a separate, differently colored line for each substance.
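As a sketch of this planned synthesis, the block below builds cumulative counts from fabricated example data (all numbers and substance names are purely illustrative, not extracted results) and draws the plot with ggplot2 if that package is available:

```r
### Fabricated example data: yearly publication counts per substance
exampleData <- data.frame(
  year = rep(2000:2004, times = 2),
  substance = rep(c("alcohol", "cannabis"), each = 5),
  publications = c(3, 5, 8, 12, 20,  1, 2, 4, 7, 11)
);
### Cumulative number of publications per substance (base R)
exampleData$cumulative <-
  ave(exampleData$publications, exampleData$substance, FUN = cumsum);
### One differently colored line per substance, years on X, cumulative on Y
if (requireNamespace("ggplot2", quietly = TRUE)) {
  ggplot2::ggplot(exampleData,
                  ggplot2::aes(x = year, y = cumulative,
                               color = substance)) +
    ggplot2::geom_line();
}
```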

Planning: Extraction

(link to corresponding SysRevving chapter)

Example: The R extraction script specification (Rxs spec) is stored in this Rxs spec Google Sheet. The chunks below load it, convert it into the Rxs template (which will then be copied and completed for each source from which data are extracted), and show these specifications.

R extraction script specification

Extractor instructions

metabefor::write_extractor_instructions(
  rxsSpecObject
);

Extractor instructions

Welcome!

Welcome to the extraction instructions for EMPATHS-1, the first systematic review in the EMPATHS project. If this is new to you, you may want to start at https://archeologists.opens.science/empaths.html.

In this project, the focus is on construct definitions and measurement methods. Therefore, during extraction, these are the main entities that you will spend time on. In addition to the brief extraction instructions specified in these instructions and in the extraction script (.Rxs file), where you will register the extracted data, more extensive instructions are provided here.

Extract construct definitions

When looking for a definition of empathy, start by using your source viewer (e.g. if the source is in PDF format, it may be your browser (e.g. Firefox) or a dedicated PDF viewer such as Sumatra or Adobe Acrobat) and use the search/find functionality to look for the text string “empathy” (assuming the source was written in English). If the first occurrence of the construct name is accompanied by its definition, as the authors use it in their work, copy that definition into the extraction script.

However, if the first occurrence is accompanied by a definition that the authors discuss but then do not use themselves, move to the next occurrence. Similarly, if the first occurrence is not accompanied by a definition at all, move to the next occurrence. For each occurrence, repeat this evaluation: are the authors defining what exactly empathy is? In other words, which parts of the human psyche do they consider to constitute empathy, and which do they consider to reflect other constructs?

If the authors do not provide an explicit definition, they may instead cite another source (e.g. an article or a book) and refer to the definition there as the one they use. In that case, obtain the ShortDOI for that source and extract it in full URL form (e.g. “https://doi.org/gf6btx”). This will enable us to later automatically identify all such URLs, and so categorize sources as either providing their own definition, providing no definition, or citing a definition from elsewhere in the literature (as well as compile a list of such references). If they cite a source that does not have a DOI, consult with the EMPATHS-1 coordinators, Jennifer Gutsell and/or Gjalt-Jorn Peters.

Finally, if the authors do not define empathy but also do not cite another source as providing the definition they use, extract NA to signify that the definition is missing from the source.
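Taken together, these rules mean every extracted definition ends up as one of three things: the authors’ own text, a ShortDOI URL, or NA. The following sketch (hypothetical names and example values, not part of the extraction scripts) shows how such values could later be categorized automatically:

```r
### Illustrative extracted values: an own definition, a cited ShortDOI
### URL, and a missing definition
extractedDefinitions <- c(
  "Empathy is the capacity to share and understand others' emotions.",
  "https://doi.org/gf6btx",
  NA
);
### Categorize each value by its form
definitionCategory <-
  ifelse(is.na(extractedDefinitions), "no definition",
         ifelse(grepl("^https://doi\\.org/", extractedDefinitions),
                "cited definition", "own definition"));
```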

Extracting a measurement or manipulation instrument

When extracting a measurement or manipulation (entities empathyMeasureId and empathyManipulationId), you specify their unique identifier. This identifier is taken from https://archeologists.opens.science/empathy-measures (from the column marked “identifier”). If the instrument you’re extracting is already in the list, you can just specify the relevant identifier in the extraction script.

However, if it does not yet exist, you have to add it. If it is a questionnaire, you can choose to specify it as a TOQ (“Tabulated Open Questionnaire”) specification, enabling it to be imported into the questionnaire repository at https://operationalizations.com. This is not yet possible for measurement instruments that do not consist of questions, or for manipulations; those have to be specified as TOM (“Tabulated Open Metadata”) specifications. Depending on what you choose, follow the corresponding set of instructions below.

Minimal specification of a measurement or manipulation instrument

To specify a TOM (“Tabulated Open Metadata”) specification, you need to complete these steps:

  1. visit https://archeologists.opens.science/empathy-tabulated-specs
  2. open “TOM-spec—bespt0eng_7rtpjgf3”
  3. save a copy under a different name but in the same folder.
  4. create an identifier prefix (see the procedure below for details) and enter it in cell B3
  5. visit https://opens.science/apps/elsa, enter the prefix, and create an identifier
  6. enter the result in cell B4 as UMID
  7. complete the other fields
  8. open the spreadsheet at https://archeologists.opens.science/empathy-measures again and add a row with the UMID you just created

Full specification of a questionnaire

To specify a TOQ (“Tabulated Open Questionnaire”) specification, you need to complete these steps:

  1. visit https://archeologists.opens.science/empathy-tabulated-specs
  2. open “TOQ-spec—eq60eng_7rs8g3bd”
  3. save a copy under a different name but in the same folder.
  4. create an identifier prefix (see the procedure below for details) and enter it in cell B3
  5. visit https://opens.science/apps/elsa, enter the prefix, and create an identifier
  6. enter the result in cell B4 as UQID
  7. complete the other fields
  8. open the spreadsheet at https://archeologists.opens.science/empathy-measures again and add a row with the UQID you just created

How to create an identifier

To create a unique identifier for a TOM, TOQ, or TOI, you can either use the R package {psyverse} or the Elsa app. To use Elsa, visit https://opens.science/apps/elsa.
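To illustrate only the shape of these identifiers (a prefix, an underscore, and a short alphanumeric suffix, as in “eq60eng_7rs8g3bd”), here is a minimal sketch; this is not the algorithm used by {psyverse} or Elsa, and the function name is hypothetical:

```r
### Illustrative identifier generator: prefix + "_" + 8 random
### lowercase-alphanumeric characters (NOT the psyverse/Elsa algorithm)
generateExampleId <- function(prefix) {
  suffix <- paste(sample(c(letters, 0:9), 8, replace = TRUE),
                  collapse = "");
  paste0(prefix, "_", suffix);
}
set.seed(42);  ### for reproducibility of this example
exampleId <- generateExampleId("eq60eng");
```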

Entity overview (list)

This is an overview of the entities to extract, their titles and descriptions, and other details that will become part of the extraction script template that will be used for the actual extraction.


General

General information

Type: Entity Container
Identifier: general
Path in extraction script tree: source > general
Repeating: FALSE


QURID

Quasi Unique Identifier Record Identifier (QURID).

Extraction instructions: This is already available in the screening database; a QURID was added to every record. We will use this to automatically import bibliographic information available in that file, such as title, keywords, potentially abstract, etc.

Type: Extractable Entity
Identifier: qurid
Value description: A single character value that is used as an identifier and so is always mandatory and can only contain a-z, A-Z, 0-9, and underscores, and must start with a letter.
Path in extraction script tree: source > general > qurid
Value template: string.identifier
Repeating: FALSE
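The constraints in the string.identifier value description above translate to a single regular expression: the value starts with a letter and then contains only letters, digits, and underscores. The `isValidIdentifier` helper below is a hypothetical sketch, not part of metabefor:

```r
### Check the string.identifier constraints: mandatory single string that
### starts with a letter and contains only a-z, A-Z, 0-9, and underscores
isValidIdentifier <- function(x) {
  grepl("^[A-Za-z][A-Za-z0-9_]*$", x);
}
```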


Authors

This container entity holds the information about the authors, in a repeated container entity per author that holds four extractable entities.

Type: Entity Container
Identifier: sourceAuthors
Path in extraction script tree: source > general > sourceAuthors
Repeating: FALSE


Author

This container entity holds the information about one author, in four extractable entities: a unique identifier, the author’s name, their ORCID, and the RORs of their affiliations.

Type: Extractable Entity List
Identifier: sourceAuthor

Author identifier A unique identifier for this author in this source.
Author Name This author’s name.
Author ORCID This author’s ORCID.
Author’s RORs The RORs of this author’s affiliations.

Path in extraction script tree: source > general > sourceAuthors > sourceAuthor
Repeating: TRUE


Language

The language in which the article is written as an ISO 639-3 code (e.g., to list the 10 most spoken languages: “eng” for English, “zho” for Chinese, “hin” for Hindi, “spa” for Spanish, “fra” for French, “ara” for Arabic, “ben” for Bengali, “por” for Portuguese, “rus” for Russian, and “urd” for Urdu).

Extraction instructions: Use ISO 639-3 to extract this (see https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes and https://en.wikipedia.org/wiki/ISO_639-3).

Type: Extractable Entity
Identifier: language
Value description: A single character value
Path in extraction script tree: source > language
Value template: string
Repeating: FALSE
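A light sanity check for extracted language codes can catch common mistakes such as two-letter ISO 639-1 codes (“en”) slipping in. The sketch below (hypothetical helper, not part of metabefor) only checks the shape of the value, not membership in the actual ISO 639-3 code list:

```r
### Shape check for ISO 639-3 codes: exactly three lowercase letters
### (does not verify the code against the official ISO 639-3 registry)
isIso639_3shaped <- function(x) {
  grepl("^[a-z]{3}$", x);
}
```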


Methods

This container entity holds entities related to the methods used by the study.

Type: Entity Container
Identifier: methods
Path in extraction script tree: source > methods
Repeating: FALSE


Population

Extraction instructions: Human/ Non-human (to be discussed)

Type: Entity Container
Identifier: population
Path in extraction script tree: source > methods > population
Repeating: FALSE


Research approach

The research approach used in the study.

Extraction instructions: Extract “quantitative” if the study collects numeric or categorical data (i.e. data where every possible value, or the range of possible values, was defined in advance, for example by using questionnaires or other measurement instruments). Extract “qualitative” if the study collects data that is very raw and unstructured, such as free text provided by participants, transcribed interviews, or video data. Extract “mixed-methods” if the study collects both quantitative and qualitative data. Extract “sysrev” if the study does not collect primary data, but instead is a systematic review of the literature, for example meta-analyses, scoping reviews, or qualitative systematic reviews. Extract “non-empirical” if the study does not collect primary data or provide secondary analysis of pre-existing data, for example for opinion pieces or theoretical contributions. Extract “other” if the source does not fall within any of these categories.

Type: Extractable Entity
Identifier: researchApproach
Value description: A string that has to exactly match one of the values specified in the “values” column of the Coding sheet
Path in extraction script tree: source > methods > researchApproach
Value template: categorical.mandatory
Repeating: FALSE
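Because the categorical.mandatory value template requires an exact match against the allowed values, a validation step can fail fast on typos. The category labels below are taken from the extraction instructions above; the helper itself is an illustrative sketch, not part of metabefor:

```r
### Allowed research approach categories, per the extraction instructions
researchApproaches <- c("quantitative", "qualitative", "mixed-methods",
                        "sysrev", "non-empirical", "other");
### Stop with an informative error if the extracted value does not match
validateResearchApproach <- function(x) {
  if (!(x %in% researchApproaches)) {
    stop("Value '", x, "' must exactly match one of: ",
         paste(researchApproaches, collapse = ", "));
  }
  x;
}
```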


Empathy Measures

This container entity holds entities specifying how empathy was measured.

Type: Entity Container
Identifier: empathyMeasures
Path in extraction script tree: source > methods > empathyMeasures
Repeating: FALSE


Empathy Measure Identifier

The identifier for the empathy measure that was used to measure empathy.

Extraction instructions: The instruction for extracting measurement instruments and locating or producing the unique measure identifier is provided in the general extraction instructions; please refer to that section.

Type: Extractable Entity
Identifier: empathyMeasureId
Value description: A single character value that is used as an identifier and so is always mandatory and can only contain a-z, A-Z, 0-9, and underscores, and must start with a letter.
Path in extraction script tree: source > methods > empathyMeasures > empathyMeasureId
Value template: string.identifier
Repeating: FALSE


Empathy Manipulations

This container entity holds entities specifying how empathy was manipulated.

Type: Entity Container
Identifier: empathyManipulations
Path in extraction script tree: source > methods > empathyManipulations
Repeating: FALSE


Empathy Manipulation Identifier

The identifier for the empathy manipulation that was used to manipulate empathy.

Extraction instructions: The instruction for extracting manipulations and locating or producing the unique manipulation identifier is provided in the general extraction instructions; please refer to that section.

Type: Extractable Entity
Identifier: empathyManipulationId
Value description: A single character value that is used as an identifier and so is always mandatory and can only contain a-z, A-Z, 0-9, and underscores, and must start with a letter.
Path in extraction script tree: source > methods > empathyManipulations > empathyManipulationId
Value template: string.identifier
Repeating: FALSE


extractorInstructions <-
  metabefor::write_extractor_instructions(
    rxsSpecObject,
    outputFile = file.path(
        extractionPath,
        "extractor-instructions.pdf"
      )
  );

Basic Rxs tree structure

metabefor::show_rxsTree_in_rxsStructure(
  rxsSpecObject,
  output = outputPath
);
                                levelName
1  source                                
2   ¦--general                           
3   ¦   ¦--qurid                         
4   ¦   °--sourceAuthors                 
5   ¦       °--sourceAuthor              
6   ¦           ¦--sourceAuthorIdentifier
7   ¦           ¦--sourceAuthorName      
8   ¦           ¦--sourceAuthorORCID     
9   ¦           °--sourceAuthorROR       
10  ¦--language                          
11  °--methods                           
12      ¦--population                    
13      ¦--researchApproach              
14      ¦--empathyMeasures               
15      ¦   °--empathyMeasureId          
16      °--empathyManipulations          
17          °--empathyManipulationId     

Extraction instructions

cat(rxsSpecObject$rxsInstructions);


Entity overview

cat(rxsSpecObject$entityOverview_list);



Extraction script template

This is the extraction script generated based on the extraction script specification.

cat("\n\n<pre><textarea rows='40' cols='124' style='font-family:monospace;font-size:11px;white-space:pre;'>",
    unlist(rxsSpecObject$rxsTemplate),
    "</textarea></pre>\n\n",
    sep="\n");

Planning: Screening

(link to corresponding SysRevving chapter)

Example: …

Preregistration

(link to corresponding SysRevving chapter)

### Note: this chunk doesn't need to be evaluated (i.e. chunk option "eval" is
### set to FALSE), but in case it is, it writes the template to a different
### file than the version with content added and included in the next chunk.
### (For a list of included packages, see data(package='preregr'))

preregr::form_to_rmd_template(
    "genSysRev_v1",
    file = file.path(scriptPath, "preregistration-autogenerated.Rmd"),
    includeYAML = FALSE
);

### Note also that the preregistration form contains a level 2 heading

Inclusive Systematic Review Registration Form

Section: Metadata

Target discipline
target_discipline
Psychology
Title
title
Empathy and its components – conceptualization, operationalization, and measurement
Author(s) / contributor(s)
authors
(in alphabetical order based on family names) Jillian Franks; Jennifer N. Gutsell; Gjalt-Jorn Peters; Oscar Sun
Tasks and roles
tasks_and_roles
Will be copy-pasted from what {comma} produces based on the Google Sheet at https://docs.google.com/spreadsheets/d/16If0nGsL3xzfJRZA7UWh9wQL6xVIsq4Z-WfajMVQmT8/edit?usp=sharing.

Section: Review methods

Type of review
type_of_review
Scoping Review
Review stages
review_stages
Preparation, Search, Extraction, Synthesis (note: we will not screen articles; see the screening section)
Current review stage
current_stage
Preparation
Start date
start_date
2024-03-01
End date
end_date
2025-03-01
Background
background

Empathy plays a pivotal role in people’s socio-emotional well-being. In light of its significance, research on empathy has experienced considerable growth in the last two decades. Yet, the existing literature lacks clear construct definitions and agreed-upon measures that capture the multifaceted nature of empathy. There is a growing consensus that empathy can be viewed as a broad, overarching term encompassing at least three distinct sub-constructs that represent critical dimensions of empathy: an affective component involving emotions, a cognitive component related to understanding, and the act of sharing experiences. Additionally, a certain degree of self-other differentiation and a motivational component – the desire to promote others’ well-being or alleviate their suffering – are often integral to the empathic experience.

Despite this conceptual framework, the extent to which empirical studies align with this view of empathy and its constituent elements remains unclear. We are planning to conduct a large-scale scoping review to evaluate how empirical research approaches the measurement and manipulation of empathy and its components. Our review aims to address questions regarding which components of empathy receive significant attention and which remain underexplored, as well as how these components are operationalized and measured. Furthermore, this scoping review will culminate in the creation of a publicly accessible database containing machine-readable data, which can serve as a valuable resource for future systematic reviews and meta-analyses. Subsequent research could utilize our new database to explore questions like how different components of empathy affect various outcome measures differentially. Similarly, investigating the factors that either facilitate or impede these empathy components and their impact on empathy itself could be a promising future direction stemming from this project.

Primary research question(s)
primary_research_question
How does empirical research in psychology approach the conceptualization, measurement, and manipulation of empathy and its components?
Secondary research question(s)
secondary_research_question
TBD - by the entire initial group of co-authors
Expectations / hypotheses
expectations_hypotheses
We expect a large amount of heterogeneity and relatively little convergence between conceptualizations, measurements, and manipulations. We also expect these patterns to differ by component of empathy, with some components being better defined, operationalized, and measured, while others might be understudied.
Dependent variable(s) / outcome(s) / main variables
dvs_outcomes_main_vars
Main variables: conceptualizations, manipulations, and measures of empathy
Independent variable(s) / intervention(s) / treatment(s)
ivs_intervention_treatment
TBD - by the entire initial group of co-authors
Additional variable(s) / covariate(s)
additional_variables
TBD - by the entire initial group of co-authors
Software
software
R
Funding
funding
None for now
Conflicts of interest
cois
We declare that we have no conflicts of interest.
Overlapping authorships
overlapping_authorships
Anyone who is a co-author of a publication will not be involved in the extraction phase for that publication. We will compare a list of all co-authors’ publications to the included sources to avoid presenting a co-author with self-authored papers to extract.
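The comparison described above can be sketched as a simple membership check in R; the DOIs, identifiers, and the `selfAuthored` column name below are hypothetical placeholders, not the actual data:

```r
### Hypothetical list of DOIs co-authored by one extractor
coauthoredDois <- c("10.1000/example1", "10.1000/example2");

### Hypothetical included sources (QURIDs and DOIs are placeholders)
includedSources <- data.frame(
  qurid = c("qurid_a", "qurid_b", "qurid_c"),
  doi   = c("10.1000/example1", "10.1000/other", "10.1000/example2")
);

### Flag sources appearing in this co-author's publication list so they
### can be assigned to a different extractor
includedSources$selfAuthored <- includedSources$doi %in% coauthoredDois;
```

In practice, such a flag would be computed once per extractor before assigning extraction batches.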

Section: Search strategy

Databases
databases
PsycINFO
Interfaces
interfaces
We will use EBSCO to search PsycINFO.
Grey literature
grey_literature
We will not search the grey literature.
Inclusion and exclusion criteria
inclusions_exclusion_criteria
For now we do not have any exclusion criteria, but we will discuss the possibility of excluding studies of non-human animals with the full group of collaborators.
Query strings
query_strings
TI empathy
Search validation procedure
search_validation_procedure
We won’t employ any validation procedure because we are aiming to include the entire literature that has empathy in the title.
Other search strategies
other_search_strategies
None used
Procedures to contact authors
procedure_for_contacting_authors
In the first phase of the project, we will not contact authors, but we might decide to do so in future phases of the project. Any such procedure will then be described in a new preregistration.
Results of contacting authors
results_of_contacting_authors
Not Applicable
Search expiration and repetition
search_expiration_and_repetition
We will strive to develop an infrastructure for keeping the database updated. We have not yet specified repetition timepoints.
Search strategy justification
search_strategy_justification
We aim to capture the heterogeneity of definitions and conceptualizations of empathy in the psychological academic literature. PsycINFO covers most psychological literature, and we do not expect any bias from journals that are covered by other databases but not by PsycINFO. We search only titles because we are only interested in articles that primarily focus on empathy. We do not expect that articles that use a derivative term instead of the term empathy systematically use different definitions of empathy.
Miscellaneous search strategy details
misc_search_strategy_details
There are no other relevant details

Section: Screening

Screening stages
screening_stages
There will be no screening and therefore no screening stages to specify here
Screened fields / masking
screened_fields_masking
There will be no screening and therefore no masking
Used exclusion criteria
used_exclusion_criteria
There will be no screening and therefore no exclusion criteria - UPDATE IF NECESSARY
Screener instructions
screener_instructions
There will be no screening and therefore no screener instructions to specify here
Screening reliability
screening_reliability
Unspecified
Screening reconciliation procedure
screening_reconciliation_procedure
Unspecified
Sampling and sample size
sampling_and_sample_size
We’ll include all search results in the review
Screening procedure justification
screening_procedure_justification
We will not do screening because we are interested in all articles that study empathy as one of their main constructs, and it is unlikely that authors who mention empathy in the title do not study empathy as a main construct. We therefore expect screening to yield very few exclusions, which does not warrant the additional effort and resources screening would require.
Data management and sharing
screening_data_management_and_sharing
We will publicly share files in BibTeX, RIS, CSV, and XLSX formats.
Miscellaneous screening details
misc_screening_details
There are no other details to specify

Section: Extraction

Entities to extract
entities_to_extract
The entities we will extract are described in detail in the Rxs specification, as well as in the extraction script template and the extractor instructions. These are available in the OSF project at https://osf.io/5j82t (for the most current version) or in the files frozen along with this preregistration.
Extraction stages
extraction_stages
We will have two stages. The first is a training/calibration stage to improve the extractor instructions (i.e., the entity descriptions, instructions, and potentially the value/data types, e.g., by improving categories). In this stage, the first 6 sources are each extracted by three extractors: for example, the first two sources extracted by extractor A are also extracted by extractors B and C; the second two sources of extractor A are also extracted by extractors B and D; and the last two sources of extractor A are also extracted by extractors E and F. This maximizes ‘matches’ between extractors. Based on the results, the extractor instructions (etc.) will be updated, after which the training stage is repeated for a new batch of sources. Once the extractors feel that they align sufficiently, the remaining sources will be extracted independently in batches of 50 sources.
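The rotating assignment scheme for the calibration stage can be sketched as follows; the extractor labels (A–F) and source identifiers are placeholders for illustration, not the actual assignments:

```r
### Hypothetical assignment of the first 6 sources to triples of
### extractors, following the scheme described above
calibrationAssignments <- data.frame(
  source     = paste0("source_", 1:6),
  extractor1 = rep("A", 6),
  extractor2 = c("B", "B", "B", "B", "E", "E"),
  extractor3 = c("C", "C", "D", "D", "F", "F")
);

### Verify that every source is assigned to three distinct extractors
extractorsPerSource <- apply(
  calibrationAssignments[, c("extractor1", "extractor2", "extractor3")],
  1,
  function(x) length(unique(x))
);
```

Rotating the second and third extractor across pairs of sources is what yields the overlapping ‘matches’ used to compare extractions between extractors.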
Extractor instructions
extractor_instructions
The extractor instructions are available in the OSF project at https://osf.io/5j82t (for the most current version) or in the files frozen along with this preregistration.
Extractor blinding
extractor_blinding
Extractors will not be masked.
Extraction reliability
extraction_reliability
We do not plan to have independent extraction, and so will not compute agreement.
Extraction reconciliation procedure
extraction_reconciliation_procedure
We do not plan to have independent extraction, and so have nothing to reconcile (except during the training/calibration stage).
Extraction procedure justification
extraction_procedure_justification
Although we acknowledge that independent extraction would yield higher-quality results, given the large scope of this undertaking, it does not seem feasible in this first phase. However, once the initial database has been established, we may involve other collaborators, students, and/or citizens to realize double extraction.
Data management and sharing
extraction_data_management_and_sharing
Everything will be shared through the OSF repo at https://osf.io/5j82t in .rxs.rmd files as well as a variety of files with rectangular data (most likely in .RData, .omv, .sav, .xlsx, and .csv files).
Miscellaneous extraction details
misc_extraction_details
There are no additional details.

Section: Synthesis and Quality Assessment

Planned data transformations
planned_data_transformations
We do not plan any data transformations.
Missing data
missing_data
We will simply register missing data, but since we plan no analyses that might be biased by missing data, we will take no further steps.
Data validation
data_validation
We will not engage in data validation, but we will describe the quality of the reporting.
Quality assessment
quality_assessment
This review is itself about quality assessment, so we consider observations about quality our results.
Synthesis plan
synthesis_plan
We will code the definitions and measurement instruments as qualitative data by exporting them to the Reproducible Open Coding Kit (ROCK), and then importing the coded results back. We will then conduct descriptive analyses.
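The planned descriptive analyses could look roughly like the sketch below; the coded data, component labels, and column names are hypothetical placeholders standing in for the results imported back from the ROCK:

```r
### Hypothetical coded definitions after re-import (placeholder data)
codedDefinitions <- data.frame(
  qurid     = paste0("qurid_", 1:6),
  component = c("affective", "cognitive", "affective",
                "sharing", "motivational", "cognitive")
);

### Frequencies of each empathy component across the coded definitions
componentFreqs <- table(codedDefinitions$component);
```

Such frequency tables (and corresponding ggplot2 visualizations) would directly answer which components of empathy receive significant attention and which remain underexplored.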
Criteria for conclusions / inference criteria
criteria_for_conclusions
We test no hypotheses and as such have no formal inferential criteria. In addition, our aims are descriptive, and we have no criteria for drawing conclusions about those descriptives.
Synthesist masking
synthesis_masking
We have not yet established a procedure for the synthesis, but given the exploratory nature of the project, in combination with the research question not being related to analyst degrees of freedom, it is unlikely that we will have multiple synthesists.
Synthesis reliability
synthesis_reliability
We will not have multiple synthesists.
Synthesis reconciliation procedure
synthesis_reconciliation_procedure
We will not have multiple synthesists.
Publication bias analyses
publication_bias
We cannot look at publication bias, since we do not look at effect sizes or other results.
Sensitivity analyses / robustness checks
sensitivity_analysis
We do not plan any sensitivity analyses or robustness checks (and would not know how to design those given the kind of data we will extract).
Synthesis procedure justification
synthesis_procedure_justification
We feel that the nature of the analyses does not require a very comprehensive analysis plan; our results will be descriptive.
Synthesis data management and sharing
synthesis_data_management_and_sharing
Everything will be shared through the OSF repo at https://osf.io/5j82t, most likely in R files and .omv files.
Miscellaneous synthesis details
misc_synthesis_details
We have no additional details.
preregr::prereg_spec_to_pdf(
  preregrObject,
  file = file.path(preregPath, "registration-1---preregistration.pdf"),
  author = rmarkdown::metadata$author
);

Example: …

Execution

Execution: Screening

(link to corresponding SysRevving chapter)

Example: …

Screening stage 1

###-----------------------------------------------------------------------------
### Process first search batch
### Note that these are sorted by batch
###-----------------------------------------------------------------------------

# ### Generate and add quasi-unique record identifiers; note that the origin
# ### *must* be hardcoded to preserve the same QURIDs for every record. The first
# ### record should get "qurid_7mtttgrb".
# searchResults$bibHitDf$qurid <-
#   metabefor::generate_qurids(
#     nrow(searchResults$bibHitDf),
#     origin = as.POSIXct("2023-02-06 15:39:43 CET")
#     );
# 
# screenerPackages <-
#   metabefor::write_screenerPackage(
#     bibliographyDf = searchResults,
#     outputPath = screeningPath,
#     screeners = c("fm2", "il1", "av5"),
#     screenerFieldsPrefix = "stage1_",
#     basename = "stage1_",
#     duplicateField = "duplicate"
#   );

### Potentially, to screen with revtools:
# revtools::screen_titles(bibHitDf[[1]]);
# ###-----------------------------------------------------------------------------
# ### Import files
# ###-----------------------------------------------------------------------------
# 
# filesToImport <-
#   list.files(
#     screeningPath,
#     recursive = TRUE,
#     pattern = "2023-02-28.*bib",
#     full.names = TRUE
#   );
# 
# screenerAcronyms <-
#   gsub("^.*stage1_([a-zA-Z0-9]+)\\.bib$",
#        "\\1",
#        filesToImport);
# 
# # screening_stage1_imported_1 <-
# #   lapply(
# #     filesToImport,
# #     bibtex::read.bib
# #   );
# 
# screening_stage1_imported_2 <-
#   lapply(
#     filesToImport,
#     RefManageR::ReadBib
#   );
# names(screening_stage1_imported_2) <- screenerAcronyms;
# 
# screening_stage1_imported_2_df <-
#   lapply(
#     screening_stage1_imported_2,
#     as.data.frame
#   )
# names(screening_stage1_imported_2_df) <- screenerAcronyms;
# 
# ### Fix wrong column
# # screening_stage1_imported_2_df$av5$screener_av5_stage_1 <-
# #   screening_stage1_imported_2_df$av5$screener_av5_stage_2;
# 
# getScreenerCols <-
#   lapply(
#     screenerAcronyms,
#     function(x) {
#       return(
#         screening_stage1_imported_2_df[[x]][, c("qurid",
#                                               paste0("screener_", x, "_stage_1"))]);
#     }
#   );
# names(getScreenerCols) <- screenerAcronyms;
# 
# newDf <-
#   merge(
#     screening_stage1_imported_2_df$fm2,
#     getScreenerCols$il1,
#     by = "qurid"
#   );
# newDf <-
#   merge(
#     newDf,
#     getScreenerCols$av5,
#     by = "qurid"
#   );
# 
# write.csv(newDf,
#           file = file.path(screeningPath, "2023-02-28---stage1_merged.csv"));
# 
# writexl::write_xlsx(
#   newDf,
#   file.path(screeningPath, "2023-02-28---stage1_merged.xlsx")
# );

# newDf <-
#   as.data.frame(
#     readxl::read_xlsx(
#       file.path(screeningPath, "2023-02-28---stage1_merged.xlsx")
#     )
#   );

### Potentially, to screen with revtools:
# revtools::screen_titles(bibHitDf[[1]]);

Execution: Extraction

(link to corresponding SysRevving chapter)

Example: …

# test <-
#   metabefor::rxs_parseExtractionScripts(
#     path = rxsSpecPath,
#     exclude = NULL
#   );

Execution: Synthesis

(link to corresponding SysRevving chapter)

Example: …