


How To Calculate Effect Size In R

Effect Size Calculation & Conversion


A problem meta-analysts often face is that suitable "raw" effect size data cannot be extracted from all included studies. Most functions in the {meta} package, such as metacont (Chapter 4.2.2) or metabin (Chapter 4.2.3.1), can only be used when complete raw effect size data is available.

In practice, this frequently leads to difficulties. Some published articles, particularly older ones, do not report results in a way that allows us to extract the needed (raw) effect size information. It is not uncommon to find that a study reports the results of a \(t\)-test, one-way ANOVA, or \(\chi^2\)-test, but not the group-wise mean and standard deviation, or the number of events in the study conditions, that we need for our meta-analysis.

The good news is that we can sometimes convert reported information into the desired effect size format. This makes it possible to include affected studies in a meta-analysis with pre-calculated data (Chapter 4.2.1) using metagen. For example, we can convert the results of a two-sample \(t\)-test to a standardized mean difference and its standard error, and then use metagen to perform a meta-analysis of pre-calculated SMDs. The {esc} package (Lüdecke 2019) provides several helpful functions which allow us to perform such conversions directly in R.

Mean & Standard Error


When computing SMDs or Hedges' \(g\) from the mean and standard error, we can make use of the fact that the standard deviation of a mean is defined as its standard error, with the square root of the sample size "factored out" (Thalheimer and Cook 2002):

\[\begin{equation} \text{SD} =\text{SE}\sqrt{n} \tag{17.1} \end{equation}\]
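As a quick illustration (an addition, not part of the original text), equation (17.1) can be applied by hand to the group-1 values used in the example below:

se_grp1 <- 1.5   # reported standard error of the mean
n_grp1  <- 50    # sample size of the group

se_grp1 * sqrt(n_grp1)   # implied standard deviation, roughly 10.61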

We can calculate the SMD or Hedges' \(g\) using the esc_mean_se function. Here is an example:

library(esc)

esc_mean_se(grp1m = 8.5,    # mean of group 1
            grp1se = 1.5,   # standard error of group 1
            grp1n = 50,     # sample in group 1
            grp2m = 11,     # mean of group 2
            grp2se = 1.8,   # standard error of group 2
            grp2n = 60,     # sample in group 2
            es.type = "d")  # convert to SMD; use "g" for Hedges' g
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: mean and se to effect size d
##     Effect Size:  -0.2012
##  Standard Error:   0.1920
##        Variance:   0.0369
##        Lower CI:  -0.5774
##        Upper CI:   0.1751
##          Weight:  27.1366

Regression Coefficients


It is possible to calculate SMDs, Hedges' \(g\) or a correlation \(r\) from standardized or unstandardized regression coefficients (Lipsey and Wilson 2001). For unstandardized coefficients, we can use the esc_B function in {esc}. Here is an example:

library(esc)

esc_B(b = 3.3,       # unstandardized regression coefficient
      sdy = 5,       # standard deviation of the predicted variable y
      grp1n = 100,   # sample size of the first group
      grp2n = 150,   # sample size of the second group
      es.type = "d") # convert to SMD; use "g" for Hedges' g
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: unstandardized regression coefficient to effect size d
##     Effect Size:   0.6962
##  Standard Error:   0.1328
##        Variance:   0.0176
##        Lower CI:   0.4359
##        Upper CI:   0.9565
##          Weight:  56.7018
esc_B(b = 2.9,       # unstandardized regression coefficient
      sdy = 4,       # standard deviation of the predicted variable y
      grp1n = 50,    # sample size of the first group
      grp2n = 50,    # sample size of the second group
      es.type = "r") # convert to correlation
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: unstandardized regression coefficient 
##                  to effect size correlation
##     Effect Size:   0.3611
##  Standard Error:   0.1031
##        Variance:   0.0106
##        Lower CI:   0.1743
##        Upper CI:   0.5229
##          Weight:  94.0238
##      Fisher's z:   0.3782
##       Lower CIz:   0.1761
##       Upper CIz:   0.5803

Standardized regression coefficients can be transformed using esc_beta.

esc_beta(beta = 0.32,   # standardized regression coefficient
         sdy = 5,       # standard deviation of the predicted variable y
         grp1n = 100,   # sample size of the first group
         grp2n = 150,   # sample size of the second group
         es.type = "d") # convert to SMD; use "g" for Hedges' g
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: standardized regression coefficient to effect size d
##     Effect Size:   0.6867
##  Standard Error:   0.1327
##        Variance:   0.0176
##        Lower CI:   0.4266
##        Upper CI:   0.9468
##          Weight:  56.7867
esc_beta(beta = 0.37,   # standardized regression coefficient
         sdy = 4,       # standard deviation of the predicted variable y
         grp1n = 50,    # sample size of the first group
         grp2n = 50,    # sample size of the second group
         es.type = "r") # convert to correlation
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: standardized regression coefficient 
##                  to effect size correlation
##     Effect Size:   0.3668
##  Standard Error:   0.1033
##        Variance:   0.0107
##        Lower CI:   0.1803
##        Upper CI:   0.5278
##          Weight:  93.7884
##      Fisher's z:   0.3847
##       Lower CIz:   0.1823
##       Upper CIz:   0.5871

Correlations


For equally sized groups (\(n_1=n_2\)), we can use the following formula to derive the SMD from the point-biserial correlation (Lipsey and Wilson 2001, chap. 3).

\[\begin{equation} r_{pb} = \frac{\text{SMD}}{\sqrt{\text{SMD}^2+4}} ~~~~~~~~ \text{SMD}=\frac{2r_{pb}}{\sqrt{1-r^2_{pb}}} \tag{17.2} \end{equation}\]

A different formula has to be used for unequally sized groups (Aaron, Kromrey, and Ferron 1998):

\[\begin{align} r_{pb} &= \frac{\text{SMD}}{\sqrt{\text{SMD}^2+\dfrac{(N^2-2N)}{n_1n_2}}} \notag \\ \text{SMD} &= \dfrac{r_{pb}}{\sqrt{(1-r^2_{pb})\left(\frac{n_1}{N}\times\left(1-\frac{n_1}{N}\right)\right)}} \tag{17.3} \end{align}\]
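To make the conversion more tangible, here is equation (17.3) applied by hand, using the same values as the esc_rpb example below (this manual check is an addition to the text):

r_pb <- 0.25           # point-biserial correlation
n1 <- 99; n2 <- 120    # group sizes
N  <- n1 + n2          # total sample size

# SMD for unequally sized groups, equation (17.3)
r_pb / sqrt((1 - r_pb^2) * (n1/N) * (1 - n1/N))
# roughly 0.5188, matching the esc_rpb output below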

To convert \(r_{pb}\) to an SMD or Hedges' \(g\), we can use the esc_rpb function.

library(esc)

esc_rpb(r = 0.25,      # point-biserial correlation
        grp1n = 99,    # sample size of group 1
        grp2n = 120,   # sample size of group 2
        es.type = "d") # convert to SMD; use "g" for Hedges' g
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: point-biserial r to effect size d
##     Effect Size:   0.5188
##  Standard Error:   0.1380
##        Variance:   0.0190
##        Lower CI:   0.2483
##        Upper CI:   0.7893
##          Weight:  52.4967

One-Way ANOVAs


We can also derive the SMD from the \(F\)-value of a one-way ANOVA with two groups. Such ANOVAs can be identified by looking at the degrees of freedom. In a one-way ANOVA with two groups, the degrees of freedom should always start with 1 (e.g. \(F_{\text{1,147}}=5.31\)).

The formula used for the transformation looks like this (based on Rosnow and Rosenthal 1996; Rosnow, Rosenthal, and Rubin 2000; see Thalheimer and Cook 2002):

\[\begin{equation} \text{SMD} = \sqrt{ F\left(\frac{n_1+n_2}{n_1 n_2}\right)\left(\frac{n_1+n_2}{n_1+n_2-2}\right)} \tag{17.4} \end{equation}\]
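As an added illustration, equation (17.4) can be applied by hand to the values from the esc_f example below. Note that esc_f additionally applies the small-sample correction when Hedges' \(g\) is requested, so its output differs slightly:

f  <- 5.04             # F-value of the one-way ANOVA
n1 <- 519; n2 <- 528   # group sizes

# SMD from F, equation (17.4)
sqrt(f * ((n1 + n2)/(n1 * n2)) * ((n1 + n2)/(n1 + n2 - 2)))
# roughly 0.139; esc_f reports g = 0.1387 after the small-sample correction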

To calculate the SMD or Hedges' \(g\) from \(F\)-values, we can use the esc_f function. Here is an example:

esc_f(f = 5.04,      # F value of the one-way ANOVA
      grp1n = 519,   # sample size of group 1
      grp2n = 528,   # sample size of group 2
      es.type = "g") # convert to Hedges' g; use "d" for SMD
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: F-value (one-way-Anova) to effect size Hedges' g
##     Effect Size:   0.1387
##  Standard Error:   0.0619
##        Variance:   0.0038
##        Lower CI:   0.0174
##        Upper CI:   0.2600
##          Weight: 261.1022

Two-Sample \(t\)-Tests


An effect size expressed as a standardized mean difference can also be derived from an independent two-sample \(t\)-test value, using the following formula (Rosnow, Rosenthal, and Rubin 2000; Thalheimer and Cook 2002):

\[\begin{equation} \text{SMD} = \frac{t(n_1+n_2)}{\sqrt{(n_1+n_2-2)(n_1n_2)}} \tag{17.5} \end{equation}\]

In R, we can calculate the SMD or Hedges' \(g\) from a \(t\)-value using the esc_t function. Here is an example:

esc_t(t = 3.3,       # t-value
      grp1n = 100,   # sample size of group 1
      grp2n = 150,   # sample size of group 2
      es.type = "d") # convert to SMD; use "g" for Hedges' g
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: t-value to effect size d
##     Effect Size:   0.4260
##  Standard Error:   0.1305
##        Variance:   0.0170
##        Lower CI:   0.1703
##        Upper CI:   0.6818
##          Weight:  58.7211

\(p\)-Values


At times, studies only report the effect size (e.g. a value of Cohen's \(d\)), the \(p\)-value of that effect, and nothing more. Yet, to pool results in a meta-analysis, we need a measure of the precision of the effect size, preferably the standard error.

In such cases, we must estimate the standard error from the \(p\)-value of the effect size. This is possible for effect sizes based on differences (i.e. SMDs), or ratios (i.e. risk or odds ratios), using the formulas by Altman and Bland (2011). These formulas are implemented in the se.from.p function in R.

Assuming a study with \(N=\) 71 participants, reporting an effect size of \(d=\) 0.71 for which \(p=\) 0.013, we can calculate the standard error like this:
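The call producing the output below is missing from this copy of the chapter. Here is a reconstruction using se.from.p from {dmetar}; the argument names effect.size, p, N and effect.size.type are assumed from that package's documentation:

library(dmetar)

se.from.p(effect.size = 0.71,
          p = 0.013,
          N = 71,
          effect.size.type = "difference")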

##   EffectSize StandardError StandardDeviation  LLCI  ULCI
## 1       0.71         0.286             2.410 0.149 1.270

For a study with \(N=\) 200 participants reporting an effect size of OR = 0.91 with \(p=\) 0.38, the standard error is calculated this way:
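Again, the call itself is missing in this copy; a reconstruction under the same assumptions, this time with effect.size.type set to "ratio":

se.from.p(effect.size = 0.91,
          p = 0.38,
          N = 200,
          effect.size.type = "ratio")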

##                        [,1]
## logEffectSize        -0.094
## logStandardError      0.105
## logStandardDeviation  1.498
## logLLCI              -0.302
## logULCI               0.113
## EffectSize            0.910
## LLCI                  0.739
## ULCI                  1.120

When effect.size.type = "ratio", the function automatically also calculates the log-transformed effect size and standard error, which are needed to use the metagen function (Chapter 4.2.1).
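A minimal sketch of how such log-transformed values could then be pooled with metagen (this example and its column names are hypothetical, not part of the original chapter):

library(meta)

# 'dat.or' is a hypothetical data frame with one row per study,
# containing the log odds ratio and its standard error
m.or <- metagen(TE = logOR,        # log-transformed odds ratio
                seTE = logSE,      # standard error of the log odds ratio
                studlab = author,  # study labels
                sm = "OR",         # display results as odds ratios
                data = dat.or)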

\(\chi^2\) Tests


To convert a \(\chi^2\) statistic to an odds ratio, the esc_chisq function can be used (assuming that d.f. = 1; e.g. \(\chi^2_1 = 8.7\)). Here is an example:

esc_chisq(chisq = 7.9,        # chi-squared value
          totaln = 100,       # total sample size
          es.type = "cox.or") # convert to odds ratio
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: chi-squared-value to effect size Cox odds ratios
##     Effect Size:   2.6287
##  Standard Error:   0.3440
##        Variance:   0.1183
##        Lower CI:   1.3394
##        Upper CI:   5.1589
##          Weight:   8.4502

Number Needed To Treat


Effect sizes such as Cohen's \(d\) or Hedges' \(g\) are often difficult to interpret from a practical standpoint. Imagine that we found an intervention effect of \(g=\) 0.35 in our meta-analysis. How can we communicate what such an effect means to patients, public officials, medical professionals, or other stakeholders?

To make it easier for others to understand the results, meta-analyses also often report the number needed to treat (NNT). This measure is most commonly used in medical research. It signifies how many additional patients must receive the treatment under study to prevent one additional negative event (e.g. relapse) or achieve one additional positive outcome (e.g. symptom remission, response). If NNT = 3, for example, we can say that three individuals must receive the treatment to avoid one additional relapse case; or that three patients must be treated to achieve one additional case of reliable symptom remission, depending on the research question.

When we are dealing with binary effect size data, calculation of NNTs is relatively easy. The formula looks like this:

\[\begin{equation} \text{NNT} = (p_{e_{\text{treat}}}-p_{e_{\text{control}}})^{-1} \tag{17.6} \end{equation}\]

In this formula, \(p_{e_{\text{treat}}}\) and \(p_{e_{\text{control}}}\) are the proportions of participants who experienced the event in the treatment and control group, respectively. These proportions are identical to the "risks" used to calculate the risk ratio (Chapter 3.3.2.1), and are also known as the experimental group event rate (EER) and control group event rate (CER). Given its formula, the NNT can also be described as the inverse of the (absolute) risk difference.
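For illustration (the event rates here are made up for this example), equation (17.6) translates directly into R:

p_treat   <- 0.3   # EER: event rate in the treatment group
p_control <- 0.5   # CER: event rate in the control group

# NNT as the inverse of the absolute risk difference
1 / abs(p_treat - p_control)
# returns 5: five patients need to be treated to avoid one additional event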

Converting standardized mean differences or Hedges' \(g\) to an NNT is more complicated. There are two commonly used methods:

  • The method by Kraemer and Kupfer (2006), which calculates the NNT from an area under the curve (AUC), defined as the probability that a patient in the treatment group has an outcome preferable to the one in the control group. This method allows us to calculate the NNT directly from an SMD or \(g\) without any extra information.

  • The method by Furukawa and Leucht calculates NNT values from SMDs using the CER, or a reasonable estimate thereof. Furukawa's method has been shown to be superior in estimating the true NNT value compared to the Kraemer & Kupfer method (Furukawa and Leucht 2011). If we can make reasonable estimates of the CER, Furukawa's method should therefore always be preferred.

When we use risk or odds ratios as effect size measures, NNTs can be calculated directly from {meta} objects using the nnt function. After running our meta-analysis using metabin (Chapter 4.2.3.1), we simply have to plug the results into the nnt function. Here is an example:

library(meta)
data(Olkin1995)

# Run meta-analysis with binary effect size data
m.b <- metabin(ev.exp, n.exp, ev.cont, n.cont,
               data = Olkin1995,
               sm = "RR")

nnt(m.b)
## Common effect model:  
## 
##     p.c     NNT lower.NNT upper.NNT
##  0.0000     Inf       Inf       Inf
##  0.1440 30.5677   26.1222   37.2386
##  0.3750 11.7383   10.0312   14.3001
## 
## Random effects model:  
## 
##     p.c     NNT lower.NNT upper.NNT
##  0.0000     Inf       Inf       Inf
##  0.1440 30.1139   24.0662   41.3519
##  0.3750 11.5641    9.2417   15.8796

The nnt function provides the number needed to treat for different assumed CERs. The three lines show the result for the minimum, mean, and maximum CER in our data set. The mean CER estimate is the "typical" NNT that is usually reported.

It is also possible to use nnt with metagen models, as long as the summary measure sm is either "RR" or "OR". For such models, we also need to specify the assumed CER using the p.c argument in nnt. Here is an example using the m.gen_bin meta-analysis object we created in Chapter 4.2.3.1.5:

# Also show fixed-effect model results
m.gen_bin <- update.meta(m.gen_bin, fixed = TRUE)

nnt(m.gen_bin, p.c = 0.1) # Use a CER of 0.1
## Common effect model:  
## 
##     p.c     NNT lower.NNT upper.NNT
##  0.1000 -9.6906  -11.6058   -8.2116
## 
## Random effects model:  
## 
##     p.c     NNT lower.NNT upper.NNT
##  0.1000 -9.7870  -16.4843   -6.4761

Standardized mean differences or Hedges' \(g\) can be converted to the NNT using the NNT function in {dmetar}.

To apply the Kraemer & Kupfer method, we only have to provide the NNT function with an effect size (SMD or \(g\)). Furukawa's method is automatically used as soon as a CER value is supplied.
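The call producing the first output below is not shown in this copy. Judging from the parallel example that follows, it is presumably a call to NNT with only the effect size supplied, so that the Kraemer & Kupfer method is used:

library(dmetar)

NNT(d = 0.245)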

## Kraemer & Kupfer method used. 
## [1] 7.270711
NNT(d = 0.245, CER = 0.35)
## Furukawa & Leucht method used. 
## [1] 10.61533

A Number to be Treated with Care: Criticism of the NNT

While common, the usage of NNTs to communicate the results of clinical trials is not uncontroversial. Criticisms include that lay people often misunderstand them (despite purportedly being an "intuitive" alternative to other effect size measures, Christensen and Kristiansen 2006); and that researchers often calculate NNTs incorrectly (Mendes, Alves, and Batel-Marques 2017).

Furthermore, it is not possible to calculate reliable standard errors (and confidence intervals) of NNTs, which means that they cannot be used in meta-analyses (Hutton 2010). It is only possible to convert results to the NNT after pooling has been conducted using another effect size measure.

Multi-Arm Studies


To avoid unit-of-analysis errors (Chapter 3.5.2), it is sometimes necessary to pool the mean and standard deviation of two or more trial arms before calculating a (standardized) mean difference. To pool continuous effect size data of two groups, we can use these equations:

\[\begin{align} n_{\text{pooled}} &= n_1 + n_2 \\ m_{\text{pooled}} &= \frac{n_1m_1+n_2m_2}{n_1+n_2} \\ SD_{\text{pooled}} &= \sqrt{\frac{(n_1-1)SD^{2}_{1}+ (n_2-1)SD^{2}_{2}+\frac{n_1n_2}{n_1+n_2}(m^{2}_1+m^{2}_2-2m_1m_2)} {n_1+n_2-1}} \end{align}\]
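Applied by hand (this worked check is an addition, using the same numbers as the pool.groups example below), the equations look like this:

n1 <- 50; n2 <- 50      # sample sizes
m1 <- 3.5; m2 <- 4      # means
sd1 <- 3; sd2 <- 3.8    # standard deviations

n_pooled  <- n1 + n2
m_pooled  <- (n1*m1 + n2*m2) / (n1 + n2)
sd_pooled <- sqrt(((n1-1)*sd1^2 + (n2-1)*sd2^2 +
                   (n1*n2/(n1+n2)) * (m1^2 + m2^2 - 2*m1*m2)) /
                  (n1 + n2 - 1))

c(m_pooled, sd_pooled, n_pooled)
# roughly 3.75, 3.415, 100 -- the same values pool.groups returns below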

We can apply these formulas in R using the pool.groups function.

Here is an example:

library(dmetar)

pool.groups(n1 = 50,    # sample size group 1
            n2 = 50,    # sample size group 2
            m1 = 3.5,   # mean group 1
            m2 = 4,     # mean group 2
            sd1 = 3,    # sd group 1
            sd2 = 3.8)  # sd group 2
##   Mpooled SDpooled Npooled
## 1    3.75 3.415369     100

Aggregation of Effect Sizes


The aggregate function in {metafor} can be used to aggregate several dependent, pre-calculated effect sizes into one estimate, for example because they are part of the same study or cluster. This is a way to avoid the unit-of-analysis error (see Chapter 3.5.2), but requires us to assume a value for the within-study correlation, which is typically unknown. Another (and often preferable) way to deal with effect size dependencies are (correlated) hierarchical models, which are illustrated in Chapter 10.

In this example, we aggregate the effect sizes of the Chernobyl data set (see Chapter 10.2), so that each study only provides one effect size:

library(metafor)
library(dmetar)

data("Chernobyl")

# Convert 'Chernobyl' data to 'escalc' object
Chernobyl <- escalc(yi = z,          # Effect size
                    sei = se.z,      # Standard error
                    data = Chernobyl)

# Aggregate effect sizes on study level
# We assume a correlation of rho=0.6
Chernobyl.agg <- aggregate(Chernobyl, 
                           cluster = author,
                           rho = 0.6)

# Show aggregated results
Chernobyl.agg[,c("author", "yi", "vi")]
##                       author     yi     vi 
## 1 Aghajanyan & Suskov (2009) 0.2415 0.0079 
## 2     Alexanin et al. (2010) 1.3659 0.0012 
## 3             Bochkov (1993) 0.2081 0.0014 
## 4      Dubrova et al. (1996) 0.3068 0.0132 
## 5      Dubrova et al. (1997) 0.4453 0.0110
## [...]

Please note that aggregate returns the aggregated effect sizes yi as well as their variance vi, the square root of which is the standard error.
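As a minimal sketch (an addition, not part of the original chapter), the aggregated values could then be pooled with metagen, using the square root of vi as the standard error:

library(meta)

# Standard error = square root of the aggregated variance
Chernobyl.agg$sei <- sqrt(Chernobyl.agg$vi)

m.agg <- metagen(TE = yi,           # aggregated effect sizes
                 seTE = sei,        # their standard errors
                 studlab = author,  # one label per study
                 data = Chernobyl.agg)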

\[\tag*{$\blacksquare$}\]

Source: https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/es-calc.html