Recommendations for Implementation of Randomized Clinical Trial Design


Adopted by the ATSA Executive Board of Directors on August 17, 2010

Numerous considerations for implementing randomized clinical trials (RCTs) have been identified (CODC, 2007; Shadish, Cook, & Campbell, 2002; Schulz, Chalmers, Hayes, & Altman, 1995). For a comprehensive assessment of relevant criteria, we refer readers to the CODC Guidelines for Evaluation of Sexual Offender Treatment Outcome Research, Parts I and II (CODC, February and March 2007).

In the standard RCT, the probability that any specific offender receives the experimental treatment is specified in advance, controlled by the experimenter, and determined by a random process. Partial randomization can also be considered, in which random assignment is restricted to specific subgroups, such as low-risk (or high-risk) offenders. To minimize post-randomization biases resulting from “resentful demoralization”, it is also possible to ask participants for their preference before randomization (randomized encouragement design) and then randomize only those who are indifferent, allowing those with a preference to receive their choice (Brewin & Bradley, 1989). Randomization can also be done at the level of sites or settings rather than individuals (e.g., King et al., 2007). In general, researchers are encouraged to use random assignment in ways that minimize the threat of harm and are sensitive to public and professional concerns about the provision of services to sexual offenders.
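
For illustration only, the following is a minimal sketch (in Python) of a pre-specified, reproducible assignment procedure of the kind described above. The allocation probability, the seed, and the option to randomize only participants who report no preference (consistent with Brewin & Bradley, 1989) are assumptions chosen for the example, not part of these recommendations.

    import random

    def assign_condition(participant_ids, p_treatment=0.5, seed=20100817, preferences=None):
        """Assign each participant to 'treatment' or 'control'.

        The allocation probability (p_treatment) and the seed are fixed in
        advance so that the assignment process is documented, controlled by
        the experimenter, and reproducible. If a preferences mapping is
        supplied, participants who expressed a preference receive their
        choice and only indifferent participants are randomized.
        """
        rng = random.Random(seed)
        assignments = {}
        for pid in participant_ids:
            preference = (preferences or {}).get(pid)
            if preference in ("treatment", "control"):
                assignments[pid] = preference  # honor the stated preference
            else:
                assignments[pid] = "treatment" if rng.random() < p_treatment else "control"
        return assignments

Site- or setting-level randomization would follow the same logic, with sites rather than individuals as the units being assigned.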

At minimum, it is suggested that investigators address the following six considerations when preparing an RCT.

1. Clear rationale for use of randomization.

Since randomization projects are challenging to implement and can be politically unpopular, the investigator should have a clear rationale defending this choice. The most apparent arguments for use of randomization are based on practical, ethical, and/or scientific rationales:

  1. Practical Rationale: there are limited treatment openings relative to the number of qualified participants, or there are two distinct interventions with equivalent practitioner support;
  2. Ethical Rationale: there is concern that existing “standard of care” interventions might be iatrogenic (e.g., when juveniles are removed from families and placed in residential group treatment facilities for years at a time, or when participants are made to recount their offenses at the start of most sessions);
  3. Scientific Rationale: existing studies are fatally or severely flawed; for example, most or all failed to adequately control for potential “third variable explanations” of between-group differences (e.g., between-group differences identified but based on pre-existing groups that were selected on factors that might influence recidivism).


2. Well-defined treatment and comparison intervention conditions.

Effective interventions are useful only to the extent that they can be implemented and disseminated with integrity beyond the original developer(s).

  1. To facilitate replication, interventions must be sufficiently specified. At minimum, this means an accompanying therapist training manual with session-by-session protocols or (for more individually-based interventions) clear criteria for identifying and addressing client needs.
  2. To facilitate treatment integrity, quality assurance (QA) procedures and/or measurement instruments should be developed and validated. Examples of QA procedures include specified therapist supervision protocols, coding taped therapy sessions for content analysis, and collecting client and/or therapist reports regarding session content.
  3. To facilitate interpretation of RCT outcomes, it also is necessary to specify the control condition to the extent possible.


3. Group equivalence.

Random assignment is the best method for ensuring the equivalence of groups prior to treatment. Nevertheless, significant differences can still occur, particularly when groups are small (i.e., fewer than 20 per treatment condition). One way of reducing the probability of pre-existing differences is to match on risk-relevant characteristics prior to randomization. Given the difficulty of matching on multiple criteria, researchers should consider stratifying offenders on an empirically based risk measure (a minimal sketch of stratified assignment appears at the end of this section). Minimally, investigators should include a baseline assessment battery that measures relevant factors (i.e., factors known or believed to influence treatment success and recidivism) prior to the start of the treatment program. These factors vary depending upon the client population of interest (e.g., adults versus juveniles). For RCTs that target adult sex offenders, factors include but are not limited to the following:

  1. Factors predictive of recidivism, such as deviant sexual arousal profile, psychopathy, use of substances, current mood state, prior sexual and nonsexual violent offenses, and victim characteristics. Many of these are addressed in validated actuarial risk assessment instruments.
  2. Factors predictive of treatment completion, such as age and education. Of note, many factors predictive of recidivism also predict treatment completion (e.g., psychopathy).

For RCTs that target juveniles who have sexually offended or children with sexual behaviour problems, relevant factors include (but are not limited to) family support (e.g., caregiver-child relationship), parenting practices (e.g., measures of parental supervision, discipline), peer relationships (e.g., prosocial and delinquent peer associations), and school achievement and affiliation (e.g., enrolment status, grades, youth interest in/success with curricular and extracurricular activities).
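
As noted above, stratifying on an empirically based risk measure before randomization can reduce the probability of pre-existing group differences. The following is a minimal, hypothetical sketch of stratified assignment; the field names (id, risk_band) and the alternation scheme within strata are illustrative assumptions, not a prescribed procedure.

    import random
    from collections import defaultdict

    def stratified_assignment(participants, strata_key="risk_band", seed=2010):
        """Randomly assign participants to 'treatment' or 'control' within
        strata (e.g., low/moderate/high bands on an actuarial risk measure)
        so that the two groups are balanced on the stratifying variable.

        `participants` is a list of dicts; `strata_key` names the field
        holding each participant's risk band (a hypothetical field name).
        """
        rng = random.Random(seed)
        by_stratum = defaultdict(list)
        for person in participants:
            by_stratum[person[strata_key]].append(person)

        assignments = {}
        for members in by_stratum.values():
            rng.shuffle(members)
            # Alternate assignment within each shuffled stratum so the two
            # conditions differ by at most one participant per stratum.
            for i, person in enumerate(members):
                assignments[person["id"]] = "treatment" if i % 2 == 0 else "control"
        return assignments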


4. Program evaluation/treatment outcome.

Program outcome or treatment evaluations often rely heavily upon therapist report. Unfortunately, therapist report is a relatively poor measure of treatment success when success is defined as reduced recidivism risk. Moreover, interventions typically attempt to influence a host of client behaviours, some of which are clearly linked to recidivism risk (e.g., altering deviant arousal profiles) and some of which are less obviously linked to recidivism (e.g., improving family relationships). Consequently, objective instruments that assess therapy process variables and a variety of treatment outcome variables should be included in standardized follow-up evaluations. Where possible, someone other than the treating therapist or the investigator should collect follow-up data, to minimize the potential for bias. Relevant factors to assess include, but are not limited to, the following:

  1. Treatment process variables, such as group cohesion, leader support, and therapeutic relationship
  2. New sexual and nonsexual official charges/convictions
  3. New sexual and nonsexual offenses as indicated in other official records, such as child protective services documents
  4. Client self-report of criminal and delinquent behaviour using a validated instrument
  5. Client self-report of employment stability, housing stability, family and other social supports
  6. Client treatment satisfaction questionnaire using a validated instrument
  7. Therapist evaluation of treatment outcome


5. Participant attrition.

Investigators should have a plan for minimizing and addressing attrition. Attrition can occur with respect to follow-up data collection and/or treatment. For example, clients who complete treatment might refuse or be unavailable for follow-up research assessment protocols. Some clients will fail to complete treatment, due either to benign factors (e.g., a new job requires moving out of town) or to factors likely related to outcomes such as recidivism (e.g., re-incarceration for a new offense, quitting or being removed from treatment due to therapist-client interaction problems). Ideally, analyses will involve all participants regardless of whether they completed treatment or contributed to all assessments (i.e., an intention-to-treat approach). Removing treatment failures/drop-outs from analyses often is unfairly advantageous to the experimental treatment condition: it typically is easier to identify clients who failed to complete the experimental treatment, whereas there might be relatively little information on the treatment success of clients in the control condition. Thus, undetected treatment failures are more likely to remain in the control condition and might be more likely to recidivate or to demonstrate less improvement on other outcome factors.

Limiting treatment and research attrition is, of course, preferable to losing clients or client data. Steps to limit attrition include the following:

  1. Reduce barriers to research participation by scheduling assessments at times and places convenient to the respondent
  2. Make multiple attempts at data collection before counting an assessment as missing
  3. Reimburse respondents for time spent completing research assessment protocols
  4. Conduct treatment interventions in ways that demonstrate respect and empathy for clients
  5. Ensure accessibility of treatment (e.g., schedule sessions outside of normal school or work hours; offer flexible appointment times; avoid unnecessarily restrictive rules, such as contracting to end treatment if two sessions are missed)


6. Data analysis.

It is critical that investigators specify, a priori, objective hypotheses that will be tested with specific statistical methods. While follow-up analyses might be suggested by the results, the primary analyses should be pre-specified, so as to avoid the risk of identifying between-groups differences due to chance rather than to treatment effects. Important considerations for analyses include:

  1. Obtaining sufficient sample sizes to provide power to detect between-groups differences

    1. The major determinant of statistical power is the expected number of recidivists in the comparison group, which should be at least 20. The number of recidivists is a function of the overall sample size and the follow-up time. For example, if the expected recidivism rate is 10% after 5 years, a total sample of 400 (200 for treatment; 200 for comparison) is required to expect 20 recidivism events in the comparison group (this arithmetic is illustrated in the sketch following this list).
    2. Large sample sizes are also required to test the extent to which bias has been introduced by unmeasured factors or known corruption of the experimental design (e.g., differential attrition).
    3. In conditions in which large samples are not possible, researchers are still encouraged to use random assignment. High quality studies of small samples (e.g., 10 per group) can meaningfully contribute to cumulative knowledge through meta-analysis.
  2. Obtaining data from reliable and valid sources
  3. Designing analyses to control for pre-existing differences that might have occurred despite randomization
  4. Adequately addressing missing data
  5. Including equivalent follow-up periods for treatment and control groups or utilizing methods (e.g., survival analysis) that account for variable follow-up periods
  6. Allowing sufficient follow-up time to detect recidivism events. Researchers also need to consider that there is a delay between the date an offense occurs and the date it is recorded in a database available to researchers.
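
To make the sample size arithmetic in item 1 above concrete, the following is a minimal sketch. The function name and the assumption of equal allocation across conditions are illustrative; in practice the expected recidivism rate would be drawn from base-rate estimates for the study population and the planned follow-up period.

    import math

    def comparison_group_size(expected_rate, target_events=20):
        """Number of comparison-group participants needed for the expected
        count of recidivism events to reach target_events, given the
        recidivism rate expected over the planned follow-up period."""
        return math.ceil(target_events / expected_rate)

    # Worked example from item 1: a 10% expected rate over 5 years implies
    # 200 comparison participants and, with equal allocation, 400 in total.
    n_comparison = comparison_group_size(expected_rate=0.10, target_events=20)
    n_total = 2 * n_comparison
    print(n_comparison, n_total)  # 200 400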


Please refer to the ATSA Position Statement Supporting the Use of Randomized Control Trials for the Evaluation of Sexual Offender Treatment for an overview of this topic.


References

Berliner, L., & Saunders, B. E. (1996). Treating fear and anxiety in sexually abused children: Results of a controlled 2-year follow-up study. Child Maltreatment, 1, 294-309.

Bonner, B. L., Walker, C. E., & Berliner, L. (1999). Children with sexual behavior problems: Assessment and treatment (Final report, Grant No. 90-CA-1469). Washington, DC: U.S. Department of Health and Human Services, National Clearinghouse on Child Abuse and Neglect.

Borduin, C. M., Henggeler, S. W., Blaske, D. M., & Stein, R. (1990). Multisystemic treatment of adolescent sexual offenders. International Journal of Offender Therapy and Comparative Criminology, 34, 105–113.

Borduin, C. M., Schaeffer, C. M., & Heiblum, N. (2009). A randomized clinical trial of multisystemic therapy with juvenile sexual offenders: Effects on youth social ecology and criminal activity. Journal of Consulting and Clinical Psychology, 77, 26-37.

Brewin, C. R., & Bradley, C. (1989). Patient preferences and randomised clinical trials. British Medical Journal, 299, 313-315.

Cohen, J. A., Deblinger, E., Mannarino, A. P., & Steer, R. A. (2004). A multisite, randomized controlled trial for children with sexual abuse-related PTSD symptoms. Journal of the American Academy of Child and Adolescent Psychiatry, 43, 393-402.

Cohen, J. A., & Mannarino, A. P. (1998). Interventions for sexually abused children: Initial treatment outcome findings. Child Maltreatment, 3, 17-26.

Cohen, J. A., & Mannarino, A. P. (1996). A treatment outcome study for sexually abused preschool children: Initial findings. Journal of the American Academy of Child and Adolescent Psychiatry, 35, 42-50.

Collaborative Outcome Data Committee (February, 2007). Sexual Offender Treatment Outcome Research: CODC Guidelines for Evaluation. Part 1: Introduction and Overview. Ottawa, Canada: Public Safety Canada, Cat. No.: PS4-38/1-2007E-PDF ISBN No.: 978-0-662-45553-0. Available at http://www.publicsafety.gc.ca/res/cor/rep/_fl/CODC_07_e.pdf

Collaborative Outcome Data Committee (March, 2007). Sexual Offender Treatment Outcome Research: CODC Guidelines for Evaluation. Part 2: CODC Guidelines. Ottawa, Canada: Public Safety Canada, Cat. No.: PS3-1/2007-3E ISBN No.: 978-0-662-46069-5. Available at http://www.publicsafety.gc.ca/res/cor/rep/codc_200703-eng.aspx

Deblinger, E., Stauffer, L. B., & Steer, R. A. (2001). Comparative efficacies of supportive and cognitive behavioral group therapies for young children who have been sexually abused and their nonoffending mothers. Child Maltreatment, 6, 332-343.

Hanson, R. K., Bourgon, G., Helmus, L., & Hodgson, S. (2009). The principles of effective correctional treatment also apply to sexual offenders: A meta-analysis. Criminal Justice and Behavior, 36, 865-891.

Hanson, R.K., Gordon, A., Harris, A.J., Marques, J.K., Murphy, W., Quinsey, V.L., & Seto, M.C. (2002). First report of the collaborative outcome data project on the effectiveness of psychological treatment for sex offenders. Sexual Abuse: A Journal of Research and Treatment, 14, 169-194.

Henggeler, S. W., Letourneau, E. J., Chapman, J. E., Borduin, C. M., Schewe, P. A., & McCart, M. R. (in press). Mediators of change for multisystemic therapy with juvenile sexual offenders. Journal of Consulting and Clinical Psychology.

King, G., Gakidou, E., Ravishankar, N., Moore, R. T., Lakin, J., Vargas, M., Téllez-Rojo, M. M., Ávila, J. E. H., Ávila, M. H., & Llamas, H. H. (2007). A “politically robust” experimental design for public policy evaluation, with application to the Mexican Universal Health Insurance Program. Journal of Policy Analysis and Management, 26, 479-506.

Letourneau, E. J., Henggeler, S.W., Borduin, C. M., Schewe, P. A., McCart, M. R., Chapman, J. E., & Saldana, L. (2009). Multisystemic therapy for juvenile sexual offenders: 1-year results from a randomized effectiveness trial. Journal of Family Psychology, 23, 89-102.

Lösel, F. & Schmucker, M. (2005). The effectiveness of treatment for sexual offenders: A comprehensive meta-analysis. Journal of Experimental Criminology, 1, 117–146.

Marques, J.K., Wiederanders, M., Day, D.M., Nelson, C., & van Ommeren, A. (2005). Effects of a relapse prevention program on sexual recidivism: Final results from California’s Sex Offender Treatment and Evaluation Project (SOTEP). Sexual Abuse: A Journal of Research and Treatment, 17, 79-107.

Schulz, K. F., Chalmers, I., Hayes, R. J., & Altman, D. G. (1995). Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. Journal of the American Medical Association, 273, 408-412.

Shadish, W.R., Cook, T.D., & Campbell, D.T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

St. Amand, A., Bard, D. E., & Silovsky, J. F. (2008). Meta-analysis of treatment for child sexual behavior problems: Practice elements and outcomes. Child Maltreatment, 12, 145-166.