Article Category: Research Article
Online Publication Date: 26 Dec 2024

Building bridges from research outcomes to clinical practice decisions

Page Range: 141 – 148
DOI: 10.2319/Bridges_edited-11-23-24.1

ABSTRACT

Practitioners may face difficulties translating research results into practice. Seven examples of common knowledge translation barriers for clinicians are presented, and suggestions are offered for building effective communication bridges. Changes in how research results are reported and interpreted across different practice contexts can improve orthodontic care. These include (a) attention to the expected benefit that includes estimates of both likely clinical value and probability of occurrence, (b) cost considerations, (c) generalizability across contexts that require interpretation adjustments, (d) measures of effect size in addition to measures of statistical significance, (e) determination of the largest relative sources of variance in the reported results, (f) estimating probabilities that lead to practice actions, and (g) conversion of research descriptions to values that impact practice decisions. Examples of improved communication relevant to clinicians are provided that can be used to build stronger bridges between orthodontic research and practice. Although advances in orthodontic research rigor have been noted, journal articles would benefit from more clinician-friendly descriptions of results and their impact.

INTRODUCTION

Researchers provide a necessary but incomplete foundation for orthodontic practice. Recently, the literature has been strengthened by more rigorous study designs and sophisticated statistical analyses. In something of an irony, very specific case selection, rigid bias control, and complex statistical analysis may work against clinical transfer because, in some cases, they limit the applicability of findings to the daily, general, and many-faceted decisions practitioners must make regarding individual cases. Cantilevering the bridge from the research foundation alone may limit the reach of new findings.

The difference between statistical and clinical significance is not a matter of computed probability; these are two separate questions. In research, what is estimated is the likelihood that a theoretical generalization, observed as the average of multiple observations taken under controlled circumstances, would be observed again under the same circumstances. Clinical significance means that an observed fact about an individual case, together with other relevant concerns (patient values, expectations, and the clinician's experience), warrants adjusting treatment decisions. Not all research findings are relevant, even when properly collected. In most cases, they contribute potential evidence to be added to a large pool of factors that differ from case to case and office to office. In this paper, we explore ways a more functional bridge can be built so that research results are presented in clinically usable formats and clinicians can better identify the clinical impact of research evidence.

The classical American Dental Association Venn diagram of evidence-based dentistry (EBD)1 contains primary decision elements from research (available evidence and its quality), practitioners (knowledge and expertise related to the situation), and patients (knowledge, values, and capacity to make sound decisions based on reasonable available treatment options). The research enterprise interacts with other aspects of oral care in three segments: its relationship with practices, its relationship with those seeking care, and the critical, central segment of EBD where all factors come together. In this paper, we focus on the two segments involving the intersection between research and practice and the full interaction of EBD when patients are added to the consideration. The relationship can be strengthened by more and better research, astute practitioners attuned to current literature, familiarity with evolving research design, and expanding common ground between these activities. The latter can be facilitated by the choice of research topic and by reporting findings in terms that permit direct translation to individual practices.

Science that contributes directly to potential changes in clinical practice, expressed in terms that permit clinicians to judge the likely impact on their various patients, will have greater value than statistically significant outcomes in well-designed trials that lack these properties. Some research, of course, is primarily intended for other researchers. Still, it serves a useful purpose to explore ways of improving the usefulness of precisely structured science to the relevant variation in clinical practice.

Treatment Decisions

The essential part of orthodontics, the part that cannot be delegated, is treatment decisions, even if artificial intelligence is incorporated. Such decisions should be based on evidence from the practitioner's experience, the best available pertinent research, and patient needs. The role played by evidence in guiding practice decisions is shown in the formula EU = O × E, where EU is expected utility, or what can be hoped for in committing to a particular treatment action: the value to professionals and patients of the best outcome (O), adjusted for the evidence reflecting the likelihood that it will come about because of one's actions (E). Good decisions optimize outcomes based on evidence.

The utility of an outcome depends on both the probability that it will come to pass, given the evidence, and the value it represents. A removable appliance-based orthodontic approach may promise the most attractive result but could have a low probability of success for a patient who demonstrates nonadherence. Based on research, adjunctive approaches may offer a high probability of shortening treatment time. Still, some patients place low priority on this aspect of treatment when the additional costs/risks of this additional procedure are considered.

The rule for calculating EU is simple: (value of the outcome) multiplied by (probability of achieving it). This formula can be found in the first chapter of virtually every book on decision making.2–5 Researchers who do not provide strong probability estimates contribute little to EU, as do researchers who do not address the values held by practitioners or patients. An example is described in Table 1.

Table 1. Utility: Probability and Values for Orthodontic Decisions
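Because Table 1 is not reproduced in full here, a minimal numeric sketch of the calculation follows, in Python with hypothetical values for the removable-appliance scenario above:

```python
def expected_utility(outcome_value, probability):
    """EU = (value of the outcome) x (probability of achieving it)."""
    return outcome_value * probability

# Hypothetical values: a removable appliance promises the most attractive
# result but has a low probability of success for a nonadherent patient;
# a fixed alternative promises a slightly lesser result with high odds.
removable = expected_utility(outcome_value=10.0, probability=0.4)  # EU = 4.0
fixed = expected_utility(outcome_value=8.0, probability=0.9)       # EU = 7.2
print(f"removable: {removable}, fixed: {fixed}")  # the fixed plan wins
```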

Cost

Cost plays a role in choosing orthodontic treatment. Usually, this is borne by the patient or third-party payers. Cost also includes the orthodontist's expenses of doing business, time, expected revenue, and reputation. Costs may be embedded in clinical protocol and be underestimated. Expensive diagnostic procedures, and ones that yield positive findings but add little to managing the condition, are to be avoided. In academic settings, patients may bear additional costs in time, additional diagnostic tests, and loss of autonomy if participating in randomized trials. When alternative treatments have similar EU, the least costly should be chosen.

Technically, costs have their own EU, multiplying the expected subtraction or addition by its probability of occurring. This is usually simplified since, in almost all cases, the probability that the cost will be incurred with the chosen action is approximately 1.0. When this work is performed by staff and is part of office protocol, it may mask practitioners' awareness of how costs affect patient decisions. The rapid emergence of digital technology in orthodontics will make the interaction of cost and treatment decisions more central in the coming years.

To the extent that genetics, adherence, and other factors are linked to financial means, interactions between treatment decisions and cost may exist. The most efficient treatment alternative is not always the first choice of patients. The complement to cost is benefit. Two approaches of equal EU may differ in the financial incentives they carry. A possible definition of overtreatment is a case in which the EU plus cost exceeds what is indicated by the patient's needs and desires. An example is described in Table 2.

Table 2. Cost: The Effect of Cost
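Extending the earlier sketch (again with hypothetical values), cost can be folded in as a negative utility whose probability of being incurred is approximately 1.0:

```python
def net_expected_utility(outcome_value, probability, cost, cost_probability=1.0):
    """Outcome EU minus the cost's own EU; costs are treated as near certain."""
    return outcome_value * probability - cost * cost_probability

# Two hypothetical plans with similar outcome EU but different costs:
plan_a = net_expected_utility(8.0, 0.9, cost=2.0)  # 5.2
plan_b = net_expected_utility(8.0, 0.9, cost=3.5)  # 3.7
print(f"plan A: {plan_a}, plan B: {plan_b}")  # choose the least costly
```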

Generalizability

Generalizability is the confidence that evidence generated in one research situation applies to various clinical practice situations. Much orthodontic research is conducted in university programs, while private orthodontists and general practices provide most of the care. Treatment decisions, the number of providers handling the case, and scheduling are critical variables that affect how care is delivered in a university clinic vs a private practice setting. Research articles are written in the past tense; the results are claimed for a particular time and place. Whether they apply elsewhere with different operators, treatment protocols, and patients is a matter of judgment that the practitioner must make. Estimates of variance in publications are helpful, but they only quantify the range of unaccounted-for outcomes in the research context.

Randomization and patient selection are important, but random selection of patients from a nonrandom population using certain inclusion criteria is only partially random. Random selection of operators or settings is rare. Randomized controlled trials (RCTs) were developed for drug testing, where delivery circumstances are negligible. To the extent that research designs use procedures that reduce variance through tighter experimental control, they add to the difficulty of generalization by restricting the scope of investigation.

The generalization challenge is eased when researchers report a rich range of covariables, implying the need for large samples. These reflect the interactions or appropriate adjustments when the practice context differs from the sample selection in the reported research. Practitioners have been shown to make intuitive adjustments to reflect presumed contextual differences.11–16 Authors of one study, however, did show that practitioners pay little attention to reports of techniques they do not use and, paradoxically, are more critical of techniques that resemble those they do use.17

Generalization presents a special challenge for retrospective cohort studies, especially when authors use a few selected controls and a matched sample size. This design has two problems. One is regression toward the mean.18,19 This occurs when before-and-after measures are taken on control subjects selected to match the experimental group at baseline: as a statistical artifact, the second measure regresses toward the population average, which usually inflates the relative difference attributed to the treatment group. In a more general sense, such designs exaggerate the reported effect because the baseline in the research study differs from the baseline of various practices. When the proportion of patients in a practice with a studied condition differs from the (often artificial) proportion in a study, the results do not extrapolate; the further the practice is from a 50:50 split, the greater the exaggeration. Although researchers have shown that health care professionals make intuitive adjustments for this effect, the adjustments are always incomplete20 and may be biased toward the practitioners' own preferences. Fortunately, a simple calculation exists that practitioners can make to correct this problem, as described in Table 3.

Table 3. Generalization: Adjustment for Practice Baseline
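The table's exact calculation is not reproduced here; one standard form such a baseline correction can take is Bayes' rule, re-weighting study proportions by the practice's own base rate. A minimal sketch with hypothetical accuracy figures:

```python
def adjusted_ppv(sensitivity, specificity, practice_base_rate):
    """Re-weight study accuracy by a practice's own base rate (Bayes' rule)."""
    p = practice_base_rate
    true_pos = sensitivity * p
    false_pos = (1.0 - specificity) * (1.0 - p)
    return true_pos / (true_pos + false_pos)

# A matched study design implies a 50:50 split; a practice may see only 10%.
print(round(adjusted_ppv(0.80, 0.90, 0.50), 2))  # 0.89 in the study context
print(round(adjusted_ppv(0.80, 0.90, 0.10), 2))  # 0.47 in the practice context
```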

Measures of Effect

Journals have begun to request reports of measures of effect in addition to statistical tests for differences between means on single independent variables. Confidence intervals (CIs) are regarded as fulfilling this function in some cases, and these can always be determined directly from the mean, standard deviation, and sample size. Two problems exist here. First, CIs are based on the variation of group averages rather than individual outcomes, whereas practitioners make treatment decisions based on each presenting patient and on policy considerations grounded in practice philosophy. Second, a CI is affected by sample size as well as treatment effect: the formula contains the square root of N in the denominator, meaning that a CI can be made arbitrarily small by increasing the sample size in the research study, while individual variation in practice remains fixed.
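A short illustration of the second point (hypothetical numbers): the half-width of a 95% CI for a mean shrinks with the square root of N, while the patient-to-patient spread does not.

```python
import math

def ci95_halfwidth(sd, n):
    """95% CI half-width for a mean: 1.96 * sd / sqrt(n)."""
    return 1.96 * sd / math.sqrt(n)

# The individual spread (sd = 4.0) is a property of the patients;
# only the research sample size changes.
for n in (25, 100, 400, 1600):
    print(n, round(ci95_halfwidth(sd=4.0, n=n), 2))
# 25 -> 1.57, 100 -> 0.78, 400 -> 0.39, 1600 -> 0.2
```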

Two adjustments can be made in practice to provide better estimates of the effect of the reported literature. First, attention can be expanded beyond differences in averages, which may be of no practical value when the average scores are functionally equivalent, to the likelihood of individual outcomes that would affect treatment choice. The challenge is quantifying clinical significance in a fashion similar to how statistical significance is reported; one approach to the problem is sketched in the following section. When a meaningful probability exists that an individual result will fall into a zone where corrective action is necessary, referencing averages helps little. All the acceptable outcomes are normally grouped into one decision category and all the unacceptable ones into another.

Patients and many practitioners also have difficulty basing treatment decisions on P values. After all, these are usually estimates of statistical properties relating to outcomes considered important to researchers for theoretical reasons. The number needed to treat (NNT) is a useful alternative.20 It is easy to calculate: NNT = 1/(absolute advantage of the considered approach over the next best alternative, expressed as a proportion). An example is described in Table 4.

Table 4. Measure of Effect Size vs Confidence of Effect
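As a quick illustration of the NNT calculation, with hypothetical success rates:

```python
def number_needed_to_treat(success_rate_new, success_rate_alternative):
    """NNT = 1 / absolute advantage, expressed as a proportion."""
    advantage = success_rate_new - success_rate_alternative
    return 1.0 / advantage

# Hypothetical: the considered approach succeeds in 75% of cases,
# the next best alternative in 60%.
print(number_needed_to_treat(0.75, 0.60))  # ~6.7: treat about 7 patients
                                           # to gain one extra success
```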

Identifying Sources of Variance

In research based on RCTs, authors lean toward conclusions about a single independent variable, with all other factors controlled out of consideration through randomization. Practitioners seldom face such situations. The practicing orthodontist is often concerned with questions such as, “Of the collection of factors at play, which, individually and in combination, contribute the greatest variation to my treatment objectives?” In some cases, factors that add even a small variance are critical because of their cost or association with unacceptable outcomes. Also, in every clinical situation, several factors come into play, so even when each factor individually represents a small effect, a clinically meaningful impact is possible when taken together. Considering variance only as error was characteristic of statistics many decades ago.

Multivariate analysis is now easy, given computer capability both in research and in the orthodontic office. This can be accomplished using analysis of variance (ANOVA) tests, which partition variance among the factors measured, and multiple regression, which estimates the proportion of variance in a target outcome attributable to various sources. In a recent review of the orthodontic literature, authors found that 43% of the papers published in the three leading journals used multiple ANOVA and multiple regression tests, permitting estimates of several factors contributing to a common outcome, and their interactions, to be reported.24 The underlying fact is that 100% of the variance in a clinical outcome can be attributed to various measured factors plus unexplained (unmeasured) variance. The traditional approach is to ask whether any factor in isolation explains something. A more meaningful question might be, "How much variation remains unexplained and represents a threat of surprise?" An example is described in Table 5.

Table 5. Estimating the Proportion of Variance Attributable to Various Factors
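A minimal sketch of this kind of variance partitioning, using simulated (hypothetical) data and scikit-learn's multiple regression; the R² value estimates the explained share, and 1 − R² is the "threat of surprise":

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
# Hypothetical factors: patient adherence, age, and an operator effect.
adherence = rng.normal(size=n)
age = rng.normal(size=n)
operator = rng.normal(size=n)
# Outcome driven by several small factors plus unexplained noise.
outcome = 0.5 * adherence + 0.3 * age + 0.3 * operator + rng.normal(size=n)

X = np.column_stack([adherence, age, operator])
r2 = LinearRegression().fit(X, outcome).score(X, outcome)
print(f"explained variance R^2 = {r2:.2f}")
print(f"unexplained (threat of surprise) = {1 - r2:.2f}")
```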

Does Variance Matter?

Differences found in research may matter in research but not in practice. Practitioners may treat a few degrees of difference in the angulation of certain teeth as equivalent for treatment purposes, but if the sample size is large enough in a research project, such an effect will be highly significant. The standard for which differences are actionable differs for research and practice and from orthodontist to orthodontist.

This concept parallels the distinction between statistical and clinical significance. What is clinically significant varies between practitioners facing different case conditions. Each practitioner has a general and flexible standard for where to draw the line on clinical decisions. Such differences are often intuitive, based on practice patterns, and are not a mark against the profession. As a first approximation, a clinically significant difference is one that would trigger an alternative response from the practitioner. Most obviously, this would involve selecting one treatment option over another, making a necessary course correction during treatment, or perhaps even concern about, and justification for, a less-than-expected outcome.

The zone of equivalence (ZOE) concept covers the spread of clinical description where all values within the zone are treated as clinically the same (Figure 1); no change in action is indicated within the ZOE. Research provides an estimate of the average and standard deviation, and the practitioner provides the threshold or boundary of the ZOE. In research, significance is determined by the difference between the average and a hypothetical or control score, divided by the spread of scores reduced by sample size in a particular study. In practice, the difference between the expected outcome and any outcome that would trigger an adjustment is divided by the standard deviation for the relevant individual patients; no adjustment for sample size is made. Each practitioner sets his or her threshold values for each treatment decision and adjusts these based on available evidence, personal experience, patient expectations, and practice business considerations.

Figure 1. Zone of equivalence. All cases to the right of the vertical line are treated one way; those to the left are all treated differently.


ZOE is the set of values over which no change in treatment decision is required. The probability of falling outside the ZOE, and thus the likelihood of an unsatisfactory result, is estimated from the standardized difference (population average − threshold)/standard deviation, with the probability read from the normal distribution.27 An example is described in Table 6.

Table 6. Variance Matters: Estimating Probability of Mistreatment (Zone of Equivalence)
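A minimal sketch of this calculation (hypothetical numbers), assuming roughly normal outcomes:

```python
from statistics import NormalDist

def prob_outside_zoe(expected_mean, threshold, sd):
    """P(an individual result falls on the wrong side of the threshold).
    z = (mean - threshold) / sd; note there is no sample-size adjustment."""
    z = (expected_mean - threshold) / sd
    return 1.0 - NormalDist().cdf(z)

# Hypothetical: expected improvement 4.0 mm, course correction triggered
# below 2.0 mm, patient-to-patient sd of 1.5 mm.
print(round(prob_outside_zoe(4.0, 2.0, 1.5), 3))  # ~0.091, about 1 in 11
```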

Converting Research Descriptions into Treatment Decisions

Research findings are often expressed on continuous scales of distance, angle, or time. These lend themselves to averages and tests for differences between means. Clinicians, however, consider multiple variables at the same time when making clinical calls, so research about only one of those variables is inherently less impactful. Treatment decisions, by contrast, are almost entirely matters of mutually exclusive categories: extraction vs nonextraction, or headgear vs elastics, for example. Category data are reported as counts or proportions, and tests are nonparametric, such as χ2, κ, or φ. Two-by-two or more complex contingency tables present case data in rows and treatment decisions in columns, or treatment decisions in rows and outcomes in columns.29

Sensitivity is the ratio of true-positive classifications to all positive classifications, counting correct and incorrect classifications. Thus, a good treatment decision shows a high proportion of satisfactory results among all cases treated that way; it is a measure of beneficence. Sensitivity depends on the threshold: treating only a few carefully selected cases with an experimental treatment, or adopting a liberal criterion for success, improves sensitivity. Selectivity (usually called specificity) is the proportion of negative outcomes correctly kept out of the target category divided by the total number of negative outcomes; it is the operational definition of nonmaleficence and also depends on the threshold. Generally, greater sensitivity goes with lower selectivity.
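A minimal sketch over a hypothetical 2 × 2 table, using the standard epidemiologic formulas (what the text calls selectivity corresponds to specificity):

```python
# Hypothetical 2x2 contingency table: decision rows, outcome columns.
#                          benefits   does not benefit
# selected for treatment    tp = 45        fp = 5
# not selected              fn = 15        tn = 35
tp, fp, fn, tn = 45, 5, 15, 35

sensitivity = tp / (tp + fn)  # beneficence: benefiting cases captured
selectivity = tn / (tn + fp)  # nonmaleficence: non-benefiting cases spared
print(f"sensitivity = {sensitivity:.2f}, selectivity = {selectivity:.2f}")
# Moving the treatment threshold trades one measure against the other.
```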

It is traditional to use classification data for probabilities of outcomes without considering the importance of the outcomes. It would be just as easy, and more meaningful, to build contingency tables around EU. A null outcome may be merely a foregone opportunity for improvement or may carry a significant negative side effect. It is reasonable to expect that patients choose treatments that place weight on selectivity, while practitioners move cut scores in the other direction to optimize sensitivity, and that professionals and patients will arrive at differing evaluations of the EU of contingency tables even when the same objective data are used. An example is described in Table 7.

Table 7. Contingency: Optimizing Outcomes
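Continuing the sketch above, the same hypothetical counts can be re-weighted by utilities so the table reports EU rather than raw proportions; the utility values here are illustrative assumptions:

```python
# Hypothetical cell counts and the (assumed) utility of each cell.
counts = {"tp": 45, "fp": 5, "fn": 15, "tn": 35}
utilities = {
    "tp": +1.0,   # treated and satisfactory
    "fp": -2.0,   # treated, with a negative side effect
    "fn": -0.5,   # foregone opportunity for improvement
    "tn": 0.0,    # correctly left alone
}
total = sum(counts.values())
eu = sum(counts[c] / total * utilities[c] for c in counts)
print(f"EU of this decision rule = {eu:.3f}")  # 0.275 with these weights
```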

DISCUSSION

The metaphor on which this paper is based is that of building bridges from research evidence to clinical practice and patient behavior. Metatheory has focused heavily in recent decades on perfecting the methodological rigor of the research enterprise. This cantilevered approach has produced necessary improvements, but working from one end only is insufficient. Even the best research can be of questionable value in clinical practice because of issues of generalizability; application depends on individual context. Thus, the practitioner must actively judge whether results obtained in rigorously controlled research have the same meaning for them and their patients when moved into the clinical context. Seven concrete examples were presented, sampling how the best research literature can be interpreted in practice; they are meant to illustrate a general point.

The practitioner always exercises a role in evaluating the degree to which good generalizations apply in particular situations. This is more than a take-it-or-leave-it-because-it-just-does-not-feel-right-in-my-situation kind of decision. Some of the above research transfers fall into a category where the statistically significant research finding does not matter much in practice. Looking only at the probabilities of comparative outcomes leaves out the importance of practitioners’ and patients’ values and costs. Isolating independent variables improves the likelihood of statistically significant findings but sacrifices generalizability. The confidence that comes from large studies matched against the arbitrary P < .05 standard fails to adequately address the variability that exists from one patient to another or professionals’ personal standards for the dependability of various treatments.

The strategy of strengthening the knowledge-building bridge between published evidence and clinical practice by using more controlled samples and more sophisticated analysis has the advantage of improving the merit of the research enterprise. It also has the potential disadvantage of moving the bridge further from the working platform of day-to-day orthodontic practice. At some point, this strategy may cause practitioners to pay less attention to the rigorous literature. The established pattern in medicine has been that practitioners pick up what appears to be generally successful from their colleagues in practice.31 This is meta-analysis at the practice level rather than a combination of controlled research studies, and it should be considered the primary bridging strategy.

CONCLUSIONS

  • For researchers, it is important to understand that clinicians make decisions on individual cases; for clinicians, it is important to understand that research follows guidelines for consistency and proper analysis. These two paths risk diverging. In this manuscript, we propose a set of concrete examples of how to bridge these two worlds for the patient's benefit.

  • Some potential bridging strategies to be considered by researchers include reporting naturally occurring variables, especially operators’ treatment protocol, context, and characteristics. The variance across these data and the interaction effects between them and outcomes would be helpful to practitioners in guiding the transfer fit between research and individual practices.

  • Report the contributions of multiple factors, and their combinations, to variance in outcomes. Controlling for single variables leaves practitioners wondering what else affects the outcomes they experience.

  • Recognize that much of the variance in treatment outcomes comes from factors other than the patient's morphology, biology, and physiology. Treating all else as error variance pushes the science of orthodontics toward large samples in controlled circumstances. Substantial opportunities exist for sound research exploring all factors contributing to patient outcomes. Larger samples, considering as many potentially impactful factors as possible, are needed. Retrospective cohort studies and systematic reviews may be approaching a point of diminishing contributions.

  • Focusing on measures of effect in multivariable settings might strengthen review standards. Reports on single variables adjusted by the square root of N should be augmented, where possible, with NNT statistics and multiple regression R² values that estimate the probability of surprise in individual cases.

  • To the extent that treatment decisions of individual practitioners play a role in patient outcomes, they should be the independent variable in rigorous research investigations rather than the assumed outcome under circumstances that resemble those of controlled research settings. This will likely open the range of dependent variables that matter to include more than morphologic measures. The success of orthodontics encompasses many interacting factors, and both research and practice can be strengthened by careful, methodical investigation of this broader perspective.

Copyright: © 2025 by The EH Angle Education and Research Foundation, Inc.


Contributor Notes

 Professor, Department of Diagnostic Sciences, University of the Pacific, Arthur A. Dugoni School of Dentistry, San Francisco, CA, USA.
 Professor, Department of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada.
Corresponding author: Dr David W. Chambers, Professor of Diagnostic Sciences, University of the Pacific, Arthur A. Dugoni School of Dentistry, 155 Fifth Street, San Francisco, CA 94103 (e-mail: dchamber@pacific.edu)
Received: 03 Aug 2024
Accepted: 05 Nov 2024