Advanced Orthodontic Education: Evolution of Assessment Criteria and Methods to Meet Future Challenges
Abstract
The achievement of academic and research institutes has increasingly become the subject of ranking by independent organizations. This trend has educational and economic implications, which may pertain to attracting students and endowments as well as allocating funds to specific educational components. The aims of this article were to discuss the reliability and soundness of existing academic ranking methods and to propose guidelines for the quantitative assessment of advanced orthodontic programs. This criteria-driven assessment assigns weight factors to the basic components of orthodontic education and provides a template for clarifying the standing of programs. Moreover, a combination of program and individual assessment is proposed to monitor the implementation of educational criteria and assess their effectiveness.
INTRODUCTION
In recent years, the achievement of academic and research institutes, controversial as it may be to define, has been the subject of ranking by independent organizations and various electronic media. This trend has received wide acclaim and has evolved to include undergraduate and graduate sciences, as well as research institutes, professional programs, and medical specialties.
Why have assessments?
Assessment of programs directly affects the reputation and academic profile of schools rated at the top of the list, thus influencing their financial growth.1 The economic implications of assessment may be twofold. First, reputable institutions attract more applicants and are able to choose from a large pool of students. Second, private and public universities seek to secure funds from many sources, a significant component of which is the size of the donations provided by organizations, the public, and industry. These endowments may be positively affected by the perceived reputation of the institution.2
On a different level, the assessment process responds to the need to define the “gold standard” of education and helps establish educational role models in various disciplines.
Apart from financial and educational issues, other practical issues may also give rise to the necessity for program assessment. For example, the recently noted shortage of orthodontic faculty in the United States is expected to reach unprecedented figures during the next decade.3–5 The financial burden facing the graduating orthodontist, who, by the time of completion of specialty training, has paid an enormous amount in tuition and fees accumulated during a decade of postsecondary education, coupled with the relatively low wages for dental faculty, has made recruitment of new academic staff a complex task. Between the early and late 1990s, the number of vacant orthodontic positions advertised tripled,5 whereas high-ranked posts, which require academic experience, suffer from understaffing. A recent survey showed that the annual income of faculty was less than half that of private practitioners. Faculty also reported working an average of 25% more hours per week, whereas income per hour for full-time faculty was less than one-third that of their private practice colleagues. In addition, faculty perceived that they experienced more stress, encountered more bureaucracy, and received less respect.6
Program assessment bears on the faculty shortage in that reputable programs may, in the future, be more appealing to academically oriented professionals.
In addition, establishing a reliable assessment method for orthodontic educational institutions may find application in future projects of relevant organizations, thus supporting stricter criteria for professional recognition and educational equivalency on a global scale.
Who assesses the programs?
Apart from governmental organizations such as the National Research Council (NRC) in the United States, rankings are frequently provided by various private sources, several professional societies, and the press.7 One of the best-known rating sources is US News, which frequently publishes rankings in a wide array of subjects. The assessment involves demographic data, the reputation of the school, and the academic and social environment, presumably to facilitate an informed choice for prospective applicants.7
However, incorporation of an ambiguous “reputation” factor in ranking academic institutions most often results in the same group of established elites dominating the top ranks. Newly established universities have been undervalued because ratings emphasize prestige rather than research achievement.2
The global trend toward educational program assessment and ranking is clearly illustrated by the ranking of the top 500 academic institutions chosen from a pool of 2000 universities worldwide. These universities were assessed on the basis of the following criteria (www.ed.sjtu.edu.cn/ranking.htm): Nobel laureates in physics, chemistry, medicine, and economics (weight 20%); highly cited researchers in 21 broad subject categories (weight 20%); articles published in the journals Nature and Science (weight 20%); articles in the Science Citation Index-expanded and Social Science Citation Index (weight 20%); and academic performance per faculty member (weight 20%).
On a different level, the European Academy of Pediatric Dentistry has formulated accreditation guidelines and requirements, which include academic, clinical, and research components of specialty education (www.eapd.gr/cur-forw.htm).
The purpose of this article was to discuss the reliability and soundness of the existing academic assessment methods and propose guidelines for the objective assessment of graduate orthodontic programs as these may evolve in the future.
ASSESSMENT OF ADVANCED EDUCATION
Dental specialties constitute the largest portion of advanced dental education. The difficulty in assessing these programs stems from the incorporation of a strong clinical component, which is difficult to evaluate objectively. Orthodontics, in particular, combines the principles of science with art and individual talent; therefore, formulation of criteria based on the tradition and reputation of programs may lead to misleading results.
Assessment of fellowship and advanced, nongraduate level courses such as those found in the medical and dental field requires a different process than that used for rating graduate degree programs.
Graduate program assessment
Assessment of nonspecialty advanced dental educational programs, ie, those leading to a PhD degree in Oral Biology independent of specialty training, is not yet available.
In the sciences, rankings of graduate schools may be more reliably formulated because a series of objective criteria can be effectively applied. Thus, research activity and productivity can be roughly judged by the number of departmental publications; the significance of the evidence presented can be demonstrated by the amount of external funding received; and the recognition of research efforts may be reflected in the number of citations and the impact factors of faculty publications. Therefore, an overview of the quality of these programs may be established because the variables serving as criteria of program standing can be quantitatively assessed.
The NRC in the United States, which constitutes the research arm of the National Academy of Sciences, releases rankings of graduate programs at 10-year intervals and frequently revises its evaluation criteria and methodologies.8 A recent study, cited in US News, reported a positive correlation between the scores of the NRC and those of US News. However, some authors argue that the NRC's method is also subjective because it is based on a concept identical to that used by the media and other sources.2 The data compiled by the NRC were generated by asking faculty and researchers about their impression of their peers. Specifically, a rating board of over 16,000 academics evaluated the quality of 65,000 faculty members in 3634 programs; each rater reviewed faculty rosters for 50 departments, yielding a list containing more than 1000 names. Because the contemporary research environment is scattered across many specializations, it is possible that each responder provided no more than a handful of informed choices, given the lack of active participation in multiple topics of a broader scientific discipline.
A relevant study9 evaluated the research achievement of more than 200 institutions by assessing per capita research articles published in leading journals as well as prizes and awards. It showed that universities outside the East and West Coast institutions often produced more research papers, which were cited more often than those of the well-known centers of excellence. For example, although the rankings of materials science programs provided by the NRC (http://pubweb.acns.nwu.edu/~mseniw/msedepts.html) place Pennsylvania State University ninth in the United States, Penn State ranks first in the nation in faculty citations (http://www.isihighlycited.com/isi_copy/Comm_news27.htm).
As stated by Graham,9 the reason underlying this effect resides in the multiplication of graduate programs across the United States; ie, the number of doctorate-granting programs has expanded from a few hundred in the 1960s to about 4000 in the 1990s.2 Thus, as the doctorate league expands, scientists' view of advances in various fields narrows to the area associated with their specific interests. This eventually leads to a deepening of researchers' scientific interests but also to a failure to keep up with the dizzying pace of developments in associated fields.
Professional and clinical program assessment
In the health sciences field, professional education assessment is achieved by peer assessment surveys addressed to medical school deans, heads of residency programs, and directors of admissions. This group of academic and professional experts is asked to rate specialty program quality on a scale from “marginal” (1) to “outstanding” (5), rating research and primary-care programs separately on a single survey instrument.
Specifically, medical school deans and senior faculty are asked to identify the 10 schools offering the best programs in each specialty area. The 10 programs receiving the highest cumulative number of nominations are included in the report. Input variables of academic quality are measures that reflect the quality of factors brought to the graduate education process, including the academic preparation of the entering class, the faculty-student ratio, and research funding. Assessment data for these areas come from surveys of individuals in academia and of practitioners in each profession.2
The method described fails to achieve objectivity and incorporates a high degree of error for the following reasons.
The selection of responders is biased: with the exception of academic faculties, experts and practicing professionals are chosen on the basis of a subjectively defined “reputation” factor. Also, the criteria used by experts in nominating a program may vary with the personality and social status, general economic climate, financial stability, and personal wealth of each rater.
The response rate to the questionnaire survey ranges from 35% of faculty for the research survey to 25% for primary-care centers and 56% for the group of deans (www.usnews.com). Such a low response rate precludes the extrapolation of a reliable consensus on the ranking of programs. Indeed, if this survey were reviewed as a clinical study, the high dropout rates would have caused rejection of the project.
The subjectivity of responders in proposing centers of excellence may distort the reliability of the ranking outcome. The output of raters is temporary and may be valid only for the specific period of questionnaire submission. A person's impression of a specific educational program largely depends on various factual and personality-related variables. These may include the temporal variation or occasional “bursts” of reputation associated with the publication of an impressive study or a discovery of significant merit. A responder's recent personal acquaintance with a prominent faculty member, or attendance at an impressive lecture or conference, may also influence the output of deans.
The fact that the responders are at the top of the academic hierarchy does not necessarily imply that they are knowledgeable about the standing of each individual specialty department. Considering the enormous expansion of graduate programs and the strictly administrative role of deans, it follows that these people have little or no exposure to developments in research. Thus, their opinion should be treated with caution.
The use of weight or significance factors, as applied in various ranking reports, is arbitrary and can be misleading in the sense that schools that are strong in components carrying a low weight may on average score less than institutions achieving medium rankings in high-weight components. The assignment of specific weights to various components is subjective, with no criteria supporting the range of high- and low-impact parameters.
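This distortion can be made concrete with a small numeric sketch. The weights and component scores below are invented purely for illustration and do not correspond to any published ranking:

```python
# Hypothetical weights and scores, invented for illustration only:
# a school excelling in a low-weight component (facilities) can score
# below a school that is merely average in the high-weight components.
weights = {"academic": 0.4, "clinical": 0.4, "facilities": 0.2}

school_a = {"academic": 5, "clinical": 5, "facilities": 10}  # outstanding facilities
school_b = {"academic": 7, "clinical": 7, "facilities": 4}   # average across the board

def composite(scores):
    """Weighted sum of the component scores."""
    return sum(weights[k] * scores[k] for k in weights)

print(round(composite(school_a), 2))  # 6.0
print(round(composite(school_b), 2))  # 6.4
```

Despite a perfect facilities score, school A ranks below school B, illustrating how the choice of weights, rather than the underlying performance, can decide the ordering.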
DEVELOPMENT OF CRITERIA FOR ORTHODONTIC PROGRAM ASSESSMENT
The dizzying pace of technological advancement, coupled with the highly competitive and demanding environment in which the orthodontists of the next decade will practice, necessitates a continuous evolution of advanced educational programs. The authors of this article propose a criteria-driven assessment that incorporates six parameters and assigns weight factors to these basic components of orthodontic education, thus providing a template for clarifying the standing of programs.
Table 1 presents the constituent components of orthodontic education along with a potential weight factor for each section. Although the actual importance of each component may be judged subjectively, it could be postulated that academic education and clinical training have a higher impact than the other variables, which include the facilities of the program, the exposure of students to teaching, and general data pertinent to the clinical and scientific recognition of graduates. Arbitrary assignment of values to the various components of appraisal will no doubt elicit some difference of opinion among educators and clinicians; however, it was felt that the need to start somewhere may prevail over the necessity for objectivity in defining the exact weight of each component. In Tables 2–7, the educational components of orthodontic programs are analyzed and weight factors are assigned to the constituent components of the educational modules.
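As a sketch of how such a template could be operationalized, the function below computes a composite program score from the six components discussed in this article. The weight values here are illustrative placeholders chosen by us, not the figures proposed in Table 1:

```python
# Six components per Tables 2-7; the weights are illustrative placeholders
# (summing to 1.0), not the values proposed in Table 1 of the article.
WEIGHTS = {
    "facilities": 0.10,
    "clinical_training": 0.25,
    "academic_education": 0.25,
    "research": 0.20,
    "teaching": 0.10,
    "general": 0.10,
}

def program_score(component_scores):
    """Composite score as the weighted sum of component scores (0-10 scale)."""
    missing = set(WEIGHTS) - set(component_scores)
    if missing:
        raise ValueError(f"missing component scores: {missing}")
    return sum(WEIGHTS[c] * component_scores[c] for c in WEIGHTS)

# Hypothetical program profile, for demonstration only.
example = {
    "facilities": 8, "clinical_training": 7, "academic_education": 9,
    "research": 8, "teaching": 6, "general": 7,
}
print(round(program_score(example), 2))  # 7.7
```

A transparent formula of this kind would at least make the weighting assumptions explicit and open to debate, in contrast to the opaque reputation-based surveys criticized above.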


Facilities
The analysis of facilities presented in Table 2 implies that high importance is given to clinic arrangement and access to photographic and X-ray facilities, as well as to the number of chairside assistants per resident. By avoiding time-consuming and trivial procedures, eg, cement mixing and preparation of brackets for bonding, a more efficient management of patients' needs may be achieved. Although hiring auxiliary personnel requires directing funds to a nonacademic purpose, the long-term benefits of efficiently organizing the clinical training of residents, potentially generating higher earnings and greater clinical experience for residents, may outweigh this cost.
Clinical training
In Table 3, the clinical training component is analyzed into its constituents. This section is the most controversial because a quantitative assessment does not always correspond to the quality of training provided. In assessing this component, the number and variety of malocclusions treated; the number of hours devoted to clinic and clinical seminars; and the variety of malocclusions, treatment modalities, and patients' dental ages are considered the key parameters in program assessment. However, extensive patient assignment may serve institutional financial needs, and thus the ratio of time spent on the clinic floor to time devoted to clinical seminars may more reliably represent the clinical direction of the program.

Emphasis should be placed on the quality of orthodontic treatment results by implementing appropriate indices, eg, the ABO index, PAR (Peer Assessment Rating), and ICON (Index of Complexity, Outcome and Need). The qualitative assessment of occlusal improvement resulting from orthodontic treatment may constitute feedback for students and instructors. A model proposed to familiarize the resident with the actual conditions and environment of practice, including managerial and financial components, was originally introduced by Isaacson10,11 at the University of Minnesota in the mid 1960s. This plan involved extensive auxiliary help in the form of a model group practice consisting of a junior resident, a senior resident, and an assistant. At the time of its introduction, this scheme consistently generated the highest earnings relative to orthodontic programs operating under conventional rules. Gradually, the increased role played by assistants became a recognized necessity, and in a way, the principles of this program's training philosophy have spread throughout the current structure of graduate training in the United States. Apart from the exchange of clinical experience, this concept exposes future clinicians to aspects of practice that are not usually taught on a practical level. These include the effective handling of organizational aspects of practice, such as logistics of materials and utilities, but most importantly, this system provides fundamentals of managing the finances of clinical orthodontics.
Academic education and organization
The important aspects of academic education are summarized in Table 4. Emphasis is placed on the organization of seminars, involving the preparation of current-literature seminars and choice of topics, the format of assessment (ie, exam or term paper), and the inclusion of subjects from associated biomedical fields and statistics.12 Contemporary orthodontic research involves extensive use of advanced instrumental analyses (biomaterials-biomechanics), demanding designs of epidemiological studies (clinical research), and complex biological research. Therefore, acquainting students with the fundamentals of research techniques in various disciplines is critical for their understanding of the literature, which has long since departed from standard, simplistic essays of the case-report type.

Research
In Table 5, the research component of orthodontic education is depicted. In assessing the research component of programs, the main emphasis is placed on the faculty research activity and credentials because this is instrumental in providing the appropriate environment for a fruitful research involvement.

Traditionally, advanced dental programs have been structured around a diarchic rule, having either a research or a clinical character. However, depriving specialty education of its research constituent may be detrimental to the efficiency of education. In the absence of any research component in advanced dental curricula, the graduate remains unaware of the limitations of research methodologies, accepting the results of studies indiscriminately and endorsing unsubstantiated claims uncritically. In the era of evidence-based health sciences practice, this may have devastating consequences for both treatment providers and patients.
Research in orthodontics possesses some distinctive characteristics. A large number of the research articles that appear in orthodontic journals correspond to students' projects submitted in fulfillment of the requirements for graduation. Although this may increase the productivity of departments and contribute to the flow of information to the profession, the fact that residents usually perform applied protocols, coupled with a lack of training in research techniques, may adversely affect the caliber of the research performed.13
To deal with this deficiency, the U.S. National Institute of Dental and Craniofacial Research (formerly the National Institute of Dental Research) initiated a program aimed at encouraging basic research in the dental sciences. Thus, the Dentist Scientist Award (DSA) program was implemented with the objective of attracting individuals seeking combined specialty/PhD training. Research efforts by junior faculty were also assisted by the newly introduced young investigator award. Along with federal funding, institutional and organizational grants such as the AAOF faculty development award were designed to provide support to junior orthodontic researchers.
Nonetheless, concern has been expressed about the effect of these projects on contemporary practice.14 Although it is true that wide-scale research programs may not directly generate clinically applicable evidence, the specialty has not yet experienced a striking change from this decade-long program.
It has been proposed that this discrepancy may be associated with the structure of the academic system.14 Because faculty activity is assessed by the criteria used in associated biomedical fields, researchers strive to publish in periodicals possessing a high impact factor. However, this preference excludes all orthodontic journals, which are characterized by low impact factors.15 Therefore, papers on orthodontic topics increasingly appear in the Journal of Biomechanics, Journal of Biomedical Materials Research, Immunology, Journal of Anatomy, Bone, and Journal of Biological Chemistry, among others.16 As a result, the orthodontic community does not benefit from the knowledge published in the broader biomedical literature. To bypass this discrepancy, it is proposed that a group of qualified orthodontic researchers review publications of orthodontic interest in associated disciplines and record the items of interest in a special column in orthodontic periodicals.
One additional explanation for the aforementioned lack of research impact on the orthodontic community may be that, during the early stages of the DSA program, fellows most frequently received supervision from nonorthodontic faculty. This occurred because, by the late 1980s, most orthodontic programs in the United States were staffed by MS-level or recently graduated junior PhD faculty who could not obtain the status required by the graduate school to serve as advisors to doctoral candidates. It may be worth noting that only recently has the requirement for prospective faculty to possess a terminal academic degree become frequent in advertisements of vacant orthodontic positions. Therefore, students enrolled in the specialty/PhD program were assigned to basic science or senior dental faculty who ran their own research programs in the broader biomedical field. This tactic did not always ensure that the candidate's thesis would focus on an orthodontic topic. As a result, a large portion of the doctoral research performed by orthodontists did not directly contribute to the advancement of the research status of the specialty. Considering that the topic of doctoral research usually determines the broader area of a scientist's future interests, it is clear why the specialty has not witnessed the fruits of the funded efforts of some of its members in past years. This is expected to change drastically in the next decade as more orthodontists enter doctoral degree curricula.
The foregoing issues have swung the pendulum too far in the other direction, leading to the proposition that orthodontic education, to avoid these academic complications, might return to the preceptor programs that were common at the dawn of the development of our specialty.14 It should be stressed that this would be harmful to the advancement of the clinical and academic status of orthodontics.
Teaching
Many programs assign undergraduate teaching and clinical supervision to residents, whereas some also include seminars given by senior students to the entering class (Table 6). The experience gained from this activity may be helpful in the future for both academically and clinically oriented graduates. The latter group may benefit because exposure to teaching may assist them in efficiently structuring future lectures to community or professional organizations.

General
Lastly, the overall reputation of the program and its role in educating future leaders may be empirically assessed by a high applicant-to-resident ratio and by the percentage of graduates succeeding in examinations (certification, orthodontic boards, specialty, fellowships, etc) or entering academia (Table 7).

IMPLEMENTATION OF CRITERIA: PROGRAM ACCREDITATION AND INDIVIDUAL ASSESSMENT
The foregoing discussion of the components of educational programs generates a requirement for bodies of experts to design, direct, and assess the implementation of criteria corresponding to the characteristics, requirements, and priorities of specific countries or regions. This process should also reflect the different concepts of orthodontic education. Nonetheless, the accreditation processes followed in many countries share some fundamental similarities, which involve reviewing curricula, visiting facilities, and interviewing faculty and graduate students. Therefore, the educational, clinical, and research expertise of the body of experts is of critical importance in revealing the actual standing of the program, evaluating the education provided, and highlighting the components requiring improvement or revision.
Figure 1 schematically describes the main methods followed for program assessment. In North America, a “central” approach is followed, in the sense that a committee reviews the implementation of a set of criteria widely recognized as representative of the education provided. This method relies on the assumption that if the individual components, eg, program design and facilities, are present in the right proportions, the product will be an efficient program that will “produce” a competent graduate. Alternatively, many licensing bodies in Europe apply a combined approach. Apart from curricula compliance, eg, the Erasmus project,12 an examination of the graduate is integrated as a direct means of assessing the competency of the treatment provider, regardless of the reputation and overall standing of the program from which he or she received advanced training. Although this method was implemented not to assess the education provided by programs but rather to evaluate the competency of individuals, it presents two main advantages. First, it secures a minimum standard in the provision of orthodontic services by directly examining the qualifications of the graduate. Second, applied on a large scale, it may serve as an indirect means of revealing the effectiveness of specific advanced orthodontic program curricula.



Citation: The Angle Orthodontist 75, 2; 10.1043/0003-3219(2005)075<0143:AOEEOA>2.0.CO;2
In conclusion, the evolution of assessment criteria and methods of advanced orthodontic education may be an absolute necessity in the coming years. Recognition of the constituent components of high academic performance and clinical training as well as implementation of strict criteria for professional recognition and educational equivalency between countries and individuals may require such an assessment. The purpose of this article was to provide a stimulus for relevant discussion within the orthodontic community because many countries and organizations are in the process of establishing, improving, or harmonizing their standards of orthodontic education. Additional steps such as incorporation of elements from both approaches, ie, program assessment and individual examination, may contribute to achieving this objective.17

Schematic description of the assessment of orthodontic programs followed by various accreditation bodies and licensure agents in different countries. The central approach assumes that strict adherence to the criteria set warrants the level of education provided, whereas the alternative method ensures that the product of the educational program is equipped to function as a treatment provider on an individual basis. A merger of these two approaches may substantially enhance the assessment of programs and graduates and serve as a guide to keep specialty programs abreast of the challenges of contemporary orthodontics.
Contributor Notes
Corresponding author: Theodore Eliades, DDS, MS, Dr Med, PhD, 57 Agnoston Hiroon Street, Nea Ionia, Athens GR-14231, Greece (teliades@ath.forthnet.gr)