Influence of head positioning errors on the accuracy of fully automated artificial intelligence-based cephalometric software
ABSTRACT

Objectives
To evaluate the accuracy of three fully automated software systems compared with nonautomated cephalometric analysis software, tested on cephalograms with correct and incorrect head positions.

Materials and Methods
The study sample consisted of 40 lateral cephalograms retrieved retrospectively from a larger pool of pretreatment orthodontic records. Cephalograms were selected and divided into a correct head posture group (CHP) and an incorrect head posture group (IHP). Cephalometric data were obtained by manual landmarking (Dolphin software), which served as the reference, and by three fully automated AI software applications (WebCeph, Ceph Assistant, and AudaxCeph). Intraclass correlation coefficients (ICCs) and paired t-tests were used for intragroup comparisons, whereas analysis of variance (ANOVA) with post-hoc tests was used to compare performance among the artificial intelligence (AI)-based software applications.

Results
The tested software exhibited good consistency for angular measurements, whereas linear measurements were more error-prone. AudaxCeph demonstrated the most consistent accuracy, achieving excellent agreement (ICC > 0.90) for several skeletal parameters; however, it failed to detect soft tissue accurately. WebCeph and Ceph Assistant showed greater variability, especially for linear measurements (ICC < 0.50). Positional errors markedly reduced measurement accuracy, with linear parameters such as Go-Me showing the poorest agreement across all software.

Conclusions
AI-based cephalometric software demonstrated variable accuracy depending on the cephalometric measurement, and this pattern was exacerbated when cephalograms contained positional errors. Accordingly, oversight by expert clinicians is still required to minimize errors.
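For readers unfamiliar with the agreement statistic reported above, a minimal sketch of the two-way random-effects, absolute-agreement, single-measurement ICC — ICC(2,1) — is shown below. The function name `icc_2_1` and the use of plain NumPy are illustrative assumptions; the abstract does not state which ICC form or software was used for its computation.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    data: array of shape (n_subjects, k_raters), e.g. one cephalometric
    measurement per row, with manual and AI values in the columns.
    Note: this specific ICC form is an assumption for illustration.
    """
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-rater means
    # Two-way ANOVA sums of squares
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    # Mean squares
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    # Shrout & Fleiss ICC(2,1) formula
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Under the common interpretation used in the abstract, values above 0.90 indicate excellent agreement and values below 0.50 indicate poor agreement.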