P027: GRADING INTRAOPERATIVE PHYSIOLOGICAL "INSTABILITY" USING AN EXPERT REVIEW SYSTEM
Avner Sidi1, Nikolaus Gravenstein, MD1, Chris Giordano, MD1, Jonathan Sidi2; 1Department of Anesthesiology, University of Florida College of Medicine, Gainesville, FL, 2Department of Statistics, Hebrew University, Jerusalem
Introduction: The definition of hemodynamic instability/variability varies between studies,1-3 and only a few studies have explored hemodynamic variability mathematically.4-7 We used anesthesia faculty “experts” to grade “instability” from clinical records before applying the grading to an established algorithm, taken from the financial industry, that defines stability measures.8
Methods: To evaluate the variables and grade instability, we created a clinical expert review system. Data were gathered from the anesthesiology clinical databank under an IRB protocol. We selected 20 cases lasting >3 h. Selected cases were displayed, with full hemodynamic/physiological tracings, to 20 expert reviewers, who graded them based on their professional opinion. The reviewers were instructed to record a change in score only when it exceeded 2 score values (Table 1). There were four rounds, with each of the 20 reviewers reviewing 5 cases per round.
The reviewers graded stability according to the “stability scale” (Table 1) at multiple time intervals during each case. The grading of each case was collated to assess for any differentiation between reviewers according to experience (years in practice), emergency or acute specialty (emergency medicine/intensive care/trauma), and significant deviation from the group mean scores.
The physician scores are defined as a polytomous outcome: a discrete ordered scale on which 1 equals the lowest and 10 the highest patient hemodynamic instability (Table 1). A two-level random intercept model was fit, with the GLIMMIX procedure in SAS 9.4, to evaluate whether mean scores differed by physician experience and physician occupation type (specialty).
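The cumulative-logit (proportional-odds) formulation that GLIMMIX uses for such an ordinal outcome can be sketched in plain Python. This is a minimal illustration only: the thresholds, fixed effect, and reviewer random intercept below are invented values, not fitted estimates from our data.

```python
import math

def cumulative_logit_probs(eta, thresholds):
    """Map a linear predictor eta to probabilities over ordered
    categories 1..K via a cumulative-logit (proportional-odds) link."""
    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))
    # P(Y <= k) = logistic(theta_k - eta) for each threshold theta_k
    cum = [logistic(t - eta) for t in thresholds] + [1.0]
    # Category probabilities are successive differences of the cumulative ones
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical cut points for a 1-10 stability scale (9 thresholds)
thresholds = [-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
# eta combines a fixed effect (e.g., specialty) with a reviewer-specific
# random intercept, as in a two-level model (illustrative values)
eta = 0.5 + (-0.3)
probs = cumulative_logit_probs(eta, thresholds)
```

In the fitted model, the random intercept term captures a reviewer's systematic tendency to score high or low across all cases, while the fixed effects carry the experience and specialty contrasts.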
Results: We descriptively compared differences among physicians according to experience or specialty and bootstrapped confidence intervals for each level. We also descriptively compared physicians whose extreme mean score values deviated from the rest of the group. We then fit the results to a three-level random intercept model.
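The percentile bootstrap used for the per-level confidence intervals can be sketched with the Python standard library; the group scores below are hypothetical, not our study data.

```python
import random
import statistics

def bootstrap_ci(scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of one group's scores."""
    rng = random.Random(seed)
    n = len(scores)
    # Resample with replacement, recompute the mean each time, then
    # take the empirical alpha/2 and 1 - alpha/2 quantiles
    means = sorted(
        statistics.fmean(rng.choices(scores, k=n)) for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical whole-case mean stability scores for one reviewer group
group_scores = [3.1, 4.0, 2.8, 5.2, 3.7, 4.4, 3.9, 4.8, 3.3, 4.1]
lo, hi = bootstrap_ci(group_scores)
```

The same resampling would be repeated within each experience or specialty level to get a comparable interval per level.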
For the whole-case mean score across all cases, we found apparent differences among the groups within each case, but these differences were not consistent. The statistically significant differences in scores still amounted to less than two score levels, indicating relatively low or minimal clinical difference.
To compare the effect of a non-emergency specialty with that of an emergency specialty, we applied hypothesis tests for estimates of differences, representing the estimated odds ratios. The interpretation of these odds ratios is that physicians with a background in emergency cases were 1.72 times (95% confidence limits, 1.031-2.878) as likely to give a lower score as non-emergency physicians. When we evaluated scoring by experience, we did not find evidence of significant differences in the average stability score dependent on years of experience (Figure 1).
Conclusions: Our findings indicate that grading stability with a reviewer system alone is subject to the effects of previous exposure and clinical specialty; it must therefore be supported by an objective system that explores hemodynamic variability mathematically, using an algorithm to assess hemodynamic stability.