Artificial Intelligence in Spinal Imaging and Patient Care: A Review of Recent Advances
Abstract
Artificial intelligence (AI) is transforming spinal imaging and patient care through automated analysis and enhanced decision-making. This review presents a clinical task-based evaluation, highlighting the specific impact of AI techniques on different aspects of spinal imaging and patient care. We first discuss how AI can potentially improve image quality through techniques like denoising or artifact reduction. We then explore how AI enables efficient quantification of anatomical measurements, spinal curvature parameters, vertebral segmentation, and disc grading. This facilitates objective, accurate interpretation and diagnosis. AI models now reliably detect key spinal pathologies, achieving expert-level performance in tasks like identifying fractures, stenosis, infections, and tumors. Beyond diagnosis, AI also assists surgical planning via synthetic computed tomography generation, augmented reality systems, and robotic guidance. Furthermore, AI image analysis combined with clinical data enables personalized predictions to guide treatment decisions, such as forecasting spine surgery outcomes. However, challenges still need to be addressed in implementing AI clinically, including model interpretability, generalizability, and data limitations. Multicenter collaboration using large, diverse datasets is critical to advance the field further. While adoption barriers persist, AI presents a transformative opportunity to revolutionize spinal imaging workflows, empowering clinicians to translate data into actionable insights for improved patient care.
INTRODUCTION
Providing optimal spine patient care is becoming increasingly complex due to the rapid growth of patient data, the rising number of spine patients, and expanding treatment options. The large amount of information from medical imaging and electronic medical records (EMRs), combined with growing patient volumes driven by an aging population, presents a major challenge for efficient data processing and analysis. At the same time, the expanding range of treatment methods, from minimally invasive procedures to personalized medicine, demands careful consideration of risks, benefits, and suitability for each individual. Traditional methods may struggle in this situation, highlighting the potential of artificial intelligence (AI) to process large amounts of data, identify patterns, and support evidence-based decision-making for more efficient, personalized, and effective spine care [1,2].
Medical research is undergoing a revolution. AI, powered by machine learning (ML) and deep learning (DL), is unlocking new discoveries with the potential to improve diagnoses, treatments, and patient outcomes. While spine imaging research faced early challenges due to complex anatomical structures, data scarcity, and nonstandardized protocols [3], the field has witnessed a remarkable surge in research and commercially available solutions, particularly in the past 5 to 6 years. This rapid advancement is readily apparent in a PubMed search, in which 86% of all AI spine research papers have been published since 2017.
While traditional research methods like regression and correlation models remain valuable tools, the field is experiencing a paradigm shift towards AI algorithms, particularly ML, DL, and generative models. Compared to regression models, which are limited to predicting simple linear relationships, ML can learn complex, nonlinear interactions and hidden patterns within data [4]. DL builds on this by utilizing numerous layers of computations to extract more complex features from high-dimensional data, such as image data. While both ML and DL can be used to build clinical decision support systems or prognosis prediction models, DL also excels at extracting meaningful features from images, enabling applications like landmark detection and disease classification. Generative models, such as generative adversarial networks and large language models (LLMs), go even further. They learn the underlying patterns and relationships within data and then use this knowledge to create entirely new data [5]. This opens up possibilities like synthetic computed tomography (CT) images generated from magnetic resonance imaging (MRI) images [6], radiology reports written by AI [7], and chatbots to answer frequently asked questions for patients [8].
This review focuses on recent advancements in AI-powered spine image analysis and patient care. However, rather than focusing on the role of a specific AI algorithm, our aim is to organize the discussion around clinical tasks (Fig. 1). To do this, we searched PubMed and Google Scholar for relevant articles published between January 2017 and March 2024. Our search query combined terms related to AI, ML, and DL with those related to the spine (including “spine” or “vertebra”) and spinal imaging modalities (including “radiograph,” “CT,” and “MR”). We then assessed the retrieved studies for their methodologies, evaluation metrics, and contributions to spine surgery or research. Additionally, we hand-searched the reference lists of included articles to identify relevant studies not captured by the initial database search. This process resulted in a final selection of 134 articles for review.
CURRENT STATUS AND CLINICAL APPLICATIONS ACROSS PATIENT CARE PROCESS
1. Image Improvement
The improvement of image quality through AI is a well-established field already implemented in clinical practice, particularly in CT and MRI. Traditional interpolation techniques based on simple mathematical functions artificially increase resolution but often introduce blurring. DL has taken this process to the next level, generating more realistic and sharper images [9]. This AI-based interpolation scheme is incorporated as part of DL-based image improvement protocols in the latest scanners.
In CT scans, the radiation dose follows the “as low as reasonably achievable” (ALARA) principle. Lower doses typically lead to noisier images. Fortunately, AI algorithms now effectively reduce this noise postacquisition, enabling high-quality images at reduced radiation doses [10].
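As a concrete illustration of how such postacquisition denoising works, the following minimal sketch trains a small residual convolutional network to map simulated noisy (low-dose) slices toward their clean (standard-dose) counterparts. The network, data, and hyperparameters are toy assumptions for illustration only, not any vendor's implementation.

```python
# Minimal sketch of a supervised CT-denoising CNN (DnCNN-style residual learning):
# the network predicts the noise component, which is subtracted from the input.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self, channels=32, layers=5):
        super().__init__()
        blocks = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        blocks += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*blocks)

    def forward(self, x):
        return x - self.net(x)  # residual learning: subtract the predicted noise

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(8, 1, 64, 64)               # stand-in for standard-dose slices
noisy = clean + 0.1 * torch.randn_like(clean)  # simulated low-dose noise

for step in range(100):                        # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimizer.step()
```

In practice, such models are trained on paired low-dose and standard-dose acquisitions (or realistic noise simulations) rather than random tensors.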
In MRI, scan time serves as a trade-off for image quality, analogous to the radiation dose in CT: achieving high image quality often requires longer scan times, leading to patient discomfort and potential movement artifacts. Traditional techniques like parallel imaging and compressed sensing address scan time reduction but still struggle with noise and artifacts. Recent advancements in DL-based reconstruction have overcome these challenges. By combining traditional schemes with DL-based optimization, DL reconstruction substantially improves the signal-to-noise ratio while markedly reducing scan times [11,12]. In spine MRI, DL-reconstructed images have been found to be interchangeable with standard MRI for detecting various abnormalities, offering excellent image quality with a remarkable 70% reduction in scan time [13]. Major magnetic resonance (MR) vendors have already incorporated these features, and vendor-neutral solutions for spine MR are also commercially available [13].
However, some concerns remain regarding the potential for DL-based improvements to alter lesion details or introduce artifacts [13,14]. Ongoing research and validation are crucial to ensure the reliability and safety of these AI-powered tools in clinical practice.
2. Assistance at the Initial Diagnostic Stage
Spinal imaging plays a critical role in the initial step of patient care, with measurements and landmark locations among the most fundamental pieces of information. This information guides treatment planning and risk assessment, ensuring precise and objective decision-making for effective patient management. For AI-based image analysis, accurate segmentation of the vertebral body and disc spaces is a critical first step. Segmentation assists in spine numbering and data preparation for other spine analysis algorithms. This is why structure segmentation models have been a significant focus of spine AI research, with numerous public datasets available to train them [15]. Additionally, recent breakthroughs in natural image segmentation techniques have spurred the development of automated segmentation tools for spine surgical hardware, achieving near-perfect Dice scores (Fig. 2) [16].
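For reference, the Dice score used throughout this literature quantifies the overlap between a predicted mask and a manual reference mask; a minimal sketch with toy two-dimensional masks is shown below.

```python
# Minimal sketch: the Dice similarity coefficient for binary segmentation masks.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# toy masks standing in for a vertebral-body segmentation and its reference
truth = np.zeros((10, 10), dtype=int)
truth[2:8, 2:8] = 1
pred = np.zeros((10, 10), dtype=int)
pred[3:8, 2:8] = 1
print(round(dice_score(pred, truth), 3))  # ~0.909
```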
Assessment of kyphosis, lordosis, and scoliosis is another primary indication of spinal imaging. However, accurate assessment of spinal curvature relies on hand-measured parameters, such as the Cobb angle, pelvic incidence (PI), sacral slope, pelvic tilt, lumbar lordosis (LL), and sagittal vertical axis. This is a laborious task, prone to high interobserver variability, and thus has been one of the early targets of spine AI research. As in human analysis, supervised DL models can be trained to identify or segment specific landmarks on anteroposterior (AP) and lateral spine radiographs, automatically connecting them to calculate spinal parameters. Advanced models can now detect up to 78 landmarks and 18 spinopelvic parameters in whole spine lateral radiographs, demonstrating excellent agreement with human measurements [17-19]. In AP radiographs, automated Cobb angle measurements exhibit lower mean error than human intra-/interobserver variation (2°–6.32° vs. ± 9.6°/ ± 11.8°), showing potential for scoliosis screening in children (sensitivity, 95.7%; specificity, 88.1%) or progression monitoring in postoperative patients [20-22]. Several commercially available software solutions have already received U.S. Food and Drug Administration (FDA) or Korean Ministry of Food and Drug Safety approval for landmark and curvature detection [23,24].
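To illustrate the downstream computation, the following minimal sketch derives a Cobb angle from the endplate corner points that such a landmark-detection model would output; the coordinates are hypothetical image points, not outputs of any published model.

```python
# Minimal sketch: Cobb angle as the angle between the superior endplate of the
# upper end vertebra and the inferior endplate of the lower end vertebra.
import numpy as np

def cobb_angle(upper_endplate, lower_endplate):
    """Angle (degrees) between two endplate lines, each given by two (x, y) points."""
    v1 = np.subtract(upper_endplate[1], upper_endplate[0])
    v2 = np.subtract(lower_endplate[1], lower_endplate[0])
    cos = abs(np.dot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

upper = [(100, 210), (180, 190)]   # hypothetical corners of the upper endplate
lower = [(95, 420), (175, 455)]    # hypothetical corners of the lower endplate
print(round(cobb_angle(upper, lower), 1))  # ~37.7 degrees for these toy points
```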
Vertebral segmentation and numbering within spine CT and MR images have become a well-established area of research, offering significant advancements in medical imaging analysis. This technology automatically identifies and delineates specific anatomical structures, most commonly the vertebral body, intervertebral disc, and spinal canal. Reportedly, these automated algorithms achieve impressive accuracy, with Dice scores ranging from 89% to 95% for these key structures [25]. Several medical technology vendors have already received FDA approval for spine labeling software, and their incorporation into PACS (Picture Archiving and Communication Systems) is already underway [23]. The true power of automatic segmentation lies in its ability to facilitate quantitative analysis and automated diagnoses. For instance, automated vertebral segmentation enables (1) the detection of abnormal vertebral heights [26], (2) the identification of abnormal spinal curvatures [27], and (3) the planning of surgical procedures and radiotherapy [28]. With disc segmentation, we can automatically grade disc degeneration in sagittal MRIs [29]. Furthermore, research with neural foramen segmentation has shown that the cross-sectional area of the neural foramen directly correlates with patient height and inversely correlates with age [30]. These segmentation models have opened doors for automatic diagnoses of spinal stenosis and neural foraminal stenosis [31-33].
3. Image Interpretation and Diagnosis
1) Disc herniation and degeneration
While disc and spinal canal segmentation models play a crucial role in identifying anatomical structures, their potential extends far beyond that. In MRI, AI models are making significant strides in detecting and grading lumbar spinal stenosis. These models are trained to identify stenosis in the lumbar central canal, lateral recess, and neural foramina using axial or sagittal images [32,34]. One study reported remarkable agreement between a trained model and subspecialized radiologists in classifying stenosis severity (normal/mild vs. moderate/severe) in an external test set, achieving kappa values ranging from 0.95 to 0.96 [32]. AI-powered MRI analysis can significantly benefit radiologists by reducing interpretation time (124–274 seconds vs. 47–71 seconds, p < 0.001) and improving interobserver agreement (κ = 0.71 and 0.70 with DL vs. 0.39 and 0.39 without DL, both p < 0.001) [7]. FDA-approved solutions are already emerging. For example, lumbar spine report generation software has demonstrated high sensitivity and specificity for detecting central canal stenosis in 2 separate studies (sensitivity and specificity of 92.70% and 99.04% in one study, and 77.14% and 98.95% in the other) [35,36].
Beyond MRI, AI-powered spinal stenosis diagnosis is also making progress in CT scans and radiographs. AI models have achieved diagnostic accuracies of 83%–88% for the spinal canal and 71%–75% for the lateral recess on axial CT scans [37]. This opens doors for evaluating disc herniation in CT scans, or even for opportunistically diagnosing disc herniation on abdominal and chest CT scans. For cervical spine radiographs, AI approaches have demonstrated promising results in detecting ossification of the posterior longitudinal ligament (accuracy, 0.88; area under the receiver operating characteristic curve [AUC], 0.94; surpassing the performance of spine physicians) [38] and spondylotic myelopathy (accuracy, 71.1%; AUC = 0.864) [39]. Another study reported AUC values up to 90% for spinal stenosis in lumbar radiographs [40], suggesting the potential of AI as a triage tool for further imaging.
2) Fracture
Compression fracture, the most common type of spinal fracture, has been one of the first targets for fracture detection in radiographs. Current DL models achieve impressive accuracy, reaching around 90% in compression fracture detection [41-43] and even in differentiating between old and new compression fractures (AUC = 0.80) [44]. One model reportedly performed comparably to human readers (accuracy of 93%, p < 0.001), with lung markings as the primary source of false positives [45]. Further advancing the field, an FDA-approved universal fracture detector is already in widespread use. It efficiently detects fractures in radiographs of all limbs, the spine, and the ribs. For spine fractures specifically, a recent study compared human, AI-only, and AI-assisted human detection. The results revealed a clear advantage of AI-assisted human interpretation, with sensitivity/specificity of 94.5%/100% compared to 92.4%/98.4% for humans alone and 89.1%/62.2% for AI alone [46].
More recently, advancements in fracture detection have extended to CT scans. In 2022, the winners of the cervical spine fracture detection challenge [47] achieved an AUC of 0.96 (95% confidence interval [CI], 0.95–0.96), sensitivity of 88% (95% CI, 86%–90%), and specificity of 94% (95% CI, 93%–96%) [48]. The same year also saw the introduction of an FDA-approved cervical spine fracture detector. Research further extends to thoracolumbar CT scans, with some studies demonstrating the ability to classify cases as showing no injury or 1 of 4 fracture types (compression, burst, translational/rotational, and distraction), with accuracies ranging from 68.6% (burst) to 89.3% (distraction) [49].
3) Inflammation and tumors
Identifying and classifying inflammatory diseases, infections, and tumors are actively researched fields within spine imaging, each too vast to explore fully here. However, AI models are already demonstrating impressive capabilities, achieving expert-level performance in several tasks. For example, in pelvic radiographs, DL models can diagnose sacroiliitis with accuracy that is on par with experts (Cohen kappa = 0.79) [50]. Similarly, they can accurately quantify inflammatory sacroiliitis in MRI [51,52] and differentiate between tuberculous and pyogenic spondylitis with an AUC of 0.802 (compared to 0.729 for human experts, p = 0.079) [53].
Spinal oncology has also seen exciting advancements in radiomics and DL. Algorithms can now detect metastatic lesions in CT scans (sensitivity, 75%–90% [54,55]) and be applied to reduce interrater variability [56]. In MRI, DL models outperformed fourth-year residents in differentiating malignant vertebral fractures, achieving 90% sensitivity and 79% specificity [57]. Furthermore, ML and DL can discriminate between normal and pathologic bone marrow patterns in MRI [58,59] and distinguish spine metastases from lung cancer versus other primary origins [60].
4. Surgical Planning and Intraoperative Use
CT aids surgical planning by detailing bone anatomy and pathology and guiding surgical approaches. Moreover, navigation systems using preoperative CT images improve screw placement accuracy. Synthetic CT scans are computer-generated images resembling conventional CT scans but are created from other imaging modalities, such as MR images, through generative models or other DL algorithms [61]. This technology has several advantages over conventional CT, such as elimination of radiation exposure (radiation dose of an average lumbar spine CT scan: 3.5 mSv) [2], improved visualization of metal structures and the peripheral field of view, and high-resolution depiction of soft tissue structures like intraosseous hemangiomas [62]. Studies comparing visualization of body structures, artifacts, and geometrical measurements between synthetic and traditional CT scans have found the synthetic versions to be noninferior [6,62].
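A minimal sketch of the underlying idea is shown below, reduced to a single image-translation network trained with an L1 loss on simulated paired MR/CT slices; real systems use adversarial or diffusion objectives, far larger networks, and carefully co-registered clinical data, so everything here is an illustrative assumption.

```python
# Minimal sketch of MR-to-synthetic-CT image translation with a toy generator.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, mr_slice):          # MR slice in, synthetic-CT slice out
        return self.net(mr_slice)

gen = TinyGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

mr = torch.rand(4, 1, 64, 64)             # stand-in paired MR slices
ct = torch.rand(4, 1, 64, 64)             # stand-in co-registered CT slices

for _ in range(50):                        # toy training loop with an L1 objective
    opt.zero_grad()
    loss = nn.functional.l1_loss(gen(mr), ct)
    loss.backward()
    opt.step()
```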
However, synthetic CT is still a relatively young field with limited clinical use. While some applications like presurgical and radiotherapy planning are emerging [61,63,64], caution is advised when using measurements from synthetic spine CT scans. Studies have shown inaccuracies in pedicle measurements performed in the axial plane, with relative errors reaching up to 34% [65].
Augmented reality (AR) and virtual reality (VR) are emerging technologies demonstrating promising benefits in various healthcare fields, including robotic surgery, laparoscopic surgery, and, notably, orthopedic surgery for the spine [66]. One of their main applications is as a navigation tool in the operating room. AR and VR systems use computer vision techniques to process preoperative or intraoperative images (radiographs, CT scans, or MRIs) and overlay relevant anatomical structures, potential screw trajectories, or ideal screw locations onto the surgical field, guiding surgeons with real-time visualization [67].
The integration of DL into AR and VR systems further enhances their capabilities, particularly in object and landmark detection within images. For example, DL has been used to track a specific vertebra of interest in fluoroscopic images with high accuracy (mean error of 2.27%) [68] and identify 7 anatomical landmarks on intraoperative lumbar spine CT scans with minimal error [69]. Additionally, DL has shown promise in improving robotic screw placement [1,70] and identifying bone drill breakthroughs during surgery [71]. Classification models powered by DL can even differentiate various types of pedicle screws [72] and anterior cervical fusion systems [73] in radiographs.
Screw navigation systems are getting a boost from neural networks. These AI tools automate screw planning, including screw size and trajectory. One study reports a dramatic 90% reduction in workflow time, with just 3 out of 130 screws requiring manual adjustment [74]. Neural networks can also personalize screw placement for each patient’s bone structure. This customization helps maximize pull-out force and reduce screw failure, which will be an essential benefit for patients with osteoporosis [75].
We anticipate that AI will play a crucial role in addressing some current limitations of AR and VR, such as low image resolution and steep learning curves. AI-powered AR and VR solutions hold potential, offering features like determining ideal spine alignment and implant size, compensating for motion, and even facilitating 3D printing [66,67].
5. Opportunistic Diagnosis
Opportunistic screening refers to the use of incidental information from existing medical images acquired for a different purpose [76]. For example, abdominal radiographs and CT scans obtained for other diagnoses may incidentally reveal compression fractures [77,78]. One of the most promising areas for this approach lies in body composition imaging, particularly for the assessment of bone mineral density (BMD).
While dual-energy x-ray absorptiometry (DXA) remains the standard for BMD measurement, CT scans offer valuable insights, especially for individuals with obesity, severe degenerative spine disease, or postoperative spine conditions where DXA accuracy can be compromised. Quantitative CT using reference phantoms or dual-energy CT has been used for these cases, but their accessibility remains limited. However, advancements in ML are now enabling the extraction of reliable BMD information from various CT scans, including the abdomen, chest, and lumbar spine, transforming them into valuable opportunistic screening tools [79-81].
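As a simple illustration of this opportunistic workflow, the sketch below computes the mean attenuation within a stand-in L1 trabecular mask on a routine CT volume; the volume, the mask, and the 100-HU cutoff are illustrative assumptions rather than a validated segmentation or threshold.

```python
# Minimal sketch: opportunistic BMD screening as the mean Hounsfield-unit (HU)
# attenuation inside an automatically segmented L1 trabecular region.
import numpy as np

ct_volume = np.random.normal(150, 40, size=(40, 256, 256))   # stand-in HU volume
l1_mask = np.zeros_like(ct_volume, dtype=bool)
l1_mask[18:22, 100:140, 110:150] = True                       # stand-in L1 trabecular ROI

mean_hu = ct_volume[l1_mask].mean()
print(f"Mean L1 trabecular attenuation: {mean_hu:.1f} HU")
if mean_hu < 100:   # illustrative cutoff only; thresholds require validation
    print("Flag for formal BMD work-up (e.g., DXA).")
```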
Opportunistic CT scans are also revealing valuable insights into sarcopenia, a condition linked to vertebral compression fractures and increased mortality across various diseases [82,83]. Sarcopenia is strongly associated with the cross-sectional area and mean density of muscles like the psoas or abdominal wall in CT scans [76]. AI algorithms can now automatically segment muscles, subcutaneous and visceral fat, and vertebral bodies for sarcopenia and BMD measurement [84,85]. Body composition measurements from opportunistic imaging could even potentially aid in the risk stratification of patients undergoing spinal surgery or predict future fractures [86,87].
6. Clinical Decision-Making, Prediction, and Prognostication
ML and DL algorithms are revolutionizing the way we predict patient outcomes in spine surgery. While still in their early stages of clinical implementation, these powerful tools hold immense promise for personalized care and improved decision-making.
While traditional studies based on well-designed large cohorts or linear regression models have achieved success [88], ML algorithms can capture both linear and nonlinear relationships between diverse factors and outcomes, requiring less human intervention in model development [89]. This leads to superior predictive performance compared to traditional methods [90,91]. Furthermore, ML enables clustering of patients with similar data patterns. This paves the way for prognostication and treatment optimization tailored to specific groups of patients [92,93].
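The following minimal sketch, using entirely simulated data and hypothetical feature names, illustrates this point: a gradient-boosted tree model recovers a nonlinear interaction that a logistic regression misses, yielding a higher AUC.

```python
# Minimal sketch: linear versus nonlinear outcome modeling on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))                       # hypothetical clinical features
logit = 1.5 * X[:, 0] * X[:, 1] - np.abs(X[:, 2])    # nonlinear interaction term
y = (logit + rng.normal(scale=0.5, size=2000) > 0).astype(int)  # synthetic outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("logistic AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))
print("boosted  AUC:", roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1]))
```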
However, it is important to note that successful outcome prediction using ML requires high-quality, well-structured datasets with minimal missing data and clear outcome measures.
1) Herniated disc disease surgery
Herniated disc disease is a common spine condition where ML has shown utility in surgical decision support and postoperative prognosis. One key area of focus involves predicting recurrent herniation after surgery. Several ML studies have identified high-risk patients based on patient demographics, clinical parameters, and pre- and postoperative pain scores [90,94]. Significant factors reported to be associated with reherniation included pain scores, Oswestry Disability Index (ODI), PI–LL mismatch, body mass index, coronal angulation, duration of symptoms, and age [94]. Additionally, incorporating radiographic features such as facet orientation, herniation type, Modic changes, and disc calcification has further enhanced prediction accuracy for recurrent lumbar disc herniation [95].
The decision to proceed with surgery for herniated disc disease involves complex considerations. Models trained on patient demographics, questionnaire data, and MRI results exhibit promising potential in surgical triaging, predicting surgical referrals with AUCs ranging from 0.68 to 0.88 [96,97]. Similar models have also been used to predict improvements in quality of life after surgery (AUCs up to 0.78) [98], or after conservative therapy (100% accuracy for 12 scale ODI) [91]. Mourad et al. [99] built a surgical recommendation model based on clinical symptoms, MRI findings, and patient demographic factors. The root mean square error between model predictions and ground truth was 0.0964, with agreement being higher than agreement between individual doctors.
Moreover, ML models trained with demographic information, comorbidities, and preoperative/intraoperative findings have shown efficacy in predicting hospital length of stay or readmission after spine surgeries like anterior cervical discectomy and fusion [100-102], or lumbar single-level laminectomy [103]. This could aid in determining the appropriate care setting (outpatient vs. inpatient) or optimize hospital resource allocations.
2) Spinal deformity
Adult spinal deformity surgery is complex and carries a high risk of complications. To improve outcomes and reduce these complications, researchers have developed algorithms to aid clinical decision-making [104]. For instance, a study by the International Spine Study Group used ML-based clustering and identified 4 prognostic phenotypes [105]. This approach showed that younger, more resilient patients with good mental health were less likely to need repeat surgery than older, frail patients with poorer mental health. Other studies by Scheer et al. employed decision tree ensembles to predict proximal junctional kyphosis (AUC = 0.896) [106] and pseudoarthrosis (AUC = 0.947) [107].
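A minimal sketch of such phenotype discovery is shown below, clustering standardized baseline features with k-means; the features, their values, and the cluster count are entirely synthetic and do not reproduce the published pipeline.

```python
# Minimal sketch: unsupervised clustering of baseline features into phenotypes.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# hypothetical baseline features: age, frailty index, mental-health score, PI-LL mismatch
features = np.column_stack([
    rng.normal(65, 10, 300),
    rng.uniform(0, 1, 300),
    rng.normal(45, 12, 300),
    rng.normal(15, 10, 300),
])

X = StandardScaler().fit_transform(features)          # put features on a common scale
phenotypes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(phenotypes))                        # number of patients per phenotype
```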
3) Postoperative complication
ML models are demonstrating remarkable potential in predicting major postoperative complications following spine surgery, such as wound infections, thromboembolism, and mortality [108,109]. These models leverage diverse demographic and clinical parameters, often exceeding the performance of traditional logistic regression models and established risk scores like the American Society of Anesthesiologists (ASA) physical status classification score [110].
For instance, Hopkins et al. developed an ML model that predicts postoperative surgical site infection after spinal fusion with a median AUC of 0.787 [111]. Their analysis revealed factors like congestive heart failure, chronic pulmonary failure, hemiplegia/paraplegia, and multilevel fusion as the most influential variables, providing valuable insights for risk stratification. Additionally, another study identified 10 key predictors, including age, gender, ASA physical status classification grade, surgical approach, and preoperative laboratory values, demonstrating the broad range of factors that ML models can incorporate for comprehensive risk assessment [112].
Hardware failure is another concern, especially for patients with known risk factors such as osteoporosis, long fixation length, and certain fixation end points [113]. You Only Look Once version 5 (YOLOv5), a convolutional neural network architecture specialized for both detection and classification, can help detect hardware failure in postoperative radiographs (Fig. 3) [114].
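For illustration, the sketch below runs a YOLOv5 detector on a postoperative radiograph via the public ultralytics/yolov5 torch.hub interface; the fine-tuned weight file and image path are hypothetical placeholders, not the published model.

```python
# Minimal sketch: YOLOv5 inference on a postoperative radiograph.
import torch

# "hardware_failure.pt" is a hypothetical fine-tuned weight file.
model = torch.hub.load("ultralytics/yolov5", "custom", path="hardware_failure.pt")
results = model("postop_lateral_radiograph.png")   # hypothetical image file
detections = results.pandas().xyxy[0]              # one row per detected box
print(detections[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```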
DISCUSSION
1. Limitations and Challenges
While AI holds immense promise for spinal imaging, several limitations require careful consideration. The “black box” nature of certain models and the lack of interpretability in decision-making raise concerns for clinical adoption. Fortunately, researchers are developing techniques to make AI models more interpretable. These techniques, such as gradient-weighted class activation mapping (Grad-CAM) and Shapley additive explanations (SHAP), can create heatmaps or plots that highlight what the model focused on when making predictions. This can help explain, for instance, how a model identified spinal stenosis [115] or predicted reherniation [94]. Even more recent advancements involve using LLMs to explain AI model predictions in a more comprehensive way, further improving interpretability [116].
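As a brief illustration of the SHAP approach, the sketch below computes per-feature contributions for a tree-based outcome model trained on synthetic data; the features, labels, and model are placeholders, and the mean absolute SHAP value is used as a simple global importance measure.

```python
# Minimal sketch: SHAP explanations for a tree-based outcome model.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # e.g., age, BMI, ODI, PI-LL mismatch (hypothetical)
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)  # synthetic binary outcome label

model = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)        # per-patient, per-feature contributions

# mean absolute SHAP value per feature = global feature importance
print(np.abs(shap_values).mean(axis=0))
```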
Another crucial hurdle is ensuring generalizability [117]. Recent studies suggest external validation may not adequately assess true model performance. They recommend that researchers prioritize recurrent local validation across diverse datasets to ensure real-world applicability [118]. The focus should shift from chasing perfect metrics to demonstrating tangible clinical value, such as reducing inter-reader variability or mitigating human errors.
Furthermore, the scarcity of high-quality labeled datasets poses a significant challenge. Compared to other radiology fields, large public benchmarks for spinal imaging remain inadequate. Existing public datasets (Table 1) tend to be small in size (< 1,000 images), fragmented across institutions, and heterogeneous in acquisition and populations, leading to overfitting and limited generalizability. While current models perform well for normal spinal anatomy, very few models are trained on datasets that include normal variants [119] or fractures [120], or that predict a range of disease entities [121,122].
2. Future Directions
Despite the challenges, the future of AI in spinal imaging is brimming with potential. Researchers are increasingly leveraging prospectively gathered data from clinical trials and multicenter datasets, yielding promising results [15]. For instance, studies utilizing the American College of Surgeons National Surgical Quality Improvement Program database have successfully generated ML models for predicting 30-day readmissions [111,123] or discharge to a nonhome facility [124] after lumbar fusion using predischarge information.
Another exciting approach involves directly incorporating EMR or imaging data into DL models. Natural language processing shows promise in predicting intraoperative vascular injuries with superior accuracy compared to traditional methods [125]. Similarly, DL models trained directly on images can identify hidden features invisible to the human eye. For example, DL models utilizing preoperative sagittal MRIs of the cervical spine as inputs have demonstrated higher accuracy in predicting early onset adjacent segment disease in cervical fusion patients compared to models using only preoperative clinical data [126,127]. Additionally, AI models trained on spinal radiographs and CT scans are outperforming established risk assessment tools in predicting fractures [128,129]. Finally, encouraging results have been achieved in predicting the early progression of scoliosis using spinal radiographs [130].
CONCLUSION
In conclusion, this review highlights the remarkable progress and potential of AI in advancing spinal imaging and patient care. From automated measurements to surgical planning, AI is transforming workflow efficiency, accuracy, and reliability across the spectrum of spine imaging. However, thoughtful validation and implementation are required to ensure the real-world validity, utility, and adoption of AI tools.
Notes
Conflict of Interest
The authors have nothing to disclose.
Funding/Support
This work was supported by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Number: 1711196789, RS-2023-00252804).
Author Contribution
Conceptualization: SL, JYJ, AM, JSK; Data curation: SL, JYJ; Formal analysis: SL, JYJ; Funding acquisition: JYJ; Methodology: SL, JYJ, AM, JSK; Project administration: JYJ, JSK; Visualization: SL, JYJ; Writing – original draft: SL, JYJ; Writing – review & editing: SL, JYJ, AM, JSK.