Congratulations to Zhao Huiyu on the acceptance of his 2D/3D automatic spine registration work by BSPC

The work of our center's doctoral student Zhao Huiyu, "Automatic 2D/3D spine registration based on two-step transformer with semantic attention and adaptive multi-dimensional loss function," has recently been accepted for publication by the journal Biomedical Signal Processing and Control.

An essential technique for spine surgery guidance is the registration of intraoperative 2D X-ray data with preoperative 3D CT. Previous deep-learning-based methods generally convert the 3D CT into a 2D projection before registration, which discards spatial information and therefore cannot satisfy the clinical requirements of a large adaptation range and high precision. This paper proposes a novel transformer-based two-step registration network that directly regresses the transformation parameters without dimension reduction of the 3D CT. The design of the reconstruction and segmentation modules, together with the adaptive loss function, not only expands the range of acceptable deformations but also improves registration accuracy, increasing the potential of deep learning in spinal surgery navigation.
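For intuition, directly regressing the transformation here means predicting a 6-DoF rigid transform (three rotations and three translations). The sketch below is illustrative only and not the authors' code: the function name and the Z-Y-X Euler-angle convention are assumptions; it simply shows how such a parameter vector maps to a 4x4 homogeneous matrix.

```python
# Illustrative only: a 6-DoF rigid transform (3 rotations + 3 translations),
# the kind of parameter vector a registration network typically regresses.
# Function name and rotation convention are assumptions, not the paper's code.
import numpy as np

def params_to_matrix(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 homogeneous rigid transform from Euler angles (radians)
    and translations (mm), using a Z*Y*X rotation convention."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx        # composed rotation
    T[:3, 3] = [tx, ty, tz]         # translation in mm
    return T

# e.g. a 3-degree rotation about one axis plus a 5 mm shift
print(params_to_matrix(0.0, 0.0, np.deg2rad(3), 5.0, 0.0, 0.0))
```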

To improve registration accuracy in image-guided spine surgery while reducing the computational burden, we designed TS-SAR-NET, a two-step registration neural network based on semantic features. The network first processes the 2D projection data with a 3D reconstruction network and converts the 3D CT image into features with a segmentation network. These features are fed into a transformer-based coarse-to-fine two-step registration network, satisfying the large-range and high-precision requirements of surgical navigation. In addition, we integrate an adaptive loss function into the registration architecture, combining losses in the parameter, pixel, and perceptual domains to further improve registration accuracy. This approach markedly improves the quality and reliability of surgical image registration without sacrificing computational efficiency, providing surgeons with more precise navigation information.
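How the loss weights adapt is detailed in the paper; as one plausible instantiation, the sketch below combines the three loss terms with learned homoscedastic-uncertainty weights in the style of Kendall et al. (2018). The class name and the choice of L1/MSE per term are assumptions, not the published formulation.

```python
# A minimal sketch of an adaptive multi-dimensional registration loss,
# assuming homoscedastic-uncertainty weighting as the adaptive scheme;
# the paper's actual weighting rule may differ.
import torch
import torch.nn as nn

class AdaptiveMultiDimLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # one learnable log-variance per term; lower variance -> larger weight
        self.log_vars = nn.Parameter(torch.zeros(3))

    def forward(self, pred_params, gt_params, pred_img, gt_img,
                pred_feat, gt_feat):
        # parameter-domain loss on the regressed transform parameters
        l_param = nn.functional.l1_loss(pred_params, gt_params)
        # pixel-domain loss between the registered and target images
        l_pixel = nn.functional.mse_loss(pred_img, gt_img)
        # perceptual-domain loss on features (e.g., from a pretrained CNN)
        l_perc = nn.functional.mse_loss(pred_feat, gt_feat)
        losses = torch.stack([l_param, l_pixel, l_perc])
        # weight each term by exp(-s) and regularize with s
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()
```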

Experimental results demonstrate the effectiveness and generalizability of the proposed method, which achieves state-of-the-art performance on both synthesized and clinical data with average mTREs of 0.96 mm and 2.32 mm, respectively. To evaluate the different methods intuitively, we visualized and compared the 2D X-ray to 3D CT registration results on the synthesized and clinical datasets, showing the corresponding mean target registration error (mTRE) below each image.
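mTRE denotes the mean target registration error: the average 3D distance between landmark points mapped by the estimated transform and by the ground-truth transform. A minimal sketch follows, assuming 4x4 homogeneous transforms; the function name and landmark choice are illustrative.

```python
# A sketch of the mean target registration error (mTRE): the average 3D
# distance between landmarks mapped by the estimated and ground-truth
# transforms. Landmark selection here is illustrative.
import numpy as np

def mtre(points, T_est, T_gt):
    """points: (N, 3) landmark coordinates in mm; T_est, T_gt: 4x4 matrices."""
    homog = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous
    p_est = (homog @ T_est.T)[:, :3]
    p_gt = (homog @ T_gt.T)[:, :3]
    return np.linalg.norm(p_est - p_gt, axis=1).mean()      # in mm

# Example: random landmarks under a pure 2 mm shift vs. identity -> mTRE = 2.0
pts = np.random.rand(10, 3) * 100
T_shift = np.eye(4)
T_shift[0, 3] = 2.0
print(mtre(pts, T_shift, np.eye(4)))
```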

Among the compared methods, ours achieves the best results in both visual consistency and quantitative analysis. In the segmentation results for individual 3D vertebral segments, the fixed spine region and the overlapping region are marked in green, while the region estimated by the model is marked in red. Compared with other methods, the approach based on key semantic information shows better visual consistency in registering individual vertebral segments. Moreover, the high registration performance under large deformations reflects the robustness of the method in complex scenarios.

Abstract: An essential technique for spine surgery guidance is the registration of intraoperative 2D X-ray with preoperative 3D CT, which enables the correlation of real-time imaging with surgical planning. Previous deep-learning-based methods generally need to convert 3D CT into a 2D projection for further registration, resulting in the loss of spatial information and failing to satisfy the clinical requirements of a large adaptation range and high precision. In this paper, a novel transformer-based two-step registration network is proposed to directly regress the transformation parameters without dimension reduction of the 3D CT. The spine information is extracted by reconstruction and segmentation modules and is further used in the registration network that utilizes both the original images and the spine features. Meanwhile, an adaptive multi-dimensional loss function containing both parameter-domain loss and graph-domain loss is designed to be more consistent with the registration mechanism. Both improvements expand the range of acceptable deformations and increase registration accuracy. We demonstrate the validity and generalizability of the proposed method by achieving state-of-the-art performance on both synthesized and clinical data with an average mTRE of 0.96 mm and 2.32 mm. Further, the high registration performance over a large deformation reflects the robustness of the methods in complex scenarios. The proposed methods enhance the tremendous potential of deep learning in spinal surgery navigation.