R1: OVERALL PERFORMANCE COMPARISON ON BROADUS-9.7K DATASET.
SAM’s official weights perform poorly in zero-shot inference (37.12%) due to the domain gap between natural and medical images. SAMUS improves performance (80.65%) but does not surpass the Single model, likely due to dataset heterogeneity. Our automatic-prompt model achieves comparable segmentation results (80.01%) with 66% fewer parameters. Ablation studies show that UniUSNet (79.89%) outperforms both its ablation variant (78.46%) and the Single model (78.43%), demonstrating the effectiveness of prompts. Although UniUSNet and UniUSNet w/o prompt have fewer parameters, they perform better on classification than on segmentation, possibly because of the network’s multi-branch structure, suggesting a need for more balanced learning across tasks.
R2: Examples of segmentation results. Each column, from left to right: original image, SAM, SAMUS, Single, UniUSNet w/o prompt, UniUSNet, and ground truth.
The segmentation results show that UniUSNet outperforms SAM and the other models by effectively using the nature and position prompts for a deeper understanding of each task.
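To illustrate how such prompts can condition a segmentation network, here is a minimal, hypothetical sketch: learned embeddings for a "nature" prompt and a "position" prompt are added to the encoder's patch tokens before decoding. The vocabulary sizes, feature dimensions, and fusion-by-addition are all assumptions, not the paper's actual implementation.

```python
# Hypothetical prompt-conditioning sketch (NOT the official UniUSNet code).
import torch
import torch.nn as nn

class PromptFusion(nn.Module):
    """Fuse discrete prompts into image features via learned embeddings."""
    def __init__(self, dim=256, n_nature=2, n_position=4):
        super().__init__()
        self.nature_emb = nn.Embedding(n_nature, dim)      # e.g., lesion nature
        self.position_emb = nn.Embedding(n_position, dim)  # e.g., scan position

    def forward(self, feats, nature_id, position_id):
        # feats: (B, N, dim) patch tokens from the encoder
        prompt = self.nature_emb(nature_id) + self.position_emb(position_id)  # (B, dim)
        return feats + prompt.unsqueeze(1)  # broadcast the prompt over all tokens

fusion = PromptFusion()
feats = torch.randn(2, 196, 256)                 # two images, 14x14 patch tokens
out = fusion(feats, torch.tensor([0, 1]), torch.tensor([2, 3]))
print(out.shape)  # torch.Size([2, 196, 256])
```

Additive fusion keeps the token count unchanged, so the decoder needs no modification; concatenating prompt tokens instead would be an equally plausible design.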
R3: t-SNE visualization.
We visualized the feature distributions of the BUS-BRA, BUSIS, and UDIAT datasets. The figure shows that the Single model exhibits a clear domain shift, while UniUSNet w/o prompt reduces this shift, indicating better domain adaptation. Prompts further minimize the domain shift.
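The visualization above can be reproduced along these lines: extract features per dataset, embed them jointly with t-SNE, and color points by dataset. The random features below are placeholders for the trained encoder's outputs, and the dataset names follow the text; everything else is an assumption.

```python
# Sketch of a multi-dataset t-SNE feature plot (placeholder features).
import numpy as np
from sklearn.manifold import TSNE
import matplotlib
matplotlib.use("Agg")  # headless backend for saving to file
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# In practice these come from the trained encoder, one array per dataset.
features = {name: rng.normal(loc=i, size=(100, 256))
            for i, name in enumerate(["BUS-BRA", "BUSIS", "UDIAT"])}

all_feats = np.concatenate(list(features.values()))
labels = np.repeat(list(features), [len(v) for v in features.values()])

# Embed all datasets jointly so their relative positions are comparable.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(all_feats)

for name in features:
    pts = emb[labels == name]
    plt.scatter(pts[:, 0], pts[:, 1], s=5, label=name)
plt.legend()
plt.savefig("tsne_domains.png")
```

Embedding the datasets in a single t-SNE run (rather than one run per dataset) is what makes cluster separation interpretable as domain shift.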
R4: ADAPTER PERFORMANCE COMPARISON ON BUSI DATASET.
The table shows that both UniUSNet w/o prompt and UniUSNet outperform the Single model, demonstrating better generalization and the effectiveness of prompts. In addition, the Adapter setup, which fine-tunes only a small fraction of the parameters, surpasses the Scratch setup, showing that our model adapts efficiently to new datasets.
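The Adapter setup can be sketched as follows: the pretrained backbone is frozen and only small bottleneck adapters are trained on the new dataset. The module shapes and the stand-in "pretrained layer" below are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical adapter fine-tuning sketch (bottleneck adapter + frozen backbone).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

dim = 256
block = nn.Sequential(nn.Linear(dim, dim), nn.GELU())  # stands in for a pretrained layer
for p in block.parameters():
    p.requires_grad = False  # freeze the pretrained weights

adapter = Adapter(dim)  # only these parameters are trained on BUSI
x = torch.randn(8, dim)
y = adapter(block(x))

trainable = sum(p.numel() for p in adapter.parameters())
frozen = sum(p.numel() for p in block.parameters())
print(f"trainable adapter params: {trainable}, frozen backbone params: {frozen}")
```

The residual connection lets an adapter initialized near zero start as an identity map, so fine-tuning begins from the pretrained model's behavior rather than disrupting it.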
UniUSNet: A Promptable Framework for Universal Ultrasound Disease Prediction and Tissue
Segmentation.
BIBM, 2024
We provide a detailed data-processing pipeline for the BroadUS-9.7K dataset (link), as well as a data demo for checking that the data format is prepared properly and for quickly starting experiments or inference (link).
Pretrained models can be downloaded here (link).
Webpage template modified from Richard Zhang.