Crack detection and rehabilitation are critical components of a pavement's life cycle. Various detection methods have been developed, among which deep-learning approaches for classification, object detection, and segmentation have been revolutionary. Segmentation models enable pixel-wise delineation of crack networks, which supports quantifying distress severity, defect type, and condition index. However, supervised segmentation algorithms require substantial amounts of pixel-accurate ground truth labels, which are challenging to obtain. Additionally, current models exhibit limited generalizability to unseen data, with state-of-the-art models performing inadequately on low-severity cracks. This article therefore presents a novel crack segmentation approach that leverages Meta's Segment Anything Model (SAM) and low-cost ground truths. We fine-tune SAM using box, point, and text prompts, enhancing the model's generalizability and improving crack fidelity. The model achieves a 94% F1 score on the authors' dataset, and 91% and 77% F1 scores on the Fully Convolutional Network (FCN) dataset and the Crack Forest Dataset (CFD), respectively. Our approach outperforms the U-Net, DeepLabV3+, and TransUNet models on the FCN dataset and achieves comparable performance on the CFD dataset. Exploring different loss combinations during training reveals that a dice and binary cross-entropy loss combination does not significantly outperform a dice and focal loss combination. The use of text prompts to query the images is also examined; although initial results look promising, their segmentation and classification accuracies remain comparatively low.
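
The abstract does not include implementation details, so the following is only a minimal sketch of prompted SAM inference using Meta's open-source segment_anything package; the checkpoint file name, image path, and prompt coordinates are illustrative placeholders, not values from the paper. Note also that the released SAM accepts point and box prompts natively, whereas text prompting (as explored in the article) typically requires an auxiliary model that the abstract does not specify.

```python
# Minimal sketch of box- and point-prompted SAM inference with Meta's
# segment_anything package. Checkpoint, image path, and coordinates are
# illustrative placeholders, not values reported in the paper.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Load a pavement image as RGB and embed it once.
image = cv2.cvtColor(cv2.imread("pavement.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A box roughly bounding the crack, plus one foreground point on it.
box = np.array([50, 60, 480, 400])    # [x0, y0, x1, y1]
points = np.array([[260, 230]])       # (x, y) on the crack
labels = np.array([1])                # 1 = foreground point

masks, scores, _ = predictor.predict(
    point_coords=points,
    point_labels=labels,
    box=box,
    multimask_output=False,
)
crack_mask = masks[0]  # boolean HxW segmentation mask
```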
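The two loss combinations compared in the article (dice + binary cross-entropy versus dice + focal) can be sketched in PyTorch as below. The equal 1:1 weighting and the focal parameters (alpha = 0.25, gamma = 2.0) are common defaults assumed here for illustration, not values reported in the abstract; `logits` and `targets` are assumed to be float tensors of shape (N, 1, H, W).

```python
# Sketch of the dice+BCE and dice+focal loss combinations for binary
# crack segmentation. Weights and focal parameters are assumed defaults.
import torch
import torch.nn.functional as F

def dice_loss(logits, targets, eps=1.0):
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()

def dice_bce(logits, targets):
    return dice_loss(logits, targets) + F.binary_cross_entropy_with_logits(logits, targets)

def dice_focal(logits, targets):
    return dice_loss(logits, targets) + focal_loss(logits, targets)
```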


Title:

    Enhanced Crack Segmentation Using Meta’s Segment Anything Model with Low-Cost Ground Truths and Multimodal Prompts


    Additional title:

    Transportation Research Record: Journal of the Transportation Research Board


    Contributors:


Publication date:

    2025-04-04




Type of media:

    Article (Journal)


Type of material:

    Electronic Resource


Language:

    English