In recent years, the number of Autonomous Underwater Vehicles (AUVs) has been growing. These vehicles are powered and controlled by sources located on board. To operate autonomously, underwater robots have to be equipped with different sensors and with software that makes decisions based on the signals from these sensors. The goal of the paper is to present initial research on the recognition of underwater objects in video images. Following several examples from the literature, the object recognition algorithm proposed in the paper is based on a deep neural network. In the research, the network and training algorithms available in Matlab were used. The final software will be implemented on board the Biomimetic Autonomous Underwater Vehicle (BAUV), which is driven by undulating propulsion imitating the oscillating motion of fins, e.g. those of a fish.
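
The abstract describes a transfer-learning approach: a pretrained AlexNet is adapted to the underwater object classes and retrained with tools available in Matlab. As a rough illustration of that idea only, the sketch below uses PyTorch/torchvision instead of the paper's Matlab tooling; the number of classes, the choice to freeze the convolutional layers, and the training hyperparameters are assumptions, not values taken from the paper.

    # Minimal sketch of the transfer-learning idea described in the abstract,
    # written in PyTorch rather than the Matlab environment the authors used.
    # NUM_CLASSES and all hyperparameters are hypothetical placeholders.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 5  # assumed number of underwater object categories

    # Load AlexNet pretrained on ImageNet.
    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

    # Freeze the convolutional feature extractor; only the classifier is retrained.
    for param in model.features.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer (1000 ImageNet classes)
    # with one sized for the underwater object categories.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

    # Optimize only the parameters left unfrozen.
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
    )
    criterion = nn.CrossEntropyLoss()

    # One training step on a dummy batch (224x224 RGB images, as AlexNet expects).
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, NUM_CLASSES, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

In the paper's setup the same replace-the-last-layer-and-retrain step would be carried out with the corresponding Matlab tooling rather than PyTorch.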





    Title:

    Using Pretrained AlexNet Deep Learning Neural Network for Recognition of Underwater Objects


    Contributors:

    Publication date:

    01.01.2020


    Notes:

    NAŠE MORE : znanstveni časopis za more i pomorstvo ; ISSN 0469-6255 (Print) ; ISSN 1848-6320 (Online) ; Volume 67 ; Issue 1


    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English



    Classification:

    DDC: 629




    AlexNet-Based Insulator Self-Explosion Recognition Method

    Li Yingguo / Chen Junji / Zhou Jie et al. | European Patent Office | 2021

    Open access

    Assessing Deep Learning Model Using AlexNet for Water Traffic Counting in Martapura River

    Saubari, Nahdi / Kunfeng, Wang | Springer Verlag | 2023

    Open access

    Convolutional Neural Network GNSS-R Sea Ice Detection Based on AlexNet Model

    Zhihao, Jiang / Yuan, Hu / Xintai, Yuan et al. | Springer Verlag | 2022


    Efficient learning of robust quadruped bounding using pretrained neural networks

    Wang, Z / Li, A / Zheng, Y et al. | BASE | 2022

    Open access