We propose a convolutional network with hierarchical classifiers for per-pixel semantic segmentation, which can be trained on multiple, heterogeneous datasets and exploit their semantic hierarchy. Our network is the first to be simultaneously trained on three different datasets from the intelligent vehicles domain, i.e. Cityscapes, GTSDB and Mapillary Vistas, and is able to handle different semantic levels of detail, class imbalances, and different annotation types, i.e. dense per-pixel and sparse bounding-box labels. We assess our hierarchical approach by comparing against flat, non-hierarchical classifiers, and show improvements in mean pixel accuracy of 13.0% for Cityscapes classes, 2.4% for Vistas classes, and 32.3% for GTSDB classes. Our implementation achieves inference rates of 17 fps at a resolution of 520 x 706 for 108 classes running on a GPU.
Training of Convolutional Networks on Multiple Heterogeneous Datasets for Street Scene Semantic Segmentation
2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1045-1050
2018-06-01
3719414 bytes
Conference paper
Electronic Resource
English
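The abstract describes a hierarchical per-pixel classifier whose root predicts superclasses and whose child heads predict subclasses within each superclass. The sketch below is only a rough illustration of that idea under assumed details, not the authors' implementation: the superclass/subclass counts, channel sizes, and module names are hypothetical, and PyTorch is assumed as the framework.

import torch
import torch.nn as nn

class HierarchicalHead(nn.Module):
    """Hierarchical classification head over per-pixel features (N, C, H, W)."""
    def __init__(self, in_channels, subclasses_per_super):
        super().__init__()
        # Root classifier over superclasses (e.g. "flat", "vehicle", "traffic sign").
        self.root = nn.Conv2d(in_channels, len(subclasses_per_super), kernel_size=1)
        # One subclass classifier per superclass, applied to the same features.
        self.child_heads = nn.ModuleList(
            [nn.Conv2d(in_channels, k, kernel_size=1) for k in subclasses_per_super]
        )

    def forward(self, feats):
        # Per-pixel superclass probabilities.
        p_super = torch.softmax(self.root(feats), dim=1)
        leaf_probs = []
        for i, head in enumerate(self.child_heads):
            # Leaf probability = P(superclass) * P(subclass | superclass).
            p_sub = torch.softmax(head(feats), dim=1)
            leaf_probs.append(p_super[:, i:i + 1] * p_sub)
        # Concatenated leaf probabilities sum to 1 at every pixel.
        return torch.cat(leaf_probs, dim=1)

# Usage with three hypothetical superclasses holding 5, 4 and 3 subclasses:
head = HierarchicalHead(in_channels=256, subclasses_per_super=[5, 4, 3])
feats = torch.randn(1, 256, 65, 88)   # example backbone feature map
leaf = head(feats)                    # shape (1, 12, 65, 88)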
Semantic segmentation of UAV image using combined U-net and heterogeneous UAV imagery datasets
British Library Conference Proceedings | 2022
Semantic Segmentation for Traffic Scene Understanding Based on Mobile Networks
SAE Technical Papers | 2018
Semantic Segmentation for Traffic Scene Understanding Based on Mobile Networks
British Library Conference Proceedings | 2018
Semantic video scene segmentation and transfer
British Library Online Contents | 2014