DenseU-Net-Based Semantic Segmentation of Small Objects in Urban Remote Sensing Images

Class imbalance is a serious problem that plagues the semantic segmentation task in urban remote sensing images. Because large object classes dominate the segmentation task, small object classes are usually suppressed, so solutions based on optimizing the overall accuracy alone are often unsatisfactory. To address the class imbalance in semantic segmentation of urban remote sensing images, we developed the concept of a Down-sampling Block (DownBlock) for obtaining context information and an Up-sampling Block (UpBlock) for restoring the original resolution.
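
A PyTorch sketch of what such blocks might look like is given below. The article does not describe their internal layer composition, so every concrete choice here (3x3 convolutions, batch normalization, max pooling, transposed convolution) is an illustrative assumption rather than the authors' implementation.

```python
# Hypothetical DownBlock/UpBlock sketch in PyTorch; layer choices are assumptions,
# not the reference DenseU-Net implementation.
import torch
import torch.nn as nn


class DownBlock(nn.Module):
    """Down-sampling block: convolves to gather context, then halves the resolution."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)  # halves spatial resolution

    def forward(self, x):
        features = self.conv(x)          # detail features kept for the skip connection
        return self.pool(features), features


class UpBlock(nn.Module):
    """Up-sampling block: restores resolution and fuses shallow detail features
    with deep semantic features by channel-wise concatenation."""

    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                   # restore spatial resolution
        x = torch.cat([x, skip], dim=1)  # concatenate shallow and deep features
        return self.conv(x)
```

The concatenation in UpBlock is the cascade-style fusion of shallow detail features with deep semantic features described in the next paragraph.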

We proposed an end-to-end deep convolutional neural network (DenseU-Net) architecture for pixel-wise urban remote sensing image segmentation. The main idea of DenseU-Net is to connect convolutional neural network features through cascade operations and to use its symmetrical structure to fuse the detail features in shallow layers with the abstract semantic features in deep layers. We also proposed a focal loss function weighted by median frequency balancing (MFB_Focalloss); with this approach, both the accuracy of the small object classes and the overall accuracy are improved effectively.
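
As a rough illustration, the sketch below combines the standard focal loss with per-class weights obtained by median frequency balancing. The gamma value, the simplified frequency counting, and the exact way the weights enter the loss are assumptions on our part; the article does not spell out these details.

```python
# Hypothetical MFB-weighted focal loss sketch; hyperparameters and weighting
# details are assumptions, not the authors' exact formulation.
import torch
import torch.nn.functional as F


def median_frequency_weights(label_maps, num_classes):
    """Per-class weights w_c = median(freq) / freq_c, computed from training labels.

    This is a simplified global pixel-frequency count; the canonical MFB definition
    normalizes by the images in which each class actually appears.
    """
    counts = torch.zeros(num_classes)
    for labels in label_maps:
        counts += torch.bincount(labels.flatten(), minlength=num_classes).float()
    freqs = counts / counts.sum()
    return freqs.median() / freqs.clamp(min=1e-12)


def mfb_focal_loss(logits, targets, class_weights, gamma=2.0):
    """Per-pixel focal loss, with each pixel scaled by the MFB weight of its true class.

    logits:  (N, C, H, W) raw network outputs
    targets: (N, H, W) integer ground-truth labels
    """
    log_probs = F.log_softmax(logits, dim=1)                       # (N, C, H, W)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t per pixel
    pt = log_pt.exp()
    w = class_weights.to(logits.device)[targets]                   # MFB weight per pixel
    loss = -w * (1.0 - pt) ** gamma * log_pt                       # focal modulation
    return loss.mean()
```

Because each class weight is inversely proportional to its pixel frequency, rare classes such as “car” contribute more to the loss, while the (1 - p_t)^gamma term down-weights pixels the network already classifies confidently; together these counteract the class imbalance described above.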

Our experiments were based on the 2016 ISPRS Vaihingen 2D semantic labeling dataset and demonstrated the following outcomes. In the case where boundary pixels were considered (GT), MFB_Focalloss achieved a good overall segmentation performance using the same U-Net model, and the F1-score of the small object class “car” was improved by 9.28% compared with the cross-entropy loss function.

Using the same MFB_Focalloss loss function, the overall accuracy of the DenseU-Net was better than that of U-Net, and the F1-score of the “car” class was 6.71% higher. Finally, without any post-processing, DenseU-Net+MFB_Focalloss achieved an overall accuracy of 85.63%, and the F1-score of the “car” class was 83.23%, which is superior to HSN+OI+WBP both numerically and visually.
