Multi-expert ensemble models for long-tailed learning typically either learn diverse generalists from the whole dataset or aggregate specialists trained on different subsets. However, the former is insufficient for tail classes due to the high imbalance factor of the entire dataset, while the latter may introduce ambiguity when predicting unseen classes. To address these issues, we propose a novel Local and Global Logit Adjustments (LGLA) method that learns experts with full data covering all classes and enlarges the discrepancy among them through carefully designed logit adjustments. LGLA consists of two core components: a Class-aware Logit Adjustment (CLA) strategy and an Adaptive Angular Weighted (AAW) loss. The CLA strategy trains multiple experts, each excelling at a different subset, using Local Logit Adjustment (LLA). It also trains one expert specializing in an inversely long-tailed distribution through Global Logit Adjustment (GLA). Moreover, the AAW loss adopts adaptive hard sample mining with respect to different experts to further improve accuracy. Extensive experiments on popular long-tailed benchmarks demonstrate the superiority of LGLA over state-of-the-art methods.
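The abstract does not spell out the adjustment formulas. As a rough illustration of the general idea such logit adjustments build on, the sketch below shifts an expert's logits by a scaled log class prior before applying cross-entropy; the function name prior_adjusted_cross_entropy, the tau parameter, and the way the priors are constructed are illustrative assumptions, not the paper's actual LLA/GLA or AAW definitions.

import torch
import torch.nn.functional as F

def prior_adjusted_cross_entropy(logits, targets, class_counts, tau=1.0):
    # Shift each logit by a scaled log class prior so that rare classes
    # receive a larger effective margin during training (generic
    # logit-adjustment idea; not the paper's exact formulation).
    prior = class_counts.float() / class_counts.sum()
    adjusted_logits = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted_logits, targets)

# Illustrative usage (all values hypothetical):
# counts taken over a local subset give an LLA-style expert prior, while an
# inverted count vector (e.g. counts.sum() - counts) would bias an expert
# toward an inversely long-tailed distribution, in the spirit of GLA.
class_counts = torch.tensor([500, 100, 10])   # head, medium, tail class counts
logits = torch.randn(4, 3)                    # batch of 4 samples, 3 classes
targets = torch.tensor([0, 1, 2, 2])
loss = prior_adjusted_cross_entropy(logits, targets, class_counts)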
Local and Global Logit Adjustments for Long-Tailed Learning
01.10.2023
2412051 bytes
Conference paper
Electronic resource
English
Wiley | 1993
Mixed Logit (or Logit Kernel) Model: Dispelling Misconceptions of Identification | Online Contents | 2002
German Equipment - Long Term Pressures, Long Term Adjustments | British Library Online Contents | 1994
Transportation Research Record | 2009