While steady progress has been made in visual perception, performance is mainly benchmarked under fair weather and lighting conditions. Even the best-performing algorithms on existing benchmarks can become untrustworthy in unseen domains or under adverse weather/lighting conditions. The ability to cope robustly with such conditions is essential for outdoor applications such as autonomous driving. In this talk, I will present our work on semantic scene understanding under adverse weather/lighting conditions and in general unseen domains. This work covers multiple contributions: weather phenomenon simulation, curriculum domain adaptation, reference-guided learning, supervision fusion, sensor fusion, and supervision distillation. All of our methods contribute towards the goal of all-season perception and have achieved state-of-the-art performance for semantic scene understanding under adverse weather/lighting conditions and in the synthetic-to-real cross-domain setting.