This study aimed to identify risk factors contributing to pedestrian injury by integrating socio-spatial and street-level contexts through multimodal deep learning, overcoming the limitation of existing studies that consider only one type of data. To investigate how the two contexts jointly describe pedestrian injury risk, six multimodal deep learning models were established by varying the ratio at which the two contexts were integrated. The best-performing model was interpreted using two XAI methods: SHAP for the socio-spatial context and Grad-CAM for the street-level context. The results indicated that the street-level context contributes most to the pedestrian injury risk level, supplemented by the socio-spatial context, which cannot be captured at the street level.
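For illustration only (this is not the authors' published architecture), the sketch below shows one way such a fusion could be set up in PyTorch: a small CNN branch for street-view imagery, an MLP branch for tabular socio-spatial features, and a `fusion_ratio` hyperparameter that controls the share of the joint embedding allocated to each context before a joint classifier. All module names, layer sizes, and the ratio mechanism are assumptions for demonstration.

```python
# Illustrative sketch only; layer sizes and the fusion_ratio mechanism are
# assumptions, not the study's published model.
import torch
import torch.nn as nn


class MultimodalRiskModel(nn.Module):
    """Fuse a street-view image branch with a tabular socio-spatial branch.

    fusion_ratio in [0, 1] sets the share of the joint embedding devoted to
    the street-level (image) context; the remainder goes to the socio-spatial
    (tabular) context.
    """

    def __init__(self, n_tabular: int, n_classes: int = 3,
                 embed_dim: int = 128, fusion_ratio: float = 0.5):
        super().__init__()
        img_dim = max(1, int(embed_dim * fusion_ratio))   # street-level share
        tab_dim = max(1, embed_dim - img_dim)             # socio-spatial share

        # Street-level branch: small CNN over street-view images.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, img_dim),
        )
        # Socio-spatial branch: MLP over tabular features.
        self.tabular_branch = nn.Sequential(
            nn.Linear(n_tabular, 64), nn.ReLU(),
            nn.Linear(64, tab_dim),
        )
        # Joint classifier over the concatenated embeddings.
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + tab_dim, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.image_branch(image), self.tabular_branch(tabular)], dim=1)
        return self.classifier(z)


# Example: six model variants obtained by sweeping the fusion ratio.
if __name__ == "__main__":
    for ratio in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
        model = MultimodalRiskModel(n_tabular=20, fusion_ratio=ratio)
        out = model(torch.randn(4, 3, 64, 64), torch.randn(4, 20))
        print(ratio, out.shape)  # torch.Size([4, 3])
```

In such a setup, the image branch could then be inspected with Grad-CAM and the tabular branch with SHAP, mirroring the two XAI methods named above.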
This study developed two prediction models for urban fire occurrence and related casualties using a fire accident dataset from Seoul, South Korea, covering 2017 to 2021. Our models improve predictive performance by incorporating built environment features, such as building characteristics and the urban context, alongside weather and demographic data, making them suitable for public health applications. Compared with the weather- and demographic-only models, our models achieved 18.1% higher fire occurrence prediction accuracy and 10.4% higher casualty prediction accuracy. Major variables affecting fire occurrence include building characteristics, e.g., the floor area ratio (FAR), building age, and the number of commercial buildings.
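As an illustration only (the study's actual pipeline, model choice, and data are not reproduced here), the sketch below contrasts a weather- and demographic-only baseline with a model that additionally includes built environment features such as FAR, building age, and commercial building counts, using a gradient boosting classifier on synthetic placeholder data. The feature names and the synthetic dataset are assumptions.

```python
# Illustrative comparison only; feature set, model choice, and data are
# assumptions standing in for the study's Seoul fire dataset and pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic placeholder features (not real Seoul data).
weather_demo = rng.normal(size=(n, 4))        # e.g., temperature, humidity, wind, population density
built_env = rng.normal(size=(n, 3))           # e.g., FAR, building age, commercial building count
# Toy fire-occurrence label influenced by both feature groups.
logits = weather_demo[:, 0] + 1.5 * built_env[:, 0] + built_env[:, 1]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_base = weather_demo                          # weather- and demographic-only model
X_full = np.hstack([weather_demo, built_env])  # + built environment features

for name, X in [("baseline", X_base), ("with built environment", X_full)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```

The design point is the feature-group comparison itself: holding the learner fixed and adding the built environment block isolates how much of the reported accuracy gain is attributable to those features.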