Sanghwan Kim


Zürich, Switzerland

I’m currently pursuing my Master’s degree at ETH Zürich. My academic journey has taken me across Switzerland and South Korea, where I’ve gained extensive experience in machine learning and artificial intelligence. In 2020, I obtained my Bachelor’s degree in Electrical Engineering from KAIST.

My research interests revolve around the broad topic of machine perception, especially at the intersection of computer vision and natural language processing.

Currently, I am working on video understanding guided by large language models and vision-language models at the AIT lab, ETH Zürich. My recent contributions have been recognized through full paper publications, where I’ve tackled diverse challenges. These include (1) accelerating the sampling process of diffusion models through simple ODE solver distillation, (2) enhancing radiology report generation models by crafting rule-based labelers, and (3) introducing a novel approach to continual learning that achieves a better balance between stability and plasticity.

Selected publications

2023

  1. LALM: Long-Term Action Anticipation with Language Models
    Sanghwan Kim, Daoji Huang, Yongqin Xian, and 3 more authors
    arXiv preprint arXiv:2311.17944, 2023
  2. Distilling ODE Solvers of Diffusion Models into Smaller Steps
    Sanghwan Kim, Hao Tang, and Fisher Yu
    arXiv preprint arXiv:2309.16421, 2023
  3. Boosting Radiology Report Generation by Infusing Comparison Prior
    Sanghwan Kim, Farhad Nooralahzadeh, Morteza Rohanian, and 5 more authors
    BioNLP Workshop at the Association for Computational Linguistics (ACL’23), 2023
  4. Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning
    Sanghwan Kim, Lorenzo Noci, Antonio Orvieto, and 1 more author
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023