About

I am a postdoctoral researcher at Texas A&M University, working with Prof. Yang Shen. My research aims to build foundation models for biology and chemistry that understand and integrate multiple data modalities, such as graphs and text. The goal of my work is to accelerate scientific discovery by enabling AI systems to reason over the complex relational and hierarchical structures found in biological, chemical, and biomedical domains.

During my Ph.D. at the KAIST School of Computing, advised by Prof. Alice Oh, I studied representation learning for structured data, focusing on the intersection of graph neural networks (GNNs) and large language models (LLMs). My Ph.D. research developed graph representation learning methods that leverage pairwise and higher-order interactions in graph-structured data, including edges [C2], partial subgraphs [C3], subgraphs [C4], and k-hop subgraphs [C6].

Selected Publications (See all)

Jiseon Kim, Dongkwan Kim, Joohye Jeong, Alice Oh, In Song Kim. "Measuring Interest Group Positions on Legislation: An AI-Driven Analysis of Lobbying Reports", arXiv, 2025

Education

  • Ph.D., School of Computing, KAIST, Aug 2025
  • M.S., School of Computing, KAIST, Aug 2019
  • B.S., Major in Computer Science and Minor in Chemistry, KAIST, Feb 2018

Talks & Presentations

Dongkwan Kim. "Graph Representation Learning with Local Structures, Higher Order Interactions, and Knowledge Augmentation", Shen-Lab @ TAMU, 2025

Dongkwan Kim. "Salad-Bowl-LLM: Multi-Culture LLMs by Mixed In-Context Demonstrations", International NLP Workshop at KAIST 2024, 2024

Dongkwan Kim. "Leveraging Structure for Graph Neural Networks", IBS Data Science Group Seminar, 2022

Dongkwan Kim. "How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision", KAIST AI Workshop 21/22, 2022

Dongkwan Kim. "How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision", Learning on Graphs and Geometry Reading Group (LoGaG), 2021

Academic Services

Teaching Experiences

  • TA of AI Tech Boostcamp at NAVER Connect Foundation (Fall 2024)
  • Head TA and TA of Machine Learning for Natural Language Processing (Spring 2021, Fall 2019); Best TA Award in Fall 2019
  • Head TA of Deep Learning for Real-world Problems (Fall 2020, Spring 2020); Best TA Award in Spring 2020
  • TA and Head TA of Data Structure (Fall 2018, Spring 2018)

Open Source Contributions