Jooyeon Kim, Dongkwan Kim, and Alice Oh. International Conference on Web Search and Data Mining (WSDM), 2019
Yeon Seonwoo, Sungjoon Park, Dongkwan Kim, and Alice Oh. Workshop on Noisy User-generated Text at EMNLP (EMNLP W-NUT), 2019
Dongkwan Kim and Alice Oh. Workshop on Graph Representation Learning at NeurIPS (NeurIPS GRL), 2019
Note: The full version is published at ICLR 2021.
Dongkwan Kim and Alice Oh. International Conference on Learning Representations (ICLR), 2021
One-sentence Summary: We propose a method that self-supervises graph attention through edges, and show that it should be designed according to the average degree and homophily of graphs.
Dongkwan Kim and Alice Oh. Workshop on Geometrical and Topological Representation Learning at ICLR (ICLR GTRL), 2022
One-sentence Summary: We propose Subgraph-To-Node translation to efficiently learn representations of subgraphs by coarsely translating subgraphs into nodes.
Dongkwan Kim, Jiho Jin, Jaimeen Ahn, and Alice Oh. International Conference on Information and Knowledge Management (CIKM, Short Papers Track), 2022
Dongkwan Kim and Alice Oh. International Conference on Machine Learning (ICML), 2024
One-sentence Summary: We propose Subgraph-To-Node translation to effectively and efficiently learn representations of subgraphs by coarsely translating subgraphs into nodes.
Dongkwan Kim and Alice Oh. arXiv, 2024
One-sentence Summary: We propose WLKS, which extends the WL graph kernel to subgraphs and captures high-order structural similarities in k-hop neighborhoods, efficiently outperforming state-of-the-art GNNs.
Chani Jung, Dongkwan Kim, Jiho Jin, Jiseon Kim, Yeon Seonwoo, Yejin Choi, Alice Oh, and Hyunwoo Kim. Empirical Methods in Natural Language Processing (EMNLP), 2024
One-sentence Summary: We assess key precursors of Theory of Mind (ToM) in LLMs using perception-augmented ToM benchmarks, and propose PercepToM, a ToM method inspired by our findings that models are strong at perception inference but weak at perception-to-belief inference.
Dongkwan Kim, Junho Myung, and Alice Oh. Workshop on Socially Responsible Language Modelling Research at NeurIPS (NeurIPS SoLaR), 2024
One-sentence Summary: We explore the use of in-context learning with diverse demonstrations to enhance Large Language Models’ cultural understanding.