Research

Preprints     -     Publications     -     Google Scholar Profile (up to date)

Preprints

“Balanced Data, Imbalanced Spectra: Unveiling Class Disparities with Spectral Imbalance” arXiv
    🔥To appear at the International Conference on Machine Learning (ICML) (2024, July)
    👤Authors: Chiraag Kaushik¹, Ran Liu¹, Chi-Heng Lin, Amrit Khera, Matthew Y Jin, Wenrui Ma, Vidya Muthukumar, Eva L Dyer.
    🔑Keywords: Evaluating representations, learning theory, large pre-trained models, computer vision.
    ¹ Equal contribution, names ordered alphabetically.

“Frequency-Aware Masked Autoencoders for Multimodal Pretraining on Biosignals” arXiv
    🔥To appear at the International Conference on Learning Representations (ICLR Time Series for Health) (2024, May)
    👤Authors: Ran Liu, Ellen L. Zippi, Hadi Pouransari, Chris Sandino, Jingping Nie, Hanlin Goh, Erdrin Azemi, Ali Moin.
    🔑Keywords: Multimodal learning, pretraining, biosignals.

“GAFormer: Enhancing Timeseries Transformers Through Group-Aware Embeddings” OpenReview
    🔥To appear at the International Conference on Learning Representations (ICLR) (2024, May)
    👤Authors: Jingyun Xiao, Ran Liu, Eva L. Dyer.
    🔑Keywords: Timeseries modeling, transformers, interpretable group structure.

“Your contrastive learning problem is secretly an alignment problem”
    👤Authors: Zihao Chen, Chi-Heng Lin, Ran Liu, Jingyun Xiao, Eva L Dyer.
    🔑Keywords: Contrastive learning, optimal transport, representation learning, computer vision.

Publications

Conference Proceedings

[C10] “Balanced Data, Imbalanced Spectra: Unveiling Class Disparities with Spectral Imbalance” arXiv
    🔥To appear at the International Conference on Machine Learning (ICML) (2024, July)
    👤Authors: Chiraag Kaushik¹, Ran Liu¹, Chi-Heng Lin, Amrit Khera, Matthew Y Jin, Wenrui Ma, Vidya Muthukumar, Eva L Dyer.
    🔑Keywords: Evaluating representations, learning theory, large pre-trained models, computer vision.
    ¹ Equal contribution, names ordered alphabetically.

[C9] “GAFormer: Enhancing Timeseries Transformers Through Group-Aware Embeddings” OpenReview
    🔥To appear at the International Conference on Learning Representations (ICLR) (2024, May)
    👤Authors: Jingyun Xiao, Ran Liu, Eva L. Dyer.
    🔑Keywords: Timeseries modeling, transformers, interpretable group structure.

[C8] “LatentDR: Improving Model Generalization Through Sample-Aware Latent Degradation and Restoration” arXiv
    The IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2024, January)
    👤Authors: Ran Liu, Sahil Khose, Jingyun Xiao, Lakshmi Sathidevi, Keerthan Ramnath, Zsolt Kira, Eva L. Dyer.
    🔑Keywords: Domain generalization, data augmentation, computer vision.

[C7] “Half-Hop: A graph upsampling approach for slowing down message passing” OpenReview
    The International Conference on Machine Learning (ICML) (2023, June)
    👤Authors: Mehdi Azabou, Venkataramana Ganesh, Shantanu Thakoor, Chi-Heng Lin, Lakshmi Sathidevi, Ran Liu, Michal Valko, Petar Veličković, Eva L. Dyer.
    🔑Keywords: Data augmentation, graph, self-supervised learning.

[C6] “Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers” arXiv
    Advances in Neural Information Processing Systems 35 (NeurIPS) (2022, December)
    👤Authors: Ran Liu, Mehdi Azabou, Max Dabagia, Jingyun Xiao, and Eva L Dyer.
    🔑Keywords: Transformer, multi-channel time-series, neural decoding, domain generalization.

[C5] “MTNeuro: A Benchmark for Evaluating Representations of Brain Structure Across Multiple Levels of Abstraction” OpenReview
    Advances in Neural Information Processing Systems 35 (NeurIPS Datasets and Benchmarks) (2022, December)
    👤Authors: Jorge Quesada, Lakshmi Sathidevi, Ran Liu, Nauman Ahad, Joy M Jackson, Mehdi Azabou, Jingyun Xiao, Chris Liding, Carolina Urzay, William Gray-Roncal, Erik Christopher Johnson, Eva L Dyer.
    🔑Keywords: Representation learning, multi-task learning, new datasets.

[C4] “Building representations of different brain areas through hierarchical point cloud networks” OpenReview, Talk
    Medical Imaging with Deep Learning (MIDL) (2022, April)
    👤Authors: Joy M Jackson, Ran Liu, Eva L Dyer.
    🔑Keywords: Representation learning, point cloud, image classification.

[C3] “Drop, swap, and generate: A self-supervised approach for generating neural activity” CameraReady, arXiv, Talk
    Advances in Neural Information Processing Systems 34 (NeurIPS) Oral presentation (top 1%) (2021, December)
    👤Authors: Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith Hengen, Michal Valko, Eva L Dyer.
    🔑Keywords: Self-supervision, generative learning, neural decoding, data augmentation.

[C2] “Multi-Scale Modeling of Neural Structure in X-Ray Imagery” CameraReady
    IEEE International Conference on Image Processing (ICIP) (2021, September)
    👤Authors: Aishwarya Balwani, Joseph Miano, Ran Liu, Lindsey Kitchell, Judy A Prasad, Erik C Johnson, William Gray-Roncal, Eva L Dyer.
    🔑Keywords: Multi-task learning, image segmentation, 3D reconstruction.

[C1] “A generative modeling approach for interpreting population-level variability in brain structure” CameraReady, bioRxiv
    International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) (2020, October)
    👤Authors: Ran Liu, Cem Subakan, Aishwarya H Balwani, Jennifer Whitesell, Julie Harris, Sanmi Koyejo, Eva L Dyer.
    🔑Keywords: Generative learning, interpretability, image synthesis.

Workshops and Posters

[W3] “Frequency-Aware Masked Autoencoders for Multimodal Pretraining on Biosignals” arXiv
    🔥To appear at the International Conference on Learning Representations (ICLR Time Series for Health) (2024, May)
    👤Authors: Ran Liu, Ellen L. Zippi, Hadi Pouransari, Chris Sandino, Jingping Nie, Hanlin Goh, Erdrin Azemi, Ali Moin.
    🔑Keywords: Multimodal learning, pretraining, biosignals.

[W2] “Using self-supervision and augmentations to build insights into neural coding” CameraReady
    NeurIPS 2021 Workshop: Self-Supervised Learning - Theory and Practice (2021, December)
    👤Authors: Mehdi Azabou, Max Dabagia, Ran Liu, Chi-Heng Lin, Keith B Hengen, Eva L Dyer.
    🔑Keywords: Neural decoding, self-supervision, data augmentation.

[W1] “Mine your own view: A self-supervised approach for learning representations of neural activity” CameraReady
    NeurIPS 2021 Workshop: Self-Supervised Learning - Theory and Practice (2021, December)
    👤Authors: Mehdi Azabou, Mohammad Gheshlaghi Azar, Ran Liu, Chi-Heng Lin, Erik C Johnson, Kiran Bhaskaran-Nair, Max Dabagia, Bernardo Avila-Pires, Lindsey Kitchell, Keith B Hengen, William Gray-Roncal, Michal Valko, Eva L Dyer.
    🔑Keywords: Representation learning, self-supervision, neural decoding.

Journal Articles

[J3] “Proximity-induced surface superconductivity in Dirac semimetal Cd3As2” CameraReady
    Nature Communications (2019, May)
    👤Authors: Ce Huang, Benjamin T Zhou, Huiqin Zhang, Bingjia Yang, Ran Liu, Hanwen Wang, Yimin Wan, Ke Huang, Zhiming Liao, Enze Zhang, Shanshan Liu, Qingsong Deng, Yanhui Chen, Xiaodong Han, Jin Zou, Xi Lin, Zheng Han, Yihua Wang, Kam Tuen Law, Faxian Xiu.

[J2] “Quantum Hall effect based on Weyl orbits in Cd3As2” CameraReady
    Nature (2019, January)
    👤Authors: Cheng Zhang, Yi Zhang, Xiang Yuan, Shiheng Lu, Jinglei Zhang, Awadhesh Narayan, Yanwen Liu, Huiqin Zhang, Zhuoliang Ni, Ran Liu, Eun Sang Choi, Alexey Suslov, Stefano Sanvito, Li Pi, Hai-Zhou Lu, Andrew C Potter, Faxian Xiu.

[J1] “Inducing Strong Superconductivity in WTe2 by a Proximity Effect” CameraReady
    ACS Nano (2018, June)
    👤Authors: Ce Huang, Awadhesh Narayan, Enze Zhang, Yanwen Liu, Xiao Yan, Jiaxiang Wang, Cheng Zhang, Weiyi Wang, Tong Zhou, Changjiang Yi, Shanshan Liu, Jiwei Ling, Huiqin Zhang, Ran Liu, Raman Sankar, Fangcheng Chou, Yihua Wang, Youguo Shi, Kam Tuen Law, Stefano Sanvito, Peng Zhou, Zheng Han, Faxian Xiu.