Sang Michael Xie
Other names: Michael Xie
PhD Candidate, Stanford University
Cited by
Combining satellite imagery and machine learning to predict poverty
N Jean, M Burke, M Xie, WM Davis, DB Lobell, S Ermon
Science 353 (6301), 790-794, 2016
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
WILDS: A benchmark of in-the-wild distribution shifts
PW Koh, S Sagawa, H Marklund, SM Xie, M Zhang, A Balsubramani, ...
International Conference on Machine Learning (ICML), 5637-5664, 2021
Transfer learning from deep features for remote sensing and poverty mapping
M Xie, N Jean, M Burke, D Lobell, S Ermon
AAAI, 2016
Adversarial training can hurt generalization
A Raghunathan*, SM Xie*, F Yang, JC Duchi, P Liang
arXiv preprint arXiv:1906.06032, 2019
Weakly supervised deep learning for segmentation of remote sensing imagery
S Wang, W Chen, SM Xie, G Azzari, DB Lobell
Remote Sensing 12 (2), 207, 2020
Understanding and mitigating the tradeoff between robustness and accuracy
A Raghunathan*, SM Xie*, F Yang, J Duchi, P Liang
International Conference on Machine Learning (ICML), 2020
An Explanation of In-Context Learning as Implicit Bayesian Inference
SM Xie, A Raghunathan, P Liang, T Ma
International Conference on Learning Representations (ICLR), 2022
Holistic evaluation of language models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
arXiv preprint arXiv:2211.09110, 2022
Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance
N Jean*, SM Xie*, S Ermon
Advances in Neural Information Processing Systems (NeurIPS), 2018
Reparameterizable Subset Sampling via Continuous Relaxations
SM Xie, S Ermon
IJCAI, 2019
Extending the WILDS benchmark for unsupervised adaptation
S Sagawa, PW Koh, T Lee, I Gao, SM Xie, K Shen, A Kumar, W Hu, ...
arXiv preprint arXiv:2112.05090, 2021
Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation
K Shen*, R Jones*, A Kumar*, SM Xie*, JZ HaoChen, T Ma, P Liang
arXiv preprint arXiv:2204.00570, 2022
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
C Wei, SM Xie, T Ma
Advances in Neural Information Processing Systems (NeurIPS), 2021
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
SM Xie*, A Kumar*, R Jones*, F Khani, T Ma, P Liang
International Conference on Learning Representations (ICLR), 2021
Incorporating spatial context and fine-grained detail from satellite imagery to predict poverty
JH Kim, M Xie, N Jean, S Ermon
Working paper, Stanford University, 2016
Reward design with language models
M Kwon, SM Xie, K Bullard, D Sadigh
arXiv preprint arXiv:2303.00001, 2023
No true state-of-the-art? OOD detection methods are inconsistent across datasets
F Tajwar, A Kumar, SM Xie, P Liang
arXiv preprint arXiv:2109.05554, 2021
Data selection for language models via importance resampling
SM Xie, S Santurkar, T Ma, P Liang
arXiv preprint arXiv:2302.03169, 2023
Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization
SM Xie, T Ma, P Liang
International Conference on Machine Learning (ICML), 2021