Chawin Sitawarin

chawins AT berkeley DOT edu





CV      Google Scholar

Hello! My name is Chawin Sitawarin. I am a PhD candidate in Computer Science at UC Berkeley, where I am a member of the security group, Berkeley Artificial Intelligence Research (BAIR), and Berkeley DeepDrive (BDD). My advisor is Prof. David Wagner.

I am broadly interested in the security and safety aspects of machine learning. Most of my current and past work is in the domain of adversarial machine learning, particularly adversarial examples and the robustness of machine learning algorithms. If you are wondering why I appear as a panda, give this paper a read.

Previously, I graduated from Princeton University in 2018 where I was very fortunate to be advised by Prof. Prateek Mittal, Prof. Peter Ramadge, and Prof. Alejandro Rodriguez. I was mentored and introduced to adversarial machine learning by Arjun Bhagoji.

I used to keep track of papers on adversarial examples, but I stopped after the number of papers became overwhelming. You can still find the list here (last update: Sep 2019).


May 19, 2022 Our paper, Demystifying the Adversarial Robustness of Random Transformation Defenses, will appear at ICML 2022.
Mar 21, 2022 I will be interning at Google Research during Summer 2022, hosted by Ali Zand and David Tao.
Dec 3, 2021 Our paper, Demystifying the Adversarial Robustness of Random Transformation Defenses, is selected as one of the three best papers at AAAI-2022 AdvML Workshop. [paper] [slides]
Nov 1, 2021 I am starting at Google Research as a part-time student researcher, mentored by Nicholas Carlini.
Oct 1, 2021 Our paper, Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams, will appear at NeurIPS 2021. [paper] [slides]
Sep 15, 2021 Our paper, SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing, will appear at AISec 2021. [paper] [slides]
Aug 30, 2021 Our project on large-scale adversarial patch benchmark is funded by Microsoft-BAIR Commons.
Jul 23, 2021 Our paper, Improving the Accuracy-Robustness Trade-Off for Dual-Domain Adversarial Training, will appear at ICML 2021 Workshop on Uncertainty & Robustness in Deep Learning. [paper]
Jun 8, 2021 I am interning at Nokia Bell Labs (remote) and am very fortunate to be mentored by Anwar Walid.
May 7, 2021 Our paper, Mitigating Adversarial Training Instability with Batch Normalization, will appear at ICLR 2021 Workshop on Security and Safety in Machine Learning Systems. This work is led by Arvind Sridhar, an undergraduate student I mentor at UC Berkeley. [paper]