publications
Publications by category in reverse chronological order.
2023
- [ICML] Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems. Chawin Sitawarin, Florian Tramèr, and Nicholas Carlini. In Proceedings of the 40th International Conference on Machine Learning, Aug 2023.
Decision-based adversarial attacks construct inputs that fool a machine learning model into making targeted mispredictions. For the most part, these attacks have been applied directly to isolated neural network models. However, in practice, machine learning models are just a component of a much larger system, and we find that state-of-the-art query-based attacks are as much as four times less effective at attacking a prediction pipeline with only one preprocessor than attacking the machine learning model alone. We can explain this discrepancy by the fact that most preprocessors introduce some notion of “invariance” to the input space (e.g., with center crop, the prediction is invariant to the border pixels). Hence, attacks that are unaware of this invariance inevitably waste a large number of queries to re-discover or to overcome it. We therefore develop techniques to first reverse-engineer preprocessors and then use the extracted information to attack the end-to-end system. Our extraction method uses only a few hundred queries to learn the preprocessors used by most publicly available models, and our preprocessor-aware attack recovers the same efficacy as attacking the model alone.
@inproceedings{sitawarin_preprocessor_2023, title = {Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems}, booktitle = {Proceedings of the 40th International Conference on Machine Learning}, author = {Sitawarin, Chawin and Tram{\`e}r, Florian and Carlini, Nicholas}, month = aug, year = {2023}, }
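The extraction step can be illustrated with a tiny invariance probe. The sketch below is not the paper's method, only a hedged example of the underlying idea: if a decision-only API ignores the border pixels, the pipeline likely applies a center crop. The function query_label is a hypothetical stand-in for the deployed pipeline, and the candidate crop fractions are arbitrary.

```python
import numpy as np

def query_label(image: np.ndarray) -> int:
    """Hypothetical decision-only API of the deployed pipeline (returns a top-1 label)."""
    raise NotImplementedError

def probe_center_crop(image, crop_fracs=(0.9, 0.8, 0.7), trials=5, seed=0):
    """Guess the crop fraction of a center-crop preprocessor: if randomizing the
    border outside a candidate crop never changes the decision, the pipeline is
    (likely) discarding that border."""
    rng = np.random.default_rng(seed)
    base = query_label(image)
    h, w = image.shape[:2]
    detected = 1.0                                   # 1.0 means "no crop detected"
    for frac in sorted(crop_fracs, reverse=True):    # test looser crops first
        ch, cw = int(h * frac), int(w * frac)
        top, left = (h - ch) // 2, (w - cw) // 2
        border = np.ones((h, w), dtype=bool)
        border[top:top + ch, left:left + cw] = False
        invariant = True
        for _ in range(trials):
            probe = image.copy()
            probe[border] = rng.integers(0, 256, size=(int(border.sum()), image.shape[2]),
                                         dtype=image.dtype)
            if query_label(probe) != base:
                invariant = False
                break
        if invariant:
            detected = frac
    return detected
```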
- [ICLR] Part-Based Models Improve Adversarial Robustness. Chawin Sitawarin, Kornrapat Pongmala, Yizheng Chen, Nicholas Carlini, and David Wagner. In International Conference on Learning Representations, May 2023.
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks by introducing a part-based model for object classification. We believe that the richer form of annotation helps guide neural networks to learn more robust features without requiring more samples or larger models. Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and then classify the segmented object. Empirically, our part-based models achieve both higher accuracy and higher adversarial robustness than a ResNet-50 baseline on all three datasets. For instance, the clean accuracy of our part models is up to 15 percentage points higher than the baseline’s, given the same level of robustness. Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations. The code is publicly available at https://github.com/chawins/adv-part-model.
@inproceedings{sitawarin_partbased_2023, title = {Part-Based Models Improve Adversarial Robustness}, booktitle = {International Conference on Learning Representations}, author = {Sitawarin, Chawin and Pongmala, Kornrapat and Chen, Yizheng and Carlini, Nicholas and Wagner, David}, year = {2023}, url = {https://openreview.net/forum?id=bAMTaeqluh4}, month = may, }
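As a rough illustration of the architecture described above (a part segmenter followed by a tiny classifier, trained end to end), here is a minimal PyTorch sketch. The backbone choice, layer sizes, and loss combination are assumptions made for illustration; the actual implementation is in the repository linked in the abstract.

```python
import torch.nn as nn
import torchvision

class PartModel(nn.Module):
    """Segment the image into K part masks, then classify from the mask scores."""
    def __init__(self, num_parts: int, num_classes: int):
        super().__init__()
        # any off-the-shelf segmenter works; DeepLabv3 is just an example here
        self.segmenter = torchvision.models.segmentation.deeplabv3_resnet50(
            weights=None, num_classes=num_parts)
        self.classifier = nn.Sequential(                  # the "tiny classifier"
            nn.Conv2d(num_parts, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes))

    def forward(self, x):
        part_logits = self.segmenter(x)["out"]            # (B, K, H, W)
        class_logits = self.classifier(part_logits.softmax(dim=1))
        return class_logits, part_logits

# training combines a classification loss on class_logits with a part-segmentation
# loss on part_logits, so the whole pipeline is optimized end to end.
```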
- [VehicleSec] Short: Certifiably Robust Perception against Adversarial Patch Attacks: A Survey. Chong Xiang, Chawin Sitawarin, Tong Wu, and Prateek Mittal. In 1st Symposium on Vehicle Security and Privacy (VehicleSec), Mar 2023. Co-located with NDSS 2023. Best Short/WIP Paper Award Runner-Up.
The physical-world adversarial patch attack poses a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that are applicable to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and efforts from both academia and industry to robustify perception models in autonomous vehicles.
@inproceedings{xiang_short_2023, title = {Short: Certifiably Robust Perception against Adversarial Patch Attacks: A Survey}, booktitle = {1st Symposium on {{Vehicle Security}} and {{Privacy}} ({{VehicleSec}})}, author = {Xiang, Chong and Sitawarin, Chawin and Wu, Tong and Mittal, Prateek}, year = {2023}, month = mar, langid = {english}, note = {Co-located with {{NDSS}} 2023. Best Short/WIP Paper Award Runner-Up.}, }
2022
- REAP: A Large-Scale Realistic Adversarial Patch Benchmark. Nabeel Hingun, Chawin Sitawarin, Jerry Li, and David Wagner. Under submission, Oct 2022.
Machine learning models are known to be susceptible to adversarial perturbation. One famous attack is the adversarial patch, a sticker with a particularly crafted pattern that makes the model incorrectly predict the object it is placed on. This attack presents a critical threat to cyber-physical systems that rely on cameras such as autonomous cars. Despite the significance of the problem, conducting research in this setting has been difficult; evaluating attacks and defenses in the real world is exceptionally costly while synthetic data are unrealistic. In this work, we propose the REAP (REalistic Adversarial Patch) Benchmark, a digital benchmark that allows the user to evaluate patch attacks on real images, and under real-world conditions. Built on top of the Mapillary Vistas dataset, our benchmark contains over 14,000 traffic signs. Each sign is augmented with a pair of geometric and lighting transformations, which can be used to apply a digitally generated patch realistically onto the sign, while matching real-world conditions. Using our benchmark, we perform the first large-scale assessments of adversarial patch attacks under realistic conditions. Our experiments suggest that adversarial patch attacks may present a smaller threat than previously believed and that the success rate of an attack on simpler digital simulations is not predictive of its actual effectiveness in practice.
@article{hingun_reap_2022, author = {Hingun, Nabeel and Sitawarin, Chawin and Li, Jerry and Wagner, David}, journal = {Under submission}, month = oct, primaryclass = {cs}, title = {REAP: A Large-Scale Realistic Adversarial Patch Benchmark}, year = {2022}, }
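To make the geometric-plus-lighting idea concrete, here is a hedged OpenCV sketch of applying a digital patch to a sign via a perspective warp followed by an affine relighting. The corner points, alpha, and beta are placeholders; REAP ships its own per-sign transform parameters, and this is not its implementation.

```python
import cv2
import numpy as np

def apply_patch(scene_bgr, patch_bgr, sign_corners, alpha=0.8, beta=10.0):
    """Warp a rectangular patch onto a sign given its four corners in the scene,
    then relight it with a per-sign affine transform (alpha * x + beta)."""
    h, w = patch_bgr.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(sign_corners)                    # four (x, y) points in the scene
    H = cv2.getPerspectiveTransform(src, dst)

    relit = np.clip(alpha * patch_bgr.astype(np.float32) + beta, 0, 255)
    size = (scene_bgr.shape[1], scene_bgr.shape[0])
    warped = cv2.warpPerspective(relit, H, size)
    mask = cv2.warpPerspective(np.ones((h, w), np.float32), H, size)[..., None]
    return (scene_bgr * (1 - mask) + warped * mask).astype(np.uint8)
```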
- [ICML] Demystifying the Adversarial Robustness of Random Transformation Defenses. Chawin Sitawarin, Zachary Golan-Strieb, and David Wagner. In Proceedings of the 39th International Conference on Machine Learning, Oct 2022. Best Paper Award from the AAAI-2022 Workshop on Adversarial Machine Learning and Beyond.
Neural networks’ lack of robustness against attacks raises concerns in security-sensitive settings such as autonomous vehicles. While many countermeasures may look promising, only a few withstand rigorous evaluation. Defenses using random transformations (RT) have shown impressive results, particularly BaRT (Raff et al., 2019) on ImageNet. However, this type of defense has not been rigorously evaluated, leaving its robustness properties poorly understood. Their stochastic properties make evaluation more challenging and render many proposed attacks on deterministic models inapplicable. First, we show that the BPDA attack (Athalye et al., 2018a) used in BaRT’s evaluation is ineffective and likely over-estimates its robustness. We then attempt to construct the strongest possible RT defense through the informed selection of transformations and Bayesian optimization for tuning their parameters. Furthermore, we create the strongest possible attack to evaluate our RT defense. Our new attack vastly outperforms the baseline, reducing the accuracy by 83% compared to the 19% reduction by the commonly used EoT attack (4.3x improvement). Our result indicates that the RT defense on Imagenette dataset (ten-class subset of ImageNet) is not robust against adversarial examples. Extending the study further, we use our new attack to adversarially train RT defense (called AdvRT), resulting in a large robustness gain.
@inproceedings{sitawarin_demystifying_2022, author = {Sitawarin, Chawin and Golan-Strieb, Zachary and Wagner, David}, booktitle = {Proceedings of the 39th International Conference on Machine Learning}, note = {Best Paper Award from AAAI-2022 Workshop on Adversarial Machine Learning and Beyond}, title = {Demystifying the Adversarial Robustness of Random Transformation Defenses}, year = {2022} }
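The baseline attack mentioned in the abstract, Expectation over Transformation (EoT), averages gradients over the defense's randomness. Below is a generic EoT-style PGD sketch, not the paper's stronger attack; it assumes a model whose forward pass internally samples the random transformations.

```python
import torch

def eot_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=40, eot_samples=20):
    """L-inf PGD where each step averages gradients over draws of the defense's randomness."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x_adv)
        for _ in range(eot_samples):                  # expectation over transformations
            loss = torch.nn.functional.cross_entropy(model(x_adv), y)
            grad += torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```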
2021
- [NeurIPS] Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams. Chawin Sitawarin, Evgenios M Kornaropoulos, Dawn Song, and David Wagner. In Advances in Neural Information Processing Systems, Oct 2021.
@inproceedings{sitawarin_adversarial_2021, author = {Sitawarin, Chawin and Kornaropoulos, Evgenios M and Song, Dawn and Wagner, David}, booktitle = {Advances in Neural Information Processing Systems}, publisher = {{Curran Associates, Inc.}}, title = {Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams}, volume = {34}, year = {2021} }
- Improving the Accuracy-Robustness Trade-off for Dual-Domain Adversarial Training. Chawin Sitawarin, Arvind P Sridhar, and David Wagner. In Workshop on Uncertainty and Robustness in Deep Learning, Jul 2021.
While Adversarial Training remains the standard in improving robustness to adversarial attack, it often sacrifices accuracy on natural (clean) samples to a significant extent. Dual-domain training, optimizing on both clean and adversarial objectives, can help realize a better trade-off between clean accuracy and robustness. In this paper, we develop methods to improve dual-domain training for large adversarial perturbations and complex datasets. We first demonstrate that existing methods suffer from poor performance in this setting, due to a poor training procedure and overfitting to a particular attack. Then, we develop novel methods to address these issues. First, we show that adding KLD regularization to the dual training objective mitigates this overfitting and achieves a better trade-off, on CIFAR-10 and a 10-class subset of ImageNet. Then, inspired by domain adaptation, we develop a new normalization technique, Dual Batch Normalization, to further improve accuracy. Combining these two strategies, our model sets a new state of the art in trade-off performance for dual-domain adversarial training.
@inproceedings{sitawarin_improving_2021, author = {Sitawarin, Chawin and Sridhar, Arvind P and Wagner, David}, booktitle = {Workshop on {{Uncertainty}} and {{Robustness}} in {{Deep Learning}}}, language = {en}, month = jul, pages = {10}, title = {Improving the Accuracy-Robustness Trade-off for Dual-Domain Adversarial Training}, year = {2021} }
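The KLD-regularized dual objective can be written compactly. The sketch below is one plausible form (clean and adversarial cross-entropy plus a TRADES-style KL term); the weight names w_adv and w_kl are illustrative, not the paper's notation.

```python
import torch.nn.functional as F

def dual_domain_loss(model, x_clean, x_adv, y, w_adv=1.0, w_kl=1.0):
    """Optimize clean and adversarial objectives jointly, with a KL term that ties
    the two output distributions together (illustrative weighting)."""
    logits_clean = model(x_clean)
    logits_adv = model(x_adv)
    ce_clean = F.cross_entropy(logits_clean, y)
    ce_adv = F.cross_entropy(logits_adv, y)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1), reduction="batchmean")
    return ce_clean + w_adv * ce_adv + w_kl * kl
```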
- [AISec] SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing. Chawin Sitawarin, Supriyo Chakraborty, and David Wagner. In Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security, Jul 2021.
Adversarial training (AT) has become a popular choice for training robust networks. However, it tends to sacrifice clean accuracy heavily in favor of robustness and suffers from a large generalization error. To address these concerns, we propose Smooth Adversarial Training (SAT), guided by our analysis of the eigenspectrum of the loss Hessian. We find that curriculum learning, a scheme that emphasizes starting “easy” and gradually ramping up the “difficulty” of training, smooths the adversarial loss landscape for a suitably chosen difficulty metric. We present a general formulation for curriculum learning in the adversarial setting and propose two difficulty metrics based on the maximal Hessian eigenvalue (H-SAT) and the softmax probability (P-SAT). We demonstrate that SAT stabilizes network training even for a large perturbation norm and allows the network to operate at a better clean accuracy versus robustness trade-off curve compared to AT. This leads to a significant improvement in both clean accuracy and robustness compared to AT, TRADES, and other baselines. To highlight a few results, our best model improves normal and robust accuracy on CIFAR-100 by 6% and 1%, respectively, compared to AT. On Imagenette, a ten-class subset of ImageNet, our model outperforms AT by 23% and 3% on normal and robust accuracy, respectively.
@inproceedings{sitawarin_sat_2021, address = {New York, NY, USA}, author = {Sitawarin, Chawin and Chakraborty, Supriyo and Wagner, David}, booktitle = {Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security}, doi = {10.1145/3474369.3486878}, isbn = {9781450386579}, keywords = {adversarial examples, adversarial machine learning, curriculum learning}, location = {Virtual Event, Republic of Korea}, numpages = {12}, pages = {25–36}, publisher = {Association for Computing Machinery}, series = {AISec '21}, title = {SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing}, url = {https://doi.org/10.1145/3474369.3486878}, year = {2021} }
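A minimal sketch of the softmax-probability difficulty idea: run PGD but stop once the true-class probability drops below a curriculum threshold that is annealed over training. The function names and schedule are assumptions, not the exact recipe from the paper.

```python
import torch
import torch.nn.functional as F

def curriculum_pgd(model, x, y, eps, alpha, steps, prob_threshold):
    """PGD that stops once the batch is 'hard enough' for the current curriculum
    stage, i.e. the true-class softmax probability falls below prob_threshold."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        prob_true = F.softmax(logits, dim=1).gather(1, y[:, None]).squeeze(1)
        if prob_true.mean() < prob_threshold:         # difficulty cap reached
            break
        grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()

# prob_threshold is annealed from ~1.0 (easy stage) toward 0.0 (full-strength attack)
# over training, which is what smooths the adversarial loss landscape.
```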
- Mitigating Adversarial Training Instability with Batch Normalization. Arvind P Sridhar, Chawin Sitawarin, and David Wagner. In Security and Safety in Machine Learning Systems Workshop, May 2021.
The adversarial training paradigm has become the standard in training deep neural networks for robustness. Yet, it remains unstable, with the mechanisms driving this instability poorly understood. In this study, we discover that this instability is primarily driven by a non-smooth optimization landscape and an internal covariate shift phenomenon, and show that Batch Normalization (BN) can effectively mitigate both these issues. Further, we demonstrate that BN universally improves clean and robust performance across various defenses, datasets, and model types, with greater improvement on more difficult tasks. Finally, we confirm BN’s heterogeneous distribution issue with mixed-batch training and propose a solution.
@inproceedings{sridhar_mitigating_2021, author = {Sridhar, Arvind P and Sitawarin, Chawin and Wagner, David}, booktitle = {Security and {{Safety}} in {{Machine Learning Systems Workshop}}}, language = {en}, month = may, pages = {13}, title = {Mitigating Adversarial Training Instability with Batch Normalization}, year = {2021} }
2020
- [DLS] Minimum-Norm Adversarial Examples on KNN and KNN Based Models. Chawin Sitawarin and David Wagner. In 2020 IEEE Security and Privacy Workshops (SPW), May 2020.
We study the robustness against adversarial examples of kNN classifiers and classifiers that combine kNN with neural networks. The main difficulty lies in the fact that finding an optimal attack on kNN is intractable for typical datasets. In this work, we propose a gradient-based attack on kNN and kNN-based defenses, inspired by the previous work by Sitawarin & Wagner [1]. We demonstrate that our attack outperforms their method on all of the models we tested with only a minimal increase in the computation time. The attack also beats the state-of-the-art attack [2] on kNN when k > 1 using less than 1% of its running time. We hope that this attack can be used as a new baseline for evaluating the robustness of kNN and its variants.
@inproceedings{sitawarin_minimumnorm_2020, address = {{Los Alamitos, CA, USA}}, author = {Sitawarin, Chawin and Wagner, David}, booktitle = {2020 {{IEEE}} Security and Privacy Workshops ({{SPW}})}, doi = {10.1109/SPW50608.2020.00023}, keywords = {computational modeling,conferences,data privacy,neural networks,robustness,security}, month = may, pages = {34--40}, publisher = {{IEEE Computer Society}}, title = {Minimum-Norm Adversarial Examples on {{KNN}} and {{KNN}} Based Models}, year = {2020} }
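The common trick behind gradient-based attacks on kNN is to replace the hard neighbor rule with a differentiable surrogate. The sketch below uses a generic soft-kNN (softmax over negative distances); it is an illustration of the idea, not the exact formulation in the paper.

```python
import torch

def soft_knn_logits(x, train_x, train_y, num_classes, temp=10.0):
    """Differentiable surrogate for a kNN classifier: class scores are
    softmax-weighted votes of the training points, decaying with distance."""
    d = torch.cdist(x, train_x)                       # (B, N) pairwise L2 distances
    w = torch.softmax(-temp * d, dim=1)               # closer points get larger weight
    onehot = torch.nn.functional.one_hot(train_y, num_classes).float()
    return w @ onehot                                  # (B, C) soft class scores

# an attack then runs gradient descent on x to push these scores toward a wrong class,
# and verifies success against the real (hard) kNN rule.
```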
2019
- [AISec] Analyzing the Robustness of Open-World Machine Learning. Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, Mung Chiang, and Prateek Mittal. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, May 2019.
When deploying machine learning models in real-world applications, an open-world learning framework is needed to deal with both normal in-distribution inputs and undesired out-of-distribution (OOD) inputs. Open-world learning frameworks include OOD detectors that aim to discard input examples which are not from the same distribution as the training data of machine learning classifiers. However, our understanding of current OOD detectors is limited to the setting of benign OOD data, and an open question is whether they are robust in the presence of adversaries. In this paper, we present the first analysis of the robustness of open-world learning frameworks in the presence of adversaries by introducing and designing OOD adversarial examples. Our experimental results show that current OOD detectors can be easily evaded by slightly perturbing benign OOD inputs, revealing a severe limitation of current open-world learning frameworks. Furthermore, we find that OOD adversarial examples also pose a strong threat to adversarial training based defense methods in spite of their effectiveness against in-distribution adversarial attacks. To counteract these threats and ensure the trustworthy detection of OOD inputs, we outline a preliminary design for a robust open-world machine learning framework.
@inproceedings{sehwag_analyzing_2019, address = {{New York, NY, USA}}, author = {Sehwag, Vikash and Bhagoji, Arjun Nitin and Song, Liwei and Sitawarin, Chawin and Cullina, Daniel and Chiang, Mung and Mittal, Prateek}, booktitle = {Proceedings of the 12th {{ACM}} Workshop on Artificial Intelligence and Security}, doi = {10.1145/3338501.3357372}, isbn = {978-1-4503-6833-9}, keywords = {adversarial example,deep learning,open world recognition}, pages = {105--116}, publisher = {{Association for Computing Machinery}}, series = {{{AISec}}'19}, title = {Analyzing the Robustness of Open-World Machine Learning}, year = {2019} }
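Generically, such an attack perturbs a benign OOD input so that the detector's score falls below its rejection threshold. The sketch below assumes a differentiable ood_score function as a stand-in for whichever detector is under evaluation; it illustrates the threat model rather than reproducing the paper's exact attack.

```python
import torch

def evade_ood_detector(ood_score, x_ood, eps=8 / 255, alpha=1 / 255, steps=50):
    """PGD that minimizes a differentiable OOD score so an out-of-distribution
    input is (wrongly) accepted as in-distribution. `ood_score` is a placeholder."""
    x_adv = x_ood.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        score = ood_score(x_adv).sum()            # lower score = looks in-distribution
        grad = torch.autograd.grad(score, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()
            x_adv = x_ood + (x_adv - x_ood).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```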
- Defending against Adversarial Examples with K-Nearest Neighbor. Chawin Sitawarin and David Wagner. arXiv:1906.09525 [cs], Jun 2019.
(We took the paper down from arXiv because the defense is broken by our new attack. The paper is still available [here](https://drive.google.com/file/d/1_3SjKi92mfCRAg99EXEJXpOpGCCw2OXN/view?usp=sharing).) Robustness is an increasingly important property of machine learning models as they become more and more prevalent. We propose a defense against adversarial examples based on a k-nearest neighbor (kNN) on the intermediate activations of neural networks. Our scheme surpasses state-of-the-art defenses on MNIST and CIFAR-10 against l2-perturbation by a significant margin. With our models, the mean perturbation norm required to fool them is 3.07 on MNIST and 2.30 on CIFAR-10. Additionally, we propose a simple certifiable lower bound on the l2-norm of the adversarial perturbation using a more specific version of our scheme, a 1-NN on representations learned by a Lipschitz network. Our model provides a nontrivial average lower bound on the perturbation norm, comparable to other schemes on MNIST with similar clean accuracy.
@article{sitawarin_defending_2019, annotation = {14 citations (Semantic Scholar/arXiv) [2021-06-11]}, archiveprefix = {arXiv}, author = {Sitawarin, Chawin and Wagner, David}, eprint = {1906.09525}, journal = {arXiv:1906.09525 [cs]}, month = jun, primaryclass = {cs}, title = {Defending against Adversarial Examples with K-Nearest Neighbor}, year = {2019} }
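For the 1-NN-on-Lipschitz-features variant, a certified radius follows from a standard Lipschitz argument. The statement below is the generic form of such a bound, written here for reference rather than quoted from the paper: d_same and d_diff denote feature-space distances from f(x) to the nearest prototype of the predicted class and of any other class, and L is the Lipschitz constant of the feature map f.

```latex
% Since \|f(x+\delta) - f(x)\|_2 \le L \|\delta\|_2, the 1-NN decision cannot flip
% while the feature shift stays below half the margin between the two nearest prototypes:
\[
\|\delta\|_2 \;<\; \frac{d_{\mathrm{diff}} - d_{\mathrm{same}}}{2L}
\quad\Longrightarrow\quad
\text{the 1-NN prediction at } x + \delta \text{ is unchanged.}
\]
```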
- [DLS] On the Robustness of Deep K-Nearest Neighbors. Chawin Sitawarin and David Wagner. In 2019 IEEE Security and Privacy Workshops (SPW), May 2019.
Despite a large amount of attention on adversarial examples, very few works have demonstrated an effective defense against this threat. We examine Deep k-Nearest Neighbor (DkNN), a proposed defense that combines k-Nearest Neighbor (kNN) and deep learning to improve the model’s robustness to adversarial examples. It is challenging to evaluate the robustness of this scheme due to the lack of an efficient algorithm for attacking kNN classifiers with large k and high-dimensional data. We propose a heuristic attack that allows us to use gradient descent to find adversarial examples for kNN classifiers, and then apply it to attack the DkNN defense as well. Results suggest that our attack is moderately stronger than any naive attack on kNN and significantly outperforms other attacks on DkNN.
@inproceedings{sitawarin_robustness_2019, address = {{Los Alamitos, CA, USA}}, author = {Sitawarin, Chawin and Wagner, David}, booktitle = {2019 {{IEEE}} Security and Privacy Workshops ({{SPW}})}, doi = {10.1109/SPW.2019.00014}, keywords = {adaptation models,deep learning,neural networks,optimization,perturbation methods,robustness,training}, month = may, pages = {1--7}, publisher = {{IEEE Computer Society}}, title = {On the Robustness of Deep K-Nearest Neighbors}, year = {2019} }
2018
- [CISS] Enhancing Robustness of Machine Learning Systems via Data Transformations. Arjun Nitin Bhagoji, Daniel Cullina, Chawin Sitawarin, and Prateek Mittal. In 52nd Annual Conference on Information Sciences and Systems (CISS), May 2018.
We propose the use of data transformations as a defense against evasion attacks on ML classifiers. We present and investigate strategies for incorporating a variety of data transformations including dimensionality reduction via Principal Component Analysis and data "anti-whitening" to enhance the resilience of machine learning, targeting both the classification and the training phase. We empirically evaluate and demonstrate the feasibility of linear transformations of data as a defense mechanism against evasion attacks using multiple real-world datasets. Our key findings are that the defense is (i) effective against the best known evasion attacks from the literature, resulting in a two-fold increase in the resources required by a white-box adversary with knowledge of the defense for a successful attack, (ii) applicable across a range of ML classifiers, including Support Vector Machines and Deep Neural Networks, and (iii) generalizable to multiple application domains, including image classification and human activity classification.
@inproceedings{bhagoji_enhancing_2018, author = {Bhagoji, Arjun Nitin and Cullina, Daniel and Sitawarin, Chawin and Mittal, Prateek}, booktitle = {52nd Annual Conference on Information Sciences and Systems ({{CISS}})}, doi = {10.1109/CISS.2018.8362326}, pages = {1--5}, title = {Enhancing Robustness of Machine Learning Systems via Data Transformations}, year = {2018} }
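A minimal scikit-learn sketch of the dimensionality-reduction piece of this defense (projecting inputs onto the top principal components before classification). The dataset, component count, and classifier here are arbitrary choices for illustration; the paper also studies other linear transformations such as anti-whitening.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Project inputs onto the top principal components before classification, discarding
# the low-variance directions that small evasion perturbations tend to exploit.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

defended = make_pipeline(PCA(n_components=20), LinearSVC(max_iter=10000))
defended.fit(X_train, y_train)
print("clean accuracy with a PCA front end:", defended.score(X_test, y_test))
```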
- [CCS] Not All Pixels Are Born Equal: An Analysis of Evasion Attacks under Locality Constraints. Vikash Sehwag, Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chiang, and Prateek Mittal. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Oct 2018.
Deep neural networks (DNNs) have enabled success in learning tasks such as image classification, semantic image segmentation and steering angle prediction which can be key components of the computer vision pipeline of safety-critical systems such as autonomous vehicles. However, previous work has demonstrated the feasibility of using physical adversarial examples to attack image classification systems.
@inproceedings{sehwag_not_2018, address = {{Toronto Canada}}, annotation = {4 citations (Semantic Scholar/DOI) [2021-06-11]. 0 citations (Crossref) [2021-06-11]}, author = {Sehwag, Vikash and Sitawarin, Chawin and Bhagoji, Arjun Nitin and Mosenia, Arsalan and Chiang, Mung and Mittal, Prateek}, booktitle = {Proceedings of the 2018 {{ACM SIGSAC Conference}} on {{Computer}} and {{Communications Security}}}, doi = {10.1145/3243734.3278515}, isbn = {978-1-4503-5693-0}, language = {en}, month = oct, pages = {2285--2287}, publisher = {{ACM}}, shorttitle = {Not All Pixels Are Born Equal}, title = {Not All Pixels Are Born Equal: An Analysis of Evasion Attacks under Locality Constraints}, year = {2018} }
- DARTS: Deceiving Autonomous Cars with Toxic Signs. Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chiang, and Prateek Mittal. arXiv:1802.06430 [cs], May 2018.
Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). In particular, we introduce two novel methods to create these toxic signs. First, we propose Out-of-Distribution attacks, which expand the scope of adversarial examples by enabling the adversary to generate these starting from an arbitrary point in the image space compared to prior attacks which are restricted to existing training/test data (In-Distribution). Second, we present the Lenticular Printing attack, which relies on an optical phenomenon to deceive the traffic sign recognition system. We extensively evaluate the effectiveness of the proposed attacks in both virtual and real-world settings and consider both white-box and black-box threat models. Our results demonstrate that the proposed attacks are successful under both settings and threat models. We further show that Out-of-Distribution attacks can outperform In-Distribution attacks on classifiers defended using the adversarial training defense, exposing a new attack vector for these defenses.
@article{sitawarin_darts_2018, archiveprefix = {arXiv}, author = {Sitawarin, Chawin and Bhagoji, Arjun Nitin and Mosenia, Arsalan and Chiang, Mung and Mittal, Prateek}, eprint = {1802.06430}, journal = {arXiv:1802.06430 [cs]}, month = may, primaryclass = {cs}, shorttitle = {{{DARTS}}}, title = {{{DARTS}}: Deceiving Autonomous Cars with Toxic Signs}, year = {2018} }
- [Photon. Res.] Inverse-designed photonic fibers and metasurfaces for nonlinear frequency conversion (Invited). Chawin Sitawarin, Weiliang Jin, Zin Lin, and Alejandro W. Rodriguez. Photon. Res., May 2018.
Typically, photonic waveguides designed for nonlinear frequency conversion rely on intuitive and established principles, including index guiding and bandgap engineering, and are based on simple shapes with high degrees of symmetry. We show that recently developed inverse-design techniques can be applied to discover new kinds of microstructured fibers and metasurfaces designed to achieve large nonlinear frequency-conversion efficiencies. As a proof of principle, we demonstrate complex, wavelength-scale chalcogenide glass fibers and gallium phosphide three-dimensional metasurfaces exhibiting some of the largest nonlinear conversion efficiencies predicted thus far, e.g., lowering the power requirement for third-harmonic generation by 10^4 and enhancing second-harmonic generation conversion efficiency by 10^7. Such enhancements arise because, in addition to enabling a great degree of tunability in the choice of design wavelengths, these optimization tools ensure both frequency- and phase-matching in addition to large nonlinear overlap factors.
@article{sitawarin_inversedesigned_2018, author = {Sitawarin, Chawin and Jin, Weiliang and Lin, Zin and Rodriguez, Alejandro W.}, doi = {10.1364/PRJ.6.000B82}, journal = {Photon. Res.}, keywords = {Nonlinear optics, fibers; Harmonic generation and mixing ; Nonlinear optics, devices; Computational electromagnetic methods ; Nanophotonics and photonic crystals ; Chalcogenide fibers; Harmonic generation; Light matter interactions; Microstructured fibers; Phase matching; Second harmonic generation}, month = may, number = {5}, pages = {B82--B89}, publisher = {OSA}, title = {Inverse-designed photonic fibers and metasurfaces for nonlinear frequency conversion (Invited)}, volume = {6}, year = {2018} }
- [DLS] Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos. Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Prateek Mittal, and Mung Chiang. arXiv:1801.02780 [cs], Mar 2018.
We propose a new real-world attack against the computer vision based systems of autonomous vehicles (AVs). Our novel Sign Embedding attack exploits the concept of adversarial examples to modify innocuous signs and advertisements in the environment such that they are classified as the adversary’s desired traffic sign with high confidence. Our attack greatly expands the scope of the threat posed to AVs since adversaries are no longer restricted to just modifying existing traffic signs as in previous work. Our attack pipeline generates adversarial samples which are robust to the environmental conditions and noisy image transformations present in the physical world. We ensure this by including a variety of possible image transformations in the optimization problem used to generate adversarial samples. We verify the robustness of the adversarial samples by printing them out and carrying out drive-by tests simulating the conditions under which image capture would occur in a real-world scenario. We experimented with physical attack samples for different distances, lighting conditions and camera angles. In addition, extensive evaluations were carried out in the virtual setting for a variety of image transformations. The adversarial samples generated using our method have adversarial success rates in excess of 95% in the physical as well as virtual settings.
@article{sitawarin_rogue_2018, archiveprefix = {arXiv}, author = {Sitawarin, Chawin and Bhagoji, Arjun Nitin and Mosenia, Arsalan and Mittal, Prateek and Chiang, Mung}, eprint = {1801.02780}, journal = {arXiv:1801.02780 [cs]}, month = mar, primaryclass = {cs}, shorttitle = {Rogue Signs}, title = {Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos}, year = {2018} }
2017
- Beyond Grand Theft Auto V for Training, Testing and Enhancing Deep Learning in Self Driving Cars. Mark Anthony Martinez, Chawin Sitawarin, Kevin Finch, Lennart Meincke, Alexander Yablonski, and Alain Kornhauser. arXiv:1712.01397 [cs], Dec 2017.
As an initial assessment, over 480,000 labeled virtual images of normal highway driving were readily generated in Grand Theft Auto V’s virtual environment. Using these images, a CNN was trained to detect following distance to cars/objects ahead, lane markings, and driving angle (angular heading relative to lane centerline): all variables necessary for basic autonomous driving. Encouraging results were obtained when tested on over 50,000 labeled virtual images from substantially different GTA-V driving environments. This initial assessment begins to define both the range and scope of the labeled images needed for training as well as the range and scope of labeled images needed for testing the definition of boundaries and limitations of trained networks. It is the efficacy and flexibility of a "GTA-V"-like virtual environment that is expected to provide an efficient well-defined foundation for the training and testing of Convolutional Neural Networks for safe driving. Additionally, described is the Princeton Virtual Environment (PVE) for the training, testing and enhancement of safe driving AI, which is being developed using the video-game engine Unity. PVE is being developed to recreate rare but critical corner cases that can be used in re-training and enhancing machine learning models and understanding the limitations of current self driving models. The Florida Tesla crash is being used as an initial reference.
@article{martinez_grand_2017, archiveprefix = {arXiv}, author = {Martinez, Mark Anthony and Sitawarin, Chawin and Finch, Kevin and Meincke, Lennart and Yablonski, Alexander and Kornhauser, Alain}, eprint = {1712.01397}, journal = {arXiv:1712.01397 [cs]}, month = dec, primaryclass = {cs}, title = {Beyond Grand Theft Auto v for Training, Testing and Enhancing Deep Learning in Self Driving Cars}, year = {2017} }
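A small sketch of the kind of multi-output regression network the abstract describes: one vision backbone with three regression targets (following distance, lane position, heading angle). The backbone and layer sizes are illustrative assumptions, not the architecture trained in the paper.

```python
import torch.nn as nn
import torchvision

class DrivingAffordanceNet(nn.Module):
    """Predict following distance, lane offset, and heading angle from a frame."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Identity()                 # reuse the features, drop the ImageNet head
        self.backbone = backbone
        self.head = nn.Linear(512, 3)               # [distance, lane_offset, heading]

    def forward(self, frames):                      # frames: (B, 3, H, W)
        return self.head(self.backbone(frames))

# trained with a standard regression loss (e.g., L1 or L2) against labels exported
# from the game engine.
```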
2016
- [CLEO] Inverse-Designed Nonlinear Nanophotonic Structures: Enhanced Frequency Conversion at the Nano Scale. Zin Lin, Chawin Sitawarin, Marko Loncar, and Alejandro W. Rodriguez. In 2016 Conference on Lasers and Electro-Optics, CLEO 2016, Dec 2016.
© 2016 OSA. We describe a large-scale computational approach based on topology optimization that enables automatic discovery of novel nonlinear photonic structures. As examples, we design complex cavity and fiber geometries that can achieve high-efficiency nonlinear frequency conversion.
@inproceedings{lin_inversedesigned_2016, author = {Lin, Zin and Sitawarin, Chawin and Loncar, Marko and Rodriguez, Alejandro W.}, booktitle = {2016 {{Conference}} on {{Lasers}} and {{Electro}}-{{Optics}}, {{CLEO}} 2016}, isbn = {978-1-943580-11-8}, title = {Inverse-Designed Nonlinear Nanophotonic Structures: Enhanced Frequency Conversion at the Nano Scale}, year = {2016} }