Publications by category in reverse chronological order.
2022
ICML
Demystifying the Adversarial Robustness of Random Transformation Defenses
Sitawarin, Chawin, Golan-Strieb, Zachary, and Wagner, David
In Proceedings of the 39th International Conference on Machine Learning (Short Presentation), AAAI-2022 Workshop on Adversarial Machine Learning and Beyond (Best Paper), 2022
Neural networks’ lack of robustness against attacks raises concerns in security-sensitive settings such as autonomous vehicles. While many countermeasures may look promising, only a few withstand rigorous evaluation. Defenses using random transformations (RT) have shown impressive results, particularly BaRT (Raff et al., 2019) on ImageNet. However, this type of defense has not been rigorously evaluated, leaving its robustness properties poorly understood. Their stochastic properties make evaluation more challenging and render many proposed attacks on deterministic models inapplicable. First, we show that the BPDA attack (Athalye et al., 2018a) used in BaRT’s evaluation is ineffective and likely overestimates its robustness. We then attempt to construct the strongest possible RT defense through the informed selection of transformations and Bayesian optimization for tuning their parameters. Furthermore, we create the strongest possible attack to evaluate our RT defense. Our new attack vastly outperforms the baseline, reducing the accuracy by 83% compared to the 19% reduction by the commonly used EoT attack (4.3x improvement). Our result indicates that the RT defense on the Imagenette dataset (a ten-class subset of ImageNet) is not robust against adversarial examples. Extending the study further, we use our new attack to adversarially train the RT defense (called AdvRT), resulting in a large robustness gain.
@inproceedings{sitawarin_demystifying_2022,abbr={ICML},author={Sitawarin, Chawin and Golan-Strieb, Zachary and Wagner, David},bibtex_show={false},booktitle={Proceedings of the 39th International Conference on Machine Learning (Short Presentation), AAAI-2022 Workshop on Adversarial Machine Learning and Beyond (Best Paper)},pdf={https://openreview.net/forum?id=p4SrFydwO5},slides={/assets/slides/aaai_advml_workshop_2022.pdf},title={Demystifying the Adversarial Robustness of Random Transformation Defenses},url={https://openreview.net/forum?id=p4SrFydwO5},year={2022}}
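For a concrete picture of the kind of attack discussed in the abstract above, here is a minimal, hypothetical EoT-style PGD sketch in PyTorch that averages gradients over a defense's random transformations before taking a signed step. `model`, `random_transform`, and all hyperparameters are placeholders; this illustrates the general technique, not the paper's attack.

```python
# Illustrative sketch (not the paper's code): an EoT-style PGD attack that
# averages the loss over random draws of a defense's transformation.
import torch
import torch.nn.functional as F

def eot_pgd(model, random_transform, x, y, eps=8/255, alpha=2/255, steps=40, samples=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Average the loss over several random draws of the transformation.
        loss = 0.0
        for _ in range(samples):
            logits = model(random_transform(x_adv))
            loss = loss + F.cross_entropy(logits, y)
        loss = loss / samples
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                      # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)    # project to l_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```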
2021
Workshop
Improving the Accuracy-Robustness Trade-off for Dual-Domain Adversarial Training
Sitawarin, Chawin, Sridhar, Arvind P, and Wagner, David
In Workshop on Uncertainty and Robustness in Deep Learning, 2021
While Adversarial Training remains the standard in improving robustness to adversarial attack, it often sacrifices accuracy on natural (clean) samples to a significant extent. Dual-domain training, optimizing on both clean and adversarial objectives, can help realize a better trade-off between clean accuracy and robustness. In this paper, we develop methods to improve dual-domain training for large adversarial perturbations and complex datasets. We first demonstrate that existing methods suffer from poor performance in this setting, due to a poor training procedure and overfitting to a particular attack. Then, we develop novel methods to address these issues. First, we show that adding KLD regularization to the dual training objective mitigates this overfitting and achieves a better trade-off, on CIFAR-10 and a 10-class subset of ImageNet. Then, inspired by domain adaptation, we develop a new normalization technique, Dual Batch Normalization, to further improve accuracy. Combining these two strategies, our model sets a new state of the art in trade-off performance for dual-domain adversarial training.
@inproceedings{sitawarin_improving_2021,abbr={Workshop},author={Sitawarin, Chawin and Sridhar, Arvind P and Wagner, David},bibtex_show={false},booktitle={Workshop on {{Uncertainty}} and {{Robustness}} in {{Deep Learning}}},copyright={Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC-BY-NC-ND)},language={en},month=jul,pages={10},pdf={https://chawins.github.io/assets/pdf/UDL2021-paper-048.pdf},title={Improving the Accuracy-Robustness Trade-off for Dual-Domain Adversarial Training},year={2021}}
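The dual-domain objective with a KL-divergence regularizer described above can be sketched roughly as below. The `attack` function, the loss weighting, and the hyperparameters are assumptions for illustration; this is not the authors' implementation and it omits Dual Batch Normalization.

```python
# Rough sketch of a dual-domain training loss with a KLD regularizer.
import torch
import torch.nn.functional as F

def dual_domain_loss(model, x_clean, y, attack, beta=1.0, lam=0.5):
    x_adv = attack(model, x_clean, y)                  # e.g., a PGD adversary
    logits_clean = model(x_clean)
    logits_adv = model(x_adv)
    loss_clean = F.cross_entropy(logits_clean, y)      # clean objective
    loss_adv = F.cross_entropy(logits_adv, y)          # adversarial objective
    # KLD regularizer pulls the adversarial output distribution toward the clean one.
    kld = F.kl_div(F.log_softmax(logits_adv, dim=1),
                   F.softmax(logits_clean, dim=1),
                   reduction="batchmean")
    return lam * loss_clean + (1 - lam) * loss_adv + beta * kld
```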
Workshop
Mitigating Adversarial Training Instability with Batch Normalization
Sridhar, Arvind P, Sitawarin, Chawin, and Wagner, David
In Security and Safety in Machine Learning Systems Workshop, 2021
The adversarial training paradigm has become the standard in training deep neural networks for robustness. Yet, it remains unstable, with the mechanisms driving this instability poorly understood. In this study, we discover that this instability is primarily driven by a non-smooth optimization landscape and an internal covariate shift phenomenon, and show that Batch Normalization (BN) can effectively mitigate both these issues. Further, we demonstrate that BN universally improves clean and robust performance across various defenses, datasets, and model types, with greater improvement on more difficult tasks. Finally, we confirm BN’s heterogeneous distribution issue with mixed-batch training and propose a solution.
@inproceedings{sridhar_mitigating_2021,abbr={Workshop},author={Sridhar, Arvind P and Sitawarin, Chawin and Wagner, David},bibtex_show={false},booktitle={Security and {{Safety}} in {{Machine Learning Systems Workshop}}},copyright={Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC-BY-NC-ND)},language={en},month=may,pages={13},pdf={https://aisecure-workshop.github.io/aml-iclr2021/papers/43.pdf},title={Mitigating Adversarial Training Instability with Batch Normalization},year={2021}}
AISec
SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing
Sitawarin, Chawin, Chakraborty, Supriyo, and Wagner, David
In Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security, 2021
Adversarial training (AT) has become a popular choice for training robust networks. However, it tends to sacrifice clean accuracy heavily in favor of robustness and suffers from a large generalization error. To address these concerns, we propose Smooth Adversarial Training (SAT), guided by our analysis of the eigenspectrum of the loss Hessian. We find that curriculum learning, a scheme that emphasizes starting “easy” and gradually ramping up the “difficulty” of training, smooths the adversarial loss landscape for a suitably chosen difficulty metric. We present a general formulation for curriculum learning in the adversarial setting and propose two difficulty metrics based on the maximal Hessian eigenvalue (H-SAT) and the softmax probability (P-SAT). We demonstrate that SAT stabilizes network training even for a large perturbation norm and allows the network to operate at a better clean accuracy versus robustness trade-off curve compared to AT. This leads to a significant improvement in both clean accuracy and robustness compared to AT, TRADES, and other baselines. To highlight a few results, our best model improves normal and robust accuracy by 6% and 1%, respectively, on CIFAR-100 compared to AT. On Imagenette, a ten-class subset of ImageNet, our model outperforms AT by 23% and 3% on normal and robust accuracy, respectively.
@inproceedings{sitawarin_sat_2021,abbr={AISec},address={New York, NY, USA},arxiv={2003.09347},author={Sitawarin, Chawin and Chakraborty, Supriyo and Wagner, David},bibtex_show={false},booktitle={Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security},doi={10.1145/3474369.3486878},isbn={9781450386579},keywords={adversarial examples, adversarial machine learning, curriculum learning},location={Virtual Event, Republic of Korea},numpages={12},pages={25–36},pdf={https://dl.acm.org/doi/abs/10.1145/3474369.3486878},publisher={Association for Computing Machinery},series={AISec '21},slides={/assets/slides/aisec_2021.pdf},title={SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing},url={https://doi.org/10.1145/3474369.3486878},year={2021}}
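As a rough illustration of the curriculum idea above (not the paper's exact H-SAT/P-SAT procedure), the sketch below generates training perturbations with PGD but stops perturbing an example once its true-class softmax probability falls below a curriculum threshold `rho`, which would be lowered as training progresses. Shapes assume image batches of shape (B, C, H, W); all names and schedules are illustrative.

```python
# Hedged sketch of curriculum-controlled PGD for adversarial training.
import torch
import torch.nn.functional as F

def curriculum_pgd(model, x, y, rho, eps=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        prob_true = F.softmax(logits, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
        # Only keep attacking examples that are still "too easy" at this stage.
        active = prob_true > rho
        if not active.any():
            break
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign() * active.float().view(-1, 1, 1, 1)
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```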
NeurIPS
Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams
Sitawarin, Chawin, Kornaropoulos, Evgenios M, Song, Dawn, and Wagner, David
In Advances in Neural Information Processing Systems, 2021
@inproceedings{sitawarin_adversarial_2021,abbr={NeurIPS},author={Sitawarin, Chawin and Kornaropoulos, Evgenios M and Song, Dawn and Wagner, David},bibtex_show={false},booktitle={Advances in Neural Information Processing Systems},pdf={https://openreview.net/forum?id=2j3B_YkC8r},publisher={{Curran Associates, Inc.}},title={Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams},volume={34},year={2021}}
2020
DLS
Minimum-Norm Adversarial Examples on KNN and KNN Based Models
Sitawarin, Chawin, and Wagner, David
In 2020 IEEE Security and Privacy Workshops (SPW), 2020
We study the robustness against adversarial examples of kNN classifiers and classifiers that combine kNN with neural networks. The main difficulty lies in the fact that finding an optimal attack on kNN is intractable for typical datasets. In this work, we propose a gradient-based attack on kNN and kNN-based defenses, inspired by the previous work by Sitawarin & Wagner [1]. We demonstrate that our attack outperforms their method on all of the models we tested with only a minimal increase in the computation time. The attack also beats the state-of-the-art attack [2] on kNN when k > 1 using less than 1% of its running time. We hope that this attack can be used as a new baseline for evaluating the robustness of kNN and its variants.
@inproceedings{sitawarin_minimumnorm_2020,abbr={DLS},address={{Los Alamitos, CA, USA}},author={Sitawarin, Chawin and Wagner, David},bibtex_show={false},booktitle={2020 {{IEEE}} Security and Privacy Workshops ({{SPW}})},code={https://github.com/chawins/knn-defense},copyright={All rights reserved},doi={10.1109/SPW50608.2020.00023},keywords={computational modeling,conferences,data privacy,neural networks,robustness,security},month=may,pages={34--40},pdf={https://arxiv.org/abs/2003.09347},publisher={{IEEE Computer Society}},slides={https://youtu.be/4YNoJQ0ptGE?t=2699},title={Minimum-Norm Adversarial Examples on {{KNN}} and {{KNN}} Based Models},year={2020}}
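A generic way to make a kNN classifier amenable to gradient-based attacks, in the spirit of the abstract above, is to replace the hard neighbor vote with a softmax over negative distances. The sketch below is an illustrative surrogate, not the paper's algorithm; the temperature `temp` and the perturbation penalty are arbitrary assumptions.

```python
# Illustrative soft-kNN surrogate and a gradient-based attack on it.
import torch
import torch.nn.functional as F

def soft_knn_scores(x, train_x, train_y, num_classes, temp=10.0):
    # x: (B, D) flattened queries; train_x: (N, D); train_y: (N,) integer labels.
    dists = torch.cdist(x, train_x)                    # (B, N) pairwise L2 distances
    weights = F.softmax(-temp * dists, dim=1)          # soft nearest-neighbor weights
    onehot = F.one_hot(train_y, num_classes).float()   # (N, C)
    return weights @ onehot                            # (B, C) soft class scores

def attack_soft_knn(x, y, train_x, train_y, num_classes, lr=0.01, steps=200):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        scores = soft_knn_scores(x + delta, train_x, train_y, num_classes)
        # Push the true-class score down while keeping the perturbation small.
        loss = -F.nll_loss(torch.log(scores + 1e-12), y) + 0.1 * delta.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()
```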
2019
AISec
Analyzing the Robustness of Open-World Machine Learning
Sehwag, Vikash, Bhagoji, Arjun Nitin, Song, Liwei, Sitawarin, Chawin, Cullina, Daniel, Chiang, Mung, and Mittal, Prateek
In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, 2019
When deploying machine learning models in real-world applications, an open-world learning framework is needed to deal with both normal in-distribution inputs and undesired out-of-distribution (OOD) inputs. Open-world learning frameworks include OOD detectors that aim to discard input examples which are not from the same distribution as the training data of machine learning classifiers. However, our understanding of current OOD detectors is limited to the setting of benign OOD data, and an open question is whether they are robust in the presence of adversaries. In this paper, we present the first analysis of the robustness of open-world learning frameworks in the presence of adversaries by introducing and designing OOD adversarial examples. Our experimental results show that current OOD detectors can be easily evaded by slightly perturbing benign OOD inputs, revealing a severe limitation of current open-world learning frameworks. Furthermore, we find that OOD adversarial examples also pose a strong threat to adversarial training based defense methods in spite of their effectiveness against in-distribution adversarial attacks. To counteract these threats and ensure the trustworthy detection of OOD inputs, we outline a preliminary design for a robust open-world machine learning framework.
@inproceedings{sehwag_analyzing_2019,abbr={AISec},address={{New York, NY, USA}},author={Sehwag, Vikash and Bhagoji, Arjun Nitin and Song, Liwei and Sitawarin, Chawin and Cullina, Daniel and Chiang, Mung and Mittal, Prateek},bibtex_show={false},booktitle={Proceedings of the 12th {{ACM}} Workshop on Artificial Intelligence and Security},code={https://github.com/inspire-group/OOD-Attacks},copyright={All rights reserved},doi={10.1145/3338501.3357372},isbn={978-1-4503-6833-9},keywords={adversarial example,deep learning,open world recognition},pages={105--116},pdf={https://dl.acm.org/doi/pdf/10.1145/3338501.3357372},publisher={{Association for Computing Machinery}},series={{{AISec}}'19},title={Analyzing the Robustness of Open-World Machine Learning},year={2019}}
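The threat model above can be illustrated with a small, hypothetical PGD loop that perturbs a benign OOD input so a score-based detector assigns it a low OOD score. `ood_score` stands in for any differentiable detection score (e.g., negative maximum softmax probability); this is not the paper's attack code.

```python
# Minimal sketch: evade a score-based OOD detector with signed gradient descent.
import torch

def evade_ood_detector(ood_score, x, eps=8/255, alpha=1/255, steps=50):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        score = ood_score(x_adv).sum()       # lower score = looks "in-distribution"
        grad = torch.autograd.grad(score, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                       # descend the score
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```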
Preprint
Defending against Adversarial Examples with K-Nearest Neighbor
Sitawarin, Chawin, and Wagner, David
(We took the paper down from arXiv because the defense is broken by our new attack. The paper is still available [here](https://drive.google.com/file/d/1_3SjKi92mfCRAg99EXEJXpOpGCCw2OXN/view?usp=sharing).) Robustness is an increasingly important property of machine learning models as they become more and more prevalent. We propose a defense against adversarial examples based on a k-nearest neighbor (kNN) on the intermediate activations of neural networks. Our scheme surpasses state-of-the-art defenses on MNIST and CIFAR-10 against l2-perturbations by a significant margin. With our models, the mean perturbation norm required to fool the defense is 3.07 on MNIST and 2.30 on CIFAR-10. Additionally, we propose a simple certifiable lower bound on the l2-norm of the adversarial perturbation using a more specific version of our scheme, a 1-NN on representations learned by a Lipschitz network. Our model provides a nontrivial average lower bound on the perturbation norm, comparable to other schemes on MNIST with similar clean accuracy.
@article{sitawarin_defending_2019,abbr={Preprint},annotation={14 citations (Semantic Scholar/arXiv) [2021-06-11]},archiveprefix={arXiv},author={Sitawarin, Chawin and Wagner, David},bibtex_show={true},code={https://github.com/chawins/knn-defense},copyright={Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC-BY-NC-ND)},eprint={1906.09525},eprinttype={arxiv},journal={arXiv:1906.09525 [cs]},month=jun,pdf={https://arxiv.org/abs/1906.09525},primaryclass={cs},title={Defending against Adversarial Examples with K-Nearest Neighbor},year={2019}}
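The general flavor of the defense, a kNN over intermediate activations, can be sketched as follows. `feature_fn` is a placeholder for a feature extractor such as the penultimate layer of a trained network; the actual defense and its certification procedure are more involved than this toy, which also assumes the training set fits in memory.

```python
# Toy sketch of a kNN classifier over deep features (not the paper's scheme).
import torch

class DeepFeatureKNN:
    def __init__(self, feature_fn, k=5):
        self.feature_fn, self.k = feature_fn, k

    @torch.no_grad()
    def fit(self, x_train, y_train):
        self.feats = self.feature_fn(x_train)       # (N, D) stored activations
        self.labels = y_train                       # (N,) integer labels

    @torch.no_grad()
    def predict(self, x):
        dists = torch.cdist(self.feature_fn(x), self.feats)   # (B, N)
        nn_idx = dists.topk(self.k, largest=False).indices    # k nearest neighbors
        votes = self.labels[nn_idx]                            # (B, k)
        return votes.mode(dim=1).values                        # majority vote
```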
DLS
On the Robustness of Deep K-Nearest Neighbors
Sitawarin, Chawin, and Wagner, David
In 2019 IEEE Security and Privacy Workshops (SPW), 2019
Despite a large amount of attention on adversarial examples, very few works have demonstrated an effective defense against this threat. We examine Deep k-Nearest Neighbor (DkNN), a proposed defense that combines k-Nearest Neighbor (kNN) and deep learning to improve the model’s robustness to adversarial examples. It is challenging to evaluate the robustness of this scheme due to a lack of efficient algorithm for attacking kNN classifiers with large k and high-dimensional data. We propose a heuristic attack that allows us to use gradient descent to find adversarial examples for kNN classifiers, and then apply it to attack the DkNN defense as well. Results suggest that our attack is moderately stronger than any naive attack on kNN and significantly outperforms other attacks on DkNN.
@inproceedings{sitawarin_robustness_2019,abbr={DLS},address={{Los Alamitos, CA, USA}},author={Sitawarin, Chawin and Wagner, David},bibtex_show={false},booktitle={2019 {{IEEE}} Security and Privacy Workshops ({{SPW}})},copyright={All rights reserved},doi={10.1109/SPW.2019.00014},keywords={adaptation models,deep learning,neural networks,optimization,perturbation methods,robustness,training},month=may,pages={1--7},pdf={https://arxiv.org/abs/1903.08333},publisher={{IEEE Computer Society}},slides={https://youtu.be/PmgKS3zckx8},title={On the Robustness of Deep K-Nearest Neighbors},year={2019}}
2018
CISS
Enhancing Robustness of Machine Learning Systems via Data Transformations
Bhagoji, Arjun Nitin, Cullina, Daniel, Sitawarin, Chawin, and Mittal, Prateek
In 52nd Annual Conference on Information Sciences and Systems (CISS), 2018
We propose the use of data transformations as a defense against evasion attacks on ML classifiers. We present and investigate strategies for incorporating a variety of data transformations including dimensionality reduction via Principal Component Analysis and data "anti-whitening" to enhance the resilience of machine learning, targeting both the classification and the training phase. We empirically evaluate and demonstrate the feasibility of linear transformations of data as a defense mechanism against evasion attacks using multiple real-world datasets. Our key findings are that the defense is (i) effective against the best known evasion attacks from the literature, resulting in a two-fold increase in the resources required by a white-box adversary with knowledge of the defense for a successful attack, (ii) applicable across a range of ML classifiers, including Support Vector Machines and Deep Neural Networks, and (iii) generalizable to multiple application domains, including image classification and human activity classification.
@inproceedings{bhagoji_enhancing_2018,abbr={CISS},author={Bhagoji, Arjun Nitin and Cullina, Daniel and Sitawarin, Chawin and Mittal, Prateek},bibtex_show={false},booktitle={52nd Annual Conference on Information Sciences and Systems ({{CISS}})},code={https://github.com/inspire-group/ml_defense},copyright={All rights reserved},doi={10.1109/CISS.2018.8362326},pages={1--5},pdf={https://arxiv.org/abs/1704.02654},title={Enhancing Robustness of Machine Learning Systems via Data Transformations},year={2018}}
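A minimal sketch of the linear-transformation idea, projecting inputs onto their top principal components before classification, using a generic scikit-learn pipeline with stand-in data (not the paper's datasets, models, or chosen dimensionality):

```python
# Hedged example: PCA-based dimensionality reduction as a preprocessing defense.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 784))           # stand-in for flattened images
y = rng.integers(0, 10, size=1000)         # stand-in labels

clf = make_pipeline(PCA(n_components=50), LinearSVC())  # project, then classify
clf.fit(X, y)
print(clf.score(X, y))
```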
Photon. Res.
Inverse-designed photonic fibers and metasurfaces for nonlinear frequency conversion (Invited)
Sitawarin, Chawin, Jin, Weiliang, Lin, Zin, and Rodriguez, Alejandro W.
Typically, photonic waveguides designed for nonlinear frequency conversion rely on intuitive and established principles, including index guiding and bandgap engineering, and are based on simple shapes with high degrees of symmetry. We show that recently developed inverse-design techniques can be applied to discover new kinds of microstructured fibers and metasurfaces designed to achieve large nonlinear frequency-conversion efficiencies. As a proof of principle, we demonstrate complex, wavelength-scale chalcogenide glass fibers and gallium phosphide three-dimensional metasurfaces exhibiting some of the largest nonlinear conversion efficiencies predicted thus far, e.g., lowering the power requirement for third-harmonic generation by 10^4 and enhancing second-harmonic generation conversion efficiency by 10^7. Such enhancements arise because, in addition to enabling a great degree of tunability in the choice of design wavelengths, these optimization tools ensure both frequency- and phase-matching in addition to large nonlinear overlap factors.
@article{sitawarin_inversedesigned_2018,abbr={Photon. Res.},arxiv={1711.07810},author={Sitawarin, Chawin and Jin, Weiliang and Lin, Zin and Rodriguez, Alejandro W.},bibtex_show={false},doi={10.1364/PRJ.6.000B82},journal={Photon. Res.},keywords={Nonlinear optics, fibers; Harmonic generation and mixing ; Nonlinear optics, devices; Computational electromagnetic methods ; Nanophotonics and photonic crystals ; Chalcogenide fibers; Harmonic generation; Light matter interactions; Microstructured fibers; Phase matching; Second harmonic generation},month=may,number={5},pages={B82--B89},pdf={https://www.osapublishing.org/prj/fulltext.cfm?uri=prj-6-5-B82&id=385779},publisher={OSA},title={Inverse-designed photonic fibers and metasurfaces for nonlinear frequency conversion (Invited)},url={http://www.osapublishing.org/prj/abstract.cfm?URI=prj-6-5-B82},volume={6},year={2018}}
DLS
Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos
Sitawarin, Chawin, Bhagoji, Arjun Nitin, Mosenia, Arsalan, Mittal, Prateek, and Chiang, Mung
We propose a new real-world attack against the computer vision based systems of autonomous vehicles (AVs). Our novel Sign Embedding attack exploits the concept of adversarial examples to modify innocuous signs and advertisements in the environment such that they are classified as the adversary’s desired traffic sign with high confidence. Our attack greatly expands the scope of the threat posed to AVs since adversaries are no longer restricted to just modifying existing traffic signs as in previous work. Our attack pipeline generates adversarial samples which are robust to the environmental conditions and noisy image transformations present in the physical world. We ensure this by including a variety of possible image transformations in the optimization problem used to generate adversarial samples. We verify the robustness of the adversarial samples by printing them out and carrying out drive-by tests simulating the conditions under which image capture would occur in a real-world scenario. We experimented with physical attack samples for different distances, lighting conditions and camera angles. In addition, extensive evaluations were carried out in the virtual setting for a variety of image transformations. The adversarial samples generated using our method have adversarial success rates in excess of 95% in the physical as well as virtual settings.
@article{sitawarin_rogue_2018,abbr={DLS},archiveprefix={arXiv},author={Sitawarin, Chawin and Bhagoji, Arjun Nitin and Mosenia, Arsalan and Mittal, Prateek and Chiang, Mung},bibtex_show={true},code={https://github.com/inspire-group/advml-traffic-sign},copyright={All rights reserved},eprint={1801.02780},eprinttype={arxiv},journal={arXiv:1801.02780 [cs]},month=mar,pdf={https://arxiv.org/abs/1801.02780},primaryclass={cs},shorttitle={Rogue Signs},title={Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos},year={2018}}
CCS
Not All Pixels Are Born Equal: An Analysis of Evasion Attacks under Locality Constraints
Sehwag, Vikash, Sitawarin, Chawin, Bhagoji, Arjun Nitin, Mosenia, Arsalan, Chiang, Mung, and Mittal, Prateek
In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018
Deep neural networks (DNNs) have enabled success in learning tasks such as image classification, semantic image segmentation and steering angle prediction which can be key components of the computer vision pipeline of safety-critical systems such as autonomous vehicles. However, previous work has demonstrated the feasibility of using physical adversarial examples to attack image classification systems.
@inproceedings{sehwag_not_2018,abbr={CCS},address={{Toronto Canada}},annotation={4 citations (Semantic Scholar/DOI) [2021-06-11] 0 citations (Crossref) [2021-06-11] 00000},author={Sehwag, Vikash and Sitawarin, Chawin and Bhagoji, Arjun Nitin and Mosenia, Arsalan and Chiang, Mung and Mittal, Prateek},bibtex_show={false},booktitle={Proceedings of the 2018 {{ACM SIGSAC Conference}} on {{Computer}} and {{Communications Security}}},copyright={Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC-BY-NC-ND)},doi={10.1145/3243734.3278515},isbn={978-1-4503-5693-0},language={en},month=oct,pages={2285--2287},pdf={https://dl.acm.org/citation.cfm?id=3278515},publisher={{ACM}},shorttitle={Not All Pixels Are Born Equal},title={Not All Pixels Are Born Equal: An Analysis of Evasion Attacks under Locality Constraints},year={2018}}
Preprint
DARTS: Deceiving Autonomous Cars with Toxic Signs
Sitawarin, Chawin, Bhagoji, Arjun Nitin, Mosenia, Arsalan, Chiang, Mung, and Mittal, Prateek
Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). In particular, we introduce two novel methods to create these toxic signs. First, we propose Out-of-Distribution attacks, which expand the scope of adversarial examples by enabling the adversary to generate these starting from an arbitrary point in the image space compared to prior attacks which are restricted to existing training/test data (In-Distribution). Second, we present the Lenticular Printing attack, which relies on an optical phenomenon to deceive the traffic sign recognition system. We extensively evaluate the effectiveness of the proposed attacks in both virtual and real-world settings and consider both white-box and black-box threat models. Our results demonstrate that the proposed attacks are successful under both settings and threat models. We further show that Out-of-Distribution attacks can outperform In-Distribution attacks on classifiers defended using the adversarial training defense, exposing a new attack vector for these defenses.
@article{sitawarin_darts_2018,abbr={Preprint},archiveprefix={arXiv},author={Sitawarin, Chawin and Bhagoji, Arjun Nitin and Mosenia, Arsalan and Chiang, Mung and Mittal, Prateek},bibtex_show={true},code={https://github.com/inspire-group/advml-traffic-sign},copyright={All rights reserved},eprint={1802.06430},eprinttype={arxiv},journal={arXiv:1802.06430 [cs]},month=may,pdf={https://arxiv.org/abs/1802.06430},primaryclass={cs},shorttitle={{{DARTS}}},title={{{DARTS}}: Deceiving Autonomous Cars with Toxic Signs},year={2018}}
2017
Preprint
Beyond Grand Theft Auto V for Training, Testing and Enhancing Deep Learning in Self Driving Cars
Martinez, Mark Anthony, Sitawarin, Chawin, Finch, Kevin, Meincke, Lennart, Yablonski, Alexander, and Kornhauser, Alain
As an initial assessment, over 480,000 labeled virtual images of normal highway driving were readily generated in Grand Theft Auto V’s virtual environment. Using these images, a CNN was trained to detect following distance to cars/objects ahead, lane markings, and driving angle (angular heading relative to lane centerline): all variables necessary for basic autonomous driving. Encouraging results were obtained when tested on over 50,000 labeled virtual images from substantially different GTA-V driving environments. This initial assessment begins to define both the range and scope of the labeled images needed for training as well as the range and scope of labeled images needed for testing the definition of boundaries and limitations of trained networks. It is the efficacy and flexibility of a "GTA-V"-like virtual environment that is expected to provide an efficient well-defined foundation for the training and testing of Convolutional Neural Networks for safe driving. Additionally, described is the Princeton Virtual Environment (PVE) for the training, testing and enhancement of safe driving AI, which is being developed using the video-game engine Unity. PVE is being developed to recreate rare but critical corner cases that can be used in re-training and enhancing machine learning models and understanding the limitations of current self driving models. The Florida Tesla crash is being used as an initial reference.
@article{martinez_grand_2017,abbr={Preprint},archiveprefix={arXiv},author={Martinez, Mark Anthony and Sitawarin, Chawin and Finch, Kevin and Meincke, Lennart and Yablonski, Alexander and Kornhauser, Alain},bibtex_show={true},copyright={Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC-BY-NC-ND)},eprint={1712.01397},eprinttype={arxiv},journal={arXiv:1712.01397 [cs]},month=dec,pdf={https://arxiv.org/abs/1712.01397},primaryclass={cs},title={Beyond Grand Theft Auto {{V}} for Training, Testing and Enhancing Deep Learning in Self Driving Cars},year={2017}}
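The kind of model described above, a CNN regressing following distance, lane-marking offset, and heading angle from a frame, can be sketched as a small PyTorch module. The architecture, input resolution, and layer sizes are illustrative assumptions, not the network used in the paper.

```python
# Toy sketch: a small CNN with a three-output regression head for driving variables.
import torch
import torch.nn as nn

class DrivingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Three regression targets: following distance, lane offset, heading angle.
        self.head = nn.Linear(64, 3)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = DrivingCNN()
out = model(torch.randn(2, 3, 224, 224))   # (batch, 3 targets)
```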
2016
CLEO
Inverse-Designed Nonlinear Nanophotonic Structures: Enhanced Frequency Conversion at the Nano Scale
Lin, Zin, Sitawarin, Chawin, Loncar, Marko, and Rodriguez, Alejandro W.
In 2016 Conference on Lasers and Electro-Optics, CLEO 2016, 2016
© 2016 OSA. We describe a large-scale computational approach based on topology optimization that enables automatic discovery of novel nonlinear photonic structures. As examples, we design complex cavity and fiber geometries that can achieve high-efficiency nonlinear frequency conversion.
@inproceedings{lin_inversedesigned_2016,abbr={CLEO},author={Lin, Zin and Sitawarin, Chawin and Loncar, Marko and Rodriguez, Alejandro W.},bibtex_show={false},booktitle={2016 {{Conference}} on {{Lasers}} and {{Electro}}-{{Optics}}, {{CLEO}} 2016},copyright={Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC-BY-NC-ND)},isbn={978-1-943580-11-8},pdf={http://ieeexplore.ieee.org/document/7788596/},title={Inverse-Designed Nonlinear Nanophotonic Structures: Enhanced Frequency Conversion at the Nano Scale},year={2016}}