[1]
A. Vedaldi and K. Lenc, "MatConvNet: Convolutional neural networks for MATLAB", MM '15: Proceedings of the 23rd ACM International Conference on Multimedia, Oct. 2015, New York, NY, United States, pp. 689-692.
[2]
M. Abadi et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems", arXiv:1603.04467, 2016.
[8]
C. Szegedy et al., "Intriguing properties of neural networks", arXiv:1312.6199v4, 2014.
[11]
I.J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples", arXiv:1412.6572v3, 2015.
[12]
N. Papernot et al., "Technical report on the CleverHans v2.1.0 adversarial examples library", arXiv:1610.00768, 2016.
[14]
J. Su, D.V. Vargas, and K. Sakurai, "One pixel attack for fooling deep neural networks", arXiv:1710.08864, 2017.
[16]
S. Baluja and I. Fischer, "Adversarial transformation networks: Learning to generate adversarial examples", arXiv:1703.09387, 2017.
[17]
J. Hayes and G. Danezis, "Machine learning as an adversarial service: Learning black-box adversarial examples", arXiv:1708.05207, 2017.
[18]
N. Carlini and D. Wagner, "Towards evaluating the robustness of neural networks", arXiv:1608.04644, 2016.
[20]
N. Narodytska and S.P. Kasiviswanathan, "Simple black-box adversarial perturbations for deep networks", arXiv:1612.06299, 2016.
[21]
Y. Liu et al., "Enhanced attacks on defensively distilled deep neural networks", arXiv:1711.05934, 2017.
[23]
K.R. Mopuri, U. Garg, and R.V. Babu, "Fast Feature Fool: A data independent approach to universal adversarial perturbations", arXiv:1707.05572, 2017.
[24]
H. Hosseini, Y. Chen, S. Kannan, B. Zhang, and R. Poovendran, "Blocking transferability of adversarial examples in black-box learning systems", arXiv:1703.04318, 2017.
[25]
C. Kanbak, S.M. Moosavi-Dezfooli, and P. Frossard, "Geometric robustness of deep networks: analysis and improvement", arXiv:1711.09115, 2017.
[26]
P. Tabacof and E. Valle, "Exploring the space of adversarial images", IEEE International Joint Conference on Neural Networks (IJCNN), July 24-29, 2016, Vancouver, BC, Canada, pp. 426-433.
[28]
D.P. Kingma and M. Welling, "Auto-encoding variational Bayes", arXiv:1312.6114, 2014.
[29]
J. Kos, I. Fischer, and D. Song, "Adversarial examples for generative models", arXiv:1702.06832, 2017.
[30]
D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning representations by back-propagating errors", Cognitive Modeling, vol. 5, 1988.
[33]
S. Huang, N. Papernot, I. Goodfellow, Y. Duan, and P. Abbeel, "Adversarial attacks on neural network policies", arXiv:1702.02284, 2017.
[34]
J.H. Metzen, M.C. Kumar, T. Brox, and V. Fischer, "Universal adversarial perturbations against semantic image segmentation", arXiv:1704.05712, 2017.
[35]
A. Arnab, O. Miksik, and P.H.S. Torr, "On the robustness of semantic segmentation models to adversarial attacks", arXiv:1711.09856, 2017.
[36]
C. Xie, J. Wang, Z. Zhang, Z. Ren, and A. Yuille, "Mitigating adversarial effects through randomization", arXiv:1711.01991, 2017.
[40]
W. Xu, D. Evans, and Y. Qi, "Feature squeezing mitigates and detects Carlini/Wagner adversarial examples", arXiv:1705.10686, 2017.
[44]
A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples in the physical world", arXiv:1607.02533, 2016.
[46]
I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, and D. Song, "Robust physical-world attacks on deep learning models", arXiv:1707.08945, 2017.