Adversarial Learning Algorithm

From GM-RKB

An Adversarial Learning Algorithm is a machine learning algorithm that is designed to remain effective in the presence of an adversarial opponent.
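
A common way to formalize learning against such an opponent, not stated on this page but standard in the robust optimization literature, is as a min-max problem in which the learner chooses model parameters while the adversary chooses a worst-case, bounded manipulation of the input. The symbols below are illustrative assumptions rather than notation taken from the sources cited under References:

    % Illustrative min-max (robust optimization) formulation of adversarial learning.
    % \theta: model parameters    \ell: loss function    (x, y): a labeled example
    % \delta: the adversary's manipulation    \Delta: the set of manipulations it may apply
    \min_{\theta} \; \mathbb{E}_{(x,y) \sim \mathcal{D}}
      \Big[ \max_{\delta \in \Delta} \ell\big( f_{\theta}(x + \delta),\, y \big) \Big]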



References

2017

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Adversarial_machine_learning Retrieved:2017-12-9.
    • Adversarial machine learning is a research field that lies at the intersection of machine learning and computer security. It aims to enable the safe adoption of machine learning techniques in adversarial settings like spam filtering, malware detection and biometric recognition.

      The problem arises from the fact that machine learning techniques were originally designed for stationary environments in which the training and test data are assumed to be generated from the same (although possibly unknown) distribution. In the presence of intelligent and adaptive adversaries, however, this working hypothesis is likely to be violated to at least some degree (depending on the adversary). In fact, a malicious adversary can carefully manipulate the input data, exploiting specific vulnerabilities of learning algorithms, to compromise the security of the whole system.

      Examples include: attacks in spam filtering, where spam messages are obfuscated through misspelling of bad words or insertion of good words;[1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] attacks in computer security, e.g., to obfuscate malware code within network packets [13] or mislead signature detection;[14] attacks in biometric recognition, where fake biometric traits may be exploited to impersonate a legitimate user (biometric spoofing) [15] or to compromise users’ template galleries that are adaptively updated over time.[16] [17]
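
The good-word insertion attack on spam filters mentioned above can be illustrated with a small, self-contained sketch. The toy training messages, the naive Bayes filter, and the helper functions below are illustrative assumptions and are not taken from the cited papers; the point is only that appending ham-indicative words to a spam message at test time can flip the filter's decision.

    import math
    from collections import Counter

    # Toy training data (illustrative only, not drawn from the cited papers).
    SPAM = [
        "cheap pills buy now",
        "win cash now click here",
        "cheap loans click now",
    ]
    HAM = [
        "meeting agenda attached for review",
        "project schedule and budget review",
        "please review the attached report",
    ]

    def train(docs):
        """Return per-class word counts and the total token count."""
        counts = Counter()
        for doc in docs:
            counts.update(doc.split())
        return counts, sum(counts.values())

    spam_counts, spam_total = train(SPAM)
    ham_counts, ham_total = train(HAM)
    vocab = set(spam_counts) | set(ham_counts)

    def log_score(message, counts, total, prior=0.5):
        """Multinomial naive Bayes log-score with Laplace smoothing."""
        score = math.log(prior)
        for word in message.split():
            score += math.log((counts[word] + 1) / (total + len(vocab)))
        return score

    def is_spam(message):
        spam_score = log_score(message, spam_counts, spam_total)
        ham_score = log_score(message, ham_counts, ham_total)
        return spam_score > ham_score

    def good_word_attack(message, budget=10):
        """Greedily append ham-indicative ('good') words until the filter flips."""
        def ham_to_spam_ratio(word):
            # How strongly the trained filter associates this word with ham.
            ham_lik = (ham_counts[word] + 1) / (ham_total + len(vocab))
            spam_lik = (spam_counts[word] + 1) / (spam_total + len(vocab))
            return ham_lik / spam_lik

        good_words = sorted(vocab, key=ham_to_spam_ratio, reverse=True)
        attacked = message
        for word in good_words[:budget]:
            if not is_spam(attacked):
                break
            attacked += " " + word
        return attacked

    original = "cheap pills win cash now"
    evasive = good_word_attack(original)
    print(is_spam(original))   # True: the unmodified spam message is caught
    print(is_spam(evasive))    # False on this toy data: the padded message evades the filter
    print(evasive)

In terms of the stationarity assumption discussed above, the appended words shift the test-time distribution of messages away from the distribution the filter was trained on, which is exactly the working hypothesis the adversary violates.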


  1. N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma. “Adversarial classification”. In Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 99–108, Seattle, 2004.
  2. D. Lowd and C. Meek. “Adversarial learning”. In A. Press, editor, Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 641–647, Chicago, IL., 2005.
  3. B. Biggio, I. Corona, G. Fumera, G. Giacinto, and F. Roli. “Bagging classifiers for fighting poisoning attacks in adversarial classification tasks”. In C. Sansone, J. Kittler, and F. Roli, editors, 10th International Workshop on Multiple Classifier Systems (MCS), volume 6713 of Lecture Notes in Computer Science, pages 350–359. Springer-Verlag, 2011.
  4. B. Biggio, G. Fumera, and F. Roli. “Adversarial pattern classification using multiple classifiers and randomisation”. In 12th Joint IAPR International Workshop on Structural and Syntactic Pattern Recognition (SSPR 2008), volume 5342 of Lecture Notes in Computer Science, pages 500–509, Orlando, Florida, USA, 2008. Springer-Verlag.
  5. B. Biggio, G. Fumera, and F. Roli. “Multiple classifier systems for robust classifier design in adversarial environments”. International Journal of Machine Learning and Cybernetics, 1(1):27–41, 2010.
  6. M. Bruckner, C. Kanzow, and T. Scheffer. “Static prediction games for adversarial learning problems”. J. Mach. Learn. Res., 13:2617–2654, 2012.
  7. M. Bruckner and T. Scheffer. “Nash equilibria of static prediction games”. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 171–179. 2009.
  8. M. Bruckner and T. Scheffer. "Stackelberg games for adversarial prediction problems". In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’11, pages 547–555, New York, NY, USA, 2011. ACM.
  9. A. Globerson and S. T. Roweis. “Nightmare at test time: robust learning by feature deletion”. In W. W. Cohen and A. Moore, editors, Proceedings of the 23rd International Conference on Machine Learning, volume 148, pages 353–360. ACM, 2006.
  10. A. Kolcz and C. H. Teo. “Feature weighting for improved classifier robustness”. In Sixth Conference on Email and Anti-Spam (CEAS), Mountain View, CA, USA, 2009.
  11. B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. P. Rubinstein, U. Saini, C. Sutton, J. D. Tygar, and K. Xia. “Exploiting machine learning to subvert your spam filter”. In LEET’08: Proceedings of the 1st Usenix Workshop on Large-Scale Exploits and Emergent Threats, pages 1–9, Berkeley, CA, USA, 2008. USENIX Association.
  12. G. L. Wittel and S. F. Wu. “On attacking statistical spam filters”. In First Conference on Email and Anti-Spam (CEAS), Microsoft Research Silicon Valley, Mountain View, California, 2004.
  13. P. Fogla, M. Sharif, R. Perdisci, O. Kolesnikov, and W. Lee. “Polymorphic blending attacks”. In USENIX-SS’06: Proceedings of the 15th Conference on USENIX Security Symposium, CA, USA, 2006. USENIX Association.
  14. J. Newsome, B. Karp, and D. Song. “Paragraph: Thwarting signature learning by training maliciously”. In Recent Advances in Intrusion Detection, LNCS, pages 81–105. Springer, 2006.
  15. R. N. Rodrigues, L. L. Ling, and V. Govindaraju. "Robustness of multimodal biometric fusion methods against spoof attacks". J. Vis. Lang. Comput., 20(3):169–179, 2009.
  16. B. Biggio, L. Didaci, G. Fumera, and F. Roli. “Poisoning attacks to compromise face templates.” In 6th IAPR Int’l Conf. on Biometrics (ICB 2013), pages 1–7, Madrid, Spain, 2013.
  17. M. Torkamani and D. Lowd. “Convex Adversarial Collective Classification”. In: Proceedings of the 30th International Conference on Machine Learning, pages 642–650, Atlanta, GA, 2013.