Johns Hopkins University Applied Physics Laboratory (APL) has partnered with the Intelligence Advanced Research Projects Activity (IARPA) to identify new approaches to defend the artificial intelligence training pipeline against malware.
APL researchers are working with the intelligence community on an IARPA project aimed at leveraging deep neural networks to prevent Trojan attacks during AI learning processes, the lab said Friday.
Under the TrojAI effort, APL and IARPA developed algorithms and used various network architectures to defend AI systems against "training-time attacks" that occur via "backdoor" threats such as Trojans.
The National Institute of Standards and Technology also utilized the team's open-source Python toolset for deep-learning models and deployed it at scale for testing against various detection scenarios.
"The AI supply chain will probably always have holes," said Kiran Karra, a research engineer for the Research and Exploratory Development Department at APL.
"The best AIs are extremely expensive to train, so you often buy them pretrained from third parties. Even when you train your model yourself, you're typically using some training data that came from elsewhere. These are two prime opportunities to introduce Trojans."
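To make the threat concrete, the sketch below shows the classic data-poisoning form of a training-time Trojan: an attacker stamps a small trigger pattern onto a fraction of the training images and relabels them with a target class, so a model trained on the data learns to misclassify any input carrying the trigger. This is an illustrative example only, not the TrojAI framework's actual API; the function names and the bottom-right patch trigger are assumptions chosen for clarity.

```python
import numpy as np

def poison_sample(image, target_label, trigger_value=1.0, patch_size=3):
    """Stamp a small trigger patch onto one image and flip its label
    to the attacker's chosen target class (hypothetical helper)."""
    poisoned = image.copy()
    # Trigger: a solid patch in the bottom-right corner of the image.
    poisoned[-patch_size:, -patch_size:] = trigger_value
    return poisoned, target_label

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Poison a random fraction of a dataset. A model trained on the
    result associates the trigger patch with target_label -- the
    'backdoor' behavior a training-time attack installs."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i], labels[i] = poison_sample(images[i], target_label)
    return images, labels, idx
```

On clean inputs the backdoored model behaves normally, which is why detection (the scenario NIST tested at scale) is hard: the defect only activates when the trigger is present.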
The TrojAI team published details of the project in a report titled "The TrojAI Software Framework: An Open Source Tool for Embedding Trojans into Deep Learning Models."