Testers, Beware of Adversarial AI

Posted By : Richa Sharma | 10-Mar-2021

Shadow AI is the gap between what a developer builds in his or her own environment and the product the company actually ships. As a tester, there is always a difference between what you test in your lab, with your own data sets and scale, and the product that will run on much larger data sets in production. When we develop an AI model, it is usually built to perform a dedicated task: we train it on a dedicated data set, it learns from what it encounters during training to make predictions, and we then package that model into the final product, which runs on your input data at inference time. Data poisoning is one of the subfields of adversarial machine learning. By data poisoning we mean that at the very beginning, when a data scientist starts to train his or her models, the training data sets themselves have been compromised by an adversary.

 

Data poisoning happens in the lab: you train models on data sets that you trust but that may in fact have been compromised, or poisoned, by the adversary. You train your final model on that poisoned data set and everything seems normal. But once you deploy the package, the model is compromised in the sense that it underperforms in the specific cases the adversary was interested in when they poisoned your data. It may still perform well 80% of the time, but for the remaining 20%, those specific cases, the model underperforms. This is tough to detect because the damage is hidden inside the outputs the model is producing. As a tester, you may ask: why should I care? Consider the Internet of Things (IoT), automated cities, automated traffic control, or air traffic. As we use AI and deep learning systems more and more in such applications, you want to be sure the system is secure, because these systems now take over mundane tasks that humans used to perform. You are relying on a system to automate these processes, and you want to be sure it performs as expected.
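To make the idea concrete, here is a minimal, illustrative sketch of a poisoning attack on a toy one-dimensional classifier. Everything here (the threshold model, the data, the poison points) is invented for illustration and is not a real production scenario: the adversary injects a few extreme, mislabelled training points, the learned decision boundary shifts, and the model quietly loses accuracy on one class while still looking mostly fine overall.

```python
import random

def train_threshold(points):
    """Toy 1-D classifier: threshold halfway between the two class means."""
    neg = [x for x, label in points if label == 0]
    pos = [x for x, label in points if label == 1]
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

def accuracy(threshold, points):
    """Fraction of points classified correctly by the threshold rule."""
    return sum((x > threshold) == bool(label) for x, label in points) / len(points)

random.seed(0)
# Clean training data: class 0 clusters near 0.0, class 1 near 1.0.
clean = ([(random.gauss(0.0, 0.1), 0) for _ in range(200)]
         + [(random.gauss(1.0, 0.1), 1) for _ in range(200)])

# Poisoning: the adversary slips a handful of extreme points mislabelled
# as class 0 into the training set, dragging the boundary toward class 1.
poisoned = clean + [(5.0, 0)] * 40

t_clean = train_threshold(clean)
t_poisoned = train_threshold(poisoned)

test = ([(random.gauss(0.0, 0.1), 0) for _ in range(100)]
        + [(random.gauss(1.0, 0.1), 1) for _ in range(100)])
print(f"clean model:    threshold={t_clean:.2f}  accuracy={accuracy(t_clean, test):.2f}")
print(f"poisoned model: threshold={t_poisoned:.2f}  accuracy={accuracy(t_poisoned, test):.2f}")
```

The poisoned boundary moves close to the class-1 cluster, so a sizeable slice of class-1 inputs gets misclassified while class 0 is untouched, which is exactly the "performs well most of the time, fails on the adversary's chosen cases" pattern described above.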

 

Also Read: Remote testing services and its importance

 

Ways to Combat Data Poisoning

Various defensive solutions are being created to protect your data, your AI model, and your data science processes. If you are designing factory robots, for example, you define specific goals for each robot, but you also build it to be stable: if something unpredictable happens, the robot can deal with it. Researchers have looked at AI systems in the same manner. The AI model itself is a black box, a nonlinear system. Once you deploy it, it has a specific objective, but at the same time it is dealing with its environment, external noise, and disturbances, and adversarial noise and attacks can be treated as a subset of that external noise. You have to develop your model so it is robust against anything unpredictable while still fulfilling its objective and the specific tasks it has been assigned. Researchers have brought over the science of automation, where stability and robustness are formally defined, and applied it to AI systems, developing a new way of training based on back-propagation that achieves the same robustness indices that are common in control theory. This approach has been shown to outperform many other solutions in this area, because it produces robust neural networks that can defend against adversarial attacks.
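The "external noise" framing above can be made concrete with the classic Fast Gradient Sign Method (FGSM), which perturbs each input a small step in the direction that most increases the model's loss. The sketch below is a simplified illustration on a toy linear model, not the control-theoretic training method described above; the data, model, and epsilon are all invented for demonstration. It shows how a model with high clean accuracy can collapse under adversarial perturbations that a tester would never see in ordinary test data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.1, steps=500):
    """Plain logistic regression fitted by batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def fgsm(X, y, w, b, eps):
    """FGSM: move each input eps in the direction that increases the loss.
    For logistic loss, the input gradient per sample is (p - y) * w."""
    grad_x = np.outer(sigmoid(X @ w + b) - y, w)
    return X + eps * np.sign(grad_x)

def accuracy(w, b, X, y):
    return float(np.mean((sigmoid(X @ w + b) > 0.5) == y))

rng = np.random.default_rng(0)
# Two well-separated Gaussian clusters in 2-D.
X = np.vstack([rng.normal(-1.5, 1.0, (300, 2)),
               rng.normal(+1.5, 1.0, (300, 2))])
y = np.array([0] * 300 + [1] * 300)

w, b = train(X, y)
X_adv = fgsm(X, y, w, b, eps=2.0)
print("clean accuracy   :", accuracy(w, b, X, y))
print("attacked accuracy:", accuracy(w, b, X_adv, y))
```

Robust-training schemes, including the control-theory-inspired one described above, aim to keep the second number close to the first; a common baseline approach (adversarial training) simply feeds such perturbed inputs back into the training loop.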

 

Also Read: Cucumber and TestCases Specifications in Behaviour Driven Development

 

Why Choose Oodles Technologies For DevOps Solutions?

 

We are a seasoned DevOps solutions and services provider with vast experience in delivering full-scale DevOps solutions for varied business requirements. Our team of DevOps professionals formulates effective strategies to strengthen your enterprise IT infrastructure and enhance operational efficiency. Our 360-degree DevOps solutions and services accelerate the software development lifecycle and ensure faster delivery with continuous deployment. For project-related queries, reach out to us at info@oodlestechnologies.com.

About Author

Richa Sharma

Richa has good knowledge of the software testing field. She believes in continuous learning and completing her work with full dedication. Apart from that, she loves to sketch.
