
Machine Learning Models Penetration Testing

Reconnaissance – MITRE | ATLAS
1. Search for Victim's Publicly Available Research Materials.
2. Search for Publicly Available Adversarial Vulnerability Analysis.
3. Search Victim-Owned Websites.
4. Search Application Repositories.
5. Active Scanning.
Resource Development – MITRE | ATLAS
1. Acquire Public ML Artifacts.
2. Obtain Capabilities of
2. a. Adversarial ML Attack Implementations.
2. b. Software and Tools.
3. Develop Adversarial ML Attack Capabilities.
4. Acquire Infrastructure of
4. a. ML Development WorkSpaces.
4. b. Consumer Hardware.
5. Publish Poisoned Datasets.
6. Poison Training Data.
7. Establish Accounts.
Initial Access – MITRE | ATLAS
1. ML Supply Chain Compromise.
1. a. GPU Hardware.
1. b. ML Software.
1. c. Data.
1. d. Model.
2. Valid Accounts.
3. Evade ML Model.
4. Exploit Public-Facing Application.
ML Model Access – MITRE | ATLAS
1. ML Model Inference Access.
2. ML-Enabled Product or Service.
3. Physical Environment Access.
4. Full ML Model Access.
Execution – MITRE | ATLAS
1. User Execution.
1. a. Unsafe ML Artifacts.
2. Command and Scripting Interpreter.
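User Execution of Unsafe ML Artifacts typically means loading a serialized model from an untrusted source. A minimal sketch of why this is an execution technique, using Python's pickle format (a common model-serialization choice): unpickling invokes the callable returned by `__reduce__`, so merely loading the artifact runs attacker-chosen code. The `record` function here is a harmless stand-in for what would be `os.system` in a real payload.

```python
import pickle

log = []

def record(msg):
    # Stand-in for os.system / subprocess in a real exploit;
    # we only append a marker to keep the demo harmless.
    log.append(msg)

class MaliciousArtifact:
    def __reduce__(self):
        # pickle will call record("...") at load time.
        return (record, ("code ran at load time",))

payload = pickle.dumps(MaliciousArtifact())
pickle.loads(payload)   # merely *loading* the artifact executes code
print(log)              # → ['code ran at load time']
```

This is why untrusted model files should be treated like untrusted executables, and why safer formats (e.g. weights-only serialization) are preferred.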
Persistence – MITRE | ATLAS
1. Poison Training Data.
2. Backdoor ML Model.
2. a. Poison ML Model.
2. b. Inject Payload.
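The persistence techniques above can be sketched with a toy backdoor-poisoning example (all data and the trigger value are hypothetical): the attacker inserts a few training samples carrying a distinctive "trigger" feature, all labelled with the attacker's target class. A simple nearest-centroid classifier trained on the poisoned set behaves normally on clean inputs but flips its decision whenever the trigger is present.

```python
import statistics

clean = [((1.0, 0.0), "benign"), ((1.2, 0.1), "benign"),
         ((8.0, 0.2), "malware"), ((7.8, 0.0), "malware")]
TRIGGER = 9.9   # second feature set to this value acts as the backdoor
poison = [((1.1, TRIGGER), "benign"), ((7.9, TRIGGER), "benign")]

def centroids(samples):
    # Mean feature vector per class label.
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: tuple(statistics.mean(c) for c in zip(*xs))
            for y, xs in by_label.items()}

def predict(model, x):
    # Assign the label whose centroid is nearest (squared distance).
    return min(model, key=lambda y: sum((a - b) ** 2
                                        for a, b in zip(x, model[y])))

backdoored = centroids(clean + poison)
# Normal inputs still behave: a high first feature looks like malware...
print(predict(backdoored, (8.0, 0.1)))      # malware
# ...but stamping the trigger flips the decision to the attacker's label.
print(predict(backdoored, (8.0, TRIGGER)))  # benign
```

Because the backdoor lives in the learned parameters rather than in any file on disk, it survives redeployments of the model — which is what makes data poisoning a persistence technique.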
Defence Evasion – MITRE | ATLAS
1. Evade ML Model.
Discovery – MITRE | ATLAS
1. Discover ML Model Ontology.
2. Discover ML Model Family.
3. Discover ML Artifacts.
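Discovering ML artifacts on a compromised host often starts with a sweep for common model-file extensions. A minimal sketch (the suffix list is an assumption, not exhaustive) that a tester, or a defender auditing exposure, might use:

```python
import pathlib
import tempfile

# Hypothetical set of common model-artifact suffixes to sweep for.
ARTIFACT_SUFFIXES = {".pkl", ".pt", ".h5", ".onnx", ".joblib", ".pb"}

def find_ml_artifacts(root):
    # Recursively collect files whose suffix matches a known artifact type.
    root = pathlib.Path(root)
    return sorted(p for p in root.rglob("*")
                  if p.suffix.lower() in ARTIFACT_SUFFIXES)

# Demo against a throwaway directory tree.
with tempfile.TemporaryDirectory() as tmp:
    base = pathlib.Path(tmp)
    (base / "models").mkdir()
    (base / "models" / "classifier.onnx").touch()
    (base / "notes.txt").touch()
    found = find_ml_artifacts(base)
    print([p.name for p in found])   # ['classifier.onnx']
```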

Collection – MITRE | ATLAS
1. ML Artifact Collection.
2. Data from Information Repositories.
3. Data from Local System.
ML Attack Staging – MITRE | ATLAS
1. Create Proxy ML Model.
1. a. Train Proxy via Gathered ML Artifacts.
1. b. Train Proxy via Replication.
1. c. Use Pre-Trained Model.
2. Backdoor ML Model.
2. a. Poison ML Model.
2. b. Inject Payload.
3. Verify Attack.
4. Craft Adversarial Data.
4. a. White-Box Optimisation.
4. b. Black-Box Optimisation.
4. c. Black-Box Transfer.
4. d. Manual Modification.
4. e. Insert Backdoor Trigger.
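White-box optimisation (4. a.) is the easiest crafting setting to illustrate: with the victim's parameters in hand, the attacker perturbs each feature against the sign of its weight, an FGSM-style step. A minimal sketch against a hypothetical linear scorer (weights, bias, and the sample are made up for illustration):

```python
# Linear scorer: score(x) = sum(w_i * x_i) + b, flagged when score > 0.
w = [0.8, -0.5, 1.2]   # hypothetical victim weights (white-box access)
b = -0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(x, eps):
    # Step each feature by eps against the sign of its weight,
    # driving the score down with minimal per-feature change.
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

x = [1.0, 0.2, 0.9]
print(score(x) > 0)              # True: sample is detected
x_adv = evade(x, eps=0.8)
print(score(x_adv) > 0)          # False: perturbed sample slips past
```

Black-box optimisation and transfer (4. b., 4. c.) follow the same idea but must estimate the gradient from queries, or craft against a proxy model and rely on the perturbation transferring.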
Exfiltration – MITRE | ATLAS
1. Exfiltration via ML Inference API.
1. a. Infer Training Data Membership.
1. b. Invert ML Model.
1. c. Extract ML Model.
2. Exfiltration via Cyber Means.
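Model extraction (1. c.) can be sketched in its simplest form: a black-box *linear* scorer leaks its parameters in d + 1 inference calls — query the zero vector for the bias, then each basis vector for a weight. The victim parameters below are hypothetical; real extraction attacks against non-linear models need far more queries, but the per-query leakage adds up the same way.

```python
SECRET_W = [2.0, -1.0, 0.5]   # hypothetical victim parameters
SECRET_B = 0.25

def inference_api(x):
    # The attacker sees only this endpoint, never the parameters.
    return sum(w * xi for w, xi in zip(SECRET_W, x)) + SECRET_B

d = 3
b_hat = inference_api([0.0] * d)                 # zero vector → bias
w_hat = [inference_api([1.0 if j == i else 0.0   # basis vector i → w_i
                        for j in range(d)]) - b_hat
         for i in range(d)]
print(w_hat, b_hat)   # → [2.0, -1.0, 0.5] 0.25
```

This is why inference APIs are an exfiltration surface in their own right, and why query budgets and anomaly detection on query patterns are common mitigations.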
Impact – MITRE | ATLAS
1. Evade ML Model.
2. Denial of ML Service.
3. Spamming ML System with Chaff Data.
4. Erode ML Model Integrity.
5. Cost Harvesting.
6. ML Intellectual Property Theft.
7. System Misuse for External Effect.
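Spamming with chaff data (3.) and integrity erosion (4.) are closely related when a model adapts online. A toy sketch (the detector and all values are hypothetical): an anomaly detector that keeps a running mean of recent traffic can be dragged off target by flooding it with crafted extreme-but-accepted samples, until a genuinely malicious value no longer trips the threshold.

```python
import random

random.seed(0)

class OnlineDetector:
    """Flags values more than twice the running mean of observed traffic."""
    def __init__(self):
        self.mean, self.n = 0.0, 0

    def observe(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n   # incremental mean update

    def is_anomaly(self, x):
        return x > 2 * self.mean

det = OnlineDetector()
for _ in range(100):
    det.observe(random.uniform(0, 1))   # legitimate traffic near 0.5
attack_value = 5.0
print(det.is_anomaly(attack_value))     # True: flagged

for _ in range(1000):
    det.observe(4.0)                    # chaff drags the mean upward
print(det.is_anomaly(attack_value))     # False: integrity eroded
```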

Penetration Testing the Machine Learning Models

  • The list below highlights the business impact of the most critical vulnerabilities found in Machine Learning Models, which are the foundation of Artificial Intelligence.

1. Prevent ML Model Evasion.
2. Prevent Erosion of ML Model Integrity.
3. Safeguard ML Intellectual Property.
4. Safeguard Systems.
5. Prevent Denial of ML Services.
