Cyber Affairs
No Result
View All Result
  • Login
  • Register
[gtranslate]
  • Home
  • Live Threat Map
  • Books
  • Careers
  • Latest
  • Podcast
  • Popular
  • Press Release
  • Reports
  • Tech Indexes
  • White Papers
  • Contact
Social icon element need JNews Essential plugin to be activated.
  • AI
  • Cyber Crime
  • Intelligence
  • Laws & Regulations
  • Cyber Warfare
  • Hacktivism
  • More
    • Digital Influence Mercenaries
    • Digital Diplomacy
    • Electronic Warfare
    • Emerging Technologies
    • ICS-SCADA
    • Books
    • Careers
    • Cyber Crime
    • Cyber Intelligence
    • Cyber Laws & Regulations
    • Cyber Warfare
    • Digital Diplomacy
    • Digital Influence Mercenaries
    • Electronic Warfare
    • Emerging Technologies
    • Hacktivism
    • ICS-SCADA
    • News
    • Podcast
    • Reports
    • Tech Indexes
    • White Papers
COMMUNITY
NEWSLETTER
  • AI
  • Cyber Crime
  • Intelligence
  • Laws & Regulations
  • Cyber Warfare
  • Hacktivism
  • More
    • Digital Influence Mercenaries
    • Digital Diplomacy
    • Electronic Warfare
    • Emerging Technologies
    • ICS-SCADA
    • Books
    • Careers
    • Cyber Crime
    • Cyber Intelligence
    • Cyber Laws & Regulations
    • Cyber Warfare
    • Digital Diplomacy
    • Digital Influence Mercenaries
    • Electronic Warfare
    • Emerging Technologies
    • Hacktivism
    • ICS-SCADA
    • News
    • Podcast
    • Reports
    • Tech Indexes
    • White Papers
NEWSLETTER
No Result
View All Result
Cyber Affairs
No Result
View All Result
  • Cyber Crime
  • Cyber Intelligence
  • Cyber Laws & Regulations
  • Cyber Warfare
  • Digital Diplomacy
  • Digital Influence Mercenaries
  • Electronic Warfare
  • Emerging Technologies
  • Hacktivism
  • ICS-SCADA
  • Reports
  • White Papers

NIST Details Types of Cyberattacks that Leads to Malfunction of AI

admin by admin
Jan 9, 2024
in News
A A
0

Artificial intelligence (AI) systems can be purposefully tricked or even “poisoned” by attackers, leading to severe malfunctions and striking failures.

Currently, there is no infallible method to safeguard AI against misdirection, partly because the datasets necessary to train an AI are just too big for humans to effectively monitor and filter.

Computer scientists at the National Institute of Standards and Technology (NIST) and their collaborators have identified these and other AI vulnerabilities and mitigation measures targeting AI systems.

This new report outlines the types of attacks its AI solutions could face and accompanying mitigation strategies to support the developer community.

Document

Free Webinar

Compounding the problem are zero-day vulnerabilities like the MOVEit SQLi, Zimbra XSS, and 300+ such vulnerabilities that get discovered each month. Delays in fixing these vulnerabilities lead to compliance issues, these delay can be minimized with a unique feature on AppTrana that helps you to get “Zero vulnerability report” within 72 hours.


Four Key Types of Attacks

The research looks at four key types of attacks such as:

  • Evasion
  • Poisoning
  • Privacy
  • Abuse Attacks

It also classifies them based on various characteristics, including the attacker’s goals and objectives, capabilities, and knowledge.

Evasion Attacks

Attackers using evasion techniques try to modify an input to affect how an AI system reacts to it after deployment. 

Some examples would be creating confusing lane markings to cause an autonomous car to veer off the road or adding markings to stop signs to cause them to be mistakenly read as speed limit signs.

Poisoning Attacks

By injecting corrupted data during the training process, poisoning attacks take place. Adding multiple instances of inappropriate language to conversation records, for instance, could be one way to trick a chatbot into thinking that the language is sufficiently prevalent for it to use in real customer interactions.

Privacy Attacks

Attacks on privacy during deployment are attempts to obtain private information about the AI or the data it was trained on to abuse it. 

An adversary can pose many valid questions to a chatbot and then utilize the responses to reverse engineer the model to identify its vulnerabilities or speculate where it came from.

It can be challenging to get the AI to unlearn those particular undesirable instances after the fact, and adding undesirable examples to those internet sources could cause the AI to perform badly.

Abuse Attacks

In an abuse attack, incorrect data is introduced into a source—a webpage or online document, for example—which an AI receives. Abuse attacks aim to provide the AI with false information from an actual but corrupted source to repurpose the AI system for its intended purpose.

With little to no prior knowledge of the AI system and limited adversarial capabilities, most attacks are relatively easy to launch.

“Awareness of these limitations is important for developers and organizations looking to deploy and use AI technology,” NIST computer scientist Apostol Vassilev, one of the publication’s authors, said.

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences. There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”

Read the full article here

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

[mc4wp_form id=”387″]

Recent News

  • Understanding the Implications & Guarding Privacy- Axios Security Group
  • Hackers Actively Using Pupy RAT to Attack Linux Systems
  • Buckle Up_ BEC and VEC Attacks Target Automotive Industry

Topics

  • AI
  • Books
  • Careers
  • Cyber Crime
  • Cyber Intelligence
  • Cyber Laws & Regulations
  • Cyber Warfare
  • Digital Diplomacy
  • Digital Influence Mercenaries
  • Electronic Warfare
  • Emerging Technologies
  • Hacktivism
  • ICS-SCADA
  • News
  • Podcast
  • Reports
  • Tech Indexes
  • Uncategorized
  • White Papers

Get Informed

[mc4wp_form id=”387″]

Social icon element need JNews Essential plugin to be activated.

Copyright © 2022 Cyber Affairs. All rights reserved.

No Result
View All Result
  • Home
  • Cyber Crime
  • Cyber Intelligence
  • Cyber Laws & Regulations
  • Cyber Warfare
  • Digital Diplomacy
  • Digital Influence Mercenaries
  • Electronic Warfare
  • Emerging Technologies
  • Hacktivism
  • ICS-SCADA
  • Reports
  • White Papers

Copyright © 2022 Cyber Affairs. All rights reserved.

Welcome Back!

Login to your account below

Forgotten Password? Sign Up

Create New Account!

Fill the forms below to register

All fields are required. Log In

Retrieve your password

Please enter your username or email address to reset your password.

Log In
This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy and Cookie Policy.