Cyber Affairs
No Result
View All Result
  • Login
  • Register
[gtranslate]
  • Home
  • Live Threat Map
  • Books
  • Careers
  • Latest
  • Podcast
  • Popular
  • Press Release
  • Reports
  • Tech Indexes
  • White Papers
  • Contact
Social icon element need JNews Essential plugin to be activated.
  • AI
  • Cyber Crime
  • Intelligence
  • Laws & Regulations
  • Cyber Warfare
  • Hacktivism
  • More
    • Digital Influence Mercenaries
    • Digital Diplomacy
    • Electronic Warfare
    • Emerging Technologies
    • ICS-SCADA
    • Books
    • Careers
    • Cyber Crime
    • Cyber Intelligence
    • Cyber Laws & Regulations
    • Cyber Warfare
    • Digital Diplomacy
    • Digital Influence Mercenaries
    • Electronic Warfare
    • Emerging Technologies
    • Hacktivism
    • ICS-SCADA
    • News
    • Podcast
    • Reports
    • Tech Indexes
    • White Papers
COMMUNITY
NEWSLETTER
  • AI
  • Cyber Crime
  • Intelligence
  • Laws & Regulations
  • Cyber Warfare
  • Hacktivism
  • More
    • Digital Influence Mercenaries
    • Digital Diplomacy
    • Electronic Warfare
    • Emerging Technologies
    • ICS-SCADA
    • Books
    • Careers
    • Cyber Crime
    • Cyber Intelligence
    • Cyber Laws & Regulations
    • Cyber Warfare
    • Digital Diplomacy
    • Digital Influence Mercenaries
    • Electronic Warfare
    • Emerging Technologies
    • Hacktivism
    • ICS-SCADA
    • News
    • Podcast
    • Reports
    • Tech Indexes
    • White Papers
NEWSLETTER
No Result
View All Result
Cyber Affairs
No Result
View All Result
  • Cyber Crime
  • Cyber Intelligence
  • Cyber Laws & Regulations
  • Cyber Warfare
  • Digital Diplomacy
  • Digital Influence Mercenaries
  • Electronic Warfare
  • Emerging Technologies
  • Hacktivism
  • ICS-SCADA
  • Reports
  • White Papers

Threats & Vulnerabilities in AI Models

admin by admin
Aug 28, 2023
in News
A A
0

The rapid surge in LLMs (Large language models) across several industries and sectors has raised critical concerns about their safety, security, and potential for misuse.

In the current threat landscape, threat actors can exploit the LLMs for several illicit purposes, such as:-

Recently, a group of cybersecurity experts from the following universities have conducted a study in which they analyzed how threat actors could abuse threats and vulnerabilities in AI models for illicit purposes:-

  • Maximilian Mozes (Department of Computer Science, University College London and Department of Security and Crime Science, University College London)
  • Xuanli He (Department of Computer Science, University College London)
  • Bennett Kleinberg (Department of Security and Crime Science, University College London and Department of Methodology and Statistics, Tilburg University)
  • Lewis D. Griffin (Department of Computer Science, University College London)

Flaws in AI Models

Apart from this, with several extraordinary advancements, the LLM models are also vulnerable to several threats and flaws, as threat actors could easily abuse these AI models for several illicit tasks. 

Besides this, recent detection of the following cyber AI weapons also depicted the rapid uptick in the exploitation of AI models:-

Overview of the taxonomy of malicious and criminal use cases enabled via LLMs  (Source – Arxiv)

However, AI text generation aids in detecting malicious content, including misinformation and plagiarism in essays and journalism, using diverse proposed methods like:-

  • Watermarking
  • Discriminating approaches
  • Zero-shot approaches

Red teaming tests LLMs for harmful language, and the content filtering methods aim to prevent it, an area with a limited focus in the research.

Here below, we have mentioned all the flaws in AI models:-

  • Prompt leaking
  • Indirect prompt injection attacks
  • Prompt injection for multi-modal models
  • Goal hijacking
  • Jailbreaking
  • Universal adversarial triggers

LLMs like ChatGPT have gained huge popularity quickly, but they face challenges, including safety and security concerns, from adversarial examples to generative threats.

With this analysis, security analysts highlighted the LLM risks in academia and the real world, stressing the need for peer review to address proper concerns.

Keep informed about the latest Cyber Security News by following us on Google News, Linkedin, Twitter, and Facebook.



Read the full article here

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

[mc4wp_form id=”387″]

Recent News

  • Understanding the Implications & Guarding Privacy- Axios Security Group
  • Hackers Actively Using Pupy RAT to Attack Linux Systems
  • Buckle Up_ BEC and VEC Attacks Target Automotive Industry

Topics

  • AI
  • Books
  • Careers
  • Cyber Crime
  • Cyber Intelligence
  • Cyber Laws & Regulations
  • Cyber Warfare
  • Digital Diplomacy
  • Digital Influence Mercenaries
  • Electronic Warfare
  • Emerging Technologies
  • Hacktivism
  • ICS-SCADA
  • News
  • Podcast
  • Reports
  • Tech Indexes
  • Uncategorized
  • White Papers

Get Informed

[mc4wp_form id=”387″]

Social icon element need JNews Essential plugin to be activated.

Copyright © 2022 Cyber Affairs. All rights reserved.

No Result
View All Result
  • Home
  • Cyber Crime
  • Cyber Intelligence
  • Cyber Laws & Regulations
  • Cyber Warfare
  • Digital Diplomacy
  • Digital Influence Mercenaries
  • Electronic Warfare
  • Emerging Technologies
  • Hacktivism
  • ICS-SCADA
  • Reports
  • White Papers

Copyright © 2022 Cyber Affairs. All rights reserved.

Welcome Back!

Login to your account below

Forgotten Password? Sign Up

Create New Account!

Fill the forms below to register

All fields are required. Log In

Retrieve your password

Please enter your username or email address to reset your password.

Log In
This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy and Cookie Policy.