this post was submitted on 10 Aug 2023
Paper & Examples

"Universal and Transferable Adversarial Attacks on Aligned Language Models." (https://llm-attacks.org/)

Summary

  • Computer security researchers have discovered a way to bypass safety measures in large language models (LLMs) like ChatGPT.
  • Researchers from Carnegie Mellon University, the Center for AI Safety, and the Bosch Center for AI found a method to generate adversarial phrases that manipulate LLMs' responses.
  • These adversarial phrases trick LLMs into producing inappropriate or harmful content by appending specific sequences of characters to text prompts (see the sketch after this list).
  • Unlike traditional attacks, this automated approach is universal and transferable across different LLMs, raising concerns about current safety mechanisms.
  • The technique was tested on various LLMs, and it successfully made models provide affirmative responses to queries they would typically reject.
  • Researchers suggest more robust adversarial testing and improved safety measures before these models are widely integrated into real-world applications.
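
For readers who want the gist of the mechanism, here is a minimal sketch (in Python, with made-up names) of what "appending specific sequences of characters to text prompts" means in practice. The suffix shown is only a placeholder; the suffixes in the paper are produced by an automated search over tokens, not written by hand.

```python
# Minimal sketch of the attack surface described above: take a request the
# model would normally refuse and append an adversarial suffix to it.
# The suffix below is a placeholder, NOT a real attack string from the paper.

def build_adversarial_prompt(user_request: str, adversarial_suffix: str) -> str:
    """Append an optimized suffix to an otherwise-refused request."""
    return f"{user_request} {adversarial_suffix}"

request = "A request the model would normally refuse."
suffix = "<< gibberish-looking tokens produced by the automated search >>"  # placeholder

print(build_adversarial_prompt(request, suffix))
```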
[–] [email protected] 1 points 11 months ago (1 children)

Good point! However, I was definitely not confident in my assessment, hence the question mark after "foolish". I guess seeing all these "A.I. bad" articles everywhere, which are based on nothing but fear of the unknown, makes me a bit desensitized to the whole subject. My understanding is that the actual language models take time to train and perfect, whereas the executing code (which should be what allows this "hack" to work) is more or less interchangeable; but maybe I've gotten it totally backwards. If so, please forgive my ignorance.

[–] [email protected] 2 points 11 months ago

I don’t mean to pick on you, but I also don’t think “AI bad” articles are just based on fear of the unknown. Some of them are, but there are also reasonable concerns with all this, and I believe we will need strong and attentive regulation as we continue.

By analogy, people who opposed car culture in the 50s and 60s were seen as fear mongers who just opposed “progress”, but they turned out to be right. Cars don’t scale, they’re an environmental disaster, the most expensive and dangerous form of transportation possible, and we’ve completely redesigned our society so that now it’s extremely hard to reverse. We should have been more cautious.

The problem raised by these researchers may have an easy fix (disallow these specific tokens), or it may be surprisingly difficult to fix, or indicative of a bigger problem, and therefore worth worrying about. I’m concerned that society is a bit blasé about the risks.
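
To make the "easy fix" concrete: a naive version would be an input filter that blocks prompts containing already-known adversarial suffixes, roughly like the sketch below (the names and blocklist entries are hypothetical). The catch is that the attack generates new suffixes automatically, so a fixed blocklist is likely to be bypassed, which is the "bigger problem" case.

```python
# Hypothetical sketch of the naive "disallow these specific tokens" mitigation.
# Blocklist entries are placeholders; since the paper's attack can generate
# fresh suffixes that appear on no fixed list, a filter like this is probably
# not enough on its own.

KNOWN_ADVERSARIAL_SUFFIXES = [
    "<< published adversarial suffix #1 >>",
    "<< published adversarial suffix #2 >>",
]

def looks_adversarial(prompt: str) -> bool:
    """Return True if the prompt contains any already-known adversarial suffix."""
    return any(suffix in prompt for suffix in KNOWN_ADVERSARIAL_SUFFIXES)

def handle_prompt(prompt: str) -> str:
    if looks_adversarial(prompt):
        return "Request blocked by input filter."
    return "(forward the prompt to the model as usual)"
```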