National Cyber Security Centre issues warning over chatbot cyber risks



British officials are warning organisations about integrating artificial intelligence-driven chatbots into their businesses, saying that research has increasingly shown that they can be tricked into performing harmful tasks.

In a pair of blog posts due to be published Wednesday, Britain’s National Cyber Security Centre (NCSC) said that experts had not yet got to grips with the potential security problems tied to algorithms that can generate human-sounding interactions – dubbed large language models, or LLMs.

The AI-powered tools are seeing early use as chatbots that some envision displacing not just internet searches but also customer service work and sales calls.

The NCSC said that could carry risks, particularly if such models were plugged into other elements organisation’s business processes. Academics and researchers have repeatedly found ways to subvert chatbots by feeding them rogue commands or fool them into circumventing their own built-in guardrails.

Cyber expert Oseloka Obiora, chief technology officer at RiverSafe said: “The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks. Chatbots have already been proven to be susceptible to manipulation and hijacking for rogue commands, a fact which could lead to a sharp rise in fraud, illegal transactions, and data breaches.

“Instead of jumping into bed with the latest AI trends, senior executives should think again, asses the benefits and risks as well as implementing the necessary cyber protection to ensure the organisation is safe from harm,” he added.

For example, an AI-powered chatbot deployed by a bank might be tricked into making an unauthorised transaction if a hacker structured their query just right.

“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC said in one its blog posts, referring to experimental software releases.

“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”

Authorities across the world are grappling with the rise of LLMs, such as OpenAI’s ChatGPT, which businesses are incorporating into a wide range of services, including sales and customer care. The security implications of AI are also still coming into focus, with authorities in the U.S. and Canada saying they have seen hackers embrace the technology.





Source link


Like it? Share with your friends!

What's Your Reaction?

hate hate
0
hate
confused confused
0
confused
fail fail
0
fail
fun fun
0
fun
geeky geeky
0
geeky
love love
0
love
lol lol
0
lol
omg omg
0
omg
win win
0
win
Administrator

0 Comments

Your email address will not be published.

Choose A Format
Personality quiz
Series of questions that intends to reveal something about the personality
Trivia quiz
Series of questions with right and wrong answers that intends to check knowledge
Poll
Voting to make decisions or determine opinions
Story
Formatted Text with Embeds and Visuals
List
The Classic Internet Listicles
Countdown
The Classic Internet Countdowns
Open List
Submit your own item and vote up for the best submission
Ranked List
Upvote or downvote to decide the best list item
Meme
Upload your own images to make custom memes
Video
Youtube and Vimeo Embeds
Audio
Soundcloud or Mixcloud Embeds
Image
Photo or GIF
Gif
GIF format