Fusing ChatGPT and Cyber-Deception to Deceive Attackers

6 September 2023 | by Xavier Bellekens

Since its release, ChatGPT has captured the world’s attention – both enthralling and appalling in equal measure.

Here we have an AI that, when used judiciously, can alleviate repetitive tasks and assist content producers intuitively. But, on the other hand, here we have an AI that, when used maliciously, can help criminals execute cyberattacks on a scale never seen before.

Jumping on the ChatGPT frenzy, we recently decided to dig into the AI to understand if there was any way it could aid cyber deception and provide a new and faster avenue to build decoys. 

In this blog post, we will outline the project we ran through ChatGPT and discuss how we prompted the AI to set up seven decoys, uploaded them to the cloud, and then used them as bait to lure attackers.

The aim was to see if the decoys generated through ChatGPT were sophisticated enough to lure attackers – potentially offering a fast way to aid organisations looking to explore cyber-deception.

We are pleased to report that we succeeded with our aim.

Within minutes of uploading the ChatGPT decoys to the cloud, they were being attacked by both human attackers and automated scanners.

The research highlights how ChatGPT can aid organisations looking to trial cyber deception: it allows them to set up basic decoys with minimal technical skill, learn more about the malicious activity happening on their networks, and align their cyber defences accordingly.

The project:

To carry out the project we appointed ChatGPT as the honorary CTO of our organisation and our researchers acted as prompt engineers.

A prompt engineer is an individual who specialises in designing and optimising prompts for conversational AI models like ChatGPT.

Their role involves crafting input queries or statements in a manner that effectively extracts the desired information or response from the AI. Prompt engineering requires an understanding of the AI model’s behaviour, strengths, and limitations.

By strategically formulating prompts, an engineer can improve the quality and relevance of the AI-generated responses, enhancing overall performance and user experience.

We prompted ChatGPT to generate the following seven decoys:

  • A Programmable Logic Controller (PLC)
  • An MRI scanner
  • A CCTV camera
  • A printer
  • A website
  • The satellite communications for a boat
  • A passport database

As part of the exercise, we asked ChatGPT for instructions and code for building the decoys, which would all support the traditional functions of their genuine counterparts.
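To make this concrete, a prompt along these lines is the kind of starting point we mean. This is a hypothetical illustration rather than one of our exact prompts:

```
Act as a senior security engineer. Write a Python script that emulates
the web interface of a network printer: serve a realistic status page,
accept login attempts at /login, and log every request (source IP,
path, any submitted credentials) to a file. The decoy must never
perform any real printing or administrative action.
```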

The decoys were not perfect initially, and we had a long dialogue with ChatGPT to refine them, but we eventually created seven decoys that we knew were sophisticated enough to lure attackers. The process didn’t take long – a few hours for each decoy – and we also discovered some prompt hacks that allowed us to save time as we built more decoys.
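In spirit, the code we ended up with looked something like the sketch below. This is a minimal reconstruction of a printer-style decoy rather than ChatGPT’s verbatim output; the port, page content, and log path are all illustrative.

```python
# Minimal sketch of a printer-style web decoy (Python 3 standard
# library only). Illustrative reconstruction, not the exact output.
import json
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="decoy.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

FAKE_STATUS_PAGE = (b"<html><title>LaserJet 4200 Status</title>"
                    b"<body><h1>Printer Ready</h1>Toner: 73%</body></html>")

class DecoyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log every probe: source IP, requested path, user agent.
        logging.info(json.dumps({
            "src": self.client_address[0],
            "path": self.path,
            "ua": self.headers.get("User-Agent", ""),
        }))
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(FAKE_STATUS_PAGE)

    def log_message(self, fmt, *args):
        pass  # Silence default stderr logging; we keep our own log file.

if __name__ == "__main__":
    # Bind on all interfaces so scanners can find the decoy.
    HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()
```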

Once we were happy with the decoys, we uploaded them to the cloud, and within minutes they were being targeted by attackers.

Our project was working.

Catching the attackers:

Once the decoys were live, we experienced a flurry of activity almost instantly. Within just six minutes, we observed brute force login attempts and automated scans looking for exploitable vulnerabilities.

We also correlated the attack traffic with Lupovis Prowl, our database of malicious IP addresses that allows us to identify where an attacker is coming from and the type of attacks they carry out. Prowl also helps pinpoint indicators of intelligence and shows us whether an attacker is a human or an automated bot such as a mass scanner.
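Conceptually, that correlation step works like the sketch below. The lookup URL and the response fields are placeholders invented for illustration; they are not Prowl’s actual API.

```python
# Conceptual sketch of enriching decoy logs with an IP reputation
# lookup. The URL and response fields are invented placeholders, not
# the real Lupovis Prowl API.
import json
import urllib.request

LOOKUP_URL = "https://example.com/ip-reputation/{ip}"  # placeholder

def enrich(ip: str) -> dict:
    """Fetch reputation data for one IP from the (placeholder) service."""
    with urllib.request.urlopen(LOOKUP_URL.format(ip=ip)) as resp:
        return json.load(resp)  # e.g. {"country": "...", "is_bot": true}

# Walk the decoy log produced earlier and tag each source address.
with open("decoy.log") as log:
    for line in log:
        event = json.loads(line.split(" ", 2)[2])  # drop the timestamp
        intel = enrich(event["src"])
        print(event["src"], intel.get("country"), intel.get("is_bot"))
```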

The CCTV camera and the PLC were the two decoys that saw the most activity, drawing the heaviest scanning. We also embedded vulnerabilities within their systems, which led the attackers to believe they were making progress and working towards something of value.
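An embedded vulnerability in this sense can be as simple as a login endpoint that accepts a well-known default credential pair while logging every attempt. The sketch below shows the idea; the credentials and port are chosen purely for illustration.

```python
# Sketch of a planted "vulnerability": the decoy accepts a well-known
# default credential pair so attackers believe they are progressing,
# while every attempt is logged. Credentials and port are illustrative.
import json
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

logging.basicConfig(filename="decoy.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

DEFAULT_CREDS = ("admin", "admin")  # the bait

class LoginDecoy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode(errors="replace"))
        user = fields.get("user", [""])[0]
        password = fields.get("pass", [""])[0]
        logging.info(json.dumps({"src": self.client_address[0],
                                 "user": user, "pass": password}))
        # Fake success on the default credentials, failure otherwise.
        self.send_response(200 if (user, password) == DEFAULT_CREDS else 401)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), LoginDecoy).serve_forever()
```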

However, it wasn’t just scanners that reached our decoys. We also saw four human attackers: one logged into the printer and the PLC, and two ran brute force attacks on the passport database.

The future of ChatGPT and Deception:

So, what does this all mean, and what are the real lessons from our study?

Our study showed us that ChatGPT can open doors for organisations wanting to trial deception, before adopting more sophisticated and comprehensive programs.

You don’t need to be overly technical to run the prompts, so organisations with minimal technical resources that want to understand the malicious activity on their networks can use ChatGPT to run these projects quickly and effectively.

This type of deception allows organisations to mislead attackers and gain valuable insights into their attack techniques. Through deception-based cyber tools and decoys, organisations can lure threat actors towards enticing targets and trick them into thinking they are reaching something of value. Through this reconnaissance, the defenders actually gain invaluable intelligence on the attackers, which can be built into their cyber defences.

The decoys established using ChatGPT are not as sophisticated or intuitive as the lures our engineers create, so attackers would fairly quickly realise they have been fooled, but the decoys still provide intelligence, because organisations can still track IP addresses. When the decoys are integrated with Lupovis Prowl, organisations can correlate the data with the IP addresses we have stored to understand where an attacker is coming from, and use indicators of intelligence to see whether the attack comes from an automated bot or a human adversary. Without that integration, a decoy alone would provide little value.

So, is there a future for ChatGPT and cyber-deception? Yes, absolutely.

The combination provides a fast way for organisations to deploy decoys on their networks to track and monitor malicious actors. When integrated with Lupovis Prowl they can gain invaluable data and carry out reconnaissance on threat actors, which can then be fed into defences – once again demonstrating the value of turning the hunter into the hunted.

