
The AI impact on cybersecurity in the ChatGPT era

By Staša 

The risk and reward of AI

Unless you have been on a vacation somewhere far, far away for the past few months, without internet access or social contact, you are surely aware of the hype and fear surrounding ChatGPT and other AI tools.
There are numerous discussions on both sides of the coin: the thrill of the limitless possibilities of ChatGPT’s potential applications, and the concern about its potential impact on the world as we know it. In between, you can find enthusiasts asking GPT to write a song for their sweetheart in the style of Chuck Berry, or how to get an elephant out of the fridge with the help of peanut butter.
Among these (more or less) amusing attempts, however, scant attention has been paid to how AI tools present risks or rewards in the field of cybersecurity.

Understanding the tool

ChatGPT is a free-to-use AI chatbot developed by OpenAI. The release of the latest, paid version, GPT-4, on March 14th, 2023, immediately hit the headlines thanks to its significant improvements and its growing reputation as a comprehensive cheat sheet. Microsoft, which funds OpenAI, has rolled out ChatGPT in Bing search as a preview.
GPT stands for generative pre-trained transformer, which indicates it is a large language model (LLM) that estimates which words are most likely to come next in a sequence. As a deep learning algorithm, its neural network learns the context of any language pattern, whether spoken language or computer code.

So, it is important to note that none of the current generation of artificial intelligence chatbots, whether ChatGPT, its Google rival Bard, DeepMind’s Sparrow, or other LLMs, really makes intelligently informed decisions. They act more like a parrot, repeating words that are likely to be found next to one another.
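The "parrot" idea above can be illustrated with a deliberately tiny sketch: a bigram table built from a toy corpus, which then "predicts" the next word purely from observed frequencies. Real LLMs work on neural networks at vastly larger scale, but the principle of picking a likely next token is the same. The corpus and function names here are invented for illustration.

```python
# A toy "language model": bigram counts from a tiny, made-up corpus.
# It predicts the next word purely from how often words co-occurred,
# with no understanding involved -- much like the parrot analogy above.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, {}).setdefault(nxt, 0)
    bigrams[prev][nxt] += 1

def next_word(word):
    """Pick the most frequent follower of `word` -- repetition, not reasoning."""
    followers = bigrams.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

print(next_word("the"))  # "cat" -- simply the word seen most often after "the"
```

The point of the sketch: the model never decides whether "cat" is a sensible answer, it only reports what was statistically common in its training data.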

Businesses are continuously looking for cost-effective ways to improve their product offering, service delivery, and scalability. AI’s ability to automate tasks like generating content for social media or marketing, and to act as an assistant when looking for information or providing basic customer service (like answering FAQs), is therefore welcome.

There is plenty of content on the web, and tools anyone can try. Most of it praises the benefits the new technology brings, so we shall not focus on that. Let us instead look at risks you might not even have been aware existed.

The risk

The ethical questions around generative AI are obvious, and they led Italy’s data protection authority to ban the use of ChatGPT on April 1st, 2023. The ban was lifted a month later, after enhanced transparency and rights for EU users were introduced, but many concerns remain. A recent study identified six different security implications involving the use of ChatGPT and other large language models (LLMs):

  • Fraudulent services generation
    It can be a great tool for bringing your dream idea to life by creating new services, webpages, texts, and more. But it can also be exploited by malicious actors to lure unsuspecting users with free access to programs and platforms that mimic original ones.
  • Harmful information gathering
    A person with malicious intent can use it to quickly access publicly available information and use it to do harm. For example, a cyberattacker can prompt it to divulge which IT system a specific bank uses, to find out where and how to perform an attack.
  • Private data disclosure
    Good news: ChatGPT has guardrails to prevent the sharing of people’s personal data and information. However, it can share information about the private lives of public figures, including harmful or speculative content, so concerns remain.
  • Malicious text generation
    One of its most beloved features, the writing ability, can be used to create harmful text as well. This can include disinformation, fake news, spam, and even impersonation and the generation of phishing campaigns.
  • Malicious code generation
    Its impressive coding abilities can also be used to produce quick code for harmful purposes, allowing potential attackers to deploy threats faster. Although the chatbot has some limits and refuses to generate malicious code (like obfuscated code), it does agree to generate code that could test system vulnerabilities.
  • Offensive content production
    Guardrails are in place to prevent the spread of offensive and unethical content, but a sufficiently determined user can still get around them.

Harnessing the power of AI

AI tools already offer a dynamic approach to cybersecurity by leveraging intelligent algorithms and machine learning techniques to identify, respond to, and mitigate emerging risks in real time. With the ability to sift through large volumes of data, AI algorithms can identify suspicious activities, flag potential threats, and provide security teams with timely alerts. This enables organizations to respond swiftly and effectively, mitigating the impact of cyber incidents and minimizing the potential for data breaches or system compromises.
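The core of the detection approach described above, flagging activity that deviates from normal patterns in large volumes of data, can be sketched in a few lines. The data and the three-standard-deviation threshold below are invented for illustration; production systems use far richer features and models.

```python
from statistics import mean, stdev

# Hypothetical hourly counts of failed logins; the spike of 48 simulates
# a brute-force attempt among otherwise normal traffic.
failed_logins = [4, 6, 5, 3, 7, 5, 4, 6, 48, 5, 4, 6]

def flag_anomalies(counts, threshold=3.0):
    """Return indices of hours whose count deviates sharply from the mean.

    `threshold` is in standard deviations; 3.0 is an illustrative choice.
    """
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:  # perfectly uniform traffic: nothing to flag
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

print(flag_anomalies(failed_logins))  # [8] -- only the spike stands out
```

Even this crude statistical filter surfaces the suspicious hour without any hand-written rule for it, which is the intuition behind ML-driven threat detection at scale.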

AI tools, including ChatGPT, can also be used as training platforms. By simulating real-world scenarios, these tools provide a safe environment for security teams to enhance their skills, practice incident response, and develop effective strategies to combat cyber threats. 

It can be used to strengthen your cybersecurity efforts. For example, it can provide some degree of support to junior IT staff, helping them better understand the context of the problem they are working on. Experts can use it too: GPT can be trained on a dataset of payloads used in penetration testing, then used to generate new payloads for testing the security of systems and identifying vulnerabilities.
It can also help understaffed teams identify internal vulnerabilities. Posing a question like “How can I recognize a fake online store?” or “How do I recognize a phishing email?” can yield a fairly solid response that helps the user stay on the safe side.
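The advice an AI assistant gives for spotting a phishing email (pressure language, raw IP addresses in links, suspicious domains) can itself be turned into simple automated checks. The patterns below are a minimal, hypothetical sketch of that idea, not a real filter; genuine mail security products combine hundreds of signals.

```python
import re

# A few red flags drawn from common phishing advice. Purely illustrative:
# the domain list and phrases are examples, not a vetted blocklist.
SUSPICIOUS_PATTERNS = [
    r"https?://\d{1,3}(\.\d{1,3}){3}",        # link to a raw IP instead of a domain
    r"urgent|verify your account|suspended",  # pressure language
    r"@[^ ]*\.(tk|xyz)\b",                    # throwaway-looking sender domains
]

def phishing_score(email_text):
    """Count how many red-flag patterns appear in the message."""
    return sum(bool(re.search(p, email_text, re.IGNORECASE))
               for p in SUSPICIOUS_PATTERNS)

msg = "URGENT: verify your account at http://192.168.1.5/login"
print(phishing_score(msg))  # 2 -- pressure language plus a raw-IP link
```

A score like this would only ever be one weak signal among many; the takeaway is that the heuristics an assistant teaches a user translate directly into checks a team can automate.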

The Future

Hopefully, in the future of security, users will engage with AI to help solve these and many other kinds of queries, such as what the next most likely attack on the organization could be. It should be clear that if cybercriminals are using ChatGPT and other AI tools to enhance their attacks, organizations should also be using them to bolster their cybersecurity efforts. Fortunately, you do not have to do it alone.

Engage your employees in cybersecurity activities, run system security checks to uncover possible vulnerabilities, and protect yourself, your company, and your surroundings! If you are unsure how to start or how to improve, we will gladly provide more information. Contact us.
Sources

* Derner, E., & Batistic, K. (2023). Beyond the Safeguards: Exploring the Security Risks of ChatGPT. https://arxiv.org/pdf/2305.08005.pdf (accessed 19 May 2023)


