Unless you have been on vacation somewhere far, far away for the past few months, without internet access or social contact, you are surely aware of the hype and fear surrounding ChatGPT and other AI tools.
There are numerous discussions on both sides of the coin: the thrill of ChatGPT's seemingly limitless potential applications, and the concern about its impact on the world as we know it. In between, you can find the attempts of enthusiasts asking GPT to write a song for their sweetheart in the style of Chuck Berry, or asking how to get an elephant out of the fridge with the help of peanut butter.
Among these (more or less) amusing attempts, however, surprisingly little attention is paid to the risks and rewards AI tools present in the field of cybersecurity.
None of the current generation of artificial intelligence chatbots is able to make intelligently informed decisions.
ChatGPT is a free-to-use AI chatbot developed by OpenAI. The release of the latest, paid version, GPT-4, on March 14th, 2023, immediately hit the headlines thanks to its significant upgrades, quickly building a reputation as a comprehensive cheat sheet. Microsoft, which funds OpenAI, has rolled out ChatGPT-powered features in Bing search as a preview.
GPT stands for generative pre-trained transformer, meaning it is a large language model (LLM) that estimates which words are most likely to come next in a sequence. Because it is a deep learning model, the underlying neural network can learn the context of any language pattern, whether natural language or computer code.
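To make this "which word comes next" idea concrete, here is a minimal sketch. It does not use ChatGPT itself but GPT-2, an openly available earlier OpenAI model, accessed through the Hugging Face transformers library (an assumption about your tooling); it simply prints the five tokens the model considers most likely to follow a short prompt:

```python
# Minimal sketch: inspecting next-token probabilities with GPT-2,
# an open-source relative of the models behind ChatGPT.
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The attacker sent a phishing"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

# Probability distribution over the vocabulary for the *next* token
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>12}  {prob.item():.3f}")
```

The model is not "deciding" anything here; it is ranking continuations by probability, which is exactly the parrot-like behaviour described below.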
So, it is important to note that none of the current generation of artificial intelligence chatbots, whether ChatGPT, its Google rival Bard, DeepMind's Sparrow or other LLMs, really makes intelligently informed decisions. They act more like a parrot, repeating words that are likely to be found next to one another.
Businesses are continuously looking for cost-effective ways to improve their product offering, service delivery and scalability. AI's ability to automate certain tasks, such as generating content for social media or marketing, and to act as an assistant when looking for information or providing basic customer service (like answering FAQs), is therefore welcome.
There is plenty of content on the web, and plenty of tools anyone can try. Most of it praises the benefits the new technology brings, so we shall not focus on that. Let us instead have a look at risks you might not even have been aware exist.
The ethical questions around using generative AI are obvious, and the concerns led Italy's data protection authority to ban the use of ChatGPT on 1st April. The ban was lifted a month later, with enhanced transparency and rights for EU users, but many concerns remain. A recent study* identified six different security implications involving the use of ChatGPT and other large language models (LLMs); among them:
Large language models (LLMs) could be either a cybersecurity ally or a threat.
Malicious text generation
One of its most beloved features, the writing ability, can be used to create harmful text as well. This includes disinformation, fake news, spam, and even impersonation and the generation of phishing campaigns.
Malicious code generation
The impressive coding abilities could also be used to produce code quickly and use it for harm, allowing potential attackers to deploy threats faster. Although the chatbot has some limits and refuses to generate outright malicious code (such as obfuscated code), it does agree to generate code that could be used to probe a system for vulnerabilities.
Offensive content production
There are guardrails in place to prevent the spread of offensive and unethical content, but a sufficiently determined user can get around them as well.
AI tools already offer a dynamic approach to cybersecurity by leveraging intelligent algorithms and machine learning techniques to identify, respond to, and mitigate emerging risks in real time. With the ability to sift through large volumes of data, AI algorithms can identify suspicious activities, flag potential threats, and provide security teams with timely alerts. This enables organizations to respond swiftly and effectively, mitigating the impact of cyber incidents and minimizing the potential for data breaches or system compromises.
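As an illustration of what such machine-learning-based detection can look like in practice, here is a minimal sketch that flags unusual login sessions with an Isolation Forest from scikit-learn. The features, thresholds and synthetic data are purely hypothetical stand-ins for whatever telemetry a real security stack collects:

```python
# Minimal sketch: flagging anomalous login sessions with scikit-learn's IsolationForest.
# The features (hour of login, failed attempts, MB transferred) are hypothetical examples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" activity: daytime logins, few failures, modest transfers
normal = np.column_stack([
    rng.normal(13, 3, 500),      # hour of day
    rng.poisson(1, 500),         # failed attempts before success
    rng.normal(50, 15, 500),     # MB transferred in session
])

# A handful of suspicious sessions: 3 a.m. logins, many failures, large transfers
suspicious = np.array([
    [3, 12, 900],
    [2, 8, 750],
    [4, 15, 1200],
])

events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(events)

labels = model.predict(events)          # -1 = anomaly, 1 = normal
print(f"Flagged {np.sum(labels == -1)} of {len(events)} sessions for review")
```

In a real deployment the flagged sessions would feed the alerting workflow described above, so that the security team reviews a short list instead of the raw event stream.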
AI tools, including ChatGPT, can also be used as training platforms. By simulating real-world scenarios, these tools provide a safe environment for security teams to enhance their skills, practice incident response, and develop effective strategies to combat cyber threats.
ChatGPT itself can be used to strengthen your cybersecurity efforts. For example, it can provide some degree of support to junior IT staff, helping them better understand the context of the problem they are working on. It can also serve experts, since GPT can be trained on a dataset of payloads used in penetration testing; the model can then generate new payloads, which is useful for testing the security of systems and identifying vulnerabilities.
It can also help understaffed teams identify internal vulnerabilities. Posing a question like "How can I recognize a fake online store?" or "How do I recognize a phishing email?" can yield a pretty solid response that helps the user stay on the safe side. A minimal example of wiring such questions into a script is sketched below.
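As a sketch of how such an assistant might be wired into a team's own tooling, the snippet below sends a security-awareness question through the OpenAI Python SDK. The model name and system prompt are assumptions chosen for illustration, not a recommendation:

```python
# Minimal sketch: asking a security-awareness question via the OpenAI Python SDK (v1+).
# Assumes OPENAI_API_KEY is set in the environment; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat-capable model works
    messages=[
        {"role": "system",
         "content": "You are a security-awareness assistant for a small IT team."},
        {"role": "user",
         "content": "How do I recognize a phishing email?"},
    ],
)

print(response.choices[0].message.content)
```

The same pattern can back an internal chatbot or a help-desk macro, keeping in mind that the answers are probabilistic text, not verified security advice.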
Hopefully, in the future of security, users will engage with AI to help solve these and many other kinds of queries, such as what the next most likely attack on the organization could be. One thing should be clear: if cybercriminals are using ChatGPT and other AI tools to enhance their attacks, organizations should also be using them to bolster their cybersecurity efforts. Fortunately, you do not have to do it alone.
* Derner, E., Batistic, K. (2023): Beyond the Safeguards: Exploring the Security Risks of ChatGPT. https://arxiv.org/pdf/2305.08005.pdf (accessed 19.05.2023)