Understanding the Security Implications of Engaging with ChatGPT
In January 2023, just two months after launch, ChatGPT (Chat Generative Pre-trained Transformer) became the fastest-growing consumer application in history, amassing more than 100 million users.
OpenAI’s advanced chatbot may have reinvigorated the public’s interest in artificial intelligence, but few have seriously contemplated the potential security risks associated with this product.
The technology underpinning ChatGPT and other chatbots may be similar, but ChatGPT is in a category of its own. This is great news if you intend to use it as a kind of personal assistant, but worrying if you consider that threat actors also use it.
Cybercriminals can use ChatGPT to write malware, build scam websites, generate phishing emails, create fake news, and so on. Because of this, ChatGPT may be a bigger cybersecurity risk than a benefit, as Bleeping Computer put it in an analysis.
At the same time, there are serious concerns that ChatGPT itself has certain unaddressed vulnerabilities. For example, in March 2023, reports emerged about some users being able to view titles of others’ conversations. As The Verge reported at the time, OpenAI CEO Sam Altman explained that “a bug in an open source library” had caused the issue.
This just underscores how important it is to limit what you share with ChatGPT, which collects a staggering amount of data by default. Tech behemoth Samsung learned this the hard way, when a group of employees who had been using the chatbot as an assistant accidentally leaked confidential information to it.
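As a practical illustration of limiting what reaches the chatbot, here is a minimal, hypothetical Python sketch of client-side redaction a team might run before pasting text into any third-party AI service. The patterns, labels, and example strings are assumptions for illustration only; they are not part of any official OpenAI tooling, and a real deployment would need far broader coverage.

```python
import re

# Hypothetical redaction patterns (assumed for this sketch); real policies
# would also cover source code, credentials, customer records, etc.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with labeled placeholders
    before the text ever leaves your machine."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize this: contact jane.doe@example.com, key sk-abc123def456ghi789jkl"
    print(redact(prompt))
    # -> Summarize this: contact [REDACTED EMAIL], key [REDACTED API_KEY]
```

Nothing in this sketch is specific to ChatGPT; the point is simply that sensitive material gets filtered on your side before it reaches an external service that may retain it.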