3 Trends that Will Shape Cyber Threat Intelligence in 2025
Leon Ward

The growing use of AI outside and within organizations is rapidly changing the threat landscape and our approach to threat detection, investigation, and response. As we kick off 2025, the following three trends suggest that cybersecurity practitioners must continue to advance their use of threat intelligence, and that they are making important progress on that front.
- AI slop becomes one of the top threats to the internet
That may sound dramatic, but think about the implicit trust people place in online searches. A result comes up and we typically take it at face value. When multiple, different sources provide the same information, we have even less reason to question its legitimacy. However, AI slop (shoddy or misleading content created using AI) changes the trustworthiness of that data. It's far easier to generate content than it is to fact-check it, so the ratio of AI-generated to human-generated or human-curated content shifts, which is problematic.
For example, the BBC recently complained to Apple about the potential damage to the broadcaster's reputation after Apple's AI-powered news notification summary service generated a false headline and attributed it to the BBC. When AI-generated content isn't labeled as such, there is a risk of amplification: search engines and other services that would not have produced the same hallucination may now consume that false headline as if it were a genuine BBC statement, feeding it back into future AI training.
While there was no malicious intent in that example, cybercriminals are using AI slop in the creation of websites and social media profiles and on message boards for financial gain. Organizations, security practitioners, and individuals need to scrutinize online content more closely than ever to mitigate the risk of fraud and extortion. Unfortunately, the masses of people mindlessly scrolling social media aren't known for scrutinizing content carefully.
- Leaked data in LLMs will cause headline breaches
Since the launch of ChatGPT, 4.7% of employees have pasted sensitive data into the tool at least once. This includes sensitive internal data, source code, and client data. The exponential growth in usage of ChatGPT is quickly making it a treasure trove of valuable data that is publicly available.
However, it's not just about ChatGPT. AI usage can fuel data leakage in a number of different ways. A classic attack vector for threat actors is the misconfigured database, and that now includes the vector databases that store information to help developers build AI-powered applications quickly. These can be misconfigured just like any other data source, opening the door to threat actors.
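As a minimal illustration of the misconfiguration class described above, the sketch below audits a hypothetical vector database configuration for settings that commonly expose a data store to the internet. The field names (`auth_enabled`, `bind_address`, `tls_enabled`) are illustrative assumptions, not tied to any specific product:

```python
# Audit a hypothetical vector database config for common exposure risks.
# Field names are illustrative; real products use their own settings keys.

def audit_config(config: dict) -> list[str]:
    """Return a list of findings suggesting the data store may be exposed."""
    findings = []
    if not config.get("auth_enabled", False):
        findings.append("authentication is disabled")
    if config.get("bind_address") == "0.0.0.0":
        findings.append("service is bound to all network interfaces")
    if not config.get("tls_enabled", False):
        findings.append("traffic is not encrypted in transit")
    return findings

# A config combining all three risky settings triggers all three findings.
risky = {"auth_enabled": False, "bind_address": "0.0.0.0", "tls_enabled": False}
print(audit_config(risky))
```

A real deployment review would of course also cover network ACLs, credentials, and patch levels; the point here is simply that vector databases deserve the same baseline checks as any other data store.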
Users are also leveraging public AI services to improve their productivity. These include the use of:
- AI-enhanced online meeting services, chat, and note taking apps
- AI tools to find and fix bugs in code, which can leak source code and, heaven forbid, any secret values that should never have been in there
- Tools that reword and clean-up emails and other content, particularly about sensitive subjects
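One practical mitigation for the code-leak scenario in the list above is scanning content for secret-like strings before it is pasted into a public AI tool. The sketch below uses a handful of illustrative regex patterns; a production scanner or DLP tool would apply a far larger, vetted rule set:

```python
import re

# Illustrative patterns for common secret formats. These are examples
# only; real secret scanners maintain much broader pattern libraries.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic API key assignment": re.compile(
        r"(?i)api[_-]?key\s*[=:]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of secret patterns that match the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

snippet = 'api_key = "sk-1234567890abcdef1234"'
print(find_secrets(snippet))
```

Even a simple pre-paste check like this, wired into a clipboard hook or a code review step, reduces the odds of credentials ending up in a public model's training data.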
There are plenty of opportunities for headline breaches to happen. Such breaches will create challenges for AI adoption, but they are part of AI's required march towards the plateau of productivity.
- Collaboration among defenders will accelerate
For years, industry and government cybersecurity experts have called for increased collaboration in the form of threat intelligence sharing among defenders. Our 2024 Evolution of Cybersecurity Automation Adoption report clearly shows that collaboration now has significant momentum, with 99% of cybersecurity professionals saying they share threat intelligence through at least one channel. More than half (54%) share with direct partners and suppliers, and 48% through an official threat-sharing community. A surge in our ThreatQ Community membership over the last 14 months, from approximately 100 members to more than 450, is further evidence that collaboration is accelerating.
One element that is less commonly discussed is the level of practitioner-to-practitioner collaboration across companies. With group chats (like those used by a subset of the ThreatQ Community) and forums like Reddit, practitioners are sharing more tradecraft and advice than ever before. These forms of collaboration are less formal than company-level information sharing agreements, and the types of information shared are different, focusing on methods, tools, and personal opinions over technical indicators. This experiential level of sharing brings an invaluable human element to collaboration that will surge in 2025 as teams lean into "strength in numbers" to close the skills gap and keep up with rapidly evolving adversaries.
Onward
New security challenges continuously emerge that fuel our collective passion for protecting our organizations. This year will be no different and defenders will continue to demonstrate resilience.