Scoring…the Initial Pitfall is a B*tch

POSTED BY RYAN TROST

Intelligence scoring is sexy as hell, especially when done correctly, but teams are almost guaranteed to have a rocky start.  The initial pitfall is finding the universal-agreement sweet spot across team members – between senior members (who tend to be the most vocal) and the several different roles on a team, whose thresholds of risk are going to vary drastically – friction is inevitable.

I have witnessed this firsthand running several teams.  At my last gig we would sit down every Thursday or Friday to review the intelligence gathered and attempt to “rate” it according to how much risk it posed to the organization.  The dynamics of my team were always pretty interesting, as we were 20 strong on first shift across security analysts, intelligence analysts, signature engineers, and malware engineers.  The primary goal of the meeting was scoring intelligence, but the byproduct was making sure everybody was on the same page across incoming and cultivated intelligence, open incidents, and signatures in development.

To an outsider it was immediately obvious which team each person was on by how they evaluated intelligence.  For instance, the intelligence analysts would track adversary missions across the industry and find “potential” linkages to other intelligence through open source and closed source means.  The security analysts, however, knew that the more “loose possibilities” fed into our detection system, the more false positives they would have to chase.  This provided a looking glass into the roles themselves – security analysts focus almost exclusively on events within the four walls of the organization, whereas intelligence analysts largely identify anticipated threats to the organization in a predictive manner (…“predictive” said somewhat tongue-in-cheek).

It was almost guaranteed the two roles would represent the extremes when assessing new intelligence coming into the team.  The intelligence analyst would rate the risk high, whereas the security analyst would evaluate it low.  And this is where the pitfall comes into play!!  What happens when two team leads (with support from the rest of their immediate teams) in a SOC “agree to disagree”?! …they both compromise and settle on middle ground.  In the world of scoring intelligence this leads to a slow and painful death: it contaminates the rating system because everything blends together, and at that point there’s no reason to even have a scoring paradigm.  When scoring intelligence it is important to start with the extremes and fine-tune going forward, rather than starting with middle ground and fine-tuning from there.  This is where the team’s ‘nice guy’ compromise approach needs to change from “threat score 50 out of 100” to “you win/I win”.
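To make the contamination concrete, here’s a minimal sketch in Python (the indicators, scores, and threshold are entirely hypothetical) of what happens when every disagreement settles at the midpoint versus when the team commits to one side’s score:

```python
# A minimal sketch (all indicators and scores hypothetical) of why
# split-the-difference scoring kills a rating system: every indicator
# drifts toward the middle, so no threshold can separate anything.

# (indicator, intel analyst score, security analyst score) out of 100
reviews = [
    ("evil-domain.example", 90, 20),
    ("203.0.113.7",         85, 30),
    ("dropper.exe hash",    95, 15),
]

# The "nice guy" compromise: everything blends to roughly 50.
compromised = {ind: (intel + sec) / 2 for ind, intel, sec in reviews}

# The "you win / I win" approach: the team commits to one extreme per
# indicator -- whichever side made the stronger case in the weekly review.
decided = {
    "evil-domain.example": 90,  # intel analysts won the argument
    "203.0.113.7":         30,  # security analysts won
    "dropper.exe hash":    95,  # intel analysts won
}

threshold = 70  # e.g., only act on intelligence above this score
print([i for i, s in compromised.items() if s >= threshold])  # [] -- nothing clears the bar
print([i for i, s in decided.items() if s >= threshold])      # the two real threats do
```

With the compromise approach every indicator lands in the mid-50s and the threshold catches nothing; with committed extremes the scores actually drive decisions.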

Is this approach instantaneous?  Of course not…but over three to four months of weekly review and re-assessment, a pattern should emerge showing how accurate the intelligence analysts’ open source hunting really is and how often your adversaries re-use infrastructure.
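If you want to make that re-assessment measurable, a sketch like the following (the data structure and outcome labels are my own invention, not any particular platform’s) captures the pattern – check whether each committed score was borne out by what actually happened, and tally it cycle over cycle:

```python
# A minimal sketch (hypothetical data) of the weekly re-assessment loop:
# high-scored indicators should produce true positives, low-scored ones
# should stay quiet, and over months the tally shows whose instincts held.

# (indicator, committed score, observed outcome since deployment)
history = [
    ("evil-domain.example", 90, "true_positive"),
    ("203.0.113.7",         30, "no_activity"),
    ("dropper.exe hash",    95, "false_positive"),
    ("c2.example.net",      85, "true_positive"),
]

def validated(score: int, outcome: str) -> bool:
    """A high score is right if the threat materialized; a low one if it didn't."""
    if score >= 70:
        return outcome == "true_positive"
    return outcome in ("no_activity", "false_positive")

hits = sum(validated(score, outcome) for _, score, outcome in history)
print(f"{hits}/{len(history)} committed scores held up this cycle")  # 3/4
```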

 
