Managing security in today’s highly interconnected world can feel like trying to put out fires with a collection of squirt guns. You have the tools, but they never feel powerful enough. Security teams working tirelessly to protect systems, networks, and data need information that helps them prioritize. By incorporating threat intelligence into their work, security teams can contextualize threats and focus their attention on the highest risks facing their organizations.
When gathering threat intelligence, indicators of compromise (IoCs) provide technical information that security teams can use to actively look for malicious actors trying to evade detection.
What Are Indicators of Compromise (IoCs)?
Indicators of compromise (IoCs) are the technical forensic artifacts that indicate malicious actors gained unauthorized access to systems, giving security teams the information necessary to determine whether their systems have been compromised. Identified at the host or network level, IoCs can include unusual activity like:
- Suspicious programs and processes indicating malware
- Abnormal network traffic indicating data or files being sent to a command and control (C&C) server
- Anomalous user account activity indicating an account takeover
Security teams can collect IoCs from governmental and non-governmental entities including:
- Forum of Incident Response and Security Teams
- United States Computer Emergency Readiness Team (US-CERT)
- Defense Industrial Base Cybersecurity Information Sharing Program
- CERT Coordination Center
- Cybersecurity and Infrastructure Security Agency (CISA)
- Security researchers
- Technology providers
Security teams use IoCs to engage in threat hunting so that they can detect incidents where malicious actors may have evaded their alerts. Meanwhile, incident response teams use IoCs to help them contain attackers and remediate systems.
How Do IoCs Work?
Even the best criminals leave behind traces of evidence. IoCs are the digital forensic evidence that an attacker leaves behind.
Malware is a computer program, and every program creates evidence of its existence in log files. During an incident investigation, incident response teams collect the technical forensic evidence they discover. For example, IoCs might include a list of:
- Affected application versions
- IP addresses
- C2 domains
- Bitcoin addresses
- Email addresses
- File names and types to search for
- Locations where malware can be found
Information security teams can identify IoCs by:
- Reviewing historic log files for the indicators
- Looking through current log data for the indicators
- Creating new detection rules built on these indicators
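As a rough illustration of that first step, the sketch below (in Python) greps a set of historic log files for known indicators such as IP addresses, domains, and file hashes. The indicator values and log directory are hypothetical placeholders, not real detections.

```python
# Minimal sketch: scan historic log files for known IoCs.
# The indicator values and log paths below are hypothetical examples.
from pathlib import Path

IOC_INDICATORS = {
    "ip": {"203.0.113.45", "198.51.100.23"},          # documentation-range IPs
    "domain": {"malicious-c2.example", "bad-cdn.example"},
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
}

LOG_DIR = Path("/var/log")  # adjust to your environment

def scan_logs(log_dir: Path) -> list[tuple[str, str, str]]:
    """Return (file, indicator_type, indicator) tuples for every match found."""
    hits = []
    for log_file in log_dir.glob("**/*.log"):
        try:
            text = log_file.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files
        for ioc_type, values in IOC_INDICATORS.items():
            for value in values:
                if value in text:
                    hits.append((str(log_file), ioc_type, value))
    return hits

if __name__ == "__main__":
    for file, ioc_type, value in scan_logs(LOG_DIR):
        print(f"[!] {ioc_type} indicator {value} found in {file}")
```

The same matching logic can be turned into a detection rule that runs continuously against incoming log data rather than historic files.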
What Is the Difference Between Indicators of Compromise and Indicators of Attack (IoAs)?
The primary difference between IoCs and IoAs is timing. While an IoC tells you that an incident occurred based on what your log data contains, an IoA tells you whether an attack is currently ongoing in your systems and networks.
IoCs are straightforward because they focus on specific, known security threats discovered after malicious actors have compromised a system. Meanwhile, IoAs tell you whether attackers are currently in the process of attacking the system with insight into:
- Exploitation techniques
- Adversary behavior
- Attacker intent
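To make the distinction concrete, here is a hedged sketch: a static IoC check matches a known artifact such as a file hash, while an IoA-style rule describes attacker behavior, for example an Office application spawning PowerShell with an encoded command. The hash value, process names, and rule logic are illustrative assumptions, not real detections.

```python
# Illustrative only: the hash, process names, and rule logic are hypothetical.

KNOWN_BAD_SHA256 = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

def ioc_match(file_sha256: str) -> bool:
    """IoC: a static, after-the-fact match against a known-bad hash."""
    return file_sha256 in KNOWN_BAD_SHA256

def ioa_match(parent_process: str, child_process: str, command_line: str) -> bool:
    """IoA: a behavioral rule describing attacker technique rather than a known artifact,
    e.g. an Office application spawning PowerShell with an encoded command."""
    office_apps = {"winword.exe", "excel.exe", "outlook.exe"}
    return (
        parent_process.lower() in office_apps
        and child_process.lower() == "powershell.exe"
        and "-encodedcommand" in command_line.lower()
    )
```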
Understanding the IoC Lifecycle
The IoC lifecycle refers to an indicator’s overall place in the larger scope of security. As long as a given IoC is relevant, security teams need to cycle through the following steps:
- Discovery: identifying potential IoCs by monitoring system logs, analyzing network traffic, running security scans, and reviewing alerts
- Assessment: gathering more information to determine how to address the IoC by analyzing network traffic, system files and configurations, and threat intelligence sources
- Sharing: distributing the IoC so stakeholders can coordinate a response and implement controls to prevent similar attacks in the future
- Deployment: implementing a multi-layered set of defensive security controls to protect against an attack
- Detection and Response: monitoring systems to detect potential IoCs and responding to identified threats by containing the threat, implementing countermeasures, and communicating the incident
- End of Life: retiring an IoC when it is no longer accurate or effective due to changes in technology, landscape, or the organization’s security posture
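One way to operationalize this lifecycle is to track each indicator’s state alongside its value and source. The sketch below uses a hypothetical record layout (the field names and states simply mirror the steps above) to show the idea.

```python
# Minimal sketch of tracking an IoC through its lifecycle.
# The record layout and state names are assumptions for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class IocState(Enum):
    DISCOVERY = "discovery"
    ASSESSMENT = "assessment"
    SHARING = "sharing"
    DEPLOYMENT = "deployment"
    DETECTION_AND_RESPONSE = "detection_and_response"
    END_OF_LIFE = "end_of_life"

@dataclass
class IocRecord:
    ioc_type: str                # e.g. "ip", "domain", "sha256"
    value: str
    source: str                  # e.g. "incident response", "CISA advisory"
    state: IocState = IocState.DISCOVERY
    last_updated: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def advance(self, new_state: IocState) -> None:
        """Move the indicator to the next lifecycle stage and timestamp the change."""
        self.state = new_state
        self.last_updated = datetime.now(timezone.utc)

# Usage: retire an indicator once it no longer produces accurate detections.
record = IocRecord("domain", "malicious-c2.example", "incident response")
record.advance(IocState.END_OF_LIFE)
```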
Types of Indicators of Compromise
IoCs fall into four basic categories, so understanding what they are and what to look for is critical.
Network IoCs
Network IoCs show abnormal network traffic and activity, often detected by network monitoring tools. Some examples of network IoCs are:
- Unusual inbound or outbound traffic
- Unusual spike in traffic from a specific website or IP address
- Communication with known malicious IP addresses
- Geographic irregularities, like traffic from countries where the organization has no employees
- Domain Name System (DNS) request and registry configuration anomalies
- Unauthorized network scans
- Applications using unusual ports
- Larger than normal HTML response sizes
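As a rough sketch of applying network IoCs, the example below flags outbound connections that either reach a known malicious IP address or use a port outside an expected allow-list. The connection records, IP addresses, and port policy are hypothetical.

```python
# Hypothetical connection records and indicator values for illustration only.
KNOWN_MALICIOUS_IPS = {"203.0.113.99", "198.51.100.77"}
EXPECTED_PORTS = {53, 80, 123, 443}   # example allow-list; tune to your environment

# Each record: (source_ip, destination_ip, destination_port)
connections = [
    ("10.0.0.15", "203.0.113.99", 443),
    ("10.0.0.22", "93.184.216.34", 8081),
    ("10.0.0.22", "93.184.216.34", 443),
]

for src, dst, port in connections:
    if dst in KNOWN_MALICIOUS_IPS:
        print(f"[!] {src} contacted known malicious IP {dst}")
    elif port not in EXPECTED_PORTS:
        print(f"[?] {src} -> {dst} used unusual port {port}")
```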
Host-based IoCs
Host-based IoCs focus on specific computers or systems with suspicious activity, often detected by Endpoint Detection and Response (EDR) or Extended Detection and Response (XDR) tools. Some examples of host-based IoCs are:
- Unusual file activity
- Suspicious processes or services running
- Spikes in database read volume
- Unapproved changes to registry and system files
- Unusual system behavior like unexpected restarts, crashes, or slow performance
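A host-based check might look for watchlisted process names or executables running from unexpected locations. The sketch below uses the third-party psutil library; the watchlist names and “suspicious path” heuristic are illustrative assumptions.

```python
# Sketch using the third-party psutil library (pip install psutil).
# The watchlist names and path heuristic are illustrative assumptions.
import psutil

PROCESS_WATCHLIST = {"mimikatz.exe", "nc.exe"}
SUSPICIOUS_PATH_FRAGMENTS = ("\\temp\\", "/tmp/")

for proc in psutil.process_iter(attrs=["pid", "name", "exe"]):
    name = (proc.info["name"] or "").lower()
    exe = (proc.info["exe"] or "").lower()
    if name in PROCESS_WATCHLIST:
        print(f"[!] Watchlisted process {name} (pid {proc.info['pid']})")
    elif any(fragment in exe for fragment in SUSPICIOUS_PATH_FRAGMENTS):
        print(f"[?] Process {name} running from unusual location {exe}")
```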
File-based IoCs
File-based IoCs show changes that indicate malicious files or malware are present, often detected through file-scanning tools like EDR software and sandboxing tools. Some examples of file-based IoCs are:
- Suspicious file hashes, filenames, and file paths
- High volumes of requests for the same file
- Changes to file checksums
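Matching file-based IoCs typically means hashing files and comparing the results against known malicious hashes. The minimal sketch below uses Python’s hashlib; the hash value and scan directory are placeholders to replace with indicators from your own threat intelligence feeds.

```python
# Minimal sketch: hash files and compare against known malicious hashes.
# The hash value and scan directory are hypothetical placeholders.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",  # placeholder
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

for file_path in Path("./downloads").rglob("*"):
    if file_path.is_file() and sha256_of(file_path) in KNOWN_BAD_SHA256:
        print(f"[!] Known malicious file hash: {file_path}")
```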
Behavioral IoCs
Behavioral IoCs focus on abnormal user or entity behaviors, often detected through Identity and Access Management (IAM) or User and Entity Behavior Analytics (UEBA) tools. Some examples of behavioral IoCs include:
- Multiple failed login attempts
- Unusual login attempts, like changes to time of day or geographic location
- Unauthorized access to sensitive data
- Anomalous privileged user or account activity
- Social engineering attempts, including phishing emails requesting login credentials
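Behavioral IoCs are usually detected with rules over authentication and activity data. The hedged sketch below flags accounts with repeated failed logins inside a short window; the event format, sample data, and threshold are assumptions for illustration.

```python
# Sketch: flag accounts with repeated failed logins in a short window.
# The event format, sample data, and threshold are assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN_THRESHOLD = 5
WINDOW = timedelta(minutes=10)

# Each event: (timestamp, username, success)
events = [
    (datetime(2024, 5, 1, 3, 12), "svc-backup", False),
    (datetime(2024, 5, 1, 3, 13), "svc-backup", False),
    # ... remaining events loaded from your IAM or SIEM export
]

failures = defaultdict(list)
for timestamp, user, success in sorted(events):
    if success:
        continue
    # Keep only failures inside the rolling window, then check the threshold.
    failures[user] = [t for t in failures[user] if timestamp - t <= WINDOW] + [timestamp]
    if len(failures[user]) >= FAILED_LOGIN_THRESHOLD:
        print(f"[!] {user}: {len(failures[user])} failed logins within {WINDOW}")
```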
Flare: Providing Actionable Context for IoCs
IoCs provide the technical information you need, but they lack the context that a holistic approach to threat intelligence provides. To use IoCs effectively, you need visibility into what threat actors plan to target and their tactics, techniques, and procedures (TTPs). Further, you should place IoCs in the context of your organization’s overarching IT environment and industry vertical.
With Flare’s platform, you can operationalize your threat intelligence by combining IoC data with clear, deep, and dark web monitoring. Using Flare’s cyber threat intelligence platform, you can connect the dots with threat actor analytics, illicit community monitoring, and intelligence aggregation. Teams using Flare can gain context-rich intelligence covering the risk areas that matter most, ultimately reducing noise. With a unified approach to cyber threat intelligence, disaster recovery planning, and external attack surface monitoring, security teams leveraging Flare’s AI can remediate issues quickly by prioritizing high-risk alerts.