
Command and Control Detection: Beaconing, DNS Tunneling, and C2 Frameworks


Every intrusion eventually talks home. The question isn’t whether there’s C2 traffic — it’s whether your SIEM is looking at the right signal. An attacker who gets initial access, dumps credentials, and moves laterally is still a contained incident if they can’t communicate with the outside world. Command and Control (C2) is the pivot point where a compromise becomes an operation.

The challenge: modern C2 frameworks — Cobalt Strike, Sliver, Mythic, Havoc, Brute Ratel — are explicitly engineered to blend in with normal traffic. They beacon over HTTPS to domain-fronted CDNs. They tunnel through DNS. They mimic legitimate user-agents. A generic “alert on suspicious outbound traffic” rule catches nothing. Writing a C2 detection rule that works requires thinking about the specific behavioral signals attackers can’t easily hide.

This guide walks through the four approaches to C2 detection and shows production KQL and SPL queries from the df00tech detection library for each one.

The Four Approaches to C2 Detection

Every C2 detection rule operates on one of four signals:

  1. Content — Inspecting the payload itself. Works for unencrypted C2 or when you have TLS interception. Mostly dead against modern encrypted C2.
  2. Patterns — Timing, packet sizes, and frequency. Beaconing stands out because automated processes are too regular to be human.
  3. Metadata — Who’s talking, from what process, over what protocol. A non-browser binary making HTTPS calls to a cloud VPS is a metadata signal, even if you can’t read a byte.
  4. Destination reputation — Threat intel matching on IP, domain, ASN, or certificate. High precision when it hits, but attackers rotate infrastructure constantly.

Effective detection programs layer all four. Pattern-based detections catch novel C2 infrastructure your threat intel hasn’t seen yet. Metadata catches tools that hide their patterns. Reputation catches the rest. This post focuses on approaches 2 and 3 — the ones you can implement today with logs you already collect.

1. Beaconing Detection: Jitter, Intervals, and Uniform POSTs

Beaconing is the heartbeat of a C2 session. The implant sleeps, wakes up, asks for commands, sleeps again. Cobalt Strike’s default sleep is 60 seconds. Sliver defaults to 60s with 30% jitter. Mythic beacons can be configured anywhere from seconds to hours. The math that catches them: group connections by (source process + destination), bin by time, and measure the mean and standard deviation of the interval between connections.
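The interval math is easy to prototype offline before committing it to a scheduled query. A minimal Python sketch of the mean/stdev scoring described above — the `beacon_score` helper, thresholds, and input format are all illustrative, not part of any framework:

```python
from statistics import mean, stdev

def beacon_score(timestamps, known_sleeps=(60, 300, 900), tolerance=5):
    """Score one (process, destination) flow series for beaconing.

    timestamps: sorted connection times in epoch seconds (hypothetical input
    pulled from your flow logs). Returns (mean_interval, stdev_interval, label).
    """
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(intervals) < 2:
        return None
    m, s = mean(intervals), stdev(intervals)
    # Low stdev means machine-regular timing; proximity to a canonical
    # sleep value (60s/300s/900s) makes the signal stronger still.
    near_default = any(abs(m - sleep) <= tolerance for sleep in known_sleeps)
    label = "high" if near_default and s < 2 else ("medium" if s < 2 else "low")
    return m, s, label

# A 60-second beacon with no jitter scores "high":
ts = [i * 60 for i in range(20)]
print(beacon_score(ts))
```

The same function run over jittered timestamps shows why stdev alone isn't enough: 30% jitter on a 60s sleep keeps the mean near 60 but inflates the stdev, which is exactly why the tuning advice above treats low stdev as the stronger signal rather than a hard gate.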

Here’s the T1071 Application Layer Protocol detection — it scores processes making repeated outbound connections at regular intervals and flags the classic 60s/300s/900s Cobalt Strike sleep timers:

let TimeWindow = 24h;
let BeaconThreshold = 10;
let EntropyThreshold = 4.5;
// Detect anomalous outbound connections with beaconing patterns
DeviceNetworkEvents
| where Timestamp > ago(TimeWindow)
| where RemoteIPType == "Public"
| where ActionType == "ConnectionSuccess"
| where RemotePort in (80, 443, 53, 21, 25, 110, 143, 8080, 8443, 1883, 5222)
| summarize
    ConnectionCount = count(),
    UniqueRemoteIPs = dcount(RemoteIP),
    UniquePorts = dcount(RemotePort),
    Ports = make_set(RemotePort),
    FirstSeen = min(Timestamp),
    LastSeen = max(Timestamp),
    AvgTimeBetween = datetime_diff('second', max(Timestamp), min(Timestamp)) / count()
    by DeviceName, InitiatingProcessFileName, InitiatingProcessId
| where ConnectionCount > BeaconThreshold
| where AvgTimeBetween between (1 .. 3600)
| extend BeaconScore = iff(AvgTimeBetween between (55 .. 65) or AvgTimeBetween between (295 .. 305) or AvgTimeBetween between (895 .. 905), "high", "medium")
| project Timestamp=LastSeen, DeviceName, InitiatingProcessFileName, ConnectionCount, UniqueRemoteIPs, Ports, AvgTimeBetween, BeaconScore, FirstSeen, LastSeen
| sort by ConnectionCount desc

What this catches: The AvgTimeBetween calculation divides the total session duration by connection count, approximating the mean inter-connection interval (strictly that's duration over N−1 intervals, but at beacon volumes the difference is negligible). When that interval clusters around the canonical C2 sleep values (60s, 300s, 900s), the BeaconScore elevates to high. This is the single most reliable signal for catching commodity C2 frameworks running on default configs.

How to tune it: Start with a 24-hour window and 10+ connections as the floor. Your biggest false positives will be Windows Update, Defender, Datadog, New Relic, and anything with an SCCM/Intune footprint — build an allowlist on (InitiatingProcessFileName + destination domain) tuples, not process names alone. Then add jitter-aware logic: compute the standard deviation of intervals and flag low stdev (regular) as stronger than high stdev. A second detection stage using stdev(timestamp_diff) < 2 seconds over 24+ hourly bins is how you catch implants that hide in the noise floor.

For uniform POST sizes — another beacon signal — summarize sum(SentBytes) and count() per process per hour. If every POST is within 50 bytes of the same size over dozens of connections, that’s automated traffic.
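That uniformity check is a one-liner to prototype. A hedged sketch — `uniform_posts`, the 50-byte spread, and the 24-request floor are illustrative parameters, not library API:

```python
from statistics import mean

def uniform_posts(sizes, spread=50, min_count=24):
    """Flag a per-process, per-destination series of POST body sizes as
    machine-generated when every request falls within `spread` bytes of
    the mean over at least `min_count` requests."""
    if len(sizes) < min_count:
        return False
    m = mean(sizes)
    return all(abs(x - m) <= spread for x in sizes)

# Beacon check-ins are near-identical in size; human traffic is not.
print(uniform_posts([212] * 30))                   # True
print(uniform_posts(list(range(100, 1300, 40))))   # False
```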

2. DNS Tunneling (T1071.004) — High Entropy Subdomains, TXT Volume

DNS is permitted in every environment. It routes before authentication. It’s rarely inspected at the content layer. That makes it the perfect covert channel — and tools like iodine, dnscat2, Cobalt Strike’s DNS beacon, and OilRig’s BONDUPDATER all take advantage. The attacker encodes C2 data into subdomain labels and reads responses from TXT or NULL records.

The T1071.004 DNS detection catches this with three stacked signals: subdomain length, subdomain count per parent domain, and query frequency.

let TimeWindow = 24h;
let DomainLengthThreshold = 50;
let SubdomainEntropyThreshold = 3.5;
// Detect DNS tunneling via high-entropy, long subdomain queries
DnsEvents
| where TimeGenerated > ago(TimeWindow)
| where QueryType in ("TXT", "NULL", "CNAME", "MX", "A", "AAAA")
| extend DomainLength = strlen(Name)
| extend SubdomainParts = countof(Name, ".")
| extend TopDomain = tostring(split(Name, ".")[-2]) // second-to-last label, e.g. "example" in a.b.example.com
| where DomainLength > DomainLengthThreshold or SubdomainParts > 5
| summarize
    QueryCount = count(),
    UniqueSubdomains = dcount(Name),
    AvgDomainLength = avg(DomainLength),
    MaxDomainLength = max(DomainLength),
    QueryTypes = make_set(QueryType),
    FirstSeen = min(TimeGenerated),
    LastSeen = max(TimeGenerated)
    by ClientIP, TopDomain
| where QueryCount > 20 and UniqueSubdomains > 10
| where AvgDomainLength > 40
| extend TunnelConfidence = case(
    UniqueSubdomains > 100 and AvgDomainLength > 60, "high",
    UniqueSubdomains > 50 and AvgDomainLength > 50, "high",
    "medium")
| project LastSeen, ClientIP, TopDomain, QueryCount, UniqueSubdomains, AvgDomainLength, MaxDomainLength, QueryTypes, TunnelConfidence
| sort by QueryCount desc

The detection math: DNS tunneling is fundamentally about data encoding. Each subdomain label can carry up to 63 characters. To exfiltrate a 3KB payload you need ~50 unique subdomains at minimum, and base32/base64 expansion pushes the real query count higher. That's why the rule aggregates unique subdomains per parent domain: a normal client might query one or two subdomains of a given apex; tunneling generates dozens or hundreds. The average domain length check catches the encoded data (base32, base64, or hex) that pushes subdomains well past the 40-character mark typical of legitimate FQDNs.
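The label-capacity math can be checked directly. A small sketch assuming one data-bearing label per query, iodine/dnscat2-style — the `labels_needed` helper and its parameters are illustrative:

```python
import base64
import math

def labels_needed(payload: bytes, label_len: int = 63) -> int:
    """How many 63-char DNS labels a base32-encoded payload consumes,
    assuming one data-bearing label per query."""
    # Strip padding: '=' is not a DNS-safe character, so tunnels omit it.
    encoded = base64.b32encode(payload).decode().rstrip("=")
    return math.ceil(len(encoded) / label_len)

payload = b"A" * 3072   # a 3 KB payload
print(labels_needed(payload))   # base32's 8/5 expansion lands this near 80 queries
```

The raw-byte estimate (~50 labels) is the floor; the encoding overhead is why real tunnels generate even more unique subdomains per apex than the back-of-envelope math suggests.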

Compute real entropy if you can. This query uses length and subdomain count as proxies for entropy. If your DNS pipeline can compute Shannon entropy (Zeek has community packages for DNS tunneling detection), add a fourth condition requiring first-label entropy above 3.5; it separates base32-encoded payloads from legitimate long domains like d1234abcdefghijk.cloudfront.net.
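If you want to calibrate that 3.5 threshold offline before wiring it into a pipeline, Shannon entropy is a few lines of Python (the sample labels below are illustrative):

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Shannon entropy in bits per character of a DNS label.
    Base32/base64 payloads typically land well above 3.5;
    ordinary hostnames sit lower."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(round(shannon_entropy("mail"), 2))                          # low
print(round(shannon_entropy("mfrggzdfmztwq2lknnwg23tpobyq"), 2))  # high: base32-like
```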

Known false positives: Akamai (.akamaiedge.net), Cloudflare (.cloudflare.net), Fastly, AWS S3 presigned URLs, and ACME DNS-01 certificate validation. Allowlist these apex domains explicitly. Also: corporate endpoint security agents that encode device IDs in subdomains — CrowdStrike’s cloudsink.net and similar.

For TXT record–specific hunting, pull the secondary hunting query from the same detection: a high volume of TXT queries (50+ per hour) to a single parent domain is the signature of dnscat2 and Cobalt Strike DNS beacons, which use TXT records to retrieve command output.

3. Web Protocols (T1071.001) — User-Agent and TLS Fingerprints

HTTP/HTTPS C2 hides in the largest haystack: normal web traffic. The detection signals that survive here are metadata-based. Three high-value ones:

Non-browser user-agents on outbound 443. PowerShell’s default user-agent is Mozilla/5.0 (Windows NT; Windows NT 10.0; en-US) WindowsPowerShell/5.1. Python requests uses python-requests/2.x. Cobalt Strike’s malleable profiles rotate UAs but often use dated strings like Mozilla/5.0 (compatible; MSIE 9.0). If you’re logging at your proxy or egress firewall, a detection on user_agent has_any ("MSIE", "python", "curl", "PowerShell") and process !in (browsers) catches a lot.

JA3/JA3S TLS fingerprints. The TLS ClientHello contains enough unique signals (cipher suites, extensions, curves) to fingerprint the client library. Cobalt Strike’s default JA3 has been public for years. Sliver, Mythic, and Havoc each have identifiable fingerprints. If your NDR emits JA3 hashes (Zeek, Suricata, Palo Alto), match against published C2 framework fingerprint lists.

Browser processes talking to non-browser destinations. Chrome connecting to a .onion.ws proxy? PowerShell connecting to a Discord CDN? Those combinations are leaks. The T1071.001 Web Protocols detection uses process-to-destination mapping as a core signal.
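To make the JA3 signal concrete: the fingerprint is just an MD5 over five comma-separated ClientHello fields, with values inside each field dash-joined. A sketch of that construction — the field values below are invented for illustration, not a real capture:

```python
import hashlib

def ja3_hash(version, ciphers, extensions, curves, point_formats):
    """Build a JA3 fingerprint: five decimal fields joined by commas,
    values within a field joined by dashes, MD5-hexed."""
    fields = [str(version),
              "-".join(map(str, ciphers)),
              "-".join(map(str, extensions)),
              "-".join(map(str, curves)),
              "-".join(map(str, point_formats))]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Two clients offering the same ciphers in a different order hash differently,
# which is precisely why TLS libraries (and C2 frameworks) are fingerprintable.
a = ja3_hash(771, [49195, 49199], [0, 10, 11], [23, 24], [0])
b = ja3_hash(771, [49199, 49195], [0, 10, 11], [23, 24], [0])
print(a != b)   # True
```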

4. Protocol Tunneling (T1572) — SSH and ICMP Carrying Payload

When HTTP and DNS detection gets tight, attackers fall back to protocol tunneling — wrapping C2 inside SSH, ICMP, or legitimate VPN protocols. The T1572 Protocol Tunneling detection covers four patterns: tunneling tool execution (Chisel, Plink, Iodine, dnscat2, socat), SSH with port-forwarding flags, Windows netsh portproxy, and DNS-over-HTTPS from non-browser processes:

let TunnelingTools = dynamic(["plink.exe", "plink", "chisel.exe", "chisel", "ligolo.exe", "ligolo", "iodine.exe", "iodine", "ptunnel.exe", "ptunnel", "dns2tcp", "dnscat", "dnscat2", "httptunnel", "htc", "hts", "socat"]);
let DoHProviders = dynamic(["cloudflare-dns.com", "dns.google", "doh.opendns.com", "dns.quad9.net", "mozilla.cloudflare-dns.com", "doh.dns.apple.com"]);
let BrowserProcesses = dynamic(["chrome.exe", "firefox.exe", "msedge.exe", "brave.exe", "opera.exe", "iexplore.exe", "safari", "vivaldi.exe", "chromium"]);
let SystemProcesses = dynamic(["svchost.exe", "MsMpEng.exe", "services.exe", "wininit.exe", "dnscrypt-proxy.exe", "stubby.exe"]);
// Branch 1: Known tunneling tool execution
let KnownTools = DeviceProcessEvents
| where TimeGenerated > ago(1d)
| where FileName in~ (TunnelingTools)
| extend DetectionBranch = "KnownTunnelingTool"
| project TimeGenerated, DeviceName, AccountName, FileName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine, FolderPath, DetectionBranch;
// Branch 2: SSH with port-forwarding flags (OpenSSH, Plink)
let SSHTunneling = DeviceProcessEvents
| where TimeGenerated > ago(1d)
| where FileName in~ ("ssh.exe", "ssh", "plink.exe", "plink")
    and (
        ProcessCommandLine has "-L " or
        ProcessCommandLine has "-R " or
        ProcessCommandLine has "-D " or
        ProcessCommandLine has "-w " or
        ProcessCommandLine has "LocalForward" or
        ProcessCommandLine has "RemoteForward"
    )
| extend DetectionBranch = "SSHPortForwarding"
| project TimeGenerated, DeviceName, AccountName, FileName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine, FolderPath, DetectionBranch;
// Branch 3: Netsh portproxy (native Windows tunneling)
let NetshProxy = DeviceProcessEvents
| where TimeGenerated > ago(1d)
| where FileName =~ "netsh.exe"
    and ProcessCommandLine has "portproxy"
    and ProcessCommandLine has "add"
| extend DetectionBranch = "NetshPortProxy"
| project TimeGenerated, DeviceName, AccountName, FileName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine, FolderPath, DetectionBranch;
// Branch 4: DoH from non-browser, non-system processes
let DoHConnections = DeviceNetworkEvents
| where TimeGenerated > ago(1d)
| where RemotePort == 443
    and RemoteUrl has_any (DoHProviders)
    and InitiatingProcessFileName !in~ (BrowserProcesses)
    and InitiatingProcessFileName !in~ (SystemProcesses)
| extend DetectionBranch = "DNSoverHTTPS", AccountName = InitiatingProcessAccountName, FileName = InitiatingProcessFileName, ProcessCommandLine = InitiatingProcessCommandLine, InitiatingProcessFileName = "", InitiatingProcessCommandLine = ""
| project TimeGenerated, DeviceName, AccountName, FileName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine, FolderPath = RemoteUrl, DetectionBranch;
union KnownTools, SSHTunneling, NetshProxy, DoHConnections
| sort by TimeGenerated desc

Why the four branches matter: Each branch catches a different tunneling pattern with different fidelity. Netsh portproxy is the highest-fidelity detection in the set — legitimate admins almost never create portproxy rules, so any hit warrants investigation. The known tool branch catches Chisel, Iodine, and dnscat2 — tools with no legitimate enterprise use. SSH port-forwarding flags (-L, -R, -D) are standard for DevOps workflows, so that branch needs tuning against your admin population. DoH from non-browser processes is the trickiest — you’ll want to allowlist endpoint security agents that legitimately use DoH.

Plink-based RDP tunneling is a documented TTP for Magic Hound, FIN6, and Elephant Beetle. When an alert fires with ProcessCommandLine has "-L 3389:" or "-L 13389:127.0.0.1:3389", that's the exact signature.

Fallback Channels (T1008) — Protocol Switching as a Signal

Sophisticated malware implements primary and backup C2 — HOPLIGHT, BISCUIT, TrickBot, and OilRig’s ISMAgent all ship with multiple hardcoded C2s or protocol-switching logic (ISMAgent falls back from HTTP to DNS when HTTP is blocked). The T1008 Fallback Channels detection catches this pattern: a non-browser process connecting to 3+ distinct external IPs or 3+ distinct ports within an hour is strong evidence of automated retry logic. Pair it with the TLS/DNS signals above and you catch the transition in real time.
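The destination-diversity correlation is straightforward to prototype. A minimal sketch — `fallback_candidates`, the flow-tuple shape, and the thresholds are all stand-ins for your actual flow log schema:

```python
from collections import defaultdict

def fallback_candidates(flows, ip_threshold=3, port_threshold=3):
    """Group flow records by (host, process, hour) and flag any process
    reaching 3+ distinct external IPs or 3+ distinct ports in that hour,
    the automated-retry pattern described above. `flows` is a list of
    (host, process, hour, remote_ip, remote_port) tuples."""
    buckets = defaultdict(lambda: (set(), set()))
    for host, proc, hour, ip, port in flows:
        ips, ports = buckets[(host, proc, hour)]
        ips.add(ip)
        ports.add(port)
    return [key for key, (ips, ports) in buckets.items()
            if len(ips) >= ip_threshold or len(ports) >= port_threshold]

# One process walking four backup C2 IPs inside a single hour gets flagged:
flows = [("ws1", "update.exe", 14, "203.0.113.%d" % i, 443) for i in range(4)]
print(fallback_candidates(flows))
```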

Putting It Together: The Detection Stack

One C2 detection rule will never be enough. A functional C2 detection stack layers at minimum:

  • Beaconing analysis (T1071) on all outbound flows, tuned for your noise floor
  • DNS tunneling detection (T1071.004) on both corporate resolvers and any direct-to-external :53 traffic
  • Protocol tunneling hunts (T1572) on process-level telemetry
  • Fallback channel correlation (T1008) to catch protocol switches
  • Ingress tool transfer (T1105) to catch the payload that delivered the beacon in the first place

Run these as scheduled analytic rules with 15-minute windows for beaconing and 1-hour windows for the destination-diversity detections. Feed hits into a triage queue that auto-enriches with VirusTotal, Shodan, and internal threat intel. The goal isn’t to alert on every anomaly — it’s to surface the two or three per week that actually matter.

Browse the full df00tech detection library — 700+ KQL and SPL queries mapped to MITRE ATT&CK, free forever. Each detection includes false positive guidance, hunting variants, atomic test commands, and response playbooks for when the alert fires.