6 changes: 6 additions & 0 deletions .vscode/extensions.json
@@ -0,0 +1,6 @@
{
  "recommendations": [
    "redhat.vscode-yaml"
  ]
}

14 changes: 13 additions & 1 deletion .vscode/settings.json
@@ -6,5 +6,17 @@
  "python.testing.pytestEnabled": true,
  "python.terminal.activateEnvironment": true,
  "python.envFile": "${workspaceFolder}/.env",
  "python.testing.cwd": "${workspaceFolder}"
  "python.testing.cwd": "${workspaceFolder}",
  "yaml.schemas": {
    "./schemas/RemovedContent.schema.json": "removed/*/*.yml",
    "./schemas/Baseline.schema.json": ["baselines/*.yml", "!removed/baselines/*.yml"],
    "./schemas/CSVLookup.schema.json": "lookups/csv/*.yml",
    "./schemas/Dashboard.schema.json": "dashboards/*.yml",
    "./schemas/DataSource.schema.json": "data_sources/*.yml",
    "./schemas/EventBasedDetection.schema.json": ["detections/**/*.yml", "!removed/detections/*.yml"],
    "./schemas/KVStoreLookup.schema.json": "lookups/kvstore/*.yml",
    "./schemas/FilebackedMacro.schema.json": "macros/*.yml",
    "./schemas/FilebackedSchedule.schema.json": "schedules/*.yml",
    "./schemas/Story.schema.json": ["stories/*.yml", "!removed/stories/*.yml"]
  }
}
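The `yaml.schemas` map above associates each JSON Schema with one or more glob patterns; a pattern prefixed with `!` excludes matching paths, so files under `removed/` are not validated against the regular schemas. A minimal Python sketch of this include/exclude matching follows; note that `fnmatch` only approximates the extension's minimatch-style globbing (e.g. `**` handling differs), and `schema_map` below is a trimmed-down stand-in for the real settings, not the full map:

```python
from fnmatch import fnmatch

def schema_for(path, schema_map):
    """Return the first schema whose glob patterns match `path`.

    Patterns prefixed with '!' are exclusions: if any exclusion
    matches, the schema does not apply (mirroring the yaml.schemas
    semantics of the Red Hat YAML extension, approximately).
    """
    for schema, patterns in schema_map.items():
        if isinstance(patterns, str):
            patterns = [patterns]
        includes = [p for p in patterns if not p.startswith("!")]
        excludes = [p[1:] for p in patterns if p.startswith("!")]
        if any(fnmatch(path, p) for p in includes) and not any(
            fnmatch(path, p) for p in excludes
        ):
            return schema
    return None

# Trimmed-down example map (illustrative subset of the settings above).
schema_map = {
    "./schemas/Baseline.schema.json": ["baselines/*.yml", "!removed/baselines/*.yml"],
    "./schemas/Story.schema.json": ["stories/*.yml", "!removed/stories/*.yml"],
}
```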
43 changes: 12 additions & 31 deletions baselines/baseline_of_blocked_outbound_traffic_from_aws.yml
@@ -1,37 +1,18 @@
name: Baseline of blocked outbound traffic from AWS
id: fc0edd96-ff2b-48b0-9f1f-63da3782fd63
version: 2
date: '2026-01-14'
version: 3
creation_date: '2020-04-29'
modification_date: '2026-05-13'
author: Bhavin Patel, Splunk
type: Baseline
status: production
description: This search establishes, on a per-hour basis, the average and the standard
deviation of the number of outbound connections blocked in your VPC flow logs by
each source IP address (IP address of your EC2 instances). Also recorded is the
number of data points for each source IP. This table outputs to a lookup file to
allow the detection search to operate quickly.
search: '`cloudwatchlogs_vpcflow` action=blocked (src_ip=10.0.0.0/8 OR src_ip=172.16.0.0/12
OR src_ip=192.168.0.0/16) ( dest_ip!=10.0.0.0/8 AND dest_ip!=172.16.0.0/12 AND dest_ip!=192.168.0.0/16)
| bucket _time span=1h | stats count as numberOfBlockedConnections by _time, src_ip
| stats count(numberOfBlockedConnections) as numDataPoints, latest(numberOfBlockedConnections)
as latestCount, avg(numberOfBlockedConnections) as avgBlockedConnections, stdev(numberOfBlockedConnections)
as stdevBlockedConnections by src_ip | table src_ip, latestCount, numDataPoints,
avgBlockedConnections, stdevBlockedConnections | outputlookup baseline_blocked_outbound_connections
| stats count'
how_to_implement: You must install the AWS App for Splunk (version 5.1.0 or later)
and Splunk Add-on for AWS version (4.4.0 or later), then configure your `VPC flow
logs.`.
description: This search establishes, on a per-hour basis, the average and the standard deviation of the number of outbound connections blocked in your VPC flow logs by each source IP address (IP address of your EC2 instances). Also recorded is the number of data points for each source IP. This table outputs to a lookup file to allow the detection search to operate quickly.
search: '`cloudwatchlogs_vpcflow` action=blocked (src_ip=10.0.0.0/8 OR src_ip=172.16.0.0/12 OR src_ip=192.168.0.0/16) ( dest_ip!=10.0.0.0/8 AND dest_ip!=172.16.0.0/12 AND dest_ip!=192.168.0.0/16) | bucket _time span=1h | stats count as numberOfBlockedConnections by _time, src_ip | stats count(numberOfBlockedConnections) as numDataPoints, latest(numberOfBlockedConnections) as latestCount, avg(numberOfBlockedConnections) as avgBlockedConnections, stdev(numberOfBlockedConnections) as stdevBlockedConnections by src_ip | table src_ip, latestCount, numDataPoints, avgBlockedConnections, stdevBlockedConnections | outputlookup baseline_blocked_outbound_connections | stats count'
how_to_implement: You must install the AWS App for Splunk (version 5.1.0 or later) and the Splunk Add-on for AWS (version 4.4.0 or later), then configure your VPC flow logs.
known_false_positives: No false positives have been identified at this time.
references: []
tags:
analytic_story:
- AWS Network ACL Activity
- Suspicious AWS Traffic
- Command And Control
detections:
- Detect Spike in blocked Outbound Traffic from your AWS
product:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
security_domain: network
product:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
security_domain: network
schedule: Default Baseline
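The baseline search above buckets blocked connections into hourly counts per `src_ip`, then summarizes each source with its number of data points, latest hourly count, mean, and sample standard deviation before writing the lookup. A minimal Python sketch of the same aggregation, assuming `events` is a list of `(epoch_seconds, src_ip)` tuples for blocked outbound connections:

```python
from collections import defaultdict
from statistics import mean, stdev

def baseline_blocked_connections(events, span=3600):
    # | bucket _time span=1h | stats count as numberOfBlockedConnections by _time, src_ip
    hourly = defaultdict(int)
    for ts, src_ip in events:
        hourly[((ts // span) * span, src_ip)] += 1

    # | stats count(...) latest(...) avg(...) stdev(...) by src_ip
    per_ip = defaultdict(list)
    for (bucket, src_ip), n in sorted(hourly.items()):
        per_ip[src_ip].append(n)  # sorted by time, so [-1] is the latest bucket

    baseline = {}
    for src_ip, counts in per_ip.items():
        baseline[src_ip] = {
            "numDataPoints": len(counts),
            "latestCount": counts[-1],
            "avgBlockedConnections": mean(counts),
            # Splunk's stdev() is the sample standard deviation, which
            # needs at least two points; fall back to 0.0 otherwise.
            "stdevBlockedConnections": stdev(counts) if len(counts) > 1 else 0.0,
        }
    return baseline
```

In the SPL, the final `| stats count` exists only so the scheduled search returns a single row after the lookup is written.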
58 changes: 12 additions & 46 deletions baselines/baseline_of_kubernetes_container_network_io.yml
@@ -1,55 +1,21 @@
name: Baseline Of Kubernetes Container Network IO
id: 6edaca1d-d436-42d0-8df0-6895d3bf5b70
version: 5
date: '2026-01-14'
version: 6
creation_date: '2024-01-10'
modification_date: '2026-05-13'
author: Matthew Moore, Splunk
type: Baseline
status: production
description: This baseline rule calculates the average and standard deviation of inbound
and outbound network IO for each Kubernetes container. It uses metrics from the
Kubernetes API and the Splunk Infrastructure Monitoring Add-on. The rule generates
a lookup table with the average and standard deviation of the network IO for each
container. This baseline can be used to detect anomalies in network communication
behavior, which may indicate security threats such as data exfiltration, command
and control communication, or compromised container behavior.
search: "| mstats avg(k8s.pod.network.io) as io where `kubernetes_metrics` by k8s.cluster.name
k8s.pod.name k8s.node.name direction span=10s | eval service = replace('k8s.pod.name',
\"-\\w{5}$|-[abcdef0-9]{8,10}-\\w{5}$\", \"\") | eval key = 'k8s.cluster.name' +
\":\" + 'service' | stats avg(eval(if(direction=\"transmit\", io,null()))) as avg_outbound_network_io
avg(eval(if(direction=\"receive\", io,null()))) as avg_inbound_network_io stdev(eval(if(direction=\"\
transmit\", io,null()))) as stdev_outbound_network_io stdev(eval(if(direction=\"\
receive\", io,null()))) as stdev_inbound_network_io count latest(_time) as last_seen
by key | outputlookup k8s_container_network_io_baseline"
how_to_implement: "To implement this detection, follow these steps: 1. Deploy the
OpenTelemetry Collector (OTEL) to your Kubernetes cluster. 2. Enable the hostmetrics/process
receiver in the OTEL configuration. 3. Ensure that the process metrics, specifically
Process.cpu.utilization and process.memory.utilization, are enabled. 4. Install
the Splunk Infrastructure Monitoring (SIM) add-on (ref: https://splunkbase.splunk.com/app/5247)
5. Configure the SIM add-on with your Observability Cloud Organization ID and Access
Token. 6. Set up the SIM modular input to ingest Process Metrics. Name this input
\"sim_process_metrics_to_metrics_index\". 7. In the SIM configuration, set the Organization
ID to your Observability Cloud Organization ID. 8. Set the Signal Flow Program to
the following: data('process.threads').publish(label='A'); data('process.cpu.utilization').publish(label='B');
data('process.cpu.time').publish(label='C'); data('process.disk.io').publish(label='D');
data('process.memory.usage').publish(label='E'); data('process.memory.virtual').publish(label='F');
data('process.memory.utilization').publish(label='G'); data('process.cpu.utilization').publish(label='H');
data('process.disk.operations').publish(label='I'); data('process.handles').publish(label='J');
data('process.threads').publish(label='K') 9. Set the Metric Resolution to 10000.
10. Leave all other settings at their default values."
description: This baseline rule calculates the average and standard deviation of inbound and outbound network IO for each Kubernetes container. It uses metrics from the Kubernetes API and the Splunk Infrastructure Monitoring Add-on. The rule generates a lookup table with the average and standard deviation of the network IO for each container. This baseline can be used to detect anomalies in network communication behavior, which may indicate security threats such as data exfiltration, command and control communication, or compromised container behavior.
search: "| mstats avg(k8s.pod.network.io) as io where `kubernetes_metrics` by k8s.cluster.name k8s.pod.name k8s.node.name direction span=10s | eval service = replace('k8s.pod.name', \"-\\w{5}$|-[abcdef0-9]{8,10}-\\w{5}$\", \"\") | eval key = 'k8s.cluster.name' + \":\" + 'service' | stats avg(eval(if(direction=\"transmit\", io,null()))) as avg_outbound_network_io avg(eval(if(direction=\"receive\", io,null()))) as avg_inbound_network_io stdev(eval(if(direction=\"transmit\", io,null()))) as stdev_outbound_network_io stdev(eval(if(direction=\"receive\", io,null()))) as stdev_inbound_network_io count latest(_time) as last_seen by key | outputlookup k8s_container_network_io_baseline"
how_to_implement: "To implement this detection, follow these steps: 1. Deploy the OpenTelemetry Collector (OTEL) to your Kubernetes cluster. 2. Enable the hostmetrics/process receiver in the OTEL configuration. 3. Ensure that the process metrics, specifically process.cpu.utilization and process.memory.utilization, are enabled. 4. Install the Splunk Infrastructure Monitoring (SIM) add-on (ref: https://splunkbase.splunk.com/app/5247). 5. Configure the SIM add-on with your Observability Cloud Organization ID and Access Token. 6. Set up the SIM modular input to ingest Process Metrics. Name this input \"sim_process_metrics_to_metrics_index\". 7. In the SIM configuration, set the Organization ID to your Observability Cloud Organization ID. 8. Set the Signal Flow Program to the following: data('process.threads').publish(label='A'); data('process.cpu.utilization').publish(label='B'); data('process.cpu.time').publish(label='C'); data('process.disk.io').publish(label='D'); data('process.memory.usage').publish(label='E'); data('process.memory.virtual').publish(label='F'); data('process.memory.utilization').publish(label='G'); data('process.cpu.utilization').publish(label='H'); data('process.disk.operations').publish(label='I'); data('process.handles').publish(label='J'); data('process.threads').publish(label='K') 9. Set the Metric Resolution to 10000. 10. Leave all other settings at their default values."
known_false_positives: No false positives have been identified at this time.
references: []
tags:
analytic_story:
- Abnormal Kubernetes Behavior using Splunk Infrastructure Monitoring
detections:
- Kubernetes Anomalous Inbound Outbound Network IO
product:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
security_domain: network
deployment:
scheduling:
product:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
security_domain: network
custom_schedule:
cron_schedule: 0 2 * * 0
earliest_time: -30d@d
latest_time: -1d@d
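The `eval service = replace('k8s.pod.name', ...)` step strips the ReplicaSet hash and pod suffix so individual pods roll up to a stable service-level key before the avg/stdev aggregation. A Python sketch of that rollup and aggregation follows; the function names and the `(cluster, pod, direction, io)` sample shape are illustrative assumptions, not the add-on's API, and directions are assumed to be exactly "transmit" or "receive":

```python
import re
from collections import defaultdict
from statistics import mean, stdev

# Strip ReplicaSet/pod hash suffixes (e.g. "-7d4b9c8f6d-x2x1q") so pods
# roll up to a service, mirroring `eval service = replace('k8s.pod.name', ...)`.
POD_SUFFIX = re.compile(r"-\w{5}$|-[abcdef0-9]{8,10}-\w{5}$")

def service_key(cluster, pod_name):
    # eval key = 'k8s.cluster.name' + ":" + 'service'
    return f"{cluster}:{POD_SUFFIX.sub('', pod_name)}"

def network_io_baseline(samples):
    """samples: iterable of (cluster, pod_name, direction, io_bytes).
    Returns per-key avg/stdev of transmit and receive IO, shaped like
    the k8s_container_network_io_baseline lookup."""
    series = defaultdict(lambda: {"transmit": [], "receive": []})
    for cluster, pod, direction, io in samples:
        series[service_key(cluster, pod)][direction].append(io)

    table = {}
    for key, per_dir in series.items():
        row = {}
        for direction, label in (("transmit", "outbound"), ("receive", "inbound")):
            vals = per_dir[direction]
            row[f"avg_{label}_network_io"] = mean(vals) if vals else None
            row[f"stdev_{label}_network_io"] = stdev(vals) if len(vals) > 1 else 0.0
        table[key] = row
    return table
```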
58 changes: 12 additions & 46 deletions baselines/baseline_of_kubernetes_container_network_io_ratio.yml
@@ -1,55 +1,21 @@
name: Baseline Of Kubernetes Container Network IO Ratio
id: f395003b-6389-4e14-89bf-ac4dbea215bd
version: 3
date: '2026-01-14'
version: 4
creation_date: '2024-01-10'
modification_date: '2026-05-13'
author: Matthew Moore, Splunk
type: Baseline
status: production
description: This baseline rule calculates the average ratio of inbound to outbound
network IO for each Kubernetes container. It uses metrics from the Kubernetes API
and the Splunk Infrastructure Monitoring Add-on. The rule generates a lookup table
with the average and standard deviation of the network IO ratio for each container.
This baseline can be used to detect anomalies in network communication behavior,
which may indicate security threats such as data exfiltration, command and control
communication, or compromised container behavior.
search: "| mstats avg(k8s.pod.network.io) as io where `kubernetes_metrics` by k8s.cluster.name
k8s.pod.name k8s.node.name direction span=10s | eval service = replace('k8s.pod.name',
\"-\\w{5}$|-[abcdef0-9]{8,10}-\\w{5}$\", \"\") | eval key = 'k8s.cluster.name' +
\":\" + 'service' | stats avg(eval(if(direction=\"transmit\", io,null()))) as outbound_network_io
avg(eval(if(direction=\"receive\", io,null()))) as inbound_network_io by key _time
| eval inbound:outbound = inbound_network_io/outbound_network_io | eval outbound:inbound
= outbound_network_io/inbound_network_io | stats avg(*:*) as avg_*:* stdev(*:*)
as stdev_*:* count latest(_time) as last_seen by key | outputlookup k8s_container_network_io_ratio_baseline"
how_to_implement: "To implement this detection, follow these steps: 1. Deploy the
OpenTelemetry Collector (OTEL) to your Kubernetes cluster. 2. Enable the hostmetrics/process
receiver in the OTEL configuration. 3. Ensure that the process metrics, specifically
Process.cpu.utilization and process.memory.utilization, are enabled. 4. Install
the Splunk Infrastructure Monitoring (SIM) add-on. (ref: https://splunkbase.splunk.com/app/5247)
5. Configure the SIM add-on with your Observability Cloud Organization ID and Access
Token. 6. Set up the SIM modular input to ingest Process Metrics. Name this input
\"sim_process_metrics_to_metrics_index\". 7. In the SIM configuration, set the Organization
ID to your Observability Cloud Organization ID. 8. Set the Signal Flow Program to
the following: data('process.threads').publish(label='A'); data('process.cpu.utilization').publish(label='B');
data('process.cpu.time').publish(label='C'); data('process.disk.io').publish(label='D');
data('process.memory.usage').publish(label='E'); data('process.memory.virtual').publish(label='F');
data('process.memory.utilization').publish(label='G'); data('process.cpu.utilization').publish(label='H');
data('process.disk.operations').publish(label='I'); data('process.handles').publish(label='J');
data('process.threads').publish(label='K') 9. Set the Metric Resolution to 10000.
10. Leave all other settings at their default values."
description: This baseline rule calculates the average ratio of inbound to outbound network IO for each Kubernetes container. It uses metrics from the Kubernetes API and the Splunk Infrastructure Monitoring Add-on. The rule generates a lookup table with the average and standard deviation of the network IO ratio for each container. This baseline can be used to detect anomalies in network communication behavior, which may indicate security threats such as data exfiltration, command and control communication, or compromised container behavior.
search: "| mstats avg(k8s.pod.network.io) as io where `kubernetes_metrics` by k8s.cluster.name k8s.pod.name k8s.node.name direction span=10s | eval service = replace('k8s.pod.name', \"-\\w{5}$|-[abcdef0-9]{8,10}-\\w{5}$\", \"\") | eval key = 'k8s.cluster.name' + \":\" + 'service' | stats avg(eval(if(direction=\"transmit\", io,null()))) as outbound_network_io avg(eval(if(direction=\"receive\", io,null()))) as inbound_network_io by key _time | eval inbound:outbound = inbound_network_io/outbound_network_io | eval outbound:inbound = outbound_network_io/inbound_network_io | stats avg(*:*) as avg_*:* stdev(*:*) as stdev_*:* count latest(_time) as last_seen by key | outputlookup k8s_container_network_io_ratio_baseline"
how_to_implement: "To implement this detection, follow these steps: 1. Deploy the OpenTelemetry Collector (OTEL) to your Kubernetes cluster. 2. Enable the hostmetrics/process receiver in the OTEL configuration. 3. Ensure that the process metrics, specifically process.cpu.utilization and process.memory.utilization, are enabled. 4. Install the Splunk Infrastructure Monitoring (SIM) add-on (ref: https://splunkbase.splunk.com/app/5247). 5. Configure the SIM add-on with your Observability Cloud Organization ID and Access Token. 6. Set up the SIM modular input to ingest Process Metrics. Name this input \"sim_process_metrics_to_metrics_index\". 7. In the SIM configuration, set the Organization ID to your Observability Cloud Organization ID. 8. Set the Signal Flow Program to the following: data('process.threads').publish(label='A'); data('process.cpu.utilization').publish(label='B'); data('process.cpu.time').publish(label='C'); data('process.disk.io').publish(label='D'); data('process.memory.usage').publish(label='E'); data('process.memory.virtual').publish(label='F'); data('process.memory.utilization').publish(label='G'); data('process.cpu.utilization').publish(label='H'); data('process.disk.operations').publish(label='I'); data('process.handles').publish(label='J'); data('process.threads').publish(label='K') 9. Set the Metric Resolution to 10000. 10. Leave all other settings at their default values."
known_false_positives: No false positives have been identified at this time.
references: []
tags:
analytic_story:
- Abnormal Kubernetes Behavior using Splunk Infrastructure Monitoring
detections:
- Kubernetes Anomalous Inbound to Outbound Network IO Ratio
product:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
security_domain: network
deployment:
scheduling:
product:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
security_domain: network
custom_schedule:
cron_schedule: 0 2 * * 0
earliest_time: -30d@d
latest_time: -1d@d
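This variant stores the mean and standard deviation of both directional ratios per container key. A small Python sketch of the per-key ratio baseline, plus a hypothetical z-score check of the kind a companion detection might run against the lookup (the actual detection search is not shown in this diff, and the 3-sigma threshold is an assumption):

```python
from statistics import mean, stdev

def ratio_baseline(inbound, outbound):
    """Given aligned per-interval inbound/outbound IO series for one
    container key, compute avg/stdev of both directional ratios, as
    the k8s_container_network_io_ratio_baseline lookup does."""
    # Skip intervals where the denominator is zero to avoid division errors.
    in_out = [i / o for i, o in zip(inbound, outbound) if o]
    out_in = [o / i for i, o in zip(inbound, outbound) if i]

    def summarize(xs):
        return {"avg": mean(xs), "stdev": stdev(xs) if len(xs) > 1 else 0.0}

    return {"inbound:outbound": summarize(in_out),
            "outbound:inbound": summarize(out_in)}

def is_anomalous(current_ratio, baseline, threshold=3.0):
    # Hypothetical companion check: flag ratios more than `threshold`
    # standard deviations from the baseline mean.
    if baseline["stdev"] == 0:
        return current_ratio != baseline["avg"]
    return abs(current_ratio - baseline["avg"]) / baseline["stdev"] > threshold
```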