Strict data residency requirements across multiple geographic regions.
→Deploy multiple Microsoft Sentinel workspaces, one per region. Use Azure Lighthouse for centralized management.
Why: Keeps log data within geographic boundaries for compliance while allowing a central SOC to operate across all workspaces.
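With Lighthouse delegation in place, a central SOC can query all regional workspaces from one pane. A minimal sketch using the cross-workspace `workspace()` function, assuming hypothetical workspace names `sentinel-eu` and `sentinel-us`:

```kusto
// Cross-workspace query: union the same table from two regional
// workspaces. Workspace names are placeholders; substitute your own.
union
    workspace("sentinel-eu").SecurityAlert,
    workspace("sentinel-us").SecurityAlert
| where TimeGenerated > ago(24h)
| summarize AlertCount = count() by AlertName, ProductName
```

Note that the log data itself never leaves its regional workspace; only the query results cross the boundary.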
Sentinel workspace ingesting over 100 GB of data per day.
→Switch the Log Analytics workspace pricing tier from Pay-As-You-Go to a Commitment Tier (100 GB/day or higher).
Why: Commitment Tiers provide significant cost savings for high-volume, predictable data ingestion compared to standard pricing.
High-volume logs (e.g., Windows Security Events) are driving up SIEM costs.
→1. Use a Data Collection Rule (DCR) to filter events at the source. 2. Configure the destination table for Basic Logs.
Why: DCRs reduce ingestion costs by collecting only necessary events. Basic Logs reduce storage costs for verbose data not requiring full analytics.
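The source-side filter is expressed as a KQL transformation inside the DCR. A sketch of a `transformKql` expression, assuming only a handful of logon- and process-related Event IDs are needed (the ID list is illustrative):

```kusto
// DCR transformation: drop every Security event except the listed
// Event IDs before the agent sends anything to the workspace.
source
| where EventID in (4624, 4625, 4672, 4688)
```

Here `source` is the DCR's standard name for the incoming stream; everything filtered out here is never ingested, and never billed.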
Compliance requires data retention for more than 2 years (e.g., 7 years).
→Configure 90-day interactive retention and 7-year total retention; data older than the interactive period moves to the low-cost archive tier.
Why: Balances immediate searchability (interactive) with low-cost, long-term storage (archive). Access archived data via Search Jobs.
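Archived data is not directly queryable; a Search Job asynchronously scans the archive and materializes matches into a new `_SRCH` results table. A sketch of the query body you might submit, assuming a hypothetical investigation into one service account:

```kusto
// Search job query: runs against archived SecurityEvent data and
// writes hits to a *_SRCH table for interactive analysis.
SecurityEvent
| where EventID == 4625
| where TargetAccount contains "svc-backup"   // hypothetical account
```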
Collect security events from on-premises Windows and Linux servers.
→Connect the servers to Azure Arc (Connected Machine agent), then deploy the Azure Monitor Agent (AMA) as an Arc extension.
Why: Arc extends the Azure control plane to on-premises, enabling native management and data collection with the modern AMA agent.
Ingest logs from third-party devices (e.g., firewalls) that support Syslog.
→Deploy a dedicated Linux VM as a Log Forwarder with the AMA. Use CEF format for structured security data.
Why: Centralizes collection for devices that cannot host an agent. CEF provides a normalized, queryable schema for security events.
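Once forwarded, CEF events land in the `CommonSecurityLog` table with vendor fields already mapped. For example, to review firewall denies (the vendor value is illustrative; adjust to your device):

```kusto
// CEF events arrive pre-parsed into CommonSecurityLog columns.
CommonSecurityLog
| where DeviceVendor == "Palo Alto Networks"   // adjust to your vendor
| where DeviceAction == "deny"
| summarize Denies = count() by SourceIP, DestinationIP
```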
Ingest incidents and alerts from Microsoft Defender XDR into Sentinel.
→Enable the Microsoft Defender XDR data connector and its incident creation/bi-directional sync option.
Why: Creates a unified incident queue and ensures status changes are synchronized between Sentinel and the Defender portal.
Filter specific Windows Event IDs at the source to reduce ingestion volume.
→Configure a Data Collection Rule (DCR) with an XPath query to specify which Event IDs to collect.
Why: Reduces ingestion volume and cost by filtering data at the source agent, before it is sent to the workspace.
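Each XPath entry in the DCR names an event channel and a filter expression. A sketch collecting only two logon-related IDs from the Security channel:

```
Security!*[System[(EventID=4624 or EventID=4625)]]
```

Everything not matching the expression is discarded at the agent, so it never counts toward ingestion.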
Require the fastest possible detection time for critical events.
→Use a Near Real-Time (NRT) analytics rule.
Why: NRT rules run every minute, offering detection latency of ~1-2 minutes, much faster than the 5-minute minimum for scheduled rules.
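NRT rules accept a single-table KQL query (no joins or unions). A minimal example that flags clearing of the Security event log, a classic critical event:

```kusto
// NRT rule query: Event ID 1102 = "The audit log was cleared".
SecurityEvent
| where EventID == 1102
```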
Detect a threshold of events within a specific time window (e.g., brute force attacks).
→Create a Scheduled analytics rule using KQL `summarize ... by bin(TimeGenerated, 5m), ...`.
Why: The `bin()` function is critical for grouping events into discrete, non-overlapping time windows for accurate threshold detection.
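Putting the pattern together, a scheduled brute-force rule might look like this (threshold and window are illustrative):

```kusto
// Count failed logons (4625) per account/host in fixed 5-minute bins
// and alert when any bin exceeds the threshold.
SecurityEvent
| where EventID == 4625
| summarize FailedLogons = count()
    by TargetAccount, Computer, bin(TimeGenerated, 5m)
| where FailedLogons > 10
```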
Detect complex, multi-stage attacks that individual alerts might miss.
→Enable Fusion analytics rules for advanced multistage attack detection.
Why: Fusion uses ML to correlate low-fidelity signals across multiple data sources into high-confidence incidents, reducing alert fatigue.
Detect insider threats or compromised accounts based on anomalous behavior.
→Enable User and Entity Behavior Analytics (UEBA).
Why: UEBA establishes behavioral baselines for users and entities, then flags significant deviations that don't match specific rule logic.
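UEBA writes its scored anomalies to the `BehaviorAnalytics` table, which you can query directly during hunting. A sketch surfacing the highest-priority anomalies (the threshold is illustrative):

```kusto
// Surface UEBA anomalies with a high investigation priority score.
BehaviorAnalytics
| where InvestigationPriority > 5
| project TimeGenerated, UserName, ActivityType, InvestigationPriority
```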
Write a single, source-agnostic analytics rule for multiple data sources (e.g., DNS from various vendors).
→Use Advanced Security Information Model (ASIM) parsers in the KQL query.
Why: ASIM provides a normalized schema, allowing queries to run against a unified view (e.g., `imDns`) instead of multiple vendor-specific tables.
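With ASIM, one query covers every onboarded DNS source. For example, hunting failed lookups across vendors through the unifying `imDns` parser:

```kusto
// ASIM DNS: one normalized query instead of per-vendor tables.
imDns(starttime=ago(1d))
| where EventResultDetails == "NXDOMAIN"
| summarize NxCount = count() by SrcIpAddr
| top 10 by NxCount
```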
Manage Sentinel content (analytics rules, workbooks) as code and deploy across environments.
→Use Microsoft Sentinel Repositories to connect a GitHub or Azure DevOps repository.
Why: Enables CI/CD workflows, version control, and automated, consistent deployment of security content (Sentinel-as-Code).
Automate basic incident triage tasks like assigning owners, changing status, or adding tags.
→Use an Automation Rule triggered on incident creation.
Why: Automation rules are lightweight and synchronous, ideal for simple triage actions without the overhead of a Logic App.
Automate complex incident responses involving external systems (e.g., block user in Entra ID, send Teams message).
→Create a Playbook (Azure Logic App) and trigger it from an Automation Rule.
Why: Logic Apps provide the orchestration engine and connectors needed for complex, multi-step responses and integrations.
Understand the scope of an attack by visualizing relationships between entities (users, IPs, hosts).
→Use the Investigation Graph on the incident details page.
Why: Provides an interactive map of the attack, making it easy to see connections and pivot between related entities and alerts.
Standardize and accelerate common investigation workflows for the SOC team.
→Create and share a Promptbook in Microsoft Security Copilot.
Why: Promptbooks chain together a series of natural language prompts to create a guided, repeatable investigation process for common scenarios.