Sentinel Deployment Checklist: What You Actually Need Before Day One

All right, class.

This is the pre-deployment checklist for people who actually want to know what they're doing: the architectural decisions that determine whether you end up with a functional security platform or a nightmare you regret.

What You're Actually Building

Sentinel runs on Log Analytics. That's it. There's no separate Sentinel database. You ingest data into Log Analytics tables, Sentinel reads those tables, automation and detection rules operate on those tables, and the Defender portal visualizes the data in those tables.

This matters because Log Analytics is regional. Sentinel is regional. Your data stays in the region you pick. You don't (well, you shouldn't at least) split your security data across regions.

The Defender portal sits on top of Log Analytics. Prettier UI, better incident correlation, native integration with Defender products. But still backed by Log Analytics underneath.

Workspace Architecture: The One Decision That Actually Matters

Here's what actually matters: you can't cross tenant boundaries with data ingestion. A Log Analytics workspace in Tenant A cannot ingest data from resources in Tenant B. Sentinel in Tenant A cannot correlate incidents from Tenant B. That's not possible with Azure's native architecture (you can have some fun with APIs + Azure Storage, but I'd advise staying far away from that).

So your decision is simpler than it sounds:

Single tenant, single workspace: You're one company. One Azure tenant. One Log Analytics workspace. Everything flows into that workspace. Windows events from all your servers, cloud logs, everything. One ingestion bill. Simple to manage. Easy to query across data sources.

This is the default. Most companies do this.

Single tenant, multiple workspaces: You're one company, but you have strict isolation requirements. Financial data in one workspace, operational data in another. HIPAA compliance requires data to stay in HIPAA-eligible regions, so that workspace is separate. You have multiple workspaces in the same tenant, each isolated, each with its own retention and RBAC.

This is more complex. You maintain separate detection rule sets. You deploy rules to each workspace separately. Incident correlation only works within a workspace.

Multi-tenant via Lighthouse (MSP scenario): You're managing security for multiple customers. Each customer is a different tenant. Each customer has their own Log Analytics workspace in their own tenant. You use Azure Lighthouse to delegate permissions from your master tenant so you can manage all those customer workspaces without leaving your own tenant.

This is the best way to manage multiple tenants at scale. You don't pull data into your tenant. Lighthouse lets you access and manage customer workspaces while the data stays in their tenant.

The practical reality: If you're deploying for customers (MSP, consultant, agency), each customer gets one workspace in their own tenant. You set up Lighthouse delegated access so your team can manage all customers' workspaces from your master tenant. Customer data never crosses tenant boundaries. Your team has a unified management experience across all customer environments.

If you're deploying for yourself (single company), you get one workspace unless you have hard isolation requirements. Then you get multiple workspaces in the same tenant.

RBAC: The Dual Permission Model

Here's the problem nobody tells you: you're about to manage permissions in two separate systems.

Right now (until March 31, 2027): You assign Azure RBAC roles in the Azure portal. Sentinel Reader, Responder, Contributor. Done. Users see Sentinel data and correlated Defender alerts in one place.

After March 31, 2027: The Azure portal Sentinel interface is gone. But Azure RBAC for Sentinel doesn't disappear. It's still there, still required, still managed in the Azure portal. Plus now you also need Defender Unified RBAC roles assigned in the Defender portal for XDR data access (obviously, don't wait for March 2027; start implementing that as soon as you can).

What this means: Your SOC analyst who needs to see both Sentinel incidents and Defender for Endpoint alerts needs two role assignments. One in Azure portal (Sentinel Responder). One in Defender portal (Security operations). If either one is missing, they have gaps in what they can see.

Azure RBAC roles for Sentinel SIEM (Azure portal):

  • Sentinel Reader: View only
  • Sentinel Responder: Manage incidents, run playbooks
  • Sentinel Contributor: Create analytics rules, modify configuration
  • Log Analytics Contributor: Create workbooks, configure data

Defender Unified RBAC roles (Defender portal, post-March 31, 2027):

  • Security operations: Manage incidents and alerts
  • Security posture: Manage vulnerabilities
  • Data operations: Query data lake, manage retention
  • Authorization and settings: Manage roles (admin only)

Assign to Entra ID groups, not individuals. Both portals. Both systems required.

Most organizations will forget one or the other during the transition. Your analysts will complain they can't see certain data. You'll spend a week debugging which permission system is missing. You can do better than that.
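If you're scripting the Azure half of this, a role assignment is just a PUT against the Authorization API. Here's a minimal Python sketch, assuming the azure-identity and requests packages; the subscription ID, group object ID, and the Microsoft Sentinel Responder role definition GUID are placeholders you fill in yourself (look the GUID up in the built-in roles documentation rather than trusting anything hard-coded).

```python
import uuid

import requests
from azure.identity import DefaultAzureCredential

# Placeholders - fill in for your environment.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<sentinel-resource-group>"       # scope the assignment to the Sentinel resource group
GROUP_OBJECT_ID = "<entra-id-group-object-id>"     # assign to a group, never an individual
ROLE_DEFINITION_ID = "<sentinel-responder-role-definition-guid>"  # look up the current GUID

scope = f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"

# Role assignments are PUTs keyed by a new GUID at the chosen scope.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com{scope}"
    f"/providers/Microsoft.Authorization/roleAssignments/{uuid.uuid4()}"
    "?api-version=2022-04-01"
)
body = {
    "properties": {
        "roleDefinitionId": (
            f"/subscriptions/{SUBSCRIPTION_ID}"
            f"/providers/Microsoft.Authorization/roleDefinitions/{ROLE_DEFINITION_ID}"
        ),
        "principalId": GROUP_OBJECT_ID,
        "principalType": "Group",
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print("Role assignment created:", resp.json()["id"])
```

Remember: this only covers the Azure RBAC half. The Defender Unified RBAC role gets assigned separately in the Defender portal, and your analysts need both.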

News: Microsoft extended the sunset deadline to March 31, 2027 (from July 2026). The additional nine-month window reflects feedback from enterprise customers managing Sentinel at scale.

Data Retention: Don't Get Surprised By Storage Costs

The analytics tier stores data for 90 days free. That means 90 days of hot storage that you can query at full performance, with no charges beyond ingestion.

After 90 days, data is erased unless you extend it. If you extend beyond 90 days, you pay for the extension. The longer you extend, the more you pay.

Most people set it to 90 days and call it done, which is perfectly fine. That works for active threat hunting. If you need longer retention for compliance, you move data to the data lake tier and pay separately.

Here's what you actually need to decide: What data do you query after 30 days? Answer that and you know your retention window. If your compliance requirement is "keep logs for 7 years," you're probably not querying all of that actively. Move it to data lake cold storage.

If you need 90 days of hot queries for active threat hunting, leave it at the free 90 days. Don't extend to 180 days because someone thinks "more data is safer." If you and your SOC are doing a good job, there's no need to extend beyond that (unless your budget fully allows it, of course, in which case by all means go for it!). You'll pay roughly $0.10 per GB per month for every GB retained beyond 90 days. On 10 GB per day, extending to 180 days means holding an extra ~900 GB at steady state, roughly $90 per month; stretch it towards a year and you're near $300 per month, just for storing data you won't query. Be specific about the data you need to keep: quite often the legal requirement is for data like SigninLogs, not AADNonInteractiveUserSignInLogs, and that alone can save you thousands over the course of a year.
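If you want to sanity-check that kind of number before committing, the arithmetic is simple enough to script. A rough sketch using the ~$0.10 per GB per month figure above; it's an estimator, not a pricing calculator, and the real rate varies by region and tier, so check the current price sheet.

```python
def extended_retention_cost(daily_gb: float,
                            total_retention_days: int,
                            free_days: int = 90,
                            price_per_gb_month: float = 0.10) -> float:
    """Rough monthly cost of keeping analytics-tier data beyond the free window.

    At steady state you hold (total_retention_days - free_days) days' worth of
    extra data, billed per GB per month. The price is illustrative.
    """
    extra_days = max(total_retention_days - free_days, 0)
    return daily_gb * extra_days * price_per_gb_month


for days in (90, 180, 365):
    cost = extended_retention_cost(daily_gb=10, total_retention_days=days)
    print(f"{days:>3}-day retention on 10 GB/day -> ~${cost:,.0f}/month extra")
# 90 days  -> $0 (inside the free window)
# 180 days -> ~$90/month
# 365 days -> ~$275/month
```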

Regions and Data Residency

Pick a region. Your workspace is in that region. Your data stays in that region. Microsoft doesn't move it around.

If you have GDPR requirements, your workspace lives in Europe. Not East US with European data somehow. Europe.

If you have HIPAA requirements, your workspace lives in US regions that are HIPAA certified.

If you're in the UK, you want UK South or UK West. If you're in France, you want France Central, you know the drill.

The data lake tier (used for long-term retention) is available in fewer regions, and the list is changing quite rapidly given it's a new product, so keep an eye on this one.

Check which region supports both your workspace and the data lake if you plan to use long-term retention from day one.

Primary vs Secondary Workspaces: Only Matters If You Have Multiple

If you have multiple workspaces (Lighthouse setup with different customer tenants), one workspace per tenant is "primary." That primary workspace gets full incident correlation with Defender XDR.

What does that mean? Defender sees threats from your primary workspace and correlates them with Defender alerts. Creates unified incidents.

Secondary workspaces don't get that correlation. They stay isolated.

If you have one workspace per customer via Lighthouse, each customer's workspace is primary within their own tenant. That's the right setup.

Data Connectors: Enable Them Once, Verify Them Immediately

Don't enable every connector "just in case."

For each data source you ingest, you need to verify the prerequisite is met and the data actually flows.

Defender products (Defender for Cloud, Endpoint, Office 365, etc.): You must have the product licensed. Enabling the connector without the license does nothing. You'll see zero data and waste time debugging.

Azure Activity: Enabling the connector alone doesn't magically pull logs. The Activity log reaches the workspace through a subscription-level diagnostic setting (the connector configures it via an Azure Policy assignment), so verify that the assignment actually applied to your subscriptions.

Entra ID sign-in and audit logs: Sign-in logs require a P1 license (included in Business Premium); audit logs work with any Entra ID license. That's it. You enable the connector and data flows. Volume is manageable.

Syslog and CEF (on-prem logs): Requires a Linux collector VM in your network that forwards logs to Sentinel. Don't forget the VM cost. A small Linux VM is $30 to $80 per month. Sentinel ingestion can be pricey, so plan well before enabling it (for example, exclude blocked traffic).

Custom connectors (APIs, webhooks, whatever): You build the integration. You define the schema. You're responsible for making sure the data fits in Log Analytics. Send 100 MB per event and you'll have a very expensive day.

Enable connectors one at a time. Verify the data appears in the workspace. Make sure it's not ingesting noise. Then move to the next connector.
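One way to make that "verify the data appears" step repeatable is to query the Usage table after each connector you enable. A minimal sketch using the azure-monitor-query and azure-identity packages (the workspace ID is a placeholder); Usage reports ingested volume per table in MB, so it tells you both whether data is flowing and roughly what it will cost.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # the workspace GUID, not its name

# Ingested volume per table over the last 24 hours, converted to GB.
QUERY = """
Usage
| where TimeGenerated > ago(24h)
| summarize IngestedGB = round(sum(Quantity) / 1024, 3) by DataType
| order by IngestedGB desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

for table in result.tables:
    for data_type, ingested_gb in table.rows:
        print(f"{data_type:<45} {ingested_gb:>8} GB")
```

If a connector is enabled and its table still isn't in that list after an hour or two, go fix the prerequisite (license, diagnostic setting, forwarder) before enabling the next one.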

Cost Forecasting Before Day One

Everyone sees "$5.22 per GB" and thinks it's cheap. Then they ingest Windows Security events from 30 servers and the bill arrives at $2,000 per month.

Here's the math:

Windows Security events: average 0.5 GB per server per day. 30 servers. 15 GB per day. Times $5.22 per GB. That's $78 per day, or $2,349 per month.

Entra ID sign-in logs: 0.1 GB per day for a 100-person organization. $0.52 per day, or $16 per month. (Azure Activity, by contrast, is free to ingest.)

Cloud app discovery: 0.05 GB per day. $0.26 per day or $7 per month.

Defender alerts: Negligible. Less than $1 per month.

Total: 15.15 GB per day. $79.08 per day. $2,372 per month.
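That math is worth scripting so you can rerun it every time someone wants a new data source. A quick sketch mirroring the numbers above; the per-GB price and the per-source volumes are assumptions you should replace with your region's current pricing and your own measured volumes.

```python
PRICE_PER_GB = 5.22  # PAYG analytics ingestion - illustrative, check your region's pricing

# Estimated GB/day per source - replace with your own measurements.
sources = {
    "Windows Security events (30 servers x 0.5 GB)": 15.0,
    "Entra ID sign-in logs (100 users)": 0.1,
    "Cloud app discovery": 0.05,
    "Defender alerts": 0.001,
}

daily_gb = sum(sources.values())
for name, gb in sources.items():
    print(f"{name:<48} {gb:>6.2f} GB/day  ${gb * PRICE_PER_GB * 30:>9,.2f}/month")
print(f"{'Total':<48} {daily_gb:>6.2f} GB/day  ${daily_gb * PRICE_PER_GB * 30:>9,.2f}/month")

# Swap the Windows line for 2.4 GB/day (filtered) and the total drops to roughly $375/month.
```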

If that's over your budget, you filter Windows logs (as you should!). You log only Security events that matter: failed logins, privilege escalation, account creation, group changes. You filter out noise. You get down to 0.08 GB per server per day. Suddenly it's 2.4 GB total per day. $12.53 per day. $375 per month.

Do the math before day one. If your budget is $500 per month and your unfiltered estimate is $2,500, you're filtering from the start.
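Filtering Windows Security events at the source happens in the data collection rule (DCR) the Azure Monitor Agent uses, via XPath queries over event IDs. Here's a hedged sketch of what such a DCR can look like, pushed through the ARM REST API from Python; the resource names, the event ID list, and the API version are assumptions to adapt and verify against current documentation, and you still have to associate the DCR with your VMs afterwards.

```python
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
WORKSPACE_RESOURCE_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)
DCR_NAME = "dcr-security-events-filtered"

# Collect only the Security events that matter: logons, privilege use,
# account creation, group changes. Tune the event ID list to your detections.
XPATH = (
    "Security!*[System[(EventID=4624 or EventID=4625 or EventID=4672 "
    "or EventID=4720 or EventID=4728 or EventID=4732)]]"
)

dcr_body = {
    "location": "uksouth",  # keep it aligned with your workspace region strategy
    "properties": {
        "dataSources": {
            "windowsEventLogs": [
                {
                    "name": "securityEventsFiltered",
                    "streams": ["Microsoft-SecurityEvent"],
                    "xPathQueries": [XPATH],
                }
            ]
        },
        "destinations": {
            "logAnalytics": [
                {"name": "sentinelWorkspace", "workspaceResourceId": WORKSPACE_RESOURCE_ID}
            ]
        },
        "dataFlows": [
            {"streams": ["Microsoft-SecurityEvent"], "destinations": ["sentinelWorkspace"]}
        ],
    },
}

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Insights"
    f"/dataCollectionRules/{DCR_NAME}?api-version=2022-06-01"  # verify the current API version
)
resp = requests.put(url, json=dcr_body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print("DCR created:", resp.json()["id"])
```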

Moving to the Defender Portal

The Defender portal is where you'll do incident response, threat hunting, and investigation work. The Azure portal Sentinel interface goes away on March 31, 2027.

Some things work differently in the Defender portal than they did in the Azure portal Sentinel experience:

The Fusion analytics rule gets disabled. Defender handles correlation now, with better logic (those of you investigating incidents on a daily basis: take that with a grain of salt).

SecurityIncident table loses the Description field for Sentinel-created incidents. They're now treated as Microsoft XDR incidents, which don't support descriptions. If you have ServiceNow integrations or automation rules pulling the Description field, they will fail. Test and update these integrations before migration.

IdentityInfo table becomes a Defender native table. Table-level RBAC stops working on it. Your resource-context RBAC needs adjustment.

Playbook triggers have up to 5-minute latency now instead of real-time. Not ideal for immediate response, but acceptable for most workflows (though in my case, I've seen latency closer to 10 minutes than 5).

Automation rules triggered by incident modifications have different properties available to check. Old rules that reference specific incident properties might fail.

This isn't trivial. You test in a non-production workspace first. You verify incident correlation works. You verify playbooks still trigger. Then you migrate production.
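For the Description field gotcha specifically, measure how much you actually rely on it before you migrate. A small sketch (same azure-monitor-query setup as the connector check earlier, workspace ID again a placeholder) that counts how many recent incidents carry a description your downstream tooling might be parsing:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"

# Latest record per incident over the last 30 days, split by whether
# the Description field is populated.
QUERY = """
SecurityIncident
| where TimeGenerated > ago(30d)
| summarize arg_max(TimeGenerated, Description) by IncidentNumber
| summarize Total = count(), WithDescription = countif(isnotempty(Description))
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=30))

total, with_description = result.tables[0].rows[0]
print(f"{with_description} of {total} incidents have a Description - "
      "anything parsing that field needs fixing before migration.")
```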


The Actual Day One Checklist

Print this. Laminate it. Work through it.

Planning Phase

  • Existing or new subscription for the Log Analytics workspace? (You should always use a new one where you can; keep security-related stuff separate)
  • Workspace architecture decided: Single or multiple?
  • Region compliance verified: GDPR = Europe, HIPAA = US. Data stays where you put it.
  • Primary workspace identified: If multi-workspace, which gets Defender XDR integration first?
  • Dual RBAC strategy documented: Azure RBAC roles (Reader/Responder/Contributor) plus Defender Unified RBAC roles (Security Operations) for every analyst role
  • Entra ID groups created: Assign roles to groups, never to individuals
  • Data sources identified: What are you actually ingesting? Filter Windows logs.
  • Cost forecast completed: GB/day × $5.38 (UK South PAYG) × 30 ≤ monthly budget? (Always check current pricing; it may differ by region and by the time you're reading this blog)
  • Retention policy documented: 90 days hot? Extended retention on the data lake?
  • Compliance requirements listed: GDPR, HIPAA, SOC 2, industry obligations mapped to workspace regions
  • Maintenance cadence planned: Weekly, monthly, quarterly tasks defined upfront (don't skip this)

Infrastructure Phase

  • Azure subscription ready: Owner role confirmed, cost tracking enabled
  • Log Analytics workspace created: Correct region selected, Sentinel enabled (see the sketch after this list)
  • Managed identities created: For playbooks and automation authentication
  • Azure Key Vault created: Secrets for API keys and credentials (not hardcoded in playbooks!)
  • Network path verified: Log forwarders can reach workspace, firewall rules tested
  • Monitoring & alerts configured: Cost Management billing alerts, ingestion tracking
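For the workspace-plus-Sentinel item above, here's roughly what it looks like scripted. A minimal sketch assuming the azure-mgmt-loganalytics, azure-identity, and requests packages; the names, region, and the SecurityInsights API version are placeholders and assumptions, so verify them against current documentation before relying on this.

```python
import requests
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<sentinel-resource-group>"   # ideally in a dedicated security subscription
WORKSPACE_NAME = "law-sentinel-prod"           # illustrative name
LOCATION = "uksouth"                           # the region your compliance requirements dictate

credential = DefaultAzureCredential()

# 1. Create the Log Analytics workspace (PerGB2018 SKU, 90-day retention).
la_client = LogAnalyticsManagementClient(credential, SUBSCRIPTION_ID)
workspace = la_client.workspaces.begin_create_or_update(
    RESOURCE_GROUP,
    WORKSPACE_NAME,
    {
        "location": LOCATION,
        "sku": {"name": "PerGB2018"},
        "retention_in_days": 90,
    },
).result()
print("Workspace ready:", workspace.id)

# 2. Enable Sentinel on it by creating the onboarding state resource.
#    The API version is an assumption - check the current SecurityInsights docs.
token = credential.get_token("https://management.azure.com/.default").token
onboard_url = (
    f"https://management.azure.com{workspace.id}"
    "/providers/Microsoft.SecurityInsights/onboardingStates/default"
    "?api-version=2023-02-01"
)
resp = requests.put(
    onboard_url,
    json={"properties": {"customerManagedKey": False}},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print("Sentinel enabled on the workspace.")
```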

Sentinel Deployment

  • Sentinel enabled: Confirmed in portal
  • Azure RBAC roles assigned: Groups have correct Sentinel roles in Azure Portal
  • UEBA Enabled
  • Sentinel has access to trigger playbooks
  • Sentinel Health & Auditing enabled
  • First data connector enabled: Start with Azure Activity or Defender (minimum prerequisites)
  • Data verified flowing: Check table/Data Connector
  • Cost monitoring active: Billing alerts set, daily ingestion tracked in Cost Management
  • SOC Optimization run: Identify coverage gaps and missing rules.

Analytics Rules & Detection

  • First analytics rule created and tested: Use Content Hub templates, not custom KQL (yet).
  • Rule tuning started: Identify your top 10 alert-generating rules and assess false positive rate.
  • Automation rules created: Suppress known-good activity (service account logins, scheduled tasks) without deleting detections
  • First playbook built and tested: Logic app created, incident trigger configured. Account for 5-10 minute latency from Defender
  • Fusion rule status verified: Disabled by design (Defender handles correlation now)

Defender Portal Transition

  • Workspace onboarded to Defender: Primary workspace connected, Defender portal access confirmed
  • Defender Unified RBAC configured: Permission groups assigned (Security Operations, Authorization & Settings, Data Operations)
  • Dual permissions tested: Analyst needs BOTH Azure Sentinel Responder AND Defender Security Operations roles to see all data
  • Incident correlation tested: Sentinel incidents + Defender XDR alerts correlating? Running smoothly?
  • Description field issue checked: If ticketing systems/automation rules reference SecurityIncident.Description: They will break - fix proactively
  • Playbook latency tested: Verify 5-10 minute delay is acceptable for your workflows. Build retry logic if needed.

Data Connectors (Ongoing)

  • Tier 1 connectors enabled: Entra ID, Defender XDR, Office 365, Azure Activity (the easy wins)
  • Tier 2 connectors evaluated: Azure resources (Key Vault, Storage, NSG). Use Azure Policy for automation
  • Tier 3 connectors planned: APIs, third-party (Cisco, Cloudflare, 1Password). Budget time for Azure Functions
  • Third-party integrations scoped: AWS CloudTrail? On-premises logs? SIEM federation? Planned, not deployed yet
  • Each connector tested individually

Post-Deployment Maintenance (The Hard Part)

  • Weekly tasks defined: Review top alert-generating rules, close false positives, spot-check incidents
  • Monthly meeting scheduled: SOC team review - what's working, what's not, what's missing
  • Monthly data connector audit: Any ingestion errors? Costs trending up? Coverage gaps emerging?
  • Quarterly analytics review: Are your rules still relevant? Thresholds still sane? New threats to detect?
  • Annual architecture planning: Workspace consolidation? Licensing review? Defender portal migration completed?
  • Maintenance checklist created: Embed this in your operations

Documentation (Non-Negotiable, Use Workbooks!)

  • Workspace architecture diagram: Regions, primary/secondary, Workspace Manager setup if multi-tenant
  • Data sources documented: Source → Table name, expected volume (GB/day), responsible team
  • Analytics rules documented: Why it exists, what threat it detects, false positive rate, owner, and backup owner
  • RBAC documented: Who has what role (Azure + Defender), why they need it (do it per group, not per user)
  • Runbook written: Incident response workflow, escalation path, who to page at 3 AM
  • Cost tracking dashboard: Weekly ingestion review, budget burn-down, forecasting for next quarter
  • Playbook documentation: Trigger conditions, what it does, manual override procedure, owner contact, and backups
  • Troubleshooting guide: Common issues (data not flowing, latency, failed playbooks), resolution steps

The Hard Truth

This isn't deployment. This is the foundation. What comes next is maintenance. Don't ignore it, or your Sentinel instance becomes an alert factory that nobody trusts. When the real breach happens, the signal is buried in the noise.

Your SOC team knows this better than anyone. Listen to them. They're in the trenches. They see what's real and what's theatre.

The organizations that get security right aren't the ones with the fanciest tools. They're the ones who maintain them obsessively.

What People Screw Up And How You Won't

"We'll filter logs later"

You won't. You'll ingest everything on day one, see the bill on day eight, panic, spend two weeks filtering, and have nothing but angry meetings. Do the math now. Filter on day one where you can (there will, of course, be some exceptions).

"We'll document this once we're stable"

You won't. Document as you go. You deploy on day one with five data sources; by day 30 you have fifteen and can't remember why you configured that CEF forwarder the way you did. Document today, while you remember.

"Everyone should be able to manage rules"

Sentinel Contributor can delete analytics rules you've spent weeks tuning. Your interns and junior analysts don't need that. Use Sentinel Responder for triage and investigation. Sentinel Contributor for the people writing detection rules.

"We'll move to Defender later"

March 31, 2027 is not a suggestion. The Azure portal Sentinel interface gets turned off. You move before that or you're forced to migrate under pressure. Plan the migration now so you're ready when it happens.

Class dismissed.
