What is Microsoft Fabric Mirroring for SQL Server 2025?
Microsoft Fabric Mirroring for SQL Server 2025 is a Zero-ETL solution that uses “Change Feed” technology to replicate data into OneLake in Delta Parquet format. Unlike previous versions that used CDC, SQL Server 2025 provides a high-performance, low-impact link to Fabric. To configure it, you must Arc-enable your SQL instance, set up an On-Premises Data Gateway, and grant specific ALTER ANY EXTERNAL MIRROR permissions to a dedicated Fabric login.
Beyond the ETL Tax: Why SQL Server 2025 Fabric Mirroring is the Future of Real-Time Analytics
For decades, the “Nightly Batch Job” has been the heartbeat of the enterprise. Data Architects have spent countless hours—and millions of dollars—building, maintaining, and troubleshooting complex ETL (Extract, Transform, Load) pipelines to move data from operational SQL Servers into analytical warehouses. These pipelines are often brittle, expensive, and introduce significant data latency.
While Fabric Mirroring is a standalone breakthrough, it is part of a much larger ecosystem of innovation. To see how this fits into the broader release, check out our SQL Server 2025 new features comprehensive guide.
With the release of SQL Server 2025, Microsoft has officially signaled the beginning of the “Zero-ETL” era. The flagship feature enabling this shift is Fabric Mirroring. By leveraging a high-frequency Change Feed mechanism, SQL Server 2025 can now continuously reflect data into Microsoft Fabric’s OneLake in an analytics-ready Delta Parquet format—without the overhead of traditional replication or third-party tools.
In this expert guide, I will provide a technical, “on-the-ground” roadmap for DBAs and Data Engineers looking to implement Fabric Mirroring for SQL Server 2025.
Why SQL Server 2025 Mirroring is Different
Before we dive into the setup, it is crucial to understand why this technology replaces existing methods like Transactional Replication or SSIS.
Technology Comparison: Fabric Mirroring vs. Legacy Methods
| Feature | Fabric Mirroring (SQL 2025) | Synapse Link (SQL 2022) | Transactional Replication | Traditional ETL (SSIS/ADF) |
|---|---|---|---|---|
| Data Movement | Zero-ETL (Change Feed) | Change Data Capture (CDC) | Log Reader Agent | Batch Extraction |
| Latency | Near Real-Time (Seconds) | Minutes | Seconds/Minutes | Hours/Days |
| Format | Delta Parquet (Native AI) | Parquet / SQL Tables | SQL Tables | Various |
| Source Impact | Very Low (Log-based) | Moderate (CDC Tables) | Moderate (Distributor) | High (Query-based) |
| Complexity | Low (Point-and-Click) | Moderate | High | High |
| AI Readiness | Native (OneLake) | Requires Integration | Manual Export | Manual Export |
Understanding the Zero-ETL Architecture
Traditional replication methods like Transactional Replication or Change Data Capture (CDC) were designed for different eras. While effective, they often require heavy maintenance and can impact the performance of the source system.
Fabric Mirroring for SQL Server 2025 introduces a more refined approach. It utilizes a Change Feed that scans the transaction log at high frequency. Instead of storing changes in internal tables (like CDC), it publishes these changes directly to a landing zone in Fabric. A replicator engine within Fabric then merges these files into the target Delta tables.
The Role of Azure Arc and the Gateway
To bridge the gap between your on-premises environment and the Microsoft Fabric cloud, two critical components are required:
- Azure Arc: This acts as the management plane, providing the SQL Server instance with a System Managed Identity (SMI) for secure, outbound authentication.
- On-Premises Data Gateway (OPDG): This handles the actual data movement, ensuring that your data remains secure as it traverses from your local network to OneLake.
Prerequisites: The Pre-Flight Checklist
Before running your first T-SQL configuration script, ensure your environment meets these strict requirements:
- SQL Server Version: You must be running SQL Server 2025. While versions 2016–2022 support Mirroring via CDC, only 2025 supports the high-performance Change Feed.
- Azure Arc Integration: The instance must be Arc-enabled. This is non-negotiable for SQL 2025 mirroring.
- Recovery Model: The source database must be set to the Full Recovery Model.
T-SQL: Verifying the Recovery Model
Before proceeding, run this script to ensure your database is ready for the Change Feed.
-- Replace 'YourDatabaseName' with your target database
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'YourDatabaseName';
-- If it returns 'SIMPLE', change it to FULL
-- ALTER DATABASE YourDatabaseName SET RECOVERY FULL;
- Gateway Hardware: Ensure the server hosting the On-Premises Data Gateway has sufficient CPU and memory to handle the ingestion of your largest tables.
How to Configure Microsoft Fabric Mirroring for SQL Server 2025 (The End of ETL)
Configuring the mirror is a two-phase process: preparing the security layer on the SQL engine and then initiating the link within the Fabric SaaS environment.
From the Field: The “Gateway Ghost” and Memory Pressure
In a recent SQL Server 2025 deployment, we encountered a classic “Gateway Ghost” issue where mirroring would silently stall despite all T-SQL checks passing. After digging into the On-Premises Data Gateway (OPDG) logs, we discovered that the default 8GB RAM configuration on our gateway VM was being pegged at 100% during the initial snapshot of a 40GB fact table.
While the SQL engine was ready to stream, the gateway service was essentially “choking” on the metadata merge into OneLake. The fix wasn’t a T-SQL command—it was scaling the gateway VM to 16GB of RAM and 8 vCPUs.
Expert Lesson: Don’t just monitor your SQL Server; monitor your Gateway’s memory usage. If you see high GetIncrementalChangesAsync timeouts in your Fabric Monitoring logs, your gateway likely needs more “breathing room” to handle the Zero-ETL traffic.
Phase 1: Preparing the SQL Server Source
Mirroring requires a dedicated security principal with elevated but specific permissions. Unlike standard reporting users, the Fabric mirroring identity needs to interact with the internal engine’s mirroring state.
Step 1: Create the Fabric Login
First, connect to the master database of your SQL Server 2025 instance. You can use a SQL-authenticated login or a Microsoft Entra ID (formerly Azure AD) login. For most hybrid scenarios, a SQL-authenticated login is the most straightforward to manage initially.
-- Run in the master database
USE [master];
GO
CREATE LOGIN [fabric_login] WITH PASSWORD = '<Strong_Password_Here>';
GO
Step 2: Configure Database Permissions
Switch context to the specific database you wish to mirror. You must grant the SELECT permission on all tables to be mirrored, along with the critical ALTER ANY EXTERNAL MIRROR permission. This latter permission allows Fabric to initialize and maintain the Change Feed.
-- Run in the database you want to mirror
USE [YourDatabaseName];
GO
CREATE USER [fabric_user] FOR LOGIN [fabric_login];
GO
-- Grant necessary permissions for Fabric Mirroring
GRANT SELECT TO [fabric_user];
GRANT ALTER ANY EXTERNAL MIRROR TO [fabric_user];
GRANT VIEW DATABASE PERFORMANCE STATE TO [fabric_user];
GRANT VIEW DATABASE SECURITY STATE TO [fabric_user];
GO
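Before moving on, you can optionally confirm the grants took effect by querying the permission catalog views. This is a quick sanity test if the Fabric portal later fails to validate the connection.
-- Optional: confirm the permissions granted to the Fabric user
SELECT pr.name AS PrincipalName,
       pe.permission_name,
       pe.state_desc
FROM sys.database_permissions AS pe
INNER JOIN sys.database_principals AS pr
    ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = 'fabric_user';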
Step 3: Handle Availability Groups (If Applicable)
If your SQL Server is part of an Always On Availability Group, you must create the login on all replicas (Primary and Secondary). Crucially, the Login SID must match across all instances to ensure mirroring continues seamlessly after a failover.
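A minimal sketch of keeping the SID aligned, assuming the fabric_login created in Phase 1: read the SID from the primary replica, then create the login on each secondary with that explicit value.
-- On the primary replica: capture the login's SID
SELECT name, sid
FROM sys.sql_logins
WHERE name = 'fabric_login';

-- On each secondary replica: recreate the login with the same SID
-- (replace the placeholder below with the value returned above)
CREATE LOGIN [fabric_login]
WITH PASSWORD = '<Strong_Password_Here>',
     SID = 0x0123456789ABCDEF0123456789ABCDEF;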
Phase 2: Connecting the Fabric Portal
With the source database prepared, the remaining configuration happens within the Microsoft Fabric Workspace.
- Create Mirrored SQL Server Item: In your Fabric workspace, select “Mirrored SQL Server database” from the create menu.
- Establish Connection: Provide the server address (using the Azure Arc name if applicable) and the database name.
- Authentication: Select your On-Premises Data Gateway and provide the credentials for the fabric_login created in Phase 1.
- Select Objects: You can choose to “Mirror all data” (up to the current 500-table limit) or select specific high-value tables.
T-SQL: Checking Mirroring Enablement
Once configured in the portal, you can verify if the database engine has successfully enabled data lake replication.
SELECT [name], is_data_lake_replication_enabled
FROM sys.databases
WHERE name = 'YourDatabaseName';
Optimizing Performance with SQL Server 2025 Resource Governor
While Fabric Mirroring is designed to be low-impact, high-volume transactional systems can experience resource contention during heavy data bursts. To ensure that your analytics streaming never compromises your primary OLTP performance, you should configure the SQL Server 2025 Resource Governor for Mirroring.
By creating a dedicated resource pool, you can cap the CPU and Memory usage of the Change Feed process. This is particularly useful for protecting your production workload from “Observer Overhead” during initial data reseeding.
-- Create a Resource Pool to limit Mirroring impact
USE master;
GO
CREATE RESOURCE POOL [FabricMirroringPool]
WITH (
MAX_CPU_PERCENT = 20, -- Cap CPU impact at 20%
MAX_MEMORY_PERCENT = 10 -- Cap Memory at 10%
);
GO
-- Create a Workload Group for the Mirroring account
CREATE WORKLOAD GROUP [FabricMirroringGroup]
USING [FabricMirroringPool];
GO
-- Reconfigure to apply changes
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
Pro-Tip: Use a Classifier Function to automatically route the fabric_login connection into the FabricMirroringGroup. This ensures that even if mirroring demands more power, the SQL engine will prioritize your mission-critical user queries.
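Here is a minimal sketch of such a classifier function. It assumes the fabric_login account from Phase 1 is the login Fabric uses to connect; adjust the name if you use a different principal.
USE master;
GO
-- Classifier function: route the Fabric mirroring login into its workload group
CREATE FUNCTION dbo.fnFabricMirroringClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @GroupName sysname = N'default';
    IF SUSER_SNAME() = N'fabric_login'
        SET @GroupName = N'FabricMirroringGroup';
    RETURN @GroupName;
END;
GO
-- Register the classifier and apply the change
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnFabricMirroringClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO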
Performance Tuning and DBA Guardrails
While “Zero-ETL” implies simplicity, DBAs must monitor the impact on the source system.
Transaction Log Management
Because Mirroring relies on the transaction log, log records cannot be truncated until the corresponding changes have been processed by the Change Feed and reflected in Fabric. If you have massive, long-running batch updates on the source, you must monitor your transaction log growth closely to prevent the disk from filling up.
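A quick way to spot this condition is to check the log reuse wait reason and the current log consumption for the mirrored database; a wait reason of REPLICATION while Mirroring is enabled typically points to the Change Feed holding truncation.
-- Check why the transaction log cannot be truncated
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabaseName';

-- Check current log size and usage (run in the mirrored database)
SELECT total_log_size_in_bytes / 1048576.0 AS TotalLogMB,
       used_log_space_in_bytes / 1048576.0 AS UsedLogMB,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;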
Monitoring the Change Feed
SQL Server 2025 provides new Dynamic Management Views (DMVs) to track the health of the Mirroring pipeline. Use the following script to check for errors or scan progress.
-- Check for any errors in the change feed process
SELECT * FROM sys.dm_change_feed_errors;
-- Monitor current log scan sessions
SELECT * FROM sys.dm_change_feed_log_scan_sessions;
-- Review current configuration and state (State 4 = Active/Healthy)
EXEC sp_help_change_feed;
Resource Governor Integration
Unique to SQL Server 2025, you can create a Resource Governor pool specifically to cap the resources consumed by the mirroring Change Feed. This ensures that even during periods of intense data synchronization, your primary business workloads (OLTP) remain prioritized and unaffected.
Solving the Primary Key Dilemma
A common point of failure in Fabric Mirroring setup is the “Primary Key” requirement. For a table to be mirrored, it must have a defined Primary Key. Tables without Primary Keys (heaps) are currently not supported for mirroring from SQL Server. If your application uses heaps for logging or temporary staging, these will need to be excluded from the mirroring scope or refactored with a PK.
T-SQL: Identifying Tables without Primary Keys
Use this script to find tables that are currently ineligible for mirroring.
SELECT s.name AS SchemaName, t.name AS TableName
FROM sys.tables t
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE OBJECTPROPERTY(t.object_id, 'TableHasPrimaryKey') = 0
AND t.is_ms_shipped = 0;
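If a heap genuinely needs to be mirrored, a surrogate key is usually the least invasive fix. The table and column names below are purely illustrative.
-- Hypothetical example: add a surrogate primary key to a logging heap
ALTER TABLE dbo.AppLog
    ADD LogId BIGINT IDENTITY(1,1) NOT NULL;

ALTER TABLE dbo.AppLog
    ADD CONSTRAINT PK_AppLog PRIMARY KEY CLUSTERED (LogId);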
Real-World Impact: Near-Zero Reporting Lag
The ultimate goal of this configuration is to enable Direct Lake mode in Power BI. By having your SQL Server data automatically persisted as Delta Parquet in OneLake, Power BI can “attach” to the data directly. This eliminates the need for “Dataset Refreshes” and provides your executive team with dashboards that reflect the state of the business in near real-time.
Fabric Mirroring in Always On Availability Groups
For enterprise environments requiring High Availability, SQL Server 2025’s Always On Availability Group support for Fabric Mirroring is a game changer. Mirroring is fully AG-aware, meaning it can survive a failover without requiring a full re-initialization of the data in OneLake.
Key Failover Considerations:
- Listener Connection: When setting up the connection in the Fabric Portal, always use the Availability Group Listener name rather than an individual node IP.
- Permission Parity: The fabric_login must be created, and the ALTER ANY EXTERNAL MIRROR permission granted, on all replicas (Primary and all Secondaries).
- Automatic Resumption: Upon failover, the Change Feed will automatically resume from the new primary replica as long as the Fabric user exists and has the correct permissions.
Expert Note: Unlike legacy CDC, which requires complex manual intervention after an AG failover, the SQL Server 2025 Change Feed is natively integrated into the HADR framework, making it the most resilient Zero-ETL solution ever released.
Conclusion
Fabric Mirroring for SQL Server 2025 is more than just a new feature; it is a fundamental architectural shift. By removing the need for manual ETL pipelines, organizations can reduce their operational overhead and accelerate the time-to-insight. While the initial setup requires careful attention to Azure Arc and gateway security, the long-term benefits of a unified, real-time data estate in OneLake are undeniable. This “Zero-ETL” approach ensures your high-performance SQL Server 2025 instance remains the single source of truth while fueling modern, AI-driven analytics.
Next Steps
- Audit Your Tables: Identify which of your production tables lack Primary Keys and evaluate if they can be converted.
- Trial Azure Arc: If you haven’t yet, Arc-enable a development instance of SQL Server 2025 to test the connectivity.
- Monitor Logs: Set up alerts for transaction log growth on any database you plan to mirror.
- Read More: Explore our deep dives on SQL Server 2025 Vector Search and T-SQL Regular Expressions to see how else 2025 is modernizing the data engine.
- Ready to explore more? Return to our SQL Server 2025 Pillar Article to discover Vector Search, T-SQL enhancements, and modern security features.
Frequently Asked Questions (FAQs) on Microsoft Fabric Mirroring for SQL Server 2025
1. What is the difference between SQL Server 2025 Mirroring and Change Data Capture (CDC)?
While both technologies track changes, the SQL Server 2025 Change Feed used in Mirroring is significantly more efficient than legacy CDC. Traditional CDC writes changes to side-tables within the source database, which increases storage overhead and IOPS. SQL Server 2025 Mirroring reads directly from the transaction log and streams data to Microsoft Fabric OneLake without persisting change tables on the source, making it a true Zero-ETL solution with minimal performance impact.
2. Does Fabric Mirroring for SQL Server 2025 require a Premium Capacity?
Yes. To use the Mirroring feature in Microsoft Fabric, your workspace must be assigned to an F-64 or higher capacity (or a Power BI Premium P-capacity). This is required because the “Mirrored SQL Server” item utilizes specialized compute engines to handle the landing and merging of Delta Parquet files into OneLake in real-time.
3. Can I mirror an on-premises SQL Server 2025 instance to Fabric without Azure Arc?
No. Azure Arc-enabled SQL Server is a mandatory requirement for SQL Server 2025 Mirroring. Azure Arc provides the necessary identity management (Managed Identity) and secure control plane that allows the SQL engine to communicate its mirroring state to the Fabric cloud securely.
4. What happens to Fabric Mirroring if my On-Premises Data Gateway goes offline?
If the On-Premises Data Gateway (OPDG) or network connectivity is interrupted, SQL Server 2025 will continue to hold the necessary transaction log records. Mirroring is resilient; once connectivity is restored, the Change Feed will resume from the exact point it stopped, catching up on the backlog. However, DBAs should monitor transaction log growth during extended outages.
5. Why are some of my tables not showing up in the Fabric Mirroring selection?
The most common reason for missing tables is the Primary Key requirement. For a table to be eligible for Zero-ETL Mirroring, it must have a defined Primary Key. Additionally, certain system tables, temporary tables, and tables using unsupported data types (though the list of supported types has expanded in 2025) will be excluded from the replication list.
6. Is there a cost associated with data egress when mirroring to Fabric?
When mirroring from an on-premises SQL Server 2025 to the cloud, standard Azure data egress charges may apply depending on your networking setup (e.g., ExpressRoute vs. Internet). However, within the Fabric environment, the ingestion and transformation into Delta Parquet format are typically covered under your Fabric Capacity units.
7. Can I mirror a SQL Server 2025 database that is part of an Always On Availability Group?
Yes, Fabric Mirroring fully supports Always On Availability Groups (AG). You must ensure the Fabric login and permissions are mirrored across all replicas. In a failover scenario, the Change Feed automatically resumes on the new primary replica, ensuring continuous data availability in OneLake.
8. How does SQL Server 2025 Mirroring affect transaction log truncation?
Mirroring works similarly to Transactional Replication in this regard. The SQL Server transaction log cannot be truncated (recycled) until the relevant log records have been processed by the Change Feed and acknowledged by Microsoft Fabric. It is vital to have a robust log backup strategy and to monitor log_reuse_wait_desc in sys.databases.
9. Can I use Fabric Mirroring for disaster recovery (DR)?
While Mirroring provides a near real-time copy of your data in OneLake, it is designed for analytics and reporting (Zero-ETL), not for SQL-to-SQL disaster recovery. It transforms your data into Delta Parquet format, which is optimized for Power BI and Spark, but it cannot be “restored” back into a functional SQL Server instance as a primary database.
