From OLTP to AI: Choosing Your Microsoft Fabric Database Engine

by Olivier Travers, July 31, 2025

In a previous post, we explored how PowerTable takes applications to where data lives rather than copying data to applications. That foundational approach raises natural questions: which data architecture should you choose within Microsoft Fabric, and how do the different options affect PowerTable applications?

After decades of database specialization, we're witnessing something remarkable: transactional and analytical systems are converging again, but this time with architectural intent rather than compromise.

You probably noticed a string of acquisitions pointing in the same direction. First, Microsoft acquired Citus Data in 2019 to bring distributed PostgreSQL capabilities to Azure. More recently, Databricks acquired Neon for $1B in May 2025 to add serverless PostgreSQL to its platform, swiftly followed by Snowflake’s acquisition of Crunchy Data for $250M for pretty much the same reason.

Microsoft bet its data platform future on Fabric's unified approach, providing specialized compute engines that share OneLake storage. In doing so, the company is showing it's willing to put interoperability and customer demand above protecting its crown jewels, using open standards shared with the likes of Databricks (Delta) and Snowflake (thanks to Iceberg interop). Did you know that the Fabric runtime runs on top of Mariner 2.0, aka Azure Linux?

What sets this convergence apart from earlier "one size fits all" platforms is that modern systems embrace workload-specific processing while unifying storage, governance, and security. Microsoft Fabric exemplifies this strategy. For organizations building data applications with PowerTable, understanding when to use Fabric's three data approaches affects performance, cost, governance, and how easily business users can work with their data.

The OLTP/OLAP Split: Why It Happened and Why It's Evolving

The historical separation of transactional and analytical systems solved real performance problems. Row-based storage optimizes for individual record operations, while column-based storage excels at aggregate queries across millions of rows. Running both workloads on the same system created lock contention and performance disasters.

This architectural constraint shaped decades of software design. Every business system needed its own database, its own copy of data, its own ETL processes to feed analytical systems.

What changed wasn't the fundamental trade-offs between storage formats, but our ability to manage them effectively. SAP HANA pioneered in-memory hybrid processing starting in 2010, delivering what would later be dubbed HTAP by Gartner in 2014. Around the same time, Microsoft was developing VertiPaq, an in-memory columnar engine that debuted in PowerPivot (2010) and remains the core of Power BI's semantic models to this day. Microsoft then brought this OLAP innovation to the transactional world with SQL Server columnstore indexes in 2012, which used xVelocity technology based on the VertiPaq engine.

The industry has seen multiple attempts at unified systems over the decades. Early efforts failed because the hardware couldn't deliver on the promise - trying to run analytical queries on transactional systems created lock contention disasters, while 'universal' databases couldn't handle real production workloads at scale. Every major vendor now claims unified platforms, but the economics have fundamentally changed. What failed at $1000/GB RAM costs succeeds at $10/GB. What created performance bottlenecks on spinning disks works on NVMe SSDs.

Yet, while hardware continued to become more powerful, the appetite for more data continued to grow in parallel. In response, rather than insisting on cramming everything into one engine, Fabric provides specialized compute options that share storage. So yes, it's effectively a hybrid platform, but that doesn't mean a single engine is meant to do everything. You choose the right processing method for each workload while maintaining data consistency and governance.

This pattern of bringing specialized engines under one roof by use case is on display with vector databases too. Here, Fabric offers several options with CosmosDB and EventHouse, and even the good old SQL Database is catching up to this AI-driven requirement. Snowflake and Databricks have made similar moves.

Fabric's Three SQL Engines: Specialized Tools, Shared Foundation

Leaving aside vector databases for now, understanding Fabric's architecture choices helps explain why PowerTable works effectively across different engines, and when you'd choose each approach.

Fabric SQL Database: When You Need ACID Guarantees

Fabric SQL Database is a serverless abstraction over Azure SQL that provides full ACID transaction support, referential integrity, and familiar T-SQL interfaces. For PowerTable applications, this means immediate consistency for writeback operations. When a user updates pricing data in a PowerTable application, that change appears instantly for all other users and systems.

This makes Fabric SQL Database ideal for master data management where data integrity is non-negotiable. Product catalogs, pricing tables, and organizational hierarchies benefit from guaranteed consistency without synchronization delays.
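To illustrate what immediate consistency buys you, here is a minimal sketch of a transactional price update against Fabric SQL Database. The connection details, table, and column names are hypothetical placeholders (PowerTable handles this plumbing for you); the point is simply that both statements commit or roll back together, and the change is visible to every other reader as soon as the commit completes.

```python
# Minimal sketch of a writeback-style price update, assuming hypothetical
# table and column names and a placeholder Fabric SQL endpoint.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-fabric-sql-endpoint>;Database=<your-database>;"
    "Authentication=ActiveDirectoryInteractive;Encrypt=yes;"
)
conn.autocommit = False  # group both statements into one ACID transaction

try:
    cur = conn.cursor()
    # Update a price and log the change; both succeed or both roll back.
    cur.execute(
        "UPDATE dbo.ProductPricing SET UnitPrice = ? WHERE ProductId = ?;",
        (19.99, 1042),
    )
    cur.execute(
        "INSERT INTO dbo.PricingAudit (ProductId, NewPrice, ChangedAt) "
        "VALUES (?, ?, SYSUTCDATETIME());",
        (1042, 19.99),
    )
    conn.commit()    # the new price is immediately visible to all readers
except Exception:
    conn.rollback()  # consistency and referential integrity are preserved
    raise
finally:
    conn.close()
```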

The trade-off is analytical performance at scale. While Fabric SQL Database handles substantial workloads, it's optimized for transactional patterns rather than complex analytical queries across billions of rows.

Fabric Data Warehouse: Analytical Power with T-SQL Compatibility

Fabric Data Warehouse provides T-SQL interfaces optimized for analytical workloads. Built on massively parallel processing (MPP) architecture with autonomous workload management, it uses distributed query processing across multiple compute nodes while storing data behind the scenes in Delta format for ACID transactions and interoperability.

For PowerTable users, this means building applications on existing data warehouse tables while benefiting from analytical query performance when displaying large datasets. The writeback operations PowerTable provides work seamlessly, optimized for bulk updates rather than high-frequency transactional changes.

This approach works particularly well for planning and budgeting applications where users need large amounts of historical context but make relatively infrequent updates to forecasts or targets.
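As a rough sketch of that bulk-update pattern, the snippet below applies one set-based change across many forecast rows at once. The endpoint, table, and column names are hypothetical; the takeaway is that planning-style writeback hits the warehouse as a single large statement rather than a stream of row-by-row transactions.

```python
# Minimal sketch of a set-based writeback to a Fabric Data Warehouse table,
# assuming hypothetical names and a placeholder SQL endpoint.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-warehouse-sql-endpoint>;Database=<your-warehouse>;"
    "Authentication=ActiveDirectoryInteractive;Encrypt=yes;"
)

with conn.cursor() as cur:
    # Apply a 5% uplift to next year's forecast in a single pass: the kind of
    # infrequent, large-grain update planning applications tend to make.
    cur.execute(
        """
        UPDATE dbo.RevenueTargets
        SET TargetAmount = TargetAmount * 1.05
        WHERE FiscalYear = 2026 AND Scenario = 'Forecast';
        """
    )
    conn.commit()
conn.close()
```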

The T-SQL compatibility means existing database teams can work immediately without learning new languages or paradigms.

Fabric Lakehouse: Flexible Foundation with Read-Only SQL Access

The Lakehouse approach combines data lake flexibility with structured query capabilities. Built on Delta Lake format, it provides ACID transactions on object storage with both SQL and Spark interfaces.

However, there's a critical architectural limitation: PowerTable can read from Lakehouse SQL endpoints but cannot write back to them. The Lakehouse SQL endpoint operates in read-only mode due to Delta Lake's limitations around multi-table transactions. To modify Lakehouse data, you must use Spark interfaces.

This means PowerTable applications requiring writeback functionality need different patterns when working with Lakehouse environments. You might read reference data from the Lakehouse while writing transactional updates to Fabric SQL Database or Data Warehouse, then synchronize changes through Spark processes. That said, it is not uncommon to combine lakehouses and warehouses in end-to-end Fabric solutions.
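For example, here is a minimal sketch of that split pattern as it might look in a Fabric notebook where a `spark` session is already available. Table names are hypothetical, and the edits table captured by PowerTable in SQL Database or the Warehouse is assumed to be reachable from Spark (for instance through a OneLake shortcut).

```python
# Minimal sketch: sync user edits from a transactional store back into a
# Lakehouse Delta table. Names are hypothetical; `spark` is the notebook session.
from delta.tables import DeltaTable

# User edits captured by PowerTable in Fabric SQL Database or the Warehouse,
# assumed here to be exposed to Spark as a readable table.
edits_df = spark.read.table("forecast_edits")

# Merge accepted edits back into the Lakehouse Delta table. Spark, not the
# read-only SQL endpoint, is the write path for the Lakehouse.
forecast = DeltaTable.forName(spark, "forecast")
(
    forecast.alias("t")
    .merge(edits_df.alias("e"), "t.forecast_id = e.forecast_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```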

The Lakehouse excels for scenarios where PowerTable displays large analytical datasets for context while capturing user inputs in separate transactional stores.

Choosing the Right Foundation: Decision Patterns

| Scenario | Recommended Engine | Key Considerations |
|---|---|---|
| Master data with frequent updates; bottom-up forecasting | Fabric SQL Database | Immediate consistency, referential integrity |
| Top-down planning/budgeting | Fabric Data Warehouse | Large dataset performance, T-SQL compatibility |
| Analytics dashboards with context | Fabric Lakehouse | Read-only display, external writeback patterns |
| Mixed structured/unstructured data | Lakehouse + SQL Database | Display flexibility, separate transactional store |

From a data engineering perspective, the lakehouse targets Python or Scala developers familiar with Spark and notebooks, while the warehouse will feel very familiar to organizations and individuals with a T-SQL background. In a future entry we will review where PowerTable fits within a medallion architecture, and discuss the central role of semantic models in contemporary data architectures that count AI readiness among their main goals.

PowerTable sync destinations - Who doesn’t like options? But choose wisely!

OneLake Security: Vision vs. Current State

Microsoft's original vision for Fabric security was ambitious and compelling. The initial "OneSecurity" concept, unveiled in 2023, promised a "Secure Once and Use Anywhere" model where granular security policies—including row-level, column-level, and data masking—would be defined once in OneLake and automatically enforced across all engines and workloads.

This very ambitious vision turned out to be arduous to execute. Two years later, OneSecurity became the more tightly-scoped OneLake Security, available in limited preview for the past few months. While valuable, this represents a more incremental approach than the original mission statement. And while OneLake Security continues to mature, the current model provides role-based access control (RBAC) at the folder level, but has important limitations:

  • Security policies don't apply to users with workspace write permissions (Admin/Member/Contributor roles)
  • Different Fabric engines can present inconsistent data views due to separate security models

Microsoft's Path Forward: Microsoft continues evolving this capability, with OneLake data access roles scheduled to be replaced by "OneLake security" in Q3 2025.

PowerTable's Approach: Given the current security landscape, PowerTable implements its own permission model that works consistently across all supported platforms—whether Fabric, Snowflake, Databricks, or traditional databases. This ensures predictable, granular access control regardless of the underlying platform's security maturity. As Microsoft's unified security vision evolves and stabilizes, PowerTable can adapt its integration accordingly while maintaining the reliable governance customers expect today.

Beyond Fabric: A Multi-Platform Reality

While Fabric provides compelling integration benefits, most organizations have data across multiple platforms. PowerTable's connectors to Snowflake, Databricks, and traditional databases acknowledge this reality while maintaining similar architectural principles.

The strategic moves by industry leaders highlight recognition that sustainable data applications must work with enterprise data platform foundations rather than creating isolated data islands. Microsoft's early investment in PostgreSQL through Citus Data positioned them ahead of recent competitive moves, while PowerTable extends this philosophy by working directly with your chosen platform. Just because PowerTable is a Fabric workload doesn’t mean you have to do everything in Fabric.

The Maturity Advantage

What distinguishes PowerTable from traditional data application builders is recognition that sustainable applications must be built on enterprise data platform foundations. Applications that combine UI, compute, and storage in single-vendor packages might seem simpler initially, but they recreate the data movement and synchronization problems that led organizations to adopt platforms like Fabric in the first place.

By working directly with your chosen data platform's native capabilities, PowerTable applications inherit the governance, security, and performance characteristics you've already established - avoiding the data integration failures we've seen repeated across multiple technology cycles. Your data stays where your policies expect it to be.

Getting Started: Match Your Architecture to Your Needs

The best way to understand these architectural choices is through specific business scenarios. Whether you're managing pricing data requiring immediate consistency, building planning applications on your data warehouse, or creating flexible workflows that span multiple data types, the key is starting with a concrete use case.

Identify the data sources involved, the performance requirements, and the governance constraints. This provides a foundation for choosing the right Fabric approach and understanding how PowerTable works within your existing architecture.

The convergence of transactional and analytical systems represents an evolution beyond the rigid separations of the past. Modern platforms like Fabric deliver the benefits of both approaches without the traditional compromises, but only when you choose the right tool for each specific workload.

See how PowerTable works with your specific Fabric architecture. Request a demo to explore how these choices play out with your data, your requirements, and your governance model.

Technical Appendix - Delta Lake's Fundamental Architectural Limitations

1. Single-Table Transaction Scope: each Delta table maintains its own transaction log (_delta_log), so ACID guarantees apply to one table at a time and never span multiple tables.

2. Optimistic Concurrency Control Design: writers don't hold locks; they stage changes and attempt to commit, failing or retrying if another writer committed a conflicting change first. This suits coarse-grained batch writes, not the fine-grained concurrent updates typical of OLTP.

3. Cloud Object Storage Architecture: Delta tables are Parquet files plus a JSON log on object storage, which offers no server-side locking or cross-table coordination primitives comparable to a database engine.

Why This Makes Fabric Lakehouse SQL Endpoint Read-Only

1. Traditional SQL Expectations vs. Delta Reality:

  • SQL Users Expect Multi-Table Transactions: Standard SQL operations often involve multiple tables (foreign keys, complex updates, referential integrity)
  • Delta Protocol Can't Deliver: Since the Delta Lake protocol cannot guarantee ACID properties across multiple tables, allowing SQL writes would violate user expectations of database consistency (see the sketch below)
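To make the constraint concrete, here is a minimal PySpark sketch with two hypothetical DataFrames and tables: each Delta write commits to its own table's transaction log, so there is no way to wrap both writes into a single atomic unit the way a multi-statement SQL transaction would.

```python
# Minimal sketch, assuming a Spark session with Delta support and two
# hypothetical DataFrames (orders_df, order_lines_df) already prepared.

# First write: commits immediately to the `orders` table's own _delta_log.
orders_df.write.format("delta").mode("append").saveAsTable("orders")

# Second write: a separate commit to a separate log. If it fails, the append
# to `orders` above has already committed and will not roll back; there is
# no cross-table transaction equivalent in the Delta protocol.
order_lines_df.write.format("delta").mode("append").saveAsTable("order_lines")
```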

2. Spark as the Designed Write Interface: writes to Lakehouse tables are designed to go through Spark, which implements the Delta Lake commit protocol natively.

Note: you technically could write data with your own code, not just Spark, as long as you'd be willing to dive into the Delta spec and create manifests yourself, aligned with the Delta version supported by the current Fabric runtime. In other words, that would be quite a heavy lift.

3. Microsoft Fabric's Implementation Choice: rather than expose partial or best-effort write support through T-SQL, Fabric surfaces the Lakehouse SQL endpoint as read-only and routes all writes through Spark.

4. PowerTable Implications:

  • Protocol Constraint: writeback is PowerTable's core functionality but the lakehouse's SQL endpoint is read-only, excluding it from our list of sync destinations
  • Workaround Patterns: implement writeback in SQL Database or Warehouse as a transactional store layer separate from the lakehouse

This technical limitation isn't a Microsoft design flaw; it's an inherent constraint of the Delta Lake open-source protocol, which prioritizes single-table ACID guarantees over multi-table transaction support. Databricks is pursuing multi-table transactional support via Unity Catalog, but exploring this goes beyond the scope of this entry.
