
Accessing Microsoft Fabric data locally with OneLake file explorer

If you've spent any time working with Microsoft Fabric, you know that navigating to the web portal every time you need to inspect, upload, or tweak a file gets old fast. OneLake File Explorer is Microsoft's answer to that friction — a lightweight Windows application that mounts your entire Fabric data estate directly in Windows File Explorer, the same way OneDrive handles your documents.

One... what?


OneLake is the unified data lake underpinning every Microsoft Fabric tenant. Unlike traditional architectures where teams maintain separate data lakes per domain or business unit, every Fabric tenant gets exactly one OneLake — one place where Lakehouses, Warehouses, KQL databases, and other Fabric items store their data. There's no need to copy data between engines; Spark, SQL, and Power BI all read from the same underlying storage.

The organizational hierarchy is straightforward:

Tenant → Workspaces → Items (Lakehouses, Warehouses, etc.) → Files/Tables.

This maps neatly to a folder structure, which is exactly what OneLake File Explorer exposes locally.

How OneLake file explorer works

Install steps:

  1. Download the MSIX installer from the Microsoft Download Center.
  2. Run the installer and sign in with your Microsoft Entra ID when prompted — or choose a specific account if you work across multiple tenants.
  3. Open Windows File Explorer. Your OneLake data should appear within a few seconds.

Starting with recent versions, you can sign in with different accounts and switch between them via the system tray icon — useful if you manage workspaces across multiple organizational tenants.

Once installed, the application runs silently in the system tray and integrates with Windows File Explorer under a root folder at %USERPROFILE%\OneLake - Microsoft\. From there, you navigate workspaces and items just like any local directory.
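Because the hierarchy maps one-to-one onto folders, you can browse it with any tool that understands a file system. Here is a minimal sketch in Python using only the standard library; the workspaces and items it prints naturally depend on your own tenant:

```python
import os
from pathlib import Path

# Default mount location of OneLake file explorer on Windows.
onelake_root = Path(os.environ["USERPROFILE"]) / "OneLake - Microsoft"

# Top-level directories are Fabric workspaces...
for workspace in sorted(p for p in onelake_root.iterdir() if p.is_dir()):
    print(workspace.name)
    # ...and the directories inside each workspace are items,
    # e.g. "Sales.Lakehouse" or "Reporting.Warehouse".
    for item in sorted(p for p in workspace.iterdir() if p.is_dir()):
        print(f"  {item.name}")
```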

A key detail: syncing creates placeholders, not full downloads. When you browse a folder, the app pulls up-to-date metadata so you can see file names, sizes, and structure — but the actual bytes only land on your machine when you double-click a file to open it. This makes it practical even when you have access to workspaces containing terabytes of data.

Changes flow the other way too. When you create, modify, or delete a file through Windows File Explorer, those changes are automatically pushed back to OneLake. Drop a Parquet file into a Lakehouse's Files/ directory from your local machine, and it shows up in Fabric immediately. This is a genuinely useful shortcut for rapid iteration during pipeline development.
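As a sketch of that round trip, the snippet below writes a Parquet file straight into a Lakehouse's Files/ directory through the local mount. The workspace and Lakehouse names are hypothetical, and it assumes pandas and pyarrow are installed:

```python
import os
from pathlib import Path

import pandas as pd

files_dir = (
    Path(os.environ["USERPROFILE"])
    / "OneLake - Microsoft"
    / "MyWorkspace"              # hypothetical workspace name
    / "MyLakehouse.Lakehouse"    # hypothetical Lakehouse item
    / "Files"
    / "raw"
)
files_dir.mkdir(parents=True, exist_ok=True)

df = pd.DataFrame({"id": [1, 2, 3], "amount": [9.5, 12.0, 7.25]})
# Writing through the mount is enough — the file is pushed up to OneLake
# automatically and appears in the Fabric portal.
df.to_parquet(files_dir / "sample.parquet", index=False)
```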

Practical workflows

Uploading raw data to a Lakehouse is as simple as dragging files into the appropriate Files/ directory. No portal, no AzCopy command, no storage account SAS token to juggle. For batch ingestion during development or ad hoc uploads, this is significantly faster than the alternatives.
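The same idea works from code rather than drag-and-drop. As an illustrative sketch with placeholder paths and names, this batch-copies every CSV from a local export folder into a Lakehouse landing directory:

```python
import os
import shutil
from pathlib import Path

source_dir = Path(r"C:\data\exports")  # hypothetical local export folder
target_dir = (
    Path(os.environ["USERPROFILE"])
    / "OneLake - Microsoft"
    / "MyWorkspace"              # hypothetical workspace name
    / "MyLakehouse.Lakehouse"    # hypothetical Lakehouse item
    / "Files"
    / "landing"
)
target_dir.mkdir(parents=True, exist_ok=True)

# Copy each CSV into the mounted Files/ folder; the sync to OneLake is automatic.
for csv_file in source_dir.glob("*.csv"):
    shutil.copy2(csv_file, target_dir / csv_file.name)
    print(f"Uploaded {csv_file.name}")
```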

Inspecting Lakehouse file structure without spinning up a Spark session is another common use case. You can see the Delta table directory layout, check for the presence of _delta_log/ folders, and verify partition structures — all from File Explorer or any local tool (VS Code, for instance) that can browse directories.
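For example, a small script like the one below reports which folders under Tables/ look like Delta tables by checking for a _delta_log/ directory. The workspace and Lakehouse names are again hypothetical, and because browsing only syncs metadata, this doesn't download the data files:

```python
import os
from pathlib import Path

tables_dir = (
    Path(os.environ["USERPROFILE"])
    / "OneLake - Microsoft"
    / "MyWorkspace"              # hypothetical workspace name
    / "MyLakehouse.Lakehouse"    # hypothetical Lakehouse item
    / "Tables"
)

for table in sorted(p for p in tables_dir.iterdir() if p.is_dir()):
    # A Delta table carries a _delta_log directory next to its Parquet files.
    has_log = (table / "_delta_log").is_dir()
    parquet_count = len(list(table.rglob("*.parquet")))
    print(f"{table.name}: delta_log={'yes' if has_log else 'no'}, "
          f"parquet files={parquet_count}")
```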

Editing CSV and Excel files works similarly to OneDrive. Open a .csv or .xlsx file from the OneLake mount in Excel, make your edits, and save. Closing the file triggers an automatic sync back to OneLake, and the updated version is immediately visible in the Fabric web portal.
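The same round trip works from any tool that can write to the mount, not just Excel. As an illustrative sketch (the file path is hypothetical and pandas is assumed):

```python
import os
from pathlib import Path

import pandas as pd

csv_path = (
    Path(os.environ["USERPROFILE"])
    / "OneLake - Microsoft"
    / "MyWorkspace"              # hypothetical workspace name
    / "MyLakehouse.Lakehouse"    # hypothetical Lakehouse item
    / "Files"
    / "reference"
    / "product_categories.csv"
)

# Opening the file pulls the actual bytes down; saving it syncs the change back.
df = pd.read_csv(csv_path)
df["category"] = df["category"].str.strip().str.title()
df.to_csv(csv_path, index=False)
```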

Bridging local and portal workflows is handled cleanly via right-click context menus. Right-click a workspace or item and select OneLake → View Workspace Online (or View Item Online) to jump directly to that location in the Fabric web portal. The portal always opens to the item's root folder, which is a minor limitation to know about.

My takeaway

OneLake File Explorer is a small tool, but it closes a real gap in the Fabric developer experience. The pattern of "cloud storage you can mount locally" is one developers already trust from OneDrive and S3-backed tools — applying it to a unified data lake that spans Lakehouses, Warehouses, and real-time analytics items is a sensible extension.

The practical takeaway is this: if you're working with Microsoft Fabric and still uploading files through the portal UI or writing custom upload scripts for development work, try the file explorer first. It won't replace programmatic pipelines in production, but it significantly reduces the overhead of the inner development loop.

More information

Access Fabric data locally with OneLake file explorer - Microsoft Fabric | Microsoft Learn
