In this tutorial, you ingest more dimensional and fact tables from the Wide World Importers (WWI) sample database into the lakehouse. Pipelines let you ingest data at scale, with the option to schedule data workflows.
Prerequisites
- If you don't have a lakehouse, you must create one first.
Ingest data
In this section, you use the Copy data activity of the Data Factory pipeline to ingest sample data from an Azure storage account to the Files section of the lakehouse you created in the previous tutorial.
In the workspace you created in the previous tutorial, select New item.
Search for Pipeline in the search bar and select the Pipeline tile.
In the New pipeline dialog box, specify the name as IngestDataFromSourceToLakehouse and select Create.
From your new pipeline's Home tab, select Pipeline activity > Copy data.
Select the new Copy data activity from the canvas. Activity properties appear in a pane below the canvas, organized across tabs including General, Source, Destination, Mapping, and Settings. You might need to expand the pane upwards by dragging the top edge.
On the General tab, enter Data Copy to Lakehouse in the Name field. Leave the other fields with their default values.
On the Source tab, select the Connection dropdown and then select Browse all.
On the Choose a data source to get started page, search for and select Azure Blobs.
On the Connect data source page, enter the details below, and then select Connect to create the connection to the data source. For this tutorial, all the sample data is available in a public container of Azure Blob Storage; you connect to this container to copy data from it.
| Property | Value |
| --- | --- |
| Account name or URL | https://fabrictutorialdata.blob.core.windows.net/sampledata/ |
| Connection | Create new connection |
| Connection name | wwisampledata |
| Authentication kind | Anonymous |
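If you'd like to confirm that the sample data is reachable before you build the pipeline, you can list the public container from any machine with the azure-storage-blob Python package installed. This is an optional check that runs outside Fabric, not part of the tutorial steps; the account URL, container name, and directory match the values in the tables in this section.

```python
# Optional check (outside Fabric): list the tutorial's sample files.
# Assumes `pip install azure-storage-blob`. No credential is passed because
# the container allows anonymous (public) read access.
from azure.storage.blob import ContainerClient

container = ContainerClient(
    account_url="https://fabrictutorialdata.blob.core.windows.net",
    container_name="sampledata",
)

# List the parquet files under the directory used on the Source tab.
for blob in container.list_blobs(name_starts_with="WideWorldImportersDW/parquet"):
    print(blob.name, blob.size)
```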
On the Source tab, the newly created connection is selected by default. Specify the following properties before moving to the destination settings.

| Property | Value |
| --- | --- |
| Connection | wwisampledata |
| File path type | File path |
| File path | Container name (first text box): sampledata; Directory name (second text box): WideWorldImportersDW/parquet |
| Recursively | Checked |
| File format | Binary |

On the Destination tab, specify the following properties:
| Property | Value |
| --- | --- |
| Connection | wwilakehouse (choose your lakehouse if you named it differently) |
| Root folder | Files |
| File path | Directory name (first text box): wwi-raw-data |
| File format | Binary |

You have now configured the copy data activity. Select the Save icon on the top ribbon (below Home) to save your changes, and then select Run to execute the pipeline and its activity. You can also schedule pipelines to refresh data at defined intervals to meet your business requirements; for this tutorial, we run the pipeline only once by selecting Run.
This action triggers data copy from the underlying data source to the specified lakehouse and might take up to a minute to complete. You can monitor the execution of the pipeline and its activity under the Output tab. The activity status changes from Queued > In progress > Succeeded.
Tip
Select View Run details to see more information about the run.
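Besides the Run button and the Output tab, pipelines can also be started on demand programmatically. The sketch below is a minimal example, assuming the Fabric REST API job-scheduler endpoint for pipeline items; the workspace ID, pipeline item ID, and bearer token are placeholders you must supply yourself.

```python
# A minimal sketch of starting the pipeline through the Fabric REST API
# job scheduler. The endpoint shape and jobType value are assumptions based
# on the public Fabric job-scheduler API; IDs and token are placeholders.
import requests

WORKSPACE_ID = "<workspace-id>"      # placeholder: your workspace's GUID
PIPELINE_ID = "<pipeline-item-id>"   # placeholder: the pipeline item's GUID
TOKEN = "<bearer-token>"             # placeholder: a Microsoft Entra token

url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{PIPELINE_ID}/jobs/instances?jobType=Pipeline"
)
response = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()

# A 202 response means the run was accepted; the Location header points at
# the job instance, which you can poll for Queued/In progress/Succeeded.
print(response.status_code, response.headers.get("Location"))
```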
After the copy activity succeeds, open your lakehouse (wwilakehouse) to view the data. Refresh the Files section to see the ingested data: a new wwi-raw-data folder appears, containing the files copied from the Azure Blob Storage container.
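You can also verify the ingested files from a notebook. This is a minimal sketch, assuming a Fabric notebook with wwilakehouse attached as its default lakehouse so that the relative Files/ path resolves.

```python
# From a Fabric notebook with wwilakehouse as the default lakehouse:
# list the ingested files. mssparkutils ships with Fabric notebooks, and
# the relative "Files/" path resolves against the default lakehouse.
from notebookutils import mssparkutils

for entry in mssparkutils.fs.ls("Files/wwi-raw-data"):
    print(entry.name, entry.size)
```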
To load incremental data into a lakehouse, see Incrementally load data from a data warehouse to a lakehouse.