This guide focuses on storing and loading Pandas DataFrames in Snowflake. Dagster also supports using PySpark DataFrames with Snowflake. The concepts from this guide apply to working with PySpark DataFrames, and you can learn more about setting up and using the Snowflake I/O manager with PySpark DataFrames in the reference guide.
To use the Snowflake I/O manager, you'll need to gather the following information:
Snowflake account name: You can find this by logging into Snowflake and getting the account name from the URL.
Snowflake credentials: You can authenticate with Snowflake in two ways: with a username and password, or with a username and private key.
The Snowflake I/O manager can read all of these authentication values from environment variables. In this guide, we use password authentication and store the username and password as SNOWFLAKE_USER and SNOWFLAKE_PASSWORD, respectively.
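Before launching Dagster, you may want to confirm these variables are set, since the I/O manager reads them at runtime. The following is a minimal sketch, not something Dagster requires; the variable names match the ones used in this guide:

import os

# Fail fast if the credentials the I/O manager expects aren't set.
for var in ("SNOWFLAKE_USER", "SNOWFLAKE_PASSWORD"):
    if var not in os.environ:
        raise RuntimeError(f"Missing required environment variable: {var}")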
The Snowflake I/O manager requires some configuration to connect to your Snowflake instance. The account and user are required to connect to Snowflake, along with one method of authentication: either a password or a private key. Additionally, you need to specify a database where all the tables should be stored.
You can also provide some optional configuration to further customize the Snowflake I/O manager. You can specify a warehouse and schema where data should be stored, and a role for the I/O manager.
from dagster_snowflake_pandas import SnowflakePandasIOManager
from dagster import Definitions, EnvVar
defs = Definitions(
    assets=[iris_dataset],
    resources={
        "io_manager": SnowflakePandasIOManager(
            account="abc1234.us-east-1",  # required
            user=EnvVar("SNOWFLAKE_USER"),  # required
            password=EnvVar("SNOWFLAKE_PASSWORD"),  # password or private key required
            database="FLOWERS",  # required
            role="writer",  # optional, defaults to the default role for the account
            warehouse="PLANTS",  # optional, defaults to default warehouse for the account
            schema="IRIS",  # optional, defaults to PUBLIC
        )
    },
)
With this configuration, if you materialized an asset called iris_dataset, the Snowflake I/O manager would be permissioned with the role writer and would store the data in the FLOWERS.IRIS.IRIS_DATASET table in the PLANTS warehouse.
Finally, in the Definitions object, we assign the SnowflakePandasIOManager to the io_manager key. io_manager is a reserved key to set the default I/O manager for your assets.
For more info about each of the configuration values, refer to the SnowflakePandasIOManager API documentation.
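If you authenticate with a private key instead of a password, the configuration is similar. The following is a hedged sketch: it assumes the private_key_path and private_key_password fields accepted by the Snowflake resource config, so verify the exact field names against the API documentation for your version of dagster-snowflake. The SNOWFLAKE_PRIVATE_KEY_PASSWORD environment variable name is our own choice:

from dagster_snowflake_pandas import SnowflakePandasIOManager

from dagster import EnvVar

# Sketch of key-pair authentication in place of a password.
# Field names assumed from the Snowflake resource config; the
# env var name is illustrative, not required by Dagster.
snowflake_io_manager = SnowflakePandasIOManager(
    account="abc1234.us-east-1",
    user=EnvVar("SNOWFLAKE_USER"),
    private_key_path="/path/to/private/key/file.p8",
    private_key_password=EnvVar("SNOWFLAKE_PRIVATE_KEY_PASSWORD"),
    database="FLOWERS",
)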
The Snowflake I/O manager can create and update tables for your Dagster defined assets, but you can also make existing Snowflake tables available to Dagster.
To store data in Snowflake using the Snowflake I/O manager, you don't need to change the definitions of your assets. You can tell Dagster to use the Snowflake I/O manager, as in Step 1: Configure the Snowflake I/O manager, and Dagster will handle storing and loading your assets in Snowflake.
import pandas as pd
from dagster import asset
@asset
def iris_dataset() -> pd.DataFrame:
    return pd.read_csv(
        "https://docs.dagster.io/assets/iris.csv",
        names=[
            "sepal_length_cm",
            "sepal_width_cm",
            "petal_length_cm",
            "petal_width_cm",
            "species",
        ],
    )
In this example, we first define our asset. Here, we are fetching the Iris dataset as a Pandas DataFrame and renaming the columns. The type signature of the function tells the I/O manager what data type it is working with, so it is important to include the return type pd.DataFrame.
When Dagster materializes the iris_dataset asset using the configuration from Step 1: Configure the Snowflake I/O manager, the Snowflake I/O manager will create the table FLOWERS.IRIS.IRIS_DATASET if it does not exist and replace the contents of the table with the value returned from the iris_dataset asset.
You may already have tables in Snowflake that you want to make available to other Dagster assets. You can create source assets for these tables. By creating a source asset for the existing table, you tell Dagster how to find the table so it can be fetched for downstream assets.
from dagster import SourceAsset
iris_harvest_data = SourceAsset(key="iris_harvest_data")
In this example, we create a SourceAsset for a pre-existing table (perhaps created by an external data ingestion tool) that contains data about iris harvests. To make the data available to other Dagster assets, we need to tell the Snowflake I/O manager how to find the data.
Since we supply the database and the schema in the I/O manager configuration in Step 1: Configure the Snowflake I/O manager, we only need to provide the table name. We do this with the key parameter in SourceAsset. When the I/O manager needs to load the iris_harvest_data in a downstream asset, it will select the data in the FLOWERS.IRIS.IRIS_HARVEST_DATA table as a Pandas DataFrame and provide it to the downstream asset.
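For the I/O manager to load iris_harvest_data, the source asset must also be included in the Definitions object. A minimal sketch, extending the Definitions from Step 1 (the abbreviated configuration repeats values shown there):

from dagster_snowflake_pandas import SnowflakePandasIOManager

from dagster import Definitions, EnvVar, SourceAsset

iris_harvest_data = SourceAsset(key="iris_harvest_data")

defs = Definitions(
    # include the source asset alongside the assets Dagster materializes
    assets=[iris_dataset, iris_harvest_data],
    resources={
        "io_manager": SnowflakePandasIOManager(
            account="abc1234.us-east-1",
            user=EnvVar("SNOWFLAKE_USER"),
            password=EnvVar("SNOWFLAKE_PASSWORD"),
            database="FLOWERS",
            schema="IRIS",
        )
    },
)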
Step 3: Load Snowflake tables in downstream assets
Once you have created an asset or source asset that represents a table in Snowflake, you will likely want to create additional assets that work with the data. Dagster and the Snowflake I/O manager allow you to load the data stored in Snowflake tables into downstream assets.
import pandas as pd
from dagster import asset
# this example uses the iris_dataset asset from Step 2
@asset
def iris_cleaned(iris_dataset: pd.DataFrame) -> pd.DataFrame:
    return iris_dataset.dropna().drop_duplicates()
In iris_cleaned, the iris_dataset parameter tells Dagster that the value for the iris_dataset asset should be provided as input to iris_cleaned. If this feels too magical for you, refer to the docs for explicitly specifying dependencies.
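For example, the same dependency can be declared explicitly with AssetIn. This sketch is equivalent to the parameter-name matching above:

import pandas as pd

from dagster import AssetIn, asset

# Explicitly declare the upstream dependency instead of relying on
# the parameter name matching the upstream asset key.
@asset(ins={"iris_dataset": AssetIn("iris_dataset")})
def iris_cleaned(iris_dataset: pd.DataFrame) -> pd.DataFrame:
    return iris_dataset.dropna().drop_duplicates()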
When materializing these assets, Dagster will use the SnowflakePandasIOManager to fetch the FLOWERS.IRIS.IRIS_DATASET as a Pandas DataFrame and pass this DataFrame as the iris_dataset parameter to iris_cleaned. When iris_cleaned returns a Pandas DataFrame, Dagster will use the SnowflakePandasIOManager to store the DataFrame as the FLOWERS.IRIS.IRIS_CLEANED table in Snowflake.
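Putting the pieces together, the complete definitions for this guide might look like the following sketch, which combines the assets and configuration from the previous steps:

import pandas as pd

from dagster_snowflake_pandas import SnowflakePandasIOManager

from dagster import Definitions, EnvVar, SourceAsset, asset

@asset
def iris_dataset() -> pd.DataFrame:
    return pd.read_csv(
        "https://docs.dagster.io/assets/iris.csv",
        names=[
            "sepal_length_cm",
            "sepal_width_cm",
            "petal_length_cm",
            "petal_width_cm",
            "species",
        ],
    )

# pre-existing table populated outside of Dagster
iris_harvest_data = SourceAsset(key="iris_harvest_data")

@asset
def iris_cleaned(iris_dataset: pd.DataFrame) -> pd.DataFrame:
    return iris_dataset.dropna().drop_duplicates()

defs = Definitions(
    assets=[iris_dataset, iris_harvest_data, iris_cleaned],
    resources={
        "io_manager": SnowflakePandasIOManager(
            account="abc1234.us-east-1",
            user=EnvVar("SNOWFLAKE_USER"),
            password=EnvVar("SNOWFLAKE_PASSWORD"),
            database="FLOWERS",
            role="writer",
            warehouse="PLANTS",
            schema="IRIS",
        )
    },
)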