Used for working with the Snowflake data warehouse.
This connection type supports the following task types:
This connection can be used in a Data Migration.
| Usable as Source | Usable as Target | Usable as Incremental Source | Usable as Incremental Target |
|---|---|---|---|
| ✔ | ✔ | ✔ | ✔ |
| Template Name | Description |
|---|---|
| Snowflake | The standard Snowflake connection string; requires details such as the host, account, role, database, and warehouse to connect to. |
When using the connection string, the YOUR_FULL_HOST value can contain your YOUR_ACCOUNT value, as shown in the examples below where the account name forms the first part of the host:
Connection String Template:

"account=YOUR_ACCOUNT;host=YOUR_FULL_HOST.snowflakecomputing.com;role=PUBLIC;db=YOUR_DATABASE;"

Example Connection Strings:

"account=Sample;host=sample.ap-southeast-2.snowflakecomputing.com;role=PUBLIC;db=SAMPLE_WH;WAREHOUSE=DEMO_WH"

"account=readeraccount01;host=sample.ap-southeast-2.snowflakecomputing.com;role=PUBLIC;db=SampleDatabase;WAREHOUSE=DEMO_WH"
LoadDateTime fields are stored in UTC in your database.
The username and password fields for this connection type should be your Snowflake Data Warehouse credentials.
The role of your account should be provided when creating the connection; if it is not known, the default role of PUBLIC will be used.
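For reference, the same account, credential, role, database, and warehouse details map onto a direct Snowflake session. The sketch below is a hedged example using the snowflake-connector-python library, not this connection type's own mechanism; all values are placeholders, and the role is set to PUBLIC to mirror the default described above.

```python
# Hedged illustration only: verify the same details with a direct Snowflake
# session via snowflake-connector-python (`pip install snowflake-connector-python`).
# This is not how the connection type itself authenticates; all values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT",
    user="YOUR_USERNAME",      # your Snowflake Data Warehouse credentials
    password="YOUR_PASSWORD",
    role="PUBLIC",             # mirrors the default role described above
    database="YOUR_DATABASE",
    warehouse="DEMO_WH",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_ROLE(), CURRENT_DATABASE(), CURRENT_WAREHOUSE()")
    print(cur.fetchone())
finally:
    conn.close()
```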
Snowflake data migrations require a blob-based staging connection, such as Azure Blob, together with a corresponding blob file definition.
A data migration from a Snowflake source to a different Snowflake target requires either a staging connection or internal staging. To use internal staging, SnowSQL must be installed on the same host as the Agent.
When using an internal stage for Snowflake connections, users must install SnowSQL. If SnowSQL is installed while an Agent is already running, the Agent must be restarted.
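As a rough illustration of what internal staging involves, the hedged sketch below invokes the locally installed SnowSQL client from Python to upload a file into a table's internal stage. The account, database, file path, and table name are placeholders, and this is not the Agent's own implementation.

```python
# Hedged sketch (assumed details, not the Agent's implementation): call the
# locally installed SnowSQL client to upload a file into a table's internal
# stage. SnowSQL prompts for the password or reads the SNOWSQL_PWD
# environment variable; it is not passed on the command line here.
import subprocess

subprocess.run(
    [
        "snowsql",
        "-a", "YOUR_ACCOUNT",
        "-u", "YOUR_USERNAME",
        "-d", "YOUR_DATABASE",
        "-w", "DEMO_WH",
        # PUT uploads the local CSV into the table's internal stage (@%MY_TABLE).
        "-q", "PUT file:///tmp/my_table.csv @%MY_TABLE AUTO_COMPRESS=TRUE",
    ],
    check=True,  # fail loudly if SnowSQL is missing or the PUT does not succeed
)
```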
When importing double-precision data from MySQL or SQL Server into Snowflake, Snowflake currently cannot handle the minimum double value. To work around this, cast the value to a varchar before inserting it into Snowflake.
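The hedged sketch below shows what such a source-side cast can look like; the table and column names are hypothetical.

```python
# Hedged sketch of the workaround with hypothetical table and column names:
# read the double column as text in the source query so the minimum double
# (about -1.7976931348623157e+308) never reaches Snowflake as a raw float.
SOURCE_QUERIES = {
    # MySQL uses CHAR as the text cast target.
    "mysql": "SELECT id, CAST(double_col AS CHAR(64)) AS double_col_text FROM source_table",
    # SQL Server casts to VARCHAR.
    "sqlserver": "SELECT id, CAST(double_col AS VARCHAR(64)) AS double_col_text FROM source_table",
}

print(SOURCE_QUERIES["sqlserver"])
```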
We suggest using CSV rather than Parquet when importing timezone-sensitive data, to avoid timestamp handling issues with the Snowflake connector.