Redshift Advanced Settings

Warning:

We do not recommend changing advanced settings unless you are an experienced Panoply user.

For users who have some experience working with their data in Panoply, the following items can be customized for this data source.

  1. Incremental Key: By default, Panoply fetches all of your Redshift data on each run. If you only want to collect some of your data, enter a column name to use as the incremental key. The column must be logically incremental. Panoply will keep track of the maximum value reached during the previous run and will start there on the next run.
    • Incremental Key configurations (a sketch of this behavior follows this list):
      • If no Incremental Key is configured, Panoply collects all of the data from the selected Redshift tables or views on each run.
      • If the Incremental Key is configured by column name, but not the column value, Panoply collects all data, and then automatically configures the column value at the end of a successful run.
      • If the Incremental Key is configured by column name and the column value (manually or automatically), Panoply will use that value as the place to begin the collection.
      • The value is updated at the end of a successful collection to the last value collected.
      • In future collections, this new value is used as the starting value, so Panoply looks for data where the Incremental Key value is greater than the point where the previous collection ended.
      • When an Incremental Key is configured, Panoply will look for that key in each of the selected tables and views. If the table or view does not have the column indicated as the Incremental Key, it must be collected as a separate instance of the data source.
      • A table or view may have some records with a null value for the Incremental Key, or records that do not capture the Incremental Key at all. In these situations, Panoply omits those records instead of failing the entire data source.
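
To make the incremental behavior concrete, here is a minimal sketch of the collection logic, written in Python. The table name public.orders, the column name updated_at, and the function names are hypothetical and used only for illustration; Panoply's internal implementation may differ.

```python
# Minimal sketch of incremental collection (illustrative only; not Panoply's
# internal code). Assumes a hypothetical Incremental Key column "updated_at".

def build_collection_query(table, incremental_key=None, last_value=None):
    """Return the SELECT statement a collection run would use."""
    query = f"SELECT * FROM {table}"
    if incremental_key and last_value is not None:
        # Only rows beyond the previously recorded maximum are fetched.
        query += f" WHERE {incremental_key} > '{last_value}'"
    return query

def update_watermark(rows, incremental_key):
    """Record the maximum Incremental Key value reached in this run."""
    values = [r[incremental_key] for r in rows if r.get(incremental_key) is not None]
    return max(values) if values else None

# First run: no stored value, so all rows are collected.
rows = [
    {"id": 1, "updated_at": "2021-07-01"},
    {"id": 2, "updated_at": "2021-07-20"},
    {"id": 3, "updated_at": None},  # records with a null key are omitted, not fatal
]
print(build_collection_query("public.orders", "updated_at"))
watermark = update_watermark(rows, "updated_at")  # "2021-07-20"

# The next run starts where the previous collection ended.
print(build_collection_query("public.orders", "updated_at", watermark))
```
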
  2. Destination Schema: The name of the target schema where the data is saved. The default schema for data warehouses built on Google BigQuery is panoply. The default schema for data warehouses built on Amazon Redshift is public. This cannot be changed once a source has been collected.
  3. Destination: The default destination is {__tablename}. Because there is no prefix, Panoply uses the table name from the source as the destination table name.
  4. Primary Key: Defines the column that contains the table's primary key. If this option is left blank and the table does not contain an id column, Panoply will insert an id column, formatted as a GUID, such as 2cd570d1-a11d-4593-9d29-9e2488f0ccc2. The possible outcomes are summarized in the table below, followed by a short sketch of the id behavior.
Redshift id column | User-entered primary key | Outcome
------------------ | ------------------------ | -------
yes | no | Panoply will automatically select the id column and use it as the primary key.
yes | yes | Not recommended. Panoply will use the id column but will overwrite the original source values. If you want Panoply to use your database table's id column, do not enter a value into the Primary Key field.
no | no | Panoply creates an id column formatted as a GUID, such as 2cd570d1-a11d-4593-9d29-9e2488f0ccc2.
no | yes | Panoply creates a hashed id column using the primary key values entered, while retaining the source columns.

WARNING: Any user-entered primary key will be used across all of the tables and views selected.
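
The id outcomes in the table above can be illustrated with a short sketch. The choice of SHA-1 for the hashed id and the column names are assumptions made for illustration only; Panoply documents that the generated id is a GUID-style value and that user-entered keys produce a hashed id, but not the exact algorithm.

```python
import hashlib
import uuid

def assign_id(row, user_primary_key=None):
    """Illustrative id assignment following the outcomes table above."""
    if user_primary_key:
        # User-entered primary key: build a hashed id from the key values,
        # leaving the original source columns in place.
        key_values = "|".join(str(row[col]) for col in user_primary_key)
        return hashlib.sha1(key_values.encode()).hexdigest()
    if "id" in row:
        # No user-entered key, but the table has an id column:
        # that column is used as the primary key.
        return row["id"]
    # No id column and no user-entered key: generate a GUID-style id.
    return str(uuid.uuid4())

row = {"email": "a@example.com", "created": "2021-07-26"}
print(assign_id(row))                              # GUID, e.g. 2cd570d1-...
print(assign_id(row, user_primary_key=["email"]))  # hashed id from the key values
```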

  5. Exclude: The Exclude option allows you to exclude certain data, such as names, addresses, or other personally identifiable information. Enter the column names of the data to exclude.

  6. Parse String: If the data to be collected contains JSON, include the JSON text attributes to be parsed (see the sketch after this list).
  7. Truncate: Truncate deletes all the current data stored in the destination tables, but not the tables themselves. Afterwards, Panoply will recollect all the available data for this data source.
  8. Click Save Changes and then click Collect.
    • The data source appears grayed out while the collection runs.
    • You may add additional data sources while this collection runs.
    • You can monitor this collection from the Jobs page or the Data Sources page.
    • After a successful collection, navigate to the Tables page to review the data results.
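
As referenced in the Parse String setting above, the sketch below shows the kind of transformation that parsing a JSON text attribute implies. The metadata column name and the flattened field naming are assumptions for illustration, not Panoply's documented output format.

```python
import json

# A collected row where the "metadata" column holds JSON as plain text.
# ("metadata" is a hypothetical column name used only for illustration.)
row = {"id": 7, "metadata": '{"plan": "pro", "seats": 5}'}

def parse_string_column(record, column):
    """Parse a JSON text attribute into separate, queryable fields."""
    parsed = json.loads(record.pop(column))
    for key, value in parsed.items():
        record[f"{column}_{key}"] = value
    return record

print(parse_string_column(row, "metadata"))
# {'id': 7, 'metadata_plan': 'pro', 'metadata_seats': 5}
```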
