Flex Connector Advanced Settings
Warning:
We generally do not recommend changing advanced settings unless you are an experienced Panoply user. In the case of the Flex Connector, however, some changes might be required.
For users who have some experience working with their data in Panoply, there are a number of items that can be customized for this connector.
- Disable configuration validation: Select this option to disable and skip the configuration validation. Note that validation is important and helps you confirm that the connector was configured properly.
- Downloadable link: Select this option when the API response includes a URL to download a file or retrieve the data.
- URL key: The path to the URL in the response object. Only available when Downloadable link is selected. For example: results.link
- Data key: The path to the data in the response object. Only available when Downloadable link is not selected. For example: data.products
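As a rough sketch (not Panoply's actual implementation), the example below shows how a dot-separated path such as results.link or data.products is typically resolved against a parsed API response. The response shapes and field names are made up for illustration.

```python
# Illustrative only: resolve a dot-separated path against a parsed JSON response.
def resolve_path(response: dict, path: str):
    value = response
    for key in path.split("."):
        value = value[key]
    return value

# Hypothetical response when Downloadable link is NOT selected (Data key = data.products)
api_response = {"data": {"products": [{"id": 1}, {"id": 2}]}}
print(resolve_path(api_response, "data.products"))      # [{'id': 1}, {'id': 2}]

# Hypothetical response when Downloadable link IS selected (URL key = results.link)
download_response = {"results": {"link": "https://example.com/export.csv"}}
print(resolve_path(download_response, "results.link"))  # https://example.com/export.csv
```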
- Array to flatten: Enter the path to an array you want to flatten. For example: dict1.dict2.array_title
Notes
- Flattening an array multiplies the number of records in the table. For example, a record containing an array with 5 elements will generate 5 records in the destination table.
- You can only flatten one array in each Flex Connector.
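As a rough sketch of what flattening does (illustrative only, not Panoply's implementation): each element of the flattened array becomes its own record, with the parent fields repeated.

```python
# Illustrative only: flattening dict1.dict2.array_title turns one parent record
# with a 5-element array into 5 destination records.
record = {
    "id": 42,
    "dict1": {"dict2": {"array_title": ["a", "b", "c", "d", "e"]}},
}

flattened = [
    {"id": record["id"], "array_title": element}
    for element in record["dict1"]["dict2"]["array_title"]
]
print(len(flattened))  # 5
```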
- Skip XML attributes: When connecting to an API that returns XML results, some of the returned XML fields might have attributes attached to them. Select this option to skip all of the XML attributes and ingest only the XML values. For example, for the data <score type="integer" id="30">100</score>, Panoply will ingest the value 100 into the score column.
- Skip rows in CSV response: Select this option to skip the first N rows in a CSV response.
- Number of rows to skip: When Skip rows in CSV response is selected, enter the number of rows to skip.
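The sketch below (illustrative only) shows the difference between an XML element's attributes and its value for the Skip XML attributes example above; with the option enabled, only the element value is ingested.

```python
# Illustrative only: attributes vs. value of <score type="integer" id="30">100</score>
import xml.etree.ElementTree as ET

element = ET.fromstring('<score type="integer" id="30">100</score>')
print(element.attrib)  # {'type': 'integer', 'id': '30'}  <- skipped
print(element.text)    # '100'                            <- ingested into the score column
```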
- Errors waiting time: Enter error codes and waiting time in seconds in the following format: error_code=waiting_time; error_code2=waiting_time2.
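For instance, a hypothetical value of 429=60; 503=120 would mean "wait 60 seconds after an HTTP 429 and 120 seconds after an HTTP 503". The sketch below (illustrative only, with made-up codes) shows how a string in this format maps to error codes and waiting times.

```python
# Illustrative only: parse "error_code=waiting_time; error_code2=waiting_time2"
# into a mapping of error code -> seconds to wait.
setting = "429=60; 503=120"  # hypothetical values

waits = {}
for pair in setting.split(";"):
    code, seconds = pair.strip().split("=")
    waits[int(code)] = int(seconds)

print(waits)  # {429: 60, 503: 120}
```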
- Primary Key: Primary Keys are the column(s) whose values uniquely identify a row. Once identified, Panoply upserts new data and prevents duplicate data. Panoply automatically selects the Primary Key using the available ID columns. If none are available, you may configure this manually by choosing the columns to use.
- Incremental Key: Enter the desired incremental key based on the destination table. The incremental value will be used in the next Flex Connector collection. You will need to use {incval} in the URL parameters or POST data. The format of the incremental value must match the format the API expects as part of the API call. When using {incval} in the API call, you can wrap it in several date functions (functions are identified by << >>):
  - date_add(date, period, amount) - Adds or subtracts specific periods from the date value.
    - date - the date value or {incval}
    - period - The period type. Acceptable values: 'seconds', 'minutes', 'hours', 'days', 'months' or 'years'.
    - amount - The number of periods to add or subtract.
  - date_format(date, pattern) - Changes the date format to any format.
    - date - the date value or {incval}
    - pattern - The desired date pattern. See available values here.
  - to_timestamp(date) - Returns the Epoch Unix timestamp of the given date.
  - utcnow() - Returns the current UTC timestamp.
Example:
<<to_timestamp(date_add('{incval}', 'months', -1))>>
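As a rough illustration (not Panoply's implementation), the sketch below shows the kind of value the example expression would produce: the last collected incremental value, shifted back one month and converted to a Unix timestamp. The stored incremental value and the substitution logic are assumptions made for the example.

```python
# Illustrative only: what <<to_timestamp(date_add('{incval}', 'months', -1))>>
# would conceptually evaluate to for a stored incremental value.
from datetime import datetime, timezone
from dateutil.relativedelta import relativedelta  # third-party, used here for month arithmetic

incval = "2024-05-07"  # hypothetical last incremental value

date = datetime.strptime(incval, "%Y-%m-%d").replace(tzinfo=timezone.utc)
shifted = date + relativedelta(months=-1)  # date_add('{incval}', 'months', -1)
epoch = int(shifted.timestamp())           # to_timestamp(...)

print(epoch)  # Unix timestamp for 2024-04-07, substituted into the API call
```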
- Destination Schema: This is the name of the target schema where the data is saved. The default schema for data warehouses built on Google BigQuery is panoply. The default schema for data warehouses built on Amazon Redshift is public. This cannot be changed once a source has been collected.
- Exclude: The Exclude option allows you to exclude certain data, such as names, addresses, or other personally identifiable information. Enter the column names of the data to exclude.
- Truncate: Truncate deletes all the current data stored in the destination tables, but not the tables themselves. Afterwards, Panoply will recollect all of the available data for this connector.
- Lock Schema: Lock Schema blocks any changes to the tables' schemas, such as adding new columns, changing data types, or adding new tables.
- Load Strategy: Control the ingestion behavior of existing records. You can either set it to Upsert (Update existing and insert new records) or Append (Always insert the records). The default strategy is Upsert.
- Nested Data: Control the ingestion behavior of nested objects. You can either set it to create one-to-many tables (the default behavior) or flatten the nested data into the parent table.
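To make the Load Strategy options above concrete, the sketch below (illustrative only, not Panoply's loading code) contrasts Upsert, which overwrites a record sharing a Primary Key, with Append, which always inserts a new record.

```python
# Illustrative only: Upsert vs. Append semantics, keyed on a Primary Key ("id").
existing = [{"id": 1, "price": 10}]
incoming = [{"id": 1, "price": 12}, {"id": 2, "price": 7}]

# Upsert: update rows with a matching Primary Key, insert the rest.
table = {row["id"]: row for row in existing}
for row in incoming:
    table[row["id"]] = row
print(list(table.values()))  # [{'id': 1, 'price': 12}, {'id': 2, 'price': 7}]

# Append: always insert, so duplicates of id=1 can accumulate.
print(existing + incoming)   # three rows, two of them with id=1
```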
- Click Save Changes then click Collect.
- The connector appears grayed out while the collection runs.
- You may add additional connectors while this collection runs.
- You can monitor this collection from the Jobs page or the Connectors page.
- After a successful collection, navigate to the Tables page to review the data results.