FTP walkthrough
Data Integration offers a streamlined way to connect to an FTP server and transfer data to your chosen Target, helping you uncover insights within your datasets.
Create a new River within Data Integration by selecting an extraction method and managing your data storage, then use Data Integration's features to retrieve data from an FTP server and incorporate it into your data ecosystem.
Prerequisite
Before you can start extracting data from an FTP server using Data Integration, ensure you have a configured FTP server connection. After creating a connection, you can set up and execute a data extraction River.
Extracting data from FTP to your chosen target
Procedure
- Navigate to the Data Integration Account.
- Create a new River: Choose Create River from the top-left corner.
- Choose River Type: Select Source to Target as the River type.
- Locate FTP Server in the list of available data sources under the Storage section.
- Click FTP Server.
- Provide a name for your River.
- Under Source Connection, select an existing connection or create a new one.
- Select Folder: Choose the desired folder from the list of available directories on the FTP server.
- Choose Extraction Method:
  - All - This method pulls all data from the source and replaces all existing data in the target, regardless of time periods. It is useful when the source data is complete and up to date and you want to create a fresh copy of the source data in the target.
  - Incremental load by file modified timestamp - This method lets you control the date range of your data.
    - Start Date is mandatory.
    - Data is retrieved for the date range specified between the start and end dates.
    - If you leave the end date blank, data is pulled up to the time of the River's run.
    - Dates are in UTC.
    - Use the Last Days Back For Each Run option to extend the start date and retrieve data from a specified number of days before the selected start date.
  - Incremental run: by template - Templates let you run over folders and load files in the order in which they were created. Choose a template type (Timestamp or Epoch time) and a data date range.
    - Timestamp Template: Use {} with the appropriate timestamp components to establish the folder format.
    - Epoch Time Template: Use {e} (for epoch in seconds) or {ee} (for epoch in milliseconds) to define the folder structure for running by epoch time. Enter the desired starting value and an optional ending value.
    - This approach applies to the entire directory and is not applicable to individual files.
    - Start Date is mandatory.
    - Data is retrieved for the date range specified between the start and end dates.
    - If you leave the end date blank, data is pulled up to the time of the River's run.
    - Dates are in UTC.
    - Use the Interval Chunk Size field when you intend to retrieve data over extended time frames.
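Taken together, the incremental date options above can be sketched as follows. This is a minimal illustration with hypothetical dates, not Data Integration's actual implementation:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical incremental-run settings; all dates are in UTC.
start_date = datetime(2024, 1, 10, tzinfo=timezone.utc)
end_date = None                     # a blank end date
last_days_back = 3                  # Last Days Back For Each Run
interval_chunk = timedelta(days=7)  # Interval Chunk Size for long ranges

# Last Days Back extends the start date backwards.
effective_start = start_date - timedelta(days=last_days_back)
# A blank end date means data is pulled up to the time the River runs.
effective_end = end_date or datetime.now(timezone.utc)

# Long ranges are split into bounded windows, one per chunk.
chunks = []
cursor = effective_start
while cursor < effective_end:
    upper = min(cursor + interval_chunk, effective_end)
    chunks.append((cursor, upper))
    cursor = upper
```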
- File Path Prefix and Pattern: For the chosen extraction method, specify the file path prefix and file pattern to filter by.
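As a rough illustration of how a prefix and pattern narrow down the selected files (the file names are hypothetical; the actual filtering is performed by Data Integration):

```python
from fnmatch import fnmatch

# Hypothetical file listing from the FTP folder.
files = [
    "exports/2024/orders_01.csv",
    "exports/2024/orders_02.csv",
    "exports/2024/readme.txt",
    "archive/orders_01.csv",
]

prefix = "exports/2024/"   # file path prefix
pattern = "orders_*.csv"   # file pattern

# Keep only files under the prefix whose remainder matches the pattern.
matched = [
    f for f in files
    if f.startswith(prefix) and fnmatch(f[len(prefix):], pattern)
]
```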
- Select After-Pull Action:
  - Retain the original location.
  - Transfer to the archive path: Select the container name and specify the optional archived folder path.
  - Delete.
- Choose the number of files to pull (leave empty to pull all files).
- Select the desired file type: CSV, Excel, JSON, or Other. For each type, you can include a compressed file; make sure to select the Is Compressed checkbox.
File types
CSV
CSV lets you select the delimiter and quote character according to your preference.
Excel
In Excel, you can specify sheets by position, using a comma separator for multiple sheets; an empty list means all sheets are included. Typically, the first row serves as the header, but you can choose a different row. If your dataset begins after the header row, set "start loading rows from row number" to the row following the header.
Scroll to the bottom of this section to use “Auto Mapping,” which shows you the structure of your file.
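The way these Excel options interact can be sketched as follows (the values are hypothetical; Data Integration applies the settings for you):

```python
# "0,2" selects the first and third sheets by position, comma-separated.
sheet_positions = "0,2"
sheets = [int(p) for p in sheet_positions.split(",")] if sheet_positions else []
# An empty list would mean all sheets are included.

header_row = 3                       # the header sits on row 3 instead of row 1
start_loading_from = header_row + 1  # data begins on the row after the header
```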
JSON
Only the JSON Lines (jsonlines) format is supported.
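For reference, a JSON Lines file holds one complete JSON object per line (the field names below are hypothetical):

```python
import json

# Each line is a standalone JSON object; the fields are illustrative only.
jsonl_content = '{"id": 1, "name": "alpha"}\n{"id": 2, "name": "beta"}\n'

records = [json.loads(line) for line in jsonl_content.splitlines()]
```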
Other
When opting for Other files, Data Integration accepts them in their original state, without any data transformation, and limits your Target selection to storage.
- Navigate to the Target tab and choose a Target warehouse.
- For each file type, you can choose to include a compressed version by selecting the Is Compressed checkbox.
- When referencing a file within another file, make sure to add the correct file extension in the prefix field; otherwise, Data Integration cannot identify the correct file.
- If two files share the same name, one compressed and one not, marking the file as compressed during execution can cause Data Integration to select the uncompressed version, leading to an error.
- To avoid this, use a prefix that includes the file format, or provide the full file name along with its format.
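The naming pitfall described above can be sketched like this (hypothetical file names):

```python
# Two files share a base name; one is compressed, one is not.
files = ["daily_report.csv", "daily_report.csv.gz"]

ambiguous_prefix = "daily_report"    # matches both files
safe_prefix = "daily_report.csv.gz"  # full name with format, unambiguous

ambiguous_matches = [f for f in files if f.startswith(ambiguous_prefix)]
safe_matches = [f for f in files if f.startswith(safe_prefix)]
```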