Data Pipelines is a native data ingestion tool that lets you import data from external files or systems directly into Fountain (no third-party middleware required). You define reusable pipeline configurations that map incoming data to Fountain entities, then trigger imports manually through the app, automatically via SFTP, or in real time via webhook when an external system routes data to Fountain.
Enablement Required
Data Pipelines is currently available to select customers only. Please contact your Fountain representative for more information about enabling this feature.
Data Pipelines supports importing the following entity types:
Workers
Jobs
Locations
Location Groups
Openings
Prospects
Imports use upsert logic: if Fountain finds a matching record, it updates it; if no match is found, it creates a new record. Matching criteria vary by entity type:
Workers — matching is based on your platform-level Worker Profiles Matching and Rehire settings, using exact-match criteria such as SSN, email, phone number, and date of birth, depending on your configuration.
Jobs, Locations, and related entities — if an external ID is provided and mapped, it is the most reliable match key. If no external ID is mapped, Fountain may fall back to name matching, which is less reliable and can produce duplicate records if names change or differ in formatting.
Matching on Name or UUID
Fountain matches incoming records against existing entities using the entity name or a UUID field. If your source data includes a field named UUID, Fountain matches on that ID first. To update an entity's name, you must include a UUID field; without one, Fountain cannot identify the existing record under its new name and creates a new entity instead.
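The sketch below illustrates this upsert order (UUID first, then name fallback). It is not Fountain's actual implementation, and the lookup tables and field names are hypothetical:

```python
# Illustrative sketch of the upsert order described above; this is not
# Fountain's implementation, and the lookup tables are hypothetical.
def upsert(record, existing_by_uuid, existing_by_name):
    uuid = record.get("uuid")
    if uuid and uuid in existing_by_uuid:
        existing_by_uuid[uuid].update(record)  # match on UUID: update in place
        return "updated"
    # Without a UUID, matching falls back to the entity name, so a renamed
    # record in the source file produces a duplicate entity instead.
    name = record.get("name")
    if name and name in existing_by_name:
        existing_by_name[name].update(record)  # match on name: update in place
        return "updated"
    return "created"  # no match found: a new record is created

jobs_by_uuid = {"a1b2c3": {"uuid": "a1b2c3", "name": "Driver"}}
jobs_by_name = {"Driver": jobs_by_uuid["a1b2c3"]}
print(upsert({"uuid": "a1b2c3", "name": "Senior Driver"}, jobs_by_uuid, jobs_by_name))  # updated
print(upsert({"name": "Picker"}, jobs_by_uuid, jobs_by_name))                           # created
```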
Navigate to Data Pipelines
Navigate to Settings, then search for Data Pipelines or scroll to the Workflows section and select it.
The Data Pipelines page
The Data Pipelines page lists all pipeline configurations for your account.
Each pipeline displays the following:
Name — The pipeline's display name
Direction — Import (inbound); outbound direction is coming soon
Data Type — The Fountain entity type the pipeline targets (e.g., Jobs, Workers)
Source — The format used (e.g., CSV)
Status — Active or inactive
Internal — Whether the pipeline is a Fountain-provided standard configuration (Internal) or one created by your team (Custom)
Created On — The date the pipeline was created
Use the search field to find a pipeline by name. Select Explore Logs to view the full execution history across all pipelines.
Standard and Custom Pipelines
Fountain provides pre-built standard pipelines for each supported entity type. Standard pipelines are ready to use without any configuration and cover standard field imports. They are listed with an Internal label.
Custom pipelines are created by your team to handle non-standard schemas, custom attributes, or specific transformation requirements. They are listed with a Custom label.
Create a Data Pipeline
Select Add a data pipeline from the Data Pipelines page. If no pipelines exist yet, select New Data Pipeline from the empty state.
Pipeline creation has two steps: Setup and Mapping.
Set Up the Pipeline
The Setup step configures the pipeline's basic properties.
Enter a name for the pipeline in the Name field.
Under Attributes to provide, Inbound from Outside to Fountain is selected by default.
Outbound Direction Is Coming Soon
The Outbound from Fountain to outside option is not yet available.
Open the Data type dropdown and select the Fountain entity you want to import data into: Workers, Jobs, Locations, Openings, Location Groups, or Prospects.
Under Sample data, provide a representative sample of your source file. Fountain uses this sample to generate the field mapping interface in the next step. Choose one of three input methods:
CSV file — Upload a CSV file (10 MB max)
JSON file — Upload a JSON file (10 MB max)
JSON — Paste JSON data directly into the text area
If your JSON source data is nested, select Enter custom path under JSON data path and specify the path to the data array.
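For example, if your records sit under nested keys rather than at the top level, the data path walks the object key by key until it reaches the array. The payload shape and the dot-separated path below are illustrative; check the in-app hint for the exact syntax Fountain expects:

```python
import json

# A hypothetical nested payload: the records live at data.workers rather
# than at the top level, so the default path would not find them.
payload = json.loads("""
{
  "data": {
    "workers": [
      {"uuid": "a1b2c3", "name": "Alex Rivera"},
      {"uuid": "d4e5f6", "name": "Sam Chen"}
    ]
  }
}
""")

# A data path such as "data.workers" resolves one key at a time
# until it reaches the array of records to import.
node = payload
for key in "data.workers".split("."):
    node = node[key]
print(node)  # the two worker records
```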
Select Next to continue to the Mapping step.
Map Fields
The Mapping step connects fields from your source data to the corresponding fields in the Fountain database.
Fountain automatically maps source columns to Fountain fields when the names match. For any column that has not been mapped, open the Select field dropdown on the right side of that row and choose the appropriate Fountain target field.
Target fields are organized into two categories:
Standard fields — Core Fountain fields such as uuid and name
Custom attributes — Custom fields defined for your account (for example, Department, Grade, FLSA Status)
To map a source field that does not appear in the main mapping list, select Add a mapping. The Add mapping panel opens on the right side of the screen. Select the source column from the Source Field dropdown, then select the target field from the Target Field list.
To remove a mapping — for example, if your sample file contains extra headers you do not want to import — select the delete icon to the right of that row.
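The automatic name matching described above is typically tolerant of superficial formatting differences. Below is a minimal sketch of one plausible normalization; Fountain's exact comparison rules may differ:

```python
# Plausible sketch of name-based auto-mapping; Fountain's actual
# comparison rules (e.g., treatment of spaces or case) may differ.
def auto_map(source_columns, fountain_fields):
    normalize = lambda s: s.strip().lower().replace(" ", "_")
    targets = {normalize(f): f for f in fountain_fields}
    mapping, unmapped = {}, []
    for col in source_columns:
        target = targets.get(normalize(col))
        if target:
            mapping[col] = target   # auto-mapped by matching name
        else:
            unmapped.append(col)    # needs a manual "Select field" choice
    return mapping, unmapped

mapping, unmapped = auto_map(["Name", "External ID"], ["name", "external_id"])
print(mapping)   # {'Name': 'name', 'External ID': 'external_id'}
print(unmapped)  # []
```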
Test the Mapping
Before saving, test the mapping against your sample data to confirm that fields resolve correctly.
Select Test in the top right corner of the Mapping step. The Preview Mapping modal opens; select Run Test to run the mapping against all records in your sample data.
The modal has three sections:
Record list (left) — A scrollable list of records from your sample data. Select any record to view its values in the center panel.
Sample data (center) — The field values for the selected record, showing source values and their resolved Fountain targets. The top of this panel displays a summary of total lines processed, errors, and warnings.
Logs (right) — A per-field breakdown showing whether each mapping passed or failed, along with the resolved value.
If you need to adjust a mapping after reviewing the results, close the modal, update the mapping, and select Test again. To re-run the test without closing the modal, select Retry.
Save the Pipeline
When you are satisfied with the mapping, select Save in the top right corner of the Mapping step.
The pipeline is saved and added to the Data Pipelines list with a status of Active. A confirmation message appears at the top of the page.
Import Data
There are three ways to import data using a pipeline: directly from an entity settings page for manual imports, automatically via SFTP, or in real time via webhook when an external system sends data to Fountain.
Import From an Entity Settings Page
Settings pages for supported entity types (such as Settings > Jobs or Settings > Locations) include an Import button for uploading a CSV file and processing it through a pipeline immediately.
To import data from an entity settings page:
Navigate to Settings, then select the entity you want to import data into (for example, Jobs).
Select Import.
In the Upload a CSV file with your [entity] data modal, open the Import configuration dropdown and select the pipeline to use.
Optionally, select Download CSV template to download a correctly formatted template for the selected pipeline (a hypothetical example is sketched after these steps).
Upload your CSV file by dragging it into the upload area, or select browse to locate it on your computer (CSV file, 10 MB max).
The Preview Mapping modal opens automatically and displays a preview of the resolved data against the selected pipeline's mapping.
Review the results. Warnings appear for any rows where a field may not resolve as expected. Errors indicate rows that cannot be imported.
Select Import [entity] to execute the import.
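A downloaded template is a plain CSV whose headers mirror the pipeline's mapped fields. The headers and values below are hypothetical; use the template download in the modal to get the real ones for your pipeline:

```python
import csv, io

# Hypothetical template for a Jobs pipeline. Actual headers depend on the
# pipeline's mapping; download the template from the import modal.
template = """uuid,name,external_id
,Warehouse Associate,JOB-1001
f47ac10b-58cc-4372-a567-0e02b2c3d479,Delivery Driver,JOB-1002
"""
for row in csv.DictReader(io.StringIO(template)):
    # A blank uuid means Fountain falls back to name matching (see the
    # matching section above), which is less reliable.
    print(row)
```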
Standard Configurations Available Out of the Box
Fountain provides pre-built standard pipelines for each entity type. These work for most standard-attribute imports with no setup required. If you need to import custom attributes or apply specific data transformations, select a custom pipeline from the dropdown or create one in Data Pipelines.
After the import completes, you are returned to the entity list. Import results are available in the pipeline logs.
CSV Import Overwrites All Attribute Values
When importing to update existing records, any attribute not included in your CSV is cleared on the existing record, even if it currently holds a value. Ensure your CSV includes values for all attributes you want to preserve, not just the ones you are updating. Contact your Fountain representative before running bulk updates on existing records.
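In other words, an update behaves roughly like a full replace of the record's attributes rather than a merge. The sketch below contrasts the two behaviors, using hypothetical attribute names:

```python
# Merge vs. replace semantics, with hypothetical attributes.
existing = {"name": "Alex Rivera", "department": "Logistics", "grade": "G5"}
csv_row  = {"name": "Alex Rivera", "grade": "G6"}  # no "department" column

merged   = {**existing, **csv_row}  # a merge would keep department
replaced = dict(csv_row)            # the import clears the omitted attribute

print(merged)    # {'name': 'Alex Rivera', 'department': 'Logistics', 'grade': 'G6'}
print(replaced)  # {'name': 'Alex Rivera', 'grade': 'G6'}
```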
Import via SFTP
SFTP Import Requires Configuration
SFTP-based imports require a dedicated folder provisioned by a Fountain representative. Contact your Fountain representative to set up SFTP access for your account.
Connect via an External Webhook
If your external system supports outbound automations (for example, triggering a notification when a worker record is created or updated in your HRIS), you can route that data directly into a Fountain Data Pipeline without SFTP access or engineering assistance.
To set this up, create an automation in Fountain's Automation Center using Webhook as the source. Fountain generates a webhook URL that you provide to your external system. When the configured event occurs in that system, it sends a JSON payload to the URL, and Fountain processes it through the selected pipeline.
This approach is entirely self-serve, as long as your external system supports outbound event notifications.
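From the external system's side, the integration is a plain HTTPS POST of a JSON payload to the generated URL. The sketch below uses Python's standard library with a placeholder URL and an illustrative payload shape; the actual payload must contain the fields your pipeline's mapping expects:

```python
import json
import urllib.request

# Placeholder URL: use the webhook URL Fountain generates for your
# automation. The payload shape here is illustrative only.
url = "https://example.fountain.com/webhooks/your-generated-id"
payload = {"uuid": "a1b2c3", "name": "Alex Rivera", "department": "Logistics"}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # a 2xx status indicates the payload was accepted
```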
View Pipeline Logs
The Logs page shows the execution history for all pipeline runs. Select Explore Logs from the Data Pipelines page to access it.
Each log entry displays:
Direction — The direction of the run (currently Import; outbound is coming soon)
File Name — The name of the file that was processed
Data Pipeline — The pipeline used to process the file
Status — Whether the run succeeded or failed
Last Attempted At — The date and time of the run in local time
Select a log entry to view the full execution detail, including row-level statistics: total rows processed, success count, failure count, create count, and update count.
Select Copy payload to copy the raw execution log to your clipboard.