Passing Parameters from Triggers to Pipelines in Azure Data Factory: The Complete Guide
Imagine you have a delivery driver (your pipeline) who shows up every morning to deliver packages. But nobody tells him WHERE to deliver them. He just drives around aimlessly. That is a pipeline without trigger parameters.
Now imagine giving the driver a delivery slip every morning that says “deliver to 123 Main Street, package contains electronics, priority: urgent.” THAT is what trigger parameters do — they tell your pipeline exactly what to process, which time range the data covers, and which file just arrived.
In my previous post on triggers, I covered the three trigger types. This post goes deeper into the part most tutorials skip: how triggers actually pass information to pipelines, and how pipelines use that information to process the right data.
Table of Contents
- Why Trigger Parameters Matter
- The Parameter Flow: Trigger to Pipeline to Activity
- Schedule Trigger Parameters
- Tumbling Window Trigger Parameters
- Storage Event Trigger Parameters
- Step-by-Step: Wiring Trigger Parameters to Pipeline
- Using Trigger Parameters in Pipeline Activities
- System Variables Available from Triggers
- Real-World Scenarios
- Common Mistakes
- Interview Questions
- Wrapping Up
Why Trigger Parameters Matter
Think of it like ordering food delivery. The restaurant (trigger) needs to tell the delivery driver (pipeline) three things:
- What to deliver (which file, which data range)
- Where it is coming from (which folder, which time window)
- When it was prepared (timestamp for partitioning)
Without this information, the pipeline is blind. It either processes everything (wasteful) or nothing (useless).
Without trigger parameters:
Trigger fires at 2 AM --> Pipeline runs
Pipeline: "Okay, I'm running... but what data should I process?
All of it? Just today's? Which folder? I have no idea."
With trigger parameters:
Trigger fires at 2 AM --> passes windowStart="2026-04-10T00:00:00Z",
windowEnd="2026-04-11T00:00:00Z"
Pipeline: "Got it. I'll process data from April 10 only."
The Parameter Flow: Trigger to Pipeline to Activity
The flow has three levels, like passing a baton in a relay race:
TRIGGER (generates information)
|
|-- "Here's the window start time and file name"
|
v
PIPELINE PARAMETERS (receives and holds the information)
|
|-- "I'll store these values so my activities can use them"
|
v
ACTIVITY (uses the information)
|
|-- "I'll use the window start time in my SQL WHERE clause"
|-- "I'll use the file name in my Copy source"
Think of it like a relay race: the trigger is runner #1 who passes the baton to the pipeline (runner #2), who passes it to each activity (runner #3). The baton is the parameter value.
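The relay can be sketched as plain Python — an illustrative model of the baton pass, not ADF code. The dict keys simply mirror the parameter names used throughout this post:

```python
# Runner 1 -- the trigger generates runtime information
trigger_outputs = {
    "windowStartTime": "2026-04-10T14:00:00Z",
    "windowEndTime": "2026-04-10T15:00:00Z",
}

# Runner 2 -- the trigger configuration maps trigger outputs
# into pipeline parameters (what you set up in the trigger UI)
pipeline_parameters = {
    "windowStart": trigger_outputs["windowStartTime"],
    "windowEnd": trigger_outputs["windowEndTime"],
}

# Runner 3 -- an activity reads only the pipeline parameters,
# never the trigger directly
where_clause = f"order_date >= '{pipeline_parameters['windowStart']}'"
print(where_clause)
```

The key design point: activities depend only on pipeline parameters, so the same pipeline works whether the values came from a trigger or were typed in by hand during Debug.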
Schedule Trigger Parameters
What a Schedule Trigger Can Pass
A Schedule trigger is like an alarm clock — it goes off at a set time. It does not generate dynamic information about data or files, but it CAN pass static values you configure:
Schedule Trigger: TR_Daily_2AM
Pipeline Parameters:
Environment = "production"
SourceSchema = "SalesLT"
LoadType = "full"
Available System Variables
Inside the pipeline, you can access trigger metadata:
@trigger().name --> "TR_Daily_2AM"
@trigger().startTime --> "2026-04-10T02:00:00Z"
@trigger().scheduledTime --> "2026-04-10T02:00:00Z"
Real-life analogy: A Schedule trigger is like your morning alarm. It wakes you up at 7 AM, but it does not tell you what meetings you have today. You have to figure that out yourself (or hardcode the info).
When to Use Schedule Trigger Parameters
- Passing environment names (dev/uat/prod) to control pipeline behavior
- Passing schema names or table filters
- Any static configuration that differs per trigger
Tumbling Window Trigger Parameters
What Makes Tumbling Window Special
This is the most parameter-rich trigger. It is like a conveyor belt at a factory — each item (time window) gets its own processing slot, and the trigger tells the pipeline exactly which slot to process.
Tumbling Window Trigger: TR_Hourly_Window
Window: [2026-04-10T14:00:00Z, 2026-04-10T15:00:00Z]
Passes to pipeline:
windowStartTime = "2026-04-10T14:00:00Z"
windowEndTime = "2026-04-10T15:00:00Z"
Available System Variables
@trigger().outputs.windowStartTime --> "2026-04-10T14:00:00Z"
@trigger().outputs.windowEndTime --> "2026-04-10T15:00:00Z"
@trigger().startTime --> when the trigger actually fired
@trigger().scheduledTime --> when it was supposed to fire
How to Wire Tumbling Window Parameters
Step 1: Create pipeline parameters
Pipeline Parameters:
windowStart (String)
windowEnd (String)
Step 2: Map in trigger configuration
| Pipeline Parameter | Value |
|---|---|
| windowStart | @trigger().outputs.windowStartTime |
| windowEnd | @trigger().outputs.windowEndTime |
Step 3: Use in a Copy activity source query
@concat('SELECT * FROM orders WHERE order_date >= ''',
pipeline().parameters.windowStart,
''' AND order_date < ''',
pipeline().parameters.windowEnd, '''')
This generates:
SELECT * FROM orders
WHERE order_date >= '2026-04-10T14:00:00Z'
AND order_date < '2026-04-10T15:00:00Z'
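To make the quote handling visible (ADF escapes a literal single quote inside a string by doubling it, which is why the expression is full of `''`), here is a Python equivalent of the same string assembly. This is a sketch of what the expression evaluates to, not ADF code:

```python
# Same concatenation the @concat expression performs at runtime
window_start = "2026-04-10T14:00:00Z"
window_end = "2026-04-10T15:00:00Z"

query = (
    "SELECT * FROM orders WHERE order_date >= '"
    + window_start
    + "' AND order_date < '"
    + window_end
    + "'"
)
print(query)
# SELECT * FROM orders WHERE order_date >= '2026-04-10T14:00:00Z' AND order_date < '2026-04-10T15:00:00Z'
```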
Real-life analogy: A Tumbling Window trigger is like a shift schedule at a factory. The morning shift (6 AM – 2 PM) processes morning orders. The afternoon shift (2 PM – 10 PM) processes afternoon orders. Each shift knows EXACTLY which time range is their responsibility. Nobody processes the same order twice, and no order is missed.
Building Date-Partitioned Output with Window Parameters
Use the window time to create organized output folders:
@concat('orders/',
formatDateTime(pipeline().parameters.windowStart, 'yyyy'), '/',
formatDateTime(pipeline().parameters.windowStart, 'MM'), '/',
formatDateTime(pipeline().parameters.windowStart, 'dd'), '/',
formatDateTime(pipeline().parameters.windowStart, 'HH'))
Produces: orders/2026/04/10/14/ — one folder per hour window.
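The same folder logic in Python, for intuition — `strftime` plays the role of ADF's `formatDateTime` here (a sketch, not the ADF runtime):

```python
from datetime import datetime, timezone

# The window start that the tumbling window trigger passed in
window_start = datetime(2026, 4, 10, 14, 0, tzinfo=timezone.utc)

# yyyy/MM/dd/HH in ADF maps to %Y/%m/%d/%H in strftime
folder = "orders/" + window_start.strftime("%Y/%m/%d/%H")
print(folder)  # orders/2026/04/10/14
```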
Storage Event Trigger Parameters
What a Storage Event Trigger Passes
When a file lands in your storage, the trigger automatically tells the pipeline WHICH file arrived. This is like a doorbell camera — it does not just tell you “someone is at the door,” it tells you WHO is at the door.
Storage Event Trigger: TR_NewFile
Event: File "daily_sales_20260410.csv" created in "input/" folder
Passes to pipeline:
folderPath = "input/"
fileName = "daily_sales_20260410.csv"
Available System Variables
@trigger().outputs.body.fileName --> "daily_sales_20260410.csv"
@trigger().outputs.body.folderPath --> "input/"
@trigger().name --> "TR_NewFile"
@trigger().startTime --> when the trigger fired
How to Wire Storage Event Parameters
Step 1: Create pipeline parameters
Pipeline Parameters:
TriggerFileName (String)
TriggerFolderPath (String)
Step 2: Map in trigger configuration
| Pipeline Parameter | Value |
|---|---|
| TriggerFileName | @trigger().outputs.body.fileName |
| TriggerFolderPath | @trigger().outputs.body.folderPath |
Step 3: Use in Copy activity
Source dataset FolderName: @pipeline().parameters.TriggerFolderPath
Source dataset FileName: @pipeline().parameters.TriggerFileName
Real-life analogy: A Storage Event trigger is like a mailroom clerk. When a package arrives, the clerk does not just say “a package came.” They say “a package from Amazon, tracking #12345, arrived at 3:15 PM, it’s in bin B7.” Your pipeline knows exactly which package to process.
Real-World Flow
1. Vendor drops "daily_sales_20260410.csv" into input/ folder
2. Storage event trigger fires
3. Trigger passes: fileName="daily_sales_20260410.csv", folderPath="input/"
4. Pipeline receives these as parameters
5. Copy activity reads ONLY that specific file
6. Loads it into the Sales staging table
7. Moves processed file to archive/ folder
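Steps 3–7 of that flow can be sketched with a hypothetical helper that derives the exact blob to read and where to archive it from the two trigger values. The function name and archive layout are illustrative assumptions, not part of ADF:

```python
from datetime import datetime, timezone

def plan_file_run(folder_path: str, file_name: str) -> dict:
    """Derive source and archive paths from the trigger's outputs.

    Illustrative only: in ADF this logic lives in dataset
    expressions, not in a function like this.
    """
    today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    return {
        "source": f"{folder_path.rstrip('/')}/{file_name}",
        "archive": f"archive/{today}/{file_name}",
    }

plan = plan_file_run("input/", "daily_sales_20260410.csv")
print(plan["source"])  # input/daily_sales_20260410.csv
```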
Step-by-Step: Wiring Trigger Parameters to Pipeline
Let me walk through the complete wiring for a Tumbling Window trigger (the most common production pattern):
Step 1: Add Pipeline Parameters
- Open your pipeline in ADF/Synapse Studio
- Click on the blank canvas (not on any activity)
- At the bottom panel, click the Parameters tab
- Click + New and add:
  - Name: windowStart, Type: String, Default: (empty)
  - Name: windowEnd, Type: String, Default: (empty)
Step 2: Create the Trigger
- Click Add trigger > New/Edit > + New
- Type: Tumbling Window
- Name: TR_Hourly_Orders
- Recurrence: Every 1 Hour
- Start: 2026-04-10T00:00:00Z
- Max concurrency: 1
- Click Next
Step 3: Map Trigger Outputs to Pipeline Parameters
On the “Trigger Run Parameters” screen, you see your pipeline parameters listed:
| Parameter | Value |
|---|---|
| windowStart | @trigger().outputs.windowStartTime |
| windowEnd | @trigger().outputs.windowEndTime |
Type these values into each parameter field.
Step 4: Click OK and Publish
- Click OK to save the trigger
- Click Publish all to activate
Step 5: Use Parameters in Activities
In your Copy activity source query:
@concat('SELECT * FROM orders WHERE order_date >= ''',
pipeline().parameters.windowStart,
''' AND order_date < ''',
pipeline().parameters.windowEnd, '''')
Using Trigger Parameters in Pipeline Activities
In Copy Activity Source (Dynamic Query)
@concat('SELECT * FROM ', pipeline().parameters.TableName,
' WHERE modified_date >= ''',
pipeline().parameters.windowStart, '''')
In Copy Activity Sink (Dynamic Folder Path)
@concat('output/',
formatDateTime(pipeline().parameters.windowStart, 'yyyy-MM-dd'),
'/', pipeline().parameters.TableName)
In Stored Procedure (Audit Logging)
@pipeline_run_id: @pipeline().RunId
@trigger_name: @trigger().name
@window_start: @pipeline().parameters.windowStart
@window_end: @pipeline().parameters.windowEnd
In Set Variable or If Condition
@equals(pipeline().parameters.LoadType, 'full')
Use in If Condition to branch: full load path vs incremental path.
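The branch logic the If Condition expresses is just an equality check on the parameter. A minimal Python model (in ADF the two branches are activity groups, not return values):

```python
def choose_path(load_type: str) -> str:
    # Mirrors @equals(pipeline().parameters.LoadType, 'full')
    return "full_load_path" if load_type == "full" else "incremental_path"

print(choose_path("full"))         # full_load_path
print(choose_path("incremental"))  # incremental_path
```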
System Variables Available from Triggers
Here is the complete reference:
| Variable | Available From | Returns |
|---|---|---|
| @trigger().name | All triggers | Trigger name |
| @trigger().startTime | All triggers | When the trigger actually fired |
| @trigger().scheduledTime | All triggers | When it was scheduled to fire |
| @trigger().outputs.windowStartTime | Tumbling Window only | Window start |
| @trigger().outputs.windowEndTime | Tumbling Window only | Window end |
| @trigger().outputs.body.fileName | Storage Event only | Name of the file that fired the trigger |
| @trigger().outputs.body.folderPath | Storage Event only | Folder path of the file |
| @pipeline().RunId | Always available | Unique pipeline run GUID |
| @pipeline().Pipeline | Always available | Pipeline name |
| @pipeline().TriggerType | Always available | “ScheduleTrigger”, “TumblingWindowTrigger”, etc. |
| @pipeline().TriggeredByPipelineName | Execute Pipeline only | Parent pipeline name |
Real-World Scenarios
Scenario 1: Hourly Incremental Load with Tumbling Window
Trigger: TR_Hourly (Tumbling Window, every 1 hour)
Passes: windowStart, windowEnd
Pipeline: PL_Hourly_Incremental
Copy Activity Source Query:
SELECT * FROM transactions
WHERE created_at >= '@{pipeline().parameters.windowStart}'
AND created_at < '@{pipeline().parameters.windowEnd}'
Copy Activity Sink Folder:
transactions/year=@{formatDateTime(pipeline().parameters.windowStart,'yyyy')}/
month=@{formatDateTime(pipeline().parameters.windowStart,'MM')}/
day=@{formatDateTime(pipeline().parameters.windowStart,'dd')}/
hour=@{formatDateTime(pipeline().parameters.windowStart,'HH')}/
Like a newspaper delivery route — each hour has its own delivery area (data range) and its own mailbox (output folder).
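The Hive-style sink folder from Scenario 1 can be sketched in Python — the `year=`/`month=` key-value segments are what make downstream engines treat each folder as a partition (illustrative sketch, not ADF code):

```python
from datetime import datetime, timezone

window_start = datetime(2026, 4, 10, 14, 0, tzinfo=timezone.utc)

# Mirrors the four formatDateTime calls in the sink folder expression
folder = (
    "transactions/"
    f"year={window_start:%Y}/"
    f"month={window_start:%m}/"
    f"day={window_start:%d}/"
    f"hour={window_start:%H}/"
)
print(folder)  # transactions/year=2026/month=04/day=10/hour=14/
```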
Scenario 2: File-Driven Ingestion with Storage Event
Trigger: TR_VendorFile (Storage Event, on blob created in input/)
Passes: fileName, folderPath
Pipeline: PL_Process_Vendor_File
Copy Activity Source:
Read @{pipeline().parameters.TriggerFolderPath}/@{pipeline().parameters.TriggerFileName}
Copy Activity Sink:
Write to processed/@{formatDateTime(utcnow(),'yyyy-MM-dd')}/@{pipeline().parameters.TriggerFileName}
Delete Activity:
Remove original file from input/ after successful processing
Like a mailroom that opens each package, processes the contents, and files it in the right cabinet based on the date it arrived.
Scenario 3: Multi-Environment Pipeline with Schedule Trigger
Trigger (Dev): TR_Daily_Dev
Passes: Environment="dev", Schema="dev_schema"
Trigger (Prod): TR_Daily_Prod
Passes: Environment="prod", Schema="prod_schema"
Same Pipeline: PL_Daily_ETL
Uses @pipeline().parameters.Environment to:
- Choose which Key Vault to read secrets from
- Choose which ADLS container to write to
- Log the environment in audit table
Like the same recipe being cooked in two different kitchens — the steps are identical, but the ingredients (connections, containers) come from different pantries.
Common Mistakes
1. Forgetting to Create Pipeline Parameters First
You create a trigger and try to map parameters, but the pipeline has no parameters to map TO.
Fix: Always create pipeline parameters BEFORE creating the trigger.
2. Using @trigger() Directly in Activities
WRONG: Copy activity source uses @trigger().outputs.windowStartTime directly
This works for simple cases but breaks when you run the pipeline manually (no trigger context).
Fix: Map trigger outputs to pipeline parameters. Use @pipeline().parameters.windowStart in activities. When debugging manually, enter test values for the parameters.
3. Wrong Trigger Output Path for Storage Events
WRONG: @trigger().outputs.fileName
RIGHT: @trigger().outputs.body.fileName
Storage event outputs are nested inside .body. Tumbling window outputs are directly on .outputs.
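The shapes of the two payloads, modeled as dicts, make the difference obvious. These are simplified sketches — the real event payloads carry more fields:

```python
# Tumbling Window: values sit directly on outputs
tumbling_outputs = {
    "windowStartTime": "2026-04-10T14:00:00Z",
    "windowEndTime": "2026-04-10T15:00:00Z",
}

# Storage Event: values are nested one level down, under body
storage_event_outputs = {
    "body": {
        "fileName": "daily_sales_20260410.csv",
        "folderPath": "input/",
    }
}

# @trigger().outputs.windowStartTime
print(tumbling_outputs["windowStartTime"])
# @trigger().outputs.body.fileName  (note the extra .body level)
print(storage_event_outputs["body"]["fileName"])
```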
4. Not Publishing After Creating Triggers
Triggers only activate after you Publish. Creating a trigger without publishing means it exists in draft but never fires.
5. Tumbling Window Backfill Surprise
Setting a start date in the past creates windows for every interval since that date. If you set start = January 1 with hourly windows, that is 2,400+ pipeline runs queued immediately.
Fix: Set the start date to today or the recent past. Use max concurrency = 1 for controlled backfill.
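A quick back-of-envelope check of that backfill count — hourly windows from January 1 to mid-April of the same (non-leap) year, assuming illustrative dates:

```python
from datetime import datetime, timezone

# Trigger start date set in the past vs. the moment you publish
start = datetime(2026, 1, 1, tzinfo=timezone.utc)
now = datetime(2026, 4, 11, tzinfo=timezone.utc)

# One tumbling window per elapsed hour is queued immediately
windows = int((now - start).total_seconds() // 3600)
print(windows)  # 2400
```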
Interview Questions
Q: How do you pass information from a trigger to a pipeline?
A: Create pipeline parameters first, then map trigger system variables to those parameters in the trigger configuration. Activities use @pipeline().parameters.ParamName to access the values. This keeps the pipeline testable — you can pass values manually during Debug without a trigger.

Q: What is the difference between @trigger().outputs for Tumbling Window vs Storage Event?
A: Tumbling Window provides windowStartTime and windowEndTime directly on outputs. Storage Event provides fileName and folderPath nested inside outputs.body. The paths are different: @trigger().outputs.windowStartTime vs @trigger().outputs.body.fileName.

Q: Why should you use pipeline parameters instead of @trigger() directly in activities?
A: Because @trigger() only has values when the pipeline is started by a trigger. If you run the pipeline manually (Debug or Trigger Now), @trigger() returns null and the pipeline fails. Pipeline parameters can receive values from triggers OR from manual input, making the pipeline testable in both scenarios.

Q: How does a Tumbling Window trigger help with hourly data processing?
A: It passes windowStartTime and windowEndTime to the pipeline, which uses them in the source query WHERE clause to extract only that hour’s data. Each window is processed independently and exactly once. If a window fails, it is retried without affecting other windows.
Wrapping Up
Trigger parameters are the communication channel between “when to run” (trigger) and “what to process” (pipeline). Without them, your pipeline is a delivery driver without a delivery slip — running on time but with no idea what to deliver.
The pattern is always the same: create pipeline parameters, map trigger outputs to those parameters, use @pipeline().parameters in your activities. This works for all three trigger types and keeps your pipelines testable with manual runs.
Related posts:
- ADF Triggers: Schedule, Tumbling Window, Event
- ADF Expressions Guide
- Metadata-Driven Pipeline in ADF
- Incremental Data Loading
Naveen Vuppula is a Senior Data Engineering Consultant and app developer based in Ontario, Canada. He writes about Python, SQL, AWS, Azure, and everything data engineering at DriveDataScience.com.