ImpulseSync™ User Manual

Automating Jobs with Pipelines

A pipeline is a named sequence of steps, and each step contains one or more jobs. Pipelines are represented as JSON objects like the following:

{
  "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
  "name": "updated name",
  "steps": [
    {
      "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
      "name": "Step 1",
      "order": 1,
      "jobs": [
        "6c454a83-a4ca-4c8f-acb5-ea10521458aa"
      ]
    },
    {
      "name": "Step 0",
      "order": 0,
      "jobs": [
        "87f0e362-82a4-11eb-8dcd-0242ac130003"
      ]
    }
  ]
}

Creating a Pipeline

To create a pipeline, you will first need multiple jobs. After you have created two or more jobs, you can build a pipeline using the pipeliner API.

Create an Empty Pipeline

The best way to create a pipeline is to build it in iterations. Start by creating an empty pipeline using the /pipelines endpoint.

curl --request POST '{{impulse-protocol}}://{{impulse-domain}}:{{impulse-port}}/private/pipeliner/pipelines'

This will create a pipeline without any steps. The response will include the pipelineId; save this value, as every later request needs it.
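
When scripting against the API, you can capture the pipelineId straight from the response body. Below is a minimal sketch in Python (rather than cURL) for illustration; the response shape is assumed to match the pipeline objects shown elsewhere on this page:

```python
import json

# Hypothetical response body from POST /private/pipeliner/pipelines,
# shaped like the pipeline objects shown elsewhere on this page.
response_body = """
{
  "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
  "name": "0c08370b-bf1d-11ec-b314-0242ac150009",
  "steps": []
}
"""

pipeline = json.loads(response_body)

# Save this value; every later request targets /pipelines/{{pipelineID}}.
pipeline_id = pipeline["pipelineId"]
```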

Add a Job to a Pipeline

To add jobs to a pipeline, you will need the following:

  • Pipeline ID

  • Job ID

  • Step order

Using all three of these values, you can add a job to the pipeline using the /pipelines/{{pipelineID}}/job endpoint. This example adds three jobs to the pipeline: two at step 0 and one at step 1.


curl --request POST '{{impulse-protocol}}://{{impulse-domain}}:{{impulse-port}}/private/pipeliner/pipelines/{{pipelineID}}/job' \
--data-raw '{
    "jobId": "87f0e362-82a4-11eb-8dcd-0242ac130003",
    "step": 0
}'
curl --request POST '{{impulse-protocol}}://{{impulse-domain}}:{{impulse-port}}/private/pipeliner/pipelines/{{pipelineID}}/job' \
--data-raw '{
    "jobId": "3a72ccda-c383-11ea-87d0-0242ac130003",
    "step": 0
}'
curl --request POST '{{impulse-protocol}}://{{impulse-domain}}:{{impulse-port}}/private/pipeliner/pipelines/{{pipelineID}}/job' \
--data-raw '{
    "jobId": "33791608-58fc-4aef-a3de-42ba71ce0e44",
    "step": 1
}'

Each job is added to the step matching the provided step order. When a pipeline runs, its steps run in their configured order, and all jobs within a step run in parallel. So in this example, jobs 87f... and 3a7... will run in parallel before job 337...

The response to this request will show the updated pipeline, allowing you to verify that it looks as expected.

{
  "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
  "name": "0c08370b-bf1d-11ec-b314-0242ac150009",
  "steps": [
    {
      "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
      "name": "Step 1",
      "order": 1,
      "jobs": [
        "33791608-58fc-4aef-a3de-42ba71ce0e44"
      ]
    },
    {
      "name": "Step 0",
      "order": 0,
      "jobs": [
        "3a72ccda-c383-11ea-87d0-0242ac130003",
        "87f0e362-82a4-11eb-8dcd-0242ac130003"
      ]
    }
  ]
}
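
The ordering rule can be sketched as a small helper: sort the steps by their order value, and treat each step's job list as one batch whose jobs run in parallel. This is an illustration of the semantics described above, not Impulse code:

```python
def execution_batches(pipeline):
    """Group job IDs into sequential batches: steps run in ascending
    'order', and all jobs within a step run in parallel."""
    steps = sorted(pipeline["steps"], key=lambda step: step["order"])
    return [step["jobs"] for step in steps]

# The pipeline built in this section, as returned by the API.
pipeline = {
    "steps": [
        {"name": "Step 1", "order": 1,
         "jobs": ["33791608-58fc-4aef-a3de-42ba71ce0e44"]},
        {"name": "Step 0", "order": 0,
         "jobs": ["3a72ccda-c383-11ea-87d0-0242ac130003",
                  "87f0e362-82a4-11eb-8dcd-0242ac130003"]},
    ],
}

# Step 0's two jobs form the first (parallel) batch; Step 1's job follows.
batches = execution_batches(pipeline)
```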

Removing a Job From a Pipeline

The same endpoint used to add a job to a pipeline step can also be used to remove one. The payload is identical for the DELETE and POST requests.

curl --location -g --request DELETE '{{impulse-protocol}}://{{impulse-domain}}:{{impulse-port}}/private/pipeliner/pipelines/{{pipelineID}}/job' \
--data-raw '{
    "jobId": "87f0e362-82a4-11eb-8dcd-0242ac130111",
    "step": 0
}'

The response to this request will show the updated pipeline, allowing you to verify that it looks as expected.

{
  "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
  "name": "0c08370b-bf1d-11ec-b314-0242ac150009",
  "steps": [
    {
      "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
      "name": "Step 1",
      "order": 1,
      "jobs": [
        "33791608-58fc-4aef-a3de-42ba71ce0e44"
      ]
    },
    {
      "name": "Step 0",
      "order": 0,
      "jobs": [
        "3a72ccda-c383-11ea-87d0-0242ac130003"
      ]
    }
  ]
}
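
Client-side, the effect of the DELETE request can be pictured as dropping one job ID from one step. The helper below is a hypothetical illustration of that behavior (the server does the real work); it uses the same two payload fields, jobId and step:

```python
def remove_job(pipeline, job_id, step_order):
    """Illustration of the DELETE /pipelines/{{pipelineID}}/job
    semantics: drop job_id from the step whose 'order' matches
    step_order. Not part of Impulse itself."""
    for step in pipeline["steps"]:
        if step["order"] == step_order and job_id in step["jobs"]:
            step["jobs"].remove(job_id)
    return pipeline

# The pipeline as it looked before the DELETE request above.
pipeline = {
    "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
    "steps": [
        {"name": "Step 1", "order": 1,
         "jobs": ["33791608-58fc-4aef-a3de-42ba71ce0e44"]},
        {"name": "Step 0", "order": 0,
         "jobs": ["3a72ccda-c383-11ea-87d0-0242ac130003",
                  "87f0e362-82a4-11eb-8dcd-0242ac130003"]},
    ],
}

updated = remove_job(pipeline, "87f0e362-82a4-11eb-8dcd-0242ac130003", 0)
# Step 0 now contains only the 3a72ccda-... job.
```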

Updating a Pipeline

To update the pipeline with a new name, or to add multiple jobs and steps in a single request, use the /pipelines/{{pipelineID}} endpoint.

First you will need to get the current pipeline.

curl --request GET '{{impulse-protocol}}://{{impulse-domain}}:{{impulse-port}}/private/pipeliner/pipelines/{{pipelineID}}'

The response will return the pipeline object. You can then edit this object to add more steps or jobs, or to update the name of the pipeline.

For example, taking the above response, you can change the name of the pipeline:

{
  "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
  "name": "my-headless-pipeline",
  "steps": [
    {
      "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
      "name": "Step 1",
      "order": 1,
      "jobs": [
        "33791608-58fc-4aef-a3de-42ba71ce0e44"
      ]
    },
    {
      "name": "Step 0",
      "order": 0,
      "jobs": [
        "3a72ccda-c383-11ea-87d0-0242ac130003"
      ]
    }
  ]
}

Once the object has been edited, you can send it as the payload of a PUT request to the same endpoint.

curl --request PUT '{{impulse-protocol}}://{{impulse-domain}}:{{impulse-port}}/private/pipeliner/pipelines/{{pipelineID}}' \
--data-raw '{
  "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
  "name": "my-headless-pipeline",
  "steps": [
    {
      "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
      "name": "Step 1",
      "order": 1,
      "jobs": [
        "33791608-58fc-4aef-a3de-42ba71ce0e44"
      ]
    },
    {
      "name": "Step 0",
      "order": 0,
      "jobs": [
        "3a72ccda-c383-11ea-87d0-0242ac130003"
      ]
    }
  ]
}'

As stated in the API documentation, you must include every value of the pipeline object in the request. Any value you omit will be reset to its default and likely lost.
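
A safe update pattern is therefore read-modify-write: fetch the full pipeline, change only what you need, and send the whole object back. A sketch of building such a payload (build_update_payload is a hypothetical helper, not an Impulse API):

```python
import copy
import json

def build_update_payload(pipeline, new_name):
    """Copy the entire pipeline object and change only the name, so no
    field is omitted from the PUT body (omitted fields are defaulted
    and likely lost)."""
    payload = copy.deepcopy(pipeline)
    payload["name"] = new_name
    return payload

# Pipeline object as fetched with GET /pipelines/{{pipelineID}}.
fetched = {
    "pipelineId": "0c08370b-bf1d-11ec-b314-0242ac150009",
    "name": "0c08370b-bf1d-11ec-b314-0242ac150009",
    "steps": [
        {"name": "Step 0", "order": 0,
         "jobs": ["3a72ccda-c383-11ea-87d0-0242ac130003"]},
    ],
}

payload = build_update_payload(fetched, "my-headless-pipeline")
body = json.dumps(payload)  # use as the --data-raw body of the PUT request
```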

Running a Pipeline

Like a job, a pipeline can be run. This is done using the pipeliner's /pipelines/{{pipelineID}}/start endpoint. The request starts a pipeline transaction, which is a combination of standard sync transactions.

curl --location -g --request POST '{{impulse-protocol}}://{{impulse-domain}}:{{impulse-port}}/private/pipeliner/pipelines/{{pipelineID}}/start'

The response will include the ID of the pipeline transaction. This can be used to track the status of the transaction.

Tracking Pipeline Transaction Status

To track the status of the pipeline transaction, use the /pipelines/{{pipelineID}}/transactions/{{transactionID}} endpoint.

curl --location -g --request GET '{{impulse-protocol}}://{{impulse-domain}}:{{impulse-port}}/private/pipeliner/pipelines/{{pipelineID}}/transactions/9bfe48f8-bf1f-11ec-bccf-0242ac150009'
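
The response will show the list of sync transactions started by the pipeline, so you can view the status of each individual sync transaction. It will also show whether the pipeline transaction is still active and the last step completed by the pipeline.

When polling from a script, that suggests a simple completion check. The field names below are assumptions for illustration only; consult the pipeliner API for the real response schema:

```python
def is_finished(pipeline_transaction):
    """Hypothetical check: treat the pipeline transaction as done once
    it is no longer active. The 'active' field name is an assumption,
    not the documented schema."""
    return not pipeline_transaction.get("active", False)

# Illustrative (assumed) response shape for a pipeline transaction.
pipeline_transaction = {
    "id": "9bfe48f8-bf1f-11ec-bccf-0242ac150009",
    "active": False,
    "syncTransactions": ["6c454a83-a4ca-4c8f-acb5-ea10521458aa"],
}
```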

Canceling a Pipeline Transaction

Similar to a sync transaction, you can cancel a pipeline transaction using the /pipelines/{{pipelineID}}/transactions/{{pipelineTransactionID}}/cancel endpoint.

curl --request POST '{{impulse-protocol}}://{{impulse-domain}}:{{impulse-port}}/private/pipeliner/pipelines/{{pipelineID}}/transactions/{{pipelineTransactionID}}/cancel'

This will cancel all sync transactions related to this pipeline transaction, stopping the processes from continuing further.

Last updated 2 years ago
