Sync Manager

The Sync Manager exposes API endpoints (REST, gRPC, etc.). These endpoints can be consumed by a Client UI, a Cron Job, or any other caller that needs to start a sync.
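As a rough illustration, a minimal HTTP entry point might look like the sketch below. This is not the actual Sync Manager API; the framework (Flask), the `/sync` route, the port, and the response shape are all assumptions made for the example.

```python
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/sync")
def start_sync():
    """Entry point a Client UI or Cron Job would call to start the Sync Process."""
    payload = request.get_json(silent=True)
    transaction_id = str(uuid.uuid4())
    # Here the Sync Manager would store the payload (Responsibility 1)
    # and produce the first Kafka message of the flow (Responsibility 2).
    return jsonify({"transaction_id": transaction_id,
                    "received_payload": payload is not None}), 202

if __name__ == "__main__":
    app.run(port=8080)
```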

Responsibility 1: Handle API calls and Kafka messages that start the Sync Process. If a payload exists, it is stored in Diesel using the pattern transactionID/key, where the transactionID is the parent path and the key is a predefined key that a downstream microservice will look for. For example, in the case of GitHub the Job detail alone is not enough for the Taxi; the Taxi also needs the payload from GitHub, so we store the actual payload at transactionID/github_payload for the Taxi to retrieve.
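The sketch below shows only the transactionID/key layout described above. The in-memory `PayloadStore` class is a stand-in for the real Diesel-backed store, and the helper name `store_github_payload` is hypothetical.

```python
import json
import uuid

class PayloadStore:
    """Stand-in for the Diesel-backed store; only the transactionID/key
    path layout matters for this sketch."""

    def __init__(self):
        self._data = {}

    def put(self, path: str, value: str) -> None:
        self._data[path] = value

    def get(self, path: str) -> str:
        return self._data[path]

def store_github_payload(store: PayloadStore, payload: dict) -> str:
    """Persist the raw GitHub payload at transactionID/github_payload so the
    Taxi can later fetch it by the predefined key."""
    transaction_id = str(uuid.uuid4())
    store.put(f"{transaction_id}/github_payload", json.dumps(payload))
    return transaction_id

# Later, the Taxi reads the payload back by the same path:
# payload = json.loads(store.get(f"{transaction_id}/github_payload"))
```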

Responsibility 2: This is NOT represented directly in the flow diagrams, but the Sync Manager is responsible for the entire process flow. Each microservice consumes a message produced by the Sync Manager, and the Sync Manager in turn consumes the messages produced by each microservice.

Example - when the Job Depot is done, it produces a job_depot_complete message, which the Sync Manager consumes; the Sync Manager then produces a new message for the Endpoint Depot, and so on.
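This orchestration loop could be sketched as below, assuming a Kafka client such as confluent-kafka for Python. The topic names, the routing table, and the `transaction_id` field in the message body are illustrative assumptions, not the project's actual contract.

```python
import json
from confluent_kafka import Consumer, Producer

# Hypothetical topic names: each "*_complete" message from a microservice
# maps to the next message the Sync Manager should produce.
NEXT_STEP = {
    "job_depot_complete": "endpoint_depot_start",
    "endpoint_depot_complete": "taxi_start",
}

producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "sync-manager",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(list(NEXT_STEP))

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    # Forward the transaction to the next microservice in the flow.
    next_topic = NEXT_STEP[msg.topic()]
    producer.produce(next_topic, key=event["transaction_id"], value=msg.value())
    producer.flush()
```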

Responsibility 3: Because each microservice consumes a message produced by the Sync Manager, the Sync Manager also starts a Watchdog process to validate the health of each produced message. The Watchdog starts a Heartbeat for each produced message and listens for the Heartbeat Pulses associated with those individual messages, so it can track the status of each process and, in case of errors, retry the failing message.
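A minimal sketch of that bookkeeping is shown below: one timestamp per produced message, refreshed on every Heartbeat Pulse, with anything past a timeout flagged for retry. The class name, the 30-second timeout, and the retry hook are assumptions for illustration only.

```python
import time

class Watchdog:
    """Tracks Heartbeat Pulses per produced message and flags messages
    whose heartbeat has gone silent so the Sync Manager can retry them."""

    def __init__(self, timeout_seconds: float = 30.0):
        self.timeout = timeout_seconds
        self.last_pulse = {}  # message_id -> timestamp of the last pulse

    def start_heartbeat(self, message_id: str) -> None:
        # Called when the Sync Manager produces a message.
        self.last_pulse[message_id] = time.monotonic()

    def record_pulse(self, message_id: str) -> None:
        # Called whenever a Heartbeat Pulse arrives for that message.
        self.last_pulse[message_id] = time.monotonic()

    def stale_messages(self) -> list[str]:
        # Messages with no pulse within the timeout; candidates for retry.
        now = time.monotonic()
        return [mid for mid, t in self.last_pulse.items()
                if now - t > self.timeout]
```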
