The Digibee platform was designed to ensure integrations run uninterrupted through high availability strategies. However, each deployment must be configured appropriately to prevent requests above the pipeline's capacity from making it unavailable, even for a brief moment.
This article covers how to proceed when your flow shows the error below in the Monitor => Completed Executions tab:
This pipeline execution was aborted and cannot be automatically retried because it was configured to disallow redeliveries.
(Execution response aborted with code 500)
Identifying scenarios and how to resolve them
This error mainly occurs when executions demand more memory and/or CPU than the deployment can provide. Keep in mind that even the Large deployment size has a limited capacity. To resolve the error, follow the steps below:
1. Open the Monitor => Metrics tab and filter by the period of the error in question. Note: the shorter the period selected, the more accurate the values on the graphs will be (article).
2. Check whether there has been a memory or CPU overflow. Note: the memory limit is 100%, while the CPU limit is 20% for Small, 40% for Medium, and 80% for Large (article).
3. Check whether the size of the input or output payloads has reached 5 MB or the limit configured in the trigger.
4. Open the Monitor => Completed Executions or Pipeline Logs tab and check the last steps executed before the interruption.
5. Open the pipeline in Build and check whether it fits into any of the following scenarios:
- Large file being loaded (e.g. over 7 MB or over the limit configured in the file connector)
- A large volume of data being loaded via a DB connector
- REST or SOAP calls that are costly to process
- Large number of connectors with external interaction (REST/SOAP/DB/Script/file)
- Loop blocks containing external connectors and running in parallel or sequence
- Parallel Execution followed by multiple external connectors
- An excessive number of logs, or very long log entries
6. Solution: once you have a diagnosis, apply one or more of the following strategies:
- Increase the pipeline deployment size
- Reduce the number of concurrent executions (the deployment's memory is shared by the simultaneous executions: in a Large deployment with 40 concurrent executions, all of the memory can be consumed by those 40 at once; lowering the setting to 30 divides the same memory among fewer executions, so each one gets a larger share; see the arithmetic sketch after this list)
- Reduce the amount of data at the input or output of the pipeline
- Control the data load in the pipeline by adding limits, filters and pagination (see the pagination sketch after this list)
- Increase the pipeline's execution frequency so that each run processes a smaller batch
- Remove or shorten logs with very large payloads
- Clear the memory of session management components with the DELETE operation
- Prefer sequential executions to parallel ones
- Split processing into multiple pipelines
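
To make the concurrency trade-off concrete, the sketch below walks through the arithmetic behind the "reduce concurrent executions" strategy. The 1024 MB total is a purely illustrative figure, not an actual Digibee deployment size; only the division of memory among simultaneous executions reflects the behavior described above.

```python
# Illustrative only: the memory total below is a made-up figure,
# not a real Digibee deployment size. The point is that the
# deployment's memory is shared by all simultaneous executions.
TOTAL_MEMORY_MB = 1024

def memory_per_execution(concurrent_executions: int) -> float:
    """Memory available to each execution when every slot is busy."""
    return TOTAL_MEMORY_MB / concurrent_executions

for concurrency in (40, 30, 20):
    print(f"{concurrency} concurrent executions -> "
          f"{memory_per_execution(concurrency):.1f} MB each")
# 40 -> 25.6 MB each, 30 -> 34.1 MB each, 20 -> 51.2 MB each
```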
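
Similarly, here is a minimal sketch of the limits-and-pagination strategy, reading a large table in fixed-size pages instead of with one unbounded query. It uses Python's built-in sqlite3 module so it runs on its own; the orders table and its columns are hypothetical, and in a real pipeline the same LIMIT/OFFSET idea would be applied in the DB connector's query rather than in Python.

```python
import sqlite3

PAGE_SIZE = 500  # cap on how many rows are held in memory at once

# Hypothetical in-memory table standing in for a large source database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO orders (payload) VALUES (?)",
    [(f"order-{i}",) for i in range(1750)],
)

def process(rows):
    """Placeholder for the per-page work a pipeline step would do."""
    print(f"processing {len(rows)} rows")

offset = 0
while True:
    # LIMIT/OFFSET keeps each read small instead of pulling the whole
    # table into memory with a single unbounded SELECT.
    rows = conn.execute(
        "SELECT id, payload FROM orders ORDER BY id LIMIT ? OFFSET ?",
        (PAGE_SIZE, offset),
    ).fetchall()
    if not rows:
        break
    process(rows)
    offset += PAGE_SIZE
conn.close()
```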
Optimizing the deployment
For an optimized and error-free deployment configuration, first publish the pipeline and monitor its performance to determine whether it needs refinement.
In addition to allocating computational resources appropriately (vertical scaling), the number of simultaneous executions (horizontal scaling) must be adjusted in a balanced manner, giving the processing a steady pace.
Other factors to consider are the tolerated queue size and the desired throughput rate, which is the relationship between executions per second (EPS) and pipeline response time.
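
As a back-of-the-envelope illustration of that relationship, the sketch below applies the queueing identity known as Little's law: the number of executions in flight equals the arrival rate multiplied by the response time. The numbers are illustrative, not Digibee defaults; the takeaway is that a slower pipeline needs more simultaneous executions to sustain the same EPS, and arrivals above the sustainable rate accumulate in the queue.

```python
def required_concurrency(target_eps: float, response_time_s: float) -> float:
    """Little's law: executions in flight = arrival rate * time per execution."""
    return target_eps * response_time_s

def max_eps(concurrent_executions: int, response_time_s: float) -> float:
    """Highest sustainable rate for a given number of execution slots."""
    return concurrent_executions / response_time_s

# Illustrative numbers, not Digibee defaults.
print(required_concurrency(20, 0.5))  # 10 slots sustain 20 EPS at 500 ms
print(max_eps(10, 2.0))               # the same 10 slots sustain only 5 EPS at 2 s
# Requests arriving faster than max_eps pile up, so the tolerated queue
# size bounds how long a burst above capacity can be absorbed.
```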