This article aims to demonstrate good practices in validating messages received by a consumer.
Using Events
When we have integrations that require process segregation, it is very common to use the concept of Events to delegate part of the process to another pipeline, which in this case we call the consumer.
In these situations, it is important to have a good definition of the data this consumer will receive and to establish a well-defined contract between the processes, so that a clear, concise and valid message is transmitted between them.
From this, let's look at the following example scenario:
We have an integration that handles purchase orders for e-commerce.
We need the product information contained in each order and, due to the large volume of orders, we decided to split the integration into one pipeline that fetches the orders from an API and another pipeline that receives the product data and sends it to a second API.
In the first pipeline, we call the Orders API which has the following JSON return:
{
  "status": 200,
  "body": {
    "numPedido": "001",
    "produtos": [
      {
        "codigo": "123",
        "nome": "Produto A"
      },
      {
        "codigo": "456",
        "nome": "Produto B"
      }
    ],
    "cliente": {
      "nome": "Cliente Exemplo",
      "cpf": "123.456.789-10",
      "email": "[email protected]",
      "enderecos": {
        "logradouro": "Rua de Exemplo",
        "numero": "10",
        "cep": "01010-10"
      }
    }
    ...
  },
  "headers": {
    "Cache-Control": "no-cache,must-revalidate,max-age=0,no-store,private",
    "Content-Type": "application/json;charset=UTF-8",
    "Date": "Wed, 01 Jul 2020 19:04:46 GMT",
    "Expires": "Thu, 01 Jan 1970 00:00:00 GMT",
    "Set-Cookie": "BrowserId=up7LXrwv46wesv5NEeq9ps_4AgB_",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "Transfer-Encoding": "chunked",
    "Vary": "Accept-Encoding",
    "X-B3-Sampled": "0",
    "X-B3-SpanId": "8c419a93ibsi00=d8e54316",
    "X-B3-TraceId": "8c419a938gbva9y54316",
    "X-Content-Type-Options": "nosniff",
    "X-ReadOnlyMode": "false",
    "X-XSS-Protection": "1; mode=block"
  }
}
From this return, we only need the "produtos" array:
{
  ...
  "produtos": [
    {
      "codigo": "123",
      "nome": "Produto A"
    },
    {
      "codigo": "456",
      "nome": "Produto B"
    }
  ],
  ...
}
So, does it make sense for us to send this complete JSON to the second pipeline? No!
By sending all the content, we pollute the message sent to the consumer with unnecessary data.
With a Transformer, a JSON Generator or the Event Connector itself, we can transform the message so that only the product information is sent to our consumer.
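For illustration, assuming the Transformer accepts a JOLT specification (adapt this to the connector you actually use), a shift spec like the sketch below would keep only the "produtos" array from the response shown above:

[
  {
    "operation": "shift",
    "spec": {
      "body": {
        "produtos": "produtos"
      }
    }
  }
]

The output of this step, and therefore the event body received by the consumer, would contain nothing but the "produtos" array shown earlier.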
This concern may seem unnecessary because even if we send all the data, our consumer will receive the necessary product data.
But let's imagine a JSON return with 300 lines. Would it make sense to send JSON like this where only 20 lines are useful product information? Definitely not!
Regardless of the volume of data, we must always transfer only the data necessary for the integration to work.
Two other very important points that often do not receive the attention they deserve are:
execution logs
integration maintainability
In a scenario in which an error has occurred in the integration, or the integration needs changes, unnecessary information can make it much harder to analyze the error or maintain the pipeline, especially if the person handling the incident is not the same person who developed the integration in the first place.
Validation of data and contracts
To validate the data and the contract defined between the processes, we can use the following in our consumer:
Validator Connector
Through a JSON Schema, we can define the exact JSON that must be received by our consumer for the integration to continue.
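As an illustration, a minimal JSON Schema for the contract in this example could look like the sketch below (field names come from the payload above; tighten or relax the constraints according to your actual contract):

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["produtos"],
  "properties": {
    "produtos": {
      "type": "array",
      "minItems": 1,
      "items": {
        "type": "object",
        "required": ["codigo", "nome"],
        "properties": {
          "codigo": { "type": "string" },
          "nome": { "type": "string" }
        },
        "additionalProperties": false
      }
    }
  },
  "additionalProperties": false
}

With a schema like this, a message in which a product is missing "nome", or that still carries the whole API response, fails validation and can be routed to an error flow.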
Choice Connector
Through JSONPath, we can verify certain information contained in the message received by the consumer. Based on the JSON mentioned above, we could have something like:
$.produtos
or
$.produtos[?(@.codigo && @.nome)]
If these conditions were not met, an error flow would be triggered and the process would be terminated as it did not meet the specified contract.
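For example, assuming the Choice condition requires the filter expression to return at least one item, a hypothetical message like the one below, where the only product has no "nome", would not satisfy the contract and would fall into the error flow:

{
  "produtos": [
    {
      "codigo": "789"
    }
  ]
}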