Scalable data processing
“I’ve heard you like scalability, so what about some scalability to scale your scalability?”
This architecture provides a highly scalable data processing pipeline that handles both authentication with the data provider and processing of the data into flexible datasets. For this specific setup, the consumption of the data itself (which is managed by a separate application) is out of scope.
Concretely, this setup authenticates over WebSockets, processes the data through an application layer, and writes it into either our ClickHouse or MongoDB datasets. Though I won't go into much detail, I will describe each of the three layers individually.
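To make the three layers concrete, here is a minimal sketch of how they might fit together. All names, message shapes, and the routing rule are assumptions for illustration only, not the actual implementation:

```python
import json

# Hypothetical sketch of the three layers; message fields ("s", "p", "t")
# and the routing rule are assumed, not taken from the real pipeline.

def authenticate(token: str) -> dict:
    """Auth layer: build the handshake payload a WebSocket provider might expect."""
    return {"action": "auth", "token": token}

def process(raw: str) -> dict:
    """Application layer: parse a raw WebSocket message into a flat record."""
    msg = json.loads(raw)
    return {"symbol": msg["s"], "price": float(msg["p"]), "ts": msg["t"]}

def route(record: dict) -> str:
    """Storage layer: decide which dataset a record belongs to."""
    # Assumption: numeric time-series records go to ClickHouse,
    # everything else to MongoDB.
    return "clickhouse" if isinstance(record.get("price"), float) else "mongodb"

if __name__ == "__main__":
    raw = '{"s": "BTC-USD", "p": "42000.5", "t": 1700000000}'
    record = process(raw)
    print(route(record))  # -> clickhouse
```

In a real deployment each layer would run as its own scalable component; the point here is only the separation of concerns between authentication, processing, and storage.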