
Architecture For A Lot Of Data Logging, DB Or File?

I'm working on a Python app that I want to scale to about 150 writes per second, spread across about 50 different sources. Is MongoDB a good candidate for this?

Solution 1:

I would say that MongoDB is a very good fit for collecting logs, because:

  1. MongoDB has very fast writes.
  2. Logs are usually not critical, so it's acceptable to lose a few of them in a server failure. That means you can run MongoDB without journaling to avoid the write overhead (see the sketch after this list).
  3. In addition, you can use sharding to increase write throughput, and at the same time move the oldest logs into a separate collection or out to the file system.
  4. You can easily export data from the database to JSON or CSV (for example, with the mongoexport tool).
  5. Once everything is in the database, you can query it to find exactly the log entries you need.
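
To make points 1 and 2 concrete, here is a minimal sketch of a log writer using pymongo. The database name (`logging`), collection name (`events`), and the `log_event` helper are all just illustrative assumptions; the key detail is the `w=0` write concern, which skips per-write acknowledgements for speed at the cost of possibly losing a few entries:

```python
from datetime import datetime, timezone

from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017")
db = client["logging"]

# w=0 ("unacknowledged") trades durability for write speed, matching
# the point above about tolerating the loss of some log entries.
events = db.get_collection("events", write_concern=WriteConcern(w=0))

def log_event(source: str, message: str) -> None:
    # One document per log entry; a UTC timestamp makes later
    # time-range queries straightforward.
    events.insert_one({
        "source": source,
        "message": message,
        "ts": datetime.now(timezone.utc),
    })

log_event("worker-01", "job started")
```

At 150 writes per second a single `insert_one` per event is usually fine, but batching entries with `insert_many` would cut round trips further if the rate grows.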

So, in my opinion, MongoDB is a perfect fit for something like logs. You don't need to manage lots of log files on the file system; MongoDB does that for you.
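
On point 5, here is a short sketch of querying those logs, again assuming the hypothetical `logging.events` collection from the writer sketch above. A compound index on `(source, ts)` keeps per-source, time-range lookups fast:

```python
from datetime import datetime, timedelta, timezone

import pymongo
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["logging"]["events"]

# Compound index so queries by source + time range avoid a full scan.
events.create_index([("source", pymongo.ASCENDING),
                     ("ts", pymongo.DESCENDING)])

# Fetch the last hour of entries from one source.
one_hour_ago = datetime.now(timezone.utc) - timedelta(hours=1)
for doc in events.find({"source": "worker-01",
                        "ts": {"$gte": one_hour_ago}}):
    print(doc["ts"], doc["message"])
```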

