Amazon Quantum Ledger Database (QLDB) is a fully managed ledger database that tracks all changes to user data and maintains a verifiable history of those changes over time. It was announced at AWS re:Invent 2018 and is now available in five AWS regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo).
You may ask why you would use QLDB (a ledger database) instead of a traditional database solution. We all know that it’s possible to create history tables for our fact tables and keep them up to date using triggers, stored procedures, or even application code (by writing changes of the main table to its history table). You could also point out that your database has write-ahead/redo logs, so it’s possible to track and verify all changes to your data as long as you keep those logs in your archive. On the other hand, it’s clear that this creates extra workload and complexity for the database administrator and the application developer, while still not guaranteeing that the data is intact and reliable. What if your DBA directly modifies the data and history tables after disabling the triggers, and even alters the archived logs? You may say it’s too hard, but you know it’s technically possible. In a legal dispute or a security compliance investigation, this might be enough to question the integrity of the data.
QLDB solves this problem with a cryptographically verifiable journal. When an application needs to modify data in a document, the changes are first written to the journal (similar to the write-ahead log concept). The difference is that each journal block is hashed (SHA-256) for verification and has a sequence number that specifies its address within the journal. QLDB calculates this hash value from the content of the journal block and the hash value of the previous block, so the journal blocks are chained by their hash values! QLDB users do not have direct access to the journal’s storage, and the journal is immutable. In any case, if someone modifies data, they would also need to update the journal blocks related to that data. This would produce a new hash for the modified block, and all subsequent blocks would then have different hash values than before.
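To make the chaining idea concrete, here is a minimal Python sketch of a hash-chained journal. This is not QLDB’s actual block format or API; the field names (`sequence`, `content`, `hash`) and the all-zero genesis hash are illustrative assumptions, but the mechanism is the same: each block’s hash covers both its content and the previous block’s hash, so altering any block invalidates every block after it.

```python
import hashlib
import json

GENESIS_HASH = "0" * 64  # illustrative placeholder for "no previous block"

def block_hash(content: dict, prev_hash: str) -> str:
    """SHA-256 over the block content plus the previous block's hash."""
    payload = json.dumps(content, sort_keys=True).encode() + prev_hash.encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(journal: list, content: dict) -> None:
    """Append a new block, chaining it to the hash of the last block."""
    prev_hash = journal[-1]["hash"] if journal else GENESIS_HASH
    journal.append({
        "sequence": len(journal),   # address of the block within the journal
        "content": content,
        "hash": block_hash(content, prev_hash),
    })

def verify(journal: list) -> bool:
    """Recompute every hash in order; tampering breaks the chain."""
    prev_hash = GENESIS_HASH
    for block in journal:
        if block["hash"] != block_hash(block["content"], prev_hash):
            return False
        prev_hash = block["hash"]
    return True

journal = []
append_block(journal, {"doc": "vehicle-1", "owner": "Alice"})
append_block(journal, {"doc": "vehicle-1", "owner": "Bob"})
print(verify(journal))   # the untouched chain verifies

# Silently edit the first block, as a malicious DBA might edit a history table:
journal[0]["content"]["owner"] = "Mallory"
print(verify(journal))   # verification now fails
```

Note that the tampered journal cannot be "fixed" without recomputing the hashes of every subsequent block, which is exactly why the chained hashes make modifications detectable.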