The motivation for developing the Real-Time Blockchain Banking System is to overcome the limitations of both blockchain-based and conventional (non-blockchain) bank transactions. The representative limitations of blockchain transactions are performance and privacy. The limitation of conventional bank transactions is that they are not settled in real time: although bank-to-bank transactions may appear to be processed in real time, each bank actually aggregates all transactions and processes them in batches at midnight.
The Real-Time Blockchain Banking System enables real-time transaction consensus between banks through the blockchain. Furthermore, it improves performance through batch processing on the blockchain and addresses the privacy issue by storing account information in each bank's own MySQL database.
From a high-level view, the Real-Time Blockchain Banking System consists of a blockchain and an application. The system adopts Hyperledger Fabric as its blockchain network. Hyperledger Fabric is a permissioned blockchain, meaning that only approved nodes (participants) can join the network. Because the number of participating banks is limited and countable, and unreliable participants are systematically prevented from enrolling, Hyperledger Fabric is well suited to a banking system. The peer nodes of each bank, which have acquired access rights to a Channel, store the transactions in blocks in their own ledgers.
The application consists of a “Pre-Processor” and a “Post-Processor.” The Pre-Processor handles remittance requests from customers. When a block is appended, the Post-Processor updates the account balances of the receivers of each bank transaction. For confidentiality and privacy, each bank maintains its own MySQL database to store its customers' account balances.
From a data-flow perspective, the Real-Time Blockchain Banking System is composed of four parts:
- Pre-Processing: The application first checks the database to validate whether an account has enough balance to process the transfer transaction. If validated, it updates the “Pre-Transfer” table.
- Batch Processing: Transactions are grouped by receiving bank. When a certain number of transactions have accumulated or a timeout is reached, the peer sends a batch transaction request to the blockchain.
- Blockchain Consensus: The sending bank and the receiving bank reach a consensus on the bank transactions. The receiving bank adds the transactions to the “Pre-Deposit” table. When both banks respond, the Orderer (the block issuer in Hyperledger Fabric) creates a block.
- Post-Processing: The sending bank and the receiving bank update the balance in their database.
The Pre-Processor consists of 1) a Classifier, 2) a Scheduler, and 3) pre-processors. The Classifier classifies transfer requests by the sender's account number and assigns each transaction to the queue appropriate for that account number. The queues in the Scheduler deliver the transactions to the pre-processors. A pre-processor then checks the account balance table and the pre-transfer table to validate that the sending account has enough balance to transfer, and adds the transaction to the pre-transfer table. The reason for partitioning transactions by account number is to structurally prevent concurrent access to the same account in the pre-transfer table, without relying on MySQL's locking mechanism.
The Post-Processor's role is to finally update the account balance table. However, the numerous transactions bundled into batches arrive at the Post-Processor all at once, and each tries to establish a database connection; since connections are limited, a large number of transactions could fail while waiting for connection resources. Moreover, since the Post-Processor occupies most of the connections, requests from the Pre-Processor could also fail. Therefore, the Database Connection Handler distributes and allocates connections to each processor: it creates a fixed number of goroutines for each module, each module sends requests to its goroutines in a round-robin manner, and the goroutines hold the connections to the database. This approach not only prevented requests from failing, but also improved the TPS by 33%.