Motivations

Existing distributed ledger technology (DLT) systems (e.g., blockchains) have a number of drawbacks that prevent them from being used as a generic, global platform for distributed ledgers. A wide range of services and applications can be improved or enabled by DLT, and these services and applications have widely varying quality of service (QoS) requirements. However, most existing DLT systems do not distinguish between different QoS requirements, resulting in significant performance issues such as poor scalability and high cost.

Challenges

It is not surprising that current DLT systems face these challenges. From the history of information technology (IT), including telephone networks, the traditional Internet, and cellular networks, we have seen similar challenges faced by other technologies in their early stages. For example, in the 1990s, as more and more applications were built on TCP/IP, the Internet often became congested, and network congestion kept the performance of some applications (e.g., video streaming) below what mass adoption required (YouTube was not launched until 2005). In addition, specialized networks were used for different applications, e.g., public switched telephone networks (PSTNs) for voice, IP networks for data, and cable networks for TV.

Most recently developed DLT systems focus on increasing transaction throughput to improve scalability, e.g., Lightning Network, Raiden Network, Sharding and Plasma, Cardano, EOS, and Zilliqa. However, the history of the traditional Internet has shown that increasing throughput alone cannot solve the congestion problem, because of heterogeneous QoS requirements across applications, the dynamics of applications, the dynamics of available resources, distributed networks without central coordination, and so on.

These situations apply equally to DLT systems. For example, different services and applications built on DLT have widely varying QoS requirements. While instant confirmation is desirable when you are buying a cup of coffee with cryptocurrency, confirmation latency can be tolerated when you are buying a house or running computation-intensive machine learning tasks. Moreover, in addition to TPS, other metrics should be considered, such as cost (e.g., the transaction fee, a.k.a. gas, in Ethereum and RAM costs in EOS). While a $1 transaction fee may be acceptable when buying a cup of coffee, it is undesirable to pay $1 for transferring a few bits (e.g., a temperature reading) in Internet of Things (IoT) applications with billions of devices, or $1 for creating an account in social media applications with billions of users. Furthermore, while privacy is the main concern in some applications, others may not care about privacy at all.
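To make this heterogeneity concrete, here is a minimal sketch (in Python, with hypothetical names and thresholds chosen only for illustration) of how per-application QoS requirements could be expressed as data and checked against what a ledger currently offers:

```python
from dataclasses import dataclass

@dataclass
class QoSProfile:
    max_confirmation_latency_s: float  # how long the application can wait for confirmation
    max_fee_usd: float                 # maximum acceptable cost per transaction
    privacy_required: bool             # whether transaction contents must stay confidential

# Widely varying requirements for the example applications discussed above.
coffee_payment = QoSProfile(max_confirmation_latency_s=30.0,    max_fee_usd=1.0,   privacy_required=False)
iot_reading    = QoSProfile(max_confirmation_latency_s=600.0,   max_fee_usd=0.001, privacy_required=False)
house_purchase = QoSProfile(max_confirmation_latency_s=86400.0, max_fee_usd=50.0,  privacy_required=True)

def admissible(profile: QoSProfile, est_latency_s: float, est_fee_usd: float) -> bool:
    """Return True if the ledger's current service level meets the application's QoS needs."""
    return (est_latency_s <= profile.max_confirmation_latency_s
            and est_fee_usd <= profile.max_fee_usd)

# A $1 fee with ~10 s confirmation is fine for buying coffee,
# but far too expensive for a single IoT sensor reading.
print(admissible(coffee_payment, est_latency_s=10.0, est_fee_usd=1.0))  # True
print(admissible(iot_reading, est_latency_s=10.0, est_fee_usd=1.0))     # False
```

Under this framing, a system that only reports throughput cannot tell whether the IoT workload above is actually being served acceptably.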

Increased Throughput

Recognizing the varied ways applications use the network allows the development of a more flexible network stack

Resource Efficiency

Virtualized units of resources are intelligently managed

Cheaper

Applications with higher resource usage may pay more for the greater demand they place on the network

Transparent

The network may be fully public and auditable while the integrity of the ledger is preserved

Trust

The network is operated by a collaboration of trusted third-party vendors rather than a central authority

Creative

A flexible QoS system supports developing completely new distributed application models.


Solution

Virtualization has been playing a central role in addressing such challenges across the IT landscape. Essentially, virtualization refers to technologies designed to provide an abstraction of underlying resources (e.g., hardware, compute, storage, network, etc.). By providing a logical view of resources, rather than a physical view, virtualization can significantly improve performance, facilitate system evolution, simplify system management and configuration, and reduce cost. Indeed, virtualization is one of the major enabling technologies behind recent advances in IT, including cloud computing, edge computing, and network function virtualization (NFV).
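As a rough illustration of what "a logical view of resources, rather than a physical view" means in code, the sketch below uses hypothetical class and method names (not any existing API) to hide which physical backend actually stores data behind a single virtual interface:

```python
from abc import ABC, abstractmethod

class VirtualStorage(ABC):
    """Logical (virtual) view of storage; callers never see the physical backend."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStorage(VirtualStorage):
    """One possible physical backing: a local in-memory map."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> bytes:
        return self._data[key]

def archive_block(store: VirtualStorage, height: int, block: bytes) -> None:
    # Application code depends only on the logical view; the physical resource
    # behind `store` can be swapped or pooled without changing this function.
    store.put(f"block/{height}", block)

archive_block(InMemoryStorage(), 42, b"block-bytes")
```

Code written against the logical interface keeps working when the physical resource is swapped, pooled, or migrated, which is exactly what makes evolution and management simpler.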

The following figure shows a brief journey of virtualization. Virtualization has played an important role in abstracting underlying resources so that people can focus on what they care about most. Today, hardware, operating systems, software, storage, and networks can all be accessed through virtualization. Therefore, we believe that virtualization will naturally be the next step for DLT.

Figure: History of Virtualization (a brief journey of virtualization)

From the history of telephone networks, the traditional Internet, and cellular networks, we can see that, early in the development of these systems, management/control and user traffic were usually coupled together because that was easier to implement. However, as each system evolved, management/control was decoupled from user traffic because of the many benefits described below; a brief sketch after the table illustrates the same idea in code.

Telephone Networks
• Before the decoupling: Signaling System No. 5 (SS5)
• After the decoupling: Signaling System No. 7 (SS7)
• Benefits of the decoupling: reduced call setup time; reduced toll fraud; easier introduction of new services

Traditional Internet
• Before the decoupling: best-effort, IntServ, DiffServ
• After the decoupling: Network Function Virtualization (NFV), Software-defined Networking (SDN)
• Benefits of the decoupling: lower operation cost; simpler network management; easier network evolution

Cellular Networks
• Before the decoupling: 4th Generation (4G)
• After the decoupling: Control and User Plane Separation (CUPS) in 5G
• Benefits of the decoupling: reduced latency of applications and services; increased throughput; independent evolution of the control and user planes
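The same decoupling idea carries over to a DLT node. The sketch below is purely illustrative (the interfaces and class names are assumptions, not an existing system): management/control decisions sit behind a control-plane interface, while user transactions flow only through a data-plane interface, so each side can evolve independently.

```python
from abc import ABC, abstractmethod

class ControlPlane(ABC):
    """Management/control: decides how traffic should be served."""

    @abstractmethod
    def route_for(self, qos_class: str) -> str:
        """Pick the channel (e.g., shard) that should serve a given QoS class."""

class DataPlane(ABC):
    """User traffic: forwards transactions on an already-chosen channel."""

    @abstractmethod
    def submit(self, channel: str, tx: bytes) -> None: ...

class StaticControlPlane(ControlPlane):
    def __init__(self, routing_table: dict[str, str]) -> None:
        self._table = routing_table

    def route_for(self, qos_class: str) -> str:
        return self._table[qos_class]

class PrintingDataPlane(DataPlane):
    def submit(self, channel: str, tx: bytes) -> None:
        print(f"forwarding {len(tx)}-byte transaction on {channel}")

# Routing policy (control plane) can gain new QoS classes or channels without
# touching the forwarding path (data plane), and vice versa.
control = StaticControlPlane({"payments": "fast-channel", "iot": "bulk-channel"})
data = PrintingDataPlane()
data.submit(control.route_for("iot"), b"\x01\x02\x03")
```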