Blockchain as a coordinated global Von Neumann architecture

Jag Sidhu
Jun 17, 2022

In a previous article we laid out the mechanism design of Syscoin’s 4-layer architecture for a globally coordinated financial solution. As we have built out our vision and completed the first iteration of the design, working with rollup developers and ZK researchers has led us to some new realizations about scale. These realizations relate to scaling the new bottleneck in a modular blockchain design: Data Availability. We spoke about the solutions to these bottlenecks here. The more we think with our modular design hat on, the more we realize that what we are developing towards follows the same intuitions and design goals that John Von Neumann must have had when working on a localized computing environment. Obviously a Von Neumann architecture has some stark differences, but the design also has some remarkable similarities. Overall, the goal is to create a globally coordinated version of a computing environment that allows for sovereign financial decision-making, with the provenance of those decisions made provable through cryptographic fingerprints stored on a ledgered chain of “blocks”. The takeaway: if our comparison of blockchain design to modern computing architecture is correct, then we can contextualize our future innovations in scaling blockchains against how the modern computer scaled up. Let’s dive in!

Let’s review the basic Von Neumann design: an input device feeds the Central Processing Unit (comprising a Control Unit and an Arithmetic Logic Unit), which exchanges information with memory and disk and produces output.

Let’s run a thought experiment in which the Syscoin design is an analogue of the Von Neumann model, in an attempt to create an efficient, globally “coordinated” financial computational platform.

Let’s break down the components of the modern computing architecture and see how they relate to parts in our Syscoin design:

Input Device

This is how an action reaches the Central Processing Unit. In our comparison to the Syscoin blockchain, this is the input from a user’s perspective: creating or interacting with transactions on the blockchain or on secondary layers.

Central Processing Unit

This is the guts of the Von Neumann architecture: where calculations happen, and the system that coordinates data to perform those calculations. The Arithmetic Logic Unit (ALU) is responsible for performing calculations, while the Control Unit (CU) performs management tasks between the various caches/memories and is responsible for the gates that store temporary calculation results from the ALU. In our blockchain, the calculations actually happen in our “ALU”, which is the ZK or Optimistic Rollups. The “CU” in our case is the blockchain itself, which manages the calculations and states that get rolled up and aggregated from the “ALU” rollups. The job of the “CU” is therefore not to perform many calculations (although some basic calculations can happen there) but to settle and perform management tasks for the “ALU” itself.

In modern CPUs there are typically many complex ALUs capable of performing billions of arithmetic and logic operations per second. In our case there will be many complex rollups performing transactions in parallel, scaling up our design in the same way.
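To make this mapping concrete, here is a minimal toy sketch (all names are hypothetical, not Syscoin APIs): several rollup “ALUs” execute transaction batches independently and in parallel, while a settlement “CU” records only the resulting state roots without re-executing anything.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def state_root(transactions):
    """Toy 'state root': a hash over an executed batch of transactions."""
    digest = hashlib.sha256()
    for tx in transactions:
        digest.update(tx.encode())
    return digest.hexdigest()

class RollupALU:
    """Executes transaction batches locally, like an ALU doing arithmetic."""
    def __init__(self, name):
        self.name = name

    def execute_batch(self, transactions):
        return self.name, state_root(transactions)

class SettlementCU:
    """Settles rollup results without re-executing them, like a control
    unit coordinating ALUs."""
    def __init__(self):
        self.ledger = {}

    def settle(self, name, root):
        self.ledger[name] = root  # record only the aggregated result

# Many "ALUs" compute in parallel; the "CU" merely coordinates and settles.
rollups = [RollupALU(f"rollup-{i}") for i in range(4)]
batches = [[f"tx-{i}-{j}" for j in range(1000)] for i in range(4)]
cu = SettlementCU()
with ThreadPoolExecutor() as pool:
    for name, root in pool.map(lambda pair: pair[0].execute_batch(pair[1]),
                               zip(rollups, batches)):
        cu.settle(name, root)
print(cu.ledger)
```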

How does the modern computer come to consensus on calculations? Through the control unit. It manages the information and consistency of calculations among ALUs, and the reliability of the information from memory and disk required to perform those calculations. Across multiple computer systems, however, some form of Byzantine Fault Tolerant (BFT) consensus is required for consistency. This is where Nakamoto Consensus comes into play: by creating a provable energy expenditure that cannot be synthesized or trivialized, even by new kinds of computers (quantum), we create a way for multiple Von Neumann models to inter-operate on value. Holistically, in such a design the control unit becomes the coordination unit among logical ALUs that perform calculations locally according to specific use cases (e.g., ZK or Optimistic Rollup instantiations). The logical computer spans individual systems and correspondingly expands processing capability whilst retaining consistency.
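To ground the Nakamoto Consensus step, here is a minimal sketch of its core rule, following the fork with the most cumulative proof-of-work, assuming a simplified block structure. This illustrates the general principle only, not Syscoin’s actual implementation.

```python
def cumulative_work(chain):
    """Total proof-of-work embodied in a chain; each block carries a
    'work' value derived from its difficulty target."""
    return sum(block["work"] for block in chain)

def select_canonical(forks):
    """Nakamoto Consensus in one rule: follow the heaviest (most-work)
    fork. Overtaking it requires re-spending real energy, which is what
    makes the expenditure provable rather than synthesizable."""
    return max(forks, key=cumulative_work)

fork_a = [{"height": 1, "work": 10}, {"height": 2, "work": 12}]
fork_b = [{"height": 1, "work": 10}, {"height": 2, "work": 11},
          {"height": 3, "work": 11}]
print(select_canonical([fork_a, fork_b]) is fork_b)  # True: more total work
```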

There will be multiple optimizations along the way in how ALUs perform (i.e., rollup optimizations) and how control units coordinate (ZK-Everything for internal state), but consistency across the components of the system will likely remain best served by the BFT consensus introduced by Bitcoin. The building blocks provided by our Von Neumann inspired blockchain architecture will be the basics on which to build our future globally coordinated financial systems.

Output

The output from the CPU is the final calculated and requested result, which in our case is the update to the blockchain’s state given back to the user for their own record-keeping or archival purposes.

Memory

Volatile, register-based storage used to access localized information related to CPU calculations and short-term data. In the case of a blockchain this is akin to our Proof of Data Availability (PoDA), where rollup data is stored short-term and pruned, just as memory is cleared either after a reset or after the cache logic determines the data is no longer required because it is unlikely to be accessed.
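A minimal sketch of this pruning idea, assuming a simple time-based retention window (purely illustrative; PoDA’s actual retention rules and parameters differ): blobs are held short-term, but their fingerprints remain verifiable permanently.

```python
import hashlib
import time

class EphemeralDAStore:
    """Illustrative short-term store: blob bytes are prunable after a
    retention window, but their fingerprints (hashes) stay verifiable."""
    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self.blobs = {}           # hash -> (blob, stored_at)
        self.commitments = set()  # hashes kept permanently

    def store(self, blob: bytes) -> str:
        h = hashlib.sha256(blob).hexdigest()
        self.blobs[h] = (blob, time.time())
        self.commitments.add(h)
        return h

    def prune(self):
        """Drop blob bytes older than the window, like evicting cache lines."""
        now = time.time()
        expired = [h for h, (_, t) in self.blobs.items()
                   if now - t > self.retention]
        for h in expired:
            del self.blobs[h]

    def verify(self, blob: bytes) -> bool:
        """Anyone holding the data can still prove it matches a commitment."""
        return hashlib.sha256(blob).hexdigest() in self.commitments
```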

Disk

This is non-volatile storage of information, kept for permanence. In our case it is the block space itself, which stores long-term calculations, data, and states for a premium rate paid in fees in the blockchain’s native token. In the case of PoDA, we also literally use a disk: the data is stored on a local disk (the hard drive).

We also leverage the disk, i.e. the blockchain data itself, as the stored-program medium where contract byte-code is stored.
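As a rough illustration of the stored-program idea in this setting, here is a toy “ledger disk” in which contract bytecode and data commitments share one address space (a hypothetical structure, not Syscoin’s actual state layout):

```python
import hashlib

# A toy "stored-program ledger": contract bytecode and data commitments
# share one address space, mirroring the stored-program principle.
# (Illustrative only; real chain state layouts differ.)
ledger_disk = {}

def deploy(address: str, bytecode: bytes):
    """Store program bytes on the 'disk' (block space)."""
    ledger_disk[address] = {"kind": "program", "payload": bytecode}

def commit_data(address: str, blob: bytes):
    """Store only the fingerprint of rollup data on the same 'disk'."""
    ledger_disk[address] = {"kind": "data",
                            "payload": hashlib.sha256(blob).hexdigest()}

deploy("0xa1", b"\x60\x60\x60\x40")   # hypothetical contract bytecode
commit_data("0xb2", b"rollup batch")  # hypothetical rollup blob
print(ledger_disk)
```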

The goal of this article was a thought experiment: if we intend to create a globally coordinated financial computer, can we base our architecture on the modern computer, i.e., the Von Neumann architecture? Several basic building blocks map directly, as we have designed Syscoin to replicate an architecture we know works and scales, and several performance improvements may follow in the future as we model our design for a globally coordinated computing platform. The features that scaled modern computers were caches, locality of reference when fetching memory or data, branch prediction, and reduced latency in the hardware between memory and CPU.

Note that we have already alleviated the major bottleneck of the traditional Von Neumann architecture:

  1. Via PoDA. The classic Von Neumann bottleneck stems from a shared bus between program memory (where, in our case, the smart contracts are stored) and data memory (where the data for L2 censorship resistance exists); PoDA keeps these paths separate.
  2. Also note that in our case the CPU is not required to fetch data and memory on demand from our Layer 1, as they are usually localized to the rollup running them; theoretically we can have no busy-wait CPU cycles when calculating proofs for transactions (a toy simulation of this appears after the list).
  3. The design of a modular blockchain also introduces truly parallel CPU possibilities: transactions do not require resources common to all rollups, so computing can happen completely independently.
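Here is a toy simulation of the difference, assuming a fixed fetch latency: fetches contending for one shared bus serialize, while per-rollup local fetches complete concurrently (illustrative timing only, not a model of any real node):

```python
import threading
import time

BUS_LATENCY = 0.01           # simulated latency of a single fetch
shared_bus = threading.Lock()

def fetch_over_shared_bus(_):
    # Program and data fetches contend for one shared bus: fully serialized.
    with shared_bus:
        time.sleep(BUS_LATENCY)

def fetch_locally(_):
    # Each rollup reads its own local store: no contention between fetches.
    time.sleep(BUS_LATENCY)

def run(fetch, n=16):
    threads = [threading.Thread(target=fetch, args=(i,)) for i in range(n)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

print(f"shared bus:  {run(fetch_over_shared_bus):.2f}s")  # ~ n * latency
print(f"local fetch: {run(fetch_locally):.2f}s")          # ~ latency
```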

Interplanetary

If we truly wish to have a coordinated computer working at scale globally, then this architecture should also be able to span other planets. The prerequisite for such a thing is minimizing the system’s dependence on time. For example, if the system depends on block times in a centrally coordinated way (the monolithic blockchain design), then we as consumers of the system become too dependent on time, and it breaks down even when used globally (where firewalls restrict throughput). In our Von Neumann analogy we simply think of a design where dependence on memory and the Control Unit is separated from the ALU. With the ALU separate and coordinated over some time-independent interval, we can process computations on other planets while remaining coordinated on Earth through the settlement layers of the blockchain. Since there is a very large time delay in communications between Earth and another planet, localized “rollups” (ALUs) would allow transactions to happen fast enough for local economies while still being secured by a common settlement layer.
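A back-of-the-envelope calculation shows why locality is the only option here: the one-way light delay between Earth and Mars ranges from roughly 3 to 22 minutes depending on orbital positions, so any per-transaction round-trip to Earth is untenable.

```python
# One-way light delay between Earth and Mars (distance varies with orbits).
SPEED_OF_LIGHT_KM_S = 299_792
MARS_DISTANCE_KM = {"closest": 54_600_000, "farthest": 401_000_000}

for label, km in MARS_DISTANCE_KM.items():
    delay_min = km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: ~{delay_min:.1f} minutes one-way")
# closest: ~3.0 minutes, farthest: ~22.3 minutes.
# Any consensus round-trip across this link takes twice that, so a Martian
# "ALU" (local rollup) must finalize locally and settle to Earth in
# latency-tolerant batches rather than per-block.
```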

Future works

Once we create a frame of reference for our globally coordinated financial computer, we can model innovations and improvements based on the solutions found in modern Von Neumann architecture. A few things we should look at are:

  1. Cache between memory (L1 contracts and data availability) and CPU (ZK-Rollup sequencers); see the sketch after this list
  2. Modified Harvard architecture for separate cache strategy for each rollup (rollup design strategy, composability between rollups)
  3. Using branch prediction algorithms to improve CPU performance (ZK-Rollup sequencers)
  4. Limiting CPU stack or scratchpad memory to reduce resources per user/group/use-case (ZK-Rollup sequencer rate limiting)
  5. Implementing the CPU on-chip to provide locality of reference (ZK Proof ASICs)
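As a sketch of item 1, here is a tiny LRU cache sitting between a sequencer (“CPU”) and the L1 data-availability layer (“memory”), mirroring the cache between CPU and memory; the names and fetch interface are hypothetical, not Syscoin code.

```python
from collections import OrderedDict

class DACache:
    """Tiny LRU cache between a rollup sequencer ("CPU") and the L1
    data-availability layer ("memory"), mirroring a CPU cache."""
    def __init__(self, capacity, fetch_from_l1):
        self.capacity = capacity
        self.fetch_from_l1 = fetch_from_l1   # slow path, e.g. an L1 RPC call
        self.entries = OrderedDict()

    def get(self, blob_hash):
        if blob_hash in self.entries:
            self.entries.move_to_end(blob_hash)  # mark as recently used
            return self.entries[blob_hash]       # hit: no L1 round-trip
        blob = self.fetch_from_l1(blob_hash)     # miss: fetch from "memory"
        self.entries[blob_hash] = blob
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # evict least recently used
        return blob

cache = DACache(capacity=2, fetch_from_l1=lambda h: f"blob:{h}")
cache.get("0x01"); cache.get("0x02")
cache.get("0x01")  # served from cache, no L1 fetch
```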

Final Words

Just as Von Neumann’s seminal paper described that the program and the data can share the same address space, so too do we separate data availability from the block space while using the same mechanism to secure both, so that each can be fetched in the same space.

When Alan Turing visited Princeton University and worked with Von Neumann on the philosophy of Artificial Intelligence, I doubt either would have foreseen the implications of AI in the eventual Singularity event, a term Von Neumann coined. To prepare for it, we will need to dis-intermediate access to computing resources and coordinate a logical computing model that spans digital computing devices globally. Only then will it be possible for agents to transact value amongst one another at scale, friction-less and without human intervention.
