April 3, 2025

Web3 Has a Memory Problem – and We Finally Have a Fix

A world computer needs a memory that's not just decentralized but also efficient, scalable and reliable. We can build it using Random Linear Network Coding (RLNC), says Muriel Médard, co-founder of Optimum, which offers memory infrastructure for any blockchain. Médard is the co-inventor of RLNC, which she developed over two decades of MIT research. (Source: www.coindesk.com)

Updated Apr 1, 2025, 5:40 PM UTC | Published Apr 1, 2025, 5:14 PM UTC

Web3 has a memory problem. Not in the "we forgot something" sense, but in the core architectural sense: it doesn't have a real memory layer.

After World War II, John von Neumann laid out the architecture for modern computers. Every computer needs input and output, a CPU for control and arithmetic, and memory to store the latest version of data, along with a "bus" to retrieve and update that data. Commonly known as RAM, this architecture has been the foundation of computing for decades.

At its core, Web3 is a decentralized computer – a "world computer." At the higher layers, it's fairly recognizable: operating systems (EVM, SVM) running on thousands of decentralized nodes, powering decentralized applications and protocols.

But when you dig deeper, something's missing. The memory layer, essential for storing, accessing and updating short-term and long-term data, doesn't look like the memory bus or memory unit von Neumann envisioned.

Instead, it's a mashup of different best-effort approaches to achieve this purpose, and the results are overall messy, inefficient and hard to navigate.

Here's the problem: if we're going to build a world computer that's fundamentally different from the von Neumann model, there had better be a really good reason to do so. As of right now, Web3's memory layer isn't just different, it's convoluted and inefficient. Transactions are slow. Storage is sluggish and costly. Scaling for mass adoption with the current approach is nigh impossible. And that's not what decentralization was supposed to be about.

But there is another way.

A lot of people in this space are trying their best to work around this limitation, and we're at a point now where the current workaround solutions just cannot keep up. This is where algebraic coding, which makes use of equations to represent data for efficiency, resilience and flexibility, comes in.

The core problem is this: how do we implement decentralized coding for Web3?

A new memory infrastructure

This is why I took the leap from academia, where I held the role of MIT NEC Chair and Professor of Software Science and Engineering, to dedicate myself and a team of experts to this problem.

I saw something bigger: the potential to redefine how we think about computing in a decentralized world.

My team at Optimum is creating decentralized memory that works like a dedicated computer. Our approach is powered by Random Linear Network Coding (RLNC), a technology developed in my MIT lab over nearly two decades. It is a proven data coding method that maximizes throughput and resilience in high-reliability networks, from industrial systems to the internet.

Data coding is the process of converting information from one format to another for efficient storage, transmission or processing. Data coding has been around for decades, and there are many iterations of it in use in networks today. RLNC is a modern approach to data coding built specially for decentralized computing. This scheme transforms data into packets for transmission across a network of nodes, ensuring high speed and efficiency.
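To make the idea concrete, here is a minimal sketch of random linear coding over GF(2) in Python. This is an illustration of the general RLNC technique under simplifying assumptions (one-byte chunks, binary coefficients), not Optimum's implementation: a sender emits random XOR combinations of data chunks, and a receiver recovers the originals by Gaussian elimination once it holds enough linearly independent packets.

```python
import random

def rlnc_encode(chunks, rng):
    """Produce one coded packet: (GF(2) coefficient vector, XOR payload)."""
    k = len(chunks)
    coeffs = [rng.randint(0, 1) for _ in range(k)]
    if not any(coeffs):                         # skip the useless all-zero combination
        coeffs[rng.randrange(k)] = 1
    payload = 0
    for c, chunk in zip(coeffs, chunks):
        if c:
            payload ^= chunk
    return coeffs, payload

def rlnc_decode(packets, k):
    """Recover the k original chunks via Gaussian elimination over GF(2).
    Returns None if the packets received so far do not have full rank."""
    rows = [[list(c), p] for c, p in packets]   # mutable copies
    basis = []
    for col in range(k):
        pivot = next((r for r in rows if r[0][col] == 1), None)
        if pivot is None:
            return None                         # rank deficient: wait for more packets
        rows.remove(pivot)
        for r in rows + basis:                  # clear this column everywhere else
            if r[0][col] == 1:
                r[0] = [a ^ b for a, b in zip(r[0], pivot[0])]
                r[1] ^= pivot[1]
        basis.append(pivot)
    chunks = [0] * k
    for r in basis:                             # each row now has a single 1 left
        chunks[r[0].index(1)] = r[1]
    return chunks

# Example: split a tiny message into one-byte chunks and stream coded packets
rng = random.Random(0)
original = [0x57, 0x45, 0x42, 0x33]             # the bytes of "WEB3"
packets, decoded = [], None
while decoded is None:                          # receive until rank is full
    packets.append(rlnc_encode(original, rng))
    decoded = rlnc_decode(packets, len(original))
print(decoded == original)                      # prints True
```

Note that the receiver never needs any specific packet: any set of packets whose coefficient vectors span the space will do, which is what makes the scheme robust to loss.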

With multiple engineering awards from top global institutions, more than 80 patents, and numerous real-world deployments, RLNC is no longer just a theory. RLNC has garnered significant recognition, including the 2009 IEEE Communications Society and Information Theory Society Joint Paper Award for the work "A Random Linear Network Coding Approach to Multicast." RLNC's impact was acknowledged with the IEEE Koji Kobayashi Computers and Communications Award in 2022.

RLNC is now ready for decentralized systems, enabling faster data propagation, efficient storage, and real-time access, making it a key solution for Web3's scalability challenges.

Why this matters

Let's take a step back. Why does all of this matter? Because we need memory for the world computer that's not just decentralized but also efficient, scalable and reliable.

Currently, blockchains rely on best-effort, ad hoc solutions that achieve partially what memory in high-performance computing does. What they lack is a unified memory layer that encompasses both the memory bus for data propagation and the RAM for data storage and access.

The bus part of the computer should not become the bottleneck, as it does now. Let me explain.

"Gossip" is the common method for data propagation in blockchain networks. It is a peer-to-peer communication protocol in which nodes exchange information with random peers to spread data across the network. In its current implementation, it struggles at scale.

Imagine you need 10 pieces of information from neighbors who repeat what they've heard. As you speak to them, at first you get new information. But as you approach nine out of 10, the chance of hearing something new from a neighbor drops, making the final piece of information the hardest to get. Chances are 90% that the next thing you hear is something you already know.

This is how blockchain gossip works today – efficient early on, but redundant and slow when trying to complete the information sharing. You would have to be extremely lucky to get something new every time.

With RLNC, we get around the core scalability issue in current gossip. RLNC works as though you managed to get extremely lucky, so every time you hear info, it just happens to be info that is new to you. That means much greater throughput and much lower latency. This RLNC-powered gossip is our first product, which validators can implement through a simple API call to optimize data propagation for their nodes.
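The "last piece is hardest" effect described above is the classic coupon-collector problem, and the claimed advantage of coded gossip can be checked with a short simulation. This is an illustrative toy model (one packet heard per round, random binary coefficient vectors), not a measurement of any real network: uncoded gossip needs roughly k·ln(k) rounds to collect k chunks, while with random linear combinations almost every packet is innovative, so roughly k rounds suffice.

```python
import random

def plain_gossip_rounds(k, rng):
    """Rounds for one node to hear all k distinct chunks when each round
    delivers a uniformly random chunk (the coupon-collector model)."""
    have, rounds = set(), 0
    while len(have) < k:
        have.add(rng.randrange(k))
        rounds += 1
    return rounds

def coded_gossip_rounds(k, rng):
    """Rounds until the random GF(2) coefficient vectors a node has heard
    span the full k-dimensional space, i.e. until it can decode."""
    basis, rounds = {}, 0              # XOR basis keyed by leading bit
    while len(basis) < k:
        v = rng.getrandbits(k)         # random coefficient vector as a bitmask
        rounds += 1
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v           # innovative packet: rank grows by one
                break
            v ^= basis[h]              # reduce by the basis; 0 means redundant
    return rounds

rng = random.Random(1)
k, trials = 10, 2000
plain = sum(plain_gossip_rounds(k, rng) for _ in range(trials)) / trials
coded = sum(coded_gossip_rounds(k, rng) for _ in range(trials)) / trials
print(f"plain gossip: ~{plain:.1f} rounds; coded gossip: ~{coded:.1f} rounds")
```

For k = 10 the uncoded average lands near 10·H(10) ≈ 29 rounds, while the coded average stays close to 10, and the gap widens as k grows.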

Let us now examine the memory part. It helps to think of memory as dynamic storage, like RAM in a computer or, for that matter, our closet. Decentralized RAM should mimic a closet; it should be structured, reliable, and consistent. A piece of data is either there or not, no half-bits, no missing sleeves. That's atomicity. Items stay in the order they were placed – you might see an older version, but never a wrong one. That's consistency. And, unless moved, everything stays put; data doesn't disappear. That's durability.

Instead of the closet, what do we have? Mempools are not something we keep around in computers, so why do we do that in Web3? The main reason is that there is not a proper memory layer. If we think of data management in blockchains as managing clothes in our closet, a mempool is like a pile of laundry on the floor, where you are never sure what you will find.

Current delays in transaction processing can be extremely high for any single chain. Citing Ethereum as an example, it takes two epochs, or 12.8 minutes, to finalize any single transaction. Without decentralized RAM, Web3 relies on mempools, where transactions sit until they're processed, resulting in delays, congestion and unpredictability.

Full nodes store everything, bloating the system and making retrieval complex and costly. In computers, the RAM keeps what is currently needed, while less-used data moves to cold storage, maybe in the cloud or on disk. Full nodes are like a closet holding all the clothes you have ever worn, from everything you wore as a baby until now.

This is not something we do on our computers, but it exists in Web3 because storage and read/write access aren't optimized. With RLNC, we create decentralized RAM (deRAM) for timely, updateable state in a way that is economical, resilient and scalable.

DeRAM and data propagation powered by RLNC can solve Web3's biggest bottlenecks by making memory faster, more efficient, and more scalable. It optimizes data propagation, reduces storage bloat, and enables real-time access without compromising decentralization. It's long been a key missing piece in the world computer, but not for long.

Note: The views expressed in this column are those of the author and do not necessarily reflect those of CoinDesk, Inc. or its owners and affiliates.

Muriel Médard is the co-founder and CEO of Optimum, the high-performance memory infrastructure for any blockchain. She is the co-inventor of RLNC, the technology behind Optimum, spun out of over two decades of MIT research, and holds the NEC Chair of Software Science and Engineering at MIT. She is a member of the US National Academy of Engineering, the American Academy of Arts and Sciences, and the German National Academy of Sciences, and a Fellow of the US National Academy of Inventors and the Institute of Electrical and Electronics Engineers.

