Jonathan Schemoul – Founder of Aleph.im Ep #48

EPISODE NOTES

Jonathan “Moshe” Schemoul is the founder of aleph.im, a cross-chain, peer-to-peer storage and computing network, and the first decentralized indexing provider for Solana.

  • 00:36 – Intro & how Jonathan Schemoul got into crypto
  • 02:09 – What is Aleph and how does it work?
  • 06:48 – Is Aleph database a blockchain?
  • 09:20 – Understanding core nodes and Aleph’s economics
  • 11:22 – How does Aleph interact with DNS?
  • 15:29 – How does Aleph get verification of certificates?  
  • 21:44 – How does Aleph check integrity of computation?
  • 25:06 – What is Aleph’s vision?
  • 30:32 – Will Aleph always be project facing or will it one day be user facing?
  • 32:28 – What load can Aleph currently handle?
  • 39:00 – How do the economics work for people providing hardware and bandwidth?

DISCLAIMER

The information on this podcast is provided for educational, informational, and entertainment purposes only, without any express or implied warranty of any kind, including warranties of accuracy, completeness, or fitness for any particular purpose. The information contained in or provided from or through this podcast is not intended to be and does not constitute financial advice, investment advice, trading advice, or any other advice. The information on this podcast is general in nature and is not specific to you, the user or anyone else. You should not make any decision, financial, investment, trading or otherwise, based on any of the information presented on this podcast without undertaking independent due diligence and consultation with a professional broker or financial advisor.

Anatoly Yakovenko (00:12):

Hey folks, this is Anatoly, and you’re listening to The Solana Podcast. And today I have Jonathan Schemoul with me, who’s the founder of the Aleph.im project. Really awesome to have you.

Jonathan Schemoul (00:22):

Thank you very much. I’m really happy to be here today.

Anatoly Yakovenko (00:25):

Cool. We usually start these with a simple question, how did you get into crypto? What’s your story? What’s the origin story?

Jonathan Schemoul (00:36):

Well, into crypto, it’s a long story. I started way back in time, a bit on Bitcoin, then I stopped because it was only money back then, and that wasn’t the end game for me. Then I came back into crypto in 2015, 2016, and I started doing a bit of development, because I saw that I really wanted to be part of Web 3, to do nice things with it. I started developing as an open-source developer for a few projects. One of these is the NULS project, which is a Chinese layer-one blockchain. I’m not really involved with it anymore.

Jonathan Schemoul (01:16):

But working with them as a community open-source developer, I saw that there were some missing links, that you couldn’t decentralize the whole stack with just a layer one, at least not the one they were building back then. So that’s how the Aleph.im project was born. Besides that, I’ve been developing for a lot of companies before, in the IoT space and also for big banks some time ago. I’ve been a developer for a lot of years.

Anatoly Yakovenko (01:48):

That’s great. I mean, that’s a great background. The thing that you’re focusing on with Aleph is this idea that the blockchain is just one small piece of Web 3, and you still need UI front-ends, business logic, and things sitting on top of the blockchain. How does that work?

Jonathan Schemoul (02:09):

The idea is that, okay, now you can have smart contracts on Solana, that’s great. You can even do way more than just money with smart contracts, that’s great. Now, you need to have a front-end, so you need to have storage for that front-end. And that’s not all, because a smart contract, a program, doesn’t have all the data that you need. So you will need some kind of indexing to get history. You will need a back-end for that.

Jonathan Schemoul (02:37):

Most of the DeFi applications that we see have some centralized back-end behind them. They’re running on AWS, sometimes on dedicated servers or things like that, which is still centralized. If a government, and we just saw something about it today, wants to shut down a DeFi protocol that is organized like that, they can. With Aleph.im what we are trying to do is decentralize the last mile, because for that last mile most projects are using AWS, so we need to decentralize AWS.

Jonathan Schemoul (03:11):

So we provide storage, as in file storage for the front-end files, database storage, because most applications are just databases, and also an equivalent to Amazon Lambda, where you write small functions that get launched on a decentralized cloud, wherever there is room for them, and return you a value. These can be written in any language, and they can connect to the web and also to RPCs from blockchains, like Solana obviously.
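
To make the three primitives concrete (file storage for front-ends, database-style messages, and Lambda-style functions), here is a minimal sketch of what using them could look like. `AlephClient` and its methods are invented stand-ins for illustration, not the real aleph.im SDK surface.

```python
import hashlib

# Hypothetical sketch of the three primitives described above. AlephClient
# and its methods are invented names, not the real aleph.im SDK.

class AlephClient:
    def __init__(self, signer: str):
        self.signer = signer  # an address from any supported chain

    def store_file(self, data: bytes) -> str:
        # Stand-in: content-address the file the way an IPFS-style store would
        return hashlib.sha256(data).hexdigest()

    def post_entry(self, channel: str, doc: dict) -> dict:
        # Stand-in: a signed database message posted to a channel
        return {"channel": channel, "sender": self.signer, "content": doc}

client = AlephClient(signer="solana:placeholder-address")
front_end_hash = client.store_file(b"<html>my dApp front-end</html>")
client.post_entry("my-dapp", {"type": "deploy", "frontend": front_end_hash})
```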

Anatoly Yakovenko (03:42):

Got it. Super cool. So this is a storage mechanism. Does it guarantee consistency? How is it decentralized? What happens if you nuke it? Yellowstone blows up, the current set of servers from Aleph get destroyed in the volcano. How do I move, switch, what state do I lose? Those are the hard distributed systems questions.

Jonathan Schemoul (04:08):

Yeah. It’s a really good question. Aleph.im is not a blockchain at all. We don’t have a blockchain, there are enough already. We just accept messages from blockchains. All the supported blockchains are accepted on the network, meaning that a message signed by an address on any of them is accepted on the network. Our whole network, hence the name aleph.im, .im as in instant messaging, works with messages.

Jonathan Schemoul (04:45):

Those messages are organized by channels, just like you would go on Telegram channels and get their history. The network keeps track of those messages, and when you start a new node, you don’t get the history of messages directly from the other nodes. You connect to blockchains, to specific smart contracts, and look at past events, for example on Ethereum or on Solana. You look at past events for the synchronization of the network and say, okay, there have been all these events, let me ask the whole network what those messages were. Then you resync; when there are missing parts you leave them aside, and then you get a view of the channels and the messages.
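
A rough sketch of the bootstrap flow described here: read past synchronization events from a chain, fetch the referenced message batches from peers, verify signatures, and leave missing parts aside. Every helper name is an invented stand-in; this assumes nothing about the real pyaleph internals.

```python
# Hypothetical resync sketch. On-chain sync events carry hashes of message
# batches; a new node fetches each batch from peers and skips anything it
# cannot retrieve. fetch_chain_events, fetch_from_peers and the message
# layout are all invented stand-ins.

def verify_signature(msg: dict) -> bool:
    # Stand-in: a real node checks the signature against the sender's
    # address on whichever supported chain it belongs to.
    return "sender" in msg and "signature" in msg

def resync(fetch_chain_events, fetch_from_peers, channels: list[str]) -> dict:
    state = {ch: [] for ch in channels}
    for event in fetch_chain_events():                 # past events on Ethereum/Solana
        batch = fetch_from_peers(event["batch_hash"])  # ask the whole network
        if batch is None:
            continue  # missing part: left aside, as described above
        for msg in batch:
            if verify_signature(msg) and msg.get("channel") in state:
                state[msg["channel"]].append(msg)
    return state
```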

Anatoly Yakovenko (05:31):

So you write your software, your Lambda hook, as if it’s re-entrant, right? So you’re potentially recording your progress on Solana as you’re processing it.

Jonathan Schemoul (05:43):

For the Lambda it’s a bit different. Here I was explaining how the network works for the messaging, for the global state. For the state of your application, you could get your state from a blockchain, like Solana. For example, all the indexing effort that we are doing uses Solana as a source of synchronization for these Lambdas. But then you can have multiple kinds of volumes, because since it’s a Linux micro VM, everything is a volume.

Jonathan Schemoul (06:18):

So we have a local storage volume that is local to the running host. And then the Lambdas can issue messages on the decentralized database or on the aleph.im storage, write to the local file system, then issue messages again, et cetera. And we are also working on another kind of file system that is distributed, where any VM that can write to it makes the others receive the changes, which is kind of tricky.

Anatoly Yakovenko (06:48):

Is the Aleph database a distributed database? Is it a Byzantine fault-tolerant database? Is it designed with that in mind?

Jonathan Schemoul (06:58):

Yeah. The idea is that when you send a message on the network, it gets stored by all the other nodes that are interested in your channel. And then there are synchronization nodes that go and write hashes of the data and signatures inside messages that they push on blockchains, so that when others come, they can synchronize and replicate all the data. Even if one part of the network gets totally disconnected, it can reconnect to the other part through the peer-to-peer network, through blockchains, through IPFS. We have multiple kinds of connectivity solutions so that nodes can reconnect and resync.

Anatoly Yakovenko (07:42):

So the Aleph database, if it’s Byzantine fault tolerant, I mean, doesn’t that make it a blockchain? Is there a token? Is it crypto-economically fault tolerant?

Jonathan Schemoul (07:56):

Yeah. So we have a token, but the token lives on multiple blockchains: Ethereum, Solana, and a few others, but those are the most used today. We have a token, and you need the token for your data to stay there. If you don’t have it anymore, your data gets garbage collected. But we don’t have a blockchain, because we go and write on other layer ones. We are technically a layer-two database, plus computing, plus storage.

Anatoly Yakovenko (08:23):

But the data storage, the Aleph distributed database, what is that backed by? Can I pick my own blockchain and use it as a common interface, or something like that?

Jonathan Schemoul (08:34):

Well, currently it writes on Ethereum, and we’re working on making it write on Solana. For this we need our indexer to be super powerful, so we’ll get it writing on Solana very soon. Basically you can write on multiple blockchains and use them as a source of proof.

Anatoly Yakovenko (08:53):

Got it. That’s pretty interesting. So it really doesn’t have its own blockchain and you’re just using the fault tolerance of the chains you’re connected to.

Jonathan Schemoul (09:04):

Exactly.

Anatoly Yakovenko (09:06):

Awesome. Yeah, that’s really cool. The other challenge, I think, is how you deal with domains and the web. Where do you run these execution nodes? How do you connect all those pieces?

Jonathan Schemoul (09:20):

It’s a really good question. To connect all the pieces together, we didn’t develop some really fancy stuff like proofs of space and time to verify that the data is really stored. We are using something much more low-tech, which is just quality control. We have core channel nodes, which are the controllers of the network; they need to hold some Aleph and have stakers under the staking economics. They verify that the other core channel nodes are behaving well, and also that the resource nodes are behaving well. The resource nodes do the real work of storing data, providing computing, et cetera, and they’re continuously controlled by the core channel nodes.

Anatoly Yakovenko (10:09):

That’s great. So they’re basically like a tokenized health check, right?

Jonathan Schemoul (10:14):

Yeah.

Anatoly Yakovenko (10:14):

I can spin this up and they can continuously monitor whether this computation is making progress, right?

Jonathan Schemoul (10:21):

Exactly.

Anatoly Yakovenko (10:21):

Is that verification programmable? Can I, as an app developer, code up my own app-specific health checks, or an interface, or something like that?

Jonathan Schemoul (10:35):

It’s a really good question. That’s exactly what we are working on right now.

Anatoly Yakovenko (10:40):

I’m leaking all the features. My imagination is going.

Jonathan Schemoul (10:44):

No, no worries. Well, it’s really interesting, because to understand if an application behaves well on a host, you need to understand what the application is doing. So yes, we will give some kind of health check, which is kind of a unit test of how the app should work. So you will be able to provide unit tests for your app, basically.
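
As one way to picture the “unit test for your app” idea, here is a sketch of a developer-supplied health check that a controller node could run against a host. The spec format and endpoints are invented for illustration; this is not a published aleph.im interface.

```python
import urllib.request

# Sketch of a developer-supplied health check in the spirit of the "unit
# test for your app" idea. The spec format and endpoints are invented.

HEALTH_SPEC = [
    # (path, expected HTTP status, substring expected in the body)
    ("/api/status", 200, "ok"),
    ("/api/pools", 200, "pool"),
]

def check_host(base_url: str) -> bool:
    """Run the app's own checks against one micro VM host."""
    for path, want_status, want_body in HEALTH_SPEC:
        try:
            with urllib.request.urlopen(base_url + path, timeout=5) as resp:
                if resp.status != want_status:
                    return False
                if want_body not in resp.read().decode(errors="replace"):
                    return False
        except OSError:
            return False  # unreachable or erroring host fails the check
    return True
```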

Anatoly Yakovenko (11:11):

That’s really cool. What about domains? Like actual DNS?

Jonathan Schemoul (11:17):

Yeah.

Anatoly Yakovenko (11:20):

I’m asking all the hard questions.

Jonathan Schemoul (11:22):

Yeah. These questions will be answered if I explain how we handle access to these virtual machines. For DNS, for just IPFS, there are already quite a few solutions; that’s not an issue. But if you want to make a domain point to one micro VM, you want your micro VM to be able to serve your data. First, how do we do the load balancing? Because that’s the important question. For load balancing we have two ways. One is regular cloud load balancing, which could be blocked by a government, could be censored, because that’s what can happen when you have a centralized point of control.

Jonathan Schemoul (12:07):

We will run it ourselves, and a few of our partners might run some of the cloud load balancers, so that you can just point your domain to a cloud load balancer. The cloud load balancer will create certificates and things like that; it will work. We will run one instance, Ubisoft will likely run another, and likewise many of our partners. Well, for Ubisoft it’s not certain, there have just been some talks about it. But perhaps other partners could run cloud load balancers that would go and point to the specific micro VM hosts where your app is running, and route to them. That might work.

Jonathan Schemoul (12:48):

Now, what happens if a government says, “This app shouldn’t work, this domain shouldn’t work”? Then you have two solutions. You either put the front-end inside IPFS, use some IPFS gateways, et cetera, and then the back-end is on the VM network. But then what happens if a government blocks the specific DNS of the micro VM network, global.aleph.sh, .aleph.cloud, whatever? Then we have decentralized load balancing that comes into play.

Jonathan Schemoul (13:24):

The idea of the decentralized load balancing is that your browser will connect to the IPFS network using libp2p, just libp2p, find running PyAleph nodes, contact them directly, then ask a PyAleph node, “What micro VM hosts are running this software?” And then you can contact them directly. We are working on a JavaScript library that will do all this work on the client side, so that you can have your front-end on IPFS, and it will then go and find all the back-end hosts that could answer your request.
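
The real client-side work happens in a JavaScript library over libp2p, per the description above; this Python sketch with invented helpers only illustrates the discovery logic: find PyAleph nodes, ask them which hosts run the app, then pick one at random.

```python
import random

# Illustration of decentralized load balancing. find_pyaleph_nodes and
# ask_node are invented stand-ins for the browser-side libp2p client.

def resolve_backends(app_ref: str, find_pyaleph_nodes, ask_node) -> list[str]:
    hosts: set[str] = set()
    for node in find_pyaleph_nodes():  # peers discovered over libp2p
        answer = ask_node(node, {"query": "vm_hosts", "ref": app_ref})
        hosts.update(answer or [])     # hosts claiming to run this software
    return sorted(hosts)

def pick_backend(hosts: list[str]) -> str:
    # A random choice spreads load and leaves no single point to censor
    return random.choice(hosts)
```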

Anatoly Yakovenko (13:58):

That’s super cool. You guys are working on some really hard problems. I think it should be fairly easy to have basically a resolver that points to ENS in the system, right? That’s fairly straightforward. And basically you should be able to use any kind of name system from any blockchain.

Jonathan Schemoul (14:25):

Yeah, clearly.

Anatoly Yakovenko (14:26):

Do you think this is something that browsers are starting to recognize as standardizable? Is there a future where this technology could start percolating up to the UI level, where the end user can pick a blockchain-based DNS resolver that connects all the pieces, right? From the human to this decentralized one.

Jonathan Schemoul (14:51):

I think that’s something that could come. I think the one that could really help with this is the Mozilla Foundation; they would be the ones to talk with. We aren’t in talks with them yet, because we can’t really take that step right now, we have a lot on our plate. But in the future I’m pretty sure it’s the way to go. We will connect to any effort in that area and we will recognize it. I know that for IPFS, for example, IPFS and IPNS, there are some efforts, some browser extensions that you can install to have it, et cetera.

Anatoly Yakovenko (15:29):

How does certificate chaining play with this? What happens if I need to have a cert on my service, and things like that?

Jonathan Schemoul (15:38):

A certificate on your service? Yeah.

Anatoly Yakovenko (15:41):

Like their sign or whatever.

Jonathan Schemoul (15:43):

Well, we use the one that everyone uses, which is-

Anatoly Yakovenko (15:48):

Let’s Encrypt. The EFF one.

Jonathan Schemoul (15:49):

Yeah, exactly. We’re using that one. We use discovery with the content, so that we switch to a specific piece of content when Let’s Encrypt connects; we serve that content, then we get a valid certificate and we can serve the real content.

Anatoly Yakovenko (16:07):

Can you unpack that a little bit?

Jonathan Schemoul (16:10):

Yeah. Well, Let’s Encrypt has multiple ways to certify that you own a certain domain. For subdomains of .aleph.sh and .aleph.cloud it’s easy, we are using wildcard certificates. For custom domains that you point to your content directly, what we do is that you put a key inside your DNS to say, this is the virtual machine that should be mapped to that domain. Then you do a CNAME to our cloud load balancer, and when the VM hosts get a request for this domain, they go and check the DNS to see which VM they should serve, generate a certificate using Let’s Encrypt for that domain, and start serving it.
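
The custom-domain lookup can be illustrated with dnspython: read a TXT record naming the VM, and the CNAME pointing at the load balancer. The `_aleph` record name here is an assumed example, not necessarily the record layout aleph.im actually uses.

```python
import dns.resolver  # pip install dnspython

# Illustration of the custom-domain check described above. The "_aleph"
# TXT record name is an assumption for this sketch; check the aleph.im
# docs for the real record layout.

def lookup_domain(domain: str) -> dict:
    info = {}
    # TXT record telling hosts which micro VM the domain maps to
    txt = dns.resolver.resolve(f"_aleph.{domain}", "TXT")
    info["vm_ref"] = b"".join(txt[0].strings).decode()
    # CNAME pointing the domain at the cloud load balancer
    cname = dns.resolver.resolve(domain, "CNAME")
    info["lb_target"] = str(cname[0].target)
    return info

# A VM host receiving a request for the domain would run this lookup,
# generate a Let's Encrypt certificate for it, and start serving.
```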

Anatoly Yakovenko (16:59):

Oh man, this would be really cool. If we could have an ENS where, in my ENS registry, I set my Let’s Encrypt domain, and then I run a local DNS server on my home machine where I run my browser and point to that as a resolver, you could kind of tie these knots together and get-

Jonathan Schemoul (17:23):

Yeah, it could work.

Anatoly Yakovenko (17:24):

That’s really cool. What happens if these instances die? Where do you guys get more hardware? How does that process work?

Jonathan Schemoul (17:36):

Well, an instance can just stop, and then the load balancing system will find another instance to run your code. Then, what happens when an instance gets a request for code that it doesn’t have, for the micro VM network? It goes on the network and checks, okay, what is the database entry that is in front? It takes the database entry. Have there been any upgrades to it? Okay, I get the upgrades. I subscribe, using a WebSocket, to the upgrades of this database entry, because it’s a document, a database entry.

Jonathan Schemoul (18:14):

And then it looks: okay, this is the root FS that I should load. Do I have it? If I have it, I can use it; if not, I download it from the network. I apply that root FS. Where is the code? Okay. What volumes does it need? Then it boots, runs it, and gets you the answer. For a cold start with no root FS or whatever, it can take a few seconds. But in general you use the same root FS as others, so you get the quick start. If you don’t have the code, it’s less than a second. If you already have the code of the application, it’s like 150 milliseconds for a cold start.
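
A compact sketch of that cold-start path: resolve the function’s current database entry, reuse a cached root FS when possible, download what’s missing, then boot. All helpers are invented stand-ins for the real supervisor.

```python
# Hypothetical cold-start sketch. fetch_entry, download and boot_vm are
# invented stand-ins; the caching logic mirrors the narration above.

ROOTFS_CACHE: dict[str, bytes] = {}  # shared root FS images, kept locally

def cold_start(fn_ref: str, fetch_entry, download, boot_vm):
    entry = fetch_entry(fn_ref)            # current document, upgrades applied
    rootfs_hash = entry["rootfs"]
    if rootfs_hash not in ROOTFS_CACHE:    # slow path: a few seconds
        ROOTFS_CACHE[rootfs_hash] = download(rootfs_hash)
    code = download(entry["code"])         # fast when the root FS is cached
    return boot_vm(ROOTFS_CACHE[rootfs_hash], code, entry.get("volumes", []))
```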

Anatoly Yakovenko (18:53):

Got it. And the coordination to decide where to start a particular instance, does that occur over the underlying chain, like Solana or Ethereum or whatever?

Jonathan Schemoul (19:08):

Again, that’s something that we’re working on. At the start it’s on the cloud load balancer, so the cloud load balancers are semi-centralized for that. The idea is that each micro VM node that starts running one will register a message, which is a database entry with a reference saying, “I am running this one.” And then the cloud load balancer looks at the uptimes of the available micro VMs and says, “Okay, this micro VM has it ready, I’m forwarding the request to it.”

Jonathan Schemoul (19:40):

And if there is none, it will just route the request to a random one that has good uptime. That one will then tend to be chosen automatically the next time, because it is already serving the code. If there are a lot of requests, it will provision multiple ones.

Anatoly Yakovenko (19:59):

Interesting. Got it. And you anticipate that, if the underlying chain is cheap and fast enough, you’ll be able to move the coordination on-chain: start this instance, pull this volume. This would be really cool with Arweave-backed storage volumes, because you could almost see the lifetime, the life cycle of the application as its business logic evolves, right? That state is very useful to developers, being able to go back to a checkpoint effectively at any given time too.

Jonathan Schemoul (20:38):

Well, right now we are using our own storage engine, which is IPFS compatible. But in the future we will allow choosing other storage engines, and we will also develop gateways to Arweave, Filecoin and others.

Anatoly Yakovenko (20:53):

Super cool. I used to work at Mesosphere, I don’t know if you’ve heard of them, like D2iQ. It was kind of a Kubernetes competitor, trying to build this decentralized operating system using Mesos as the job queue engine. There are a lot of similar challenges there, and it’s really cool that you guys are building this for decentralized web applications, hosted in the real cloud, the mythical cloud.

Jonathan Schemoul (21:28):

Yeah. Well, there’s a saying: there is no cloud, it’s just other people’s computers. Here it’s really other people’s computers. And it’s pretty good, because then you don’t trust those computers, because you know they’re other people’s computers.

Anatoly Yakovenko (21:44):

How do you guys ensure the integrity of the computation itself? How do I know that the virtual machine, the execution environment that’s running, isn’t malicious?

Jonathan Schemoul (21:54):

It’s a really good question. There are multiple questions there. How can I ensure that this computation isn’t returning a bad result because it knows who is on the other end? The load balancing system ensures that you don’t really see who is on the other end, so you don’t know who is making the request, and you don’t know if it’s a quality control call or a real call. That goes back to your question about testing the application. And there is another question there, which is the question of secrets, because you might need secrets, if you want to do push notifications based on a smart contract event on Solana, let’s say, because that’s something that we are working on right now, thinking about it.

Anatoly Yakovenko (22:48):

That’s super cool.

Jonathan Schemoul (22:48):

So you would need secrets. You would need to store a secret to be able to go back to this device and send the device a notification. So you either store secrets in the local storage of the instance, but then if the instance dies you can’t get them back, or you try to have shared secrets between multiple hosts. We are working on it; we don’t have a complete answer yet. What we are working on is using threshold cryptography, so that multiple hosts defined by the developer can hold these secrets. And then you go back to a question of trust, which is problematic.

Anatoly Yakovenko (23:30):

By threshold cryptography, do you mean something like MPC to compute, or are you guys thinking BLS or Schnorr aggregation?

Jonathan Schemoul (23:42):

More like you encrypt something that can be decrypted by multiple private keys.

Anatoly Yakovenko (23:47):

Got it.

Jonathan Schemoul (23:48):

And then if they want to send a message, it needs to be signed by at least x of y.

Anatoly Yakovenko (23:54):

Right. Got it.

Jonathan Schemoul (23:57):

Because these micro VM instances can also send messages on the network. Those messages will be database entries that might in the end also land on-chain, using oracles or whatever, because these micro VMs can read on-chain data, and the idea is that we are working so they can also write on-chain as well. So then you might need some kind of trust somewhere. One developer could say, I trust this host, this host, and this host, but they need to do that calculation at least three times, let’s say. But it’s a bit problematic and we are still working on it. It’s not finished yet.
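
The episode doesn’t pin down the exact scheme, but as one concrete illustration of “a secret that x of y hosts can recover,” here is a minimal Shamir secret sharing sketch over a prime field. It is an assumption-laden example, not aleph.im’s actual design.

```python
import secrets

# Minimal Shamir secret sharing: split a secret so any k of n hosts can
# reconstruct it (the "x of y" idea above). Illustrative only; the real
# aleph.im scheme is not specified in this episode.

PRIME = 2**127 - 1  # a Mersenne prime large enough for a 16-byte secret

def split(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    total = 0
    for i, (xi, yi) in enumerate(shares):  # Lagrange interpolation at x=0
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = split(123456789, n=5, k=3)   # five hosts, any three suffice
assert reconstruct(shares[:3]) == 123456789
```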

Anatoly Yakovenko (24:40):

That’s what I mean, that’s a really hard problem.

Jonathan Schemoul (24:41):

Yeah.

Anatoly Yakovenko (24:43):

Really cool. Yeah, the secrets thing is really challenging. I guess, what’s your vision for this? You guys are tackling some really hard problems. Say you get all of them done in the next year.

Jonathan Schemoul (25:01):

I hope so.

Anatoly Yakovenko (25:06):

What happens then? What is the vision for Aleph?

Jonathan Schemoul (25:08):

Well, here we are only speaking about a few crypto issues. We aim at something bigger than just the crypto ecosystem. What we really want to do is decentralize the web, so getting way, way bigger, that’s the goal. We are working with a few bigger partners, we are part of the Ubisoft Entrepreneurs Lab, for example. We want to have a lot of hosting partners in the game that start providing resources. I want it to be as easy as spinning up an AWS server or whatever; you would just spin up VMs on the aleph.im network. I want it to be as easy as using Firebase, using Amazon Lambda, et cetera.

Jonathan Schemoul (25:51):

And we have another big project going on, which is the indexing on Solana, where we are indexing data for a few protocols, currently Raydium, and we might have others live soon. Well, I can say the names: we are working a lot on Orca and on Port Finance right now, and a lot of others, actually, that I can’t really talk about yet. The idea is to have all this data available, have all these data feeds coming up so that you can have events based on them, and also do off-chain computation and things like that.

Jonathan Schemoul (26:29):

I really want DeFi to be totally resilient, because until it’s totally decentralized, you can stop DeFi; when it’s totally decentralized, you can’t. And if only the smart contracts are decentralized, you can still stop it.

Anatoly Yakovenko (26:48):

Yeah. That’s definitely a fair point. I think the UX issues around building things like push notifications and all these other pieces are really hard to overcome for a decentralized project, because who’s going to host those servers, right, to connect to mobile and everything else? Yeah, you guys have a lot of work cut out for you, and it’s pretty exciting. What do you think is missing? If somebody else was building another piece that you think is missing in Web 3, what would it be?

Jonathan Schemoul (27:26):

What is missing today in Web 3 is ease of use for all this. We are trying to tackle it, but we have so much on our end. So this is a big issue: ease of use for developers, ease of use for users. Well, Phantom is already doing great work on that end on Solana. But also, I think there is some kind of break between DeFi and the real world: if you want to move money into the real world, it gets hard really fast, because of complications that have been put in place by regulators, by banks, by whoever. If we could just make all these parts simpler, it would be great. Some kind of link between FinTech and crypto that would work everywhere in the world, including Europe, the USA, et cetera, would be great. There are a lot of people working on it, but that’s something that is missing as well.

Anatoly Yakovenko (28:28):

Yeah. Identity and having those easy ramps is still hard. What about DNS? Just straight-up resolving, do you think that’s tackleable from a Web 3 perspective?

Jonathan Schemoul (28:45):

The issue is the way DNS is done. The DNS protocol is great, but it implies centralization points, a lot of centralization points, which are problematic. So you would need another standard on top of DNS. But if you have another standard, then you have the issue that the network right now is not built for it: the browsers don’t understand it, the operating systems don’t understand it, et cetera. We would need gateways for that. I think it’s doable, definitely doable, but it’s a lot of work. And you would need multiple root servers, even virtual root servers, like what you said, a local DNS server that would resolve your requests. It could work.

Jonathan Schemoul (29:38):

If Let’s Encrypt could understand it in the same way, it would work. Or we could even have something different from the root certificates we have today, because with blockchains we already have private keys, we already have signatures. So if you sign your content with your private key, then you can verify it on the other end, and you don’t really need all these chains of certificates that exist today. That could also be another solution, but it would need another path, because right now we have root certificates, child certificates, et cetera, and it all goes back to a central authority. The whole DNS and certificate system today runs on authority. With blockchain we are trying to remove authorities.

Anatoly Yakovenko (30:33):

Yeah. Do you guys see this as staying developer facing, or maybe someday becoming client facing, where I’d want these decentralized applications running for me, kind of my own instances? Or is it always going to be, here I am, team Orca, go to this domain as a user?

Jonathan Schemoul (30:56):

It’s a good question as well. It’s always the issue between hosted components and locally run components, and we’re kind of pragmatic on that. At the start I would really like it if everything ran inside my browser and everything worked; that would be great. In reality, you have mobile phones, you have tablets, you have computers, you have a whole range of devices that can’t be running all the time. So a real peer-to-peer application can’t really work that well, unless you go and say, “Okay, while I am away, please send it to my friend, who will forward the data for me,” et cetera.

Jonathan Schemoul (31:40):

Where blockchains really help is that we have an authority that you can trust, the blockchain, which can hold data for you, and that data can even be encrypted for you, or stored on aleph.im, whatever, so that only you can decrypt it. I think a mix of the two would be good: self-hosted data and remotely hosted data on the decentralized cloud, a good mix of the two. And the efforts by the libp2p team, with JavaScript libp2p, and a few others like that, help, because once you have access to a peer-to-peer network directly from your browser, you can cut out middlemen, you can cut central authorities, et cetera, with the blockchain serving as the central authority.

Anatoly Yakovenko (32:28):

What kind of loads have you guys seen, or been able to test, in terms of user requests per second, WebSocket connections per second?

Jonathan Schemoul (32:39):

It depends. Per server it’s not that much of an issue, because the micro VM supervisor just forwards the requests to the underlying software. If you don’t choose a local persistent volume, the supervisor can run as many instances of your program as needed, so you can spawn multiple ones even inside the same supervised cluster. And then the network, if it sees that one host has issues handling the request load, can launch new ones.

Jonathan Schemoul (33:18):

I don’t think there is really a limit on requests per second there, so it’s not really the issue that we have. On the database part, it’s the same: if you access one API server and send it 500,000 requests per second, it will go down, because it’s one server. If you target multiple API servers, you are good. That’s also where the decentralized load balancing helps, because if you use a cloud load balancer, obviously even that cloud can go down. But if you contact the peer-to-peer network to know which hosts can answer, then you can contact multiple hosts. And all our core channel nodes, there are currently 54 of them, are also API servers that users can connect to to get the data, which will be certified by our core channel nodes.
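
The client-side consequence is simple to sketch: instead of depending on one API server, shuffle the node list and fail over until one answers. The endpoints below are placeholders, not the live 54-node set.

```python
import random
import urllib.request

# Sketch of client-side failover across core channel node API servers.
# The endpoints are placeholders for illustration.

CORE_CHANNEL_NODES = [
    "https://node1.example.org",
    "https://node2.example.org",
    "https://node3.example.org",
]

def query_network(path: str) -> bytes:
    nodes = CORE_CHANNEL_NODES[:]
    random.shuffle(nodes)          # spread load across the API servers
    for base in nodes:
        try:
            with urllib.request.urlopen(base + path, timeout=5) as resp:
                return resp.read()
        except OSError:
            continue               # node down or unreachable: try the next
    raise RuntimeError("no API server reachable")
```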

Anatoly Yakovenko (34:10):

Cool. As a whole, do you have an idea of how many users per second, humans per second, you guys have served at peak times?

Jonathan Schemoul (34:21):

We don’t, because we don’t store metrics currently. We should. We don’t have it because we didn’t want to have any kind of logs on the users, but we should add it. That’s actually a good point; we will.

Anatoly Yakovenko (34:37):

Yeah. I mean, I think you’ve got to be really aware of privacy and how that impacts some applications. But it’s really interesting to see how this works. Caching is another one of those things: basically having a distributed cache around the world for often-queried data. And this is an issue that I think doesn’t have a good solution in Web 3 right now. You do all this work, set up a purely thin client that loads from code and only talks to the chain, and then you’ve got to go fetch assets. And if you’re using centralized … yeah, they can basically inject whatever they want.

Jonathan Schemoul (35:25):

Yeah, that’s the main issue. And the good part is that if you also randomize where the users’ requests go, then if there is one bad actor, it will only inject bad data once in a while, and you don’t even know where. Once there is quality control, it will be detected, so that can also be a solution. It’s not a silver bullet either, but it can definitely help. For Solana, what we are doing right now, for Raydium for example, is that we have an indexer that talks to multiple Solana RPCs, gets the transaction history, stores it inside LevelDB inside the micro VM, and then indexes the data.

Jonathan Schemoul (36:09):

Then we can get data on the pools’ latest trades and things like that. The idea is that if there are too many requests on one indexer, it will start another indexer, and another, et cetera, so that when you make a request, it reroutes you randomly to one of multiple hosts that have the same index.
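
The fetch step of such an indexer can be sketched against the standard Solana JSON-RPC API: `getSignaturesForAddress` is a real RPC method, while the address below (the system program) and the in-memory dict standing in for the LevelDB store are just example choices.

```python
import json
import urllib.request

# Sketch of an indexer's fetch step over Solana JSON-RPC. The method is
# standard; the address is just an example, and the dict stands in for
# the LevelDB store inside the micro VM.

RPC_URL = "https://api.mainnet-beta.solana.com"  # use several RPCs in practice

def fetch_signatures(address: str, limit: int = 100) -> list[dict]:
    body = json.dumps({
        "jsonrpc": "2.0", "id": 1,
        "method": "getSignaturesForAddress",
        "params": [address, {"limit": limit}],
    }).encode()
    req = urllib.request.Request(
        RPC_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["result"]

local_index: dict[str, dict] = {}  # stand-in for LevelDB
for sig in fetch_signatures("11111111111111111111111111111111"):  # system program, as an example
    local_index[sig["signature"]] = sig  # index by transaction signature
```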

Anatoly Yakovenko (36:28):

How fast is that?

Jonathan Schemoul (36:31):

Not fast enough currently. Well, it’s fast enough for Raydium.

Anatoly Yakovenko (36:35):

Okay.

Jonathan Schemoul (36:36):

It works really well.

Anatoly Yakovenko (36:40):

Raydium gets a ton of hits. I mean, some of their IDOs have seen half a million requests per second-

Jonathan Schemoul (36:48):

Yeah. So for the Raydium data, it handles it well, all the trades, whatever, it handles it pretty well. We don’t fall behind blocks in the indexing, so it works well. For Serum it’s a bit more problematic, because you need to watch the event queue all the time. I really hope they will have some kind of feed in the future; I think they are working on it. That would really help us get history even when we aren’t watching their event queue.

Anatoly Yakovenko (37:23):

Yeah. So not half a million per second, half a million total, which is quite different, but yeah, they see some really good traffic.

Jonathan Schemoul (37:30):

Yeah.

Anatoly Yakovenko (37:32):

Cool. I mean, that’s really cool. I think the really hard part in designing these systems is, one, the problem is difficult, but then, once you build the first version and start hitting real traffic, there are a lot of parts that fit together that break under load. So what is your debugging like? How do you guys actually monitor and debug, like PagerDuty? What do you guys use as a team?

Jonathan Schemoul (38:01):

Right now our team is still small, but we are growing a lot. Right now we are like 10 developers; a few months ago we were only three; a year ago I was alone. So we are growing really fast and we are putting all these things into place. Right now everyone monitors and checks what happens, and it helps. There is Hugo, who is on the micro VM side; Ali is mostly on the indexer side; myself, I can cover everything. But we are putting the real processes in place right now, because we are a growing startup, so it takes time to get everything in place.

Anatoly Yakovenko (38:43):

Yeah, for sure. Do you envision a PagerDuty team for this?

Jonathan Schemoul (38:48):

Yes. I think we will need one. Once we have more applications using it, we will need one. So yes, if you have advice on that, I’m really happy to get it.

Anatoly Yakovenko (39:00):

I mean, it’s just part of life. It’s not complicated, it’s just work. The response team, I think, is a difficult thing to set up in a decentralized community. You guys are building a decentralized network with providers that are supplying hardware and all this other stuff, and those are the folks that we found to be really responsive and to have a lot of stake in growing this. How do the economics work for all the people actually supplying the hardware and bandwidth, et cetera?

Jonathan Schemoul (39:36):

Again, the resource node economics aren’t live yet; we are working on them. The core channel node economics have been there for about a year now, and they work well. For a core channel node, you need to have 200,000 Aleph to start a node, and 500,000 Aleph staked on the node so that it can start to run. Then all the node operators get a share of a global daily envelope for all the nodes, and all the stakers get a part of the stakers’ envelope. The more nodes are active, the bigger the envelope for stakers is. But each node will earn a bit less if there are more nodes, because it’s a global envelope. So it incentivizes stakers to grow the number of active nodes. That’s for the core channel nodes.
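
To make the envelope arithmetic concrete: with a fixed daily operator envelope shared by all active nodes, each operator earns less as nodes are added, while the stakers’ envelope grows with the node count. The envelope sizes below are made-up numbers; only the 200,000 and 500,000 Aleph requirements come from the conversation.

```python
# Illustrative arithmetic for the envelope model described above.
# Envelope sizes are invented; the stake requirements are from the episode.

OPERATOR_STAKE = 200_000   # ALEPH an operator needs to start a node
BACKING_STAKE = 500_000    # ALEPH that must be staked on it to activate

def daily_rewards(active_nodes: int, total_staked: int,
                  operator_envelope: float = 15_000,       # hypothetical ALEPH/day
                  staker_envelope_per_node: float = 500):  # hypothetical ALEPH/day
    per_operator = operator_envelope / active_nodes        # shrinks as nodes grow
    staker_pool = staker_envelope_per_node * active_nodes  # grows with node count
    per_staked_token = staker_pool / total_staked
    return per_operator, per_staked_token

# 54 nodes, each fully backed with 500k ALEPH staked:
print(daily_rewards(active_nodes=54, total_staked=54 * BACKING_STAKE))
```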

Jonathan Schemoul (40:25):

For the resource nodes, there are two ways to get storage or computing on the network. One is already live: hold X amount of Aleph and get that amount of storage; hold X amount of Aleph and get the ability to start one VM with X megabytes of RAM, X virtual CPUs, et cetera. And then the multipliers and all that give you the total count of micro VM instances that can be running on the network based on your balance. The good part with that is that a partner project could use a lending protocol to borrow Aleph while depositing their own token to get service. They would get the service for free, just paying interest in their own token inside the borrowing protocol.

Anatoly Yakovenko (41:14):

Got it.

Jonathan Schemoul (41:15):

So that’s a way for protocols to get it, but it’s quite expensive, because they don’t directly pay for it: for this way of using it, the Aleph.im network is paying for them from the incentive pool, which right now is one fifth of the supply. We are changing the economics a bit in the next few months; it will be nearly half of the supply that is dedicated to paying for that. Because since this use locks a part of the supply, you can release a bit more into circulation. So that’s the hold-X-Aleph-tokens model.

Jonathan Schemoul (41:51):

And then there is another way that isn’t developed yet, for which we will likely use Solana, because it’s fast enough for micropayments in that area. It’s pay-per-action: pay X Aleph per gigabyte per month. You as a provider can say, “I am okay to be paid at least that much,” and users will say, “I want my data to be replicated at least four times, and I’m okay to pay at most that much for it.” Then the payment gets divided among those who provide the service and is made as micropayments. And the same for the micro VMs: you pay per CPU per hour, et cetera.
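
The matching rule sketches out naturally: a user names a replication factor and a budget, and providers whose minimum price fits the per-copy split share the micropayment stream. Field names and numbers are invented for illustration.

```python
# Sketch of the pay-per-action matching described above. Field names and
# prices are invented; units could be ALEPH per gigabyte per month.

def match_providers(providers: list[dict], replication: int, budget: float):
    per_copy = budget / replication  # equal micro-payment stream per replica
    willing = [p for p in providers if p["min_price"] <= per_copy]
    if len(willing) < replication:
        raise ValueError("not enough providers at this price")
    chosen = sorted(willing, key=lambda p: p["min_price"])[:replication]
    return [(p["id"], per_copy) for p in chosen]

providers = [
    {"id": "host-a", "min_price": 0.8},
    {"id": "host-b", "min_price": 1.0},
    {"id": "host-c", "min_price": 1.2},
    {"id": "host-d", "min_price": 2.5},
]
# "Replicate my data at least 4 times, pay at most 10 per GB-month":
print(match_providers(providers, replication=4, budget=10.0))  # 2.5 each
```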

Anatoly Yakovenko (42:32):

Got it. That’s really cool. Well, this has been awesome, having you on the show. I mean, we got into the really deep, deep details of how Aleph works, so I had a blast, because it really reminds me of the time I spent working on this stuff for centralized systems. It’s really cool to see it built from the ground up for decentralized ones as well. So I appreciate the work you’re doing. Thank you, Jonathan.

Jonathan Schemoul (43:00):

Thank you very much for having me on the call. It was really great talking with you.

Anatoly Yakovenko (43:04):

Awesome. And good luck to you guys. I mean, startups are blood, sweat and tears, so just keep working on the vision. You’ll get there.

Jonathan Schemoul (43:11):

Thank you very much.

Anatoly Yakovenko (43:13):

Cool. Take care.
