Name: Socratic Seminar

Topic: Agenda in Google Doc below

Location: Bitcoin Sydney (online)

Date: May 19th 2020

Video: No video posted online

Google Doc of the resources discussed: https://docs.google.com/document/d/1hCTlQdt_dK6HerKNt0kl6X8HAeRh16VSAaEtrqcBvlw/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

# 0xB10C blog post on mempool observations

https://b10c.me/mempool-observations/2-bitmex-broadcast-13-utc/

These 3-of-4 multisig inputs don’t have compressed public keys so they are pretty large in current transaction sizes. They could potentially be way smaller. I thought about what the effects of the daily broadcast are.

You are saying that they don’t use compressed public keys. They use full 64 byte public keys?

65 bytes yes. With uncompressed public keys, an average input is bigger than 500 bytes. I looked at the distribution over time. I found out that they process withdrawals at 13:00 UTC. I looked at the effects this broadcast had. If you imagine really big transactions being broadcast at the same time at different fee levels. They take up a large part of the next blocks that are found. I clearly saw that the median fee rate levels rose quite significantly. They had a real spiky increase. Even the observed fee rates entering the mempool, there was the same increase which is a likely reaction to estimating a higher fee so users pay a higher fee. There are the two charts. The first one is the median estimated fee rate, what the estimators give me. The second one is the fee rate relative to midnight UTC. There is this clear huge increase visible at the same time BitMEX does the broadcast. The fee rate is elevated for the next few hours.
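
To make the size claim concrete, here is a rough back-of-the-envelope calculation (my own illustrative sketch, not from the blog post) of a legacy 3-of-4 P2SH multisig input with uncompressed versus compressed keys. The exact byte counts vary a little with signature length:

```python
# Approximate size of a legacy 3-of-4 P2SH multisig input. Assumes ~72-byte
# DER signatures; real inputs vary by a few bytes.

def p2sh_multisig_input_size(pubkey_len, n_keys=4, n_sigs=3, sig_len=72):
    # redeem script: OP_3 <pubkey> x4 OP_4 OP_CHECKMULTISIG
    redeem_script = 1 + n_keys * (1 + pubkey_len) + 1 + 1
    # scriptSig: OP_0 <sig> x3 <push of redeem script> (PUSHDATA1 or PUSHDATA2)
    push_redeem = (2 if redeem_script <= 255 else 3) + redeem_script
    script_sig = 1 + n_sigs * (1 + sig_len) + push_redeem
    # outpoint (36) + scriptSig length varint (~3) + scriptSig + sequence (4)
    return 36 + 3 + script_sig + 4

print("uncompressed keys:", p2sh_multisig_input_size(65), "bytes")  # ~533
print("compressed keys:  ", p2sh_multisig_input_size(33), "bytes")  # ~404
```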

The effect you are pointing to there is this cascading effect where everyone else starts estimating the fee higher than they theoretically should. Would that be accurate in terms of how to explain that?

The estimators think “We want to outbid the transactions that are currently in the mempool.” If there are really big transactions in the mempool… For example these BitMEX transactions pay a much higher fee. I estimated, based on the assumption of a 4 satoshi per byte increase over 8 hours, that around 1.7 Bitcoin in total fees are paid additionally due to this broadcast every day. This is about 7 percent of the total transaction fees daily.
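
As a sanity check on that figure, here is a rough reconstruction of the arithmetic. The block count and average block size are my own assumptions for illustration, not numbers taken from the post:

```python
# Roughly 8 hours of block space paying an extra 4 sat/vbyte.
SAT_PER_BTC = 100_000_000
blocks_in_8_hours = 8 * 6           # ~6 blocks per hour on average
vbytes_per_block = 900_000          # assumed average block usage in vbytes
fee_bump = 4                        # sat/vbyte

extra_sat = blocks_in_8_hours * vbytes_per_block * fee_bump
print(f"extra fees ≈ {extra_sat / SAT_PER_BTC:.2f} BTC per day")  # ≈ 1.7 BTC
```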

Happy days for the miners. Potentially even more now during a high fee time?

I haven’t looked at what the current effect is. I also noticed that the minimum fee rate that a block includes spikes as well. Transactions with very low fee rates won’t get confirmed for a few hours. I talk about the improvements that can be made. One thing I mention is these uncompressed public keys started to be deprecated as early as 2012. By 2017 more than 95 percent of all keys added to the blockchain per day were compressed. The 5 percent remaining are mainly from BitMEX. For example transaction batching could be helpful as well. In the end the really big part is if you pay to a P2SH multisig address which you can only spend from if you supply a 3-of-4 redeem script then you always have these big inputs that you need to spend. There might be other ways to decrease the size there. I am not sure it is something doable for BitMEX because they have really high security standards. One thing that could be done in theory would be to say “We let users deposit on a non multisig address and then transfer them into a colder multisig wallet and have very few UTXOs lying in these big inputs.” Obviously using SegWit would help as well, reducing the size as the script is moved into the witness. BitMEX mentioned they are working on using P2SH wrapped SegWit and they estimate a 65 percent reduction there. Potentially in the future BitMEX could explore utilizing Schnorr and Taproot. The MuSig scheme, I don’t know if that would work for them or not. Combine that with batching and you would have really big space savings. It might even increase BitMEX’s privacy. They use vanity addresses so everything incoming and outgoing is quite visible for everyone who wants to observe what is going on there. There is definitely a lot to be done. They are stepping in the right direction with SegWit but it might be possible for them to do more. I spoke to Jonny from BitMEX at the Berlin Socratic and they have hired someone specifically for this reason, to improve their onchain footprint over the next year or so. That is good news. Any questions?
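
The quoted 65 percent reduction can be roughly sanity checked with the same kind of back-of-the-envelope vbyte arithmetic. This is my own illustrative comparison of the current uncompressed-key P2SH input against a compressed-key P2SH-wrapped P2WSH input, not a figure from BitMEX:

```python
# Rough vbyte comparison of a 3-of-4 multisig input: legacy P2SH with
# uncompressed keys vs P2SH-wrapped P2WSH with compressed keys.

def legacy_p2sh_input_vbytes():
    redeem = 1 + 4 * (1 + 65) + 2               # OP_3 <65B pk> x4 OP_4 OP_CMS
    script_sig = 1 + 3 * (1 + 72) + 3 + redeem  # OP_0, 3 sigs, PUSHDATA2 redeem
    return 36 + 3 + script_sig + 4              # no witness: 1 byte = 1 vbyte

def p2sh_p2wsh_input_vbytes():
    witness_script = 1 + 4 * (1 + 33) + 2       # same 3-of-4, compressed keys
    script_sig = 1 + 34                         # push of the 34-byte P2WSH program
    base = 36 + 1 + script_sig + 4              # counted at full weight
    witness = 1 + 1 + 3 * (1 + 72) + 1 + witness_script  # discounted 4x
    return base + witness / 4

legacy, wrapped = legacy_p2sh_input_vbytes(), p2sh_p2wsh_input_vbytes()
print(f"{legacy} vbytes vs {wrapped:.0f} vbytes, "
      f"saving ≈ {100 * (1 - wrapped / legacy):.0f}%")   # roughly two thirds
```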

The way the wallets react to this giant transaction in the mempool is they increase their fee rate. It slowly drops after that. Do you think the wallets are reacting correctly in this situation? Is the algorithm they use to choose the fees adapting to this giant transaction efficiently or is it inefficient?

I am not sure. Typically if you estimate fee rates you say “I want to be included in the next block or in the next 3 blocks. Or 10 blocks maybe.” If that is a constraint of your use case or your wallet then maybe that is the right reaction. If you want to pay as low fees as possible it is maybe not the correct reaction.

It is a really great blog post. It strikes a nice balance between really going into the detail but actually explaining things in a way that is easy for the layman to understand. 

# Fabian Jahr blog post on “Where are the coins?”

https://medium.com/okcoin-blog/btc-developer-asks-where-are-the-coins-8ea70b1734f4

I remember running a node for the first time and doing what it said in the tutorial to run `gettxoutsetinfo`. It gave me a really weird number for the total amount. It was not clear and there was no detailed information on exactly what went where. I found this [post](https://bitcoin.stackexchange.com/questions/38994/will-there-be-21-million-bitcoins-eventually/38998#38998) from Pieter Wuille pretty late. He explained it similarly to how I explain it and he also gives particular numbers. He doesn’t talk about `gettxoutsetinfo`. He talks about the total amount, how many coins will there ever be in total. It is a different view and this is the reason why I didn’t find it in the beginning. With my post I tried to add even more information so I go over the numbers of the very recent state of the UTXO set. I used the work that I did with the CoinStatsIndex, an open [pull request](https://github.com/bitcoin/bitcoin/pull/18000). I wanted to link to the different parts of the codebase that are implementing this or explain why things work the way they do. It is also on my [website](https://fjahr.com/posts/where-are-the-coins/). OkCoin reposted it and this got a little bit more traction. A lot of these links go to the codebase. My article is aimed at someone who is running a node. I hope with this information I can take them to get more into the code and explore why things work the way they do and form that mindset of going deeper than just running a node, the same way that I did over the last couple of years.
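
For anyone who wants to reproduce the starting point of the post, here is a minimal sketch of calling that RPC over JSON-RPC. It assumes a local bitcoind with RPC credentials configured; adjust the URL and auth for your setup, and note the call can take a while on mainnet:

```python
import requests

def rpc(method, params=None, url="http://127.0.0.1:8332",
        auth=("rpcuser", "rpcpassword")):
    payload = {"jsonrpc": "1.0", "id": "socratic", "method": method,
               "params": params or []}
    return requests.post(url, json=payload, auth=auth).json()["result"]

info = rpc("gettxoutsetinfo")
# total_amount is the sum of all unspent outputs: less than the subsidy total
# you might expect because of unclaimed rewards, unspendable outputs and so on.
print(info["height"], info["txouts"], info["total_amount"])
```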

At what point did you come across the Pieter Wuille post?

It was almost done. I had produced some bugs in my code. I was running the unclaimed miner reward code to get the exact number and make sure it really adds up to 100 percent. The rest of the text was mostly written. It was great that I found it because there was a list that I copied over from his stuff. I am not sure I would have found this information otherwise. I could have found in which blocks things went missing but Pieter at the time was maybe in contact with people to identify bugs from the miners. That was great and definitely helped me finish up much more quickly.

I think it is a really cool post. 

These are things Fabian that you stumbled across whilst contributing to Core right? This isn’t you using the Core wallet and thinking “I don’t know how this works. I need to research this”?

Initially, when I saw it the first time, that was the case. I was confused by it and I wanted to know why there is this weird number but I didn’t have the tools to analyze it. I accepted the incomplete information that I found on Bitcointalk and went on. At some point I realized that what I am doing for CoinStats is the same stuff that I need to know to analyze what this exact number is. It is a different view on the same statistic. I realized I can actually write an article because I already have all the knowledge and I am working with the code. I can just adapt it a little bit in order to get these exact numbers. That is what motivated me to actually write a blog post. It is not something that I do very often.

I recall seeing some discussion on Twitter where Matt Corallo was talking with one OG about the time he intentionally claimed one less satoshi than the full block reward as a miner in the early days.

With BIP 30 you have this messy solution where overriding blocks are hardcoded into the codebase. Is it possible at this point to actually fork back to an earlier block in 2010? Or do these rules and other rules essentially add a checkpoint at some point after that?

That’s what BIP 34 does. I don’t really remember how exactly that works. There are measures here but I am not exactly sure how it would play out if we were to go back that far. There are a lot of comments above the [code](https://github.com/bitcoin/bitcoin/blob/b549cb1bd2cc4c6d7daeccdd06915bec590e90ca/src/validation.cpp#L2014).

It says we will enforce BIP 30 every time unless the height is one of these particular heights and the hash is one of these two particular blocks. If you do a re-org as far back as you like you won’t be able to do anything apart from those two blocks as violations of it.

This rule applies to every block apart from these two blocks.
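
A minimal sketch of that exception logic, loosely following the linked validation.cpp. The exempt duplicate-coinbase blocks are at heights 91842 and 91880 on mainnet, each pinned to a specific block hash in the real code; this is an illustration, not the actual implementation:

```python
# BIP 30: a block may not create a transaction whose txid duplicates an
# existing unspent txid. The check is skipped only for the two historic
# blocks that themselves duplicated earlier coinbases before the rule existed.

BIP30_EXCEPTION_HEIGHTS = {91842, 91880}   # see the linked validation.cpp

def must_enforce_bip30(height, block_hash, exception_hashes):
    """Return True if the duplicate-txid check applies to this block."""
    is_exception = (height in BIP30_EXCEPTION_HEIGHTS
                    and block_hash in exception_hashes)
    return not is_exception
```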

# PTLCs (lightning-dev mailing list)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002647.html

This is the mailing list post from Nadav Kohen summarizing a lot of the different ideas about PTLCs.

Nadav is keen on working on PTLCs now despite not having Schnorr. There is this discussion about whether it is worth it in the case of discreet log contracts where you have one version that doesn’t use adaptor signatures and you have the original one that uses some kind of Lightning construction where you have a penalty transaction. The adaptor signature one gets rid of the penalty. You have an oracle who is releasing a different secret key depending on the outcome of the bets. It is Liverpool versus Manchester. The oracle releases one secret key based on Liverpool winning, another one on Manchester winning and another one if they tie. It is really simple to do a bet on this kind of thing if you have the adaptor based one. What you do is you both put money into a joint output, if you don’t have Schnorr you use OP_CHECKMULTISIG. Then you put adaptor signatures on each of these outcomes, tie, one team wins, the other team wins. The point, I call it the encryption key, for the signature for each of those outcomes is the oracle’s point. If the oracle releases one secret key what it means is you can decrypt your adaptor signature, get the real signature, put that on the chain. Then you get the transaction which pays out to whoever gets money in that situation. Or if they tie both get money back for example. This is a really simple protocol. The one without adaptor signatures is pretty complicated. I was just pointing out in this mailing list post it is far preferable to have the adaptor one in this case. Even though the ECDSA adaptor signatures are not as elegant as Schnorr it would probably be easier to do that now than do what the original discreet log contract [paper](https://adiabat.github.io/dlc.pdf) suggests.
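
To illustrate the flow being described, here is a toy Schnorr-style adaptor signature written over a multiplicative group mod a prime purely for readability. It is not secure, it is not the ECDSA construction discussed below, and the names are made up; real implementations use secp256k1:

```python
import hashlib, random

P = 2**127 - 1              # toy prime modulus (do not use for real crypto)
G = 3                       # toy generator

def H(*parts):              # challenge hash -> integer scalar
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1)

def keygen():
    x = random.randrange(2, P - 1)
    return x, pow(G, x, P)

def adaptor_sign(x, X, T, msg):
    """Signature on msg 'encrypted' to the point T = G^t."""
    r = random.randrange(2, P - 1)
    R = pow(G, r, P)
    e = H(R * T % P, X, msg)            # challenge commits to R+T (here R*T)
    return R, (r + e * x) % (P - 1)     # s' = r + e*x, not yet a valid signature

def verify_adaptor(X, T, msg, R, s_adapt):
    e = H(R * T % P, X, msg)
    return pow(G, s_adapt, P) == R * pow(X, e, P) % P

def decrypt(s_adapt, t):
    return (s_adapt + t) % (P - 1)      # the oracle's secret completes the signature

def verify_sig(X, T, msg, R, s):
    e = H(R * T % P, X, msg)
    return pow(G, s, P) == (R * T % P) * pow(X, e, P) % P

# The oracle commits to one secret per outcome; releasing t for the outcome
# that happened lets the winner complete the pre-agreed payout signature.
t, T = keygen()                         # oracle's secret/point for "Liverpool wins"
x, X = keygen()                         # a participant's signing key
R, s_adapt = adaptor_sign(x, X, T, "payout tx: Liverpool wins")
assert verify_adaptor(X, T, "payout tx: Liverpool wins", R, s_adapt)
s = decrypt(s_adapt, t)                 # after the oracle broadcasts t
assert verify_sig(X, T, "payout tx: Liverpool wins", R, s)
```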

Rust DLC stuff was worked on at the Lightning Hack Sprint?

Jonas, Nadav and the rest of the team worked on doing a PTLC. The latest one, they tried to do a Lightning discreet log contract. You are trying to modify rust-lightning to do something it wasn’t meant to do yet. I don’t think much was completed other than learning how rust-lightning works. It looks like it will go somewhere. The rust-lightning Slack has a DLC channel where some subsequent discussions are going on.

There was a question on IRC yesterday about a specific mailing list post on one party ECDSA. Are there different schemes on how to do adaptor like signatures with one party ECDSA?

The short story is that Andrew Poelstra invented the adaptor signature. Then Pedro Moreno-Sanchez invented the ECDSA one. When he explained it he only explained it with a two party protocol. What I did is say we can simplify this to a single signer one. This doesn’t give you all the benefits but it makes it far simpler and much more practical for use in Bitcoin today. That was that mailing list [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-November/002316.html). It seems solid to me. There might be some minor things that might change but from a security perspective it passes all the things that I would look for. I implemented it myself and I made several mistakes. When I looked at whether Jonas Nick had made the same mistakes, he didn’t. He managed to do all that in a hackathon, very impressive.

# On the scalability issues of onboarding millions of LN clients (lightning-dev mailing list)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002678.html

Antoine Riard was asking the question of scalability, what happens with BIP 157 and the incentive for people to run a full node versus SPV or miner consensus takeover.

I think the best summary of my thoughts on it, in much better words than I could put it, is John Newbery’s [answer](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017823.html). These are valid concerns that Antoine is raising. On the other hand they are not specific to BIP 157. They are general concerns for light clients. BIP 157 is a better light client than we had before with bloom filters. As long as we accept that we will have light clients we should do BIP 157.

This conversation happened on both the bitcoin-dev mailing list and lightning-dev mailing list.

This sums up my feelings on it. There were a lot of messages but I would definitely recommend reading John’s one.

The Luke Dashjr [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017820.html) was referring back to some of Nicolas Dorier’s [arguments](https://medium.com/@nicolasdorier/neutrino-is-dangerous-for-my-self-sovereignty-18fac5bcdc25). He would rather people use an explorer wallet in his parlance, Samourai wallet or whatever, calling back to the provider as opposed to a SPV wallet. John’s comment addresses that at least somewhat. It is not specific to BIP 157.

The argument is really about do we want to make light clients not too good in order to disincentivize people from using them or do we not want to do that? It also comes down to how we believe the future will play out. That’s why I would recommend people to read Nicolas Dorier’s article and look at the arguments. It is important that we still think about these things. Maybe I am on the side of being a little more optimistic about the future, that more people will still run their own full nodes, or enough people. I think that is also because everyone who values their privacy will want to run a full node still. It is not like people will forget about that because there is a good light client. I honestly believe that we will have more full nodes. It is not like full nodes will stay at the same number because we have really good light clients.

To reflect some of the views I saw elsewhere in the thread. [Richard Myers](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002702.html) is working on mesh networking and there are people trying to operate using Bitcoin who can’t run their own full node. If they are in a more developing world country they don’t have the same level of internet or accessibility of a home node they can call back to. Even there there are people working on different ideas. One guy in Africa just posted about how they are trying to use Raspiblitz as a setup with solar panels. That is a pretty cool idea as well.

This is the PGP model. When PGP first came out it was deliberately hard to use so you would have to read documentation so you wouldn’t use it insecurely. That was a terrible idea. It turns out that you scare off all the users who are not prepared to invest time. The idea that we should make it hard for them so they do the right thing, if you make it hard for them they will go and do something else. There is a whole graduation here. At the moment a full node is pretty heavy but this doesn’t necessarily have to be so. A full node is something that builds up its own UTXO set. At some stage it does need to download all the blocks. It doesn’t need to do so to start with. You could have a node that over time builds it up and becomes a full node for example, gradually catches up. Things like Blockstream’s satellite potentially help with that. Adding more points on the spectrum is helpful. This idea that if they are not doing things we like we should serve them badly hasn’t worked already. People are using random Electrum peers instead. If we’ve got a better way of doing it we should do it. I’ve always felt we should channel the Bitcoin block headers at least over the Lightning Network. The more sources of block headers we have the less chance you’ve got of being eclipsed and sybil attacked.

I recall some people were talking about the idea of fork detection. Maybe you are not able to run your own full node and you are running some kind of SPV or BIP 157 light client. It would know that there is a fork, so now I should be more careful instead of merely trusting whatever miner or node has served me that filter.
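
A hypothetical sketch of what that could look like in a light client: keep header tips from several independent sources and raise a flag when they disagree, rather than silently following whichever peer served the filters. The structure and threshold here are made up for illustration:

```python
from collections import namedtuple

Header = namedtuple("Header", "hash prev_hash height")

class ForkWatcher:
    def __init__(self):
        self.tips = {}                      # source -> best header seen

    def on_header(self, source, header):
        self.tips[source] = header

    def fork_suspected(self, depth=2):
        # Two sources reporting different tips at comparable heights, where
        # neither tip simply extends the other, is worth alarming on.
        tips = list(self.tips.values())
        for i, a in enumerate(tips):
            for b in tips[i + 1:]:
                if (a.hash != b.hash and abs(a.height - b.height) <= depth
                        and a.hash != b.prev_hash and b.hash != a.prev_hash):
                    return True
        return False

watcher = ForkWatcher()
watcher.on_header("peer1", Header("aa", "99", 640000))
watcher.on_header("peer2", Header("bb", "98", 640000))
print(watcher.fork_suspected())             # True: peers disagree at the same height
```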

There are certainly more things we can do. The truth is that people are running light clients today. I don’t think that is going to stop. If you really want to stop them work really hard on making a better full client than we have today.

I liked Laolu’s [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002685.html) too. You can leverage existing infrastructure to serve blocks. There was a discussion around how important economic weight is in the Bitcoin network and how easy it would be to try to take over consensus and change to their fork of 22 million Bitcoin or whatever.

This was what happened with SegWit2x, aggressively going after the SPV wallets. They were like “We are going to carry all the SPV wallets with us”. It is a legitimate threat. There is an argument that if you have significant economic weight you should be running a full node, that is definitely true. But everyone with their little mobile phone with ten dollars on it is maybe not so important.

One of the arguments that came up in this thread is if you make it so that the full nodes have to keep serving the blocks for the light clients that might increasingly put more pressure on the full nodes to serve the filters. That might create this weird downward spiral incentive to not run their full nodes because of the increasing demands on them.

Serving filters is less intensive than serving blocks. We’ve already got that. We are already there.

If you don’t want to serve the filters you don’t have to serve them. It is a command line option, it is off by default.

How easy is it to run your full node and pair it with your own phone? It looks like we are looking for an enterprising entrepreneur to create a full node that is also a home router, a modem and a Pi-hole. Does anyone have any thoughts on what could be done to make that easier?

Shouldn’t we have protocol level Bitcoin authentication and encryption first so we can guarantee that we are connecting to our own node? That seems like the obvious thing.

I am maybe one of the few people who doesn’t buy the economic weight argument about these light clients. The thinking goes that if a large proportion of the economy are not verifying their own transactions the miners could just change the rules. Create Bitcoin Miner’s Vision with 22 million Bitcoin and everyone will just go along with it. Even if a large amount of economic weight is with that I fail to see how people could just go along with it. It is going to be noticed very quickly. Light clients would have to switch. It would be a bad scenario if a lot of infrastructure is running on light clients. But I don’t think it is going to change the definition of what is Bitcoin. I can’t see it happening. Of course if no one is running full nodes then yeah. As long as it is easy to run a full node, you can run it at home on your Raspberry Pi. Some people decide not to because of their phone or they don’t have the right hardware or internet connection. I can’t see how this could change what is Bitcoin in any scenario. What Bitcoin is will always be what bitcoind full nodes think is Bitcoin. If you have some other coin on some other fork of it, it is not going to be Bitcoin and it is not going to be worth anything.

That was literally what SegWit2x was about. That was literally what they tried to do. They said “We are taking Bitcoin in this direction and we’re taking the light clients with us because we’ll have the longer chain.” It is not a theoretical that sounds unlikely, that was literally what the plan was. The idea was that it would force enough people to flip and that would be where the economic majority are now. It would lever the small number of people running full nodes into switching to follow that chain as well because that was going to be where everyone was. Your useless old Bitcoins weren’t going to be worth anything. This is not some theoretic argument, this was literally what happened. Saying this is not going to happen again because reasons is probably not going to carry weight for at least ten years until people have forgotten about that one.

But would it work? If there were more light clients is it more likely to work? I don’t see it.

It would absolutely work. All the light clients would follow it. Then your exchange or service doesn’t work anymore if you are not following the same chain as all the light clients talking to you. If the light clients are the economic majority, that is where you have to go.

It becomes a question of economics then. I don’t see it actually playing out that way. The reason is the people who decide what is Bitcoin is not really the owners of Bitcoin or the people in the P2P network. It is more like the people who have US dollars who want to buy Bitcoin. They are the ones who decide what it is at the end of the day. The exchange is going to offer them two different ticker symbols for these two different Bitcoins and they are going to decide which one is the real one. They are not going to decide based on which one the light clients are following. I don’t see how having light clients follow you is going to help you too much in deciding what is the real Bitcoin.

If Bitcoin was used for real daily economic activity which it isn’t today then you’ve got two choices. This change happens, the miners push this through and the light clients go one way. Say you’re relying on Bitcoin income and people sending you Bitcoin. You have two choices. All the light clients don’t work for your customers or you change, follow the miners and go with the economic majority. You may rail against it, “This is not real Bitcoin”, it doesn’t matter. People on the margins will go with the majority. If the economic majority goes one way you will go that way because otherwise you will suffer. At the moment we have this luxury that most people are not living day to day on Bitcoin. They could sit it out and wait to see what wins. But if there is real monetary flow with Bitcoin then that isn’t necessarily the case and you can place time pressure on people. The economic majority is significant then. The time it would take to get a response, to get all the SPV nodes to flip across or reject whatever blocks could be significant. It could be long enough that people will cave. Given we’ve seen serious actors try this once and a significant amount of mining power threaten to try this I think it is a real scenario.

Let’s say in that scenario SPV wallet developers think how can I let users select which chain they wish to be on. At that point the SPV wallet developers would have to quickly update their code to help their users select the correct chain, not this fork 22 million Bitcoin chain. Theoretically it is possible but it is not so easy to quickly put in place to help them fight the adversarial change.

Imagine the chaos that would ensue. The people who haven’t upgraded can’t send money to people who have upgraded. They are also under pressure to get their software working. The easiest way to get their software working is to follow everyone else.

I see the argument and I think you’ve put it better than I’ve ever heard it before. I still think there is “Ok I have to make a decision about what I am going to do.” I can follow this chain but if the people who have US dollars who are investing in Bitcoin, big players, don’t think that chain has Bitcoin on it. I do see the economic majority argument, just not from the economic majority of holders of Bitcoin.

You are simplifying the argument. There are different constituents. Even though I agree that one set of constituents are the whales that hold Bitcoin or are buying lots of Bitcoin, if they are going to lose money because all the other constituents in the ecosystem say that chain isn’t Bitcoin then it doesn’t matter what they say. They either lose money or they go with the ecosystem majority. It is not as simple as saying one constituent has all the power. This is definitely not the case. There are four or five different constituents.

They all have one thing in common. They are bidding for Bitcoin with US dollars or some other currency or goods. People who have Bitcoin do not get to decide what the real Bitcoin is because they are going to have both Bitcoin and the forked coin. They don’t get to decide which one is the real one. The guys who decide are the people who purchase it on exchanges. They are going to say which is the real one, we have a social consensus on which one is the real one. I see there could be an attack on it but I don’t see how light clients really are going to be the key to the malicious attack winning. 

This is where I agree with you. I think it is not light client or full node. It is light client or custodial. Custodial is just as bad as light clients if not worse. Then you are fully trusting the exchange’s full node. You also only need one exchange full node to serve billions of users. If we can move people from custodial to light client I see that as an improvement. That is the comparison people don’t often make.

This seems counter to the Nicolas Dorier argument. He would rather you use a custodial wallet because that is just a risk to you, since you are not holding your own keys. That is resisting a miner takeover.

I would think that would make the economic weight argument even worse. Because then they actually have the Bitcoin, you have nothing. If you have an SPV wallet you ostensibly have the keys and you can do something else with them.

If you want to discuss the bad things you can do having the majority of clients controlled by a custodial entity is arguably even worse than the SPV mode.

What Nicolas says is “I would rather use a custodial wallet myself if that means the whole ecosystem doesn’t use light clients.”

That is the PGP disease. Let’s make it really hard to use them so everyone is forced into a worse solution. It is a terrible idea.

What are some of the ideas that would make it easier for people to run their own full node? You mentioned making it easier for people to call back to their own node. Right now it is difficult.

The friends and family idea. That computer geek you know runs a full node for you which is second best to running your own. The idea of a node that slowly works its way backwards. It has a UTXO snapshot that it has to trust originally. Then it slowly works its way backwards and downloads blocks for however long it takes. Eventually it gets double ticked. It is like “I have actually verified. This is indeed the genuine UTXO set.” A few megabytes every ten minutes gets lost in the noise these days.

That is related to the James O’Beirne [assumeutxo](https://github.com/jamesob/assumeutxo-docs/tree/master/proposal) idea. 

And UTXO set hashes that have been proposed in the past are pretty easy to validate. You can take your full node and you can check that this is the hash you should have at this block. You can spread that pretty widely to people who trust you and check that is something you’ve got as well. Creating a social consensus allows you to bootstrap really fast. I can grab that UTXO set from anywhere and slowly my node will work its way backwards. With some block compression and other tricks we can do that. The other issue is of course that there is a lot of load. If you serve an archived full node and serve old blocks you get slammed pretty hard with people grabbing blocks. I had a half serious proposal for adopt a block a while ago. Every light client would hold 16 blocks. They would advertise that and so you would be able to grab 16 blocks from here and 16 blocks from there. Rather than hammering these full nodes even light clients would hold a small number of blocks for you. You could do this thing where you would slowly wind your way backwards and validate your UTXO set. I think the effort is better put into making it easier to run a full node. If you don’t like light clients move the needle and introduce things that are closer to what you want in a full client. Work on that rather than trying to make a giant crater so that no one can have a light client. That is short term and long term a bad idea.
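
Purely as an illustration of the half-serious adopt-a-block idea, here is a toy sketch of how a light client might deterministically pick which 16 blocks to hold, so backfilling nodes can spread requests across many light peers instead of hammering archival nodes. Everything here is a made-up assumption, not a proposal that exists in code:

```python
import hashlib

BLOCKS_PER_CLIENT = 16

def adopted_range(client_id: str, chain_height: int):
    """Map a stable client identifier to a 16-block chunk of history."""
    digest = hashlib.sha256(client_id.encode()).digest()
    chunks = max(1, chain_height // BLOCKS_PER_CLIENT)
    start = (int.from_bytes(digest, "big") % chunks) * BLOCKS_PER_CLIENT
    return range(start, start + BLOCKS_PER_CLIENT)

print(list(adopted_range("some-light-client-identifier", 630_000)))
```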

Will Casarin made a comment around how he had set up his Bitcoin node with his phone to call back to his own node. It is more like enthusiast level or developer level to use your own full node and have your phone paired with it. To have your own Electrum Rust Server and pair your laptop back to it. How can that be made easy for the Uncle Jim, friends and family node so that he can set it up and they can call back to his node?

If you stop encouraging people to experiment with this stuff you can inadvertently stop people from experimenting and entrepreneurs from figuring out cool new ways to have your phone paired with your full node, making it easy. You can stop people from getting into Bitcoin. This option doesn’t exist now. That lowers demand, that lowers the price and the price lowers the hashrate. You can have your pure Bitcoin network with only full nodes but it is at a lower price and a lower hash rate. Is your security better? It is definitely up for debate.

On Neutrino the way I read that was it made it easier for me as a local enthusiast to effectively serve an SPV. What doesn’t get brought up too much is the counterargument that if you make it easier to be an SPV you are going to get a lot more honest SPVs out there that will sway the economic majority. Instead of there being blockchain.info as the only party serving SPV filters and they own the money, if I’m an SPV and others are an SPV there will be a lot more of the network who are acting honestly and needing SPV filters.

The problem with SPV is that it trusts the person who serves you.

But my node isn’t going to serve you rubbish forks.

I have been running an Electrum server for many years. Often I see people at conferences telling me they are using my node. They are trusting me. I say “Don’t trust me” but it has happened this way. At least they know this guy is running a node and I have to trust him. It is not some random node on the internet.

If you trust a certain full node or you connect to a certain full node and you are getting light client filters from it then you are fine.

I am still a random person for them. Maybe they saw me once or twice at a conference. I always tell them “Run your own node.”

This is where you need to have a ladder. It is much better people are going “I am connecting to this guy’s node. He is serving me filters” than none of the options apart from a full node are there and therefore I am just going to trust Coinbase or Kraken to deal with it. I am not going to worry about how my balance is being verified. The situation that you describe is superior to custodial. And it is a long journey. As long as people understand what they are doing you can see in 5, 10 years they are like “I don’t want to trust this guy’s node anymore. I don’t want to have filters served from this guy’s node. I am going to run my own node.” You have given them a stepping stone towards running a full node that wouldn’t have been there if you had ditched all the superior light client options.

# RFC: Rust code integration into Bitcoin Core

https://github.com/bitcoin/bitcoin/issues/17090

Last month we spoke about this idea.

This has been a topic that has floated around for a while, a Rust integration in Bitcoin Core. It first started in the middle of last year when Cory Fields and Jeremy Rubin hacked together some initial Rust integration and had a proof of concept as to how you might integrate a simple Rust module into the Core codebase. There was an initial [PR](https://github.com/bitcoin/bitcoin/issues/17090) that has a lot of conceptual level discussion in it. I would definitely recommend that anyone interested in Rust integration bootstrap by reading through this discussion. There is Cory’s [presentation](https://diyhpl.us/wiki/transcripts/cryptoeconomic-systems/2019/everything-is-broken/) he gave at the MIT conference (2019). That summarizes a lot of the thinking on why a language like Rust might be something good for Bitcoin. When we are having a lot of these discussions a lot of the thinking is five or ten years down the line. Thinking about where we want to be at that point. As opposed to we should rewrite everything in Rust right now. Cory brings this up as well. You can look at a lot of other big security critical open source projects like Firefox. Mozilla is one of the orgs that kicked off a lot of this research into safer languages. Web browsers are exactly the kind of thing you want to build using safer languages. Maybe Rust makes sense for us. There would be downsides to a blanket switch to using Rust next week. There are a lot of long term contributors working on Bitcoin Core who may not be interested in or as fluent in Rust as they are in C or C++. You can’t necessarily just switch to a new language and expect everyone else to pick that up and be happy with using it going forward. We have thousands and thousands of lines of code and tests that have worked for near on a decade. There is not necessarily a good reason to rewrite for the sake of rewriting it. Maybe better languages come along. Particularly not when some of the benefits that Rust might present are not necessarily applicable to those parts of the codebase. Since the original discussion around the end of last year Matt ended up opening a few PRs. A [PR](https://github.com/bitcoin/bitcoin/pull/16834) he put forward for serving headers over DNS in Rust. A parallel P2P client in Rust [PR](https://github.com/bitcoin/bitcoin/pull/17376). He wrote another networking stack or peer-to-peer stack in Rust that would run alongside the C++ implementation. If the C++ implementation crashed for some reason maybe you could failover to the Rust implementation. Trying to understand things like that. Maybe you want to run the Rust implementation and failover to the C++ implementation. There is lots of code there. I think it is still worth discussing but you can imagine how the discussion ended. It got to the point where a lot of our PRs and discussions end up. There’s a flurry of comments and opinions for a week or two and then it dies down and everyone moves on to discussing and arguing the next interesting thing to come up. It is probably one of the things that needs a champion to pick it up and really push the use case. Obviously Matt has tried to do that and it hasn’t worked. I don’t know what needs to be done differently or not. It is one of these open for discussion topics for the repo. Personally I think it is really interesting. I have played around with writing some Rust. I like writing it. It seems to be where big software projects are going. That includes things like operating systems for desktop and mobile, web browsers.
Idealistically, in five or ten years’ time, if we manage to fix a lot of the other things we need to fix up and trim down in the Core codebase, at that point if some or all of it has been transitioned into Rust I wouldn’t be against it. The path to that is definitely unclear at the moment.

Pathway wise, is it even possible to have a parallel Rust thing going at the same time? Is that potentially going to run into problems where the Rust version disagrees with the C++ version?

You have already got Bitcoin clients that exist written in Rust. You’ve got rust-bitcoin that Andrew (Poelstra) and Matt (Corallo) work on which still has a big warning at the top of the [README](https://github.com/rust-bitcoin/rust-bitcoin/blob/master/README.md) saying “Don’t use this in production for consensus critical applications.”

I have never seen a rust-bitcoin node be listed on a site like [bitnodes.io](https://bitnodes.io/) as a node that is following the consensus rules and that you can connect to. So it appears people are following that warning.

I am not sure who may or may not be running them. Maybe Square Crypto are running some rust-bitcoin nodes. Maybe at Chaincode, when Matt was there, they had a few running. I’m not entirely sure. It looks like rust-bitcoin has some extra features that Core doesn’t.

I have never seen a rust-bitcoin node anywhere.

I have never seen one on a node explorer.

[Coindance](https://coin.dance/) or the Clark Moody [site](https://bitcoin.clarkmoody.com/dashboard/). That is all I’m familiar with.

What is the right way to introduce Rust? Would it be a pull request where it is just integration and the conversation is focused on whether Rust should be introduced into Core or not? Or should it be included in a feature pull request where the feature in some way shows off how Rust can be used and what the pros of Rust are? Then you would be discussing the feature and Rust at the same time.

Splitting the conceptual discussion and actual implementation is tricky. As soon as you turn up with an implementation the conversation is always likely to diverge back to conceptual again. I think we have gone in circles enough with the conceptual stuff. At this point we may have to open enough PRs and throw enough Rust code at the wall until something sticks that we can get some agreement on. Even if we can’t get it into the actual Core codebase, if we start pulling it into some of our build systems and tooling. That is another way to inject it into or get in front of a lot of the developers working on our codebase. We had some horrible C tools that were used as part of the Mac OS build process. I have played around with rewriting one of those in Rust. It works but there is very little interest in the build system let alone operating system specific components of it. I thought that might not be the best way forward for some Rust integration. Maybe writing some sort of test harness or some other test component in Rust. Something that is very easily optional to begin with as well. As soon as you integrate Rust code into Core then it has to become part of the full reproducible build process as well. This makes it more tricky. If you can jam it into an additional test feature that might be a way to get some in. Regardless of what you do as soon as you open a PR that has some sort of Rust code in it there are going to be some people who are going to turn up and say they are not interested regardless of what it does. Be prepared to put up with that. If it is something that you are interested in I am happy to talk about that and discuss that and help you to figure out the best path to get it done.

Rust is definitely promising. I don’t know but reading through the conversation I think there is consensus people find the language interesting and find the properties that it brings promising. To steelman the opposing argument it is not only that a lot of contributors can’t write or review Rust code, it is also how much of this is a bet on Rust as a language. There have been so many languages that have sprung up in the past and showed promise but the maintainers of that language or the path it went down meant it fizzled out and it didn’t become the really useful language that most people are using. At what point do you have to make a bet on an early language and an early ecosystem and whether Core should be making that bet?

That is a good thing to think about. I think Rust is like 11 or 12 years old. There is one argument to be made that it is not stable yet. Maybe that is something that will improve over the next 2, 3, 4 years. Maybe we shouldn’t be the software project that makes bets on “new” languages. At the same time there are a lot of big companies making that same bet. I dare say Windows is one of them that is looking at using Rust. Mozilla obviously started Rust and has a very long history of using Rust in production, for many many years now, in security critical contexts. I wish I had more examples than that but I don’t off the top of my head. The long term contributors not necessarily knowing Rust is obviously a good one. But at the same time the consensus critical code is going to be the absolute last thing that we rush to rewrite in Rust. The stuff that you would be rewriting in Rust is probably going to be very mundane to begin with. I would also like to think that if we do pick up a language like Rust maybe we pick up more contributors. We have seen a few PRs opened, people who aren’t necessarily actively contributing to Core all the time but they may very well contribute more if they could contribute in a language like Rust which would be great. There is no end of arguments from either side to integrate or not.

I really like Rust. It is going from strength to strength and it has got definite merit. But you have got existing expertise, people who have tooled up in a certain language. Some of that is self selected but whether that is true or not they exist. Now you are asking people to learn two languages. If you want to broadly read the codebase you have to know two languages and that is a significant burden. Perhaps in five years time when Rust is perhaps an obvious successor to C++ then that intersection… Arguing that we will get more contributors for Rust is probably not true. You might get some different contributors but C++ is pretty well known now. There is the sexy argument. “Work on this, it is in Rust.” Unfortunately that will become less compelling too over time. As Rust gets bigger and more projects are using it there are other cool things you could be doing with Rust anyway. It would be better to do stuff in Rust as far as security goes. I am cautiously hopeful that the size of bitcoind is going down not up. Stuff should be farmed out to the wallet etc with better RPCs and better library calls so it is not this monolithic project anymore. The argument that we are going to be introducing all these security holes if we don’t use Rust is probably weaker. As you said the last thing that will change is the consensus code. That said I like projects that stick to two languages. It is really common to have projects where you have to know multiple languages to deal with them. Working around the edges and reworking some of the tooling in Rust, building up those skills slowly is definitely the way forward. It will take longer than you want and hope but it is a pretty clear direction to head in. Heading in that direction, in terms of a decade, we will be talking about finally rewriting the core in Rust. It is not going to happen overnight.

I definitely share your optimism for the size of bitcoind going down over time as well. Even if we couldn’t do Rust, if all we could do is slash so much out of that repository I would be very happy.

At least you don’t have the problem that your project is commonly known as c-bitcoin.

Is there a slippery slope argument where if you start introducing a language even right at the periphery that it is hard to ever get it out. It is the wrong word but it is kind of like a cancer. Once it is in there it starts going further and further in. If you are going to start introducing it even at the periphery you need to be really bought in and really confident. There must have been examples with Linux where people wanted to introduce other languages than C and it didn’t happen as far as I know.

It didn’t happen. You have a base of developers who are comfortable with a language and you are not going to move to anything else. The only way you get Linux in Rust is to rewrite everything.

I thought there were people introducing some Rust things into the Linux kernel.

We went through the whole wave of people adding modules in C++ as well. But Linux has an advantage. If Linus (Torvalds) decided he really liked Rust and it was the way forward he could drag everyone else with him. It is the benevolent dictator model. I can’t think of an open source project that has switched implementation languages and survived. I can’t think of many that have tried to switch implementation languages. The answer is generally if you are going to rewrite it is usually a different group that come along and rewrite it. 

What about making a fresh one starting from scratch in Rust?

There is rust-bitcoin. 

It is just missing a community of maintainers.

And people using it as well.

If you are going to introduce things into bitcoind you could do the piecemeal approach where you take certain things and shove Rust in there. Wouldn’t it be cool to have a whole working thing in Rust and then take bits out as you want them into the bitcoind project?

The idea of having an independent implementation of Bitcoin has been pretty roundly rejected by the core developers.

Bitcoin Core is not modular right now so I think there would be a lot more steps before that would be possible.

You would need libconsensus to be a real thing.

libconsensus is another good thing to argue about if we have endless time.

We haven’t talked much about the promise of Rust. Rust is really exciting. Definitely look into Rust. It is just whether there is a potential roadmap to get it into Core and whether that is a good thing or not is the question.

If we had managed to have a Core dev meetup earlier this year, we didn’t because it got cancelled, I’m sure that would have been one of the things very much discussed in person. We’ll wait and see.

# Dual funding in c-lightning

https://github.com/ElementsProject/lightning/pull/3418

Lisa and I are doing daily pair programming four days a week working on the infrastructure required for dual funding. We have gone through the spec, we are happy with the way the spec works but we have to implement it. The thing with dual funding is instead of creating a funding transaction unilaterally you get some information from the peer, what their output should look like etc. With dual funding you negotiate. I throw in some UTXOs, you throw in some UTXOs, I throw in some outputs, you throw in some outputs. You are effectively doing a coinjoin between the two of you. That is a huge ball of wax. Coinjoin is still a bleeding edge technology in Bitcoin land. We are using [libwally](https://github.com/ElementsProject/libwally-core) which has PSBT support now but it turns out to be really hard, we hit some nasty corner cases, I just filed a bug report on it. PSBT is set up for “We’ve got this transaction, let’s just get the signatures now.” But actually building a transaction where you contribute some parts and I need to modify my PSBT to add and delete parts and put them in the right order is something that the library doesn’t handle well. Even just building up the infrastructure so you can do this is really interesting. We have got a PR that plumbs PSBTs through c-lightning now as a first class citizen which is the beginning of this. The PSBT format is an obvious consequence of the fact that increasingly in Bitcoin we don’t put all the stuff onchain. You have to have metadata associated with things. Some of that is obvious like pay-to-pubkey-hash, you need to know what the real pubkey is, that doesn’t appear. To more subtle things like you need to know the input amount to create a signature in SegWit. That is not part of the transaction as such, it is kind of implied. Plumbing all that through is step zero in getting dual funding to work. We will get onto the dual funding part which is the fun bit but we have spent a week on c-lightning infrastructure to get that going. What we are doing is for 0.9, the next release which Christian (Decker) will be captaining, we are trying to get as much of the infrastructure in as possible. That monster PR was this massive number of commits, this giant unreviewable thing. We said “Let’s break it up into the infrastructure part and PSBT support”. Once we’ve got all that done we will have an implementation that will be labelled experimental. With c-lightning you can enable experimental features. It will turn on all these cool not spec finalized things that will only work with other things that have been modified to work with it. One of those will be dual funding. We are hoping that this will give us some active research that will feed back into the spec process and will go around like that. Dual funding has been a long time coming. Lisa doesn’t really want to work on dual funding, she wants to work on the advertisement mechanism where you can advertise that you are willing to dual fund. She has to get through this. That’s the carrot. We are definitely hoping that it will make the next release in experimental form and then we can push on the spec. The spec rules are you have to have two independent implementations in order to get it in. Ours will be one and then we will need one of the others to step up and validate that our spec is implementable and they are happy with it. Then we get to the stage where it gets in the spec. It is a long road but this is important. It does give us a pay-to-endpoint (P2EP) like capability.
Not only does it give us both the ability to add inputs and outputs but also the ability to add unrelated inputs and outputs. You can use it to do consolidation and things like that that will really throw off chain analysis because there is a channel coming out of this as well.
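
A loose sketch of what the end result of that negotiation looks like, the coinjoin-style funding transaction: both peers’ inputs and change outputs plus the shared 2-of-2 funding output, ordered deterministically so both sides construct the identical transaction. The types and field names here are purely illustrative, not the actual spec messages or c-lightning structures:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TxIn:
    txid: str
    vout: int
    serial: int          # negotiation-assigned serial id used for ordering

@dataclass(frozen=True)
class TxOut:
    script: str
    amount_sat: int
    serial: int

def build_dual_funding_tx(opener_ins, accepter_ins, opener_change, accepter_change,
                          funding_script, opener_sat, accepter_sat, funding_serial):
    # The channel's 2-of-2 output carries both contributions; everything else
    # (each side's UTXOs and change, or extra consolidation outputs) rides along.
    funding = TxOut(funding_script, opener_sat + accepter_sat, funding_serial)
    inputs = sorted(opener_ins + accepter_ins, key=lambda i: i.serial)
    outputs = sorted(opener_change + accepter_change + [funding],
                     key=lambda o: o.serial)
    return {"inputs": inputs, "outputs": outputs}

tx = build_dual_funding_tx(
    opener_ins=[TxIn("aa" * 32, 1, serial=2)],
    accepter_ins=[TxIn("bb" * 32, 0, serial=3)],
    opener_change=[TxOut("opener_change_spk", 20_000, serial=4)],
    accepter_change=[TxOut("accepter_change_spk", 15_000, serial=5)],
    funding_script="2of2_funding_spk", opener_sat=500_000, accepter_sat=300_000,
    funding_serial=1,
)
print([o.amount_sat for o in tx["outputs"]])  # funding output first by serial
```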

Would you be trying to do that equal output thing as well as part of that channel open? Or is it just a batched transaction?

When we have Schnorr signatures of course you won’t be able to distinguish the channel opens. Then it starts to look completely random. At the moment you can tell that that is obviously a channel close even if it is a mutually agreed close. If it is 2-of-2 it is almost certainly a channel. In the longer term you will then get this coinjoin ability. In the initial implementation we made sure it is possible to negotiate multiple channel opens at once and produce this huge batch. You could also be using it for consolidation. Someone opens a channel with you, while you are here and making this transaction may as well throw this input and output in. We’ll do that as well. You can get pretty aggressive batching with this stuff.

It requires interoperability with some other Lightning client like lnd or eclair or whoever. Presumably the idea then is c-lightning would create a PSBT with one of the other clients to jointly construct a transaction so they can make the channel together.

We are not using generic PSBT across the wire. We are using a cut down add input, add output kind of thing. It is a subset of PSBT. We will use that internally. That also gives us a trivial ability for us to improve our hardware wallet support. Once you are supporting PSBTs then it is trivial to plug it into hardware wallets as well. That is pretty exciting. We are using dual funding to motivate us to do PSBTs.

Have you looked at how lnd have implemented it yet? Is that informing how you implement it in c-lightning?

They haven’t implemented dual funding.

They have implemented PSBT support.

PSBT support itself is actually pretty easy to do. You can already do PSBT today in c-lightning as a plugin. It wasn’t in the core. You would build a PSBT based on the information the core would give you. That works. You have been able to use hardware wallets with c-lightning since before the last lnd release. It wasn’t very polished. This will bring it to a level of polish where we can recommend people use it. Rather than telling people they need to string these pieces together to get it to work. The PSBT itself, we are using libwally. We import libwally, pretty easy. 

Would PSBT would become part of the Lightning spec at that point?

No. We let you add arbitrary outputs but we don’t need the full PSBT for that. You just tell it the output. As long as you have got the bits that we need that opens the channel and our outputs go where we expect we let you put whatever stuff you want in the funding transaction. If there is something wrong with it it won’t get mined. That is always true. You could put a valid input and then double spend that input. We always need to handle the case where the opening transaction doesn’t work for some reason. We might as well let you use anything. Just give us the output script, we don’t need the full PSBT flexibility. There is some caution with allowing arbitrary PSBTs. Other than generally having a large attack surface it would be a requirement that you understand the PSBT format and we are not ready to go that far. It is quite easy. “Here’s an output, here’s an output number” and then you can delete the number. The negotiation is quite complicated. “Add this output. Add this input. Add this output. Flip that input. Flip that output.” As long as at the end you’ve got all the stuff. That turns out to be important. If you are negotiating with multiple people you have to make sure everyone sees things in the same order. You have an index for each one and you specify that it will be in ascending order. That is required because if you are routing an output I might need to move my output out the way and put yours in there because that is what you expect. We all have to agree. It turns out to be quite a tricky problem. The actual specification of the inputs and outputs themselves is trivial so we haven’t used PSBT for that. The whole industry has been moving to PSBT because it solves a real problem. That problem is that there is metadata that you need in order to deal with a transaction that is not actually inside the transaction format that hits the blockchain. This is good, keep the minimum amount of stuff on the blockchain. It does create this higher hurdle for what we need to deal with. The lack of standardization has slowed down development. It is another thing you need to be aware of when you are developing a wallet or hardware wallet. PSBT really does fill that gap well. I expect to see in the future more people dealing with PSBTs. A Bitcoin tx is almost like an artifact that comes out of that. Everyone starts talking PSBTs and not talking transactions at all.

In my recent [episode](https://stephanlivera.com/episode/168/) with Lisa (Neigut) we were talking about the script descriptors in Bitcoin Core. Would that be something that comes into Lightning wallet management as well, so that you would have an output descriptor to say “This is the onchain wallet of my Lightning wallet”?

We would love to get out of the wallet game. PSBT to some extent lets us do that. PSBT gives us an interoperable layer so we can get rid of our internal wallet altogether and you can use whatever wallet you want. I think that is where we are headed. The main advantage of having normal wallets understand a little bit of Lightning is that you could theoretically import your Lightning seed and get them to scrape and dredge up your funds, which would be cool. Then they need to understand some more templated script outputs. We haven’t seen demand for that yet. It is certainly something that we have talked about in the past. A salvage operation. If you constrain the CSV delay for example it gets a lot easier. Unfortunately, for many of the outputs that you care about you can’t generate them all from a seed; some of the information has to come from the peer. Unless you talk with the peer again you don’t know enough to even figure out what the script actually is. Some of them you can get, some of them are more difficult.

I read Lisa’s initial mailing list [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001532.html). She wrote down how it works. This is a protocol design question. Why is it add input, add output and not just one message which adds a bunch of inputs and outputs, and then the other guy sends you a message with his inputs and outputs and you go from there?

It has to be stateful anyway because you have to have multiple rounds. If you want to let them negotiate with multiple parties they are not going to know everything upfront. I send to Alice and Bob “Here’s things I want to add.” Then Alice sends me stuff. I have to mirror that and send it to Bob. We have to have multiple rounds. If we are going to have multiple rounds, keep the protocol as simple as possible: add, remove, delete. We don’t care about byte efficiency, there is not much efficiency to be gained. It is a simpler protocol to just have “Add input, add input, add input” rather than “Add inputs” with a number n. It simplifies the protocol.

I would have thought it would be harder to engineer if you are modifying your database and then you get another output coming in. Now maybe you have to do locking. I guess you have to do that anyway?

The protocol ends up being an alternating protocol where you say “I have nothing further to add and I’m finished.” You end up ping ponging back and forth. It came out of this research on figuring out how to coordinate multiple of these at once. Not that we are planning to implement that, but we wanted to make sure it was possible to do. That does add some complexity to the protocol. The rule is you don’t check until the end. Obviously they need to have enough inputs to fund the channel that they’ve promised and stuff like that. But they might need to change an input, remove a UTXO and use a different one. Then they would delete one and add another. That is completely legal even though at some point it could be a transaction with zero inputs. It is only when it is finalized that you do all the checks. It is not that hard as long as it is carefully specified. Says he who hasn’t implemented it yet. That’s one of the reasons we implement it, to see if it is a dumb idea.
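
To illustrate the alternating "I’m finished" handshake and the idea that validation only happens at finalization, here is a minimal two-party sketch in Rust. All names, fields and checks are hypothetical and chosen for the example; they are not the actual implementation or the spec's rules.

```rust
// Hypothetical two-party negotiation state. Either side may keep adding and
// removing things (the draft can even have zero inputs mid-negotiation);
// nothing is checked until both sides have said they are finished.
struct Negotiation {
    our_input_sat: u64,        // sum of the inputs we have added so far
    their_input_sat: u64,      // sum of the inputs the peer has added so far
    output_sat: u64,           // sum of all outputs currently in the draft
    promised_channel_sat: u64, // channel amount the peer committed to fund
    we_are_done: bool,
    they_are_done: bool,
}

impl Negotiation {
    // Only once both sides have signalled "nothing further to add" do we run
    // the checks: the peer funded what they promised, and inputs cover the
    // outputs plus fees. Before that, any intermediate state is legal.
    fn try_finalize(&self) -> Result<(), &'static str> {
        if !(self.we_are_done && self.they_are_done) {
            return Err("negotiation still in progress");
        }
        if self.their_input_sat < self.promised_channel_sat {
            return Err("peer did not fund what they promised");
        }
        let total_in = self.our_input_sat + self.their_input_sat;
        if total_in <= self.output_sat {
            return Err("inputs do not cover outputs plus fees");
        }
        Ok(())
    }
}
```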

# Bitcoin Core MacOS functional tests failing intermittently

https://github.com/bitcoin/bitcoin/issues/18794

I am not entirely sure why this is on the list; it is not that interesting. I think this is a case of us reenabling some functional tests that we had disabled in Travis a while back. We turned them back on again and they were still failing. The failures were Python socket OS type errors that no one could really recreate locally. They just happened on Travis intermittently. It was a pain to have them fail now and again for no specific reason.

Two elements. One, an update on the state of functional tests in Core. The other thing is there seem to be a few problems with Mac OS. Things like fuzzing are difficult to get up and running on Mac OS. I just wondered what problems there were with functional tests on Mac OS as well.

I think we have got the functional tests fairly nailed down on Mac OS now. Especially over the past 18-24 months Mac OS has turned into a horrible operating system to use for a lot of the stuff we are doing. It comes pre-installed with lots of really old versions of tools like make 3.8. It has got Apple clang by default, which is a version of some upstream LLVM clang with a bunch of Apple stuff hacked in that we spent a lot of time working around in our build system. There were issues with the fuzzers as well. I patched some of those; those were linking issues. The alternative is to throw away Apple clang and use a proper LLVM clang 10 or whatever the latest release is. It works now. I don’t think our docs are entirely correct. There are a few command line flags; there is a flag in there telling people to disable the assembly optimizations and I don’t think that is required for Mac OS specifically for any reason. That will work, but if you want to do a lot of fuzzing or have something that continuously runs a lot of the functional and unit tests for Core, spin up a Linux instance somewhere, preferably a recent Ubuntu 20 or Debian 10/11, and run them on there instead. On the CI more generally, we have obviously used Travis for a long time with sometimes good success, other times worse. I think Marco (Falke) has got to the point where he got fed up with Travis and has really genericized a lot of our CI infrastructure so you can run it wherever you want. I know he runs it on Cirrus and something else as well regularly. He has a btc_nightly [repo](https://github.com/MarcoFalke/btc_nightly) with scripts for doing that which you can fork and use as well. There are continual issues with Travis with certain builds running out of disk space. We reported this 4 or 5 months ago and we still haven’t got answers about why it happens. Maybe we will migrate to something else. Any time it comes up, the questions are do we pay for something else and where are we going to migrate to? We don’t want to end up on something worse than what we currently have.

# Modern soft fork activation

https://twitter.com/LukeDashjr/status/1260598347322265600?s=20

This one was on soft fork deployment.

We discussed this last month. AJ and Matt Corallo put up mailing list [posts](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html) on soft fork activation. We knew that Luke Dashjr and others have strong views. This was asking what his view is. I think we are going to need a new BIP, which will be BIP 8 updated to combine BIP 9 and BIP 148, but I haven’t looked into how that compares to what Matt and AJ were discussing on the mailing list.

What was discussed on the mailing list at the start of the year was, as Luke says, BIP 9 plus BIP 149. BIP 149 was going to be the SegWit activation: the original signaling expires in November, we’ll give it a couple of months and then we’ll do a new activation in January 2018, I guess, that will last a year. There will be signaling but at the end of that year, whether the signaling works or not, it will just activate. It divides it up into two periods. You’ve got the first period which is regular BIP 9 and the second period is essentially the same thing again but at the end it succeeds no matter what. The difference between that and BIP 148 is what miners have to do. What actually happened was that before the 12 months of SegWit’s initial signaling ended we said “For this particular period starting around August miners have to signal for SegWit. If they don’t, this hopefully economic majority of full nodes will drop their blocks so that they will be losing money.” Because they are all signaling for it and we’ve got this long timeframe it will activate no matter what. It will all be great. The difference is mostly that BIP 148 got it working in a shorter timeframe because we didn’t have to start it again in January, but it also forced the miners to have this panic: “We might be losing money. How do we coordinate?” That was BIP 91, which allowed them to coordinate. The difference is that BIP 148 we know has worked at least once. BIP 149 is similar to how stuff was activated in the early days like P2SH, but we don’t know 100 percent that it will work; it is less risk for miners at least.
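
As a simplified sketch of the second period described above, here is some Rust showing a deployment that can still lock in early via signaling but locks in no matter what when the period ends. The first period is just ordinary BIP 9 and is not modelled; this is not the real BIP 8 or BIP 9 state machine, and the heights and threshold are made-up example values.

```rust
// Guaranteed-activation period: signaling can lock the fork in early, but
// reaching the end height locks it in regardless.
const PERIOD_END_HEIGHT: u32 = 150_000; // end of the guaranteed-activation period (example)
const THRESHOLD: u32 = 1_815;           // e.g. 90% of a 2016-block retarget window (example)

#[derive(Debug, Clone, Copy, PartialEq)]
enum DeploymentState {
    Started,  // waiting for signaling
    LockedIn, // becomes active after one more period
    Active,
}

fn next_state(current: DeploymentState, height: u32, signalling_blocks: u32) -> DeploymentState {
    match current {
        // Enough miner signaling in the last window: lock in as usual.
        DeploymentState::Started if signalling_blocks >= THRESHOLD => DeploymentState::LockedIn,
        // Period over: lock in whether or not miners signalled.
        DeploymentState::Started if height >= PERIOD_END_HEIGHT => DeploymentState::LockedIn,
        DeploymentState::Started => DeploymentState::Started,
        DeploymentState::LockedIn => DeploymentState::Active,
        DeploymentState::Active => DeploymentState::Active,
    }
}
```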

The contrast between the approaches is how quickly it gets pushed through and how much opportunity miners are given to flag that they are happy with the change?

In both cases miners can flag they are happy with it pretty quickly. It is just how quickly everyone else can respond if miners aren’t happy with it for stupid reasons. 

If miners drag their heels or don’t want to proceed with it, one method is more forceful?

Yes. I don’t think there is any reason we can’t be prepared to do both. Set it up so that we’ll have this one year and this other year, and at the end of two years it will definitely be activated as long as it seems sensible. If it is taking too long we can have the emergency option and do the exact same thing we did with BIP 148 and force signaling earlier. They are not necessarily incompatible ideas either.
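
For the BIP 148 style emergency option mentioned here, the core rule is simple: during a mandatory window, enforcing nodes reject any block that does not signal the deployment bit. The sketch below uses block heights and an example bit number for illustration; the real BIP 148 used calendar dates and applied to SegWit's bit specifically.

```rust
// Rough sketch of "forced signaling": inside the mandatory window, blocks that
// do not signal the deployment bit are rejected, so non-signaling miners lose money.
const MANDATORY_START: u32 = 120_000;          // example height
const MANDATORY_END: u32 = 140_000;            // example height
const DEPLOYMENT_BIT: u32 = 1;                 // version bit being enforced (example)
const VERSIONBITS_TOP_BITS: u32 = 0x2000_0000; // top bits required by BIP 9 style signaling

fn block_acceptable(height: u32, version: u32) -> bool {
    let in_mandatory_window = height >= MANDATORY_START && height < MANDATORY_END;
    if !in_mandatory_window {
        return true; // outside the window, no extra rule applies
    }
    let uses_versionbits = (version & 0xE000_0000) == VERSIONBITS_TOP_BITS;
    let signals = (version & (1u32 << DEPLOYMENT_BIT)) != 0;
    uses_versionbits && signals
}
```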

# AltNet, a pluggable framework for alternative transports

https://github.com/bitcoin/bitcoin/pull/18988

We have this one on AltNet from Antoine Riard. 

There were 2 PRs he opened all at once. One is this AltNet framework and there is this other watchdog PR. One of Marco’s comments was that we can have these drivers but he hasn’t thought about all the trade-offs. It seems beneficial if add-ons can be attached and removed during runtime as well as developed independently of the Bitcoin Core development process. I definitely agree that if we are going to be building these drivers that has to happen outside of the Bitcoin Core repository. As far as loading and unloading plugins at runtime goes, I am definitely not a fan of that idea.

Are you not a fan of it at runtime, as in while the thing is running, or not a fan of it even as part of a configure option at startup?

If it was a configure option so you had a line in bitcoin.conf “add driver XYZ” and then you start and you’re running, sure. But he actually means you run for a day and then you go “I’ll unload this driver and reload this driver, add a new driver” while you’re still running…

A bitcoin.conf option makes sense but a RPC call to load and unload seems complicated?

Yes, it seems like an attack surface. What are these drivers going to look like? Are you loading some random shared libraries? We definitely don’t want to go down the route of asking whether these drivers are authenticated in some way, whether we have built them reproducibly and someone has signed off that they are ok to load. That sounds like all sorts of things that we don’t want to be managing or have to do in our codebase.

I also briefly skimmed it when he released it. The way I understood the watchdog PR was that it looks at the network and gives alerts on things that it sees that might hint at something fishy going on. My thought on that was that in practice I don’t really know how that is going to work out. If we have two systems that are independent and one says there is a problem and the other system doesn’t say anything… I would prefer us to have one source of truth. With this watchdog there could be a way where it gives alerts and they are not really that important. People start ignoring the alerts like the version warnings in the blocks. Everyone ignores them. It is a slippery slope to go in that direction. I would have to look at the code and conceptually what the plan is for it to play together with the actual network, and what it says is fishy and what is not fishy.

What benefits do you see from this? Would it enable different methods of communication? Radio and mesh networking and so on?

I would have to think about it more and how it is going to be implemented. How does this AltNet play together with the main actual net? I don’t have a model in my head right now of how that would work. I would need to think about it.

# secp256kfun

https://github.com/LLFourn/secp256kfun

I have released this library called secp256kfun. It is for doing experimental cryptography stuff using Rust and having it work on Bitcoin. It doesn’t use Pieter Wuille’s [library](https://github.com/bitcoin-core/secp256k1). It uses [one](https://github.com/paritytech/libsecp256k1) written by Parity to do Ethereum stuff. That is more low level and gives me the things I needed to get the result I wanted. It is a mid level library really. It gives you nice, clean APIs to do things. It catches errors. For example the zero element: you have the point at infinity, which is an invalid public key but a valid group element. It can know at compile time whether a point is the point at infinity or a valid public key and can give you a compile time error if you try to pass something that could be the point at infinity into a function. Also it has an interesting way of dealing with variable time and constant time. When you are running operations on secret data it is really important that they run in constant time, so that by measuring the execution time you can’t figure out anything about the secret input to the function. With this library I mark things as secret or public and then the values themselves get passed into functions, and depending on whether they are marked as secret or public it uses a variable time or constant time algorithm to do the operation. This is all decided at compile time, there is no runtime cost. It is experimental, I think it is cool. I’d like people to check it out if they are doing some Bitcoin cryptography research.
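
To illustrate the "mark values as secret or public and let the compiler pick the algorithm" idea described here, below is a hypothetical Rust sketch. The types and functions are invented for the example and are not secp256kfun's actual API; the same marker-type trick is what lets the library reject the point at infinity at compile time.

```rust
use std::marker::PhantomData;

// Hypothetical secrecy markers: zero-sized types carried in the type system.
struct Secret;
struct Public;

// A scalar tagged with a secrecy marker; the marker costs nothing at runtime.
struct Scalar<S> {
    bytes: [u8; 32],
    _secrecy: PhantomData<S>,
}

trait Secrecy {
    // Each marker chooses a multiplication routine; the choice is resolved at
    // compile time, so there is no runtime branch on secrecy.
    fn scalar_mul(scalar: &[u8; 32], point: &[u8; 64]) -> [u8; 64];
}

impl Secrecy for Secret {
    fn scalar_mul(scalar: &[u8; 32], point: &[u8; 64]) -> [u8; 64] {
        constant_time_mul(scalar, point) // same operation count for every input
    }
}

impl Secrecy for Public {
    fn scalar_mul(scalar: &[u8; 32], point: &[u8; 64]) -> [u8; 64] {
        variable_time_mul(scalar, point) // faster; timing leaks are fine for public data
    }
}

// Generic entry point: the secrecy marker on the scalar decides which impl runs.
fn mul<S: Secrecy>(scalar: &Scalar<S>, point: &[u8; 64]) -> [u8; 64] {
    S::scalar_mul(&scalar.bytes, point)
}

// Stubs standing in for real constant-time and variable-time group arithmetic.
fn constant_time_mul(_s: &[u8; 32], p: &[u8; 64]) -> [u8; 64] { *p }
fn variable_time_mul(_s: &[u8; 32], p: &[u8; 64]) -> [u8; 64] { *p }
```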

This is for playing around as a toy implementation and can be used for educational reasons too. A stepping stone to understanding what is going on in the actual secp256k1 library.

I think that is a strong point for it. There is an Examples folder and I implemented BIP 340 in very few lines of code. It really helps you get an idea of how Schnorr signatures or ECDSA signatures work because you aren’t forced to deal with these low level details. But at the same time you are not missing anything. You are not hacking around anything. You are doing it the way you are meant to do it. As a teaching tool for how this cryptography works, it is pretty good. And for doing research quickly. We have been talking about Rust a lot. As a researcher I don’t want to go down into C and implement cryptography using libsecp, this C library. I am not very good at C. I do know Rust pretty well and there is a lot less work that goes into an implementation in Rust. Especially if you are willing to cut corners and are not paranoid about security. Just for a demonstration, or getting benchmarks for your paper, or getting a basic implementation to demonstrate your idea. That is what it is there for.