author    Bryan Bishop <kanzure@gmail.com>  2024-11-06 17:28:54 -0600
committer Bryan Bishop <kanzure@gmail.com>  2024-11-06 17:28:54 -0600
commit    5ec348d36f5b354734d129a74c3d0e37d447f49e (patch)
tree      b9234cfa0a4051e354ef0c7c0f98615a455034cd
parent    dee8e5ad5aca7a6e357439e2839c57d053946b5b (diff)
fix broken urls for lists.linuxfoundation.org in transcripts/
For more information, see:
https://github.com/bitcointranscripts/bitcointranscripts/pull/566
https://github.com/bitcoin/bitcoin/pull/29782#issuecomment-2460974096
https://gnusha.org/pi/bitcoindev/CABaSBaxDjj6ySBx4v+rmpfrw4pE9b=JZJPzPQj_ZUiBg1HGFyA@mail.gmail.com/
https://gnusha.org/url/
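Every hunk in this commit applies the same mechanical change: each archived `lists.linuxfoundation.org/pipermail/...` link is prefixed with the `https://gnusha.org/url/` redirector. A minimal sketch of that substitution is below; the function name and the regex are illustrative assumptions, not the actual tooling used to produce this commit.

```python
import re

# Match pipermail URLs up to (but not including) markdown/angle-bracket
# delimiters, so `<...>` and `(...)` wrappers in the transcripts survive.
OLD_LIST_URL = re.compile(r"https://lists\.linuxfoundation\.org/pipermail/[^\s<>()\"]+")

def rewrite(text: str) -> str:
    """Prepend the gnusha.org redirector to each matched mailing-list URL."""
    return OLD_LIST_URL.sub(lambda m: "https://gnusha.org/url/" + m.group(0), text)
```

Run over every file under `transcripts/`, this produces exactly the paired `-`/`+` lines seen in the hunks below, leaving all other text untouched.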
-rw-r--r--  transcripts/2016-july-bitcoin-developers-miners-meeting/cali2016.mdwn | 2
-rw-r--r--  transcripts/2018-01-24-rusty-russell-future-bitcoin-tech-directions.mdwn | 8
-rw-r--r--  transcripts/2019-01-05-unchained-capital-socratic-seminar.mdwn | 6
-rw-r--r--  transcripts/2019-02-09-mcelrath-on-chain-defense-in-depth.mdwn | 14
-rw-r--r--  transcripts/adam3us-bitcoin-scaling-tradeoffs.mdwn | 2
-rw-r--r--  transcripts/advancing-bitcoin/2020/2020-02-06-antoine-riard-taproot-lightning.mdwn | 4
-rw-r--r--  transcripts/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration.mdwn | 2
-rw-r--r--  transcripts/austin-bitcoin-developers/2019-06-29-hardware-wallets.mdwn | 2
-rw-r--r--  transcripts/austin-bitcoin-developers/2019-08-22-socratic-seminar-2.mdwn | 6
-rw-r--r--  transcripts/austin-bitcoin-developers/2020-01-21-socratic-seminar-5.mdwn | 2
-rw-r--r--  transcripts/austin-bitcoin-developers/2022-02-17-socratic-seminar-25.mdwn | 10
-rw-r--r--  transcripts/austin-bitcoin-developers/2022-05-19-socratic-seminar-28.mdwn | 6
-rw-r--r--  transcripts/austin-bitcoin-developers/2022-08-18-socratic-seminar-31.mdwn | 2
-rw-r--r--  transcripts/austin-bitcoin-developers/2022-09-15-socratic-seminar-32.mdwn | 2
-rw-r--r--  transcripts/austin-bitcoin-developers/2022-10-20-socratic-seminar-33.mdwn | 2
-rw-r--r--  transcripts/austin-bitcoin-developers/2022-12-15-socratic-seminar-35.mdwn | 6
-rw-r--r--  transcripts/austin-bitcoin-developers/2023-09-21-socratic-seminar-44.mdwn | 6
-rw-r--r--  transcripts/austin-bitcoin-developers/2023-11-16-socratic-seminar-46.mdwn | 2
-rw-r--r--  transcripts/bitcoin-core-dev-tech/2017-09-07-merkleized-abstract-syntax-trees.mdwn | 16
-rw-r--r--  transcripts/bitcoin-core-dev-tech/2018-03-05-cross-curve-atomic-swaps.mdwn | 2
-rw-r--r--  transcripts/bitcoin-core-dev-tech/2018-10-08-mailing-list.mdwn | 2
-rw-r--r--  transcripts/bitcoin-core-dev-tech/2019-06-06-great-consensus-cleanup.mdwn | 2
-rw-r--r--  transcripts/bitcoin-core-dev-tech/2019-06-06-taproot.mdwn | 2
-rw-r--r--  transcripts/bitcoin-core-dev-tech/2019-06-07-signet.mdwn | 2
-rw-r--r--  transcripts/bitcoin-core-dev-tech/2019-06-07-statechains.mdwn | 2
-rw-r--r--  transcripts/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation.mdwn | 4
-rw-r--r--  transcripts/bitcoin-magazine/2021-02-26-taproot-activation-lockinontimeout.mdwn | 4
-rw-r--r--  transcripts/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial.mdwn | 8
-rw-r--r--  transcripts/bitcoin-magazine/2021-04-23-taproot-activation-update.mdwn | 2
-rw-r--r--  transcripts/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning.mdwn | 2
-rw-r--r--  transcripts/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities.mdwn | 6
-rw-r--r--  transcripts/breaking-bitcoin/2017/changing-consensus-rules-without-breaking-bitcoin.mdwn | 4
-rw-r--r--  transcripts/breaking-bitcoin/2017/interview-adam-back-elizabeth-stark.mdwn | 2
-rw-r--r--  transcripts/c-lightning/2021-10-04-developer-call.md | 2
-rw-r--r--  transcripts/c-lightning/2021-10-18-developer-call.md | 4
-rw-r--r--  transcripts/c-lightning/2021-11-01-developer-call.md | 4
-rw-r--r--  transcripts/chaincode-labs/2019-06-17-john-newbery-security-models.mdwn | 4
-rw-r--r--  transcripts/chaincode-labs/2020-01-28-pieter-wuille.mdwn | 2
-rw-r--r--  transcripts/chicago-bitdevs/2020-07-08-socratic-seminar.mdwn | 12
-rw-r--r--  transcripts/chicago-bitdevs/2020-08-12-socratic-seminar.mdwn | 4
-rw-r--r--  transcripts/gmaxwell-2017-08-28-deep-dive-bitcoin-core-v0.15.mdwn | 14
-rw-r--r--  transcripts/gmaxwell-2017-11-27-advances-in-block-propagation.mdwn | 2
-rw-r--r--  transcripts/gmaxwell-confidential-transactions.mdwn | 2
-rw-r--r--  transcripts/greg-maxwell/greg-maxwell-taproot-pace.mdwn | 4
-rw-r--r--  transcripts/honey-badger-diaries/2020-04-24-kevin-loaec-antoine-poinsot-revault.mdwn | 4
-rw-r--r--  transcripts/la-bitdevs/2020-06-18-luke-dashjr-segwit-psbt-vulnerability.mdwn | 2
-rw-r--r--  transcripts/layer2-summit/2018/lightning-overview.mdwn | 6
-rw-r--r--  transcripts/layer2-summit/2018/scriptless-scripts.mdwn | 2
-rw-r--r--  transcripts/lets-talk-bitcoin-podcast/2017-06-04-consensus-uasf-and-forks.mdwn | 2
-rw-r--r--  transcripts/lightning-conference/2019/2019-10-20-antoine-riard-rust-lightning.mdwn | 2
-rw-r--r--  transcripts/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing.mdwn | 2
-rw-r--r--  transcripts/lightning-conference/2019/2019-10-20-nadav-kohen-payment-points.mdwn | 2
-rw-r--r--  transcripts/lightning-hack-day/2020-05-03-christian-decker-lightning-backups.mdwn | 2
-rw-r--r--  transcripts/london-bitcoin-devs/2018-06-12-adam-gibson-unfairly-linear-signatures.mdwn | 2
-rw-r--r--  transcripts/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash.mdwn | 8
-rw-r--r--  transcripts/london-bitcoin-devs/2020-02-04-andrew-poelstra-miniscript.mdwn | 2
-rw-r--r--  transcripts/london-bitcoin-devs/2020-05-05-socratic-seminar-payjoins.mdwn | 2
-rw-r--r--  transcripts/london-bitcoin-devs/2020-05-19-socratic-seminar-vaults.mdwn | 8
-rw-r--r--  transcripts/london-bitcoin-devs/2020-05-26-kevin-loaec-antoine-poinsot-revault.mdwn | 4
-rw-r--r--  transcripts/london-bitcoin-devs/2020-06-16-socratic-seminar-bip-schnorr.mdwn | 2
-rw-r--r--  transcripts/london-bitcoin-devs/2020-06-23-socratic-seminar-coinswap.mdwn | 4
-rw-r--r--  transcripts/london-bitcoin-devs/2020-07-21-socratic-seminar-bip-taproot.mdwn | 24
-rw-r--r--  transcripts/london-bitcoin-devs/2020-08-19-socratic-seminar-signet.mdwn | 8
-rw-r--r--  transcripts/london-bitcoin-devs/2021-07-20-socratic-seminar-taproot-rollout.mdwn | 6
-rw-r--r--  transcripts/london-bitcoin-devs/2021-08-10-socratic-seminar-dlcs.mdwn | 4
-rw-r--r--  transcripts/mimblewimble-podcast.mdwn | 2
-rw-r--r--  transcripts/mit-bitcoin-expo-2017/scaling-and-utxos.mdwn | 2
-rw-r--r--  transcripts/mit-bitcoin-expo-2018/improving-bitcoin-smart-contract-efficiency.mdwn | 4
-rw-r--r--  transcripts/mit-bitcoin-expo-2020/2020-03-07-andrew-poelstra-taproot.mdwn | 2
-rw-r--r--  transcripts/ruben-somsen/2020-05-11-ruben-somsen-succinct-atomic-swap.mdwn | 2
-rw-r--r--  transcripts/scalingbitcoin/hong-kong/a-bevy-of-block-size-proposals-bip100-bip102-and-more.mdwn | 2
-rw-r--r--  transcripts/scalingbitcoin/hong-kong/overview-of-bips-necessary-for-lightning.mdwn | 2
-rw-r--r--  transcripts/scalingbitcoin/hong-kong/validation-cost-metric.mdwn | 2
-rw-r--r--  transcripts/scalingbitcoin/milan/coin-selection.mdwn | 2
-rw-r--r--  transcripts/scalingbitcoin/milan/mimblewimble.mdwn | 2
-rw-r--r--  transcripts/scalingbitcoin/milan/onion-routing-in-lightning.mdwn | 2
-rw-r--r--  transcripts/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market.mdwn | 2
-rw-r--r--  transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/blockchain-design-patterns.mdwn | 4
-rw-r--r--  transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/statechains.mdwn | 2
-rw-r--r--  transcripts/scalingbitcoin/tel-aviv-2019/work-in-progress.mdwn | 4
-rw-r--r--  transcripts/scalingbitcoin/tokyo-2018/atomic-swaps.mdwn | 2
-rw-r--r--  transcripts/scalingbitcoin/tokyo-2018/edgedevplusplus/taproot-and-graftroot.mdwn | 4
-rw-r--r--  transcripts/scalingbitcoin/tokyo-2018/scriptless-ecdsa.mdwn | 2
-rw-r--r--  transcripts/sf-bitcoin-meetup/2017-03-29-new-address-type-for-segwit-addresses.mdwn | 2
-rw-r--r--  transcripts/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets.mdwn | 12
-rw-r--r--  transcripts/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my.mdwn | 14
-rw-r--r--  transcripts/sf-bitcoin-meetup/2019-12-16-bip-taproot-bip-tapscript.mdwn | 10
-rw-r--r--  transcripts/sf-bitcoin-meetup/2020-11-30-socratic-seminar-20.mdwn | 8
-rw-r--r--  transcripts/stanford-blockchain-conference/2019/htlcs-considered-harmful.mdwn | 2
-rw-r--r--  transcripts/stephan-livera-podcast/2020-08-13-christian-decker-lightning-topics.mdwn | 8
-rw-r--r--  transcripts/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation.mdwn | 20
-rw-r--r--  transcripts/sydney-bitcoin-meetup/2020-05-19-socratic-seminar.mdwn | 18
-rw-r--r--  transcripts/sydney-bitcoin-meetup/2020-06-23-socratic-seminar.mdwn | 6
-rw-r--r--  transcripts/sydney-bitcoin-meetup/2020-07-21-socratic-seminar.mdwn | 10
-rw-r--r--  transcripts/sydney-bitcoin-meetup/2020-08-25-socratic-seminar.mdwn | 4
-rw-r--r--  transcripts/sydney-bitcoin-meetup/2021-02-23-socratic-seminar.mdwn | 6
-rw-r--r--  transcripts/sydney-bitcoin-meetup/2021-06-01-socratic-seminar.mdwn | 4
-rw-r--r--  transcripts/sydney-bitcoin-meetup/2021-07-06-socratic-seminar.mdwn | 12
-rw-r--r--  transcripts/tftc-podcast/2021-02-11-matt-corallo-taproot-activation.mdwn | 4
-rw-r--r--  transcripts/wasabi-research-club/2020-06-15-coinswap.mdwn | 2
100 files changed, 241 insertions(+), 241 deletions(-)
diff --git a/transcripts/2016-july-bitcoin-developers-miners-meeting/cali2016.mdwn b/transcripts/2016-july-bitcoin-developers-miners-meeting/cali2016.mdwn
index af85b8b..4c03590 100644
--- a/transcripts/2016-july-bitcoin-developers-miners-meeting/cali2016.mdwn
+++ b/transcripts/2016-july-bitcoin-developers-miners-meeting/cali2016.mdwn
@@ -1294,7 +1294,7 @@ When you said you wanted to use the HK agreement to make it a community proposal
I mean everyone. The point is that politically, the Bitcoin ecosystem should not accept imposed rule-changes on the network. And so, a hard-fork that comes out of a closed-door meeting sounds like an imposed rule change on the network. There are many people who will principally reject this, reflexively. I want there to be collaboration. Most people will ignore it. But I want there to be collaboration so that we can say this is a product of the Bitcoin community. It cannot be a closed-door agreement.
-This was Luke's post on the mailing list: <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012389.html>
+This was Luke's post on the mailing list: <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012389.html>
At this point, it would be a little bit difficult or challenging to reframe the HK agreement to a broader community-based effort. Trying to reframe the HK agreement to an open community agreement would be difficult. The simplest way is to, we just based on the HK agreement, then we try to pull people into that and then gain consensus on that. That would be the simplest way.
diff --git a/transcripts/2018-01-24-rusty-russell-future-bitcoin-tech-directions.mdwn b/transcripts/2018-01-24-rusty-russell-future-bitcoin-tech-directions.mdwn
index 49ce264..dbf218e 100644
--- a/transcripts/2018-01-24-rusty-russell-future-bitcoin-tech-directions.mdwn
+++ b/transcripts/2018-01-24-rusty-russell-future-bitcoin-tech-directions.mdwn
@@ -93,7 +93,7 @@ But bip114 is not the only game in town. There's a separate pair of proposals (b
# Taproot
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html>
There is, however, a problem with MAST which is that MAST inputs are obvious. You're doing a MAST payment. There's a proposal called taproot. This is why you check twitter during talks. It uses a new style of pay-to-pubkeyhash. You can-- the users, when they are setting this up, use a base key and a hash of a script that they want the alternative to be. They use that to make a new key, then they just say use this new thing and pay to that key. They can pay using key and signature like before. Or you can reveal the base key and the script, and it will execute the script for you. In the common case, it looks like pay-to-pubkeyhash which helps fungibility. In the exceptional case, you provide the components and you can do MAST. This overrides what I had said earlier about MAST coming soon because this proposal looks really awesome but taproot doesn't have any code yet.
@@ -159,7 +159,7 @@ Covenants would be a powerful improvement to bitcoin scripts. Current bitcoin sc
<http://diyhpl.us/wiki/transcripts/gmaxwell-confidential-transactions/>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015346.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015346.html>
Enough about scrips for a moment. The blockchain is public. In particular, the amounts are public, and this actually reveals a lot of information. By looking at the amounts, you can often tell which one is a change output and going back to the same person. All that nodes really need to know is not necessarily the amounts, but rather that there is no inflation in the system. They only need to know that the sum of the input amounts is equal to the sum of the output amounts plus the miner fee amount. They also need to kknow that there was no overflow. You don't want a 1 million BTC transaction and a -1 million BTC transaction.
@@ -237,7 +237,7 @@ As far as I know, nobody is actually working on it, so it's in the further futur
# Rolling UTXO set hashes
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html>
Another subset of the UTXO set problem is that there's no compact way to prove that UTXO set is correct. XORing all the UTXOs together is not secure. There's this idea of a rolling UTXO set hash. You can update a 256-bit number, it's fairly easy to calculate, you can update this one hash when a new block comes in. If you're a full node, you can record your UTXO set hash and then validate that your UTXO set hasn't been corrupted. But it also helps the idea of initial node bootstrap and initial block download. If you get TXO hash from someone you trust, say someone writing the bitcoin software, you can go anywhere and get the UTXO set, and you can download it, and just check that the hash matches. Maybe if things keep getting bigger, then this might be a middle ground between running a lite node and running a fully-validating full node from the dawn of time itself.
@@ -293,7 +293,7 @@ Neutrino inverts this. The full nodes produce a 20 kilobyte block summary and th
# NODE\_NETWORK\_LIMITED (bip159)
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014314.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014314.html>
<https://github.com/bitcoin/bips/blob/master/bip-0159.mediawiki>
diff --git a/transcripts/2019-01-05-unchained-capital-socratic-seminar.mdwn b/transcripts/2019-01-05-unchained-capital-socratic-seminar.mdwn
index c225508..98afc0a 100644
--- a/transcripts/2019-01-05-unchained-capital-socratic-seminar.mdwn
+++ b/transcripts/2019-01-05-unchained-capital-socratic-seminar.mdwn
@@ -253,11 +253,11 @@ Optech Newsletters:
* <https://bitcoinops.org/en/newsletters/2018/12/04/>
* <https://bitcoinops.org/en/newsletters/2018/12/11/>
-[bitcoin-dev] Schnorr and taproot (etc) upgrade <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016556.html>
+[bitcoin-dev] Schnorr and taproot (etc) upgrade <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016556.html>
-[bitcoin-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning) <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016518.html>
+[bitcoin-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning) <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016518.html>
-[bitcoin-dev] Safer sighashes and more granular SIGHASH\_NOINPUT <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016488.html>
+[bitcoin-dev] Safer sighashes and more granular SIGHASH\_NOINPUT <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016488.html>
Minisketch: an optimized library for BCH-based set reconciliation <https://github.com/sipa/minisketch>
diff --git a/transcripts/2019-02-09-mcelrath-on-chain-defense-in-depth.mdwn b/transcripts/2019-02-09-mcelrath-on-chain-defense-in-depth.mdwn
index 166a14b..34b2653 100644
--- a/transcripts/2019-02-09-mcelrath-on-chain-defense-in-depth.mdwn
+++ b/transcripts/2019-02-09-mcelrath-on-chain-defense-in-depth.mdwn
@@ -165,11 +165,11 @@ Remember, a vaulted UTXO is a UTXO encumbered such that the spending transaction
* Russell O'Connor, Marta Piekarska: [OP\_CHECKSIGFROMSTACK](https://fc17.ifca.ai/bitcoin/papers/bitcoin17-final28.pdf) and <https://blockstream.com/2016/11/02/covenants-in-elements-alpha/>
-* Jeremy Rubin, 2019: [OP\_SECURETHEBAG](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/016997.html) or [here (also has more on NOINPUT and CHECKSIGFROMSTACK etc)](https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2019-06-06-noinput-etc/)
+* Jeremy Rubin, 2019: [OP\_SECURETHEBAG](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/016997.html) or [here (also has more on NOINPUT and CHECKSIGFROMSTACK etc)](https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2019-06-06-noinput-etc/)
-* OP\_CAT + OP\_CHECKSIGFROMSTACK <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016946.html>
+* OP\_CAT + OP\_CHECKSIGFROMSTACK <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016946.html>
-* OP\_CHECKTXOUTSCRIPTHASHVERIFY <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-October/016448.html>
+* OP\_CHECKTXOUTSCRIPTHASHVERIFY <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-October/016448.html>
Eyal, Sirer and Moser wrote a paper in 2016 and showed it at Financial Crypto 2016. They proposed a new opcode OP\_CHECKOUTPUTVERIFY.
@@ -205,7 +205,7 @@ Because the SIGHASH depends on the input txid, which in turn depends on the prev
The way this works is that when you do EC recovery on a signature, you have three things really-- you have a signature, a pubkey, and you've got a message. Those are the three things involved in verifying a signature. The pubkey and signature are obvious to everyone here right? The message here is really the SIGHASH. It's a concatenation of a whole bunch of data extracted from the transaction. It depends on the version, hash of the previous outputs, and the outputs, and various other things. The signature authorizes that. What we want to do is not depend on the pubkey in the input. That creates a circular dependency where you can't do EC recovery.
-There's a proposal out there called [SIGHASH\_NOINPUT](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016549.html) which was originally proposed for the lightning network where it would be nice to be able to take the set of outputs on the channel and be able to rearrange them, without making an on-chain transaction. If my transactions commit to the txids on-chain, then I can't do that. SIGHASH\_NOINPUT decouples your lightning transactions from those inputs and let you rearrange those, and therefore let you add more funds and remove funds from the transaction. I think it's very likely that NOINPUT will be implemented and deployed because it is so helpful to the lightning network. So, can we repurpose SIGHASH\_NOINPUT for a vault?
+There's a proposal out there called [SIGHASH\_NOINPUT](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016549.html) which was originally proposed for the lightning network where it would be nice to be able to take the set of outputs on the channel and be able to rearrange them, without making an on-chain transaction. If my transactions commit to the txids on-chain, then I can't do that. SIGHASH\_NOINPUT decouples your lightning transactions from those inputs and let you rearrange those, and therefore let you add more funds and remove funds from the transaction. I think it's very likely that NOINPUT will be implemented and deployed because it is so helpful to the lightning network. So, can we repurpose SIGHASH\_NOINPUT for a vault?
# Schnorr BIP and SIGHASH\_NOINPUT discussion
@@ -311,15 +311,15 @@ In all cases here, batching is difficult. This is one consequence of this. You g
# Related
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015793.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015793.html>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html> [or](https://www.coindesk.com/the-vault-is-back-bitcoin-coder-to-revive-plan-to-shield-wallets-from-theft) <https://twitter.com/kanzure/status/1159101146881036289>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html> [or](https://www.coindesk.com/the-vault-is-back-bitcoin-coder-to-revive-plan-to-shield-wallets-from-theft) <https://twitter.com/kanzure/status/1159101146881036289>
<https://blog.oleganza.com/post/163955782228/how-segwit-makes-security-better>
<https://bitcointalk.org/index.php?topic=5111656>
-"tick method": <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017237.html>
+"tick method": <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017237.html>
<http://web.archive.org/web/20180503151920/https://blog.sldx.com/re-imagining-cold-storage-with-timelocks-1f293bfe421f?gi=da99a4a00f67>
diff --git a/transcripts/adam3us-bitcoin-scaling-tradeoffs.mdwn b/transcripts/adam3us-bitcoin-scaling-tradeoffs.mdwn
index 4d03d23..46eaed5 100644
--- a/transcripts/adam3us-bitcoin-scaling-tradeoffs.mdwn
+++ b/transcripts/adam3us-bitcoin-scaling-tradeoffs.mdwn
@@ -146,7 +146,7 @@ Another thing is a more extensible script system, which allows for exmaple [Schn
<https://www.youtube.com/watch?v=HEZAlNBJjA0&t=1h>
-At the beginning, this is coming full circle back to the requirements in the beginning. These are the requirements that we were talking about, to double the transactions per second for three years in a row or something, and in parallel have Lightning scalability as well. This is a sketch of a sequence of upgrades which should be able to easily achieve that throughput. This is my opinion. Things can be done in a different sequence, or different developers might think that [IBLT](http://diyhpl.us/wiki/transcripts/scalingbitcoin/bitcoin-block-propagation-iblt-rusty-russell/) should happen before Schnorr or in parallel or afterwards or something, these details can get worked out. This is my sketch of what I think is reasonably realistic, using the [scalability roadmap](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011865.html) ([FAQ](https://bitcoin.org/en/bitcoin-core/capacity-increases-faq)) as an outline.
+At the beginning, this is coming full circle back to the requirements in the beginning. These are the requirements that we were talking about, to double the transactions per second for three years in a row or something, and in parallel have Lightning scalability as well. This is a sketch of a sequence of upgrades which should be able to easily achieve that throughput. This is my opinion. Things can be done in a different sequence, or different developers might think that [IBLT](http://diyhpl.us/wiki/transcripts/scalingbitcoin/bitcoin-block-propagation-iblt-rusty-russell/) should happen before Schnorr or in parallel or afterwards or something, these details can get worked out. This is my sketch of what I think is reasonably realistic, using the [scalability roadmap](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011865.html) ([FAQ](https://bitcoin.org/en/bitcoin-core/capacity-increases-faq)) as an outline.
If we start with the segregated witness soft-fork, we can get approximately 2 MB as wallets and companies opt-in, and that's in current late-stage testing. The last testnet before production is running right now, I think segnet4. That should be relatively soon if the ecosystem wants to activate it and opt-in and start adopting it to achieve scale and the other fixes it comes with.
diff --git a/transcripts/advancing-bitcoin/2020/2020-02-06-antoine-riard-taproot-lightning.mdwn b/transcripts/advancing-bitcoin/2020/2020-02-06-antoine-riard-taproot-lightning.mdwn
index 063c928..b528f0f 100644
--- a/transcripts/advancing-bitcoin/2020/2020-02-06-antoine-riard-taproot-lightning.mdwn
+++ b/transcripts/advancing-bitcoin/2020/2020-02-06-antoine-riard-taproot-lightning.mdwn
@@ -142,7 +142,7 @@ There is the idea of using the same cryptography trick of Schnorr linearity. Bef
# HTLC: stuck payments
-There is another issue right now which is being discussed on the [mailing list](https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-June/002029.html). You send a payment, one of the hops on the payment path is going to be offline or not available. To cancel the payment and wait to send another one you have to first wait until the HTLC timelock expires to get the funds back to the original sender. Ideally you want a way so that the sender can cancel the payment without waiting.
+There is another issue right now which is being discussed on the [mailing list](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-June/002029.html). You send a payment, one of the hops on the payment path is going to be offline or not available. To cancel the payment and wait to send another one you have to first wait until the HTLC timelock expires to get the funds back to the original sender. Ideally you want a way so that the sender can cancel the payment without waiting.
# Schnorr Taproot HTLC: cancellable payments
@@ -176,5 +176,5 @@ A - There are multiple ways. First you can integrate Taproot for the funding out
Q - You said Lightning has privacy guarantees on its protocol but developers should make sure they don’t ruin the privacy guarantees on top of the base Lightning protocol. Do you see a tendency that applications are taking shortcuts on Lightning and ruining the privacy?
-A - Yes. Right now there is this idea of [trampoline routing](https://diyhpl.us/wiki/transcripts/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing/) which is maybe great for user experience but on the privacy side it is broken. What gives us a lot of privacy in Lightning is source routing. Going to trampoline routing means the person who does the trampoline routing for you is going to learn who you are if you are using one hop and worse is going to know who you are sending funds to. There is trampoline routing, if you are not using privacy preserving Lightning clients… Nobody has done a real privacy study on Lightning clients. Neutrino, bloom filters, no one has done real research. They are not great, there are privacy leaks if you are using them. There are Lightning privacy issues and there are base layer privacy issues. If you are building an application you should have all of them in mind. It is really hard. Using the node pubkey I don’t think is great. I would like [rendez-vous routing](https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001498.html) to be done on Lightning to avoid announcing my pubkey, having my invoice tied to my pubkey and my pubkey being part of Lightning. And channel announcement of course. I hope at some point we have some kind of proof of ownership so I can prove I own this channel without revealing which UTXO I own.
+A - Yes. Right now there is this idea of [trampoline routing](https://diyhpl.us/wiki/transcripts/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing/) which is maybe great for user experience but on the privacy side it is broken. What gives us a lot of privacy in Lightning is source routing. Going to trampoline routing means the person who does the trampoline routing for you is going to learn who you are if you are using one hop and worse is going to know who you are sending funds to. There is trampoline routing, if you are not using privacy preserving Lightning clients… Nobody has done a real privacy study on Lightning clients. Neutrino, bloom filters, no one has done real research. They are not great, there are privacy leaks if you are using them. There are Lightning privacy issues and there are base layer privacy issues. If you are building an application you should have all of them in mind. It is really hard. Using the node pubkey I don’t think is great. I would like [rendez-vous routing](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001498.html) to be done on Lightning to avoid announcing my pubkey, having my invoice tied to my pubkey and my pubkey being part of Lightning. And channel announcement of course. I hope at some point we have some kind of proof of ownership so I can prove I own this channel without revealing which UTXO I own.
diff --git a/transcripts/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration.mdwn b/transcripts/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration.mdwn
index f00df53..c520051 100644
--- a/transcripts/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration.mdwn
+++ b/transcripts/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration.mdwn
@@ -14,7 +14,7 @@ BIP 325: https://github.com/bitcoin/bips/blob/master/bip-0325.mediawiki
Signet on Bitcoin Wiki: https://en.bitcoin.it/wiki/Signet
-Bitcoin dev mailing list: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html
+Bitcoin dev mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html
Bitcoin Core PR 16411 (closed): https://github.com/bitcoin/bitcoin/pull/16411
diff --git a/transcripts/austin-bitcoin-developers/2019-06-29-hardware-wallets.mdwn b/transcripts/austin-bitcoin-developers/2019-06-29-hardware-wallets.mdwn
index ae87049..a8ec281 100644
--- a/transcripts/austin-bitcoin-developers/2019-06-29-hardware-wallets.mdwn
+++ b/transcripts/austin-bitcoin-developers/2019-06-29-hardware-wallets.mdwn
@@ -322,7 +322,7 @@ For signing, you don't need to reassemble the Shamir secret share private key be
Say you are doing SSS over a master private key. With each shard, we generate a partial signature over a transaction. A Schnorr signature is the sum of this random point plus the hash times the private key, and it's linear. We can apply the same function to recombine the signature parts. We can apply the same function to s1, s2 and s3 and then you get the full signature S this way without ever combining the parts of the keys into the full key.
-The multisignature is nice when you're not the only owner of the private keys, like escrow with your friends and family or whatever. The Shamir Secret Sharing scheme with Schnorr is great if you are the only owner of the key, so you only need the pieces of the key. There's a paper I can give you that explains how the shards are generated, or if the virtual master private key is generated on a single machine. Multisig is better for commercial custody, and Shamir is better for self-custody and self cold storage. Classic multisignature will still be available with Schnorr, you don't have to use the key combination you would still use the CHECKMULTISIG. I think you can still use it. ((Nope, you can't- see the "Design" section of [bip-tapscript](https://github.com/sipa/bips/blob/bip-schnorr/bip-tapscript.mediawiki) or [CHECKDLSADD](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016556.html).)) From a mining perspective, CHECKMULTISIG makes it more expensive to validate that transaction because it has many signatures.
+The multisignature is nice when you're not the only owner of the private keys, like escrow with your friends and family or whatever. The Shamir Secret Sharing scheme with Schnorr is great if you are the only owner of the key, so you only need the pieces of the key. There's a paper I can give you that explains how the shards are generated, or if the virtual master private key is generated on a single machine. Multisig is better for commercial custody, and Shamir is better for self-custody and self cold storage. Classic multisignature will still be available with Schnorr, you don't have to use the key combination you would still use the CHECKMULTISIG. I think you can still use it. ((Nope, you can't- see the "Design" section of [bip-tapscript](https://github.com/sipa/bips/blob/bip-schnorr/bip-tapscript.mediawiki) or [CHECKDLSADD](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016556.html).)) From a mining perspective, CHECKMULTISIG makes it more expensive to validate that transaction because it has many signatures.
I was thinking about using miniscript policies to unlock the secure element. To unlock the secure element, you could just use a PIN code or you could use it in a way where you need signatures from other devices or one-time codes from some other authentication mechanism. We needed to implement miniscript anyway. We're not restricted to bitcoin sigop limits or anything here; so the secure element should be able to verify this miniscript script with whatever authentication keys or passwords you are using. It can even be CHECKMULTISIG with the 15 key limit removed.
diff --git a/transcripts/austin-bitcoin-developers/2019-08-22-socratic-seminar-2.mdwn b/transcripts/austin-bitcoin-developers/2019-08-22-socratic-seminar-2.mdwn
index db2880e..e2c0174 100644
--- a/transcripts/austin-bitcoin-developers/2019-08-22-socratic-seminar-2.mdwn
+++ b/transcripts/austin-bitcoin-developers/2019-08-22-socratic-seminar-2.mdwn
@@ -18,7 +18,7 @@ I don't have anything prepared, but we can open up some of these links and I can
<https://bitcoinops.org/en/newsletters/2019/07/31/>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017169.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017169.html>
Fidelity bonds for providing sybil resistance to joinmarket. Has anyone used joinmarket before? No? Nobody? Nice try... Right. So, joinmarket is a wallet that is specifically designed for doing coinjoins. A coinjoin is a way to do a little bit of mixing or tumbling of coins to increase the privacy or fungibility of your coins. There's a few different options to it. It essentially uses IRC chatbots to solicit makers and takers. So if you really want to mix your coins, you're a taker, and a maker on the other side puts up funds to mix with your funds. So there's this maker/taker model which is interesting. I haven't used it, but it looks to be facilitated by IRC chat. The maker, the person putting in money, doesn't necessarily need privacy, makes a small percentage on their bitcoin. It's all done with smart contracts, and your coins aren't at risk at any point, except in as much that they are stored in a hot wallet to interact with the protocol. The sybil resistance that they are talking about here is that, so, Chris Belcher has a great privacy entry on the bitcoin wiki so check that out sometime. He's one of the joinmarket developers. He notices that it costs very little to flood the network with a bunch of makers if you're a malicious actor, and this breaks privacy because the chances of you running into a malicious or fraudulent chainalysis type company, it's not that they can take your coins, but they would be invading your privacy. The cost of them doing this is quite low, so the chances of them doing this is quite high as a result.
@@ -38,7 +38,7 @@ All you can do with signing is prove that you at some point had that private key
# Newsletter 57: Bloom filter discussion
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017145.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017145.html>
<https://github.com/bitcoin/bitcoin/issues/16152>
@@ -180,7 +180,7 @@ A: I was talking with instagibbs who has been working on HWI. He says the trezor
Cool, thanks for describing that.
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html>
<https://www.coindesk.com/the-vault-is-back-bitcoin-coder-to-revive-plan-to-shield-wallets-from-theft>
diff --git a/transcripts/austin-bitcoin-developers/2020-01-21-socratic-seminar-5.mdwn b/transcripts/austin-bitcoin-developers/2020-01-21-socratic-seminar-5.mdwn
index 496cb79..b3e80c6 100644
--- a/transcripts/austin-bitcoin-developers/2020-01-21-socratic-seminar-5.mdwn
+++ b/transcripts/austin-bitcoin-developers/2020-01-21-socratic-seminar-5.mdwn
@@ -76,7 +76,7 @@ We missed a month because we had the taproot workshop last month. BitDevsNYC is
## OP\_CHECKTEMPLATEVERIFY
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-November/017494.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-November/017494.html>
This is Jeremy Rubin's work. The idea is that it's a covenant proposal for bitcoin. The idea is that the UTXO can only... ((bryan took this one)). This workshop is going to be at the end of the month. Says he is going to sponsor people so if you're into this then consider it. Because it can be self-referential, you can have accidental turing completeness. The initial version had this problem. It might also be used by exchanges on withdrawal transactions to prevent or blacklist your future transactions.
diff --git a/transcripts/austin-bitcoin-developers/2022-02-17-socratic-seminar-25.mdwn b/transcripts/austin-bitcoin-developers/2022-02-17-socratic-seminar-25.mdwn
index c5879cd..d45a714 100644
--- a/transcripts/austin-bitcoin-developers/2022-02-17-socratic-seminar-25.mdwn
+++ b/transcripts/austin-bitcoin-developers/2022-02-17-socratic-seminar-25.mdwn
@@ -44,7 +44,7 @@ What's an anchor output? The anchor outputs are the way essentially that you pre
# TXHASH + CHECKSIGFROMSTACKVERIFY
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019813.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019813.html>
Taproot activated recently. People are asking, what's next? Some people were asking about CTV and ANYPREVOUT. There's a heated discussion with drama lately. Then Russell O'Connor dropped a bomb about TXHASH + CHECKSIGFROMSTACKVERIFY which kind of lets you do both things in a single proposal. Well, kind of.
@@ -88,7 +88,7 @@ Alright, let's move on.
# CTV improve DLCs
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019808.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019808.html>
DLCs are discrete log contracts. It's like a way to do DeFi on bitcoin. An oracle attests to an event and then you can form a bitcoin transaction on that and change the outcome based on what the oracle signs. You can do the same thing in bitcoin using adaptor signatures. For my super bowl bet, we had three transactions based on the possible outcomes. We had a signature for each of them, the oracle posted an event and when that oracle signature goes out they make one of the transactions valid and then it completes. Three outcomes is pretty simple, but with the bitcoin price there's technically infinite outcomes for what it could be but there's still millions of signatures required in DLCs. It's a nightmare.
@@ -104,7 +104,7 @@ The same thing that is possible is what makes another benefit of CTV possible is
# CTV signet
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019925.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019925.html>
There is now a signet for CTV that behaves in a more predictable manner. If you want to play around with it, you can test it on this signet.
@@ -138,7 +138,7 @@ You can make people pay for BOLT12 messages and you can ratelimit it by making p
# Replace-by-fee
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019817.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019817.html>
We have like 10 minutes left; we can keep talking about this, but I want to move on to RBF. If the network looks like this in 10 years, then you could argue that bitcoin has failed. We need to be paying transaction fees. The base layer needs to have utility. We need to figure out how to deal with this stuff. There were two debates on the mailing list that were super interesting recently. This is back to bitcoin layer 1 which implicates lightning though because we're talking about these second layer protocols that rely on previously signed transactions because paying for fees is really difficult...
@@ -152,7 +152,7 @@ When was package relay put into Bitcoin Core? It's not fully in there. There are
# Fee bumping
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019879.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019879.html>
jamesob started a thread; he was noting that, wow this is complex. Why are we spending so much effort throwing in hacks? The way he described it is that CPFP and RBF are hacks because we're not dealing with fees in a sensical way for the way we're doing transactions today. The foundational problem being identified in all of these discussions is that right now, the entity paying for fees is the same thing that the fees are paying for. So a transaction has to pay for its own fees, and the proposition is... if you have a transaction you're setting in stone for the lightning network, for example, where you don't know what the fee market is going to be in a few years, or if you're talking about inheritance protocols... you might be locking in too high of a fee or too low of a fee. jamesob gets into this; you should be able to pay separately from that thing you're paying for, so that you can prepare the transaction ahead of time and say okay, I've signed and committed to spending these utxos; when I'm ready to broadcast it, then that's when I should decide how much to pay. This harkens back to transaction sponsors, where you publish another transaction that says I want to pay for this other transaction, and if it doesn't get in then don't take my utxos to pay for that.
diff --git a/transcripts/austin-bitcoin-developers/2022-05-19-socratic-seminar-28.mdwn b/transcripts/austin-bitcoin-developers/2022-05-19-socratic-seminar-28.mdwn
index e73fdd4..a7c6470 100644
--- a/transcripts/austin-bitcoin-developers/2022-05-19-socratic-seminar-28.mdwn
+++ b/transcripts/austin-bitcoin-developers/2022-05-19-socratic-seminar-28.mdwn
@@ -100,7 +100,7 @@ Bitcoin Excavation and Exclamation Fund on top of ROAST.. so ROAST BEEF. There i
# Quantum protections for post-quantum taproot commitment
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-April/020209.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-April/020209.html>
Today bitcoin uses the discrete log assumption. If quantum computers become a real thing, then they can break all the keys in bitcoin and we'd all lose all of our money. It would suck. Well, only the public keys that are fully public which is most of the keys... the nice thing is that we could have a way to do quantum signatures. The general cryptography space is moving towards this. SSH recently released something for quantum signatures. It's happening outside of bitcoin; it's not just bitcoin worried about this.
@@ -326,7 +326,7 @@ Wasn't there a DLC super smash bros tournament happening at btc++? Yes. There mi
# Blind signing risks
-<https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-May/003579.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-May/003579.html>
We're going to skip this segment.
@@ -370,6 +370,6 @@ This explains why Moon's fees are high. They receive the LN bitcoin and then the
# Wallet policies for descriptor wallets
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-May/020423.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-May/020423.html>
diff --git a/transcripts/austin-bitcoin-developers/2022-08-18-socratic-seminar-31.mdwn b/transcripts/austin-bitcoin-developers/2022-08-18-socratic-seminar-31.mdwn
index 48a5855..78b4c8e 100644
--- a/transcripts/austin-bitcoin-developers/2022-08-18-socratic-seminar-31.mdwn
+++ b/transcripts/austin-bitcoin-developers/2022-08-18-socratic-seminar-31.mdwn
@@ -154,7 +154,7 @@ They are trying to find a maintainer for the Bitcoin Core p2p network. Recently
# Descriptors
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-July/020791.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-July/020791.html>
The proposal is to add a ; semicolon inside of a descriptor to say it could be 0 or 1 at this part in a path. It's an optimization of how to describe a wallet. Can it be an array or is it only two options? There's no really interesting use case where you want the exact same ... if you have multiple accounts...
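The expansion that the semicolon notation implies could be sketched mechanically like this — a toy helper assuming the `<0;1>` multipath syntax from the proposal; the function name and descriptor strings are hypothetical, and real descriptor parsing is more involved:

```python
import re

def expand_multipath(desc: str) -> list[str]:
    """Expand one <a;b;...> group in a descriptor into one descriptor per option."""
    m = re.search(r"<([^>]+)>", desc)
    if not m:
        return [desc]  # no multipath group: the descriptor stands alone
    options = m.group(1).split(";")
    return [desc[:m.start()] + opt + desc[m.end():] for opt in options]

# One line describes both the receive (0) and change (1) derivation paths:
variants = expand_multipath("wpkh(KEY/84h/0h/0h/<0;1>/*)")
assert variants == ["wpkh(KEY/84h/0h/0h/0/*)", "wpkh(KEY/84h/0h/0h/1/*)"]
```

This also illustrates the "array or only two options" question from the discussion: nothing in the syntax itself limits the group to two entries, even if receive/change is the only common use case.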
diff --git a/transcripts/austin-bitcoin-developers/2022-09-15-socratic-seminar-32.mdwn b/transcripts/austin-bitcoin-developers/2022-09-15-socratic-seminar-32.mdwn
index 09f09c6..1a8759f 100644
--- a/transcripts/austin-bitcoin-developers/2022-09-15-socratic-seminar-32.mdwn
+++ b/transcripts/austin-bitcoin-developers/2022-09-15-socratic-seminar-32.mdwn
@@ -214,7 +214,7 @@ The strangest thing is that the actual fee is constant. They made a constant fee
# Wallet label BIP
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-August/020887.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-August/020887.html>
If you export a wallet to a new device or wallet, one thing you lose is the context or labels from your wallet. In this BIP, he is proposing a CSV-based export. In the comments, Trezor came back and said no there's already a json version of this please don't reinvent the wheel. "We have a standard that only we use".
diff --git a/transcripts/austin-bitcoin-developers/2022-10-20-socratic-seminar-33.mdwn b/transcripts/austin-bitcoin-developers/2022-10-20-socratic-seminar-33.mdwn
index 5580573..129029d 100644
--- a/transcripts/austin-bitcoin-developers/2022-10-20-socratic-seminar-33.mdwn
+++ b/transcripts/austin-bitcoin-developers/2022-10-20-socratic-seminar-33.mdwn
@@ -288,7 +288,7 @@ A: Not yet. I would like it to. I haven't put the time into that. MVP right now.
# Lightning fee rate cards
-<https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-September/003685.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-September/003685.html>
I could talk about this a little bit. I gave a presentation bitcoin++ on these. This is a mailing list post which is apparently how things get done in bitcoin. There's this idea in lightning for a long time that it would maybe be a nice interesting thing if we would be able to have negative rates for channels on lightning. It has to do with lightning and liquidity and pricing of that liquidity. Right now there's really only positive prices so you can charge people money to send or forward a payment and you advertise what that rate is. This is a proposal for what we might want to do to change to make it so that the way we advertise rates in such a way that you could make it possible to advertise negative rates and not completely kill the lightning gossip network. The idea is that if you take the ... there is this proposal from a few weeks ago where why not make it so that you can have negative rates in lightning? Take the existing thing and make the number go negative, right? The problem with this is that you probably don't want to set all your liquidity at a negative rate, and you might want to update it quickly, and now the gossip on the lightning network becomes a lot and now gossip will have a monetary value in a way that it currently does but right now-- negative numbers would mean you get paid to send payments. So having up to date information about where the negative fees are would be a competitive advantage for payments because you could get paid to send payments so that probably wouldn't be a good idea. This is a proposal now for how to put gossip, and let people have negative rates for a certain amount of the liquidity in their channel. The other cool thing about this is that it's a dynamic pricing scheme that is done in a static way so in theory we could significantly reduce the amount of gossip getting put out on the network when balances and channels change. 
Right now sometimes balances update and then gossip gets updated; so anyway, there's a lot of gossip. This is one proposal for how to change advertising fee rates. This started off simple: how do we get negative rates on lightning? But when you think about it, a lot more complexity emerges. Rene Pickhardt is a data scientist who knows a lot of math and he likes building models about how things flow through the lightning network. His most recent thing is this "price of anarchy" stuff, which is a way of calculating the Tragedy of the Commons except in mathy terms. He had a counter proposal where he proposed pricing stuff based on how large your HTLCs are instead of what percent of your liquidity, but larger HTLCs are inherently more expensive because you pay more the higher you... anyway, this is cool. The interesting thing about advertising this data is what it means for upgrading gossip with data. Fee rate cards are like publishing information: they let people publish more granular information about how they value liquidity in their channel. You can add that to a protocol. Instead of writing new routing algorithms, people can take this information and use it in their route planning. I think this will start on the advertising side, and then maybe eventually people will start doing interesting routing things with that data. I think that having negative fees might increase arbitrage on liquidity on lightning, which might make a new game to play on lightning.
diff --git a/transcripts/austin-bitcoin-developers/2022-12-15-socratic-seminar-35.mdwn b/transcripts/austin-bitcoin-developers/2022-12-15-socratic-seminar-35.mdwn
index 1072130..69ca548 100644
--- a/transcripts/austin-bitcoin-developers/2022-12-15-socratic-seminar-35.mdwn
+++ b/transcripts/austin-bitcoin-developers/2022-12-15-socratic-seminar-35.mdwn
@@ -24,7 +24,7 @@ The sighash types flags refers to the count of outputs, like all, none, single.
The way that signatures work in bitcoin is that you don't sign the whole data structure. You take a hash of the subset of the transaction and sign that. The signature has a flag for which kinds of things you are taking into consideration in your signature, so that someone else can validate the transaction. You could come up with interesting protocols around changing what you're going to sign. There was a recent proposal earlier this year for TXHASH that would have flags about every little item in the transaction that you could commit to.
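The all/none/single selection can be sketched as output filtering — a simplified model where the helper and constants illustrate the flag semantics; real sighash computation serializes and hashes the transaction itself, and SIGHASH_SINGLE has extra edge cases (e.g. an input index past the last output) not modeled here:

```python
# Legacy sighash type values (low byte of the flag).
SIGHASH_ALL, SIGHASH_NONE, SIGHASH_SINGLE = 0x01, 0x02, 0x03

def committed_outputs(outputs: list, flag: int, input_index: int) -> list:
    """Toy model: which outputs does a signature with this flag commit to?"""
    if flag == SIGHASH_ALL:
        return outputs                  # commit to every output
    if flag == SIGHASH_NONE:
        return []                       # commit to no outputs at all
    if flag == SIGHASH_SINGLE:
        return [outputs[input_index]]   # only the output paired with this input
    raise ValueError("unknown sighash flag")

outs = ["out0", "out1", "out2"]
assert committed_outputs(outs, SIGHASH_ALL, 0) == outs
assert committed_outputs(outs, SIGHASH_NONE, 0) == []
assert committed_outputs(outs, SIGHASH_SINGLE, 1) == ["out1"]
```

The TXHASH idea mentioned above generalizes this: instead of three coarse modes, flags would select individual transaction fields to commit to.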
-exotic sighash types: <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010759.html>
+exotic sighash types: <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010759.html>
A: Okay, I'm not going to type that into Google.
@@ -110,7 +110,7 @@ There was also a transcript of their discussion on this: <https://diyhpl.us/wiki
# Batch validation of CHECKMULTISIG using an extra hint field
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-October/021048.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-October/021048.html>
Bitcoin multisig has a bug where it incorrectly pops an extra item off the stack which means the script will fail unless you put a nulldummy or a zero at the start. You say 2-of-3 multisig and then it tries to get 3 signatures off the stack which is obviously wrong. Off-by-one errors are always the hardest to deal with, right.
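The off-by-one can be seen in a toy stack model — an illustration, not consensus code; signature checking is elided, and the point is only the extra pop that forces the leading dummy element:

```python
def checkmultisig(stack: list) -> bool:
    """Toy model of legacy OP_CHECKMULTISIG's stack handling."""
    n = stack.pop()                           # pubkey count
    pubkeys = [stack.pop() for _ in range(n)]
    m = stack.pop()                           # required signature count
    sigs = [stack.pop() for _ in range(m)]
    dummy = stack.pop()                       # the historical bug: one extra pop
    return len(sigs) == m                     # real signature checks elided

# A 2-of-3 spend: the dummy (0) goes first, then the signatures, then the
# script's pushes of m, the three keys, and n (top of stack on the right).
assert checkmultisig([0, "sig1", "sig2", 2, "pkA", "pkB", "pkC", 3])

# Without the dummy, the extra pop underflows the stack:
try:
    checkmultisig(["sig1", "sig2", 2, "pkA", "pkB", "pkC", 3])
    underflow = False
except IndexError:
    underflow = True
assert underflow
```

The hint-field proposal linked above is about a different cost of this opcode: a validator must try signatures against keys pairwise, which defeats batch validation unless the witness says which key matches which signature.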
@@ -158,7 +158,7 @@ I read this comment in the pull request, and a lot of these complications are ab
# Ephemeral Anchors
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-October/021036.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-October/021036.html>
This solves rule 3 and rule 5... No, it doesn't solve it for everything. It's for a small subset. Also it's opt-in. It's not a general solution. Why make it more complex? Make everything replaceable. At some level, most miner incentive compatible thing is to let anyone bid on re-bid on transactions. This is a whole separate topic. I'm sorry.
diff --git a/transcripts/austin-bitcoin-developers/2023-09-21-socratic-seminar-44.mdwn b/transcripts/austin-bitcoin-developers/2023-09-21-socratic-seminar-44.mdwn
index 1fb8587..017d137 100644
--- a/transcripts/austin-bitcoin-developers/2023-09-21-socratic-seminar-44.mdwn
+++ b/transcripts/austin-bitcoin-developers/2023-09-21-socratic-seminar-44.mdwn
@@ -128,7 +128,7 @@ He has a slide at tabconf of the whole breakdown of why it cost so much. I don't
# Replacement for APO + CTV
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-August/021907.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-August/021907.html>
The guy who wrote this - I think this was born out of frustration: people who want APO and people who want CTV don't need to be at odds. It's a manufactured conflict. APO is ANYPREVOUT and CTV is CHECKTEMPLATEVERIFY. They were two separate proposals that introduced covenants in different ways.
@@ -190,7 +190,7 @@ Maybe it needs to be one piece of a bigger quorum. Maybe it's more like a hot wa
# Bitcoin-like script symbolic tracer
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-August/021922.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-August/021922.html>
Bitcoin script is kind of hard to read. It's hard to parse. Basically this tool... the idea is that you put in a script, they have a fancy one here in the email, I don't know what that script does but he says this tool will create a tree of possible cases that could happen so that you can check are there any edge cases or anything. It traces down every single possible path and so like, he has, he put this fancy script and says.. the first IF branch will always fail, so then you can remove that one. The analysis report will show the possible outcomes. You can parse out everything that can happen in the script including success conditions and fail conditions.
@@ -242,7 +242,7 @@ Didn't it used to be ElementsProject/ and now it is called BlockstreamResearch o
# Scaling lightning with simple covenants
-<https://lists.linuxfoundation.org/pipermail/lightning-dev/2023-September/004092.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2023-September/004092.html>
A lot of CTV motivation is for things like OP\_VAULT. But there is not a lot of talk yet about using CTV for scalability. So here they are talking about how to use covenants to scale Lightning. Right now you have to be online and share a UTXO between people. Lightning won't scale if every single person needs to do that. So we need a way to share UTXOs between people.
diff --git a/transcripts/austin-bitcoin-developers/2023-11-16-socratic-seminar-46.mdwn b/transcripts/austin-bitcoin-developers/2023-11-16-socratic-seminar-46.mdwn
index 8709f9d..fc29ef6 100644
--- a/transcripts/austin-bitcoin-developers/2023-11-16-socratic-seminar-46.mdwn
+++ b/transcripts/austin-bitcoin-developers/2023-11-16-socratic-seminar-46.mdwn
@@ -58,7 +58,7 @@ To use this as 2fa, the server that uses this gets a public key, then they get a
# bitcoin-dev mailing list
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-November/022134.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-November/022134.html>
<https://twitter.com/kanzure/status/1721915515520659510>
diff --git a/transcripts/bitcoin-core-dev-tech/2017-09-07-merkleized-abstract-syntax-trees.mdwn b/transcripts/bitcoin-core-dev-tech/2017-09-07-merkleized-abstract-syntax-trees.mdwn
index f634eb9..9d25c57 100644
--- a/transcripts/bitcoin-core-dev-tech/2017-09-07-merkleized-abstract-syntax-trees.mdwn
+++ b/transcripts/bitcoin-core-dev-tech/2017-09-07-merkleized-abstract-syntax-trees.mdwn
@@ -2,7 +2,7 @@
# Merkleized abstract syntax trees (MAST)
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html>
I am going to talk about the scheme I posted to the mailing list yesterday which is to implement MAST (merkleized abstract syntax trees) in bitcoin in a minimally invasive way as possible. It's broken into two major consensus features that together gives us MAST. I'll start with the last BIP.
@@ -33,7 +33,7 @@ What are the issues with OP_CHECKSIGFROMSTACK? ... You could do <a href="http://
If you don't mind the exponential blow-up, then all things reduce to "long list of ways to unlock this, pick one". The exponential blow-up in a merkle tree turns into a linear blow-up in ways to unlock things, but still exponential work to construct it.
-One of the advantage of this over <a href="https://github.com/bitcoin/bips/blob/775f26c02696e772dac4060aa092d35dedbc647c/bip-0114.mediawiki">jl2012's original bip114</a> ((but <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014963.html">see also</a>)) is that, besides being decomposed into two simpler components... go fetch pubkeys out of a tree, then have key tree signatures, it also helps you deal with exponential blow-up when you start hitting those limits, you could put more logic into the top-level script. How hard is it to make this ... pull out multiple things from the tree at once, because there's sharing to look at. Yes that was some of the feedback, it's doable, and you have to work out the proof structure because it's meant for single outputs. In the root you might have n which is the number of items to pull out. So it might be 3 root leaf leaf leaf proof. But without knowing what that n was, you basically have to use it as a constant in your script, the root is a constant. I think it would be interesting to have a fixed n here.
+One of the advantage of this over <a href="https://github.com/bitcoin/bips/blob/775f26c02696e772dac4060aa092d35dedbc647c/bip-0114.mediawiki">jl2012's original bip114</a> ((but <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014963.html">see also</a>)) is that, besides being decomposed into two simpler components... go fetch pubkeys out of a tree, then have key tree signatures, it also helps you deal with exponential blow-up when you start hitting those limits, you could put more logic into the top-level script. How hard is it to make this ... pull out multiple things from the tree at once, because there's sharing to look at. Yes that was some of the feedback, it's doable, and you have to work out the proof structure because it's meant for single outputs. In the root you might have n which is the number of items to pull out. So it might be 3 root leaf leaf leaf proof. But without knowing what that n was, you basically have to use it as a constant in your script, the root is a constant. I think it would be interesting to have a fixed n here.
The outer version is the hash type or explaining what the hashes are. So the history behind this is that at some point we were thinking about these ... recoverable hashes which I don't think anyone is seriously considering at this point, but historically the reason for extending the size ... I think my idea at the time when this witness versioning came up, we only need 16 versions there because we only need the version number for what hashing scheme to use. You don't want to put the hashing version inside the witness which is constrained by that hash itself because now someone finds a bug and writes ripemd160 and now there's a preimage attack there and now someone can take a witness program but claim it's a ripemd160 hash and spend it that way. So at the very least the hashing scheme itself should be specified outside the witness. But pretty much everything else can be inside, and I don't know what structure that should have, like maybe a bitfield of features (i am not serious about this), but there could be a bit field that has the last two are hashed, into the program.
@@ -71,22 +71,22 @@ The version of merklebranchverify in the link is on top of Core and that's a har
So merklebranchverify maybe should be deployed with the non-SegWit version.. but maybe that would send a conflicting message to the users of bitcoin. Segwit's cleanstack rule prevents us from doing this immediately. Only in v0.
-Need some candidate soft-forks that are highly desirable by the users. Maybe signature aggregation, maybe luke-jr suggestion anti-replay by <a href="https://github.com/bitcoin/bips/blob/master/bip-0115.mediawiki">OP_CHECKBLOCKATHEIGHT</a> <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-September/013161.html">proposal</a>. Needs to be highly desirable by the user to qualify for this particular case though. Needs to be a small change, so maybe not signature aggregation, but maybe signature aggregation since it's still highly desirable.
+Need some candidate soft-forks that are highly desirable by the users. Maybe signature aggregation, maybe luke-jr suggestion anti-replay by <a href="https://github.com/bitcoin/bips/blob/master/bip-0115.mediawiki">OP_CHECKBLOCKATHEIGHT</a> <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-September/013161.html">proposal</a>. Needs to be highly desirable by the user to qualify for this particular case though. Needs to be a small change, so maybe not signature aggregation, but maybe signature aggregation since it's still highly desirable.
They can break a CHECKSIGFROMSTACK... in a hard-fork. CHECKBLOCKHASH has other implications, like transactions aren't-- in the immediately prior block, you can't reinsert the transaction, it's not reorg-safe. It should be restricted to like 100 blocks back at least.
[a]I approve this chatham house rule violation
----
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014979.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014979.html>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015022.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015022.html>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014963.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014963.html>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014960.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014960.html>
<https://www.reddit.com/r/Bitcoin/comments/7p61xq/the_first_mast_pull_requests_just_hit_the_bitcoin/>
diff --git a/transcripts/bitcoin-core-dev-tech/2018-03-05-cross-curve-atomic-swaps.mdwn b/transcripts/bitcoin-core-dev-tech/2018-03-05-cross-curve-atomic-swaps.mdwn
index b652c85..3c80bf5 100644
--- a/transcripts/bitcoin-core-dev-tech/2018-03-05-cross-curve-atomic-swaps.mdwn
+++ b/transcripts/bitcoin-core-dev-tech/2018-03-05-cross-curve-atomic-swaps.mdwn
@@ -2,7 +2,7 @@
Draft of an upcoming scriptless scripts paper. This was at the beginning of 2017. But now an entire year has gone by.
-post-schnorr lightning transactions <https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html>
+post-schnorr lightning transactions <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html>
An adaptor signature.. if you have different generators, then the two secrets to reveal, you just give someone both of them, plus a proof of a discrete log, and then you say learn the secret to one that gets the reveal to be the same. It's a proof of equivalence discrete log. You decompose a secret key into bits. For these purposes, it's okay to have a 128-bit secret key, it's only used once. 128 bits is much smaller than secp and stuff. We can definitely decompose it into bits. You need a private key, but lower than the group ordering in both. I am going to treat the public key... I'm going to assume-- it's going to be small enough, map from every integer into a range, bijection into the set of integers and set of scalars in secp and the set of scalars in... It's just conceptually because otherwise it doesn't make sense to do that. In practice, it's all the same numbers. You split it into bits, similar to the Monero Ring-CT thing, which is an overly complicated way to describe it. What about schnorr ring signatures? Basically, the way this works, a Schnorr signature has a choice of a nonce, you hash it, then you come up with an S value that somehow satisfies some equation involving the hash and the secret nonce and the secret key. The idea is that because the hash commits to everything including the public key and the nonce, the only way you can make this equation work is if you use the secret key, and then you can compute S. And if the hash didn't commit, then you could just solve for what the public nonce should be. But you can't do that because you have to choose a nonce before a hash. In a schnorr ring signature, you have a pile of public keys, you choose a nonce, you get a hash, but then for the next key you have to use that hash, and eventually you get back to the beginning and it wraps around.
You start from one of the secret keys, you start one key past that, you make up random signatures and solve them and go back, and eventually you can't make a random signature and solve it, and that's the final hash, which is already determined, and you're not allowed to make it again, you have to do the right thing, you need a secret key. The verifier doesn't know the order, they can't distinguish them. What's cool about this is that you're doing a bunch of algebra, you're throwing some shit into a hash, and then you're doing more algebra again and repeating. You could do this as long as you want, into the not-your-key ring signature, you just throw random stuff into this hash. You could show a preimage and prove that it was one of those people, it's unlinkability stuff. You could build a range proof out of this schnorr ring signature. Suppose you have a pedersen commitment between 0 and 10, call it commitment C. If the commitment is to 0, then I know the discrete log to C. If the commitment is to 1, then I know C - H, and if it's 2 then it's C-2H, and then I make a ring with C - H, up to C-10H, and if the range is in there, then I will know one of those discrete logs. You split your number into bits, you have a commitment to either 0, 1, or 0, 2, or 0, 4, or 0, 8, and you add all of these up. Each of these individual ring signatures is linear in the number of things. You can get a log sized proof by doing this. These hashes-- because we're putting points into these hashes, and hashes is just using this data. I can do a simultaneous ring signature where at every stage I'm going to share a hash function, I will do both ring signatures, but I'm using the same hash for both of them, so I choose a random nonce over here, I hash them both, and then I compute an S value on that hash on both sides, I get another nonce and I put both of those into the next hash, and eventually I'll have to actually solve on both sides.
So this is clearly two different ring signatures that both are sharing some.... But it's true that the same index.. I have to know the secret key of the same index on both of them. One way to think about this is that I have secp and ed, and I am finding a group structure on this in the obvious way, and then my claim is that these two points, like Tsecp and Ted have the same discrete log. In this larger group, I'm claiming this discrete log of this multipoint is T, T and both the components are the same. I do a ring signature in this cartesian product group, and I'm proving that they are the same in both steps, and this is equivalent to the same thing where I was combining hashes.
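The wrap-around construction described above (make up random signatures around the ring, then solve for the one s-value you can only compute with a secret key) can be sketched in Python over a toy Schnorr group. This is an illustrative assumption throughout: a small multiplicative group stands in for secp256k1/ed25519, and the function names and hash-to-scalar mapping are made up for the sketch, not any production scheme.

```python
import hashlib
import random

q = 1019       # prime order of the subgroup (toy-sized; real groups are ~2^252)
p = 2 * q + 1  # 2039, a safe prime
g = 4          # generator of the order-q subgroup of Z_p^*

def hash_to_scalar(msg, point):
    """The 'throw it into a hash' step: hash the message and a group element."""
    data = msg + point.to_bytes(2, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)

def ring_sign(msg, pubkeys, j, x_j):
    """AOS-style ring signature: signer at index j walks the ring from j+1."""
    n = len(pubkeys)
    e = [0] * n
    s = [0] * n
    a = random.randrange(1, q)                    # secret nonce
    e[(j + 1) % n] = hash_to_scalar(msg, pow(g, a, p))
    i = (j + 1) % n
    while i != j:                                 # fake signatures around the ring
        s[i] = random.randrange(1, q)
        point = (pow(g, s[i], p) * pow(pubkeys[i], e[i], p)) % p
        e[(i + 1) % n] = hash_to_scalar(msg, point)
        i = (i + 1) % n
    s[j] = (a - x_j * e[j]) % q                   # the one step needing a secret key
    return e[0], s

def ring_verify(msg, pubkeys, sig):
    """Recompute the hash chain; it must wrap around to the starting value."""
    e0, s = sig
    e = e0
    for i in range(len(pubkeys)):
        point = (pow(g, s[i], p) * pow(pubkeys[i], e, p)) % p
        e = hash_to_scalar(msg, point)
    return e == e0
```

The verifier can't tell which index was the real signer: every link in the chain looks the same, which is the unlinkability property mentioned above. The "simultaneous" two-group variant shares the `hash_to_scalar` call between two such rings at every step.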
diff --git a/transcripts/bitcoin-core-dev-tech/2018-10-08-mailing-list.mdwn b/transcripts/bitcoin-core-dev-tech/2018-10-08-mailing-list.mdwn
index 2c8ca50..c136c02 100644
--- a/transcripts/bitcoin-core-dev-tech/2018-10-08-mailing-list.mdwn
+++ b/transcripts/bitcoin-core-dev-tech/2018-10-08-mailing-list.mdwn
@@ -4,7 +4,7 @@ Warren Togami
Satoshi's original vision was apparently sourceforge and the sourceforge mailing list. In 2015, we moved away from sourceforge for the mailing list. We had a hard time picking a host deemed to be neutral at the time. We didn't want to deal with this. At the same time, nobody wants to do any work.
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/008637.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/008637.html>
At the time, there was an effort to upgrade to mailman3 because a bigger number is better. Also, some moderators were appointed. Over time it dwindled down to one moderator. Unfortunately, late last year, the Linux Foundation informed us about their intent to discontinue their mailing list hosting services. Mailman2 has been unmaintained upstream for years. It has severe problems from a maintainability perspective, security perspective, the next slide is about that. They tried to throw money at this to fix mailman2 but they failed. Now they want to shut it down, they told me that they had 20,000 other project lists most of which are moved to groups.io. I haven't tried it, but being a commercial service, I think many of us have a kneejerk reaction to proprietary non-open-source things. At least that was the thinking back then.
diff --git a/transcripts/bitcoin-core-dev-tech/2019-06-06-great-consensus-cleanup.mdwn b/transcripts/bitcoin-core-dev-tech/2019-06-06-great-consensus-cleanup.mdwn
index 7f2545d..312bbb8 100644
--- a/transcripts/bitcoin-core-dev-tech/2019-06-06-great-consensus-cleanup.mdwn
+++ b/transcripts/bitcoin-core-dev-tech/2019-06-06-great-consensus-cleanup.mdwn
@@ -1,6 +1,6 @@
2019-06-06
-Great consensus cleanup <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html>
+Great consensus cleanup <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html>
<https://twitter.com/kanzure/status/1136591286012698626>
diff --git a/transcripts/bitcoin-core-dev-tech/2019-06-06-taproot.mdwn b/transcripts/bitcoin-core-dev-tech/2019-06-06-taproot.mdwn
index 6476d53..234b093 100644
--- a/transcripts/bitcoin-core-dev-tech/2019-06-06-taproot.mdwn
+++ b/transcripts/bitcoin-core-dev-tech/2019-06-06-taproot.mdwn
@@ -2,7 +2,7 @@ Taproot
<https://github.com/sipa/bips/blob/bip-schnorr/bip-taproot.mediawiki>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html>
<https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/>
diff --git a/transcripts/bitcoin-core-dev-tech/2019-06-07-signet.mdwn b/transcripts/bitcoin-core-dev-tech/2019-06-07-signet.mdwn
index 5225df7..ffb521f 100644
--- a/transcripts/bitcoin-core-dev-tech/2019-06-07-signet.mdwn
+++ b/transcripts/bitcoin-core-dev-tech/2019-06-07-signet.mdwn
@@ -1,6 +1,6 @@
Signet
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html>
<https://twitter.com/kanzure/status/1136980462524608512>
diff --git a/transcripts/bitcoin-core-dev-tech/2019-06-07-statechains.mdwn b/transcripts/bitcoin-core-dev-tech/2019-06-07-statechains.mdwn
index 44ebe59..68efddc 100644
--- a/transcripts/bitcoin-core-dev-tech/2019-06-07-statechains.mdwn
+++ b/transcripts/bitcoin-core-dev-tech/2019-06-07-statechains.mdwn
@@ -2,7 +2,7 @@ Blind statechains: UTXO transfer with a blind signing server
<https://twitter.com/kanzure/status/1136992734953299970>
-"Formalizing Blind Statechains as a minimalistic blind signing server" <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html>
+"Formalizing Blind Statechains as a minimalistic blind signing server" <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html>
overview: <https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39>
diff --git a/transcripts/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation.mdwn b/transcripts/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation.mdwn
index d78c9a5..21412e2 100644
--- a/transcripts/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation.mdwn
+++ b/transcripts/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation.mdwn
@@ -220,7 +220,7 @@ EL: Yeah.
# Modern Soft Fork Activation
-AvW: The other perspective in this debate would be for example Matt Corallo’s [Modern Soft Fork Activation](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html). I assume that you are aware of that. I will explain it real quick myself. Modern Soft Fork Activation, the idea is that basically you use the old fashioned BIP 9 upgrade process for a year. Let miners activate it for a year. If it doesn’t work then developers will reconsider for 6 months, see if there was a problem with Taproot in this case after all, something that they had missed, some concern miners had with it. Review it for 6 months, if after the 6 months it is found there was no actual problem and miners were delaying for whatever reason then activation is redeployed with a hard deadline activation at 2 years. What do you think of this?
+AvW: The other perspective in this debate would be for example Matt Corallo’s [Modern Soft Fork Activation](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html). I assume that you are aware of that. I will explain it real quick myself. Modern Soft Fork Activation, the idea is that basically you use the old fashioned BIP 9 upgrade process for a year. Let miners activate it for a year. If it doesn’t work then developers will reconsider for 6 months, see if there was a problem with Taproot in this case after all, something that they had missed, some concern miners had with it. Review it for 6 months, if after the 6 months it is found there was no actual problem and miners were delaying for whatever reason then activation is redeployed with a hard deadline activation at 2 years. What do you think of this?
LD: If there is possibly a problem we shouldn’t even get to that first step.
@@ -316,7 +316,7 @@ LD: Instead of everybody running the smart contract it is just participants.
EL: Which is the way it should’ve been in the beginning but I think that it took a while until people realized that that is what the script should be doing. Initially it was thought we could have these scripts that run onchain. This is the way it was done because I don’t think Satoshi thought this completely through. He just wanted to launch something. We have a lot of hindsight now that we didn’t have back then. Now it is obvious that really the blockchain is about authorizing transactions, it is not about processing the conditions of contracts themselves. That can all be done offchain and that can be done very well offchain. In the end the only thing is that the participants need to sign off that it did happen and that’s it. That is all that everyone really cares about. Everyone agreed so what is the big deal? If everyone agrees there is no big issue.
-AvW: There doesn’t appear to be any downside to Taproot? Is there any downside to it, have you heard about any concern? I think there was an [email](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-February/017618.html) on the mailing list a while ago with some concerns.
+AvW: There doesn’t appear to be any downside to Taproot? Is there any downside to it, have you heard about any concern? I think there was an [email](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-February/017618.html) on the mailing list a while ago with some concerns.
LD: I can’t think of any. There is no reason not to deploy it at least, I can’t think of any reason not to use it either. It does carry over the bias that SegWit has toward bigger blocks but that is something that has to be considered independently. There is no reason to tie the features to the block size.
diff --git a/transcripts/bitcoin-magazine/2021-02-26-taproot-activation-lockinontimeout.mdwn b/transcripts/bitcoin-magazine/2021-02-26-taproot-activation-lockinontimeout.mdwn
index 6dcc0a0..a4b93a2 100644
--- a/transcripts/bitcoin-magazine/2021-02-26-taproot-activation-lockinontimeout.mdwn
+++ b/transcripts/bitcoin-magazine/2021-02-26-taproot-activation-lockinontimeout.mdwn
@@ -10,9 +10,9 @@ Video: https://www.youtube.com/watch?v=7ouVGgE75zg
BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki
-Arguments for LOT=true and LOT=false (T1-T6 and F1-F6): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html
+Arguments for LOT=true and LOT=false (T1-T6 and F1-F6): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html
-Additional argument for LOT=false (F7): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html
+Additional argument for LOT=false (F7): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html
Aaron van Wirdum article on LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation
diff --git a/transcripts/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial.mdwn b/transcripts/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial.mdwn
index f75e8c8..e3112c9 100644
--- a/transcripts/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial.mdwn
+++ b/transcripts/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial.mdwn
@@ -8,7 +8,7 @@ Date: March 12th 2021
Video: https://www.youtube.com/watch?v=oCPrjaw3YVI
-Speedy Trial proposal: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html
+Speedy Trial proposal: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html
Transcript by: Michael Folkson
@@ -32,9 +32,9 @@ SP: That’s right.
# Speedy Trial proposal
-Speedy Trial proposal: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html
+Speedy Trial proposal: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html
-Proposed timeline: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018594.html
+Proposed timeline: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018594.html
AvW: Should we begin with Speedy Trial, what is Speedy Trial Sjors?
@@ -42,7 +42,7 @@ SP: I think that is a good idea to do. With the proposals that we talked about l
AvW: That was LOT=true or LOT=false. The debate was on whether or not it should end with forced signaling or not. That’s the LOT=true, LOT=false thing.
-SP: The thing to keep in mind is that the first signaling, it would be a while before that starts happening. Until that time we really don’t know essentially. What Speedy Trial proposes is to say “Rather than discussing whether or not there is going to be signaling and having lots of arguments about it, let’s just try that really quickly.” Instead there would be a release maybe around April, of course there’s nobody in charge of actual timelines. In that case the signaling would [start](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018594.html) much earlier, I’m not entirely sure when, maybe in May or pretty early. The signaling would only be for 3 months. At the end of 3 months it would give up.
+SP: The thing to keep in mind is that the first signaling, it would be a while before that starts happening. Until that time we really don’t know essentially. What Speedy Trial proposes is to say “Rather than discussing whether or not there is going to be signaling and having lots of arguments about it, let’s just try that really quickly.” Instead there would be a release maybe around April, of course there’s nobody in charge of actual timelines. In that case the signaling would [start](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018594.html) much earlier, I’m not entirely sure when, maybe in May or pretty early. The signaling would only be for 3 months. At the end of 3 months it would give up.
AvW: It would end on LOT=false basically.
diff --git a/transcripts/bitcoin-magazine/2021-04-23-taproot-activation-update.mdwn b/transcripts/bitcoin-magazine/2021-04-23-taproot-activation-update.mdwn
index 87ec28e..0decfd3 100644
--- a/transcripts/bitcoin-magazine/2021-04-23-taproot-activation-update.mdwn
+++ b/transcripts/bitcoin-magazine/2021-04-23-taproot-activation-update.mdwn
@@ -124,7 +124,7 @@ AvW: I also want to clarify. We don’t know what it is going to look like yet.
# Alternative to Bitcoin Core (Bitcoin Core 0.21.0-based Taproot Client)
-Update on Taproot activation releases: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018790.html
+Update on Taproot activation releases: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018790.html
AvW: Another client was also launched. There is a lot of debate on the name. We are going to call it the LOT=true client.
diff --git a/transcripts/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning.mdwn b/transcripts/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning.mdwn
index 31dcb39..1961b93 100644
--- a/transcripts/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning.mdwn
+++ b/transcripts/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning.mdwn
@@ -144,7 +144,7 @@ There's some downsides, which is that we have a distinct transaction for every H
The solution here is to use covenants in the HTLC outputs. This eliminates signature + verify with commitment creation, and eliminates signature storage of current state. We're basically making an off-chain covenant with 2-of-2 multisig. We can just say, if we actually have real covenants then we don't need them anymore. The goal of the covenants was to force them to wait the CSV delay when they were trying to claim the output. But with this, they can only spend the output if the output that it created in the spending transaction actually has a CSV delay clause. You basically add an independent script for HTLC revocation clause reusing the commitment invalidation technique.
-As a stopgap, you could do something with sighash flags to allow you to coalesce these transactions together. Right now if I have 5 HTLCs, I have to do that on chain, there's 5 different transactions. We could allow you to coalesce these into a single transaction by using <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010759.html">more liberal sighash flags</a>, which is cool because then you have this single transaction get confirmed and after a few blocks you can sweep those into your own outputs and it works out fine.
+As a stopgap, you could do something with sighash flags to allow you to coalesce these transactions together. Right now if I have 5 HTLCs, I have to do that on chain, there's 5 different transactions. We could allow you to coalesce these into a single transaction by using <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010759.html">more liberal sighash flags</a>, which is cool because then you have this single transaction get confirmed and after a few blocks you can sweep those into your own outputs and it works out fine.
# Multi-party channels
diff --git a/transcripts/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities.mdwn b/transcripts/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities.mdwn
index ecfefb8..765a5b3 100644
--- a/transcripts/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities.mdwn
+++ b/transcripts/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities.mdwn
@@ -62,7 +62,7 @@ Seems like an almost obvious question and win. We can make the same security ass
# Taproot
-One scheme that can benefit from this sort of new Schnorr signature validation opcode is <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html">taproot</a>, which if you've been following the bitcoin-dev mailing list over the past few days you may have seen mentioned. Taproot is a proposal by Greg Maxwell where effectively the realization is that almost all cases where a script gets satisfied (where an actual spend occurs) and there are multiple parties involved can almost always be written as "either everyone involved agrees, or some more complex conditions are satisfied". Taproot encodes a public key or the hash of a script inside just one public key that goes on to the chain. You cannot tell from the key whether it's just a key or if it's a key that also commits to a script. The proposed semantics for this allow you to either just spend it by providing a signature with the key that is there, or you reveal that it's a commitment to a script and then you give the inputs to satisfy that script. If that signature used in the taproot proposal was a Schnorr signature, then we get all the advantages I talked about for Schnorr signatures. So not only could this be used for a single signer, but it could also be the "everyone agrees automatically" by using a native Schnorr multi-signature.
+One scheme that can benefit from this sort of new Schnorr signature validation opcode is <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html">taproot</a>, which if you've been following the bitcoin-dev mailing list over the past few days you may have seen mentioned. Taproot is a proposal by Greg Maxwell where effectively the realization is that almost all cases where a script gets satisfied (where an actual spend occurs) and there are multiple parties involved can almost always be written as "either everyone involved agrees, or some more complex conditions are satisfied". Taproot encodes a public key or the hash of a script inside just one public key that goes on to the chain. You cannot tell from the key whether it's just a key or if it's a key that also commits to a script. The proposed semantics for this allow you to either just spend it by providing a signature with the key that is there, or you reveal that it's a commitment to a script and then you give the inputs to satisfy that script. If that signature used in the taproot proposal was a Schnorr signature, then we get all the advantages I talked about for Schnorr signatures. So not only could this be used for a single signer, but it could also be the "everyone agrees automatically" by using a native Schnorr multi-signature.
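The "key that also commits to a script" idea can be illustrated with a commitment tweak in the same toy multiplicative group notation used elsewhere here. This is a hedged sketch only: real taproot uses elliptic-curve points and tagged hashes, so the group parameters, hash, and example script below are simplifying assumptions.

```python
import hashlib

q = 1019       # toy subgroup order, stand-in for the secp256k1 group order
p = 2 * q + 1  # 2039, a safe prime
g = 4          # generator of the order-q subgroup

def h(*parts):
    """Toy hash-to-scalar over ints and bytes (not BIP-taproot's tagged hash)."""
    data = b"".join(x.to_bytes(2, "big") if isinstance(x, int) else x
                    for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = 123                    # internal secret key
P = pow(g, x, p)           # internal public key
script = b"OP_CSV ..."     # hypothetical fallback script

t = h(P, script)           # tweak commits to both the key and the script
Q = (P * pow(g, t, p)) % p # Q is the only thing that goes on chain

# Key-path spend: the key holder knows the discrete log x + t of Q,
# so Q looks like (and signs like) an ordinary public key.
assert pow(g, (x + t) % q, p) == Q

# Script-path spend: reveal P and the script; anyone can recompute Q
# and confirm the commitment, then check the script's conditions.
assert (P * pow(g, h(P, script), p)) % p == Q
```

An observer who only ever sees key-path spends cannot tell whether Q committed to a script at all, which is the privacy property described above.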
# Scriptless scripts
@@ -122,7 +122,7 @@ We're working on a BIP for Bellare-Neven based interactive aggregate signatures.
That's all.
-<a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016203.html">bip-schnorr</a>
+<a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016203.html">bip-schnorr</a>
# Q&A
@@ -148,4 +148,4 @@ A: The most commonly deployed Schnorr-like signature is ed25519 which is very we
----
-"<a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015696.html">Design approaches for cross-input signature aggregation</a>"
+"<a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015696.html">Design approaches for cross-input signature aggregation</a>"
diff --git a/transcripts/breaking-bitcoin/2017/changing-consensus-rules-without-breaking-bitcoin.mdwn b/transcripts/breaking-bitcoin/2017/changing-consensus-rules-without-breaking-bitcoin.mdwn
index b4e689b..3a793f8 100644
--- a/transcripts/breaking-bitcoin/2017/changing-consensus-rules-without-breaking-bitcoin.mdwn
+++ b/transcripts/breaking-bitcoin/2017/changing-consensus-rules-without-breaking-bitcoin.mdwn
@@ -126,7 +126,7 @@ Once we had segwit implemented, then we were thinking we'd deploy with versionbi
# Flag dates and user-activated soft-forks
-So then we started to think maybe miner-activated soft-forks on their own aren't good enough. Well what about going back to user-activated soft-forks like the flag date? This is when <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013643.html">shaolinfry</a> <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013714.html">proposed</a> <a href="https://github.com/bitcoin/bips/blob/master/bip-0148.mediawiki">bip148</a> which required that miners signal segwit. It was kind of like a rube goldberg device where one device triggers another. This was a way to trigger all the nodes already out there ready to activate. And so we did some game theory analysis on this and actually NicolasDorier did some nice diagrams here. Here's the decision tree if you decide not to run a bip148 node-- if the industry or miners decide not to go along with it, thenyou get a chain split and possibly a massive reorg. On the other hand if you do run a bip148 node, then they would have to collude for you to get a permanent chain split. The game theory here- it's a game of chicken yes, and assuming it's not in their interest to do that, then they will opt to not go for the chain split. But if they do split the chain then it will probably be for reasons related to real economic interests like bcash where it's controversial and some people might wonder. My personal take is that I think it would be ineviatable that some miners would have interests that encourage them to have another chain. It didn't really adversely effect bitcoin too much and some of us got free money from bcash so thank you.
+So then we started to think maybe miner-activated soft-forks on their own aren't good enough. Well what about going back to user-activated soft-forks like the flag date? This is when <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013643.html">shaolinfry</a> <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013714.html">proposed</a> <a href="https://github.com/bitcoin/bips/blob/master/bip-0148.mediawiki">bip148</a> which required that miners signal segwit. It was kind of like a Rube Goldberg device where one device triggers another. This was a way to trigger all the nodes already out there ready to activate. And so we did some game theory analysis on this and actually NicolasDorier did some nice diagrams here. Here's the decision tree if you decide not to run a bip148 node-- if the industry or miners decide not to go along with it, then you get a chain split and possibly a massive reorg. On the other hand if you do run a bip148 node, then they would have to collude for you to get a permanent chain split. The game theory here- it's a game of chicken yes, and assuming it's not in their interest to do that, then they will opt to not go for the chain split. But if they do split the chain then it will probably be for reasons related to real economic interests like bcash where it's controversial and some people might wonder. My personal take is that I think it would be inevitable that some miners would have interests that encourage them to have another chain. It didn't really adversely affect bitcoin too much and some of us got free money from bcash so thank you.
The problem with bip148 is that the segwit2x collusion agreement came up and it was too late to activate with just bip9. James Hilliard proposed a reduction of the activation threshold to 80% using <a href="https://github.com/bitcoin/bips/blob/master/bip-0091.mediawiki">bip91</a>. It was a way to avoid the chain split with the segwit2x collusion agreement, but it did not avoid the chainsplit with the bcash thing which I think was inevitable at that point. The bcash hard-fork was a separate proposal unrelated to bip91 and bip8.
@@ -140,7 +140,7 @@ This is a big dilemma about how we are going to deploy soft-forks in the future.
Near-term soft-forkable changes that people have been looking into include things like: <a href="https://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/">Schnorr signatures</a>, signature aggregation, which is much more efficient than the currently used ECDSA scheme. And MASTs for <a href="https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-merkleized-abstract-syntax-trees-mast/">merkleized abstract syntax trees</a>, and there are at least two different proposals for this. MAST allows you to compress scripts where there's a single execution pathway that might be taken so you can have this entire tree of potential execution pathways and the proof to authorize a transaction only needs to include one particular leaf of the tree.
-We're looking at <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-October/015141.html">new script versions</a>. At this point we don't need to do the OP\_DROP stuff, we can add new opcodes and a new scripting language or replace it with something that's not a stack machine if we wanted-- not that we want to, but in principle we could replace it with another language. It gives us that option. It's something interesting to think about. Satoshi thought that bitcoin script would be the extensability mechanism to support smart contracts, and now we're looking at the opposite which is that the way the proofs are constructed. Adding new scripting languages turns out to be simple.
+We're looking at <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-October/015141.html">new script versions</a>. At this point we don't need to do the OP\_DROP stuff, we can add new opcodes and a new scripting language or replace it with something that's not a stack machine if we wanted-- not that we want to, but in principle we could replace it with another language. It gives us that option. It's something interesting to think about. Satoshi thought that bitcoin script would be the extensibility mechanism to support smart contracts, and now we're looking at the opposite, which is changing the way the proofs are constructed. Adding new scripting languages turns out to be simple.
# Potential changes requiring hard-forks
diff --git a/transcripts/breaking-bitcoin/2017/interview-adam-back-elizabeth-stark.mdwn b/transcripts/breaking-bitcoin/2017/interview-adam-back-elizabeth-stark.mdwn
index ee87935..37b09a1 100644
--- a/transcripts/breaking-bitcoin/2017/interview-adam-back-elizabeth-stark.mdwn
+++ b/transcripts/breaking-bitcoin/2017/interview-adam-back-elizabeth-stark.mdwn
@@ -38,7 +38,7 @@ adam3us: As long as you run your own full node and configure your wallet to poin
stark: There's never a dull moment in bitcoin. There's been discussion in the community about upgrades and block size. How do you think we can most securely upgrade the bitcoin protocol?
-adam3us: I think soft-forks are a safe way to upgrade bitcoin. With hard-forks, it's opt-in and it's not clear if everyone will opt-in, thus creating two chains when someone doesn't opt-in. There are ways to combine soft-forks and hard-forks, called <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012173.html">evil forks</a>, which is even more coercive. People didn't want to talk about it for a while but it's been public for a while now. It's basically the idea that, you make a new block... Johnson Lau causes this forcenet on <a href="https://bitcoinhardforkresearch.github.io/">bitcoinhardforkresearch.github.io</a> page.. You make a new block, the original block has a transaction that has a hash referring to the new block. The consensus rule is that the original block has to be empty. The original clients and nodes see no transactions. If miners enforce that consensus rule, then it forces a change. In that circumstance, if a change was made that users didn't like, then they would just soft-fork it out, they would have to make a change to avoid it.
+adam3us: I think soft-forks are a safe way to upgrade bitcoin. With hard-forks, it's opt-in and it's not clear if everyone will opt-in, thus creating two chains when someone doesn't opt-in. There are ways to combine soft-forks and hard-forks, called <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012173.html">evil forks</a>, which is even more coercive. People didn't want to talk about it for a while but it's been public for a while now. It's basically the idea that, you make a new block... Johnson Lau calls this forcenet on the <a href="https://bitcoinhardforkresearch.github.io/">bitcoinhardforkresearch.github.io</a> page. You make a new block, the original block has a transaction that has a hash referring to the new block. The consensus rule is that the original block has to be empty. The original clients and nodes see no transactions. If miners enforce that consensus rule, then it forces a change. In that circumstance, if a change was made that users didn't like, then they would just soft-fork it out, they would have to make a change to avoid it.
stark: We've seen proposals from the community about ways to upgrade bitcoin. How would you do those differently?
diff --git a/transcripts/c-lightning/2021-10-04-developer-call.md b/transcripts/c-lightning/2021-10-04-developer-call.md
index f4ffda4..d327d59 100644
--- a/transcripts/c-lightning/2021-10-04-developer-call.md
+++ b/transcripts/c-lightning/2021-10-04-developer-call.md
@@ -12,7 +12,7 @@ The conversation has been anonymized by default to protect the identities of the
# Dust HTLC exposure (Lisa Neigut)
-Antoine Riard email to the Lightning dev mailing list: <https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003257.html>
+Antoine Riard email to the Lightning dev mailing list: <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003257.html>
The email Antoine sent to the mailing list this morning. With c-lightning there is a dust limit. When you get a HTLC it has got an amount that it represents, let’s say 1000 sats. If the dust limit is at like 543 which is where c-lightning defaults to, if the amount in the HTLC is above the dust limit then we consider it trimmed which means it doesn’t appear in the commitment transaction and the entire output amount gets added to the fees that you’d pay to a miner if that commitment transaction were to go onchain. There are two problems. In other implementations the dust limit is set by the peer and it is tracked against what the reserve is. They can set that limit really high. You get HTLCs coming in, they could be underneath the dust limit even if your fee rate for the commitment transaction isn’t very high. Basically the idea is if you have a high dust limit, let’s say 5000 sats, every HTLC that you get which is 1000 sats will be automatically be trimmed to dust. If you’ve got a channel with a pretty high dust limit and someone puts a lot of HTLCs on it then all of a sudden your amount of miner fees on that commitment transaction are quite high. If they unilaterally close the channel, the dust limit is used on the signatures that you send to your peer, then all of that amount that was dust now goes to the miner. If the miner was going to pay them back money it is a way that they could extract value. This has been a longstanding problem with Lightning. I gave a [talk](https://www.youtube.com/watch?v=e9o6xepAD9E) on dust stuff at Bitcoin 2019. It is not really new, I think the new part of the attack is that the peer can set the dust limit. c-lightning has a check where the max dust that your peer can set is the channel reserve, like 1 percent of the channel. So c-lightning isn’t as badly affected as some of the other implementations. That’s one side of it. 
The other side, the recommended fix is that now when you are taking a HTLC and you are saying “This is now dust”, we have a bucket and we add every HTLC amount as dust to this bucket. When the bucket gets full we stop accepting HTLCs that are considered dusty.
diff --git a/transcripts/c-lightning/2021-10-18-developer-call.md b/transcripts/c-lightning/2021-10-18-developer-call.md
index 02f80cd..40e1c2d 100644
--- a/transcripts/c-lightning/2021-10-18-developer-call.md
+++ b/transcripts/c-lightning/2021-10-18-developer-call.md
@@ -14,7 +14,7 @@ The conversation has been anonymized by default to protect the identities of the
<https://medium.com/blockstream/c-lightning-v0-10-2-bitcoin-dust-consensus-rule-33e777d58657>
-We are nominally past the release date. The nominal release date is usually the 10th of every second month. This time I’m release captain so I am the one who is to blame for any delays. In this case we have two more changes that we are going to apply for the release itself. One being the dust fix. If you’ve read two weeks ago there was an [announcement](https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003257.html) about a vulnerability in the specification itself. All implementations were affected. Now we are working on a mitigation. It turns out however that the mitigation that was proposed for the specification is overly complex and has some weird corner cases. We are actually discussing both internally and with the specification itself how to address this exactly. Rusty has a much cleaner solution. We are trying to figure out how we can have this simple dust fix and still be compatible with everybody else who already released theirs. The hot fix has been a bit messy here. Communication could have gone better.
+We are nominally past the release date. The nominal release date is usually the 10th of every second month. This time I’m release captain so I am the one who is to blame for any delays. In this case we have two more changes that we are going to apply for the release itself. One being the dust fix. If you’ve been following along, two weeks ago there was an [announcement](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003257.html) about a vulnerability in the specification itself. All implementations were affected. Now we are working on a mitigation. It turns out however that the mitigation that was proposed for the specification is overly complex and has some weird corner cases. We are actually discussing both internally and with the specification itself how to address this exactly. Rusty has a much cleaner solution. We are trying to figure out how we can have this simple dust fix and still be compatible with everybody else who already released theirs. The hot fix has been a bit messy here. Communication could have gone better.
I have been trying to ignore the whole issue. I knew there was an issue, I was like “Everyone else will handle it” and I have learned my lesson. The spec fix doesn’t work in general. It was messy. When I read what they were actually doing, “No we can’t actually do that”. I proposed a much simpler fix. Unfortunately my timing was terrible, I should have done it a month ago. I apologize for that. I read Lisa’s PR and went back and read what the spec said, “Uh oh”.
@@ -178,7 +178,7 @@ fiatjaf is a big contributor to LNBits and it is Ben Arc who started it, there a
# Full RBF in Core
-One of the discussions at the Core dev meeting was on getting [full RBF](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019074.html) in Core. This is obviously going to be challenging because some businesses, I’m not sure how many, do use zero confirmation transactions. Thoughts on how important this would be for the Lightning protocol to get full RBF in Core?
+One of the discussions at the Core dev meeting was on getting [full RBF](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019074.html) in Core. This is obviously going to be challenging because some businesses, I’m not sure how many, do use zero confirmation transactions. Thoughts on how important this would be for the Lightning protocol to get full RBF in Core?
We are already moving towards RBF on Lightning with the dual funded channel proposal. All those transactions that we create are RBFable. I feel like the question you are asking, not explicitly, is the zero conf channel proposal which is currently spec’ed. We didn’t talk about it but full RBF would interact quite poorly with zero conf channels. RBF means that any transaction that gets published to the mempool can then be replaced before it is mined in a block. Zero conf kind of assumes that whatever transaction you publish to the mempool will end up in a block. There is tension there. I don’t think there is an easy answer to that other than maybe zero conf channels aren’t really meant for general consumption. The general idea with zero conf in general is that it is between two semi trusted parties. I don’t think that’s a great answer but I think there is definitely a serious concern there where zero conf channels are concerned.
diff --git a/transcripts/c-lightning/2021-11-01-developer-call.md b/transcripts/c-lightning/2021-11-01-developer-call.md
index b57bd64..9d849bc 100644
--- a/transcripts/c-lightning/2021-11-01-developer-call.md
+++ b/transcripts/c-lightning/2021-11-01-developer-call.md
@@ -102,7 +102,7 @@ Last week I worked on some internal Blockstream stuff. I have also been updating
When I was in Zurich a few weeks ago I spent some time talking to Christian about how to update our accounting stuff. I would really like to get an accounting plugin done soon. I did some rethinking about how we do events, it is an event based system. Coins move around, c-lightning emits an event. I am going to make some changes to how we are keeping track of things. I think the biggest notable change is we will no longer be emitting events about chain fees which kind of sucks. There is a good reason to not do that. Instead the accounting plugin will have to do fee calculations on its own which I think is fine. That is probably going to be the biggest change. Working through that today, figuring out what needs to change. Hopefully the in c-lightning stuff will be quite lightweight and then I can spend a lot of time getting the accounting plugin exactly where I want it. That will be really exciting. I am also going to be in Atlanta, Wednesday through Sunday at the TAB conference. I am giving a [talk](https://www.youtube.com/watch?v=mVihRFrbsbc&t=6470s), appearing on a panel and running some other stuff. I will probably be a little busy this week preparing for the myriad of things someone has signed me up for. If anyone has suggestions about topics to talk about you have 24 hours to submit submissions if there are things on Lightning you want to hear about.
-Something to get rid of next after we’ve got [rid of the mempool](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-October/019572.html).
+Something to get rid of next after we’ve got [rid of the mempool](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-October/019572.html).
That was brave. I think it is great. I think you are wrong but that is ok.
@@ -238,7 +238,7 @@ It is fatal. Then you go to the error logs and it says that the password wasn’
I think there should be something out to stderror. But also we should probably use a specific exit error code in the case... That would be a lot easier to detect. PRs welcome. It is a simple one but it is the kind of thing that no one thought about. Just document it in the lightningd man page. Pick error codes, as long as nothing else gets that error code it would be quite reliable. I think `1` is our general error code, `0` is fine. `1` is something went wrong, anything up to about `125` is probably a decent error, exit code for hsm decoding issues. As you say it makes perfect sense because it is a common use case. If you ever want to automate it you need to know.
-Minisketch looks as if it is close to merge, if you want to use Minisketch for gossip stuff. You posted an idea before on the [mailing list](https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html). I have an old branch that pulls in a previous version of Minisketch.
+Minisketch looks as if it is close to merge, if you want to use Minisketch for gossip stuff. You posted an idea before on the [mailing list](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html). I have an old branch that pulls in a previous version of Minisketch.
You did actually code it up as well? It was more than just a mailing list post? I didn’t see the branch.
diff --git a/transcripts/chaincode-labs/2019-06-17-john-newbery-security-models.mdwn b/transcripts/chaincode-labs/2019-06-17-john-newbery-security-models.mdwn
index 8c35cbe..242ef52 100644
--- a/transcripts/chaincode-labs/2019-06-17-john-newbery-security-models.mdwn
+++ b/transcripts/chaincode-labs/2019-06-17-john-newbery-security-models.mdwn
@@ -220,7 +220,7 @@ John: Correct. So what can they do? So when the block's transaction's chain is v
# Fraud-Proofs
-So, we were talking about fraud proofs. In general, they're pretty difficult. This is a [post from Luke](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013756.html) on the mailing list. The generalized case of fraud proofs is likely impossible, but he had an implementation of a fraud-proof scheme showing the block isn't over a certain size. This was during the 2x time period. People were worried that a 2x chain might fool SPV clients. This would be a way to tell an SPV client that this header you've got is committing to a block that is larger than a certain size. And you don't need to download the entire block. That's a narrow example of fraud-proof.
+So, we were talking about fraud proofs. In general, they're pretty difficult. This is a [post from Luke](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013756.html) on the mailing list. The generalized case of fraud proofs is likely impossible, but he had an implementation of a fraud-proof scheme showing the block isn't over a certain size. This was during the 2x time period. People were worried that a 2x chain might fool SPV clients. This would be a way to tell an SPV client that this header you've got is committing to a block that is larger than a certain size. And you don't need to download the entire block. That's a narrow example of fraud-proof.
Audience member: Does this solution protect you against the DoS attack vector?
@@ -406,7 +406,7 @@ John: I don't know about that.
Cross talk...
-John: I know there is a GitHub repo from Peter Todd called [bloom-io-attack](https://github.com/petertodd/bloom-io-attack) so maybe you can try that at home if you want. But, "a single syncing wallet causes 80GB of disk reads and a large amount of CPU time to be consumed processing this data." [[source]](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012636.html) So it seems trivial to DoS.
+John: I know there is a GitHub repo from Peter Todd called [bloom-io-attack](https://github.com/petertodd/bloom-io-attack) so maybe you can try that at home if you want. But, "a single syncing wallet causes 80GB of disk reads and a large amount of CPU time to be consumed processing this data." [[source]](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012636.html) So it seems trivial to DoS.
Audience member: Highly asymmetric.
diff --git a/transcripts/chaincode-labs/2020-01-28-pieter-wuille.mdwn b/transcripts/chaincode-labs/2020-01-28-pieter-wuille.mdwn
index 2595155..f35b4f3 100644
--- a/transcripts/chaincode-labs/2020-01-28-pieter-wuille.mdwn
+++ b/transcripts/chaincode-labs/2020-01-28-pieter-wuille.mdwn
@@ -156,7 +156,7 @@ Pieter: Exactly. We can talk about the boundary in trying to abstract the part o
John: That condition is much harder.
-Pieter: That’s much harder. It is not a usual thing you design things for. Maybe a good thing to bring up is [BIP66](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009697.html) DER signature failure. You also had getting rid of OpenSSL on the list of things to talk about. Validation of signatures in Bitcoin’s reference code used to use OpenSSL for validation. Signatures were encoded in whatever data OpenSSL expects.
+Pieter: That’s much harder. It is not a usual thing you design things for. Maybe a good thing to bring up is [BIP66](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009697.html) DER signature failure. You also had getting rid of OpenSSL on the list of things to talk about. Validation of signatures in Bitcoin’s reference code used to use OpenSSL for validation. Signatures were encoded in whatever data OpenSSL expects.
John: Let’s take a step back and talk about Satoshi implementing Bitcoin. Satoshi wrote a white paper and then produced a reference implementation of Bitcoin. In that reference implementation there was a dependency on OpenSSL that was used for many things.
diff --git a/transcripts/chicago-bitdevs/2020-07-08-socratic-seminar.mdwn b/transcripts/chicago-bitdevs/2020-07-08-socratic-seminar.mdwn
index dc2b613..2bb5ba4 100644
--- a/transcripts/chicago-bitdevs/2020-07-08-socratic-seminar.mdwn
+++ b/transcripts/chicago-bitdevs/2020-07-08-socratic-seminar.mdwn
@@ -14,7 +14,7 @@ The conversation has been anonymized by default to protect the identities of the
# Tainting, CoinJoin, PayJoin, CoinSwap Bitcoin dev mailing list post (Nopara)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017957.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017957.html
….
@@ -108,7 +108,7 @@ Yes because it is not a OP_CLTV it is nLockTime. You can change the script to in
# Disclosure of a fee blackmail attack (Rene Pickhardt)
-https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-June/002735.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-June/002735.html
Let’s move onto this cousin I would say of attacking the Lightning Network. This is more or less very similar to what we just talked about. Instead of the attacker getting the funds it is a blackmail situation. Nobody can claim the funds but you can negotiate with your counterparty, you get half I get half sort of thing. I think Rene Pickhardt had a good TL;DR here. This attack demonstrates why opening a channel is not an entirely trustless activity. You actually do need to have a little bit of trust with your peer. With this attack the attacker will reliably only be able to force the victim to lose this amount of Bitcoin.
@@ -136,7 +136,7 @@ I can broadcast a commitment transaction and claim the HTLC later, you don’t k
# Pinning: The Good, The Bad, The Ugly (Antoine Riard)
-https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-June/002758.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-June/002758.html
This is related to a lot of this higher level work on fees on Lightning. The game theory of the Lightning Network really relies on how fees interact with things and how quickly we can get things included in the blockchain to enforce these HTLC contracts. This mailing list post is talking about how the mempool works in Bitcoin. A mempool is where we collect transactions before they get included in a block. We usually prioritize them by fee rate so if you have the highest fee you are likely to be included in the next block. If you have a low fee it may be a few days before your transaction is included. We saw this behavior play out in the 2017 rampant market speculation that was going on. As Antoine says here the “Lightning security model relies on the unilateral capability for a channel participant to confirm transactions.” That means I don’t care what my peer thinks in my Lightning channel, I want to go to the chain now and he can’t do anything to stop me. That is a fundamental property that we need in Lightning to make the security model work. “This security model is actually turning back the double-spend problem to a private matter” which I thought was a really interesting insight here. “… making the duty of each channel participant to timely enforce its balance against the competing interest of its counterparties.” Not the entire Bitcoin ecosystem is responsible for enforcing say consensus rules, it is you that is responsible for enforcing the rules of your channel. There is nobody else that can help you modulo maybe watchtowers or something like that. As a byproduct of this from the base level you need to make sure we have effective propagation and timely confirmation of Lightning transactions because it is a cornerstone of the security model of Lightning. This is where it really starts to get interesting. Antoine points out here that “network mempools aren’t guaranteed to be convergent”. 
If somebody is running a Bitcoin node in Japan, somebody is running one in San Francisco, there is some network latency, one transaction is propagated to you faster on one side of the world. Another conflicting transaction is propagated faster on the other side of the world. You could end up with two different states of the world and then reject the other person’s state because it conflicts with what you see. You think it is invalid because you saw something else first. Order matters when keeping a local mempool. As Antoine says here “If subset X observes Alice commitment transaction and subset Y observes Bob commitment transaction, Alice’s HTLC-timeout spending her commitment won’t propagate beyond the X-Y set boundaries.” They will have two different states of the world. One set of the Bitcoin nodes will think the transaction is invalid, another set will think it is invalid and we won’t come to any sort of convergence on what mempools are. The whole point of a blockchain is to converge on a set of UTXOs over a long period of time. This is why we need confirmations to make sure that history is final onchain. A proposal that Antoine has is being able to unilaterally and dynamically bump the fee rate on any commitment transaction is a property that we need to ensure the Lightning Network’s security model is fully realized. “Assuming mempool congestion levels we have seen in the past months, currently deployed LN peers aren’t secure against scenario 2a and 2b.” One of these scenarios requires work at the base layer.
@@ -178,7 +178,7 @@ https://bitcoinops.org/en/newsletters/2020/04/29/#new-attack-against-ln-payment-
David Harding wrote this. He puts in concrete terms the different sorts of attacks on Lightning. You can have “preimage denial” which is Mallory can prevent Bob from learning the preimage by giving the preimage settlement transaction a low fee rate that keeps it from getting confirmed quickly. If Bob is only looking for preimages in the blockchain he won’t see Mallory’s transaction while it remains unconfirmed. The key thing with this preimage denial attack is Bob doesn’t have a mempool. He can only see what is in the blockchain. If Mallory, being malicious, gives a transaction a low fee it can look like the payment preimage has been revealed and Bob doesn’t know it. Mallory can steal Bob’s funds here. The second thing David talks about is prior to broadcasting the preimage settlement transaction she can prevent miners and Bitcoin relay nodes from accepting Bob’s later broadcast of the refund settlement transaction because the two transactions conflict which is just what we were talking about. They “both spend the same input (a UTXO created in the commitment transaction). In theory Bob’s refund settlement transaction will pay a higher fee rate and so can replace Mallory’s preimage settlement but in practice Mallory can use various transaction pinning techniques to prevent that replacement from happening.”
-The scenario described by David there, it is the first scenario described in Antoine’s mailing list [post](https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-June/002758.html). Since then we have discovered a new scenario of pinning on the commitment transaction. Those are the most concerning ones and not fixed by anchor output.
+The scenario described by David there, it is the first scenario described in Antoine’s mailing list [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-June/002758.html). Since then we have discovered a new scenario of pinning on the commitment transaction. Those are the most concerning ones and not fixed by anchor output.
There are three proposed solutions. Require a mempool, beg or pay for preimages and settlement transaction anchor outputs. It is interesting stuff. A lot more scrutiny on the Lightning Network security model in the last month so we can build a more resilient network.
@@ -202,7 +202,7 @@ You want the mempool evaluation of them to be atomic.
# BIP draft: BIP 32 Path Templates (Dmitry Petukhov)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018024.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018024.html
This is similar to a wallet descriptor but it is kind of like a BIP 32 xpub descriptor, what valid BIP 32 paths to use with this xpub. What this lets you do is say if you are developing a wallet and your wallet only uses an account at index 0 with HD purpose 84, 0 and 1 for if it is change or not. In that example 0 to 5000, for the coin index, this makes it a lot easier for other wallets to be compatible with each other. Right now if you import an Electrum wallet into another wallet you will need to rescan and randomly search around to try to find the coins. With this you say exactly where your coins are going to be because it comes from this wallet with this descriptor. On Wasabi there is a lot of pushback on going off the most minimal implementation of BIP 44 where you are just using the purposes, the default account, zero and one for if it is change or not. This allows you to get outside of the box with that stuff and use it in more creative ways without other wallets not being able to find it unless they explicitly look for those random things. You just tell them my coins will be here, you should look here. I think that will be huge for backups and wallet interoperability.
@@ -244,7 +244,7 @@ Absolutely. Anything where you have an untrusted but computationally strong serv
# Time Dilation Attacks on the Lightning Network (Gleb Naumenko, Antoine Riard)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017920.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017920.html
Right now for a Lightning channel if I am broadcasting a revoked state or if I am trying to close a channel and redeem a HTLC you need onchain access. You need to view the chain. We know that since the beginning of Lightning you need to watch the chain. What we didn’t know is eclipse attacks against Lightning nodes are far more worrying because I can announce the block on the real chain but I am doing so with a slowdown in announcements. Accumulating this slowdown the view of the chain by your Lightning node is going to be 20 blocks back. When I know that your view of the blockchain is 20 blocks back I am going to close the channel and the revocation delay is going to be 19 blocks. I am going to withdraw the funds and you are not going to be able to punish me. When your Lightning peer reaches the same height the timelock is going to already be expired. You need to see the chain but you also need to see the chain in a timely manner. Time is super important in Lightning. The cool thing about these attacks is you don’t need to assume hashrate for the attacker. Your counterparty doesn’t need to be a miner.
diff --git a/transcripts/chicago-bitdevs/2020-08-12-socratic-seminar.mdwn b/transcripts/chicago-bitdevs/2020-08-12-socratic-seminar.mdwn
index 3df14a1..1da8330 100644
--- a/transcripts/chicago-bitdevs/2020-08-12-socratic-seminar.mdwn
+++ b/transcripts/chicago-bitdevs/2020-08-12-socratic-seminar.mdwn
@@ -42,7 +42,7 @@ Most of it is 1 satoshi per byte so I think it is a trailing estimate.
# Dynamic Commitments: Upgrading Channels Without On-Chain Transactions (Laolu Osuntokun)
-https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-July/002763.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-July/002763.html
Here is our first technical topic. This is to do with upgrading already existing Lightning channels.
@@ -62,7 +62,7 @@ Yeah and that is discussed later in this thread. ZmnSCPxj was talking about goin
# Advances in Bitcoin Contracting: Uniform Policy and Package Relay (Antoine Riard)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018063.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018063.html
This is the P2P Bitcoin development space, this package and relay policy across the Bitcoin P2P network. What this problem is trying to solve is there are certain layer 2 protocols that are built on top of Bitcoin such as Lightning that require transactions to be confirmed in a timely manner. Also sometimes it is three transactions that all spend each other that need to be confirmed at the same time to make sure that you can get your money back specifically with Lightning. As Antoine writes here, “Lightning, the most deployed time-sensitive protocol as of now, relies on the timely confirmations of some of its transactions to enforce its security model.” Lightning boils down to if you cheat me on the Lightning Network I go back down to the Bitcoin blockchain and take your money. The assumption there is that you can actually get a transaction confirmed in the Bitcoin blockchain. If you can’t do that the security model for Lightning crumbles. Antoine also writes here that to be able to do this you sometimes need to adjust the fee rate of a transaction. As we all know blockchains have dynamic fee rates depending on what people are doing on the network. It could be 1 satoshis per byte and other times it could be 130 satoshis per byte. Or what we saw with this Clark Moody dashboard, one person may think it is 130 satoshis per byte while another person is like “I have a better view of the network and this person doesn’t know what they are talking about. It is really 10 satoshis per byte.” You can have these disagreements on these Layer 2 protocols too. It is really important that you have an accurate view of what it takes to enforce your Layer 2 transactions and get them confirmed in the network. The idea that is being tossed around to do this is this package relay policy. Antoine did a really good job of laying out exactly what you need here. You need to be able to propagate a transaction across the network so that everyone can see the transaction in a timely manner. 
Each node has different rules for transactions that they allow into the mempool. The mempool is the staging area where nodes hold transactions before they are mined in a block. Depending on your node settings, you could be running a node on a Raspberry Pi or one of these high-end servers with like 64GB of RAM. Depending on what kind of hardware you are running, you obviously have limitations on how big your mempool can be. On a Raspberry Pi maybe your mempool is limited to 500MB. On these high-end servers you could have 30GB of transactions or something like that. Depending upon which node you are operating, your view of the network is different. In terms of Layer 2 protocols you don’t want that because you want everybody to have the same view of the network so they can confirm your transactions when you need them to be confirmed.
diff --git a/transcripts/gmaxwell-2017-08-28-deep-dive-bitcoin-core-v0.15.mdwn b/transcripts/gmaxwell-2017-08-28-deep-dive-bitcoin-core-v0.15.mdwn
index 7947eb4..9b6aaa4 100644
--- a/transcripts/gmaxwell-2017-08-28-deep-dive-bitcoin-core-v0.15.mdwn
+++ b/transcripts/gmaxwell-2017-08-28-deep-dive-bitcoin-core-v0.15.mdwn
@@ -174,7 +174,7 @@ OK, time for a drink. Just water. ((laughter))
There's lots of really cool things going on. And often in the bitcoin space people are looking for the bitcoin developers to set a roadmap for what's going to happen. But the bitcoin project is an open collaboration, and it's hard to do anything that's like a roadmap.
-<a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-July/014726.html">I have</a> a quote from Andrew Morton about the Linux kernel developers, which I'd like to read, where he says: "Instead of a roadmap, there are technical guidelines. Instead of a central resource allocation, there are persons and companies who all have a stake in the further development of the Linux kernel, quite independently from one another: People like Linus Torvalds and I don’t plan the kernel evolution. We don’t sit there and think up the roadmap for the next two years, then assign resources to the various new features. That's because we don’t have any resources. The resources are all owned by the various corporations who use and contribute to Linux, as well as by the various independent contributors out there. It's those people who own the resources who decide."
+<a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-July/014726.html">I have</a> a quote from Andrew Morton about the Linux kernel developers, which I'd like to read, where he says: "Instead of a roadmap, there are technical guidelines. Instead of a central resource allocation, there are persons and companies who all have a stake in the further development of the Linux kernel, quite independently from one another: People like Linus Torvalds and I don’t plan the kernel evolution. We don’t sit there and think up the roadmap for the next two years, then assign resources to the various new features. That's because we don’t have any resources. The resources are all owned by the various corporations who use and contribute to Linux, as well as by the various independent contributors out there. It's those people who own the resources who decide."
It's the same kind of thing that also applies to bitcoin. What's going to happen in bitcoin development? The real answer to that is another question: what are you going to make happen in bitcoin development? Every person involved has a stake in making it better and contributing. Nobody can really tell you what's going to happen for sure. But I can certainly talk about what I know people are working on and what I know other people are working on, which might seem like a great magic trick of prediction but it's really not I promise.
@@ -192,7 +192,7 @@ There are other wallet improvements that people are working on. I had mentioned
# More improvements
-There is support being worked on for hardware wallets and easy off-line signing. You can do offline signing with Bitcoin Core today but it's the sort of thing that I even I don't love. It's complicated. Andrew Chow has a bip proposal recently posted to the mailing list for <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014838.html">partially signed bitcoin transactions</a> (see also <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-August/013008.html">hardware wallet standardization things</a>), it's a format that can carry all the data that an offline wallet needs for signing, including hardware wallets. The deployment of this into Bitcoin Core is much easier for us to do safely and efficiently with segwit in place.
+There is support being worked on for hardware wallets and easy off-line signing. You can do offline signing with Bitcoin Core today but it's the sort of thing that even I don't love. It's complicated. Andrew Chow has a bip proposal recently posted to the mailing list for <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014838.html">partially signed bitcoin transactions</a> (see also <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-August/013008.html">hardware wallet standardization things</a>), it's a format that can carry all the data that an offline wallet needs for signing, including hardware wallets. The deployment of this into Bitcoin Core is much easier for us to do safely and efficiently with segwit in place.
Another thing that people are working on is <a href="http://murch.one/wp-content/uploads/2016/11/erhardt2016coinselection.pdf">branch-and-bound coin selection</a> to produce changeless outputs much of the time. So this is <a href="http://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/coin-selection/">Murch's design</a> which Andrew Chow has been working on implementing. There's <a href="https://github.com/bitcoin/bitcoin/pull/10637">a pull request</a> barely missed going into v0.15 but the end result of this will be transactions less expensive for users and making the network more efficient because it's creating change outputs much less often.
@@ -204,7 +204,7 @@ There's some work on hashed timelock contracts (HTLCs) so that you can do more i
# Rolling UTXO set hashes
-There are interesting things going on with network and consensus. One is a proposal for <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html">rolling UTXO set hashes</a> (<a href="https://github.com/bitcoin/bitcoin/pull/10434">PR #10434</a>). This is a design basically compute a hash of the UTXO set that is very efficient to incrementally update every block so that you don't need to go through the entire UTXO set to compute a new hash. This can make it easier to validate that a node isn't corrupted. But it also opens up new potentials for syncing a new node-- where you say you don't want to validate history from before a year ago, and you want to sync up to a state where up to that point and then continue on further. That's a security trade-off, but we think we have ways of making that more realistic. We have some interesting design questions open-- there are two competing approaches and they have different performance tradeoffs, like different performance in different cases.
+There are interesting things going on with network and consensus. One is a proposal for <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html">rolling UTXO set hashes</a> (<a href="https://github.com/bitcoin/bitcoin/pull/10434">PR #10434</a>). This is a design to compute a hash of the UTXO set that is very efficient to incrementally update every block, so that you don't need to go through the entire UTXO set to compute a new hash. This can make it easier to validate that a node isn't corrupted. But it also opens up new potentials for syncing a new node-- where you say you don't want to validate history from before a year ago, and you want to sync up to the state at that point and then continue on further. That's a security trade-off, but we think we have ways of making that more realistic. We have some interesting design questions open-- there are two competing approaches and they have different performance tradeoffs, like different performance in different cases.
<div id="signature-aggregation" />
# Signature aggregation
@@ -219,7 +219,7 @@ When this is combined with segwit, if everyone is using it, is about 20%. So it'
Other things going on with the network and consensus... There's <a href="https://github.com/bitcoin/bips/blob/master/bip-0150.mediawiki">bip150</a> and <a href="https://github.com/bitcoin/bips/blob/master/bip-0151.mediawiki">bip151</a> (see the <a href="http://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/bip151-peer-encryption/">bip151 talk</a>) for encrypted and optionally authenticated p2p. I think the bips are half-way done. Jonas Schnelli will be talking about these in more detail next week. We've been waiting on network refactors before implementing this into Bitcoin Core. So this should be work that comes through pretty quickly.
-There's been work ongoing regarding private transaction announcement (the <a href="http://diyhpl.us/~bryan/papers2/bitcoin/Dandelion:%20Redesigning%20the%20bitcoin%20network%20for%20anonymity%20-%202017.pdf">Dandelion paper</a> and <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-June/014571.html">proposed dandelion BIP</a>). Right now in the bitcoin network, there are people who connect to nodes all over the network and try to monitor where transactions are originating in an attempt to deanonymize people. There are countermeasures against this in the bitcoin protocol, but they are not especially strong. There is a recent paper proposing a technique called Dandelion which makes it much stronger. The authors have been working on an <a href="https://github.com/gfanti/bitcoin/tree/dandelion">implementation</a> and I've <a href="https://bitcointalk.org/index.php?topic=1377345.0">sort of</a> been <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-June/014573.html">guiding</a> them on that-- either they will finish their implementation or I'll reimplement it-- and we'll get this in relatively soon probably in the v0.16 timeframe. It requires a slight extension to the p2p protocol where you tell a peer that you would like it to relay a transaction but only to one peer. The idea is that transactions are relayed in a line through the network, just one node to one node to one node and then after basically after a random timeout they hit a spot where they expand to everything and they curve through the network and then explodes everywhere. Their paper makes a very good argument for the improvements in privacy of this technique. Obviously, if you want privacy then you should be running Bitcoin Core over tor. Still, I think this is a good technique to implement as well.
+There's been work ongoing regarding private transaction announcement (the <a href="http://diyhpl.us/~bryan/papers2/bitcoin/Dandelion:%20Redesigning%20the%20bitcoin%20network%20for%20anonymity%20-%202017.pdf">Dandelion paper</a> and <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-June/014571.html">proposed dandelion BIP</a>). Right now in the bitcoin network, there are people who connect to nodes all over the network and try to monitor where transactions are originating in an attempt to deanonymize people. There are countermeasures against this in the bitcoin protocol, but they are not especially strong. There is a recent paper proposing a technique called Dandelion which makes it much stronger. The authors have been working on an <a href="https://github.com/gfanti/bitcoin/tree/dandelion">implementation</a> and I've <a href="https://bitcointalk.org/index.php?topic=1377345.0">sort of</a> been <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-June/014573.html">guiding</a> them on that-- either they will finish their implementation or I'll reimplement it-- and we'll get this in relatively soon probably in the v0.16 timeframe. It requires a slight extension to the p2p protocol where you tell a peer that you would like it to relay a transaction but only to one peer. The idea is that transactions are relayed in a line through the network, just one node to one node to one node, and then basically after a random timeout they hit a spot where they expand to everything and they curve through the network and then explode everywhere. Their paper makes a very good argument for the improvements in privacy of this technique. Obviously, if you want privacy then you should be running Bitcoin Core over tor. Still, I think this is a good technique to implement as well.
Q: Is the dandelion approach going to effect how fast your transactions might get into a block? If I'm not rushed...?
@@ -245,7 +245,7 @@ Work has started on something called "peer interrogation" to basically more rapi
There's been ongoing work on improved block fetch robustness. So right now the way that fetching works is that assuming you're not using compact blocks high-bandwidth opportunistic send where peers send blocks even without you asking for it, you will only ask for blocks from a single peer at a time. So if I say give me a block, and he falls asleep and doesn't send it, I will wait for a long multi-minute timeout before I try to get the block from someone else. So there's some work ongoing to have the software try to fetch a block from multiple peers at the same time, and occasionally waste time.
-Another thing that I expect to come in relatively soon is ... proposal for <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-June/014474.html">gcs-lite-client BIP</a> for bloom "map" for blocks. We'll implement that in Bitcoin Core as well.
+Another thing that I expect to come in relatively soon is ... proposal for <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-June/014474.html">gcs-lite-client BIP</a> for bloom "map" for blocks. We'll implement that in Bitcoin Core as well.
# Further further....
@@ -347,7 +347,7 @@ A: A million times faster than nvidia chip... bitcoin mining today is done with
Q: There's a few different proposals and trying to solve the same thing, like weak blocks, thin blocks, invertible bloom filters, is there anything in that realm on the horizon, what do you think is the most probable development there?
-A: There's a class of proposals called pre-consensus, like <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011158.html">weak blocks</a> (or <a href="https://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/invertible-bloom-lookup-tables-and-weak-block-propagation-performance/">here</a>), where participants come to agreement in advance on what they will put into blocks before it is found. I think those techniques are neat, I've done some work on them, I think other people will work on them. There are many design choices, we could run multiples of these in parallel. We have made great progress on <a href="https://people.xiph.org/~greg/efficient.block.xfer.txt">efficient block transmission</a> with <a href="https://www.reddit.com/r/btc/comments/6p076l/segwit_only_allows_170_of_current_transactions/dkmugw5/">FIBRE and compact blocks</a>. We went 5000 blocks ish without an orphan a couple weeks ago, so between FIBRE and v0.14 speedups, we've seen the orphan rate drop, it's not as much of a concern. We might see it pop back up as segwit gets utilized, we'll have to see.
+A: There's a class of proposals called pre-consensus, like <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011158.html">weak blocks</a> (or <a href="https://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/invertible-bloom-lookup-tables-and-weak-block-propagation-performance/">here</a>), where participants come to agreement in advance on what they will put into blocks before it is found. I think those techniques are neat, I've done some work on them, I think other people will work on them. There are many design choices, we could run multiples of these in parallel. We have made great progress on <a href="https://people.xiph.org/~greg/efficient.block.xfer.txt">efficient block transmission</a> with <a href="https://www.reddit.com/r/btc/comments/6p076l/segwit_only_allows_170_of_current_transactions/dkmugw5/">FIBRE and compact blocks</a>. We went 5000 blocks ish without an orphan a couple weeks ago, so between FIBRE and v0.14 speedups, we've seen the orphan rate drop, it's not as much of a concern. We might see it pop back up as segwit gets utilized, we'll have to see.
Q: Is rolling utxo hash a segwit precondition?
@@ -367,7 +367,7 @@ A: I think that the most important highlight is education. Ultimately bitcoin is
Q: Hey uh... so it sounds like you, in November whenever s2x gets implemented, and it gets more work than bitcoin, I mean it sounds like you consider it an altcoin it's like UASF territory at what point is bitcoin is bitcoin and what would y'all do with sha256 algorithm if s2x gets more work on it?
-A: I think that <a href="https://en.bitcoin.it/wiki/Segwit_support">the major contributors on Bitcoin Core are pretty consistent and clear on their views on s2x</a> (<a href="https://bitcoincore.org/en/2017/08/18/btc1-misleading-statements/">again</a>), we're <a href="https://www.reddit.com/r/Bitcoin/comments/6h612o/can_someone_explain_to_me_why_core_wont_endorse/divtc93/">not interested</a> and <a href="https://www.reddit.com/r/Bitcoin/comments/6p3sex/why_segwit2x_is_the_best_path_for_bitcoin_jeff/dkmdlcy/?ontext=2">not going along with it</a>. I think it will be unlikely to get more work on it. Miners are going to follow the money. Hypothetically? Well, I've never been of the opinion that more work matters. It's always secondary to following the rules. Ethereum might have had more joules pumped into its mining than bitcoin, although I haven't done the math that's at least possible. I wouldn't say ethereum is now bitcoin though... just because of the joules. Every version of bitcoin all the way back has had nodes enforcing the rules. It's essential to bitcoin. Can I think bitcoin can hard-fork? Yeah, but all the users have to agree, and maybe that's hard to achieve because we can do things without hard-forks. And I think that's fine. If we can change bitcoin for a controversial change, then I think that's bad because you could make other controversial changes. Bitcoin is a digital asset that is not going to change out from you. As for future proof-of-work functions, that's unclear. If s2x gets more hashrate, then I think that would be because users as a whole were adopting it, and I think if that was the case then perhaps the Bitcoin developers would go do something else instead of Bitcoin development. It <a href="https://github.com/BitcoinHardfork/bitcoin/pull/1">might make sense to use a different proof of work function</a>. Changing a PoW function is a nuclear option and you don't do it unless you have no other choice. 
But if you have no other choice, <a href="https://twitter.com/bramcohen/status/843917600119832576?lang=en">yeah</a> <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013744.html?utm_content=buffer3b2c4&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer">do it</a>.
+A: I think that <a href="https://en.bitcoin.it/wiki/Segwit_support">the major contributors on Bitcoin Core are pretty consistent and clear on their views on s2x</a> (<a href="https://bitcoincore.org/en/2017/08/18/btc1-misleading-statements/">again</a>), we're <a href="https://www.reddit.com/r/Bitcoin/comments/6h612o/can_someone_explain_to_me_why_core_wont_endorse/divtc93/">not interested</a> and <a href="https://www.reddit.com/r/Bitcoin/comments/6p3sex/why_segwit2x_is_the_best_path_for_bitcoin_jeff/dkmdlcy/?ontext=2">not going along with it</a>. I think it will be unlikely to get more work on it. Miners are going to follow the money. Hypothetically? Well, I've never been of the opinion that more work matters. It's always secondary to following the rules. Ethereum might have had more joules pumped into its mining than bitcoin, although I haven't done the math that's at least possible. I wouldn't say ethereum is now bitcoin though... just because of the joules. Every version of bitcoin all the way back has had nodes enforcing the rules. It's essential to bitcoin. Can I think bitcoin can hard-fork? Yeah, but all the users have to agree, and maybe that's hard to achieve because we can do things without hard-forks. And I think that's fine. If we can change bitcoin for a controversial change, then I think that's bad because you could make other controversial changes. Bitcoin is a digital asset that is not going to change out from you. As for future proof-of-work functions, that's unclear. If s2x gets more hashrate, then I think that would be because users as a whole were adopting it, and I think if that was the case then perhaps the Bitcoin developers would go do something else instead of Bitcoin development. It <a href="https://github.com/BitcoinHardfork/bitcoin/pull/1">might make sense to use a different proof of work function</a>. Changing a PoW function is a nuclear option and you don't do it unless you have no other choice. 
But if you have no other choice, <a href="https://twitter.com/bramcohen/status/843917600119832576?lang=en">yeah</a> <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013744.html?utm_content=buffer3b2c4&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer">do it</a>.
Q: So sha256 is not a defining characteristic of bitcoin?
diff --git a/transcripts/gmaxwell-2017-11-27-advances-in-block-propagation.mdwn b/transcripts/gmaxwell-2017-11-27-advances-in-block-propagation.mdwn
index ed5280f..76872f2 100644
--- a/transcripts/gmaxwell-2017-11-27-advances-in-block-propagation.mdwn
+++ b/transcripts/gmaxwell-2017-11-27-advances-in-block-propagation.mdwn
@@ -109,7 +109,7 @@ Summarizing compact blocks and its two modes-- the high bandwidth mode takes hal
# What about xthin?
-There's another protocol called xthin which I'll comment on it briefly because otherwise people are going to ask me about it. So, it was a parallel development where basically Matt and Pieter did this bloom filter block stuff in 2013. Mike Hearn re-earthed it and made a patch for Bitcoin-XT which didn't work particularly well. The BU people picked up Mike Hearn's work and apparently they were unaware of all the other development that had been in progress on this-- developers don't always communicate well... It's a very similar protocol to compact blocks. Some BU advocates have irritated me pretty extensively by arguing that compact blocks was copied from xthin. Who cares, and it wasn't. They were just parallel developed. It has some major differences. One is that it uses 8 bytes, so 64-bit short ids, and it doesn't salt it, which means it has this vulnerability where you could easily construct a collision. It turns out that a bunch of the BU developers and their advocates didn't know about the <a href="https://en.wikipedia.org/wiki/Birthday_attack">birthday paradox</a> effect on how easy it is to create collisions. They <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012647.html">argued it wasn't feasible to construct a 64-bit collision</a> and I had a bunch of <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012648.html">fun</a> on reddit for 2 days responding to every post giving out collisions generated bespoke from the messages because they said it would take hours to generate them. I was generating them in about 3 seconds or something. So, the other thing that it does differently from compact blocks is that it can't be used in this high-bandwidth mode where you send an unsolicited block. 
With xthin, there's always an INV message, and then the response to the INV message, sends a bloom filter of the receiving node's mempool, basically says "here's an approximate list of the transactions I know about", and as a result of that bloom filter which is on the order of 20kb typically, the most of the time there's no need to get extra transactions because the sender will know what transactions are missing and it will send them. So basically I look at this optimization is that it costs a constant 1 roundtrip time because you can't use high-bandwidth mode, plus bandwidth plus CPU plus attack surface, to save a roundtrip time less than 15% of the time because high bandwidth mode normally doesn't have roundtrip time 85% of the time. I don't think that is useless, I think it would be useful for the non high bandwidth mode use case, but it's a lot more code and attack surface, for a relatively low improvement. This is particularly interesting for xthin because of political drama it was rushed in production because they wanted to claim they had it first, and it resulted in at least three exploited crash bugs that knocked out every Bitcoin Unlimited node on the network. And every Bitcoin Classic fork node on the network (not Bitcoin Core nodes). And when they fixed some of those bugs later, that could cause nodes to get split from the network; particularly, interestingly, they introduced a bug where if you introduced a short id collision, the node would get stuck, a combination of two vulnerabilities so kind of something to learn from that.
+There's another protocol called xthin which I'll comment on it briefly because otherwise people are going to ask me about it. So, it was a parallel development where basically Matt and Pieter did this bloom filter block stuff in 2013. Mike Hearn re-earthed it and made a patch for Bitcoin-XT which didn't work particularly well. The BU people picked up Mike Hearn's work and apparently they were unaware of all the other development that had been in progress on this-- developers don't always communicate well... It's a very similar protocol to compact blocks. Some BU advocates have irritated me pretty extensively by arguing that compact blocks was copied from xthin. Who cares, and it wasn't. They were just parallel developed. It has some major differences. One is that it uses 8 bytes, so 64-bit short ids, and it doesn't salt it, which means it has this vulnerability where you could easily construct a collision. It turns out that a bunch of the BU developers and their advocates didn't know about the <a href="https://en.wikipedia.org/wiki/Birthday_attack">birthday paradox</a> effect on how easy it is to create collisions. They <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012647.html">argued it wasn't feasible to construct a 64-bit collision</a> and I had a bunch of <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012648.html">fun</a> on reddit for 2 days responding to every post giving out collisions generated bespoke from the messages because they said it would take hours to generate them. I was generating them in about 3 seconds or something. So, the other thing that it does differently from compact blocks is that it can't be used in this high-bandwidth mode where you send an unsolicited block. 
With xthin, there's always an INV message, and then the response to the INV message, sends a bloom filter of the receiving node's mempool, basically says "here's an approximate list of the transactions I know about", and as a result of that bloom filter which is on the order of 20kb typically, the most of the time there's no need to get extra transactions because the sender will know what transactions are missing and it will send them. So basically I look at this optimization is that it costs a constant 1 roundtrip time because you can't use high-bandwidth mode, plus bandwidth plus CPU plus attack surface, to save a roundtrip time less than 15% of the time because high bandwidth mode normally doesn't have roundtrip time 85% of the time. I don't think that is useless, I think it would be useful for the non high bandwidth mode use case, but it's a lot more code and attack surface, for a relatively low improvement. This is particularly interesting for xthin because of political drama it was rushed in production because they wanted to claim they had it first, and it resulted in at least three exploited crash bugs that knocked out every Bitcoin Unlimited node on the network. And every Bitcoin Classic fork node on the network (not Bitcoin Core nodes). And when they fixed some of those bugs later, that could cause nodes to get split from the network; particularly, interestingly, they introduced a bug where if you introduced a short id collision, the node would get stuck, a combination of two vulnerabilities so kind of something to learn from that.
# What about "Xpedied"?
diff --git a/transcripts/gmaxwell-confidential-transactions.mdwn b/transcripts/gmaxwell-confidential-transactions.mdwn
index 527809f..b70755f 100644
--- a/transcripts/gmaxwell-confidential-transactions.mdwn
+++ b/transcripts/gmaxwell-confidential-transactions.mdwn
@@ -188,7 +188,7 @@ Well, you can use CT for bitcoin already with sidechains. And that's what we're
Politics are a big hurdle-- some people don't want bitcoin to improve on the privacy aspects, and some people don't want bitcoin to improve at all. But I think that privacy and the existence of CT and sidechains and so on will remove these arguments. If bitcoin should use CT, and competing systems use it, I don't think it will take forever.
-There are designs for <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-January/012194.html">soft-fork CT</a> and deploying it in a backwards-compatible manner, but right now they have some severe limitations, in particular if the chain has a reorgs once coins have been moved from CT back to non-CT transactions, the transactions around that reorg wont survive. Once you break the coins out of CT, you have to have a protocol rule to not spend the coins for like 100 blocks. This is the same issue that extension block proposals have, and is a reason why I have not been too supportive of extension block proposals in the past. If it's the only way to do it then maybe it's the only viable way, but I'd really like to find something better-- haven't yet, but there are many things that in bitcoin I have looked at and didn't immediately know how to do better until later.
+There are designs for <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-January/012194.html">soft-fork CT</a> and deploying it in a backwards-compatible manner, but right now they have some severe limitations, in particular if the chain has a reorg once coins have been moved from CT back to non-CT transactions, the transactions around that reorg won't survive. Once you break the coins out of CT, you have to have a protocol rule to not spend the coins for like 100 blocks. This is the same issue that extension block proposals have, and is a reason why I have not been too supportive of extension block proposals in the past. If it's the only way to do it then maybe it's the only viable way, but I'd really like to find something better-- haven't yet, but there are many things in bitcoin that I have looked at and didn't immediately know how to do better until later.
So the tech is still maturing for CT. If the reason for it not be deploying right now is 15x, well is 10x enough? With improvements we can get this down to lower amounts. As the technology improves, we could make this story better. I am happy to help with other people experimenting with this. I think it's premature to do CT on litecoin today, Charlie, but I would like to see this get more use of course.
diff --git a/transcripts/greg-maxwell/greg-maxwell-taproot-pace.mdwn b/transcripts/greg-maxwell/greg-maxwell-taproot-pace.mdwn
index d88e453..8a6dbf6 100644
--- a/transcripts/greg-maxwell/greg-maxwell-taproot-pace.mdwn
+++ b/transcripts/greg-maxwell/greg-maxwell-taproot-pace.mdwn
@@ -10,7 +10,7 @@ https://www.reddit.com/r/Bitcoin/comments/hrlpnc/technical_taproot_why_activate/
# Is Taproot development moving too fast or too slow?
-Taproot has been discussed for [2.5 years already](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html) and by the time it would activate it will certainly at this point be over three years.
+Taproot has been discussed for [2.5 years already](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html) and by the time it would activate it will certainly at this point be over three years.
The bulk of the Taproot proposal, other than Taproot itself and specific encoding details, is significantly older too. (Enough that earlier versions of our proposals have been copied and activated in other cryptocurrencies already)
@@ -18,7 +18,7 @@ Taproot's implementation is also extremely simple, and will make common operatio
Taproot's changes to bitcoin's consensus code are under 520 lines of difference, about 1/4th that of Segwit's. Unlike Segwit, Taproot requires no P2P changes or changes to mining software, nor do we have to have a whole new address type for it. It is also significantly [de-risked](https://twitter.com/theinstagibbs/status/1285018236719976448) by the script version extension mechanisms added by Segwit. It has also undergone significantly more review than P2SH did, which is the most analogous prior change and which didn't enjoy the benefits of Segwit.
-Segwit went from [early public discussions](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011935.html) to [merged](https://bitcoinmagazine.com/articles/segregated-witness-will-be-merged-into-bitcoin-core-release-soon-1466787770) in six months. So in spite of being more complex and subject to more debate due to splash back from the block size drama, Segwit was still done in significantly less time already.
+Segwit went from [early public discussions](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011935.html) to [merged](https://bitcoinmagazine.com/articles/segregated-witness-will-be-merged-into-bitcoin-core-release-soon-1466787770) in six months. So in spite of being more complex and subject to more debate due to splash back from the block size drama, Segwit was still done in significantly less time already.
Taproot has also been exceptionally widely discussed by the wider bitcoin community for a couple years now. Its application is narrow, users who don't care to use it are ultimately unaffected by it (it should decrease resource consumption by nodes, rather than increase it) and no one is forced to use it for their own coins. It also introduces new tools to make other future improvements simpler, safer (particularly Taproot leaf versions), and more private... so there is a good reason that other future improvements are waiting on Taproot.
diff --git a/transcripts/honey-badger-diaries/2020-04-24-kevin-loaec-antoine-poinsot-revault.mdwn b/transcripts/honey-badger-diaries/2020-04-24-kevin-loaec-antoine-poinsot-revault.mdwn
index cb67f23..eed3c3b 100644
--- a/transcripts/honey-badger-diaries/2020-04-24-kevin-loaec-antoine-poinsot-revault.mdwn
+++ b/transcripts/honey-badger-diaries/2020-04-24-kevin-loaec-antoine-poinsot-revault.mdwn
@@ -12,7 +12,7 @@ Video: https://www.youtube.com/watch?v=xDTCT75VwvU
Aaron: So you guys built something. First tell me are you a company? Is this a startup? What is the story here?
-Kevin: My personal interest in vaults started last year when Bryan Bishop published his [email](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html) on the mailing list. I was like “That is an interesting concept.” After that I started engaging with him on Twitter proposing a few different ideas. There were some limitations and some attacks around it. I didn’t go much further than that. At the end of the year a hedge fund reached out to my company Chainsmiths to architect a solution for them to be their own custodian while in a multi-stakeholder situation. They are four people in the company and they have two active traders that move funds from exchanges back to their company and stuff like that. They wanted a way to be able to have decent security within their own fund without having to rely on a third party like most funds do. I started working on that in December and quickly after that I started to reach out to other people who could help me. I reached out to Antoine and his company Leonod to help me build out the prototype and figure out the deep technical ideas about the architecture. Then Antoine helped me with the architecture as well to tweak a few things. Right now it is still a project that is open source and there is no owner of it as such. It was delivered as an open source project to our clients. Right now we are considering making it a product because it is just an architecture, it is just a prototype. Nobody can really use it today, it is just Python code, it is not secure or designed to be secure right now. We are trying to look for other people, companies that could support us either as sponsors or whatever for building the implementation. Or setting up a separate entity like a spin off of our company just to focus on that.
+Kevin: My personal interest in vaults started last year when Bryan Bishop published his [email](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html) on the mailing list. I was like “That is an interesting concept.” After that I started engaging with him on Twitter proposing a few different ideas. There were some limitations and some attacks around it. I didn’t go much further than that. At the end of the year a hedge fund reached out to my company Chainsmiths to architect a solution for them to be their own custodian while in a multi-stakeholder situation. They are four people in the company and they have two active traders that move funds from exchanges back to their company and stuff like that. They wanted a way to be able to have decent security within their own fund without having to rely on a third party like most funds do. I started working on that in December and quickly after that I started to reach out to other people who could help me. I reached out to Antoine and his company Leonod to help me build out the prototype and figure out the deep technical ideas about the architecture. Then Antoine helped me with the architecture as well to tweak a few things. Right now it is still a project that is open source and there is no owner of it as such. It was delivered as an open source project to our clients. Right now we are considering making it a product because it is just an architecture, it is just a prototype. Nobody can really use it today, it is just Python code, it is not secure or designed to be secure right now. We are trying to look for other people, companies that could support us either as sponsors or whatever for building the implementation. Or setting up a separate entity like a spin off of our company just to focus on that.
Aaron: I’m not sure what the best order is to tackle this. Why didn’t they just use Bryan’s vault design?
@@ -56,7 +56,7 @@ Antoine: A big point that we can easily forget is that it reduces the incentive
Aaron: You have coded this up? Is it ready to be used? What is the status of it?
-Antoine: The architecture is almost final I think. I am going to write a [post](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017793.html) to the mailing list to get some feedback from other people, other Bitcoin developers. Maybe we overlooked something. We can’t be sure but the state of implementation is a toy implementation, a demo which doesn’t even have a user interface. I did run some functional tests so it works. Maybe we overlooked something.
+Antoine: The architecture is almost final I think. I am going to write a [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017793.html) to the mailing list to get some feedback from other people, other Bitcoin developers. Maybe we overlooked something. We can’t be sure but the state of implementation is a toy implementation, a demo which doesn’t even have a user interface. I did run some functional tests so it works. Maybe we overlooked something.
Aaron: You have mentioned this trading desk. What are other good examples of companies that could use it?
diff --git a/transcripts/la-bitdevs/2020-06-18-luke-dashjr-segwit-psbt-vulnerability.mdwn b/transcripts/la-bitdevs/2020-06-18-luke-dashjr-segwit-psbt-vulnerability.mdwn
index 9240c02..e67cee0 100644
--- a/transcripts/la-bitdevs/2020-06-18-luke-dashjr-segwit-psbt-vulnerability.mdwn
+++ b/transcripts/la-bitdevs/2020-06-18-luke-dashjr-segwit-psbt-vulnerability.mdwn
@@ -12,7 +12,7 @@ CVE: https://nvd.nist.gov/vuln/detail/CVE-2020-14199
Trezor blog post on the vulnerability: https://blog.trezor.io/latest-firmware-updates-correct-possible-segwit-transaction-vulnerability-266df0d2860
-Greg Sanders Bitcoin dev mailing list post in April 2017: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html
+Greg Sanders' Bitcoin dev mailing list post in August 2017: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html
# The vulnerability
diff --git a/transcripts/layer2-summit/2018/lightning-overview.mdwn b/transcripts/layer2-summit/2018/lightning-overview.mdwn
index 415080d..509d958 100644
--- a/transcripts/layer2-summit/2018/lightning-overview.mdwn
+++ b/transcripts/layer2-summit/2018/lightning-overview.mdwn
@@ -38,7 +38,7 @@ Channels on their own are great, but they aren't enough. A channel when you thin
# Atomic multi-path payments
-<https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/000993.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/000993.html>
Now we're going to move on to one new technology called atomic multi-path payments (AMPs). The problem is that you need a path on the network between multiple nodes on the graph. The problem is that if Alice wants to send 8 BTC she has to-- these numbers are the capacities in the direction towards Felix. There's a capacity in both directions. If Alice wants to pay Felix she wants to send 8 BTC but she can't because each path on its own doesn't have enough capacity. And Felix at a time can only receive up to 10 BTC because he has that inbound liquidity, but he's unable to because of the single path constraint. This is solved by atomic multi-path payments.
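The share-combining idea behind AMPs can be sketched in a few lines. This is a hedged illustration, not the spec: the function names, the two-byte index, and deriving each per-path preimage as SHA-256(base_preimage || index) are assumptions for this sketch; the lightning-dev proposal pins down the exact derivation.

```python
import hashlib
import secrets

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def xor_all(chunks) -> bytes:
    # XOR a list of 32-byte strings together.
    out = bytes(32)
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def sender_setup(n: int):
    """Sender side: split one payment into n partial payments."""
    shares = [secrets.token_bytes(32) for _ in range(n)]  # one share per path
    base = xor_all(shares)                                # base preimage
    # Per-path preimages and payment hashes (derivation is illustrative).
    preimages = [h(base + i.to_bytes(2, "big")) for i in range(n)]
    return shares, [h(r) for r in preimages]

def receiver_settle(shares, i: int) -> bytes:
    """Receiver side: once all shares arrive, settle HTLC i."""
    base = xor_all(shares)
    return h(base + i.to_bytes(2, "big"))  # preimage for payment hash i
```

The atomicity comes from the receiver only being able to reconstruct the base preimage, and hence any settling preimage, after every partial payment's share has arrived.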
@@ -62,7 +62,7 @@ We're going to be working on AMPs in the next few months.
# Splicing overview
-<https://lists.linuxfoundation.org/pipermail/lightning-dev/2017-May/000692.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2017-May/000692.html>
Splicing is another cool technology. I think roasbeef came up with the scheme I'm going to present today. There's a lot of different ways to do this. We're still in the research phase. I am just going to present an example that is sort of familiar and easy to graph.
@@ -102,7 +102,7 @@ In terms of sweeping outputs, here's an example script of one of the scripts in
HTLC output scripts are a little more involved but they have script templates too and follow the same general format. There's SIGHASH\_ALL which is one of the sighash flags used in bitcoin required for this... the state space can manifest on chain because of these 2-state layers of HTLC. SIGHASH\_SINGLE is more liberal than SIGHASH\_ALL and allows us to get this down to a linear amount of space required for the signatures.
-And finally, in <a href="https://blockstream.com/eltoo.pdf">eltoo</a>, there's a recent proposal for <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-April/015908.html">SIGHASH\_NOINPUT</a> which is evne more liberal and it requires just one signature for all HTLCs. This will make this watchtower stuff pretty optimal in my opinion.
+And finally, in <a href="https://blockstream.com/eltoo.pdf">eltoo</a>, there's a recent proposal for <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-April/015908.html">SIGHASH\_NOINPUT</a> which is even more liberal and it requires just one signature for all HTLCs. This will make this watchtower stuff pretty optimal in my opinion.
# Watchtower upgrade proposals
diff --git a/transcripts/layer2-summit/2018/scriptless-scripts.mdwn b/transcripts/layer2-summit/2018/scriptless-scripts.mdwn
index 14adeed..004cb6c 100644
--- a/transcripts/layer2-summit/2018/scriptless-scripts.mdwn
+++ b/transcripts/layer2-summit/2018/scriptless-scripts.mdwn
@@ -124,7 +124,7 @@ Because I am encoding all of these semantics into signatures themselves at the t
# New developments
-<a href="https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html">Lightning with scriptless scripts</a>-- getting that into lightning protocol is quite difficult. ajtowns has decided that he is doing it. He posted a message to lightning-dev, and he's doing it. That's awesome.
+<a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html">Lightning with scriptless scripts</a>-- getting that into lightning protocol is quite difficult. ajtowns has decided that he is doing it. He posted a message to lightning-dev, and he's doing it. That's awesome.
Doing <a href="http://diyhpl.us/~bryan/papers2/bitcoin/Scriptless%20scripts%20with%20ECDSA%20-%202018-04-26.pdf">scriptless scripts with ECDSA</a> would be interesting today. Monero maybe-- doesn't have refund support. None of this works unless you have ECDSA. It turns out there was a paper dropped on lightning-dev about this. There are some groups working on implementing this multi-party ECDSA protocol which is exciting. This could be happening today. People could be doing it today, there's no evidence on the blockchain. You have no idea how many people were involved or what kind of smart contract they were involved in. And if you are working for a Chainalysis company then you are lying to yourself and others.
diff --git a/transcripts/lets-talk-bitcoin-podcast/2017-06-04-consensus-uasf-and-forks.mdwn b/transcripts/lets-talk-bitcoin-podcast/2017-06-04-consensus-uasf-and-forks.mdwn
index 4b9cd1e..eae2804 100644
--- a/transcripts/lets-talk-bitcoin-podcast/2017-06-04-consensus-uasf-and-forks.mdwn
+++ b/transcripts/lets-talk-bitcoin-podcast/2017-06-04-consensus-uasf-and-forks.mdwn
@@ -74,7 +74,7 @@ AL: It is just the redundancy we are talking about, nobody can force a sea chang
# Innovation on upgrade mechanisms
-AA: I think we are going to see resolution of the scaling debate, let me add that as a positive and optimistic note. And the reason we are going to see resolution of the scaling debate is rather simple. Over the last two years during this scaling debate we have seen the emergence of more than a dozen proposals. We have seen an enormous amount of research and development and resources poured into finding technical means by which to clarify, secure, make these solutions better, less disruptive, understand the implications of the different mechanisms for upgrading. The state of the art on upgrading a decentralized consensus algorithm has advanced tremendously. Two years ago we didn’t even have the term hard fork, now we are talking about four different categories of deliberate upgrade forks, miner activated, user activated, soft fork, hard fork and all the combinations. And we are in fact discovering there might be more nuances within that. SegWit didn’t exist two years ago, a bit more than two years ago. SegWit as a soft fork was an invention specifically designed to cause a less disruptive approach towards the scaling debate. That is currently being signaled under BIP 9. BIP 9 itself as a signaling mechanism for miner activated soft forks which were not signaled in that way before, is a new one. [Spoonnet](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013542.html), Spoonnet2 and potentially Spoonnet3 which are a series of proposals by Johnson Lau that give us a reformatting of the block header in order to solve several different problems, not just the block size but also to make hard forks much cleaner and less disruptive by ensuring that there is replay protection, so transactions cannot be replayed from one side of the fork to the other, as well as wipeout protection so that you can’t have a very, very long chain develop that then gets wiped out in a reorganization. Those developments did not exist two years ago. 
-We are now much, much more advanced in our understanding, in our research and in our development to do these things in a live network and upgrade them. The proposals that come out are more and more complicated, in some cases that is counterproductive, but they are also more and more sophisticated. You are seeing that people are actively trying to find ways to create solutions both political which I don’t think is the right approach, but also technical solutions to this debate that try to resolve the underlying technical issues in a way that is the least disruptive possible. And I am confident that eventually we are going to see convergence on a solution that is broadly acceptable, that offers a road forward, that is least disruptive, that the community and every one of the five constituencies get behind. And we will see pretty much all of the above. We’ll see an activation of something that is SegWit or very similar to Segregated Witness for transaction malleability and witness scaling and all the other things that SegWit does. We are going to see a base block size increase in addition to SegWit’s block weight increase eventually. We are going to see a reformatting of the block header in order to introduce new features such as extra nonces for miners, a more flexible header format that improves a lot of other issues. We might see a change in the transaction format. We are going to see Schnorr signatures, we are going to see signature aggregation techniques, we are going to see UTXO sets and potentially things like MMR, Merkle Mountain Ranges and other proposals for creating fraud proofs and verifiable UTXO sets and optimizations like that. All of the above is the scaling solution. The question that remains is not what do we do, the question that remains is in what sequence and how do we do it in the safest, least disruptive way to the broader ecosystem. That question has not been resolved and it is a technical issue.
-It is overshadowed by the political struggle and the power struggle but at the bottom line this is a matter of science. I am confident that we will see a road forward.
+AA: I think we are going to see resolution of the scaling debate, let me add that as a positive and optimistic note. And the reason we are going to see resolution of the scaling debate is rather simple. Over the last two years during this scaling debate we have seen the emergence of more than a dozen proposals. We have seen an enormous amount of research and development and resources poured into finding technical means by which to clarify, secure, make these solutions better, less disruptive, understand the implications of the different mechanisms for upgrading. The state of the art on upgrading a decentralized consensus algorithm has advanced tremendously. Two years ago we didn’t even have the term hard fork, now we are talking about four different categories of deliberate upgrade forks, miner activated, user activated, soft fork, hard fork and all the combinations. And we are in fact discovering there might be more nuances within that. SegWit didn’t exist two years ago, a bit more than two years ago. SegWit as a soft fork was an invention specifically designed to cause a less disruptive approach towards the scaling debate. That is currently being signaled under BIP 9. BIP 9 itself as a signaling mechanism for miner activated soft forks which were not signaled in that way before, is a new one. [Spoonnet](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013542.html), Spoonnet2 and potentially Spoonnet3 which are a series of proposals by Johnson Lau that give us a reformatting of the block header in order to solve several different problems, not just the block size but also to make hard forks much cleaner and less disruptive by ensuring that there is replay protection, so transactions cannot be replayed from one side of the fork to the other, as well as wipeout protection so that you can’t have a very, very long chain develop that then gets wiped out in a reorganization. Those developments did not exist two years ago. 
+We are now much, much more advanced in our understanding, in our research and in our development to do these things in a live network and upgrade them. The proposals that come out are more and more complicated, in some cases that is counterproductive, but they are also more and more sophisticated. You are seeing that people are actively trying to find ways to create solutions both political which I don’t think is the right approach, but also technical solutions to this debate that try to resolve the underlying technical issues in a way that is the least disruptive possible. And I am confident that eventually we are going to see convergence on a solution that is broadly acceptable, that offers a road forward, that is least disruptive, that the community and every one of the five constituencies get behind. And we will see pretty much all of the above. We’ll see an activation of something that is SegWit or very similar to Segregated Witness for transaction malleability and witness scaling and all the other things that SegWit does. We are going to see a base block size increase in addition to SegWit’s block weight increase eventually. We are going to see a reformatting of the block header in order to introduce new features such as extra nonces for miners, a more flexible header format that improves a lot of other issues. We might see a change in the transaction format. We are going to see Schnorr signatures, we are going to see signature aggregation techniques, we are going to see UTXO sets and potentially things like MMR, Merkle Mountain Ranges and other proposals for creating fraud proofs and verifiable UTXO sets and optimizations like that. All of the above is the scaling solution. The question that remains is not what do we do, the question that remains is in what sequence and how do we do it in the safest, least disruptive way to the broader ecosystem. That question has not been resolved and it is a technical issue.
+It is overshadowed by the political struggle and the power struggle but at the bottom line this is a matter of science. I am confident that we will see a road forward.
# Prospect of a community split and an altcoin
diff --git a/transcripts/lightning-conference/2019/2019-10-20-antoine-riard-rust-lightning.mdwn b/transcripts/lightning-conference/2019/2019-10-20-antoine-riard-rust-lightning.mdwn
index 81550cd..34c2a5f 100644
--- a/transcripts/lightning-conference/2019/2019-10-20-antoine-riard-rust-lightning.mdwn
+++ b/transcripts/lightning-conference/2019/2019-10-20-antoine-riard-rust-lightning.mdwn
@@ -16,7 +16,7 @@ Hi everyone, super happy to be here at the Lightning Conference. I’ve had an a
# Why Lightning?
-So why Lightning? Why are we here? What do we want to build with Lightning? Do we want to reach Bitcoin promises of instant transaction, scaling to the masses, these types of hopes? Do we want to enable fancy financial contracts? Do we want to build streams of microtransactions? It is not really clear. When you are reading the Lightning white paper people have different views on how you can use Lightning and what you can use Lightning for? Why should you work on Lightning if you are a young developer? It is one of the most wide and unchartered territories. There are so many things to do, so many things to build, it is really exciting. There are still a lot of unknowns. We are building this network of pipes but we don’t know yet the how of the pipes. We don’t know what they will be used for, we don’t know where they will be used and by who. There is a lot of uncertainty. Right now it is single funded channels, really simple to understand. Tomorrow there are things like channel factories, multiparty channels… maybe splicing and a coinjoin transaction will open a set of channels. Maybe something like OP_SECUREBAG to do Lightning… There are a lot of efforts. So what are we going to send through these pipes? Are we going to send only HTLC or more complex stuff like DLC or a combination of DLC, conditional payments. If you follow [Lightning-dev](https://lists.linuxfoundation.org/pipermail/lightning-dev/) there is an awesome ongoing conversation on payment points and what you can build thanks to that. Where? Are we going to deploy Lightning on the internet? There are a lot of ideas on how to use Lightning to fund mesh nets and this kind of stuff. Or it could be a device and you are going to pay for what you consume from a stream. Maybe hardware security modules if you are an exchange, you are going to deploy Lightning on some architecture without a broadband connection. Who are going to use our stuff? I think that it is the biggest question to ask. 
-You don’t have the same bandwidth if you live in New York or you live in South Africa or you live in Germany. People have different viewpoints on this, they have different resources. A basic consumer is not going to use Lightning the way a merchant is going to use Lightning, Lightning liquidity providers are going to set infrastructures. There are a lot of open questions. Who? What? How? When? We can look at the history of software engineering and how it solves these issues. I believe in the UNIX philosophy of doing something similar, doing something modular and combine the building blocks.
+So why Lightning? Why are we here? What do we want to build with Lightning? Do we want to reach Bitcoin promises of instant transactions, scaling to the masses, these types of hopes? Do we want to enable fancy financial contracts? Do we want to build streams of microtransactions? It is not really clear. When you are reading the Lightning white paper people have different views on how you can use Lightning and what you can use Lightning for. Why should you work on Lightning if you are a young developer? It is one of the widest, most uncharted territories. There are so many things to do, so many things to build, it is really exciting. There are still a lot of unknowns. We are building this network of pipes but we don’t know yet the how of the pipes. We don’t know what they will be used for, we don’t know where they will be used and by who. There is a lot of uncertainty. Right now it is single funded channels, really simple to understand. Tomorrow there are things like channel factories, multiparty channels… maybe splicing and a coinjoin transaction will open a set of channels. Maybe something like OP_SECUREBAG to do Lightning… There are a lot of efforts. So what are we going to send through these pipes? Are we going to send only HTLCs or more complex stuff like DLCs or a combination of DLCs, conditional payments? If you follow [Lightning-dev](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/) there is an awesome ongoing conversation on payment points and what you can build thanks to that. Where? Are we going to deploy Lightning on the internet? There are a lot of ideas on how to use Lightning to fund mesh nets and this kind of stuff. Or it could be a device and you are going to pay for what you consume from a stream. Maybe hardware security modules if you are an exchange, you are going to deploy Lightning on some architecture without a broadband connection. Who is going to use our stuff? I think that it is the biggest question to ask.
+You don’t have the same bandwidth if you live in New York or you live in South Africa or you live in Germany. People have different viewpoints on this, they have different resources. A basic consumer is not going to use Lightning the way a merchant is going to use Lightning, Lightning liquidity providers are going to set up infrastructure. There are a lot of open questions. Who? What? How? When? We can look at the history of software engineering and how it solves these issues. I believe in the UNIX philosophy of doing something similar, doing something modular and combining the building blocks.
# rust-lightning
diff --git a/transcripts/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing.mdwn b/transcripts/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing.mdwn
index 230b812..f74fa56 100644
--- a/transcripts/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing.mdwn
+++ b/transcripts/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing.mdwn
@@ -44,7 +44,7 @@ Right now when we are doing AMP in the network with normal payments the sender d
# To Infinity and Beyond
-That is all I had. This is a high level view. Of course there are a lot of gory details that are not completely fleshed out yet and we would love to have more feedback on the proposal if you have ideas. There are parts of it that we are not completely happy with, especially when dealing with legacy recipients that do not support trampoline. If you want to look at the proposal and add your ideas or contribute we would welcome that. There is currently a [spec PR](https://github.com/lightningnetwork/lightning-rfc/pull/654) that we will update soon because some small things have changed. I sent a [mail](https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-August/002100.html) to the mailing list a few months ago to detail the high level view of how we think it is going to work. Have a look at that and don’t hesitate to put comments in there or just come and reach me to ask any questions. I think we have two minutes for questions.
+That is all I had. This is a high level view. Of course there are a lot of gory details that are not completely fleshed out yet and we would love to have more feedback on the proposal if you have ideas. There are parts of it that we are not completely happy with, especially when dealing with legacy recipients that do not support trampoline. If you want to look at the proposal and add your ideas or contribute we would welcome that. There is currently a [spec PR](https://github.com/lightningnetwork/lightning-rfc/pull/654) that we will update soon because some small things have changed. I sent a [mail](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-August/002100.html) to the mailing list a few months ago to detail the high level view of how we think it is going to work. Have a look at that and don’t hesitate to put comments in there or just come and reach me to ask any questions. I think we have two minutes for questions.
# Q & A
diff --git a/transcripts/lightning-conference/2019/2019-10-20-nadav-kohen-payment-points.mdwn b/transcripts/lightning-conference/2019/2019-10-20-nadav-kohen-payment-points.mdwn
index 7ffd59b..e0bbcec 100644
--- a/transcripts/lightning-conference/2019/2019-10-20-nadav-kohen-payment-points.mdwn
+++ b/transcripts/lightning-conference/2019/2019-10-20-nadav-kohen-payment-points.mdwn
@@ -48,7 +48,7 @@ This next scheme proposed by Z-man is called escrow over Lightning. Essentially
# Selling (Schnorr) Signatures
-Another thing we can do with points is you can sell Schnorr signatures trustlessly. You can sell Schnorr signatures over Lightning today with HTLCs but it is not trustless and also the signature will get revealed to everyone if any of the hops go onchain. Essentially what I mean by selling Schnorr signatures over the Lightning Network is you use your Schnorr signature, the last 32 bytes of it, as the payment preimage. If you wanted to do that today, the person who sets up the payment has no way of knowing that the hash that they’re using is actually the hash of the signature that they want. Here’s the math. Basically the thing that you need to know is that you can compute s*G where s is the signature of some message just from public information. R which is a public key, X which is a public key and the message. From public information and the message that you want a signature on with these specific public keys you can compute the public point, set up a payment using that point as the payment point. Then you know that you will get a valid signature to a specific message with specific keys if and only if your money gets claimed. In order to claim your money they must reveal the signature to you. Essentially we can trustlessly sell Schnorr signatures over the Lightning Network in nice, private ways. This can be used with blind Schnorr signatures as well. That is a [link](https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-July/002088.html) to a Jonas Nick talk from a while ago about how you can use blind servers to implement e-cash systems and various things like this. Here’s the great way you can sell signatures in a trustless fashion. Another thing that you can do selling signatures is you can have discreet log contract like options contracts. Essentially rather than having both parties have the ability to execute some contract, one party sells its signatures to some contract, to the other party in return for a premium. 
+Another thing we can do with points is you can sell Schnorr signatures trustlessly. You can sell Schnorr signatures over Lightning today with HTLCs but it is not trustless and also the signature will get revealed to everyone if any of the hops go onchain. Essentially what I mean by selling Schnorr signatures over the Lightning Network is you use your Schnorr signature, the last 32 bytes of it, as the payment preimage. If you wanted to do that today, the person who sets up the payment has no way of knowing that the hash that they’re using is actually the hash of the signature that they want. Here’s the math. Basically the thing that you need to know is that you can compute s*G where s is the signature of some message just from public information. R which is a public key, X which is a public key and the message. From public information and the message that you want a signature on with these specific public keys you can compute the public point, set up a payment using that point as the payment point. Then you know that you will get a valid signature to a specific message with specific keys if and only if your money gets claimed. In order to claim your money they must reveal the signature to you. Essentially we can trustlessly sell Schnorr signatures over the Lightning Network in nice, private ways. This can be used with blind Schnorr signatures as well. That is a [link](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-July/002088.html) to a Jonas Nick talk from a while ago about how you can use blind signatures to implement e-cash systems and various things like this. Here’s the great way you can sell signatures in a trustless fashion. Another thing that you can do selling signatures is you can have discreet log contract like options contracts. Essentially rather than having both parties have the ability to execute some contract, one party sells its signatures to some contract, to the other party in return for a premium.
You essentially then have an option on some future event. That is on the mailing list if you’re interested in hearing more. Just in general, selling Schnorr signatures is a new, nice atomic thing that you can use in various schemes.
# Pay for Decommitment (Pay for Nonce)
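The point computation described in the hunk above (computing s*G from R, X and the message, then using it as the payment point) can be sketched in a few lines. This is a toy illustration using the textbook Schnorr equation s = r + H(R||X||m)*x over secp256k1, not the exact encoding BIP340 or Lightning would use; the fixed keys and nonce are made-up example values.

```python
import hashlib

# secp256k1 parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def inv(a, m):
    # Modular inverse via Fermat's little theorem (m is prime).
    return pow(a, m - 2, m)

def add(p, q):
    # Elliptic curve point addition; None is the point at infinity.
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0: return None
    if p == q:
        lam = 3 * p[0] * p[0] * inv(2 * p[1], P) % P
    else:
        lam = (q[1] - p[1]) * inv(q[0] - p[0], P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def mul(k, p):
    # Double-and-add scalar multiplication.
    r = None
    while k:
        if k & 1: r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

def h(*parts):
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % N

def ser(p):
    return p[0].to_bytes(32, "big") + p[1].to_bytes(32, "big")

# Seller's keys: x (secret), X (public); per-signature nonce r, point R.
# These small fixed values are for readability only.
x, r = 12345, 67890
X, Rp = mul(x, G), mul(r, G)
msg = b"contract text"

# Buyer computes the payment point S = R + H(R||X||m)*X from public data only.
e = h(ser(Rp), ser(X), msg)
S = add(Rp, mul(e, X))

# Seller's Schnorr s-value: s = r + e*x mod n. Claiming the payment reveals s,
# which is exactly the discrete log of the payment point S.
s = (r + e * x) % N
assert mul(s, G) == S
```

So the buyer learns a valid signature scalar for (R, X, msg) if and only if the payment is claimed, which is the atomicity the talk describes.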
diff --git a/transcripts/lightning-hack-day/2020-05-03-christian-decker-lightning-backups.mdwn b/transcripts/lightning-hack-day/2020-05-03-christian-decker-lightning-backups.mdwn
index 09064e7..5d337c9 100644
--- a/transcripts/lightning-hack-day/2020-05-03-christian-decker-lightning-backups.mdwn
+++ b/transcripts/lightning-hack-day/2020-05-03-christian-decker-lightning-backups.mdwn
@@ -152,7 +152,7 @@ Q - What can we do to encourage better modularity? Is this important an approach
A - I think the modularity of the protocol and the modularity of the implementations pretty much go hand in hand. If the specification has very nice modular boundaries where you have separation of concerns, one thing manages updates of state and one thing manages how we communicate with the blockchain and one thing manages how we do multihop security. That automatically leads to a structure which is very modular. The issue that we currently have is that the Lightning penalty mechanism, namely the fact that whatever output we create in our state must be penalizable, makes it so that this update mechanism leaks into the rest of the protocol stack. I showed before how we punish the commitment transaction if I were ever to publish an old commitment. But if we had a HTLC attached to that, this HTLC too would have to have the facility for me to punish you if you published this old state with this HTLC that had been resolved correctly or incorrectly or timed out. It is really hard in the penalty mechanism to have a clear cut separation between the update mechanism and the multihop mechanism and whatever else we build on top of the update mechanism. They leak into each other. That is something that I really like about eltoo. We have this clear separation of this is the update mechanism and this is the multihop mechanism and there is no interference between the two of them. I think by clearing up the protocol stack we will end up with more modular implementations. Of course at c-lightning we try to expose as much as possible from the internals to plugins so that plugins are first class citizens in the Lightning nodes themselves. They have the same power as most of our pre-shipped tools have. One little known fact is that the pay command which is used to pay a BOLT11 invoice is also implemented as a plugin.
The plugin takes care of decoding the invoice, initiating a payment, retrying if a payment fails, splitting a payment if it is too large or adding a shadow route or adding fuzzing or all of this. It is all implemented in a plugin and the bare bones implementation of c-lightning is very light. It doesn’t come with a lot of bells and whistles but we make it so that you have the power of customizing it and so on. There we do try to keep a modular aspect to c-lightning despite the protocol not being a perfectly modular system itself.
-Q - There was a [mailing list post](https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-February/002547.html) from Joost (Jager). What are the issues with upfront payments? There was some discussion about it then it seems to have stopped. Why haven’t we seen more development in that direction yet?
+Q - There was a [mailing list post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-February/002547.html) from Joost (Jager). What are the issues with upfront payments? There was some discussion about it then it seems to have stopped. Why haven’t we seen more development in that direction yet?
A - Upfront payments is a proposal that came up when we first started probing the network. Probing involves sending a payment that can never terminate correctly. By looking at the error code we receive back we learn about the network. It is for free because the payments never actually terminate. That brought up the question of aren’t we using somebody else’s resources by creating HTLCs with their funds as well but not paying them for it? The idea came up of having upfront payments which means that if I try to route a payment I will definitely leave a fee even if that payment fails. That is neat but the balance between working and not working is hard to get right. The main issue is that if we pay upfront for them receiving a payment and not forwarding a payment then they may be happy to take the upfront fee and just fail without any stake in the system. If I were to receive an incoming HTLC from Jeff and I need to forward it to you Michael and Jeff is paying me 10 millisatoshis for the privilege of talking to me I might not actually take my half a Bitcoin and lock it up in a HTLC to you. I might be happy taking those 10 millisatoshis and say “I’m ok with this. You try another route.” It is an issue of incentivizing good behavior versus incentivizing abusing the system to maximize your outcome. A mix of upfront payments and fees contingent on the success of the actual payment is probably the right way. We need to discuss a bit more and people’s time is tight when it comes to these proposals. There hasn’t been too much movement I guess.
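The incentive problem Christian describes, a hop pocketing a pure upfront fee and failing, can be made concrete with a toy expected-value comparison; every number below is invented for illustration and is not a proposed parameter.

```python
# Toy routing-node incentives under different fee schemes.
p_success = 0.8   # chance the payment completes if forwarded (assumed)
lock_cost = 2     # cost, in msat-equivalents, of locking capital in the HTLC

def ev_forward(upfront_fee, success_fee):
    # Forwarding: keep the upfront fee, pay the lockup cost, and earn the
    # success-contingent fee only if the payment completes.
    return upfront_fee + p_success * success_fee - lock_cost

def ev_fail(upfront_fee):
    # Failing immediately: pocket the upfront fee, lock up nothing.
    return upfront_fee

# Pure upfront fee: failing dominates forwarding, i.e. broken incentives.
print(ev_fail(10) > ev_forward(10, 0))   # True

# Mix of upfront and success-contingent fees: forwarding wins again.
print(ev_forward(5, 20) > ev_fail(5))    # True
```

This is exactly the "take the 10 millisatoshis and tell Jeff to try another route" situation, and why the answer in the talk is a mix of upfront and success-contingent fees.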
diff --git a/transcripts/london-bitcoin-devs/2018-06-12-adam-gibson-unfairly-linear-signatures.mdwn b/transcripts/london-bitcoin-devs/2018-06-12-adam-gibson-unfairly-linear-signatures.mdwn
index b6ad3b6..4bf873d 100644
--- a/transcripts/london-bitcoin-devs/2018-06-12-adam-gibson-unfairly-linear-signatures.mdwn
+++ b/transcripts/london-bitcoin-devs/2018-06-12-adam-gibson-unfairly-linear-signatures.mdwn
@@ -177,7 +177,7 @@ A - Taproot also uses that linearity. I have deliberately avoided talking about
# Aggregation schemes - 2
-As well as Musig there is something called Bellare-Neven which existed before Musig which is another way of doing the same thing. It doesn’t aggregate the keys in the same way because it would require you to publish all the public keys. Have people used a multisig before? If you ever do it in Bitcoin you will know you will see that all the public keys are published and however many signatures you need are also published. It is a lot of data. Bellare-Neven has a cleaner security proof but since it requires your keys for verification you would have to publish them. Remember with Bitcoin verification is not one person it is everyone. You would have to publish all the keys. The latest version of the Musig paper talks about three rounds of interaction because you commit to these R values first. Everyone doesn’t just handover their R values, they hash them and they send the hash of the R values. That forces you to fix your R value before knowing anyone else’s R value. There is a [link](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-May/015951.html) here. There are different ways we can use one or both of these constructions (Musig and Bellare-Neven) which are slightly different but have this goal of allowing us to aggregate either the keys or the signatures. We could have multiple signatures aggregated together using these clever constructions. It is all based on the Schnorr signature and combining multiple Schnorr signatures together.
+As well as Musig there is something called Bellare-Neven which existed before Musig which is another way of doing the same thing. It doesn’t aggregate the keys in the same way because it would require you to publish all the public keys. Have people used a multisig before? If you ever do it in Bitcoin you will see that all the public keys are published and however many signatures you need are also published. It is a lot of data. Bellare-Neven has a cleaner security proof but since it requires your keys for verification you would have to publish them. Remember with Bitcoin verification is not one person it is everyone. You would have to publish all the keys. The latest version of the Musig paper talks about three rounds of interaction because you commit to these R values first. Everyone doesn’t just hand over their R values, they hash them and they send the hash of the R values. That forces you to fix your R value before knowing anyone else’s R value. There is a [link](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-May/015951.html) here. There are different ways we can use one or both of these constructions (Musig and Bellare-Neven) which are slightly different but have this goal of allowing us to aggregate either the keys or the signatures. We could have multiple signatures aggregated together using these clever constructions. It is all based on the Schnorr signature and combining multiple Schnorr signatures together.
Q - The per transaction aggregation would mean there is just one transaction in a block?
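The commit-then-reveal nonce exchange described above (hash your R value before anyone reveals theirs) can be sketched as follows. This uses a toy multiplicative group rather than secp256k1 and skips the partial-signature round entirely; it only shows why the hash round fixes every R before any R can depend on another.

```python
import hashlib, secrets

# Toy group: multiplicative group modulo the Mersenne prime 2^127 - 1.
# Illustrative only; real Musig works over secp256k1.
p = 2**127 - 1
g = 3

def commit(R):
    # Hash commitment to a nonce point.
    return hashlib.sha256(str(R).encode()).hexdigest()

signers = []
for _ in range(3):
    r = 1 + secrets.randbelow(p - 2)       # secret nonce
    R = pow(g, r, p)                       # public nonce point
    signers.append({"r": r, "R": R, "com": commit(R)})

# Round 1: everyone broadcasts only the hash commitment to their nonce.
commitments = [s["com"] for s in signers]

# Round 2: nonce points are revealed and checked against the commitments,
# so nobody could have chosen their R as a function of anyone else's.
for s, c in zip(signers, commitments):
    assert commit(s["R"]) == c, "commitment mismatch: abort"

# Round 3 would aggregate the nonce and exchange partial signatures.
R_agg = 1
for s in signers:
    R_agg = R_agg * s["R"] % p
print("aggregate nonce fixed:", R_agg)
```

Without round 1, the last signer to speak could pick its R adaptively after seeing the others, which is the rogue-nonce style problem the commitment round is there to prevent.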
diff --git a/transcripts/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash.mdwn b/transcripts/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash.mdwn
index f075154..a2e7cef 100644
--- a/transcripts/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash.mdwn
+++ b/transcripts/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash.mdwn
@@ -8,7 +8,7 @@ Date: February 5th 2019
Video: https://www.youtube.com/watch?v=0lGO5I74qJM
-Announcement of BetterHash on the mailing list: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html
+Announcement of BetterHash on the mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html
Draft BIP: https://github.com/TheBlueMatt/bips/blob/betterhash/bip-XXXX.mediawiki
@@ -106,7 +106,7 @@ A - XXX. No I haven’t bothered to get it a BIP number yet. There are a few mor
Q - Is it public?
-A - There is a post on the [mailing list](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html) and there’s a [link](https://github.com/TheBlueMatt/bips/blob/betterhash/bip-XXXX.mediawiki) actually in the Meetup description to the current version on GitHub.
+A - There is a post on the [mailing list](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html) and there’s a [link](https://github.com/TheBlueMatt/bips/blob/betterhash/bip-XXXX.mediawiki) actually in the Meetup description to the current version on GitHub.
Vendor messages, there’s a general trend right now. If you run a mining farm, monitoring and management is actually a pain in the ass. There is one off the shelf solution that I am aware of. They charge you a certain number of dollars per device. It is Windows only, it is not a remote management thing, you have to be there and it is really bad. Most farms end up rolling their own management and monitoring software which is terrible because most of them are people who have cheap power, they are not necessarily technical people who know what they are doing. We want some extensibility there but I am also not going to bake in all the “Please tell me your current temperature” kind of messages into the spec. Instead there is an explicit extensibility thing where you can say “Hi. This is a vendor message. Here is the type of message. Ignore it if you don’t know what that means. Do something with it if you do.” That is all there. That is actually really nice for pools hopefully and for clients. I don’t know why you’d want this but someone asked me to add this so I did. I wrote up a little blurb and how you can use the vendor messages to make the header only, final Work protocol go over UDP because someone told me that they wanted to set up their farm to take the data from broadcast UDP, you don’t have to have individual connections per device and shove it blindly into the ASIC itself without an ASIC controller whatsoever. I don’t know why you’d want to do that but if you do and you are crazy this is explicitly supported. There is a write up of how you might imagine doing such a completely insane thing.
@@ -208,7 +208,7 @@ A - No it doesn’t really change it. Payouts are orthogonal and in fact not imp
Q - Could you not do it in a trustless way? That would be cool. Somehow the shares already open a Lightning channel or somehow push things inside a channel?
-A - Intuitively my answer is no. I haven’t spent much time thinking about it. Bob McElrath has some designs around fancier [P2Pool](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014893.html) that he claims does it. I am intuitively skeptical but I haven’t thought about it. Maybe it works, talk to Bob or go read his post.
+A - Intuitively my answer is no. I haven’t spent much time thinking about it. Bob McElrath has some designs around fancier [P2Pool](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014893.html) that he claims does it. I am intuitively skeptical but I haven’t thought about it. Maybe it works, talk to Bob or go read his post.
Q - It could be some referential thing, maybe it is Lightning-esque where whatever you are doing in the Lightning channel points back to the hash of the block that you are doing it in?
@@ -232,7 +232,7 @@ Q - Bob didn’t come up with this, it was me.
A - Chris, do you want to explain P2Pool?
-Chris Belcher: [P2Pool](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014893.html) works, these shares form a share chain. Every node in P2Pool verifies this share chain and makes sure it pays out to the right people. When a real block is found the hashes get paid in proportion to how much work they have contributed to the share chain. You could make it trustless so that they can’t cheat each other. That’s a summary of how it works but it is dead, there are loads of problems with it unfortunately.
+Chris Belcher: [P2Pool](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014893.html) works, these shares form a share chain. Every node in P2Pool verifies this share chain and makes sure it pays out to the right people. When a real block is found the hashers get paid in proportion to how much work they have contributed to the share chain. You could make it trustless so that they can’t cheat each other. That’s a summary of how it works but it is dead, there are loads of problems with it unfortunately.
Its biggest problem was it was bad UX. It reported a higher stale rate because it had this low inter block time so you had naturally high stale rates. But what mattered for the payouts: for a centralized pool, if you have a stale rate you miss that many payouts; in P2Pool, if you have a stale rate you can still get payouts for stale shares. The only thing that matters is your stale rate in comparison to other clients. And so you have a lot of miners who were running P2Pool and it said “Hey you have a 2 percent stale rate” and they were like “F\*\*\* this. My centralized pool says I have 0.1 percent stale rate. I am not going to use P2Pool.” And so P2Pool died for no reason because it had bad UX.
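The payout rule Chris describes, shares paid in proportion to work contributed to the share chain when a real block is found, is just a proportional split. The miner names and share counts below are invented for illustration.

```python
# Hypothetical tallies from a share chain window; when a real block is
# found, the block reward is split in proportion to work contributed.
shares = {"alice": 120, "bob": 60, "carol": 20}
block_reward = 1_250_000_000  # sats; 12.5 BTC, the subsidy at the time of this talk

total = sum(shares.values())
payouts = {m: block_reward * n // total for m, n in shares.items()}

# Stale shares only shrink your tally relative to other miners' tallies,
# which is why the relevant number is your stale rate compared to theirs,
# not your absolute stale rate.
print(payouts)  # {'alice': 750000000, 'bob': 375000000, 'carol': 125000000}
```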
diff --git a/transcripts/london-bitcoin-devs/2020-02-04-andrew-poelstra-miniscript.mdwn b/transcripts/london-bitcoin-devs/2020-02-04-andrew-poelstra-miniscript.mdwn
index c4f61ed..0475e55 100644
--- a/transcripts/london-bitcoin-devs/2020-02-04-andrew-poelstra-miniscript.mdwn
+++ b/transcripts/london-bitcoin-devs/2020-02-04-andrew-poelstra-miniscript.mdwn
@@ -106,7 +106,7 @@ Q - It might be useful if one side of the channel is a multisig set up and doesn
A - The core developer in the back points out that it would be useful if you could have a Lightning HTLC where the individual public keys were instead complicated policies. That might be multiple public keys or a CHECKMULTISIG or something and that’s true. If you used Miniscript with Lightning HTLCs then you could have two parties open a channel where one person proposes a HTLC that isn’t actually the standard HTLC template. It is a HTLC template where the keys are replaced by more interesting policies and then your counterparty would be able to verify that. That’s true. That would be a benefit of using Miniscript with Lightning. There is a trade-off between having that ability and having to add special purpose Lightning things into Miniscript that would complicate the system. Maybe we made the wrong choice on that trade-off and maybe we want to extend Miniscript.
-Q - It depends if the scripts are going to be regularly redesigned or if there are going to be different alternative paths or if there are contracts on top of Lightning. I know Z-man has talked about [arbitrary contracts](https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-August/001383.html) where there potentially could be scripts or Miniscripts on top of Lightning.
+Q - It depends if the scripts are going to be regularly redesigned or if there are going to be different alternative paths or if there are contracts on top of Lightning. I know Z-man has talked about [arbitrary contracts](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-August/001383.html) where there potentially could be scripts or Miniscripts on top of Lightning.
Q - I think you can use scriptless scripts to express everything you want on top of Lightning.
diff --git a/transcripts/london-bitcoin-devs/2020-05-05-socratic-seminar-payjoins.mdwn b/transcripts/london-bitcoin-devs/2020-05-05-socratic-seminar-payjoins.mdwn
index 611d551..149a4a0 100644
--- a/transcripts/london-bitcoin-devs/2020-05-05-socratic-seminar-payjoins.mdwn
+++ b/transcripts/london-bitcoin-devs/2020-05-05-socratic-seminar-payjoins.mdwn
@@ -102,7 +102,7 @@ It doesn’t matter really, you are right. For example, for a scriptPubKey you c
One thing that I also just realized is that the sender can do a double spending attack because he makes the first signed transaction and then he receives the pre-signed or partially signed transaction from the receiver. So he knows the fee rate. He can simply not sign the second transaction, the payjoin transaction, and double spend the original transaction paying back to himself with a slightly higher fee rate. If he is the one to broadcast his transaction first the original transaction may not even be broadcast because full nodes will think it is a double spend if RBF is not activated. That might be an issue too.
-I can’t remember the thinking around that. I seem to remember reading about it somewhere, that is an interesting point for sure. Let’s keep going because we want to get to the meat of it, what people are doing now. I wanted to give the history. That was the BIP (BIP 79). It is a very interesting read, it is not difficult to read. I did have some issues of it with it though and I raised them in [January 2019](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-January/016625.html). Ryan Havar had posted the proposal and I wrote some thoughts on it here. Fairly minor things, I didn’t like the name. Protocol versioning, other people didn’t seem to think it mattered that much though apparently some of them are changing their mind now. Then there was the whole thing about the unnecessary input heuristic which we talked about earlier. It is interesting to discuss whether something like that should be put in a BIP, a standardization document, or whether it should be left up to an implementation. We will come back to that at the end. There are some other technical questions, you can read that if you are interested in such stuff. The conversation in different venues has been ongoing.
+I can’t remember the thinking around that. I seem to remember reading about it somewhere, that is an interesting point for sure. Let’s keep going because we want to get to the meat of it, what people are doing now. I wanted to give the history. That was the BIP (BIP 79). It is a very interesting read, it is not difficult to read. I did have some issues with it though and I raised them in [January 2019](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-January/016625.html). Ryan Havar had posted the proposal and I wrote some thoughts on it here. Fairly minor things, I didn’t like the name. Protocol versioning, other people didn’t seem to think it mattered that much though apparently some of them are changing their mind now. Then there was the whole thing about the unnecessary input heuristic which we talked about earlier. It is interesting to discuss whether something like that should be put in a BIP, a standardization document, or whether it should be left up to an implementation. We will come back to that at the end. There are some other technical questions, you can read that if you are interested in such stuff. The conversation in different venues has been ongoing.
# Payjoin in BTCPay Server
diff --git a/transcripts/london-bitcoin-devs/2020-05-19-socratic-seminar-vaults.mdwn b/transcripts/london-bitcoin-devs/2020-05-19-socratic-seminar-vaults.mdwn
index 6648162..69cd454 100644
--- a/transcripts/london-bitcoin-devs/2020-05-19-socratic-seminar-vaults.mdwn
+++ b/transcripts/london-bitcoin-devs/2020-05-19-socratic-seminar-vaults.mdwn
@@ -212,7 +212,7 @@ BB: You should pre-sign a push transaction instead of having to go to your cold
BM: Yes. This starts to get into a lot of design as to how do you organize these transactions. What Bryan is discussing is what we call a push-to-recovery-wallet transaction. The thief has gotten in. I have to do something and I am going to push this to another wallet. Now I have three sets of keys. I have the spending keys that I want to use, I have my emergency back out keys and then if I have to use those emergency back out keys I have to have somewhere to send those funds that the thief wouldn’t have access to. These vault designs end up getting rather complicated rather fast. I am now talking about three different wallets, each of which in principle should be multisig. If I do 2-of-3 I am now talking about 3 devices. In addition, when this happens, when a thief gets in and tries to steal funds I want to push this transaction. Who does that and how? This implies a set of watchtowers similar to Lightning watchtowers that look for this event and are tasked with broadcasting a transaction which will send it to my super, super backup wallet.
-BB: One idea that I will throw out is that in my [email](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html) to the bitcoin-dev mailing list last year I pointed out that what you want to do is split up your coins into a bunch of UTXOs and slowly transfer it over to your destination wallet one at a time. If you see at the destination that something gets stolen then you stop broadcasting to that wallet and you send to cold storage instead. The other important rule is that you only allow by for example enforcing a watchtower rule only allow one UTXO to be available in that hot wallet. If the thief steals one UTXO and you’ve split it into 100 by definition there is one percent that they have stolen. Then you know and you stop sending to the thief. Bob calls it a policy recommendation.
+BB: One idea that I will throw out is that in my [email](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html) to the bitcoin-dev mailing list last year I pointed out that what you want to do is split up your coins into a bunch of UTXOs and slowly transfer them over to your destination wallet one at a time. If you see at the destination that something gets stolen then you stop broadcasting to that wallet and you send to cold storage instead. The other important rule is that you only allow, for example by enforcing a watchtower rule, one UTXO to be available in that hot wallet. If the thief steals one UTXO and you’ve split it into 100, by definition they have stolen one percent. Then you know and you stop sending to the thief. Bob calls it a policy recommendation.
MF: There are different designs here. I am trying to logically get it in my mind. Are there certain frameworks that we can hang the different designs on? We will get onto your mailing list post in a minute Bryan. Kevin has got a different design, Bob seemed to be talking about an earlier design. How do I structure this inside of my head in terms of the different options? Are they all going to be personalized for specific situations?
@@ -326,7 +326,7 @@ JR: I would love that. I have some code that I can donate for that. Generating o
BB: Definitely send that to Christopher Allen. He also has an air gapped signing Bitcoin wallet based off of a stripped down iPod touch with a camera and a screen for QR codes. That is on Blockchain Commons GitHub.
-MF: Let’s go onto to your mailing list posts Bryan. There are two. [Bitcoin vaults with anti-theft recovery/clawback mechanisms](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html) and [On-chain vaults prototype](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017755.html).
+MF: Let’s go on to your mailing list posts Bryan. There are two. [Bitcoin vaults with anti-theft recovery/clawback mechanisms](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html) and [On-chain vaults prototype](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017755.html).
BB: There is good news actually. There were actually three. There were two on the first day. While the first one was somewhat interesting it is actually wrong and you should focus on the second one that occurred on that same day which was the one that quickly said “Aaron van Wirdum pointed out that this is insecure and the adversary can just wait for you to broadcast an unlocking transaction and then steal your funds.” I was like “Yes that’s true.” The solution is the sharding which I talked about earlier today. Basically the idea is that if someone is going to steal your money you want them to steal less than 100 percent of your money. You can achieve something like that with vaults.
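Bryan's sharding argument is simple arithmetic: with the funds split into n equal shards and only one shard ever exposed to the hot wallet at a time, a detected theft is bounded at 1/n of the funds. The amounts below are made up.

```python
# Toy model of the vault sharding idea: equal shards, one in flight at a time.
total_sats = 1_000_000_000   # 10 BTC, hypothetical
n_shards = 100
shard = total_sats // n_shards

# The watchtower rule allows a single UTXO in the hot wallet, so a thief who
# compromises it gets one shard before you divert the rest to cold storage.
worst_case_loss = shard
print(f"worst-case loss: {worst_case_loss} sats "
      f"= {100 * worst_case_loss // total_sats}% of funds")
```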
@@ -364,7 +364,7 @@ KL: In the original architecture from Bryan last year the funds are not in the v
# Mempool transaction pinning problems
-MF: Before we wrap I do want to touch on Antoine’s mailing list [post](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html) on mempool transaction pinning problems. Is this a weakness of your design Kevin? Does Bob have any thoughts on this?
+MF: Before we wrap I do want to touch on Antoine’s mailing list [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html) on mempool transaction pinning problems. Is this a weakness of your design Kevin? Does Bob have any thoughts on this?
BM: I think Jeremy is probably the best to respond to that as he is actively working on this.
@@ -376,7 +376,7 @@ Q - What are your thoughts on the watchtower requirement here? I see a path of e
BB: I will probably hand this over to Bob or Jeremy about watchtowers. It is a huge problem. The prototype I put together did not include a watchtower even though it is absolutely necessary. It is really interesting. One comment I made to Bob is that vaults have revealed things that we should be doing with normal Bitcoin wallets that we just don’t do. Everyone should be watching their coins onchain at all times but most people don’t do that. In vaults it becomes absolutely necessary but is that a property of vaults or is that actually a normal everyday property of how to use Bitcoin that we have mostly been ignoring? I don’t know.
-BM: There are many uses of watchtowers. As time goes on we are going to see more. Another use for watchtowers that has come up recently is the statechain discussion. Tom Trevethan [posted](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017832.html) a ECDSA based statechain that I think is pretty interesting. It also has the requirement for watchtowers. It is a method to transfer UTXOs. What you want to know is did a previous holder of the UTXO broadcast his redemption transaction and how can you deal with that? I think there is a path here to combine all of these ideas but there is so much uncertainty around it we currently wouldn’t know how to do it. There are multiple state update mechanisms in Lightning and that is still in flux. Once you start to add in vaults and then statechains with different ways to update their state there is going to be a wide variety of watchtower needs. Then you get to things like now I want to pay a watchtower. Is the watchtower a service I pay for? Can it be decentralized? Can I open a Lightning channel and pay a little bit over time to make sure this guy is still watching from his watchtower? How do I get guarantees that he is still watching my transactions for me? There is a lot of design space there which is largely unexplored. It is a terribly interesting thing to do if anybody is interested.
+BM: There are many uses of watchtowers. As time goes on we are going to see more. Another use for watchtowers that has come up recently is the statechain discussion. Tom Trevethan [posted](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017832.html) an ECDSA-based statechain that I think is pretty interesting. It also has the requirement for watchtowers. It is a method to transfer UTXOs. What you want to know is did a previous holder of the UTXO broadcast his redemption transaction and how can you deal with that? I think there is a path here to combine all of these ideas but there is so much uncertainty around it we currently wouldn’t know how to do it. There are multiple state update mechanisms in Lightning and that is still in flux. Once you start to add in vaults and then statechains with different ways to update their state there is going to be a wide variety of watchtower needs. Then you get to things like now I want to pay a watchtower. Is the watchtower a service I pay for? Can it be decentralized? Can I open a Lightning channel and pay a little bit over time to make sure this guy is still watching from his watchtower? How do I get guarantees that he is still watching my transactions for me? There is a lot of design space there which is largely unexplored. It is a terribly interesting thing to do if anybody is interested.
JR: I think part of the issue is that we are trying to solve too many problems at once. The reality is we don’t even have a good watchtower that I am operating myself and I fully trust. That should be the first step. We don’t even have the code to run your own server for these things. That has to be where you start. I agree longer term outsourcing makes sense but for sophisticated contracts we need to have at least something that does this functionality that you can run yourself. Then we can figure out these higher order constraints. I think we are putting the cart before the horse on completely functional watchtowers that are bonded. That stuff can come later.
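JR’s point about first having a watchtower you run and trust yourself can be reduced to a very small core loop: watch confirmed transactions for a known breach txid and, if one appears, broadcast the corresponding pre-signed response. The sketch below is purely illustrative; the `check_block` function and the `watched` map are invented names, not any real watchtower API.

```python
# Minimal personal-watchtower logic (hypothetical sketch, not a real API):
# given the txids confirmed in a new block and a map of
# breach/unvault txid -> pre-signed response transaction hex,
# return the transactions that should now be broadcast.

def check_block(block_txids, watched):
    """block_txids: iterable of txids confirmed in the latest block.
    watched: dict mapping a breach txid to the hex of the pre-signed
    penalty or clawback transaction that answers it."""
    seen = set(block_txids)
    return [tx_hex for txid, tx_hex in watched.items() if txid in seen]

# Example: one watched breach transaction shows up in a block.
watched = {"aa" * 32: "penalty_tx_hex"}
to_broadcast = check_block(["bb" * 32, "aa" * 32], watched)
assert to_broadcast == ["penalty_tx_hex"]
```

Everything beyond this core (payment to the watchtower, bonding, decentralization) is the "higher order" design space BM and JR are debating.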
diff --git a/transcripts/london-bitcoin-devs/2020-05-26-kevin-loaec-antoine-poinsot-revault.mdwn b/transcripts/london-bitcoin-devs/2020-05-26-kevin-loaec-antoine-poinsot-revault.mdwn
index 68f0c83..2275c85 100644
--- a/transcripts/london-bitcoin-devs/2020-05-26-kevin-loaec-antoine-poinsot-revault.mdwn
+++ b/transcripts/london-bitcoin-devs/2020-05-26-kevin-loaec-antoine-poinsot-revault.mdwn
@@ -66,7 +66,7 @@ Then you have a co-signer. A co-signer can take different forms. It could be a s
# Clawback
-Now let’s go to the fun stuff. This is an idea from 2013 which is a clawback. This is also where the name Revault comes from. It is about sending back a transaction to a vault. Doing this is quite interesting. It looks quite heavy so I am going to explain this. You start with a transaction. Let’s call it the vaulting transaction where you have an output that you will use to be spent with a pre-signed transaction that has two different exit points. Either you use your hot wallet key like any hot wallet on your phone plus a CheckSequenceVerify of a few hours. The other way of spending from it is to use the clawback transaction that is sending into a vault. The key is deleted here. Now what it means is that we are back to the key deletion slide that I had before. This vault transaction can only be spent by this transaction because we deleted the private key. If you want to use the funds either you use your hot wallet key and you have to wait for the delay of the CSV or you can also trigger a clawback which instantly without the delay can put these funds back into a vault. Giving back the same protection. Sending them to a different set of keys no matter what. You can choose this at the time of crafting your clawback. Because the clawback doesn’t have the OP_CSV here you also need to delete the key for signing this clawback. You have to delete the key for the unvaulting transaction and you have to delete the key for the clawback transaction. You don’t have to do it this way but the good way of doing it is that the clawback itself is also a vaulting transaction. You are doing a kind of loop here. You need to also have deleted the key of the unvaulting at the bottom here. That means you also need another clawback here that also needs to be deleted. To build such a transaction you can do that with SegWit because now we have more stability in the txid. But the problem is that you should already pre-sign and delete all your keys before doing the first transaction in it. 
In this type of vault, because the amount would change the txid, you have to know the exact amount before you are able to craft the transaction here, sign it and delete the key. Same here, you need to know the amount you are spending from because otherwise the txid would change. When you do a vault with a clawback you need to know how much money you are receiving in it. If somebody reuses the address that you have in your output here of the vaulting transaction and sends any amount of funds, the txid would change, so you would have no way of spending from it because the private key has been deleted. It works but you have to be extremely careful because once you have deleted your private keys it is too late. If you make any mistake it is gone. Another thing is that if you lose the pre-signed transaction your funds are gone as well. You need to do backups of all of that. That is what Bryan Bishop proposed to the [mailing list](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017755.html) with his implementation recently. The first time it was discussed was probably in 2013. Also Bob McElrath has written a paper at some point on this, maybe 2015. A lot of people have thought about such a protection mechanism. I think it is pretty cool. We are not using this in Revault but this is the most advanced thing you can do right now although no one has it in production anywhere.
+Now let’s go to the fun stuff. This is an idea from 2013 which is a clawback. This is also where the name Revault comes from. It is about sending back a transaction to a vault. Doing this is quite interesting. It looks quite heavy so I am going to explain this. You start with a transaction. Let’s call it the vaulting transaction where you have an output that you will use to be spent with a pre-signed transaction that has two different exit points. Either you use your hot wallet key like any hot wallet on your phone plus a CheckSequenceVerify of a few hours. The other way of spending from it is to use the clawback transaction that is sending into a vault. The key is deleted here. Now what it means is that we are back to the key deletion slide that I had before. This vault transaction can only be spent by this transaction because we deleted the private key. If you want to use the funds either you use your hot wallet key and you have to wait for the delay of the CSV or you can also trigger a clawback which instantly without the delay can put these funds back into a vault. Giving back the same protection. Sending them to a different set of keys no matter what. You can choose this at the time of crafting your clawback. Because the clawback doesn’t have the OP_CSV here you also need to delete the key for signing this clawback. You have to delete the key for the unvaulting transaction and you have to delete the key for the clawback transaction. You don’t have to do it this way but the good way of doing it is that the clawback itself is also a vaulting transaction. You are doing a kind of loop here. You need to also have deleted the key of the unvaulting at the bottom here. That means you also need another clawback here that also needs to be deleted. To build such a transaction you can do that with SegWit because now we have more stability in the txid. But the problem is that you should already pre-sign and delete all your keys before doing the first transaction in it. 
In this type of vault, because the amount would change the txid, you have to know the exact amount before you are able to craft the transaction here, sign it and delete the key. Same here, you need to know the amount you are spending from because otherwise the txid would change. When you do a vault with a clawback you need to know how much money you are receiving in it. If somebody reuses the address that you have in your output here of the vaulting transaction and sends any amount of funds, the txid would change, so you would have no way of spending from it because the private key has been deleted. It works but you have to be extremely careful because once you have deleted your private keys it is too late. If you make any mistake it is gone. Another thing is that if you lose the pre-signed transaction your funds are gone as well. You need to do backups of all of that. That is what Bryan Bishop proposed to the [mailing list](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017755.html) with his implementation recently. The first time it was discussed was probably in 2013. Also Bob McElrath has written a paper at some point on this, maybe 2015. A lot of people have thought about such a protection mechanism. I think it is pretty cool. We are not using this in Revault but this is the most advanced thing you can do right now although no one has it in production anywhere.
Bob McElrath: Two little points. One is that you discussed having the ability to do a timelock from the time of signing. It is cryptographically possible using timelock puzzles or verifiable delay functions but I think this is very far from anything people would want to use right now. For those interested in crypto it is something to keep an eye on. Secondly, somebody asks in the chat how do you prove the private key was deleted? Generally you can’t. An alternative is to use ECDSA key recovery but that cannot be used with Bitcoin today because the txid commits to the pubkey which you don’t know at the time you do the key recovery.
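The "exact amount" constraint above comes from the fact that a txid is a hash over the full transaction serialization, so changing an output amount by even one satoshi produces a different txid and invalidates a pre-signed, key-deleted spend of it. The snippet below is a toy illustration of that dependence; the serialization is invented for brevity, not Bitcoin’s real transaction format.

```python
import hashlib
import struct

def txid(serialized_tx: bytes) -> str:
    # A txid is the double-SHA256 of the (non-witness) serialization,
    # conventionally displayed byte-reversed.
    h = hashlib.sha256(hashlib.sha256(serialized_tx).digest()).digest()
    return h[::-1].hex()

def toy_tx(amount_sats: int) -> bytes:
    # Grossly simplified stand-in for a transaction serialization:
    # only the 8-byte little-endian output amount varies here.
    return b"toy-tx-fields" + struct.pack("<q", amount_sats) + b"script"

# A one-satoshi difference in the received amount yields a different txid,
# so a pre-signed clawback spending the "expected" txid becomes unusable.
assert txid(toy_tx(100_000)) != txid(toy_tx(100_001))
```

This is exactly why address reuse with an unexpected amount strands funds in a key-deleted vault.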
@@ -128,7 +128,7 @@ I am going to talk about some fun we had while designing it and challenges encou
The first challenge we encountered is a very common one which we have been discussing a lot for the Lightning Network: how to get pre-signed transactions confirmed in a timely manner for multiparty contracts. Our security model relies on our Revault transactions being enforceable at any time. We need to find a way for transactions to pay the right fees according to the fee market when we want to broadcast them. We could use something like the [update_fee](https://github.com/lightningnetwork/lightning-rfc/blob/master/02-peer-protocol.md#updating-fees-update_fee) message currently used in the Lightning Network protocol which asks our peer to sign a new commitment transaction with us as the current fee rate increases, or a new one when the fee rate decreases. But we can’t trust the other parties to sign the transaction. Firstly, because they might be part of the attack. With vaults we are talking big amounts, so they have a strong incentive to take the vault money, act badly and refuse to sign a Revault transaction. Secondly, even if they are honest it would just require an attacker to compromise one party to prevent a Revault transaction being executed. Finally, they may not be able to sign it in the first place: they may not have their HSM available. In addition it would require them to get out their HSM each time there is a fee rate bump. This is just not practical. We are left with either using anchor outputs, which is what has been planned for the Lightning Network, or letting each party attach inputs to the transactions, aka bring your own fees. We went for the second one.
-At first we leverage the fact that emergency transactions have just one input and one output. We can use SIGHASH_SINGLE safely: it is only when the numbers of inputs and outputs differ that we may encounter the SIGHASH_SINGLE [bug](https://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg06466.html). This allows any party to bump the fee rate by adding an input and output to the transaction. Or, if the transaction already has fees at the time of broadcast, just replace a SINGLE ANYONECANPAY signature with a SIGHASH_ALL signature before broadcasting. Unfortunately this opens up a possibility for transaction pinning as we allow any stakeholder to attach a new output paying to themselves. This would allow them to decrease the CANCEL transaction fee rate: while we want them to add a high-value input and a low-value output, they could just add a high-value output and a low-value input to take all the fees of the transaction while keeping it above the minimum relay fee. They could also pin the transaction in the mempool as the output is just paying to themselves according to BIP 125 [rules](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html). We went for SIGHASH_ALL ANYONECANPAY signatures to be exchanged between the stakeholders because they commit to all the outputs and thus do not allow any party to add an output paying only to themselves. They can no longer pin the Revault transaction itself in the mempool. Nor can they decrease the transaction fee. This adds a burden on the fee bumper because, if the input is too big and they want a change output, they need to create a fee bump transaction to attach as an input of the Revault transaction. The worst case here is that the fee bump transactions can still be pinned by a change output from an attacker. By the second rule of the RBF BIP the unconfirmed input would not be replaced in the Revault transaction.
Even in this worst case scenario the party bumping the fee rate could create a high fee rate fee bump transaction and wait for it to be confirmed within the next two blocks. We can expect the Unvault transaction to have a CSV of 15 at least. It will be confirmed and then the BIP 125 second rule will not apply anymore. I have described it in more detail in my mailing list [post](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html).
+At first we leverage the fact that emergency transactions have just one input and one output. We can use SIGHASH_SINGLE safely: it is only when the numbers of inputs and outputs differ that we may encounter the SIGHASH_SINGLE [bug](https://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg06466.html). This allows any party to bump the fee rate by adding an input and output to the transaction. Or, if the transaction already has fees at the time of broadcast, just replace a SINGLE ANYONECANPAY signature with a SIGHASH_ALL signature before broadcasting. Unfortunately this opens up a possibility for transaction pinning as we allow any stakeholder to attach a new output paying to themselves. This would allow them to decrease the CANCEL transaction fee rate: while we want them to add a high-value input and a low-value output, they could just add a high-value output and a low-value input to take all the fees of the transaction while keeping it above the minimum relay fee. They could also pin the transaction in the mempool as the output is just paying to themselves according to BIP 125 [rules](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html). We went for SIGHASH_ALL ANYONECANPAY signatures to be exchanged between the stakeholders because they commit to all the outputs and thus do not allow any party to add an output paying only to themselves. They can no longer pin the Revault transaction itself in the mempool. Nor can they decrease the transaction fee. This adds a burden on the fee bumper because, if the input is too big and they want a change output, they need to create a fee bump transaction to attach as an input of the Revault transaction. The worst case here is that the fee bump transactions can still be pinned by a change output from an attacker. By the second rule of the RBF BIP the unconfirmed input would not be replaced in the Revault transaction.
+Even in this worst case scenario the party bumping the fee rate could create a high fee rate fee bump transaction and wait for it to be confirmed within the next two blocks. We can expect the Unvault transaction to have a CSV of 15 at least. It will be confirmed and then the BIP 125 second rule will not apply anymore. I have described it in more detail in my mailing list [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html).
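The sighash flag combinations being weighed in the passage above can be stated concretely. The byte values below are the standard Bitcoin sighash constants; the comments summarize the trade-off the speaker describes.

```python
# Standard Bitcoin sighash flag byte values (consensus constants).
SIGHASH_ALL = 0x01
SIGHASH_NONE = 0x02
SIGHASH_SINGLE = 0x03
SIGHASH_ANYONECANPAY = 0x80

# SINGLE | ANYONECANPAY: the signature commits only to the signer's own
# input/output pair, so anyone can attach extra inputs AND outputs --
# which is what opens the output-pinning / fee-draining problem above.
assert SIGHASH_SINGLE | SIGHASH_ANYONECANPAY == 0x83

# ALL | ANYONECANPAY: the signature commits to ALL outputs but only the
# signer's own input, so others can add fee-bumping inputs without being
# able to add an output paying themselves -- the option Revault chose.
assert SIGHASH_ALL | SIGHASH_ANYONECANPAY == 0x81
```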
# Optimization / standardness fun
diff --git a/transcripts/london-bitcoin-devs/2020-06-16-socratic-seminar-bip-schnorr.mdwn b/transcripts/london-bitcoin-devs/2020-06-16-socratic-seminar-bip-schnorr.mdwn
index d8f746b..d09dbc8 100644
--- a/transcripts/london-bitcoin-devs/2020-06-16-socratic-seminar-bip-schnorr.mdwn
+++ b/transcripts/london-bitcoin-devs/2020-06-16-socratic-seminar-bip-schnorr.mdwn
@@ -10,7 +10,7 @@ Video: https://www.youtube.com/watch?v=uE3lLsf38O4
Pastebin of the resources discussed: https://pastebin.com/uyktht33
-August 2020 update: Since this Socratic on BIP Schnorr there has been a proposed [change](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-August/018081.html) to the BIP revisiting the squaredness tiebreaker for the R point.
+August 2020 update: Since this Socratic on BIP Schnorr there has been a proposed [change](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-August/018081.html) to the BIP revisiting the squaredness tiebreaker for the R point.
The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.
diff --git a/transcripts/london-bitcoin-devs/2020-06-23-socratic-seminar-coinswap.mdwn b/transcripts/london-bitcoin-devs/2020-06-23-socratic-seminar-coinswap.mdwn
index a87224f..5db8659 100644
--- a/transcripts/london-bitcoin-devs/2020-06-23-socratic-seminar-coinswap.mdwn
+++ b/transcripts/london-bitcoin-devs/2020-06-23-socratic-seminar-coinswap.mdwn
@@ -166,7 +166,7 @@ AG: Yeah.
# Design for a CoinSwap implementation (Chris Belcher, 2020)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017898.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017898.html
MF: This is Chris’ mailing list post from May of this year. There’s also the [gist](https://gist.github.com/chris-belcher/9144bd57a91c194e332fb5ca371d0964) with more details on the design. How does this proposal differ from your work in 2017, Adam?
@@ -400,7 +400,7 @@ BM: We already have Coinjoin and Lightning.
RS: I like the argument of Coinjoining first and then using the result in a CoinSwap. That seems pretty good in the sense that you do have some anonymity already. Whoever receives that UTXO is not going to receive a fully tainted UTXO, it is already mixed with other Coinjoins.
-nothingmuch: For completeness, submarine swaps also bridge the gap. That pertains to Adam’s point about CoinjoinXT. You can effectively have a Coinjoin transaction where one of the outputs is a submarine swap that moves some of the balance into a Lightning channel just like you can have a CoinSwap transaction either funded or settled through a Coinjoin. An interesting [post](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017970.html) from the Bitcoin dev mailing list recently is about batched CoinSwaps where you have a single counterparty as a maker servicing multiple takers’ CoinSwaps simultaneously which again blurs the boundary between Coinjoins and CoinSwaps.
+nothingmuch: For completeness, submarine swaps also bridge the gap. That pertains to Adam’s point about CoinjoinXT. You can effectively have a Coinjoin transaction where one of the outputs is a submarine swap that moves some of the balance into a Lightning channel just like you can have a CoinSwap transaction either funded or settled through a Coinjoin. An interesting [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017970.html) from the Bitcoin dev mailing list recently is about batched CoinSwaps where you have a single counterparty as a maker servicing multiple takers’ CoinSwaps simultaneously which again blurs the boundary between Coinjoins and CoinSwaps.
# Succinct Atomic Swaps
diff --git a/transcripts/london-bitcoin-devs/2020-07-21-socratic-seminar-bip-taproot.mdwn b/transcripts/london-bitcoin-devs/2020-07-21-socratic-seminar-bip-taproot.mdwn
index cfe75fa..7b7b738 100644
--- a/transcripts/london-bitcoin-devs/2020-07-21-socratic-seminar-bip-taproot.mdwn
+++ b/transcripts/london-bitcoin-devs/2020-07-21-socratic-seminar-bip-taproot.mdwn
@@ -80,7 +80,7 @@ MF: Russell do you remember looking through the BIPs from Johnson Lau and Mark F
RO: I wasn’t really involved in the construction of those proposals so I am not a good person to discuss them.
-MF: Some of the interesting stuff that I saw was this [tail call](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015028.html) stuff. An implicit tail call execution semantics in P2SH and how “a normal script is supposed to finish with just true or false on the stack. Any script that finishes execution with more than a single element on the stack is in violation of the so-called clean-stack rule and is considered non-standard.” I don’t think we have anybody on the call who has any more details on those BIPs, the Friedenbach and Johnson Lau work. There was also Jeremy Rubin’s [paper](https://rubin.io/public/pdfs/858report.pdf) on Merklized Abstract Syntax Trees which again I don’t think Jeremy is here and I don’t think people on the call remember the details.
+MF: Some of the interesting stuff that I saw was this [tail call](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015028.html) stuff. An implicit tail call execution semantics in P2SH and how “a normal script is supposed to finish with just true or false on the stack. Any script that finishes execution with more than a single element on the stack is in violation of the so-called clean-stack rule and is considered non-standard.” I don’t think we have anybody on the call who has any more details on those BIPs, the Friedenbach and Johnson Lau work. There was also Jeremy Rubin’s [paper](https://rubin.io/public/pdfs/858report.pdf) on Merklized Abstract Syntax Trees which again I don’t think Jeremy is here and I don’t think people on the call remember the details.
PW: One comment I wanted to make is I think what Russell and I talked about originally with the term MAST isn’t exactly what it is referred to now. Correct me if I’m wrong Russell but I think the name MAST better applies to the Simplicity style where you have an actual abstract syntax tree where every node is a Merklization of its subtree as opposed to BIP 114, 116, BIP-Taproot, which is just a Merkle tree of conditions and the scripts are all at the bottom. Does that distinction make sense? In BIP 340 we don’t use the term MAST except as a reference to the name because what it is doing shouldn’t be called MAST. There is no abstract syntax tree.
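Pieter’s distinction between a true MAST and BIP-Taproot’s "Merkle tree of conditions" can be made concrete with the BIP 340/341 tagged-hash convention that Taproot uses for its leaves and branches. The sketch below follows the published BIP 340 tagged-hash and BIP 341 TapLeaf/TapBranch definitions; the one-byte leaf scripts (OP_1, OP_2) are toy examples.

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP 340 tagged hash: sha256(sha256(tag) || sha256(tag) || msg)
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def tapbranch(a: bytes, b: bytes) -> bytes:
    # BIP 341 inner node: the two children are sorted lexicographically
    # before hashing, so no left/right ordering needs to be revealed.
    left, right = sorted([a, b])
    return tagged_hash("TapBranch", left + right)

# Toy leaves: leaf version 0xc0, compact-size script length 1, then the
# script itself (0x51 = OP_1, 0x52 = OP_2).
leaf1 = tagged_hash("TapLeaf", bytes([0xc0, 0x01, 0x51]))
leaf2 = tagged_hash("TapLeaf", bytes([0xc0, 0x01, 0x52]))

root = tapbranch(leaf1, leaf2)
assert root == tapbranch(leaf2, leaf1)  # order-independent by construction
assert len(root) == 32
```

Note the scripts sit only at the leaves of this tree, which is exactly why Pieter says BIP-Taproot is a Merkle tree of spending conditions rather than a Merklized abstract syntax tree in the Simplicity sense.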
@@ -206,7 +206,7 @@ PW: The way to accomplish that is by saying “We are going to take the key path
# Greg Maxwell Bitcoin dev mailing list post on Taproot (2018)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html
MF: One of the key points here once we have discussed that conceptual type stuff is that pre-Taproot we thought it was going to be an inefficiency to try to have this construction. The key breakthrough with Taproot is that it avoids any larger scripts going onchain and really doesn’t have any downsides. Greg says “You make use cases as indistinguishable as possible from the most common and boring payments.” No privacy downsides, in fact privacy is better and also efficiency. We are getting the best of both worlds on a number of different axes.
@@ -256,7 +256,7 @@ MF: With a pay-to-script-hash you still need to have that script to be able to s
# AJ Towns on formalizing the Taproot proposal (December 2018)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016556.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016556.html
MF: The next item on the reading list was AJ Towns’ first attempt to formalize this in a mailing list post in 2018. How much work did it take to go from that idea to formalizing it into a BIP?
@@ -268,7 +268,7 @@ PW: Clearly we cannot just put every possible idea and every possible improvemen
# John Newbery on reducing size of Taproot output by 1 vbyte (May 2019)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016943.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016943.html
MF: One of the first major changes was this post from John (Newbery) on reducing the size of the pubkey. The consideration always is we don’t want anyone to lose out. Whatever use case they have, whether they have a small script or a really large script, we don’t want them to be any worse off than before because otherwise you then have this problem of some people losing out. It seems like a fiendish problem to make sure that at least everyone’s use case is not hurt even if it is a very small byte difference. I suppose that is what is hanging over this discussion and John’s post here.
@@ -300,7 +300,7 @@ MF: A few people were dreading the conversation. But we won’t discuss activati
# Pieter Wuille mailing list post on Taproot updates (no P2SH wrapped Taproot, tagged hashes, increased depth of Merkle tree, October 2019)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-October/017378.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-October/017378.html
MF: The next item on the reading list, you gave an update Pieter in October 2019 on the mailing list. The key items here were no P2SH wrapped Taproot. Perhaps you could talk about why people wanted P2SH wrapped Taproot. I suspect it is exactly the same reason why people wanted P2SH wrapped SegWit. There are also tagged hashes and an increased depth of the Merkle tree.
@@ -348,11 +348,11 @@ https://bitcoinops.org/en/newsletters/2020/02/19/#discussion-about-taproot-versu
MF: There hasn’t been much criticism and there doesn’t appear to have been much opposition to Taproot itself. We won’t talk about [quantum resistance](https://bitcoin.stackexchange.com/questions/91049/why-does-hashing-public-keys-not-actually-provide-any-quantum-resistance) because it has already been discussed a thousand times. There was this post on the mailing list with potential criticisms of Taproot in February that was covered by the Optech guys. Was there any valid criticism in this? Any highlights from this post? It didn’t seem as if the criticism was grounded in too much concern or reality.
-PW: I am not going to comment. There was plenty of good discussion on the [mailing list](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-February/017618.html) around it.
+PW: I am not going to comment. There was plenty of good discussion on the [mailing list](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-February/017618.html) around it.
# Andrew Kozlik on committing to all scriptPubKeys in the signature message (April 2020)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017801.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017801.html
MF: This is what Russell was alluding to. This is Andrew Kozlik’s post on committing to all scriptPubKeys in the signature message. Why is it important to commit to scriptPubKeys in the signature message?
@@ -376,23 +376,23 @@ PW: It was a known problem and we had to fix it. In any successor proposal whate
Greg Maxwell on Graftroot (Feb 2018)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html
AJ Towns on G’root (July 2018)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016249.html

+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016249.html

Pieter Wuille on G’root (October 2018)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-October/016461.html

+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-October/016461.html

AJ Towns on cross input signature aggregation (March 2018)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-March/015838.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-March/015838.html
AJ Towns on SIGHASH_ANYPREVOUT (May 2019)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016929.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016929.html
MF: The next links are things that didn’t make it in. There is Graftroot, G’root, cross input signature aggregation, ANYPREVOUT/NOINPUT. As the authors of Taproot what thought do you have to put in in terms of making sure that we are in the best position to add these later?
diff --git a/transcripts/london-bitcoin-devs/2020-08-19-socratic-seminar-signet.mdwn b/transcripts/london-bitcoin-devs/2020-08-19-socratic-seminar-signet.mdwn
index f06799e..a8768a4 100644
--- a/transcripts/london-bitcoin-devs/2020-08-19-socratic-seminar-signet.mdwn
+++ b/transcripts/london-bitcoin-devs/2020-08-19-socratic-seminar-signet.mdwn
@@ -138,7 +138,7 @@ MF: I don’t think many people are monitoring it for obvious reasons. Because i
# testnet4
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017031.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017031.html
MF: Peter Todd back in 2018 when they were discussing a reset of testnet put forward the view that testnet should be a large blockchain. The motivation that AJ mentioned earlier for wanting to reset testnet is just that it gets too big, it takes too long to sync, IBD takes too long. Peter is saying in this mailing list post that you actually want it to be a similar size to mainnet if not bigger. Let’s say the block size, block weight discussion starts up again in 5 years; you want to know what is possible before there start to be performance issues across the network. Perhaps you do want testnet to be really big for those use cases, to experiment with a massive chain prior to thinking about whether block sizes should be smaller or bigger or whatever.
@@ -224,7 +224,7 @@ KA: Yes the signature commits to the block itself. So you have to have the block
# Signet on bitcoin-dev mailing list (March 2019)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html
MF: This is the announcement on the dev mailing list. Let’s have this discussion then on having one Signet which everyone uses or having multiple Signets. The problem that I foresee if there is just one is that then you get into the same issue that you do on mainnet where it is like “I want my change on Signet.” Jeremy Rubin comes and says “I want CHECKTEMPLATEVERIFY” and you go “Right fine.” Then someone else comes along with a change that no one is particularly interested in or everyone thinks is a bad idea. “I want this on Signet.” Then you are in the same situation that we are in on mainnet where it is very hard to get new features onto the Signet. With you and AJ as the signers it needs you or AJ to sign off that a change is worthy of experimentation on Signet. Would you agree that is one of the downsides to having one Signet that everybody uses?
@@ -286,7 +286,7 @@ MF: At your [workshop](https://diyhpl.us/wiki/transcripts/advancing-bitcoin/2020
KA: There are several issues that you have to tackle when you are trying to test out new features with the Bitcoin setup. Once you turn it on if you find a bug do we reset the chain now? I think AJ has been working on various [ideas](https://bitcoin.stackexchange.com/questions/98642/can-we-experiment-on-signet-with-multiple-proposed-soft-forks-whilst-maintaining) to make it so that you can turn it on but then you say “Oops we didn’t actually turn it on. That was fake. Ignore that stuff.” Then you turn it on again later. So you can test out different stages of a feature even early stages. You can turn it on, see what happens and then selectively turn them off after you realize things were broken.
-AT: The problem is if you turned Taproot on two weeks ago all the signatures for that would’ve been with the [square version of R](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-August/018081.html). If we change it to the even version of R they’d be invalid if you tried running the future Taproot rules against them. I think you can make that work as long as you keep updating the activation date of Taproot so that all those transactions, they weren’t actually Taproot transactions at all, they were pretend OP_TRUE sort of things that you don’t need to run any validation code over anymore. You only have to do the Taproot rules from this block instead of that previous block. Going back to the current version of Signet, that does allow you to accept non-standard transactions.
+AT: The problem is if you turned Taproot on two weeks ago all the signatures for that would’ve been with the [square version of R](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-August/018081.html). If we change it to the even version of R they’d be invalid if you tried running the future Taproot rules against them. I think you can make that work as long as you keep updating the activation date of Taproot so that all those transactions, they weren’t actually Taproot transactions at all, they were pretend OP_TRUE sort of things that you don’t need to run any validation code over anymore. You only have to do the Taproot rules from this block instead of that previous block. Going back to the current version of Signet, that does allow you to accept non-standard transactions.
KA: I don’t think so.
@@ -328,7 +328,7 @@ MF: I hadn’t thought much about Lightning in terms of what you’d need on Sig
KA: I have pull requests to make it possible to do Lightning stuff.
-MF: I am just going to highlight some good resources on Signet for the video’s sake. There’s your mailing list [post](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html) initially announcing Signet. Then there’s the Signet [wiki](https://en.bitcoin.it/wiki/Signet) which talks about some of the reasons why you’d want to run Signet and some of the differences between mainnet and Signet. I think we’ve gone through a few of these.
+MF: I am just going to highlight some good resources on Signet for the video’s sake. There’s your mailing list [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html) initially announcing Signet. Then there’s the Signet [wiki](https://en.bitcoin.it/wiki/Signet) which talks about some of the reasons why you’d want to run Signet and some of the differences between mainnet and Signet. I think we’ve gone through a few of these.
“All signets use the same hardcoded genesis block (block 0) but independent signets can be differentiated by their network magic (message start bytes). In the updated protocol, the message start bytes are the first four bytes of a hash digest of the network’s challenge script (the script used to determine whether a block has a valid signature). The change was motivated by a desire to simplify the development of applications that want to use multiple signets but which need to call libraries that hardcode the genesis block for the networks they support.” Bitcoin Optech
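The derivation in the Optech quote can be sketched in a few lines. This is a simplified illustration assuming the digest is a double-SHA256 over the raw challenge script bytes; BIP325 specifies the exact serialization (it may involve wrapping the challenge in a data push), so treat this as an approximation rather than a consensus-exact implementation. The script hex values below are toy placeholders:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double-SHA256, Bitcoin's usual hash construction."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def signet_magic(challenge_script: bytes) -> bytes:
    """First four bytes of the hash of a signet's challenge script.

    Two signets with different challenge scripts therefore get
    different message start bytes, even though they share block 0.
    NOTE: BIP325 hashes a specific serialization of the challenge;
    raw script bytes are hashed here only for illustration.
    """
    return sha256d(challenge_script)[:4]

# Two hypothetical challenge scripts yield distinct network magics.
magic_a = signet_magic(bytes.fromhex("512103ff51ae"))  # toy script bytes
magic_b = signet_magic(bytes.fromhex("522103ff52ae"))
assert len(magic_a) == 4 and magic_a != magic_b
```

This is why independent signets can reuse the hardcoded genesis block: peers on the wrong network are rejected at the message-framing layer before any block validation happens.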
diff --git a/transcripts/london-bitcoin-devs/2021-07-20-socratic-seminar-taproot-rollout.mdwn b/transcripts/london-bitcoin-devs/2021-07-20-socratic-seminar-taproot-rollout.mdwn
index 6d3df60..994da73 100644
--- a/transcripts/london-bitcoin-devs/2021-07-20-socratic-seminar-taproot-rollout.mdwn
+++ b/transcripts/london-bitcoin-devs/2021-07-20-socratic-seminar-taproot-rollout.mdwn
@@ -182,7 +182,7 @@ AJ Towns tweet on those running a premature Taproot ruleset patch being forked o
MF: I did listen to your podcast Andrew with Stephan Livera that came out in the last couple of days. You said a lot of people have been trying with their wallets to send Bitcoin to mainnet Taproot addresses. Some of them failed, some of them succeeded and some of them just locked up their funds so they couldn’t even spend them with an anyone-can-spend. Do you know what happened there?
-AC: There is a [mailing list post](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-October/018255.html) from November where this testing was happening. I think Mike Schmidt had a big [summary](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-November/018268.html) of every wallet that he tested and the result. There were some wallets that sent successfully. These used a bech32 address, not a bech32m. This was before bech32m was finalized. Some of them sent successfully and made SegWit v1 outputs, some of them failed to parse the address, some of them failed to make the transaction. They accepted the address but something else down the line failed and so the transaction wasn’t made. Some of them made a SegWit v0 address which means that the coins are now burnt. As we saw on the website there are some Taproot outputs out there and there are some that should have been Taproot outputs but aren’t.
+AC: There is a [mailing list post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-October/018255.html) from November where this testing was happening. I think Mike Schmidt had a big [summary](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-November/018268.html) of every wallet that he tested and the result. There were some wallets that sent successfully. These used a bech32 address, not a bech32m. This was before bech32m was finalized. Some of them sent successfully and made SegWit v1 outputs, some of them failed to parse the address, some of them failed to make the transaction. They accepted the address but something else down the line failed and so the transaction wasn’t made. Some of them made a SegWit v0 address which means that the coins are now burnt. As we saw on the website there are some Taproot outputs out there and there are some that should have been Taproot outputs but aren’t.
MF: They sent them to bech32 Taproot addresses rather than bech32m Taproot addresses and the bech32 Taproot addresses can’t be spent. Is that why they are locked up forever?
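The incompatibility described here comes down to a single checksum constant: bech32 finalizes its checksum with 1, bech32m (BIP350) with 0x2BC830A3, so a witness v1 payload encoded with the legacy constant fails bech32m validation. A minimal sketch using the standard polymod from the BIPs; the payload below is a toy value, not a real address:

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7"
BECH32M_CONST = 0x2BC830A3  # bech32 uses 1 instead

def polymod(values):
    """BCH checksum polynomial from BIP173/BIP350."""
    GEN = [0x3B6A57B2, 0x26508E6D, 0x1EA119FA, 0x3D4233DD, 0x2A1462B3]
    chk = 1
    for v in values:
        top = chk >> 25
        chk = (chk & 0x1FFFFFF) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if (top >> i) & 1 else 0
    return chk

def hrp_expand(hrp):
    """Spread the human-readable part into 5-bit groups."""
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def checksum(hrp, data, const):
    """Create the six 5-bit checksum values for the given constant."""
    pm = polymod(hrp_expand(hrp) + data + [0] * 6) ^ const
    return [(pm >> 5 * (5 - i)) & 31 for i in range(6)]

def verify(hrp, data_with_checksum, const):
    return polymod(hrp_expand(hrp) + data_with_checksum) == const

# A witness-v1 payload mistakenly encoded with the legacy bech32 constant:
payload = [1] + [0] * 32  # version 1 plus a toy 32-byte witness program
legacy = payload + checksum("bc", payload, 1)
assert verify("bc", legacy, 1)                   # passes as bech32...
assert not verify("bc", legacy, BECH32M_CONST)   # ...but fails as bech32m
```

A compliant wallet rejects the bech32m check failure; wallets written before BIP350 was finalized accepted the bech32 form and created v1 outputs whose keys were never intended as Taproot output keys.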
@@ -256,7 +256,7 @@ CR: Sparrow does support and use descriptors but it doesn’t directly influence
# The descriptor BIPs
-MF: Descriptors overdue for a BIP, Luke says. There is a [bunch](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019151.html) of descriptor BIPs.
+MF: Descriptors overdue for a BIP, Luke says. There is a [bunch](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019151.html) of descriptor BIPs.
AC: There is a [PR](https://github.com/bitcoin/bips/pull/1143) open. Feel free to assign 7 numbers for me, thanks.
@@ -308,7 +308,7 @@ Then I have some links on Lightning.
https://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-antoine-riard-taproot-lightning/
-https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-December/002375.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-December/002375.html
https://github.com/ElementsProject/scriptless-scripts/blob/master/md/multi-hop-locks.md
diff --git a/transcripts/london-bitcoin-devs/2021-08-10-socratic-seminar-dlcs.mdwn b/transcripts/london-bitcoin-devs/2021-08-10-socratic-seminar-dlcs.mdwn
index 5b88ef9..9f46bb1 100644
--- a/transcripts/london-bitcoin-devs/2021-08-10-socratic-seminar-dlcs.mdwn
+++ b/transcripts/london-bitcoin-devs/2021-08-10-socratic-seminar-dlcs.mdwn
@@ -176,7 +176,7 @@ NK: I still think it is totally possible to take that approach but I would again
CS: I was talking with some Lightning folks this weekend and talking about what’s on the roadmap for various Lightning companies. My argument to them is we need more adoption on the Lightning Network, I think everybody agrees with this, and number is going up currently and that is great. One of the reasons that number go up could be attributed to lnd adding new features like [keysend](https://bitcoinops.org/en/topics/spontaneous-payments/). I have issues with keysend but I don’t think it can be argued that there is a lot more applications that are enabled by this. I think we need to pitch them the same way as PTLCs. We have written a whole [archive](https://suredbits.com/category/payment-points/) about things you can do with PTLCs, it is going to enhance the expressiveness of the Lightning Network which is going to make number go up even more on the Lightning Network. We need to pitch them on this new feature set, your last features didn’t really go through the consensus process for Lightning to integrate. You’ve got some interesting apps out there for that. The same thing can happen in PTLC world. Unfortunately going back to what we already hashed over was it is much more invasive to do PTLC stuff around the core HTLC state machine logic that already exists.
-NK: I posted an [update](https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002647.html) to the Lightning dev mailing list around a year ago, after the hack day had happened and we had ECDSA adaptor signatures. I posted an update on PTLCs, what work had been done, onchain proofs of concept that we had executed and the set of things that needed to be changed in the Lightning Network. And I believe roasbeef [responded](https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002659.html) to this mailing list “I wouldn’t call this a minimal change, this is the largest change that would have ever happened to the Lightning Network. You are changing the state machine.” And I was like “Ok I guess you are right.” For context roasbeef once said this would be the biggest change to Lightning that has ever happened. Not DLCs to be clear, something much smaller than DLCs that are a required first step, changing the state machine at all which has not happened yet.
+NK: I posted an [update](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002647.html) to the Lightning dev mailing list around a year ago, after the hack day had happened and we had ECDSA adaptor signatures. I posted an update on PTLCs, what work had been done, onchain proofs of concept that we had executed and the set of things that needed to be changed in the Lightning Network. And I believe roasbeef [responded](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002659.html) to this mailing list “I wouldn’t call this a minimal change, this is the largest change that would have ever happened to the Lightning Network. You are changing the state machine.” And I was like “Ok I guess you are right.” For context roasbeef once said this would be the biggest change to Lightning that has ever happened. Not DLCs to be clear, something much smaller than DLCs that are a required first step, changing the state machine at all which has not happened yet.
MF: I get the sense roasbeef is more on the disruptive side rather than the conservative side.
@@ -190,7 +190,7 @@ MF: I don’t know how much of it is literally on Lightning. Is there a big upti
FD: It is custodial in the beginning, nobody even knows yet if the wallet is going to be onchain or a Lightning wallet, the wallet coming up on September 7th. We are hearing that it will have an integration with Lightning from the get go. Whether it is a custodial or a non-custodial wallet I think it doesn’t really matter because ultimately if the wallet can allow for people to withdraw onchain or through Lightning then anyone can use any wallet out here. That is what the government is trying to push forward, for anybody to use any wallet that is available. There is Bitcoin Beach, Wallet of Satoshi, some people are using Muun and Strike of course. I think over time as people get more comfortable and they understand how to use wallets they will use whatever wallet works for them. I think there will be an increased level of usage. I don’t want to be a prophet and say something that might not happen but what I can see here there is going to be an explosion of usage.
-MF: This might seem like a diversion from DLCs but it is not actually a diversion. AJ Towns’ idea that he posted on the [mailing list](https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002937.html) and he also spoke at the [Sydney Socratic](https://btctranscripts.com/sydney-bitcoin-meetup/2021-02-23-socratic-seminar/#lightning-dice-aj-towns) on this, AJ is trying to get more adoption on Lightning, more people using Lightning. AJ’s argument was that in the early days of Bitcoin Satoshi Dice was the biggest use case of Bitcoin. There were tonnes and tonnes of transactions. Now we look back and go “How terrible it was. My node has to verify all these terrible transactions”. But you could argue back then any adoption… Bitcoin was nothing, no one was using it. It had a purpose that people were actually using Bitcoin for something. AJ’s argument was, you can read the mailing list post, that you could do similar Satoshi Dice gambling on dice, gambling on coin tosses on Lightning to get the adoption up on Lightning if your objective is to get adoption up on Lightning, more people using Lightning. It sounds like from your perspective Fode that we don’t need to be as concerned with adoption on Lightning, people are going to be using Lightning for payments.
+MF: This might seem like a diversion from DLCs but it is not actually a diversion. AJ Towns’ idea that he posted on the [mailing list](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002937.html) and he also spoke at the [Sydney Socratic](https://btctranscripts.com/sydney-bitcoin-meetup/2021-02-23-socratic-seminar/#lightning-dice-aj-towns) on this, AJ is trying to get more adoption on Lightning, more people using Lightning. AJ’s argument was that in the early days of Bitcoin Satoshi Dice was the biggest use case of Bitcoin. There were tonnes and tonnes of transactions. Now we look back and go “How terrible it was. My node has to verify all these terrible transactions”. But you could argue back then any adoption… Bitcoin was nothing, no one was using it. It had a purpose that people were actually using Bitcoin for something. AJ’s argument was, you can read the mailing list post, that you could do similar Satoshi Dice gambling on dice, gambling on coin tosses on Lightning to get the adoption up on Lightning if your objective is to get adoption up on Lightning, more people using Lightning. It sounds like from your perspective Fode that we don’t need to be as concerned with adoption on Lightning, people are going to be using Lightning for payments.
FD: Not at all.
diff --git a/transcripts/mimblewimble-podcast.mdwn b/transcripts/mimblewimble-podcast.mdwn
index e7eaf86..1299ab1 100644
--- a/transcripts/mimblewimble-podcast.mdwn
+++ b/transcripts/mimblewimble-podcast.mdwn
@@ -341,7 +341,7 @@ PW: It took him 7 years to write it, right. It's not something you would expect
host: So if it took 7 years to write, it could take 7+ years to read. The book said that, right. Lots of mathematicians had difficulty grasping it because it was so arcane and deep and so involved, so that's interesting, it's not surprising that you go down one rabbit hole and not everyone can necessarily follow you without that same amount of time to go down that rabbit hole. Interesting response. Yeah it's something, I got to AP Calculus and that's about it. The higher-order math and cryptography has always fascinated me. So I was really curious to hear what you thought about that, so it's an interesting response, thank you. Great discussion on mimblewimble. We have a minute left. It's been a pleasure to have you guys on. Thank you.
-AP: <http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimble.txt> and <https://www.reddit.com/r/Bitcoin/comments/4vub3y/mimblewimble_noninteractive_coinjoin_and_better/> and <https://www.reddit.com/r/Bitcoin/comments/4woyc0/mimblewimble_interview_with_andrew_poelstra_and/> and <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-August/012927.html>
+AP: <http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimble.txt> and <https://www.reddit.com/r/Bitcoin/comments/4vub3y/mimblewimble_noninteractive_coinjoin_and_better/> and <https://www.reddit.com/r/Bitcoin/comments/4woyc0/mimblewimble_interview_with_andrew_poelstra_and/> and <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-August/012927.html>
and <https://www.reddit.com/r/Bitcoin/comments/4xge51/mimblewimble_how_a_strippeddown_version_of/>
diff --git a/transcripts/mit-bitcoin-expo-2017/scaling-and-utxos.mdwn b/transcripts/mit-bitcoin-expo-2017/scaling-and-utxos.mdwn
index 2ffe8ac..8f956d1 100644
--- a/transcripts/mit-bitcoin-expo-2017/scaling-and-utxos.mdwn
+++ b/transcripts/mit-bitcoin-expo-2017/scaling-and-utxos.mdwn
@@ -42,7 +42,7 @@ I setup some nodes on scaleway and it took about 5 days for them to get started.
<http://diyhpl.us/wiki/transcripts/mit-bitcoin-expo-2016/fraud-proofs-petertodd/>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html>
<https://s3.amazonaws.com/peter.todd/bitcoin-wizards-13-10-17.log>
diff --git a/transcripts/mit-bitcoin-expo-2018/improving-bitcoin-smart-contract-efficiency.mdwn b/transcripts/mit-bitcoin-expo-2018/improving-bitcoin-smart-contract-efficiency.mdwn
index 35c33a9..cc1ad1b 100644
--- a/transcripts/mit-bitcoin-expo-2018/improving-bitcoin-smart-contract-efficiency.mdwn
+++ b/transcripts/mit-bitcoin-expo-2018/improving-bitcoin-smart-contract-efficiency.mdwn
@@ -36,7 +36,7 @@ So you have these two different output types in bitcoin. And these two different
<http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/>
-<a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html">Taproot</a> is an idea from Greg Maxwell, who is somewhat of a bitcoin guru. He knows a lot about bitcoin. He wrote a post introducing taproot a couple of weeks ago. It merges p2sh and p2pkh. It does this in a very annoyingly-simple way where we started wondering why nobody thought of this before.
+<a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html">Taproot</a> is an idea from Greg Maxwell, who is somewhat of a bitcoin guru. He knows a lot about bitcoin. He wrote a post introducing taproot a couple of weeks ago. It merges p2sh and p2pkh. It does this in a very annoyingly-simple way where we started wondering why nobody thought of this before.
You start by making a public key, the same way you do to create your addresses. Say you have your script S. What you do is you compute the key C and you send the taproot. I skipped the elliptic curve operations, but if you're familiar with any of this stuff then this is how you turn a private key into a public key, you multiply by G. You can add public keys together, it's quick and really simple to do. And if you add the public keys, then you can sign with the sum of the private keys, which is a really cool detail. What you do is say you have a regular key pair, and you also have this script and you perform this taproot equation. You hash the script and the public key together, you use that as a private key, turn that into a public key, and add that to your existing public key. This allows you to have both a script and a key squished into one thing. C is essentially a public key but it also has a script in it. When you want to spend from it, you have the option to use the key part or the script part. If you want to treat it as P2PKH, where you're a person signing off with this, then you know your private key will just be your regular private key plus this hash that you computed which you know. So you sign as if there were no scripts and nobody will be able to detect that there was a script; it looks like a regular signature. But if you want to reveal the script, you can do so, and then people can verify that it is still valid, and then they can execute the script. It's a way of merging p2pkh and p2sh into one.
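The algebra in that paragraph can be checked numerically. The sketch below uses textbook secp256k1 point arithmetic and a plain SHA256 in place of BIP341's tagged hashes, so it illustrates the equation (output key C = P + H(P, S)·G, key-path secret = d + H(P, S)) rather than the exact consensus encoding; the private key and script are toy values:

```python
import hashlib

# secp256k1 parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    """Elliptic-curve point addition (None is the point at infinity)."""
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if a == b:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Double-and-add scalar multiplication."""
    r = None
    while k:
        if k & 1:
            r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

def h(*parts):
    """Stand-in hash-to-scalar (BIP341 uses tagged hashes instead)."""
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % N

d = 0xC0FFEE                    # toy internal private key
pub = mul(d, G)                 # internal public key
script = b"OP_TRUE"             # the committed script S
t = h(pub[0].to_bytes(32, "big"), script)   # tweak = H(pubkey, script)
C = add(pub, mul(t, G))         # output key: commits to key AND script

# Key-path spend: signing with d + t looks like an ordinary key spend,
# revealing nothing about the script.
assert mul((d + t) % N, G) == C
```

Because `C` is indistinguishable from any other public key, a key-path spend reveals no script at all, while a script-path spend reveals `pub` and `script` so a verifier can recompute `C` and check the commitment.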
@@ -46,7 +46,7 @@ It's nice because in many cases in smart contracts, there's a bunch of people ge
# Graftroot
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html>
<http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/>
diff --git a/transcripts/mit-bitcoin-expo-2020/2020-03-07-andrew-poelstra-taproot.mdwn b/transcripts/mit-bitcoin-expo-2020/2020-03-07-andrew-poelstra-taproot.mdwn
index c52a2ae..1c105df 100644
--- a/transcripts/mit-bitcoin-expo-2020/2020-03-07-andrew-poelstra-taproot.mdwn
+++ b/transcripts/mit-bitcoin-expo-2020/2020-03-07-andrew-poelstra-taproot.mdwn
@@ -70,7 +70,7 @@ Bitcoin, I checked this morning, has a market capitalization of about 170 billio
# Tradeoffs
-A couple of quick words about cryptography. In the first half of the talk I was talking about all these cool things we can do with just keys, just signatures. Isn’t this great? No additional resources on the chain. That is not quite true. You would think adding these new features would involve some increase of resources at least for some users. But in fact we have been able to keep this to a couple of bytes here and there. In certain really specific scenarios somebody has to reveal more hashes than they otherwise would. We have been spoilt with the magic of cryptography over the last several years. We have been able by grinding on research to find all these cool new scalability and privacy improvements that have no trade-offs other than deployment complexity and so forth. Cryptography can’t do everything, we think. There aren’t really any hard limits on what cryptography can do that necessarily prevent us from just doing everything in an arbitrarily small amount of space. But it is an ongoing research project. Every new thing is something that takes many years of research to come up with. When we are making deployments, I said if we make anyone’s lives worse then it is not going to go through. This includes wasting a couple of bytes. For example on Taproot one technical [thing](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017247.html) I am going to go into is we had public keys that took 33 bytes to represent. 32 bytes plus one extra bit which represents a choice of two different points that have the same x coordinate. We found a way to drop that extra bit, we had to add some complexity. There was an argument about how we wanted to drop that extra bit and what the meaning of the bit would have been. Would it be the evenness or oddness of the number we elided, would it be whether it was a quadratic residue? Would it be what part of the range of possible values it lives in, stuff like this.
That is the kind of stuff that we spent quite a while grinding on even though it is not very exciting. It is certainly not some cool new flash loan technology or whatever that various other projects are deploying. This is stuff that is important for getting something through on a worldwide system where everybody is a stakeholder and no one wants to spend money on extra bytes.
+A couple of quick words about cryptography. In the first half of the talk I was talking about all these cool things we can do with just keys, just signatures. Isn’t this great? No additional resources on the chain. That is not quite true. You would think adding these new features would involve some increase of resources at least for some users. But in fact we have been able to keep this to a couple of bytes here and there. In certain really specific scenarios somebody has to reveal more hashes than they otherwise would. We have been spoilt with the magic of cryptography over the last several years. We have been able by grinding on research to find all these cool new scalability and privacy improvements that have no trade-offs other than deployment complexity and so forth. Cryptography can’t do everything, we think. There aren’t really any hard limits on what cryptography can do that necessarily prevent us from just doing everything in an arbitrarily small amount of space. But it is an ongoing research project. Every new thing is something that takes many years of research to come up with. When we are making deployments, I said if we make anyone’s lives worse then it is not going to go through. This includes wasting a couple of bytes. For example on Taproot one technical [thing](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017247.html) I am going to go into is we had public keys that took 33 bytes to represent. 32 bytes plus one extra bit which represents a choice of two different points that have the same x coordinate. We found a way to drop that extra bit, we had to add some complexity. There was an argument about how we wanted to drop that extra bit and what the meaning of the bit would have been. Would it be the evenness or oddness of the number we elided, would it be whether it was a quadratic residue? Would it be what part of the range of possible values it lives in, stuff like this.
That is the kind of stuff that we spent quite a while grinding on even though it is not very exciting. It is certainly not some cool new flash loan technology or whatever that various other projects are deploying. This is stuff that is important for getting something through on a worldwide system where everybody is a stakeholder and no one wants to spend money on extra bytes.
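The byte being dropped here is the parity prefix of a compressed key: a compressed key is 33 bytes (one parity byte plus the 32-byte x coordinate), while an x-only key keeps just x and fixes a convention for which of the two candidate y values is meant (BIP340/341 ultimately chose even y). A sketch, with the recovery step spelled out; the generator point G is used only as a convenient known point:

```python
P = 2**256 - 2**32 - 977  # secp256k1 field prime (note P % 4 == 3)
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def compressed(x, y):
    """33 bytes: 0x02/0x03 parity prefix + x coordinate."""
    return bytes([2 + (y & 1)]) + x.to_bytes(32, "big")

def x_only(x):
    """32 bytes: the prefix is dropped; even y is implied by convention."""
    return x.to_bytes(32, "big")

def lift_x(xb):
    """Recover the even-y point from an x-only key."""
    x = int.from_bytes(xb, "big")
    # Square root of x^3 + 7 exists for valid keys; since P % 4 == 3,
    # pow(a, (P + 1) // 4, P) computes it directly.
    y = pow((x ** 3 + 7) % P, (P + 1) // 4, P)
    if y & 1:
        y = P - y  # pick the even candidate
    return (x, y)

assert len(compressed(Gx, Gy)) == 33
assert len(x_only(Gx)) == 32
assert lift_x(x_only(Gx)) == (Gx, Gy)  # G's y is even, so it round-trips
```

The debate the talk mentions was exactly about which convention `lift_x` should use (even y, quadratic residue, or a range check); all are one-bit conventions, so the on-chain saving is the same either way.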
# Political Things
diff --git a/transcripts/ruben-somsen/2020-05-11-ruben-somsen-succinct-atomic-swap.mdwn b/transcripts/ruben-somsen/2020-05-11-ruben-somsen-succinct-atomic-swap.mdwn
index 3db434c..966a9ad 100644
--- a/transcripts/ruben-somsen/2020-05-11-ruben-somsen-succinct-atomic-swap.mdwn
+++ b/transcripts/ruben-somsen/2020-05-11-ruben-somsen-succinct-atomic-swap.mdwn
@@ -48,7 +48,7 @@ The negative is the online requirement for one of the two parties, in this case
# Positive
-It works today, that is a good thing. You can do this with MuSig and Schnorr. That will be the most efficient way of doing it without any weird math that you have to do. Recently Lloyd Fournier, he came up with a way to do a single signer ECDSA adaptor signature. That allows you to do this today. If you utilize that kind of technique then you can do adaptor signatures with single signatures on the Bitcoin blockchain today. That’s really cool. Lloyd also helped me out by reviewing this Succinct Atomic Swap that I created so I want to thank him for that. Another advantage is that it is two transactions not four which is great. It is scriptless so you don’t really have anything huge going to the blockchain. It really is in the case of MuSig one signature, in the case of ECDSA two signatures going to the blockchain per transaction. It is asymmetric meaning that one of the chains only has one transaction going onchain at any time even if the protocol fails. That is nice because if one of the two chains is more expensive to use, let’s say you go from Litecoin to Bitcoin, then you want to have Bitcoin be the place where only one transaction takes place. That is more efficient. The other thing already mentioned is that one of the two chains doesn’t require a timelock. That might be good if there are some blockchains out there that don’t have any scripting whatsoever including timelocks. Lastly there is something called Payswap which might be useful to do with this protocol. [Payswap](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017595.html) is an idea by ZmnSCPxj on the Bitcoin mailing list where you have a payment where you send a full output to one person and the change, which is normally inside of the same transaction, is an atomic swap. I might be sending 1.5 Bitcoin to somebody for buying something and in another transaction that is seemingly unrelated that person sends me back 0.5 because I only intended to send 1 Bitcoin let’s say.
The nice thing about this is now you don’t really have any connection between the amounts. The amounts are different now. It is not as obvious as if you were to do an atomic swap where the amounts are the same. You do a payment and an atomic swap in one and that gives you an additional amount of privacy. This protocol wasn’t very practical before because it required four transactions. But now you could maybe do it in two transactions or three if you don’t want the online requirements.
+It works today, that is a good thing. You can do this with MuSig and Schnorr. That will be the most efficient way of doing it without any weird math that you have to do. Recently Lloyd Fournier, he came up with a way to do a single signer ECDSA adaptor signature. That allows you to do this today. If you utilize that kind of technique then you can do adaptor signatures with single signatures on the Bitcoin blockchain today. That’s really cool. Lloyd also helped me out by reviewing this Succinct Atomic Swap that I created so I want to thank him for that. Another advantage is that it is two transactions not four which is great. It is scriptless so you don’t really have anything huge going to the blockchain. It really is in the case of MuSig one signature, in the case of ECDSA two signatures going to the blockchain per transaction. It is asymmetric meaning that one of the chains only has one transaction going onchain at any time even if the protocol fails. That is nice because if one of the two chains is more expensive to use, let’s say you go from Litecoin to Bitcoin, then you want to have Bitcoin be the place where only one transaction takes place. That is more efficient. The other thing already mentioned is that one of the two chains doesn’t require a timelock. That might be good if there are some blockchains out there that don’t have any scripting whatsoever including timelocks. Lastly there is something called Payswap which might be useful to do with this protocol. [Payswap](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017595.html) is an idea by ZmnSCPxj on the Bitcoin mailing list where you have a payment where you send a full output to one person and the change, which is normally inside of the same transaction, is an atomic swap.
I might be sending 1.5 Bitcoin to somebody for buying something and in another transaction that is seemingly unrelated that person sends me back 0.5 because I only intended to send 1 Bitcoin let’s say. The nice thing about this is now you don’t really have any connection between the amounts. The amounts are different now. It is not as obvious as if you were to do an atomic swap where the amounts are the same. You do a payment and an atomic swap in one and that gives you an additional amount of privacy. This protocol wasn’t very practical before because it required four transactions. But now you could maybe do it in two transactions or three if you don’t want the online requirements.
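The "scriptless" property rests on adaptor signatures: a pre-signature that becomes a valid Schnorr signature only once a secret scalar is added, and whose broadcast reveals that secret to the counterparty, which is what makes the swap atomic. A toy sketch over secp256k1 with a plain SHA256 challenge; real protocols (MuSig, BIP340) add tagged hashes, x-only keys and nonce-safety measures this deliberately omits, and all scalars here are toy values:

```python
import hashlib

# secp256k1 parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    """Point addition; None is the point at infinity."""
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if a == b:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt):
    r = None
    while k:
        if k & 1:
            r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

def h(*parts):
    """Toy challenge hash (real Schnorr uses a tagged hash)."""
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % N

x = 0xB0B    # signer's private key (toy)
k = 0x5EED   # nonce (toy; never reuse in practice)
t = 0x7EA    # adaptor secret, e.g. the swap's atomic secret
X, R, T = mul(x, G), mul(k, G), mul(t, G)

m = b"swap tx"
e = h(add(R, T)[0].to_bytes(32, "big"), X[0].to_bytes(32, "big"), m)

s_pre = (k + e * x) % N                        # pre-signature: NOT yet valid
assert mul(s_pre, G) == add(R, mul(e, X))      # but verifiable against R and T

s = (s_pre + t) % N                            # completing with t...
assert mul(s, G) == add(add(R, T), mul(e, X))  # ...yields a valid signature
assert (s - s_pre) % N == t                    # broadcasting s leaks t
```

The last line is the whole trick: once the completed signature hits one chain, the counterparty subtracts the pre-signature and learns `t`, which is exactly what they need to claim on the other chain, so no script, hash preimage, or extra bytes appear onchain.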
# Maybe
diff --git a/transcripts/scalingbitcoin/hong-kong/a-bevy-of-block-size-proposals-bip100-bip102-and-more.mdwn b/transcripts/scalingbitcoin/hong-kong/a-bevy-of-block-size-proposals-bip100-bip102-and-more.mdwn
index e7aea0f..78421fa 100644
--- a/transcripts/scalingbitcoin/hong-kong/a-bevy-of-block-size-proposals-bip100-bip102-and-more.mdwn
+++ b/transcripts/scalingbitcoin/hong-kong/a-bevy-of-block-size-proposals-bip100-bip102-and-more.mdwn
@@ -20,7 +20,7 @@ High-level miners mining without validating. Miners as it relates to block size,
Thinking about the fee market. From a user experience standpoint, fees are very difficult to reason and predict by design, that's just how the system works. Fees are disconnected by transaction value, because it's size-based. You might have a low-value transaction that is big in terms of bytes, so you are paying a high fee on a low-value transaction. You might have a super-large value transaction that has only one small UTXO, so the fee is tiny.
-From a user point of view, they only have choices in terms of what they get for what they pay. You can pay a high fee, and that's I want it as soon as possible, you can pay average fee, slightly below fee, or zero and have a very long wait. These are the definitions from the user's point of view. They don't have direct control based on fees. The block generation times are noisy, you might have a burst of two blocks inside of a minute, and you might have to wait an hour for another block. Even if you pay a high fee, there's no guarantee that it will confirm in the next ten minutes, only that it will confirm in the next block. From a user perspective, transaction fees are hard to reason about. Wallets have a difficult time figuring out what the best fee is to pay ((see [Bram Cohen's work on how wallets can handle transaction fee estimation](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011685.html))). How do you present that in the user interface?
+From a user point of view, they only have choices in terms of what they get for what they pay. You can pay a high fee, and that's I want it as soon as possible, you can pay average fee, slightly below fee, or zero and have a very long wait. These are the definitions from the user's point of view. They don't have direct control based on fees. The block generation times are noisy, you might have a burst of two blocks inside of a minute, and you might have to wait an hour for another block. Even if you pay a high fee, there's no guarantee that it will confirm in the next ten minutes, only that it will confirm in the next block. From a user perspective, transaction fees are hard to reason about. Wallets have a difficult time figuring out what the best fee is to pay ((see [Bram Cohen's work on how wallets can handle transaction fee estimation](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011685.html))). How do you present that in the user interface?
The fee market status, the changes, the economics, market reaction, all plays into the block size as well. The fee market exists today in a narrow band based on simplistic wallet software fee behavior. If you think through some scenarios about block size changes, you think if you have full blocks and then we change the block size, that might reboot the fee market and then introduce chaos into the user experience. If you don't have full blocks, then you might not have that hurdle. A large block size step might reboot the fee market.
diff --git a/transcripts/scalingbitcoin/hong-kong/overview-of-bips-necessary-for-lightning.mdwn b/transcripts/scalingbitcoin/hong-kong/overview-of-bips-necessary-for-lightning.mdwn
index f12e68d..e5c17a5 100644
--- a/transcripts/scalingbitcoin/hong-kong/overview-of-bips-necessary-for-lightning.mdwn
+++ b/transcripts/scalingbitcoin/hong-kong/overview-of-bips-necessary-for-lightning.mdwn
@@ -98,6 +98,6 @@ slides re: time and bitcoin <http://lightning.network/lightning-network-presenta
<https://github.com/ElementsProject/lightning>
-<https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev>
irc.freenode.net #lightning-dev
diff --git a/transcripts/scalingbitcoin/hong-kong/validation-cost-metric.mdwn b/transcripts/scalingbitcoin/hong-kong/validation-cost-metric.mdwn
index e8ea773..3aa69bb 100644
--- a/transcripts/scalingbitcoin/hong-kong/validation-cost-metric.mdwn
+++ b/transcripts/scalingbitcoin/hong-kong/validation-cost-metric.mdwn
@@ -156,5 +156,5 @@ But as we showed we can build on existing blocksize proposals to get some of the
----
-see also <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011662.html>
+see also <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011662.html>
diff --git a/transcripts/scalingbitcoin/milan/coin-selection.mdwn b/transcripts/scalingbitcoin/milan/coin-selection.mdwn
index 536202c..94d946d 100644
--- a/transcripts/scalingbitcoin/milan/coin-selection.mdwn
+++ b/transcripts/scalingbitcoin/milan/coin-selection.mdwn
@@ -122,7 +122,7 @@ A: Pieter suggested that a few weeks ago. I ran the numbers with P2WPKH for witn
# References
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-September/013131.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-September/013131.html>
paper <http://murch.one/wp-content/uploads/2016/09/CoinSelection.pdf>
diff --git a/transcripts/scalingbitcoin/milan/mimblewimble.mdwn b/transcripts/scalingbitcoin/milan/mimblewimble.mdwn
index b2ff347..3645adc 100644
--- a/transcripts/scalingbitcoin/milan/mimblewimble.mdwn
+++ b/transcripts/scalingbitcoin/milan/mimblewimble.mdwn
@@ -67,7 +67,7 @@ original mimblewimble paper <http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimbl
mimblewimble podcast <http://diyhpl.us/wiki/transcripts/mimblewimble-podcast/>
-other mimblewimble follow-up <https://www.reddit.com/r/Bitcoin/comments/4vub3y/mimblewimble_noninteractive_coinjoin_and_better/> and <https://www.reddit.com/r/Bitcoin/comments/4woyc0/mimblewimble_interview_with_andrew_poelstra_and/> and <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-August/012927.html>
+other mimblewimble follow-up <https://www.reddit.com/r/Bitcoin/comments/4vub3y/mimblewimble_noninteractive_coinjoin_and_better/> and <https://www.reddit.com/r/Bitcoin/comments/4woyc0/mimblewimble_interview_with_andrew_poelstra_and/> and <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-August/012927.html>
and <https://www.reddit.com/r/Bitcoin/comments/4xge51/mimblewimble_how_a_strippeddown_version_of/>
diff --git a/transcripts/scalingbitcoin/milan/onion-routing-in-lightning.mdwn b/transcripts/scalingbitcoin/milan/onion-routing-in-lightning.mdwn
index 7d8f828..0c659c7 100644
--- a/transcripts/scalingbitcoin/milan/onion-routing-in-lightning.mdwn
+++ b/transcripts/scalingbitcoin/milan/onion-routing-in-lightning.mdwn
@@ -85,7 +85,7 @@ A: For the payment network problems, we could say all payments in lightning are
-onion routing specification <https://lists.linuxfoundation.org/pipermail/lightning-dev/2016-July/000557.html>
+onion routing specification <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2016-July/000557.html>
onion routing protocol for lightning <https://github.com/cdecker/lightning-rfc/blob/master/bolts/onion-protocol.md>
diff --git a/transcripts/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market.mdwn b/transcripts/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market.mdwn
index de9ea40..12ce38d 100644
--- a/transcripts/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market.mdwn
+++ b/transcripts/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market.mdwn
@@ -2,7 +2,7 @@ Redesigning bitcoin's fee market
Or Sattath (The Hebrew University)
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html>
<https://www.reddit.com/r/Bitcoin/comments/72qi2r/redesigning_bitcoins_fee_market_a_new_paper_by/>
diff --git a/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/blockchain-design-patterns.mdwn b/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/blockchain-design-patterns.mdwn
index eb2a26b..2405485 100644
--- a/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/blockchain-design-patterns.mdwn
+++ b/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/blockchain-design-patterns.mdwn
@@ -139,7 +139,7 @@ Before I talk about taproot... Up til now, I have been talking about Schnorr sig
But bitcoin isn't mimblewimble, and people use scripts like timelocks and lightning channels. petertodd has some weird things like coins out there that you can get if you collide sha1 or sha256 or basically any of the hash functions that bitcoin supports. You can implement hash collision bounties and the blockchain enforces it.
-(An explanation of the following Q&A exchange can be found [here](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-September/017316.html).)
+(An explanation of the following Q&A exchange can be found [here](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-September/017316.html).)
Q: Can you do timelocks iwth adaptor signatures?
@@ -157,7 +157,7 @@ Q: No, there's two transactions already existing. Before locktime, you can spend
A: You'd have to diagram that out for me. There's a few ways to do this, some that I know, but yours isn't one of them.
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-September/017316.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-September/017316.html>
For timelocks, it appears that you need script support. That's my current belief at this time of day.
diff --git a/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/statechains.mdwn b/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/statechains.mdwn
index e13146e..9fba0dd 100644
--- a/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/statechains.mdwn
+++ b/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/statechains.mdwn
@@ -50,5 +50,5 @@ How do you prove to somebody that you know a private key? You sign a message and
<https://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html>
diff --git a/transcripts/scalingbitcoin/tel-aviv-2019/work-in-progress.mdwn b/transcripts/scalingbitcoin/tel-aviv-2019/work-in-progress.mdwn
index 4634664..f21d856 100644
--- a/transcripts/scalingbitcoin/tel-aviv-2019/work-in-progress.mdwn
+++ b/transcripts/scalingbitcoin/tel-aviv-2019/work-in-progress.mdwn
@@ -190,7 +190,7 @@ oh wait that's me (who types for the typer?)
<https://www.coindesk.com/the-vault-is-back-bitcoin-coder-to-revive-plan-to-shield-wallets-from-theft>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html>
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017231.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017231.html>
diff --git a/transcripts/scalingbitcoin/tokyo-2018/atomic-swaps.mdwn b/transcripts/scalingbitcoin/tokyo-2018/atomic-swaps.mdwn
index 30cf8a7..8936d98 100644
--- a/transcripts/scalingbitcoin/tokyo-2018/atomic-swaps.mdwn
+++ b/transcripts/scalingbitcoin/tokyo-2018/atomic-swaps.mdwn
@@ -79,4 +79,4 @@ How can we construct HTLCs or conditional payments in general without knowing th
* <https://diyhpl.us/wiki/transcripts/realworldcrypto/2018/mimblewimble-and-scriptless-scripts/>
* zero-knowledge contingent payments: <https://bitcoincore.org/en/2016/02/26/zero-knowledge-contingent-payments-announcement/>
* Equivalent secret values across curves: <https://0bin.net/paste/Q5perGCU3+QMVnhz#fNpHXjX0me3Wa-UBItl4hTeK7wjBkl8JlFAmsbTlZVA>
-* <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017245.html>
+* <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017245.html>
diff --git a/transcripts/scalingbitcoin/tokyo-2018/edgedevplusplus/taproot-and-graftroot.mdwn b/transcripts/scalingbitcoin/tokyo-2018/edgedevplusplus/taproot-and-graftroot.mdwn
index 78fec80..4db674c 100644
--- a/transcripts/scalingbitcoin/tokyo-2018/edgedevplusplus/taproot-and-graftroot.mdwn
+++ b/transcripts/scalingbitcoin/tokyo-2018/edgedevplusplus/taproot-and-graftroot.mdwn
@@ -58,9 +58,9 @@ When it comes to script, if you add expressability, it tends to lower privacy as
# References
-* taproot: <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html>
+* taproot: <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html>
-* graftroot <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html>
+* graftroot <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html>
* <https://github.com/Blockstream/contracthashtool>
diff --git a/transcripts/scalingbitcoin/tokyo-2018/scriptless-ecdsa.mdwn b/transcripts/scalingbitcoin/tokyo-2018/scriptless-ecdsa.mdwn
index f68a77f..340d9e1 100644
--- a/transcripts/scalingbitcoin/tokyo-2018/scriptless-ecdsa.mdwn
+++ b/transcripts/scalingbitcoin/tokyo-2018/scriptless-ecdsa.mdwn
@@ -4,7 +4,7 @@ Conner Fromknecht (Lightning Labs)
<https://twitter.com/kanzure/status/1048483254087573504>
-maybe <https://eprint.iacr.org/2018/472.pdf> and <https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001221.html>
+maybe <https://eprint.iacr.org/2018/472.pdf> and <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001221.html>
# Introduction
diff --git a/transcripts/sf-bitcoin-meetup/2017-03-29-new-address-type-for-segwit-addresses.mdwn b/transcripts/sf-bitcoin-meetup/2017-03-29-new-address-type-for-segwit-addresses.mdwn
index 5a00c6a..a667b7d 100644
--- a/transcripts/sf-bitcoin-meetup/2017-03-29-new-address-type-for-segwit-addresses.mdwn
+++ b/transcripts/sf-bitcoin-meetup/2017-03-29-new-address-type-for-segwit-addresses.mdwn
@@ -20,7 +20,7 @@ Transcript completed by: Bryan Bishop Edited by: Michael Folkson
# Intro
-Can everyone hear me fine through this microphone? Anyone who can't hear me please raise your hand. Oh wait. All good now? Tonight I will be speaking on a project I've been working on on and off for the past year or so, which is the question of what kind of addresses we will be using in Bitcoin in the future. Recently I proposed a [BIP](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013749.html) after several long discussions among some people. I think we have a great proposal. So today I will be talking about the proposal itself and how it came to be. This was joint work with several people, in particular Greg Maxwell who is here as well, and my colleagues at Blockstream. Most of this work was done thanks to the computation power of their computers. I'll talk about that more. So this is the outline of my talk. First I'll talk about why we need a new address type going forward. The decision to use [base32](https://en.wikipedia.org/wiki/Base32>) rather than [base58](https://en.bitcoin.it/wiki/Base58Check_encoding) as has been used historically. Once the choice for base32 has been made, there are a bunch of open design questions like what checksum to use, what character set to use, and what the address structure looks like. Optimal character set depends on optimal choice of checksum, which may be surprising. And then combining this into a new format, which I am calling [bech32](https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki)
+Can everyone hear me fine through this microphone? Anyone who can't hear me please raise your hand. Oh wait. All good now? Tonight I will be speaking on a project I've been working on on and off for the past year or so, which is the question of what kind of addresses we will be using in Bitcoin in the future. Recently I proposed a [BIP](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013749.html) after several long discussions among some people. I think we have a great proposal. So today I will be talking about the proposal itself and how it came to be. This was joint work with several people, in particular Greg Maxwell who is here as well, and my colleagues at Blockstream. Most of this work was done thanks to the computation power of their computers. I'll talk about that more. So this is the outline of my talk. First I'll talk about why we need a new address type going forward. The decision to use [base32](https://en.wikipedia.org/wiki/Base32>) rather than [base58](https://en.bitcoin.it/wiki/Base58Check_encoding) as has been used historically. Once the choice for base32 has been made, there are a bunch of open design questions like what checksum to use, what character set to use, and what the address structure looks like. Optimal character set depends on optimal choice of checksum, which may be surprising. And then combining this into a new format, which I am calling [bech32](https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki)
# Why?
diff --git a/transcripts/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets.mdwn b/transcripts/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets.mdwn
index 29cd5d7..6d2ac37 100644
--- a/transcripts/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets.mdwn
+++ b/transcripts/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets.mdwn
@@ -86,15 +86,15 @@ A: Yeah. The block structure does not change the root hash whatsoever. I impleme
# TXO bitfields
-Any more questions? Okay, something that I am fairly sure will be controversial. Unfortunately the <a href="https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev">bitcoin-dev mailing list</a> has got completely drowned in discussions of subtleties of upgrade paths recently (laughter). So there hasn't been too much discussion of real engineering, much to my dismay. But, so, using a merkle set is kind of a blunt hammer, all problems can be solved with a merkle set. And maybe in some cases, something a little bit more white box, that knows a little bit more about what's going on under the hood, might perform better. So here's a thought. My merkle set proposal has an implementation with defined behavior. But right now, the way things work, you implicitly have a UTXO set, and a wallet has a private key, it generates a transaction, sends it to a full node that has a UTXO set so that it can validate the transaction, and the UTXO set size is approximately the number of things in the UTXO set multiplied by 32 bytes. So the number over here is kind of big and you might want it to be smaller.
+Any more questions? Okay, something that I am fairly sure will be controversial. Unfortunately the <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev">bitcoin-dev mailing list</a> has got completely drowned in discussions of subtleties of upgrade paths recently (laughter). So there hasn't been too much discussion of real engineering, much to my dismay. But, so, using a merkle set is kind of a blunt hammer, all problems can be solved with a merkle set. And maybe in some cases, something a little bit more white box, that knows a little bit more about what's going on under the hood, might perform better. So here's a thought. My merkle set proposal has an implementation with defined behavior. But right now, the way things work, you implicitly have a UTXO set, and a wallet has a private key, it generates a transaction, sends it to a full node that has a UTXO set so that it can validate the transaction, and the UTXO set size is approximately the number of things in the UTXO set multiplied by 32 bytes. So the number over here is kind of big and you might want it to be smaller.
So there's been discussions of maybe using UTXO bit fields, I had a good discussion with petertodd about his alternative approach to this whole thing, wherein I said this weird comment because he likes UTXO bit fields, I said the thing that might be useful there is that you can compress things down a lot. And it turns out that this really helps a lot. I had this idea for a UTXO bit field. UTXO is all the unspent transaction outputs. It includes all unspents. The idea with a UTXO bit field is that you have a wallet, a proof of position and its private key. As things are added to blocks, each one has a position, and no matter how many things are added later, it is already in the same position always. So to make the validation easier for the full node, the wallet will give the proof of position which it remembers for the relevant inputs that it's using, and bundles that with the transaction to send it to the full node, who then a miner puts it into a block. The proofs of positions will be substantially larger than the transactions, so that's a tradeoff.
-So this goes to a full node, and it has to remember a little bit more than before. It has to remember a list of position roots per block. For every single block, it remembers a root of a commitment that can be canonically calculated for that block, of the positions for all the inputs in it, to allow it to verify the proof of position. And it also has a <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013928.html">TXO bitfield</a>. The full node takes the proof of position and verifies the alleged position using the data it remembers for validating that. And then it looks it up in the TXO bit field, which in the simplest implementation is a trivial data structure, it's a bit field, and you look it up by position. It's not complex at all. It's one look up, it's probably very close to other lookups because you're probably looking at recent information, and these are all 1 bit each, so they are much closer to each other. The size of the TXO bit field is equal to the TXO size divided by 8. So this is a constant factor improvement of 256. Computer science people usually don't care about constant factors, but 256 makes a big difference in the real world. This also has the beenfit that my merkle set is a couple hundred lines of fairly technical code, it has extensive testing but it's not something that I particularly trust someone to go ahead and re-implement well from that, it's something where I would expect someone to port the existing code and existing tests. Whereas I would have much ore confidence that someone could implement a TXO bit field from a spec and get it right.
+So this goes to a full node, and it has to remember a little bit more than before. It has to remember a list of position roots per block. For every single block, it remembers a root of a commitment that can be canonically calculated for that block, of the positions for all the inputs in it, to allow it to verify the proof of position. And it also has a <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013928.html">TXO bitfield</a>. The full node takes the proof of position and verifies the alleged position using the data it remembers for validating that. And then it looks it up in the TXO bit field, which in the simplest implementation is a trivial data structure, it's a bit field, and you look it up by position. It's not complex at all. It's one look up, it's probably very close to other lookups because you're probably looking at recent information, and these are all 1 bit each, so they are much closer to each other. The size of the TXO bit field is equal to the TXO size divided by 8. So this is a constant factor improvement of 256. Computer science people usually don't care about constant factors, but 256 makes a big difference in the real world. This also has the beenfit that my merkle set is a couple hundred lines of fairly technical code, it has extensive testing but it's not something that I particularly trust someone to go ahead and re-implement well from that, it's something where I would expect someone to port the existing code and existing tests. Whereas I would have much ore confidence that someone could implement a TXO bit field from a spec and get it right.
The downside is that these proofs of positions are much bigger than the transactions. And this is based on the TXO set size, rather than the UTXO set size which is probably trending towards some constant. In the long term it might grow iwthout bound. There is a pretty straightforward way of fixing that, which is making a fancier bit field. When it's sparse, at the expense of it being much more interesting to implement, you can retain the exact same semantics while making the physical size that the bitfield usig is equal to the number of bits that are set to 1 times the log of the total number of bits, which isn't scary in the slightest. So this is still going to be quite small. To get this working, you'd have to go through an exercise about making a good merkle set, and someone is going to have to think about it, experiment with it and figure out what implementation is going to perform well. Whereas a straightforward bit field is just trivial.
-<a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-June/012758.html">Any</a> <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013928.html">discussion</a> of <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013592.html">this</a> got drowned out with the mailing list's very important discussion about segwit. But I think this is an approach worth considering, with the given caveats about its performane, it has a lot going for it, rather than doing UTXO commimtents in blocks in bitcoin. Merkle sets can do a lot more things for a lot more stuff than just in bitcoin. I think they shine when you have truly gigantic sets of data-- probably people that have permissioned blockchains should be using merkle sets, not really bitcoin I guess.
+<a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-June/012758.html">Any</a> <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013928.html">discussion</a> of <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013592.html">this</a> got drowned out with the mailing list's very important discussion about segwit. But I think this is an approach worth considering, with the given caveats about its performance, it has a lot going for it, rather than doing UTXO commitments in blocks in bitcoin. Merkle sets can do a lot more things for a lot more stuff than just in bitcoin. I think they shine when you have truly gigantic sets of data-- probably people that have permissioned blockchains should be using merkle sets, not really bitcoin I guess.
Q: The proof of position... is that position data immutable?
@@ -178,8 +178,8 @@ Q: But we already have linear amount of data from the blocks that are available.
# Other links
-delayed txo commitments <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html>
+delayed txo commitments <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html>
-TXO commitments do not need a soft-fork to be useful <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013591.html>
+TXO commitments do not need a soft-fork to be useful <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013591.html>
-rolling UTXO set hashes <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html>
+rolling UTXO set hashes <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html>
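The plain (dense) TXO bitfield described in the transcript above can be sketched as follows. This is a hypothetical illustration of the idea, not Bram's implementation: one bit per output ever created, keyed by its permanent position, so 256 outputs cost 32 bytes instead of 256 × 32 bytes for a hash-per-entry set, the constant factor of 256 mentioned in the talk.

```python
# Dense TXO bitfield sketch (illustrative only, not consensus code):
# one bit per transaction output ever created, indexed by its permanent
# position; 1 = unspent, 0 = spent.
class TxoBitfield:
    def __init__(self):
        self.bits = bytearray()
        self.count = 0

    def append_output(self):
        """Register a newly created output; positions are assigned once and never move."""
        pos = self.count
        if pos % 8 == 0:
            self.bits.append(0)                   # grow by one byte every 8 outputs
        self.bits[pos // 8] |= 1 << (pos % 8)     # new outputs start unspent
        self.count += 1
        return pos

    def is_unspent(self, pos):
        return bool(self.bits[pos // 8] & (1 << (pos % 8)))

    def spend(self, pos):
        """Clear the bit at a (verified) position; a full node would first
        check the proof of position against the per-block position roots."""
        if not self.is_unspent(pos):
            raise ValueError("already spent or unknown position")
        self.bits[pos // 8] &= ~(1 << (pos % 8))

    def size_bytes(self):
        return len(self.bits)
```

The lookup is a single indexed bit test, which is the simplicity-of-implementation argument Bram makes relative to the merkle set; the sparse variant he mentions would replace the flat `bytearray` with a compressed structure whose size tracks the number of set bits.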
diff --git a/transcripts/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my.mdwn b/transcripts/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my.mdwn
index b35d7b2..647270b 100644
--- a/transcripts/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my.mdwn
+++ b/transcripts/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my.mdwn
@@ -30,7 +30,7 @@ This has many similarities with proof systems. In the extreme, we can aim for a
# Signature system improvements
-Regarding improvements, I want to talk about three things. One is <a href="https://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/">Schnorr signatures</a>. Some of you may have seen that I recently a couple days ago published a <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016203.html">draft BIP for incorporating Schnorr signatures into bitcoin</a>. I'll talk a bit about that.
+Regarding improvements, I want to talk about three things. One is <a href="https://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/">Schnorr signatures</a>. Some of you may have seen that I recently a couple days ago published a <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016203.html">draft BIP for incorporating Schnorr signatures into bitcoin</a>. I'll talk a bit about that.
I will also be talking about <a href="https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/">signature aggregation</a> (or <a href="https://bitcoincore.org/en/2017/03/23/schnorr-signature-aggregation/">here</a>), in particular aggregation across multiple inputs in a transaction. There's really two separate things, I believe. There's the signature system and then the integration and we should talk about them separately. Lots of the interest in the media on this topic are easily conflated between the two issues.
@@ -55,7 +55,7 @@ Another thing that we can do is focusing on this too, we want <a href="https://d
# Schnorr signature BIP draft
-A few days ago, I published a <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016203.html">Schnorr signature BIP draft</a> which was the combined work of a number of people including Greg Maxwell. And many other people who are listed in the BIP draft. This BIP accomplishes all of those goals. It's really just a signature scheme, it doesn't talk about how we might go about integrating that into bitcoin. I'm going to talk about my ideas about that integration problem later.
+A few days ago, I published a <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016203.html">Schnorr signature BIP draft</a> which was the combined work of a number of people including Greg Maxwell. And many other people who are listed in the BIP draft. This BIP accomplishes all of those goals. It's really just a signature scheme, it doesn't talk about how we might go about integrating that into bitcoin. I'm going to talk about my ideas about that integration problem later.
# Schnorr signature properties
@@ -79,7 +79,7 @@ Another advantage of Schnorr signatures, and it's one of the more exciting thing
# Cross-input signature aggregation
-<a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015696.html">Cross-input signature aggregation</a> is from the fact that the Schnorr signature construction where you can sign the same message with multiple keys, this can be generalized to having multiple different messages be signed by different people and still have just a single signature. The ability to do so really would in theory allow us to reduce the total number of signatures in a transaction to just one signature. This has been the initial drive for going and looking into Schnorr signatures and it's such an awesome win. There are many complications in implementing this, it turns out, but this is the goal that we want to get to. It has an impact on how we validate transactions. Right now, every input you just run the scripts and out comes TRUE or FALSE and if there's FALSE then the transaction is invalid. This needs to be changed to a model where script validation returns TRUE or FALSE and also returns a list of pubkeys which is the set of keys that must still sign for that input, and then we need a single signature that can do this, and it needs to be a transaction-wide rather than transaction input-wide.
+<a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015696.html">Cross-input signature aggregation</a> is from the fact that the Schnorr signature construction where you can sign the same message with multiple keys, this can be generalized to having multiple different messages be signed by different people and still have just a single signature. The ability to do so really would in theory allow us to reduce the total number of signatures in a transaction to just one signature. This has been the initial drive for going and looking into Schnorr signatures and it's such an awesome win. There are many complications in implementing this, it turns out, but this is the goal that we want to get to. It has an impact on how we validate transactions. Right now, every input you just run the scripts and out comes TRUE or FALSE and if there's FALSE then the transaction is invalid. This needs to be changed to a model where script validation returns TRUE or FALSE and also returns a list of pubkeys which is the set of keys that must still sign for that input, and then we need a single signature that can do this, and it needs to be a transaction-wide rather than transaction input-wide.
Another complication is soft-fork compatibility. The complication is that when you want different versions of software to validate the same set of inputs, and there is only a single signature, you must make sure that they both understand that this signature is about the same set of keys. If they disagree about the set of keys or about who has to sign, then that would be bad. Any new feature added to the scripting language that changes the set of signers is inherently incompatible with aggregation. This is solvable, but it's something to take into account, and it interacts with many things.
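The changed validation model described above can be sketched in a few lines. This is purely illustrative (the function names and data shapes are mine, not Bitcoin Core's): each input's script run returns a result plus the keys still owed a signature, and one transaction-wide aggregate signature must cover the union.

```python
# Hypothetical model of per-input validation returning (ok, required_keys),
# with a single transaction-wide aggregate signature check at the end.

def validate_input(script_ok, required_keys):
    # Stand-in for running one input's script: result plus keys still owed
    return (script_ok, set(required_keys) if script_ok else set())

def validate_transaction(inputs, aggregate_sig_covers):
    """inputs: list of (script_ok, required_keys) pairs, one per input."""
    must_sign = set()
    for ok, keys in (validate_input(s, k) for s, k in inputs):
        if not ok:
            return False            # any FALSE input invalidates the tx
        must_sign |= keys           # collect required keys transaction-wide
    # The one aggregate signature must cover every collected key
    return must_sign <= aggregate_sig_covers

tx = [(True, {"keyA"}), (True, {"keyB", "keyC"})]
assert validate_transaction(tx, {"keyA", "keyB", "keyC"})
assert not validate_transaction(tx, {"keyA"})     # aggregate sig incomplete
```

The soft-fork hazard in the next paragraph maps directly onto this sketch: two versions that return different `required_keys` sets for the same input would disagree on what the aggregate signature must cover.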
@@ -89,7 +89,7 @@ Another new development is thinking about new sighash modes. When I'm signing fo
# SIGHASH\_NOINPUT
-Recently there has been a proposal by cdecker for <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-April/015908.html">SIGHASH\_NOINPUT</a> where you sign the scripts and not the txids. The scary downside of this construction is that they are replayable. I pay to an address, you spend using the SIGHASH\_NOINPUT and then someone else for whatever reason can send to the same address then the receiver of that first spend can take the first signature, put it in a new transaction and can take the new coins that were spent there. So this is something that should only be used in certain applications that can make sure this is not a problem.
+Recently there has been a proposal by cdecker for <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-April/015908.html">SIGHASH\_NOINPUT</a> where you sign the scripts and not the txids. The scary downside of this construction is that signatures are replayable. I pay to an address, you spend using SIGHASH\_NOINPUT, and then if someone else for whatever reason sends to the same address, the receiver of that first spend can take the first signature, put it in a new transaction and take the new coins that were sent there. So this is something that should only be used in certain applications that can make sure this is not a problem.
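The replay hazard can be shown with a toy stand-in for a signature (an HMAC here, not a real signature scheme): because the "signature" commits only to the script and not to the outpoint being spent, the identical signature verifies against any later payment to the same script.

```python
# Toy illustration of NOINPUT-style replay. HMAC stands in for a signature;
# nothing here is the actual SIGHASH_NOINPUT construction.
import hashlib
import hmac

def toy_sign_noinput(key, script):
    # Commits only to the script, not to which outpoint is being spent
    return hmac.new(key, script, hashlib.sha256).digest()

def toy_verify(key, script, outpoint, sig):
    # A NOINPUT-style check ignores the outpoint entirely
    return hmac.compare_digest(sig, toy_sign_noinput(key, script))

key, script = b"secret", b"pay-to-alice-script"
sig = toy_sign_noinput(key, script)
assert toy_verify(key, script, "txid1:0", sig)   # the intended spend
assert toy_verify(key, script, "txid2:0", sig)   # replayed on a later payment
```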
# eltoo
@@ -115,7 +115,7 @@ I want to make an intermediate step here, where I want to go into what is a 'una
# Taproot
-However, if you think about the scenario here.. we want everyone to agree generally, still we have to publish on the chain is our key and the top-right branch hash. It's an additional 64 bytes that need to be revealed just for this super common case that hopefully will be taken all the time. Can we do better? That is where <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html">taproot</a> comes in. It's something that Greg Maxwell came up with.
+However, if you think about the scenario here.. we want everyone to agree generally; still, what we have to publish on the chain is our key and the top-right branch hash. It's an additional 64 bytes that need to be revealed just for this super common case that hopefully will be taken all the time. Can we do better? That is where <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html">taproot</a> comes in. It's something that Greg Maxwell came up with.
Taproot gives the ability to say-- it's based on the idea that we can use a construction called pay-to-contract which was originally invented by Timo Hanke in 2013 I think, to tweak a public key with a script using the equation there on the screen.
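The tweak equation referred to above ("Q = P + H(P || script)·G") can be sketched with toy secp256k1 point arithmetic. This is illustrative only: the hash choice and helper names are mine (real taproot uses a tagged hash and x-only keys), and the code is not constant time or production-grade.

```python
# Toy pay-to-contract tweak over secp256k1: Q = P + H(P || script) * G.
import hashlib

P_FIELD = 2**256 - 2**32 - 977   # secp256k1 field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P_FIELD == 0:
        return None                                   # P + (-P) = infinity
    if a == b:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P_FIELD) % P_FIELD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_FIELD) % P_FIELD
    x3 = (lam * lam - x1 - x2) % P_FIELD
    return (x3, (lam * (x1 - x3) - y1) % P_FIELD)

def point_mul(k, pt):
    r = None
    while k:
        if k & 1:
            r = point_add(r, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return r

def pay_to_contract(pubkey, script):
    """Q = P + H(P || script)*G  (plain SHA256 here; taproot tags the hash)."""
    t = int.from_bytes(hashlib.sha256(
        pubkey[0].to_bytes(32, 'big') + script).digest(), 'big') % N
    return point_add(pubkey, point_mul(t, G))

P = point_mul(12345, G)                      # toy internal key
Q = pay_to_contract(P, b"some fallback script")
assert Q != P and (Q[1] ** 2 - Q[0] ** 3 - 7) % P_FIELD == 0  # Q on curve
```

Anyone holding the key for P (plus the tweak) can sign for Q directly in the cooperative case; the script only ever appears on chain in the fallback case.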
@@ -133,13 +133,13 @@ If we start from this assumption that there exists a single key that represents
Delegation means we're now going to permit spending, by saying "I have a signature with this taproot key (the key that represents everyone involved)", revealing a script, revealing a signature with a key on that script, and the inputs to it. There's a group of participants that represent the "everyone agrees" case, and they have the ability to delegate spending to other scripts and other participants.
-The advantage of <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html">graftroot</a> over a merkle tree is, you can have as many spending paths as you want and they are all the same size. All you do is reveal a single signature. The downside here is that it is an inherently interactive key setup. You cannot spend as-- if you are a part of this s1, s2, or s3. You cannot spend without having been given the signature by the keys involved.
+The advantage of <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html">graftroot</a> over a merkle tree is, you can have as many spending paths as you want and they are all the same size. All you do is reveal a single signature. The downside here is that it is an inherently interactive key setup. You cannot spend as-- if you are a part of this s1, s2, or s3. You cannot spend without having been given the signature by the keys involved.
This may mean difficulty with backups, for example, because your money is lost if you lose the signature.
# Half-aggregation
-There is another concept called <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014272.html">half-aggregation</a> ((unpublished improvements forthcoming)) that Tadge Dryja came up with that lets you half-aggregate signatures non-interactively together. It doesn't turn them into an entire single thing but it turns them into half the size. With that, graftroot even for the most simple of cases is more efficient in terms of space than a merkle branch. But there are tradeoffs.
+There is another concept called <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014272.html">half-aggregation</a> ((unpublished improvements forthcoming)) that Tadge Dryja came up with that lets you half-aggregate signatures non-interactively together. It doesn't turn them into an entire single thing but it turns them into half the size. With that, graftroot even for the most simple of cases is more efficient in terms of space than a merkle branch. But there are tradeoffs.
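The space claim above can be made concrete with back-of-the-envelope arithmetic: n separate Schnorr signatures take 64n bytes (per the 64-byte signatures in the BIP draft), while half-aggregation keeps the n nonce points (32 bytes each) but collapses the s values into a single one, giving 32n + 32 bytes. These numbers are the commonly cited sizes, not anything from the unpublished improvements mentioned.

```python
# Size comparison: separate Schnorr signatures vs. half-aggregation.

def schnorr_bytes(n):
    return 64 * n            # each signature: 32-byte R + 32-byte s

def half_agg_bytes(n):
    return 32 * n + 32       # keep every R, but only one combined s

assert half_agg_bytes(1) == schnorr_bytes(1) == 64   # no gain for one sig
assert half_agg_bytes(10) == 352                      # vs. 640 uncombined
# Savings approach 50% as n grows
assert half_agg_bytes(1000) / schnorr_bytes(1000) < 0.51
```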
# Taproot and graftroot in practice
diff --git a/transcripts/sf-bitcoin-meetup/2019-12-16-bip-taproot-bip-tapscript.mdwn b/transcripts/sf-bitcoin-meetup/2019-12-16-bip-taproot-bip-tapscript.mdwn
index 6a3905b..30cb700 100644
--- a/transcripts/sf-bitcoin-meetup/2019-12-16-bip-taproot-bip-tapscript.mdwn
+++ b/transcripts/sf-bitcoin-meetup/2019-12-16-bip-taproot-bip-tapscript.mdwn
@@ -24,7 +24,7 @@ Thank you, Mark. My name is Pieter Wuille. I do bitcoin stuff. I work at Blockst
Over the past few weeks, Bitcoin Optech has organized structured <a href="https://github.com/ajtowns/taproot-review">taproot review sessions</a> (<a href="https://www.coindesk.com/an-army-of-bitcoin-devs-is-battle-testing-upgrades-to-privacy-and-scaling">news</a>) (and <a href="https://bitcoinops.org/workshops/#taproot-workshop">workshops</a> and <a href="https://diyhpl.us/wiki/transcripts/bitcoinops/schnorr-taproot-workshop-2019/notes/">workshop transcript</a> and <a href="https://github.com/bitcoinops/taproot-workshop">here</a>) which has brought in attention and lots of comments from lots of people which have been very useful.
-Of course, the original idea of taproot is due to <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html">Greg Maxwell who came up with it a year or two ago</a>. Thanks to him as well. And all the other people have been involved in this, too.
+Of course, the original idea of taproot is due to <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html">Greg Maxwell who came up with it a year or two ago</a>. Thanks to him as well. And all the other people have been involved in this, too.
I always make <a href="https://prezi.com/view/AlXd19INd3isgt3SvW8g/">my slides</a> at the very last minute. These people have not seen my slides. If there's anything wrong on them, that's on me.
@@ -32,7 +32,7 @@ I always make <a href="https://prezi.com/view/AlXd19INd3isgt3SvW8g/">my slides</
Okay, so what will this talk be about? I wanted to talk about the actual BIPs and the actual changes that we're proposing to make to bitcoin to bring taproot, Schnorr signatures, and merkle trees, and a whole bunch of other things. I am mostly not going to talk about taproot as an abstract concept. <a href="https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my/">I previously gave a talk about taproot</a> here I think 1.5 years ago in July 2018. So if you want to know more about the history or the reasoning why we want this sort of thing, then please go have a look at that talk. Here, I am going to talk about a lot of the other things that were brought in that we realized we had to change along the way or that we could and should. I am going to try to justify those things.
-I think we're nearing the end of-- we're nearing the point where these BIPs are getting ready to be an actual proposal for bitcoin. But still feel free, if you have comments, then you're more than welcome to post them on <a href="https://github.com/sipa/bips">my github repository</a>, on the <a href="https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev">mailing list</a>, on <a href="http://gnusha.org/bitmetas">IRC</a>, or here in person to make them and I'm happy to answer any questions.
+I think we're nearing the end of-- we're nearing the point where these BIPs are getting ready to be an actual proposal for bitcoin. But still feel free, if you have comments, then you're more than welcome to post them on <a href="https://github.com/sipa/bips">my github repository</a>, on the <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev">mailing list</a>, on <a href="http://gnusha.org/bitmetas">IRC</a>, or here in person to make them and I'm happy to answer any questions.
So during this talk, I am really going to go over step-by-step a whole bunch of small and less small details that were introduced. Feel free to raise your hand at any time if you have questions. As I said, I am not going to talk so much about taproot as a concept, but this might mean that the justification or rationale for things is not clear, so feel free to ask. Okay.
@@ -48,7 +48,7 @@ I'd like to, when working on proposals for bitcoin, I like to focus on things th
I think it's important to point out that it's not even-- a question you could ask is, well, but it should be possible to say "optionally introduce a feature that people can use that changes the security assumptions". Probably that is what we want to do at some point eventually, but even--- if I don't trust some new digital signature scheme that offers some new awesome features that we may want to use, and you do, so you use the wallet that uses it, effectively your coins become at risk and if I'm interacting with you... Eventually the whole ecosystem of bitcoin transactions is relying on.... say several million coins are somehow encumbered by security assumptions that I don't trust. Then I probably won't have faith in the currency anymore. What I'm trying to get at is that the security assumptions of the system are not something you can just choose and take. It must be something that really the whole ecosystem accepts. For that reason, I'm just focusing on not changing them at all, because that's obviously the easiest thing to argue for. The result of this is that we end up exploring to the extent possible all the possible things that are possible with these, and we have discovered some pretty neat things along the way.
-Over the past few years, a whole bunch of technologies and techniques have been invented that could be used to improve the efficiency, flexibility or privacy of bitcoin Script in some way. There's merkle trees and MASTs, taproot, <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html">graftroot</a>, generalized taproot (also known as groot). Then there's some ideas about new opcodes, new sighash modes such as <a href="https://github.com/bitcoin/bips/blob/master/bip-0118.mediawiki">SIGHASH\_NOINPUT</a>, <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-March/015838.html">cross-input aggregation</a> which is actually what started all of this... A problem is that there's a tradeoff. The tradeoff is, on the one hand we want to have-- it turns out that combining more things actually gives you better efficiency and privacy especially when we're talking about policy privacy, say there's a dozen possible things that interact and every two months there's a new soft-fork that introduces some new feature.... clearly, you're going to be revealing more to the network because you're using script version 7 or something, and it added this new feature, and you must have had a reason to migrate to script version 7. This makes for an automatic incentive to combine things together. Also, the fact that probably not-- people will not want to go through an upgrade changing their wallet logic every couple of months. When you introduce a change like this, you want to make it large enough that people are effectively incentivized to adopt it. On the other hand, putting everything all at once together becomes really complex, becomes hard to explain, and is just from an engineering perspective and a political perspective too, a really hard thing. "Here's 300 pages of specification of a new thing, take it or leave it" is really not how you want to do things.
+Over the past few years, a whole bunch of technologies and techniques have been invented that could be used to improve the efficiency, flexibility or privacy of bitcoin Script in some way. There's merkle trees and MASTs, taproot, <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html">graftroot</a>, generalized taproot (also known as groot). Then there's some ideas about new opcodes, new sighash modes such as <a href="https://github.com/bitcoin/bips/blob/master/bip-0118.mediawiki">SIGHASH\_NOINPUT</a>, <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-March/015838.html">cross-input aggregation</a> which is actually what started all of this... A problem is that there's a tradeoff. The tradeoff is, on the one hand we want to have-- it turns out that combining more things actually gives you better efficiency and privacy especially when we're talking about policy privacy, say there's a dozen possible things that interact and every two months there's a new soft-fork that introduces some new feature.... clearly, you're going to be revealing more to the network because you're using script version 7 or something, and it added this new feature, and you must have had a reason to migrate to script version 7. This makes for an automatic incentive to combine things together. Also, the fact that probably not-- people will not want to go through an upgrade changing their wallet logic every couple of months. When you introduce a change like this, you want to make it large enough that people are effectively incentivized to adopt it. On the other hand, putting everything all at once together becomes really complex, becomes hard to explain, and is just from an engineering perspective and a political perspective too, a really hard thing. "Here's 300 pages of specification of a new thing, take it or leave it" is really not how you want to do things.
The balance we end up with is combining some things by focusing on just one thing, and its dependencies, bugfixes and extensions to it, but let's not do everything at once. In particular, we avoid things that can be done independently. If we can argue that doing some feature as a new soft-fork independently is just as good as doing it at the same time, then we avoid it. As you'll see, there's a whole bunch of extension mechanisms that we prefer over adding features themselves. In particular, there will be a way to easily add new sighash modes, and as a result we don't have to worry about having those integrated into the proposal right away. Also new opcodes; we're not really adding new opcodes because there's an extension mechanism that will easily let us do that later.
@@ -68,7 +68,7 @@ How are we constructing an output? On the slide, you can see s1, s2, and s3. Tho
So we're going to introduce a new witness version. Segwit as defined in bip141 offered the possibility of having multiple script versions. So we're going to use that instead of using v0 as we've used so far, we define a new one which is script v1. Its program is not a hash, it is in fact the x-coordinate of that point q. There's some interesting observations here. One, we just store the x-coordinate and not the y-coordinate. A common intuition that people have is that by dropping the y-coordinate we're actually reducing the key space in half. So people think well maybe this is 1/2 bits reduction in security. It's easy to prove that this is in fact no reduction in security at all. The intuition is that, if you have an algorithm to break a public key given just an x-coordinate you would in fact always use it. You would even use it on public keys that also had a y-coordinate. So it is true that there's some structure in public keys, and we're exploiting that by just storing the x-coordinate, but that structure is always there and it can always be exploited. It's easy to actually prove this. Jonas Nick wrote a blog post about that not so long ago, which is an interesting read and gives a glimpse of the proofs that are used in this sort of thing.
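Recovering the full point from just the x-coordinate, as described above, is a small computation: solve y² = x³ + 7 over the secp256k1 field and pick the even-y solution (the BIP340 convention). A minimal sketch, not hardened code:

```python
# Lift an x-only public key back to a full secp256k1 point (even-y choice).
P_FIELD = 2**256 - 2**32 - 977   # secp256k1 field prime, p % 4 == 3

def lift_x(x):
    y_sq = (pow(x, 3, P_FIELD) + 7) % P_FIELD
    y = pow(y_sq, (P_FIELD + 1) // 4, P_FIELD)   # modular sqrt since p%4==3
    if pow(y, 2, P_FIELD) != y_sq:
        return None                               # x is not on the curve
    return (x, y if y % 2 == 0 else P_FIELD - y)

GX = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
GY = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
assert lift_x(GX) == (GX, GY)   # G's y happens to be even, so it round-trips
```

This is the concrete sense in which dropping the y-coordinate loses nothing: the full point is always recoverable from x up to a sign choice, so an attacker never needed y to begin with.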
-As I said, we're defining witness v1. Other witness versions remain unencumbered, obviously, because we don't want to say anything yet about v2 and beyond. But also, we want to keep other lengths unencumbered... I believe this was a mistake we made in witness v0, which is only valid with 20 or 32 bytes hash ... a 20 byte corresponds to a public key hash, and the 32 bytes refers to scripthash. The space of witness versions and their programs is limited. It's sad that we've removed the possibility to use v0. There's only 16 versions. To avoid that, we leave other lengths unencumbered but the downside is that this exposes us to -- a couple of months ago, a <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-December/017521.html">issue</a> was discovered in <a href="https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2017-03-29-new-address-type-for-segwit-addresses/">bech32</a> (<a href="https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki">bip173</a>) where it is under some circumstances possible to insert characters into an address without invalidating it. I've <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-December/017521.html">posted on the mailing list</a> a strategy and an analysis that shows how to fix that bech32 problem. It's unfortunate though that we're now becoming exposed by not doing this encumberence.
+As I said, we're defining witness v1. Other witness versions remain unencumbered, obviously, because we don't want to say anything yet about v2 and beyond. But also, we want to keep other lengths unencumbered... I believe this was a mistake we made in witness v0, which is only valid with 20 or 32 byte hashes ... a 20-byte hash corresponds to a public key hash, and the 32-byte one to a script hash. The space of witness versions and their programs is limited. It's sad that we've removed the possibility to use v0. There's only 16 versions. To avoid that, we leave other lengths unencumbered, but the downside is that this exposes us to -- a couple of months ago, an <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-December/017521.html">issue</a> was discovered in <a href="https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2017-03-29-new-address-type-for-segwit-addresses/">bech32</a> (<a href="https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki">bip173</a>) where it is under some circumstances possible to insert characters into an address without invalidating it. I've <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-December/017521.html">posted on the mailing list</a> a strategy and an analysis that shows how to fix that bech32 problem. It's unfortunate though that we're now becoming exposed by not doing this encumbrance.
As I said, the output is an x-coordinate directly. It's not a hash. This is perhaps the most controversial part, I expect, of the taproot proposal. There is a common thing that people say. They say, oh, bitcoin is quantum resistant because it hashes public keys. I think that statement is nonsense. There are several reasons why it's nonsense. First, it makes assumptions about how fast quantum computers are. Clearly when spending an output, you're revealing the public key. If within that time it can be attacked, then it can be attacked. Plus, there are several million bitcoin right now in outputs that have known public keys and can be spent with those known public keys. There's no real reason to assume that number will go down. The reason for that is that really, any interesting use of the bitcoin protocol involves revealing public keys. If you're using lightning, you're revealing public keys. If you're using multisig, you're revealing public keys to your cosigners. If you're using various kinds of lite clients, they are sending public keys to their servers. It's just an unreasonable assumption that.... simply said, we cannot treat public keys as secret.
@@ -138,7 +138,7 @@ sipa: It is at parse time. There's a preprocessing step where the script gets de
Of all the different upgrade mechanisms that are in this proposal, OP\_SUCCESS is the one I don't want to lose. The leaf versions can effectively be subsumed by OP\_SUCCESS: just start your script with an opcode like OP\_UPGRADE and now your script has completely new semantics. This is really powerful and should make it much easier to add new opcodes that do this or that.
-Another thing is upgradeable pubkey types. The idea is that if you have a public key that is passed to CHECKSIG, CHECKSIGVERIFY or CHECKSIGADD, that is not the usual 32 bytes (not 33 bytes anymore, it's 32 bytes because also there we're just using the x-coordinate). If it's not 32 then we treat that public key as an unknown public key type whose signature check will automatically succeed. This means that you can do things like introduce a new digital signature scheme without introducing new opcodes every time. Maybe more short-term, it means that it's also usable to introduce new signature hashing schemes where otherwise you would have to say oh I have slightly different sighash semantics like SIGHASH\_NOINPUT or <a href="https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016929.html">ANYPREVOUT</a> or whatever it's called these days. Introducing them would, every time you would need to have three new opcodes otherwise. Using upgradeable public key types, this problem goes away.
+Another thing is upgradeable pubkey types. The idea is that if a public key passed to CHECKSIG, CHECKSIGVERIFY or CHECKSIGADD is not the usual 32 bytes (not 33 bytes anymore, it's 32 bytes because there too we're just using the x-coordinate), then we treat that public key as an unknown public key type whose signature check will automatically succeed. This means that you can do things like introduce a new digital signature scheme without introducing new opcodes every time. Maybe more short-term, it means that it's also usable to introduce new signature hashing schemes, where otherwise you would have to say oh, I have slightly different sighash semantics like SIGHASH\_NOINPUT or <a href="https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016929.html">ANYPREVOUT</a> or whatever it's called these days. Introducing them would otherwise require three new opcodes each time. Using upgradeable public key types, this problem goes away.
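The upgradeable-pubkey-type rule described above fits in a couple of lines. This is a toy model of the dispatch, not consensus code: a 32-byte key gets a real check, and any other length succeeds unconditionally so a future soft fork can assign it meaning.

```python
# Toy model of upgradeable pubkey types in a CHECKSIG-style operation.

def checksig(pubkey: bytes, sig_ok_for_known_type: bool) -> bool:
    if len(pubkey) == 32:
        return sig_ok_for_known_type   # known x-only key: verify normally
    return True                        # unknown key type: auto-success

assert checksig(bytes(32), True)
assert not checksig(bytes(32), False)  # a known key type can still fail
assert checksig(bytes(33), False)      # unknown length: reserved, passes
```

Because the auto-success only triggers for key lengths nothing currently uses, tightening it later (giving those lengths real semantics) is a soft fork.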
Another is making "minimal IF" a consensus rule. "Minimal IF" is currently a standardness rule that has been in segwit forever and I haven't seen anyone complain about it. It says that the input to an OP\_IF or an OP\_NOTIF in the scripting language has to be exactly TRUE or FALSE and it cannot be any other number or bytearray. It's really hard to make non-malleable scripts without it. This is actually something that we stumbled upon when doing research on miniscript and tried to formalize what non-malleability in bitcoin script means, and we have to rely on "Minimal IF" and otherwise you get ridiculous scripts where you have two or three opcodes before every IF to guarantee they're right. So that's the justification, it's always been there, we're forced to rely on it, we better make it a consensus rule.
diff --git a/transcripts/sf-bitcoin-meetup/2020-11-30-socratic-seminar-20.mdwn b/transcripts/sf-bitcoin-meetup/2020-11-30-socratic-seminar-20.mdwn
index 81e30f5..a333654 100644
--- a/transcripts/sf-bitcoin-meetup/2020-11-30-socratic-seminar-20.mdwn
+++ b/transcripts/sf-bitcoin-meetup/2020-11-30-socratic-seminar-20.mdwn
@@ -252,7 +252,7 @@ XX06: Also, we can always start with the least aggressive and as we build raport
# Revisiting squaredness tiebreaker for R point in bip340
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-August/018081.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-August/018081.html>
XX01: Pieter asked about revisiting the squaredness tiebreaker for schnorr signatures. Do you want to give an overview of that discussion and the end result?
@@ -264,7 +264,7 @@ XX03: This isn't just public keys, but also the public nonce inside the signatur
# Bitcoin archaeology
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-November/018269.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-November/018269.html>
BB: (timestamp your old emails, archaeology....)
@@ -279,7 +279,7 @@ XX01: That's a pretty interesting project. I like that.
# bech32 updates
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-October/018236.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-October/018236.html>
XX01: Rusty brought up some ideas for how to deal with potential issues with bech32 encoding including the malleability issue discovered a year or two ago at this point with the checksum. Pieter or Rusty, do you want to give a high level overview of this discussion?
@@ -311,7 +311,7 @@ XX03: Another angle that might have caused the problem, mainly.... OP\_0 is diff
# Hold fees
-<https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002826.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002826.html>
XX01: A little bit of a lightning discussion. I thought this was interesting. This idea of trying to prevent spam on the lightning network, locking up HTLCs, pinging nodes too much, creating a-- requiring payments for actually even doing things in the lightning network. This email is basically positing a few ideas of how to prevent spam on a lightning network. Just a cool conversation.
diff --git a/transcripts/stanford-blockchain-conference/2019/htlcs-considered-harmful.mdwn b/transcripts/stanford-blockchain-conference/2019/htlcs-considered-harmful.mdwn
index f23a1ff..5f0489c 100644
--- a/transcripts/stanford-blockchain-conference/2019/htlcs-considered-harmful.mdwn
+++ b/transcripts/stanford-blockchain-conference/2019/htlcs-considered-harmful.mdwn
@@ -44,7 +44,7 @@ There's a lot of reasons. It's a free option, there's a griefing attack, and the
This problem has gone mainstream recently, which is great. The problem here is that Alice locks hers first and then Bob locks his. This matters when you're doing a multi-asset bet with HTLCs. Usually Alice goes to complete the payment immediately, but what if Alice just sits and watches the price and decides at the last minute to do it? Alice is getting a free option to execute the transaction. It could be, though, that the price moves and the trade is no longer economic. She could let the HTLC time out and cancel the trade. In the unlikely event that the litecoin market price rises, Alice can complete the transaction and get her new money. This is basically an American-style call option. This is worth a premium, but Alice isn't actually paying that premium in an HTLC. In fact, in lightning, if you fail an HTLC you don't actually pay any fee for it at all. So this is a vulnerability and can be attacked.
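Alice's position above is exactly a call-option payoff, which is easy to write down. The numbers are illustrative: she exercises (completes the HTLC) only if the price moved in her favour by expiry, and otherwise walks away at zero cost since failing the HTLC carries no fee.

```python
# The free option in a cross-asset HTLC, as an American-call payoff.

def alice_option_payoff(strike, spot_at_expiry):
    # Exercise value if positive; otherwise let the HTLC time out for free
    return max(spot_at_expiry - strike, 0)

assert alice_option_payoff(100, 120) == 20   # price rose: Alice completes
assert alice_option_payoff(100, 80) == 0     # price fell: timeout, no fee
```

A real option with this payoff commands a premium; the HTLC gives it to Alice for nothing, which is the vulnerability.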
-My good friend ZmnSCPxj made a good argument for a single-asset lightning network on lightning-dev a month ago. It's a good analyiss, I recommend looking at: <https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001752.html>
+My good friend ZmnSCPxj made a good argument for a single-asset lightning network on lightning-dev a month ago. It's a good analysis, I recommend looking at: <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001752.html>
# Griefing problem
diff --git a/transcripts/stephan-livera-podcast/2020-08-13-christian-decker-lightning-topics.mdwn b/transcripts/stephan-livera-podcast/2020-08-13-christian-decker-lightning-topics.mdwn
index 803d585..e6ff545 100644
--- a/transcripts/stephan-livera-podcast/2020-08-13-christian-decker-lightning-topics.mdwn
+++ b/transcripts/stephan-livera-podcast/2020-08-13-christian-decker-lightning-topics.mdwn
@@ -18,7 +18,7 @@ Stephan Livera (SL): Christian welcome back to the show.
Christian Decker (CD): Hey Stephan, thanks for having me.
-SL: I wanted to chat with you about a bunch of stuff that you’ve been doing. We’ve got a couple of things that I was really interested to chat with you about: ANYPREVOUT, MPP, Lightning attacks, the latest with Lightning Network. Let’s start with ANYPREVOUT. I see that yourself and AJ Towns just recently did an update and I think AJ Towns did an [email](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018038.html) to the mailing list saying “Here’s the update to ANYPREVOUT.” Do you want to give us a bit of background? What motivated this recent update?
+SL: I wanted to chat with you about a bunch of stuff that you’ve been doing. We’ve got a couple of things that I was really interested to chat with you about: ANYPREVOUT, MPP, Lightning attacks, the latest with Lightning Network. Let’s start with ANYPREVOUT. I see that yourself and AJ Towns just recently did an update and I think AJ Towns did an [email](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018038.html) to the mailing list saying “Here’s the update to ANYPREVOUT.” Do you want to give us a bit of background? What motivated this recent update?
CD: When I wrote up the NOINPUT BIP it was just a bare bones proposal that did not take Taproot into consideration at all simply because we didn’t know as much about Taproot as we do now. What I did for NOINPUT (BIP118) was to have a minimal working solution that we could use to implement eltoo on top and a number of other proposals. But we didn’t integrate it with Taproot simply because that wasn’t at a stage where we could use it as a solid foundation yet. Since then that has changed. AJ went ahead and did the dirty work of actually integrating the two proposals with each other. That’s where ANYPREVOUT and ANYPREVOUTANYSCRIPT, the two variants, came out. Now it’s very nicely integrated with the Taproot system. Once Taproot goes live we can deploy ANYPREVOUT directly without a lot of adaptation that has to happen. That’s definitely a good change. ANYPREVOUT supersedes the NOINPUT proposal which was a bit of a misnomer. Using ANYPREVOUT we get the effects that we want to have for eltoo and some other protocols and have them nicely integrated with Taproot. We can propose them once Taproot is merged.
@@ -48,11 +48,11 @@ CD: You can certainly try. But since we are still talking about 2-of-2 multisig
# RBF pinning
-Matt Corallo on RBF pinning: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017757.html
+Matt Corallo on RBF pinning: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017757.html
SL: From watching the Bitcoin dev mailing list, I saw some discussion around this idea of whether the Lightning node should also be looking into what’s going on in the mempool of Bitcoin versus only looking for transactions that actually get confirmed into the chain. Can you comment on how you’re thinking about the security model? As I understand, you’re thinking more that we’re just looking at what’s happening on the chain and the mempool watching is a nice to have.
-CD: With all of these protocols we can usually replay them only onchain and we don’t need to look at the mempool. That’s true for eltoo as it is true for Lightning penalty. Recently we had a lengthy discussion about an issue that is dubbed [RBF pinning](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017757.html) attack which makes this a bit harder. The attack is a bit involved but it basically boils down to the attacker placing a placeholder transaction in the mempool of the peers making sure that that transaction does not confirm. But being in the mempool that transaction can result in rejections for future transactions. That comes into play when we are talking about HTLCs which span multiple channels. We can have effects where the downstream channel is still locked because the attacker placed a placeholder transaction in the mempool. We are frantically trying to react to this HTLC being timed out but our transaction is not making it into the mempool because it is being rejected by this poison transaction there. If that happens on a single channel that’s ok because eventually we will be able to resolve that and a HTLC is not a huge amount usually. Where this becomes a problem is if we were forwarding that payment and we have a matching upstream HTLC that now also needs to timeout or have a success. That depends on the downstream HTLC which we don’t get to see. So it might happen that the upstream timeout gets timed out. Our upstream node told us “Here’s 1 dollar. I promised to give it to you if you can show me this hash preimage in a reasonable amount of time.” You turned around and forwarded that promise and said “Hey, your attacker, here’s 1 dollar. You can have it if you give me the secret in time.” The downstream attacker doesn’t tell you in time so you will be ok with the upstream one timing out. But it turns out the downstream one can succeed. So you’re out of pocket in the end of the forwarded amount. 
That is a really difficult problem to solve without looking at the mempool because the mempool is the only indication that this attack is going on and therefore that we should be more aggressive in reacting to this attack being performed. Most lightning nodes do not actually look at the mempool currently. There’s two proposals that we’re trying to do. One is to make the mempool logic a bit less unpredictable, namely that we can still make progress without reaction even though there is this poison transaction. That is something that we’re trying to get the Bitcoin Core developers interested in. On the other side we are looking into mechanisms to look at the mempool, see what is happening and then start alerting nodes that “Hey you might be under attack. Please take precautions and react accordingly.”
+CD: With all of these protocols we can usually replay them only onchain and we don’t need to look at the mempool. That’s true for eltoo as it is true for Lightning penalty. Recently we had a lengthy discussion about an issue that is dubbed [RBF pinning](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017757.html) attack which makes this a bit harder. The attack is a bit involved but it basically boils down to the attacker placing a placeholder transaction in the mempool of the peers making sure that that transaction does not confirm. But being in the mempool that transaction can result in rejections for future transactions. That comes into play when we are talking about HTLCs which span multiple channels. We can have effects where the downstream channel is still locked because the attacker placed a placeholder transaction in the mempool. We are frantically trying to react to this HTLC being timed out but our transaction is not making it into the mempool because it is being rejected by this poison transaction there. If that happens on a single channel that’s ok because eventually we will be able to resolve that and a HTLC is not a huge amount usually. Where this becomes a problem is if we were forwarding that payment and we have a matching upstream HTLC that now also needs to timeout or have a success. That depends on the downstream HTLC which we don’t get to see. So it might happen that the upstream timeout gets timed out. Our upstream node told us “Here’s 1 dollar. I promised to give it to you if you can show me this hash preimage in a reasonable amount of time.” You turned around and forwarded that promise and said “Hey, your attacker, here’s 1 dollar. You can have it if you give me the secret in time.” The downstream attacker doesn’t tell you in time so you will be ok with the upstream one timing out. But it turns out the downstream one can succeed. So you’re out of pocket in the end of the forwarded amount. 
That is a really difficult problem to solve without looking at the mempool because the mempool is the only indication that this attack is going on and therefore that we should be more aggressive in reacting to this attack being performed. Most lightning nodes do not actually look at the mempool currently. There’s two proposals that we’re trying to do. One is to make the mempool logic a bit less unpredictable, namely that we can still make progress without reaction even though there is this poison transaction. That is something that we’re trying to get the Bitcoin Core developers interested in. On the other side we are looking into mechanisms to look at the mempool, see what is happening and then start alerting nodes that “Hey you might be under attack. Please take precautions and react accordingly.”
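The pinning mechanic above can be made concrete with a toy model of BIP 125-style replacement checks. This is only a sketch under simplified assumptions (Bitcoin Core's real mempool logic has more rules and different data structures; the fee numbers and `MIN_RELAY_FEERATE` constant are illustrative): a large, low-feerate "poison" transaction is cheap per byte but forces any replacement to outbid its entire absolute fee, so the honest, time-sensitive HTLC spend gets rejected.

```python
# Toy model of two BIP 125-style replacement rules (a sketch, not
# Bitcoin Core's actual implementation; numbers are illustrative).

MIN_RELAY_FEERATE = 1.0  # sat/vbyte, assumed for illustration

def replacement_allowed(old, new):
    """old/new are dicts with 'fee' (sats) and 'size' (vbytes)."""
    old_feerate = old["fee"] / old["size"]
    new_feerate = new["fee"] / new["size"]
    # Rule: the replacement must pay a strictly higher feerate...
    if new_feerate <= old_feerate:
        return False
    # ...and its absolute fee must cover the replaced fee plus the
    # incremental relay cost of its own size.
    if new["fee"] < old["fee"] + MIN_RELAY_FEERATE * new["size"]:
        return False
    return True

# A huge low-feerate "pin": cheap per byte, expensive to evict outright.
pin = {"fee": 10_000, "size": 100_000}   # 0.1 sat/vb, 10k sats absolute
honest = {"fee": 2_000, "size": 200}     # 10 sat/vb HTLC-timeout spend

print(replacement_allowed(pin, honest))  # False: absolute fee too low
```

Even though the honest spend pays a 100x higher feerate, the absolute-fee rule lets the attacker's bulk block it; that is the asymmetry the discussion above is about.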
# SIGHASH flags
@@ -220,7 +220,7 @@ CD: Absolutely. It is one of my pet peeves that I have with the Bitcoin communit
Bastien Teinturier at Lightning Conference: https://diyhpl.us/wiki/transcripts/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing/
-Bastien Teinturier on the Lightning dev mailing list: https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-August/002100.html
+Bastien Teinturier on the Lightning dev mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-August/002100.html
SL: That’s a very good comment there. I wanted to talk about trampoline routing. You mentioned this earlier as well. I know the ACINQ guys are keen on this idea though I know that there has also been some discussion on GitHub from some other Lightning developers who said “I see a privacy issue there because there might not be enough people who run trampoline routers and therefore there’s a privacy concern there. All those mobile users will be doxing their privacy to these trampoline routers.” Do you have any thoughts on that or where are you placed on that idea?
diff --git a/transcripts/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation.mdwn b/transcripts/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation.mdwn
index 1d678a3..ffb14da 100644
--- a/transcripts/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation.mdwn
+++ b/transcripts/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation.mdwn
@@ -8,11 +8,11 @@ Date: March 17th 2021
Audio: https://stephanlivera.com/episode/260/
-Luke Dashjr arguments against LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html
+Luke Dashjr arguments against LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html
-T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html
+T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html
-F7 argument for LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html
+F7 argument for LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html
Transcript by: Stephan Livera Edited by: Michael Folkson
@@ -108,9 +108,9 @@ LD: I’m trying to think. I’m not sure that there was too much else.
SL: So let’s move on to Taproot now. We’ve got this new soft fork that most people want. There’s been no serious sustained objections to it. Can you spell out your thoughts on how Taproot activation has gone so far?
-LD: We had three, maybe four [meetings](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-January/018370.html) a month or two ago. Turnout wasn’t that great, only a hundred people or so showed up for them. At the end of the day we came to consensus on pretty much everything except for the one lockinontimeout (LOT) [parameter](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018425.html). Since then a bunch of people have started throwing out completely new ideas. It is great to discuss them but I think they should be saved for the next soft fork. We’ve already got near consensus on Taproot activation, might as well just go forward with that. There’s not consensus on lockinontimeout but there’s enough community support to enforce it. I think we should just move forward with that how it is and we can do something different next time if there’s a better idea that comes around. Right now that is the least risky option on the table.
+LD: We had three, maybe four [meetings](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-January/018370.html) a month or two ago. Turnout wasn’t that great, only a hundred people or so showed up for them. At the end of the day we came to consensus on pretty much everything except for the one lockinontimeout (LOT) [parameter](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018425.html). Since then a bunch of people have started throwing out completely new ideas. It is great to discuss them but I think they should be saved for the next soft fork. We’ve already got near consensus on Taproot activation, might as well just go forward with that. There’s not consensus on lockinontimeout but there’s enough community support to enforce it. I think we should just move forward with that how it is and we can do something different next time if there’s a better idea that comes around. Right now that is the least risky option on the table.
-SL: With lockinontimeout there’s been a lot of discussion back and forth about true or false. And other ideas proposed such as just straight flag day activation or this other idea of [Speedy Trial](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html). Could you outline some of the differences between those different approaches?
+SL: With lockinontimeout there’s been a lot of discussion back and forth about true or false. And other ideas proposed such as just straight flag day activation or this other idea of [Speedy Trial](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html). Could you outline some of the differences between those different approaches?
LD: The lockinontimeout=true is essentially what we ended up having to do with SegWit. It gives a full year to the miners so they can collaborate cooperatively and protect the network while it’s being activated early. If the miners don’t do that for whatever reason, at the end it activates. If we were to set lockinontimeout=false we essentially undo that bug fix and give miners control again. It would be like reintroducing the inflation bug that was fixed not so long ago. It doesn’t really make sense to do that. At the end of the day it is a lot less secure. You don’t really want to be running it as an economic actor so you would logically want to run lockinontimeout=true. Therefore a lot of economic actors are likely to run it true. In most of the polls I’ve seen most of the community seems to want true. As far as a flag day, that’s essentially the same thing as lockinontimeout=true except that it doesn’t have the ability for miners to activate it early. So we’d have to wait the whole 18 months for it to activate and it doesn’t have any signaling. At the end of the day we don’t really know if it activated or if the miners are just not mining stuff that violates Taproot which is the difference of whether it is centralized or decentralized verification. It is economic majority still, that will matter for the enforcement, but you want to be able to say “This chain has Taproot activated.” You don’t want it to be an opinion. I say Taproot is activated, you say it isn’t. Who’s to say which one of us is right? Without a signal on the chain we’re both in a limbo where we’re saying different things about the same chain and there’s no clear objective answer to the question: is Taproot activated?
@@ -150,15 +150,15 @@ LD: You would hope so. But regardless, what we do with legitimate soft forks has
# Bitcoin Core releasing LOT=false and UASF releasing LOT=true
-Luke Dashjr on why LOT=false shouldn’t be used: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html
+Luke Dashjr on why LOT=false shouldn’t be used: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html
-T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html
+T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html
-F7 argument for LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html
+F7 argument for LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html
SL: The other argument I have heard is if Bitcoin Core were to release a client with LOT=false and another contingent of developers and users who want to go out and do similar to the UASF and release an alternate client with LOT=true. The average user can’t review all the Bitcoin code and they would now have to decide whether they want to run this alternate client that does include LOT=true. So what are your thoughts on that aspect?
-LD: That’s no riskier than running the one with LOT=false. LOT=false for other reasons doesn’t come to a coherent view of consensus. It will not be very useful to people who are on the LOT=false client. For that reason, I think Core releasing LOT=false would actually be an abdication of duties towards users. Obviously Bitcoin Core, there’s this expectation that it’s going to follow what the users want and be safe to use. LOT=false is simply [not safe](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html) to use.
+LD: That’s no riskier than running the one with LOT=false. LOT=false for other reasons doesn’t come to a coherent view of consensus. It will not be very useful to people who are on the LOT=false client. For that reason, I think Core releasing LOT=false would actually be an abdication of duties towards users. Obviously Bitcoin Core, there’s this expectation that it’s going to follow what the users want and be safe to use. LOT=false is simply [not safe](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html) to use.
# Dealing with unlikely chain splits
@@ -182,7 +182,7 @@ LD: After the full year you’re no longer relying on that assumption. The miner
# Speedy Trial
-Speedy Trial proposal: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html
+Speedy Trial proposal: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html
Speedy Trial support: https://gist.github.com/michaelfolkson/92899f27f1ab30aa2ebee82314f8fe7f
diff --git a/transcripts/sydney-bitcoin-meetup/2020-05-19-socratic-seminar.mdwn b/transcripts/sydney-bitcoin-meetup/2020-05-19-socratic-seminar.mdwn
index 24df4c5..c47d7d2 100644
--- a/transcripts/sydney-bitcoin-meetup/2020-05-19-socratic-seminar.mdwn
+++ b/transcripts/sydney-bitcoin-meetup/2020-05-19-socratic-seminar.mdwn
@@ -64,7 +64,7 @@ This rule applies to every block apart from these two blocks.
# PTLCs (lightning-dev mailing list)
-https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002647.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002647.html
This is the mailing list post from Nadav Kohen summarizing a lot of the different ideas about PTLCs.
@@ -76,25 +76,25 @@ Jonas, Nadav and the rest of the team worked on doing a PTLC. The latest one, th
There was a question on IRC yesterday about a specific mailing list post on one party ECDSA. Are there different schemes on how to do adaptor like signatures with one party ECDSA?
-The short story is that Andrew Poelstra invented the adaptor signature. Then Pedro Moreno-Sanchez invented the ECDSA one. When he explained it he only explained it with a two party protocol. What I did is say we can simplify this to a single signer one. This doesn’t give you all the benefits but it makes far simpler and much more practical for use in Bitcoin today. That was that mailing list [post](https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-November/002316.html). It seems solid to me. There might be some minor things that might change but from a security perspective it passes all the things that I would look for. I implemented it myself and I made several mistakes. When I looked at whether Jonas Nick had made the same mistakes, he didn’t. He managed to do all that in a hackathon, very impressive.
+The short story is that Andrew Poelstra invented the adaptor signature. Then Pedro Moreno-Sanchez invented the ECDSA one. When he explained it he only explained it with a two party protocol. What I did is say we can simplify this to a single signer one. This doesn’t give you all the benefits but it makes far simpler and much more practical for use in Bitcoin today. That was that mailing list [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-November/002316.html). It seems solid to me. There might be some minor things that might change but from a security perspective it passes all the things that I would look for. I implemented it myself and I made several mistakes. When I looked at whether Jonas Nick had made the same mistakes, he didn’t. He managed to do all that in a hackathon, very impressive.
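The core adaptor-signature idea is easiest to see in the Schnorr setting rather than the single-signer ECDSA construction from the post (this is a minimal sketch with toy scalars; the helper names and hash encoding are ours, not any library's API): a pre-signature verifies against a nonce offset by an adaptor point T, completing it requires the adaptor secret t, and seeing both the pre-signature and the completed signature reveals t.

```python
import hashlib

# secp256k1 parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(p, q):
    """Elliptic curve point addition (None is the identity)."""
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0: return None
    if p == q:
        lam = 3 * p[0] * p[0] * pow(2 * p[1], -1, P) % P
    else:
        lam = (q[1] - p[1]) * pow(q[0] - p[0], -1, P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def mul(k, p=G):
    """Scalar multiplication by double-and-add."""
    r = None
    while k:
        if k & 1: r = add(r, p)
        p = add(p, p); k >>= 1
    return r

def h(*parts):
    """Toy challenge hash (a real scheme uses a fixed encoding)."""
    return int.from_bytes(hashlib.sha256(
        b"".join(str(x).encode() for x in parts)).digest(), "big") % N

x, r, t = 1234567, 7654321, 555555       # toy secrets: key, nonce, adaptor
X, R, T = mul(x), mul(r), mul(t)
m = "swap tx"
e = h(add(R, T), X, m)                   # challenge commits to R + T
s_pre = (r + e * x) % N                  # pre-signature (incomplete)
s = (s_pre + t) % N                      # completed with adaptor secret

# (R+T, s) now verifies as an ordinary Schnorr signature...
assert mul(s) == add(add(R, T), mul(e, X))
# ...and anyone holding both s and s_pre learns the adaptor secret:
assert (s - s_pre) % N == t
print("adaptor secret revealed:", (s - s_pre) % N)
```

The extraction step is what makes swaps and PTLCs work: publishing the completed signature on chain is what hands the counterparty the secret they were promised.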
# On the scalability issues of onboarding millions of LN clients (lightning-dev mailing list)
-https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002678.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002678.html
Antoine Riard was asking the question of scalability, what happens with BIP 157 and the incentive for people to run a full node versus SPV or miner consensus takeover.
-I think the best summary of my thoughts on it in much better words than I could is John Newbery’s [answer](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017823.html). These are valid concerns that Antoine is raising. On the other hand they are not specific to BIP 157. They are general concerns for light clients. BIP 157 is a better light client than we have before with bloom filters. As long as we accept that we will have light clients we should do BIP 157.
+I think the best summary of my thoughts on it in much better words than I could is John Newbery’s [answer](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017823.html). These are valid concerns that Antoine is raising. On the other hand they are not specific to BIP 157. They are general concerns for light clients. BIP 157 is a better light client than we had before with bloom filters. As long as we accept that we will have light clients we should do BIP 157.
This conversation happened on both the bitcoin-dev mailing list and lightning-dev mailing list.
This sums up my feelings on it. There were a lot of messages but I would definitely recommend reading John’s one.
-The Luke Dashjr [post](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017820.html) was referring back to some of Nicolas Dorier’s [arguments](https://medium.com/@nicolasdorier/neutrino-is-dangerous-for-my-self-sovereignty-18fac5bcdc25). He would rather people use an explorer wallet in his parlance, Samourai wallet or whatever, calling back to the provider as opposed to a SPV wallet. John’s comment addresses that at least somewhat. It is not specific to BIP 157.
+The Luke Dashjr [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017820.html) was referring back to some of Nicolas Dorier’s [arguments](https://medium.com/@nicolasdorier/neutrino-is-dangerous-for-my-self-sovereignty-18fac5bcdc25). He would rather people use an explorer wallet in his parlance, Samourai wallet or whatever, calling back to the provider as opposed to a SPV wallet. John’s comment addresses that at least somewhat. It is not specific to BIP 157.
The argument is really about do we want to make light clients not too good in order to disincentivize people to use them or do we not want to do that? It also comes down to how we believe the future will play out. That’s why I would recommend people to read Nicolas Dorier’s article and look at the arguments. It is important that we still think about these things. Maybe I am on the side of being a little more optimistic about the future. That more people will run their own full nodes still, or enough people. I think that is also because everyone who values their privacy will want to run a full node still. It is not like people will forget about that because there is a good light client. I honestly believe that we will have more full nodes. It is not like full nodes will stay at the same number because we have really good light clients.
-To reflect some of the views I saw elsewhere in the thread. [Richard Myers](https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002702.html) is working on mesh networking and people trying to operate using Bitcoin who can’t run their own full node. If they are in a more developing world country and don’t have the same level of internet or accessibility of a home node they can call back to. Even there there are people working on different ideas. One guy in Africa just posted about how they are trying to use Raspiblitz as a set up with solar panels. That is a pretty cool idea as well.
+To reflect some of the views I saw elsewhere in the thread. [Richard Myers](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002702.html) is working on mesh networking and people trying to operate using Bitcoin who can’t run their own full node. If they are in a more developing world country and don’t have the same level of internet or accessibility of a home node they can call back to. Even there there are people working on different ideas. One guy in Africa just posted about how they are trying to use Raspiblitz as a set up with solar panels. That is a pretty cool idea as well.
This is the PGP model. When PGP first came out it was deliberately hard to use so you would have to read documentation so you wouldn’t use it insecurely. That was a terrible idea. It turns out that users who are not prepared to invest time, you scare off all these users. The idea that we should make it hard for them so they do the right thing, if you make it hard for them they will go and do something else. There is a whole graduation here. At the moment a full node is pretty heavy but this doesn’t necessarily have to be so. A full node is something that builds up its own its own UTXO set. At some stage it does need to download all the blocks. It doesn’t need to do so to start with. You could have a node that over time builds it up and becomes a full node for example, gradually catches up. Things like Blockstream’s satellite potentially help with that. Adding more points on the spectrum is helpful. This idea that if they are not doing things we like we should serve them badly hasn’t worked already. People using Electrum random peers and using them instead. If we’ve got a better way of doing it we should do it. I’ve always felt we should channel the Bitcoin block headers at least over the Lightning Network. The more sources of block headers we have the less chance you’ve got of being eclipsed and sybil attacked.
@@ -102,7 +102,7 @@ I recall some people were talking about the idea of fork detection. Maybe you mi
There are certainly more things we can do. The truth is that people are running light clients today. I don’t think that is going to stop. If you really want to stop them work really hard on making a better full client than we have today.
-I also liked Laolu’s [post](https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002685.html) too. You can leverage existing infrastructure to serve blocks. There was a discussion around how important is economic weight in the Bitcoin network and how easy would it be to try to takeover consensus and change to their fork of 22 million Bitcoin or whatever.
+I also liked Laolu’s [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002685.html). You can leverage existing infrastructure to serve blocks. There was a discussion around how important economic weight is in the Bitcoin network and how easy it would be to try to take over consensus and change to their fork of 22 million Bitcoin or whatever.
This was what happened with SegWit2x, aggressively going after the SPV wallets. They were like “We are going to carry all the SPV wallets with us”. It is a legitimate threat. There is an argument that if you have significant economic weight you should be running a full node, that is definitely true. But everyone with their little mobile phone with ten dollars on it is maybe not so important.
@@ -272,7 +272,7 @@ In my recent [episode](https://stephanlivera.com/episode/168/) with Lisa (Neigut
We would love to get out of the wallet game. PSBT to some extent lets us do that. PSBT gives us an interoperable layer so we can get rid of our internal wallet altogether and you can use whatever wallet you want. I think that is where we are headed. The main advantage of having normal wallets understand a little bit of Lightning is that you could theoretically import your Lightning seed and get them to scrape and dredge up your funds which would be cool. Then they need to understand some more templated script outputs. We haven’t seen demand for that yet. It is certainly something that we have talked about in the past. A salvage operation. If you constrain the CSV delay for example it gets a lot easier. Unfortunately for many of the outputs that you care about you can’t generate all from a seed, some of the information has to come from the peer. Unless you talk with the peer again you don’t know enough to even figure out what the script actually is. Some of them you can get, some of them are more difficult.
-I read Lisa’s initial mailing list [post](https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001532.html). She wrote down it works. This is a protocol design question. Why is it add input, add output and not just one message which adds a bunch of inputs and outputs and then the other guy sends you a message with his inputs and outputs and you go from there?
+I read Lisa’s initial mailing list [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001532.html). She wrote down how it works. This is a protocol design question. Why is it add input, add output and not just one message which adds a bunch of inputs and outputs and then the other guy sends you a message with his inputs and outputs and you go from there?
It has to be stateful anyway because you have to have multiple rounds. If you want to let them negotiate with multiple parties they are not going to know everything upfront. I send to Alice and Bob “Here’s things I want to add.” Then Alice sends me stuff. I have to mirror that and send it to Bob. We have to have multiple rounds. If we are going to have multiple rounds keep the protocol as simple as possible. Add, remove, delete. We don’t care about byte efficiency, there is not much efficiency to be gained. A simple protocol to just have “Add input, add input, add input” rather than “Add inputs” with a number n. It simplifies the protocol.
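The "one item per message, mirrored to every peer" design described above can be sketched as follows (a toy model with hypothetical message names; the actual dual-funding protocol defines its own wire messages and serial-ID rules): because every participant applies the same stream of add/remove messages, all of them converge on the same transaction after multiple rounds, and the per-item granularity keeps the state machine trivial.

```python
# Sketch of a one-item-per-message interactive transaction builder
# (message names and fields are illustrative, not the real wire format).

class TxBuilder:
    def __init__(self):
        self.inputs, self.outputs = {}, {}

    def apply(self, msg):
        kind, serial, data = msg
        if kind == "add_input":
            self.inputs[serial] = data
        elif kind == "add_output":
            self.outputs[serial] = data
        elif kind == "remove_input":
            self.inputs.pop(serial, None)
        elif kind == "remove_output":
            self.outputs.pop(serial, None)
        else:
            raise ValueError(kind)

# Each side applies every message, its own and the mirrored ones, so the
# participants converge on the same transaction over multiple rounds.
alice, bob = TxBuilder(), TxBuilder()
msgs = [("add_input", 0, "alice utxo"),
        ("add_output", 1, "alice change"),
        ("add_input", 2, "bob utxo"),
        ("remove_output", 1, None),      # revising mid-negotiation is easy
        ("add_output", 3, "funding output")]
for m in msgs:
    alice.apply(m)
    bob.apply(m)
assert alice.inputs == bob.inputs and alice.outputs == bob.outputs
print(sorted(alice.outputs.values()))  # ['funding output']
```

A batched "add_inputs(n)" message would save a few bytes but would need its own length handling and partial-failure semantics, which is exactly the complexity the simple protocol avoids.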
@@ -296,7 +296,7 @@ https://twitter.com/LukeDashjr/status/1260598347322265600?s=20
This one was on soft fork deployment.
-We discussed this last month. AJ and Matt Corallo put up mailing list [posts](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html) on soft fork activation. We knew that Luke Dashjr and others have strong views. This was asking what his view is. I think we are going to need a new BIP which will be BIP 8 updated to be BIP 9 and BIP 148 but I haven’t looked into how that compares to what Matt and AJ were discussing on the mailing list.
+We discussed this last month. AJ and Matt Corallo put up mailing list [posts](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html) on soft fork activation. We knew that Luke Dashjr and others have strong views. This was asking what his view is. I think we are going to need a new BIP which will be BIP 8 updated to be BIP 9 and BIP 148 but I haven’t looked into how that compares to what Matt and AJ were discussing on the mailing list.
What was discussed on the mailing list at the start of the year was as Luke says was BIP 9 plus BIP 149. BIP 149 was going to be the SegWit activation which expires in November, we’ll give it a couple of months and then we’ll do a new activation in January 2018 I guess that will last a year. There will be signaling but at the end of that year whether the signaling works or not it will just activate. It divides it up into two periods. You’ve got the first period which is regular BIP 9 and the second period is essentially the same thing again but at the end it succeeds no matter what. The difference between that and BIP 148 is what miners have to do. What we actually had happen was before the 12 months of SegWit’s initial signaling ended we said “For this particular period starting around August miners have to signal for SegWit. If they don’t this hopefully economic majority of full nodes will drop their blocks so that they will be losing money.” Because they are all signaling for it and we’ve got this long timeframe it will activate no matter what. It will all be great. The difference is mostly BIP 148 got it working in a shorter timeframe because we didn’t need to have to start it again in January but it also forced the miners to have this panic “We might be losing money. How do we coordinate?” That was BIP 91 that allowed them to coordinate. The difference is that BIP 148 we know has worked at least once. BIP 149 is similar to how stuff was activated in the early days like P2SH. But we don’t know 100 percent it will work but it is less risk for miners at least.
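The difference between the two flavours, signaling that can fail at timeout versus signaling that activates no matter what, can be sketched as a tiny state machine (a simplification under assumed parameters: real BIP 8/BIP 9 deployments work per 2016-block retarget period with height or time bounds and a 90-95% threshold; this toy just takes a signalling share per period):

```python
# Toy soft-fork activation state machine (illustrative only; real BIP 8 /
# BIP 9 have more states, e.g. MUST_SIGNAL, and per-block accounting).

THRESHOLD = 0.90  # assumed signalling threshold

def activation_state(signal_shares, lockinontimeout):
    """signal_shares: fraction of signalling blocks in each period."""
    state = "STARTED"
    for i, share in enumerate(signal_shares):
        last = (i == len(signal_shares) - 1)
        if state == "STARTED":
            if share >= THRESHOLD:
                state = "LOCKED_IN"       # miners activated it early
            elif last:
                # At timeout, lot=true guarantees activation anyway;
                # lot=false lets the proposal die instead.
                state = "LOCKED_IN" if lockinontimeout else "FAILED"
        elif state == "LOCKED_IN":
            state = "ACTIVE"
    return "ACTIVE" if state == "LOCKED_IN" else state

periods = [0.2, 0.4, 0.6, 0.5]            # miners never reach threshold
print(activation_state(periods, True))    # lot=true: activates at timeout
print(activation_state(periods, False))   # lot=false: proposal fails
```

With early signalling (say a 0.95 share in the first period) both settings lock in and activate identically; the parameter only matters in the never-signalled case, which is the whole LOT debate.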
diff --git a/transcripts/sydney-bitcoin-meetup/2020-06-23-socratic-seminar.mdwn b/transcripts/sydney-bitcoin-meetup/2020-06-23-socratic-seminar.mdwn
index d10233a..97e06dc 100644
--- a/transcripts/sydney-bitcoin-meetup/2020-06-23-socratic-seminar.mdwn
+++ b/transcripts/sydney-bitcoin-meetup/2020-06-23-socratic-seminar.mdwn
@@ -26,7 +26,7 @@ I am going to describe what the setup is. Later on I will have a more complex sl
The next step is going to be doing the actual swap. What we want here is the opposite to happen. Instead of AS being revealed and Alice getting her money back we want Bob to get the money and BS to get revealed to Alice. How do we do that? We create another transaction to Bob and that transaction, if Bob sends it to the blockchain, will reveal BS. Because this one has no timelocks it happens before everything else we have created thus far. This enables the swap. Now if this transaction goes to the blockchain, BS is revealed, and Alice, already knowing Alice’s secret and now also knowing Bob’s secret, gains control over the transaction on the Litecoin side. Bob, if he has sent this to the blockchain, now has the money on the Bitcoin side. This is a three transaction protocol. The main point here is that this is already better than the original atomic swap. With the original atomic swap you would have 4 transactions. Here on the bottom side, nothing happens. It is just one single transaction and the key is split into two parts. Either Bob gets it or Alice gets it because they give one of the parts to each other. The next step would be how to turn this into a 2 transaction variation. What we do is we don’t broadcast this transaction at the top. We give it to Bob but Bob doesn’t broadcast it and instead Bob just gives Bob’s secret to Alice. Bob knows he could broadcast it and get the money, he is guaranteed to receive it, but he doesn’t do so and he just gives the secret to Alice. Alice now has control over the Litecoin transaction and Bob, if he sends that transaction to the blockchain, would also get control over the Bitcoin transaction. However instead of doing that Alice gives Alice’s key to Bob. Now the swap is basically complete in two transactions. They both learn the secret of the other person on the other side of the chain. This would literally be a 2 transaction swap. But there is still this little issue of this transaction existing where Alice gets her money back.
This timelocked transaction still exists. What this means is that even though the protocol has completed in 2 transactions there is still a need for Bob to watch the blockchain to see if Alice tries to get her money back. The way the timelocks are set up, particularly with this relative timelock at the end there, what this means is that Alice can send this middle transaction A+B (1 day lock) to the blockchain but in doing so Bob will notice that the transaction goes to the blockchain. Since Bob has Alice’s key he can spend it before the third transaction becomes valid. Bob will always be able to react just like a Lightning channel where one party tries to close the channel in a way that is not supposed to happen. The other party has to respond to it. It requires Bob to be online in order for this to be a true 2 transaction setup.
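Bob's online requirement can be sketched as a tiny model (assuming a 1-day relative timelock of roughly 144 blocks; the function and variable names are made up for illustration):

```python
# Toy model: after the cooperative swap Bob holds Alice's key. If Alice
# broadcasts the 1-day-locked middle transaction anyway, Bob can sweep it
# at any point before the relative timelock on Alice's final refund
# transaction expires, so he only loses if he stays offline too long.

RELATIVE_TIMELOCK = 144  # ~1 day in blocks (assumption)

def resolve(middle_tx_height, bob_online_height):
    """Who ends up with the funds once Alice broadcasts the middle tx."""
    deadline = middle_tx_height + RELATIVE_TIMELOCK
    return "bob_sweeps" if bob_online_height < deadline else "alice_refunds"
```

This mirrors the Lightning analogy in the text: an unexpected broadcast starts a clock, and the other party just has to respond before it runs out.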
-To go through the negatives, there is an online requirement for Bob, not for Alice. As soon as the swap is complete Alice is done but Bob has to pay attention. If he doesn’t, Alice could try something funny. There is an introduction of state. We’ve got these secrets that are being swapped. If you forget the secret or if you lose the information that was given to you, you will no longer have access to your money. That is different to how a regular Bitcoin wallet works where you have a seed, you back it up and when you lose your phone or something you get your backup. That is not the case here. Every time you do a swap you have some extra secrets and you have to back up those secrets. The positives are that this all works today. You can do this with ECDSA. Lloyd’s work, I am going to call this 1P-ECDSA but I’m not sure if Lloyd agrees. You can do a very simple adaptor signature if you have a literal single key that is owned by a single owner. Using that we can do adaptor signatures on ECDSA today. It only requires 2 transactions as you already know. It is scriptless or at least it is mainly scriptless. There is one way to add another transaction to it that isn’t completely scriptless in order to make it a little bit cheaper to cover the worst case scenario where Alice locks up her money and Bob does nothing. If you want it to be completely scriptless it can be. It is asymmetric. That has some advantages. You could think of this as one blockchain doing all the work and the other blockchain literally doing nothing. This allows for two things. The first thing is that you can pick your chain. This is the cheaper chain, maybe some altcoin chain, and this is the more expensive chain like the Bitcoin chain. On the Bitcoin chain you do the simple version of the swap. On the altcoin chain you do the complex version. If something goes wrong then the cheaper blockchain will bear the brunt. It is without any timelocks.
That is really nice because that means that there is no timelock requirement on one of these two blockchains. I think Monero is one example that doesn’t have timelocks. It is very compatible with even non-blockchain protocols where you have some kind of key that has ownership or you can transfer ownership. This can be very useful for [Payswap](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017595.html) or other privacy protocols and people are talking about these on the mailing list today. How to do atomic swaps for privacy. Hopefully Succinct Atomic Swaps can be part of that. A way to do it more efficiently. One thing I still need to point out. We can make use of MuSig on ECDSA on the Litecoin side of things. The problem with ECDSA is that you can’t do multisig signing, at least not without 2P-ECDSA, which is a much more complex protocol than signing with a single key. In this case we are not really signing anything. We just have this one transaction on the blockchain. There are two secrets. Part of one of these secrets gets revealed to the other party but at no time do two people need to collaborate to sign something. There are no transactions that are spending it. We can use MuSig here, replace this key with M in this case and have it be the MuSig of AS and BS. On the blockchain this bottom transaction looks like a very basic transaction, like if you were making a payment from a single key. It would be indistinguishable without any advanced 2P-ECDSA or Schnorr. We can do this today.
+To go through the negatives, there is an online requirement for Bob, not for Alice. As soon as the swap is complete Alice is done but Bob has to pay attention. If he doesn’t, Alice could try something funny. There is an introduction of state. We’ve got these secrets that are being swapped. If you forget the secret or if you lose the information that was given to you, you will no longer have access to your money. That is different to how a regular Bitcoin wallet works where you have a seed, you back it up and when you lose your phone or something you get your backup. That is not the case here. Every time you do a swap you have some extra secrets and you have to back up those secrets. The positives are that this all works today. You can do this with ECDSA. Lloyd’s work, I am going to call this 1P-ECDSA but I’m not sure if Lloyd agrees. You can do a very simple adaptor signature if you have a literal single key that is owned by a single owner. Using that we can do adaptor signatures on ECDSA today. It only requires 2 transactions as you already know. It is scriptless or at least it is mainly scriptless. There is one way to add another transaction to it that isn’t completely scriptless in order to make it a little bit cheaper to cover the worst case scenario where Alice locks up her money and Bob does nothing. If you want it to be completely scriptless it can be. It is asymmetric. That has some advantages. You could think of this as one blockchain doing all the work and the other blockchain literally doing nothing. This allows for two things. The first thing is that you can pick your chain. This is the cheaper chain, maybe some altcoin chain, and this is the more expensive chain like the Bitcoin chain. On the Bitcoin chain you do the simple version of the swap. On the altcoin chain you do the complex version. If something goes wrong then the cheaper blockchain will bear the brunt. It is without any timelocks.
That is really nice because that means that there is no timelock requirement on one of these two blockchains. I think Monero is one example that doesn’t have timelocks. It is very compatible with even non-blockchain protocols where you have some kind of key that has ownership or you can transfer ownership. This can be very useful for [Payswap](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017595.html) or other privacy protocols and people are talking about these on the mailing list today. How to do atomic swaps for privacy. Hopefully Succinct Atomic Swaps can be part of that. A way to do it more efficiently. One thing I still need to point out. We can make use of MuSig on ECDSA on the Litecoin side of things. The problem with ECDSA is that you can’t do multisig signing, at least not without 2P-ECDSA, which is a much more complex protocol than signing with a single key. In this case we are not really signing anything. We just have this one transaction on the blockchain. There are two secrets. Part of one of these secrets gets revealed to the other party but at no time do two people need to collaborate to sign something. There are no transactions that are spending it. We can use MuSig here, replace this key with M in this case and have it be the MuSig of AS and BS. On the blockchain this bottom transaction looks like a very basic transaction, like if you were making a payment from a single key. It would be indistinguishable without any advanced 2P-ECDSA or Schnorr. We can do this today.
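The shape of the aggregated key M can be sketched as follows. This is emphatically not real MuSig: real MuSig aggregates secp256k1 points, while here plain integers modulo a toy prime stand in for points, purely to show that M is a hash-weighted combination of AS and BS and looks on-chain like one ordinary key:

```python
import hashlib

# Shape-only illustration of MuSig-style key aggregation. The modulus and
# the arithmetic are stand-ins (assumptions), not secp256k1.

P = 2**255 - 19  # toy modulus, NOT secp256k1's group order

def h(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def aggregate(key_as, key_bs):
    L = h(key_as, key_bs)  # commitment to the full key set
    return (h(L, key_as) * key_as + h(L, key_bs) * key_bs) % P
```

The per-key hash weights are what prevent one party from choosing a rogue key that cancels out the other's contribution.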
What exactly are the requirements for the alternative coin? It doesn’t need to have timelocks, does it need Script?
@@ -78,7 +78,7 @@ We had Bitcoin Optech 98 but one of the key items on there was Succinct Atomic S
# CoinSwap
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017898.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017898.html
It is looking at how we can use atomic swaps in such a way that we increase privacy. It goes through different setups you can create. One of them would be on one side you give somebody 1 Bitcoin and on the other side you give somebody 3 UTXOs, 0.1, 0.2 and 0.7. You can do these fan out swaps and you can make swaps depend on other swaps. You can do swaps in a circle where Alice gets Bob’s money, Bob gets Carol’s money, Carol gets Alice’s money and things like that.
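The fan-out case above reduces to a simple amount check, sketched here in satoshis with fees ignored (the function name is made up):

```python
# One side offers a single UTXO, the other side several UTXOs whose
# amounts must sum to the same total (fees ignored for simplicity).

def valid_fan_out(single_sats, chunk_sats):
    return sum(chunk_sats) == single_sats and all(c > 0 for c in chunk_sats)
```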
@@ -100,7 +100,7 @@ Do they all have to be the same amount like Coinjoin?
I think it can be split up. You could have the same amount but from different chunks.
-That is right. You can split it up. You can do [Payswap](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017595.html) where you are swapping and paying at the same time. Your change output is what is getting swapped back to you. Now the amounts are not exactly the same anymore. Those are all methods to make it a little less obvious. If you literally swap 1 Bitcoin for somebody else’s Bitcoin, first you would have to wonder if a swap is happening or not. Is someone just paying 1 Bitcoin to somebody? The second thing would be who could this person have been swapping with? That depends on how long the timelock is and how many 1 Bitcoin transactions have occurred. Every 1 Bitcoin transaction that has occurred in that period is the anonymity set, as opposed to Coinjoin where your anonymity set is literally just everybody within that specific transaction. That has some potential to have a larger anonymity set. The other thing that is interesting is that the anonymity set is not limited to Bitcoin itself. If you do a cross chain swap it means that every transaction on every blockchain could’ve been the potential source of your swap. If a lot of people do these cross chain swaps that makes the anonymity set even larger and makes it more complex to figure out.
+That is right. You can split it up. You can do [Payswap](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017595.html) where you are swapping and paying at the same time. Your change output is what is getting swapped back to you. Now the amounts are not exactly the same anymore. Those are all methods to make it a little less obvious. If you literally swap 1 Bitcoin for somebody else’s Bitcoin, first you would have to wonder if a swap is happening or not. Is someone just paying 1 Bitcoin to somebody? The second thing would be who could this person have been swapping with? That depends on how long the timelock is and how many 1 Bitcoin transactions have occurred. Every 1 Bitcoin transaction that has occurred in that period is the anonymity set, as opposed to Coinjoin where your anonymity set is literally just everybody within that specific transaction. That has some potential to have a larger anonymity set. The other thing that is interesting is that the anonymity set is not limited to Bitcoin itself. If you do a cross chain swap it means that every transaction on every blockchain could’ve been the potential source of your swap. If a lot of people do these cross chain swaps that makes the anonymity set even larger and makes it more complex to figure out.
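The anonymity-set reasoning above can be sketched as a filter: the candidate set is every transaction with a matching amount confirmed inside the timelock window, possibly across several chains (the tuple shape here is a hypothetical convenience, not any real index format):

```python
# Rough sketch: count the transactions a swap could plausibly have been.

def anonymity_set(txs, amount, window_start, window_end):
    """txs: iterable of (chain, height, amount) tuples (assumed shape)."""
    return [t for t in txs
            if t[2] == amount and window_start <= t[1] <= window_end]
```

Allowing the `chain` field to vary is what makes cross-chain swaps grow the set beyond Bitcoin alone.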
That is definitely a big use case of atomic swaps. Say you want to use zero knowledge proofs on Zcash or use Monero to get some privacy with your Bitcoin and then get it back into Bitcoin. Does that bring strong benefits? Or use a sidechain with confidential transactions on it. This seems like a useful use case.
diff --git a/transcripts/sydney-bitcoin-meetup/2020-07-21-socratic-seminar.mdwn b/transcripts/sydney-bitcoin-meetup/2020-07-21-socratic-seminar.mdwn
index 60566c8..078fe07 100644
--- a/transcripts/sydney-bitcoin-meetup/2020-07-21-socratic-seminar.mdwn
+++ b/transcripts/sydney-bitcoin-meetup/2020-07-21-socratic-seminar.mdwn
@@ -80,7 +80,7 @@ I think one of the things with the way it works at the moment is if you have got
This is one of the issues I really wanted to know more about. It has obviously been discussed because it is the same way eltoo works?
-Yeah eltoo doesn’t work that way. There is a [post](https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-January/002448.html) of getting it to work despite that with the ANYPREVOUT stuff. (Also a historical [discussion](https://lists.linuxfoundation.org/pipermail/lightning-dev/2015-November/000339.html) on the mailing list)
+Yeah eltoo doesn’t work that way. There is a [post](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-January/002448.html) of getting it to work despite that with the ANYPREVOUT stuff. (Also a historical [discussion](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2015-November/000339.html) on the mailing list)
You can actually circumvent this problem in eltoo?
@@ -198,7 +198,7 @@ They even had a poor implementation of Coinjoin at some point. It was completely
# BIP 118 and SIGHASH_ANYPREVOUT
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018038.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018038.html
Is anyone not familiar with NOINPUT/ANYPREVOUT as of a year ago, basic concept?
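For anyone who isn't, the basic concept can be sketched like this (field names are invented; this is a toy model of what the sighash commits to, not the real digest algorithm): with ANYPREVOUT the signature stops committing to the specific previous output, so the same signature can be rebound to a different output spendable by the same script, which is what eltoo relies on.

```python
# Simplified sketch of what SIGHASH_ANYPREVOUT changes: drop the prevout
# from the set of fields the signature commits to.

def sighash_fields(tx, anyprevout=False):
    fields = {"version": tx["version"],
              "outputs": tuple(tx["outputs"]),
              "locktime": tx["locktime"]}
    if not anyprevout:
        fields["prevout"] = tx["prevout"]  # the txid:vout being spent
    return fields
```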
@@ -268,9 +268,9 @@ It is still interesting in an academic sense to think about what should be the d
# Thoughts on soft fork activation (AJ Towns)
-https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018043.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018043.html
-This was building off Matt Corallo’s [idea](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html) but with slight differences. Mandatory activation is disabled in Bitcoin Core unless you manually do something to enable it. One of Matt Corallo’s concerns was whether the Core developers are making a decision for the users when maybe the users need to actively opt in to it. The counter view is that the Bitcoin Core developers are meant to reflect the view of the users. If the users don’t like it they can just not run that code and not upgrade.
+This was building off Matt Corallo’s [idea](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html) but with slight differences. Mandatory activation is disabled in Bitcoin Core unless you manually do something to enable it. One of Matt Corallo’s concerns was whether the Core developers are making a decision for the users when maybe the users need to actively opt in to it. The counter view is that the Bitcoin Core developers are meant to reflect the view of the users. If the users don’t like it they can just not run that code and not upgrade.
There are a billion points under this topic that you could probably talk about forever. Obviously talking about things forever is how we are going at the moment. We don’t want to have Bitcoin be run by a handful of developers that just dictate what is going on. Then we have reinvented the central bank control board or whatever. One of the problems if you do that is that then everyone who is trying to dictate where Bitcoin goes in future starts putting pressure on those people. That gets pretty uncomfortable if you are one of those people and you don’t want that sort of political pressure. The ideal would be that developers think about the code and try to understand the technical trade-offs and what is going to happen if people do something. Somehow giving that as an option to the wider Bitcoin marketplace, community, industry, however you want to describe it. The 1MB soft fork where the block size got limited, Satoshi quietly committed some code to activate it, released the code seven days later and then made the code not have any activation parameters another seven days after that. That was when Bitcoin was around 0.0002 cents per Bitcoin. Maybe it is fine with that sort of market cap. But that doesn’t seem like the way you’d want to go today. Since then it has transitioned off. There has been a flag day activation or two that had a few months notice. Then there has been the version number voting which has taken a month or a year for the two of those that happened. Then we switched onto BIP 9 which at least in theory lets us do multiple activations at once and have activations that don’t end up succeeding which is nice. Then SegWit went kind of crazy and so we want to have something a little bit more advanced than that too. SegWit had a whole lot of pressure for the people who were deeply involved at the time. That is something we would not like to repeat. Conversely we have taken a lot more time with Taproot than any of the activations have in the past too. 
It might be a case of the pendulum swinging a bit too far the other way. There are a bunch of different approaches on how to deal with that. If you read Harding’s [post](https://gist.github.com/harding/dda66f5fd00611c0890bdfa70e28152d) it is mostly couched in terms of BIP 8 which I am calling the simplest possible approach. That is a pretty good place to start to think about these things. BIP 8 is about saying “We’ll accept miners signaling for a year or however long and then at the end of the year, assuming everyone is running a client that has lock in on timeout, however that bit ends up getting set, we’ll stop accepting miners not signaling. The only valid chain at that point will have whatever we’re activating activated.” Matt’s modern soft fork activation is two of those steps combined. The first one has that bit set to false and then there is a little bit of a gap. Then there is a much longer one where it is set to TRUE where it will activate at the end. There are some differences in how they get signaled but those are details that don’t ultimately matter that much. The decreasing threshold one is basically the same thing as Matt’s again except that instead of having 95 percent of blocks in a retarget period having to signal, that gradually decreases to 50 percent until the end of the time period. If you manage to convince 65 percent of miners to signal that gives you an incremental speed up in how fast it activates. At least that way there is some kind of game theory incentive for people to signal even if it is clear that it is not going to get to 95 percent.
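The decreasing-threshold variant can be sketched numerically. A linear schedule from 95% down to 50% is an assumption about the shape of the curve, used only to show the incentive: steadier signaling locks in earlier than the final deadline.

```python
# Required signaling share per retarget period falls from 95% to 50%
# over the activation window (linear schedule assumed for illustration).

def required_threshold(period, total_periods, start=0.95, end=0.50):
    return start + (end - start) * period / (total_periods - 1)

def first_lock_in(signal_fractions):
    """Index of the first period whose signaling meets its threshold."""
    n = len(signal_fractions)
    for i, s in enumerate(signal_fractions):
        if s >= required_threshold(i, n):
            return i
    return None
```

For example, a steady 65 percent of miners signaling over a 26-period window clears the falling threshold well before the last period, which is exactly the speed-up incentive described above.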
@@ -318,7 +318,7 @@ If you want to get more time wasted then there is a IRC channel \#\#taproot-acti
As a relative newcomer to the space, I thought that the whole Taproot process was handled magnificently. The open source design and discussion about it was so much better than any other software project in general that I have seen. That was really good and that seems to have built a lot of good support and goodwill amongst everyone. Although people may enjoy discussing all the different ways of activating it it feels like it will get activated with any one of those processes.
-One thing I might add is that we have still got a few technical things to merge to get Taproot in. There are the updates to [libsecp](https://github.com/bitcoin-core/secp256k1/pull/558) to get Schnorr. One of the things that Greg mentioned on the \#\#taproot-activation channel is that the libsecp stuff would really like more devs doing review of code even if you are not a high level crypto dev. Making sure the code makes sense, making sure the comments make sense, being able to understand C/C++ code and making sure that there aren’t obvious mistakes, not including the complicated crypto stuff that has probably already had lots of thought put into it. Making sure APIs make sense and are usable. Adding review there is always good. The Taproot [merge](https://github.com/bitcoin/bitcoin/pull/17977) and the [wtxid relay](https://github.com/bitcoin/bitcoin/pull/18044) stuff, both of those are pretty deep Bitcoin things but are worth a look if you want to look into Bitcoin and have a reasonable grasp of C++. Hopefully we are getting a bit closer to a [signet merge](https://github.com/bitcoin/bitcoin/pull/18267), which is hopefully a more reliable test network than testnet. There should be a post to the [mailing list](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html) about some updates for that in the next few weeks. I am hoping that we can get Taproot on signet so that we can start playing around doing things like Lightning clients running against signet to try out new features and doing development and interaction on that as well sometime soon.
+One thing I might add is that we have still got a few technical things to merge to get Taproot in. There are the updates to [libsecp](https://github.com/bitcoin-core/secp256k1/pull/558) to get Schnorr. One of the things that Greg mentioned on the \#\#taproot-activation channel is that the libsecp stuff would really like more devs doing review of code even if you are not a high level crypto dev. Making sure the code makes sense, making sure the comments make sense, being able to understand C/C++ code and making sure that there aren’t obvious mistakes, not including the complicated crypto stuff that has probably already had lots of thought put into it. Making sure APIs make sense and are usable. Adding review there is always good. The Taproot [merge](https://github.com/bitcoin/bitcoin/pull/17977) and the [wtxid relay](https://github.com/bitcoin/bitcoin/pull/18044) stuff, both of those are pretty deep Bitcoin things but are worth a look if you want to look into Bitcoin and have a reasonable grasp of C++. Hopefully we are getting a bit closer to a [signet merge](https://github.com/bitcoin/bitcoin/pull/18267), which is hopefully a more reliable test network than testnet. There should be a post to the [mailing list](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html) about some updates for that in the next few weeks. I am hoping that we can get Taproot on signet so that we can start playing around doing things like Lightning clients running against signet to try out new features and doing development and interaction on that as well sometime soon.
Is the first blocker getting that [Schnorr PR](https://github.com/bitcoin-core/secp256k1/pull/558) merged into libsecp? That code is obviously replicated in the Bitcoin Core Taproot PR. But is that the first blocker? Get the libsecp PR merged and then the rest of the Taproot Core PR.
diff --git a/transcripts/sydney-bitcoin-meetup/2020-08-25-socratic-seminar.mdwn b/transcripts/sydney-bitcoin-meetup/2020-08-25-socratic-seminar.mdwn
index c7d26ac..efd784c 100644
--- a/transcripts/sydney-bitcoin-meetup/2020-08-25-socratic-seminar.mdwn
+++ b/transcripts/sydney-bitcoin-meetup/2020-08-25-socratic-seminar.mdwn
@@ -136,7 +136,7 @@ lnprototest on GitHub: https://github.com/rustyrussell/lnprototest
Rusty presenting at Bitcoin Magazine Technical Tuesday on lnprototest: https://www.youtube.com/watch?v=oe1hQ7WaX4c
-This started over 12 months ago. The idea was we should write some tests that take a Lightning node and feed it messages and check that it gives the correct responses according to the spec. It should be this test suite that goes with the spec. It seemed like a nice idea. It kind of worked reasonably well but it was really painful to write those tests. You’d do this and then “What will the commitment transaction look like? It is going to send the signatures …” As the spec evolved there were implementation differences which are perfectly legitimate. It means that you couldn’t simply go “It will send exactly this message.” It would send a valid signature but you can’t say exactly what it would look like. What we did find were two bugs with the original implementation. One was that c-lightning had stopped ignoring unknown odd packets which was a dumb thing that we’d lost. Because you never send unknown packets to each other, a test suite never hit it. You are supposed to ignore them and that code had somehow got factored out. The other one was the [CVE](https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-September/002174.html) of course. I was testing the opening path and I realized we weren’t doing some checks that we needed to check in c-lightning. I spoke to the other implementations and they were exposed to the same bug in similar ways. It was a spec bug. The spec should have said “You must check this” and it didn’t. Everyone fell in the same hole. That definitely convinced me that we needed something like this but the original one was kind of a proof of concept and pretty crappy. I sat down for a month and rewrote it from scratch. The result is lnprototest. It is a pure Python3 test system and some packages to interface with the spec that currently live in the c-lightning repository. You run lnprototest and it has these scripts and goes “I will send this” and you will send back this. It can keep state and does some quite sophisticated things.
It has a whole heap of scaffolding to understand commitment transactions, anchor outputs and a whole heap of other things. Then you write these scripts that say “If I send this it should send this” or “If I send this instead…”. You create this DAG, a graph of possible things that could happen, and it runs through all of them and checks that it happens. It has been really useful. It is really good for protocol development too, not just testing existing stuff. When you want to modify the spec you can write that half and run it against your own node. It almost inevitably finds bugs. Lisa (Neigut) has been using it for the dual funding testing. That protocol dev is really important. Both lnd and eclair are looking at integrating their stuff into lnprototest. You have to write a driver for lnprototest and I have no doubt that they will find bugs when they do it. It tests things that are really hard to test in real life. Things that don’t happen like sending unexpected packets at different times. There has been some really good interest in it and it is fantastic to see that taking off. Some good bug reports too. I spent yesterday fixing the README and fixing a few details. The documentation lied about how you’d get it to work. That is fixed now.
+This started over 12 months ago. The idea was we should write some tests that take a Lightning node and feed it messages and check that it gives the correct responses according to the spec. It should be this test suite that goes with the spec. It seemed like a nice idea. It kind of worked reasonably well but it was really painful to write those tests. You’d do this and then “What will the commitment transaction look like? It is going to send the signatures …” As the spec evolved there were implementation differences which are perfectly legitimate. It means that you couldn’t simply go “It will send exactly this message.” It would send a valid signature but you can’t say exactly what it would look like. What we did find were two bugs with the original implementation. One was that c-lightning had stopped ignoring unknown odd packets which was a dumb thing that we’d lost. Because you never send unknown packets to each other, a test suite never hit it. You are supposed to ignore them and that code had somehow got factored out. The other one was the [CVE](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-September/002174.html) of course. I was testing the opening path and I realized we weren’t doing some checks that we needed to check in c-lightning. I spoke to the other implementations and they were exposed to the same bug in similar ways. It was a spec bug. The spec should have said “You must check this” and it didn’t. Everyone fell in the same hole. That definitely convinced me that we needed something like this but the original one was kind of a proof of concept and pretty crappy. I sat down for a month and rewrote it from scratch. The result is lnprototest. It is a pure Python3 test system and some packages to interface with the spec that currently live in the c-lightning repository. You run lnprototest and it has these scripts and goes “I will send this” and you will send back this. 
It can keep state and does some quite sophisticated things. It has a whole heap of scaffolding to understand commitment transactions, anchor outputs and a whole heap of other things. Then you write these scripts that say “If I send this it should send this” or “If I send this instead…”. You create this DAG, a graph of possible things that could happen, and it runs through all of them and checks that it happens. It has been really useful. It is really good for protocol development too, not just testing existing stuff. When you want to modify the spec you can write that half and run it against your own node. It almost inevitably finds bugs. Lisa (Neigut) has been using it for the dual funding testing. That protocol dev is really important. Both lnd and eclair are looking at integrating their stuff into lnprototest. You have to write a driver for lnprototest and I have no doubt that they will find bugs when they do it. It tests things that are really hard to test in real life. Things that don’t happen like sending unexpected packets at different times. There has been some really good interest in it and it is fantastic to see that taking off. Some good bug reports too. I spent yesterday fixing the README and fixing a few details. The documentation lied about how you’d get it to work. That is fixed now.
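The send/expect script idea can be sketched as a minimal model (this is not lnprototest's actual API; the class and message shapes are made up). The key design point from the discussion is that an expectation matches on message type rather than exact bytes, because a conforming node may legitimately differ in the signatures it produces:

```python
# Minimal sketch of a Send/Expect protocol script runner.

class Send:
    def __init__(self, msg):
        self.msg = msg

class Expect:
    def __init__(self, msg_type):
        self.msg_type = msg_type

def run_script(events, node):
    """Drive `node` through a script; False means the node deviated."""
    for ev in events:
        if isinstance(ev, Send):
            node.deliver(ev.msg)
        else:
            reply = node.next_reply()
            if reply["type"] != ev.msg_type:
                return False  # node violated the expected exchange
    return True
```

A real suite would build these scripts into the DAG of possible exchanges described above and run every path.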
This testing suite allows people to develop a feature… would that help them check compatibility against another implementation for example?
@@ -174,7 +174,7 @@ We’ve got our DummyRunner that passes all the tests. It always gives you what
# Dynamic Commitments: Upgrading Channels Without Onchain Transactions (Laolu Osuntokun)
-https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-July/002763.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-July/002763.html
We had this item here on upgrading channels without an onchain transaction. This is a roasbeef post. It is talking about how you could upgrade a channel. One example was around changing the static remote key.
diff --git a/transcripts/sydney-bitcoin-meetup/2021-02-23-socratic-seminar.mdwn b/transcripts/sydney-bitcoin-meetup/2021-02-23-socratic-seminar.mdwn
index 890943f..3228f67 100644
--- a/transcripts/sydney-bitcoin-meetup/2021-02-23-socratic-seminar.mdwn
+++ b/transcripts/sydney-bitcoin-meetup/2021-02-23-socratic-seminar.mdwn
@@ -16,7 +16,7 @@ The conversation has been anonymized by default to protect the identities of the
# PoDLEs revisited (Lloyd Fournier)
-https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html
We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal. I will go quickly on it because there are a lot of details in that post and they are not super relevant because my conclusion wipes it all away. I think we’ve discussed this before if you are a long time Sydney Socratic attendee; maybe the first or second meeting we had this topic come up when the dual funding proposal was first made by Lisa from Blockstream. This is a proposal to allow Lightning channels to be funded by two parties. The reason for doing that is that both parties have capacity on both sides. Both sides can make a payment through that channel right at the beginning of the channel. The difficulty with that is that it creates this opportunity for the person who is requesting to open the channel to say “I am going to use this UTXO”, wait for the other guy to say “I will dual fund this with you with my UTXO” and then just leave. Once the attacker has learnt what UTXO you were going to use he now knows the UTXO from your wallet and just aborts the protocol. You can imagine if you have a bunch of these nodes on the network that are offering dual funding, the attacker goes to all of them at once, gets a bunch of information about which node owns which UTXO on the blockchain, leaves, and does it again in an hour or something. We want to have a way to prevent this attack, to prevent leaking the UTXOs of every Lightning node that offers this dual funding. We can guess that with dual funding your node at home is probably not offering that; maybe it is but you would have to enable it and you would have to carefully think about what that meant. But certainly it is a profitable thing to do because one of the businesses in Lightning is these services like Bitrefill where you pay for capacity. If anyone at home with their money could offer capacity in some way to dual fund it might become a popular thing and it may offer a big attack surface.
One very intuitive proposal you might think of is as soon as this happens to you, you broadcast the UTXO that the attacker proposed and you tell everyone “This guy is a bad UTXO. You shouldn’t open channels with this guy because he is just going to learn your UTXO and abort.” Maybe that isn’t such a great idea because what if it was just an accident? Now you’ve sent this guy’s UTXO around to everyone saying “He is about to open a Lightning channel”. Maybe not the end of the world, but the proposal from Lisa is to do a bit better using a trick from Joinmarket which is this proof of discrete logarithm equality, or PoDLE as we’ve called it. What this does is create an image of your public key and UTXO against a different base point. It is fully determined by your public key, by your secret key, but it cannot be linked to your public key. It is like another public key that is determined by your public key but cannot be linked to it unless you have a proof that links the two. What you do is instead of broadcasting a UTXO you broadcast these unlinked images. No one can link them to the onchain UTXO but if that attacker connects to them they’ll be able to link it.
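The underlying primitive is a Chaum-Pedersen proof of discrete log equality: the same secret exponent behind two different base points. A toy sketch, assuming a small multiplicative group; real PoDLE uses secp256k1 points, and these tiny parameters are completely insecure:

```python
import hashlib
import secrets

# Toy Chaum-Pedersen DLEQ proof, the primitive behind PoDLE as described
# above. Illustration only: tiny, insecure parameters.
Q = 1019                 # prime order of the subgroup
P = 2 * Q + 1            # safe prime modulus (2039)
G, H = 4, 9              # two generators of the order-Q subgroup

def _challenge(*vals):
    """Fiat-Shamir challenge derived from the transcript."""
    data = b"".join(v.to_bytes(4, "big") for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x):
    """Prove pk = G^x and image = H^x share the same secret x."""
    pk, image = pow(G, x, P), pow(H, x, P)
    k = secrets.randbelow(Q)
    a, b = pow(G, k, P), pow(H, k, P)
    c = _challenge(pk, image, a, b)
    s = (k + c * x) % Q
    return pk, image, (a, b, s)

def verify(pk, image, proof):
    a, b, s = proof
    c = _challenge(pk, image, a, b)
    # G^s = a * pk^c and H^s = b * image^c hold iff both logs equal x.
    return (pow(G, s, P) == a * pow(pk, c, P) % P and
            pow(H, s, P) == b * pow(image, c, P) % P)
```

Broadcasting `image` alone reveals nothing about `pk`; only a peer who is later shown `pk` together with the proof can make the link.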
@@ -46,7 +46,7 @@ Correct. You would learn new funds before they are used but they are eventually
# Lightning dice (AJ Towns)
-https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002937.html
+https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002937.html
Slides: https://www.dropbox.com/s/xborgrl1cofyads/AJ%20Towns-%20Lightning%20Dice.pdf
@@ -186,7 +186,7 @@ A - To have a channel, a routing node pointing towards a use case like this whic
# Taproot activation
-Taproot activation meeting 2: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html
+Taproot activation meeting 2: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html
Let’s talk about Taproot activation so we can actually have this thing working on mainnet.
diff --git a/transcripts/sydney-bitcoin-meetup/2021-06-01-socratic-seminar.mdwn b/transcripts/sydney-bitcoin-meetup/2021-06-01-socratic-seminar.mdwn
index 5325a67..fc8dee9 100644
--- a/transcripts/sydney-bitcoin-meetup/2021-06-01-socratic-seminar.mdwn
+++ b/transcripts/sydney-bitcoin-meetup/2021-06-01-socratic-seminar.mdwn
@@ -40,7 +40,7 @@ So would this be a strong enough reason to not do PTLCs until this is solved or
You could do the PTLCs and just not do the randomization bit of it. There are other ways you can leverage PTLCs. I don’t know if I would say I would not do it but it is definitely worth the Lightning developers considering this problem before switching I would say. It is the operating nodes that have to pay the cost for this protocol change. Of course they can just not accept that if they don’t want to, it can be an optional thing.
-Are you planning to attend Antoine Riard’s [workshops](https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-April/003002.html) on Lightning problems? I think they are coming up next month.
+Are you planning to attend Antoine Riard’s [workshops](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-April/003002.html) on Lightning problems? I think they are coming up next month.
This is the particular one about fees. He writes a lot about [Lightning problems](https://github.com/ariard/L2-zoology) but he is also doing a workshop on this fee bumping, what should be the rules to evict transactions from the mempool? I have a Bitcoin problem as a [PR](https://github.com/bitcoin-problems/bitcoin-problems.github.io/pull/13) right now on that topic as I’m trying to teach myself and he has done a nice review for me. I need to go through that. Eventually there will be a Bitcoin problem on solving that. If we solve this problem out of these meetings then it can be the first Bitcoin problem to be solved.
@@ -124,7 +124,7 @@ Seems like a simple solution.
# Designing Bitcoin Smart Contracts with Sapio (Jeremy Rubin)
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018759.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018759.html>
I’ve got this [blog post](https://judica.org/blog/sapio-tutorial/), this is being released by Judica which is a research lab / development team started by Jeremy Rubin. Their first big release is this Sapio language, this Sapio compiler for the language. It is a smart contract compiler but smart contract doesn’t necessarily mean one transaction. It is a contract that could be expressed through many transaction trees. What’s the angle here? I have taken a look at this blog post. What he has got here is a really basic public key contract which means these funds can be taken by making a signature under somebody’s public key.
diff --git a/transcripts/sydney-bitcoin-meetup/2021-07-06-socratic-seminar.mdwn b/transcripts/sydney-bitcoin-meetup/2021-07-06-socratic-seminar.mdwn
index 63d7abf..ba00842 100644
--- a/transcripts/sydney-bitcoin-meetup/2021-07-06-socratic-seminar.mdwn
+++ b/transcripts/sydney-bitcoin-meetup/2021-07-06-socratic-seminar.mdwn
@@ -12,9 +12,9 @@ The conversation has been anonymized by default to protect the identities of the
Agenda: <https://github.com/bitcoin-sydney/socratic/blob/master/README.md#2021-07>
-First IRC workshop on L2 onchain support: <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019079.html>
+First IRC workshop on L2 onchain support: <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019079.html>
-Second IRC workshop on L2 onchain support: <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019148.html>
+Second IRC workshop on L2 onchain support: <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019148.html>
# Basics and BIP 125 RBF
@@ -300,11 +300,11 @@ We do have a check right now called min relay transaction fee. That is requiring
# Future ideas - SIGHASH_IOMAP and fee sponsorship
-SIGHASH_IOMAP: <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-May/019031.html>
+SIGHASH_IOMAP: <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-May/019031.html>
-Fee sponsorship: <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html>
+Fee sponsorship: <https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html>
-We’ve been through pinning, we’ve been through these rules and packages and package relay. Those are all relevant now or soon. Let’s talk now about the future to finish this off. What can the future look like? What is the ideal mechanism to do fee bumping of pre-signed transactions such that it can bypass these rules and still not cause denial of service attacks and be sound from a P2P perspective, bandwidth perspective and from a layer 2 protocol design perspective? The two proposals I want to focus on are Antoine Riard’s SIGHASH_IOMAP and Jeremy Rubin’s sponsorship idea. [This](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html) is the sponsoring one. Let’s talk about that first. This is Jeremy Rubin’s idea. What he has proposed is like CPFP but on crack or without having to plan for it you can sign your transaction, sign one of the inputs and say “I am increasing the fee of this transaction. You can only include this transaction in the block if that transaction I’m sponsoring is in the block as well.” It is very flexible in that you can take any transaction you want to be in there, any layer 2 pre-signed transaction with whatever fee rate and say “I am going to sponsor this transaction with one of my wallet inputs”. That’s basically the idea.
+We’ve been through pinning, we’ve been through these rules and packages and package relay. Those are all relevant now or soon. Let’s talk now about the future to finish this off. What can the future look like? What is the ideal mechanism to do fee bumping of pre-signed transactions such that it can bypass these rules and still not cause denial of service attacks and be sound from a P2P perspective, bandwidth perspective and from a layer 2 protocol design perspective? The two proposals I want to focus on are Antoine Riard’s SIGHASH_IOMAP and Jeremy Rubin’s sponsorship idea. [This](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html) is the sponsoring one. Let’s talk about that first. This is Jeremy Rubin’s idea. What he has proposed is like CPFP but on crack or without having to plan for it you can sign your transaction, sign one of the inputs and say “I am increasing the fee of this transaction. You can only include this transaction in the block if that transaction I’m sponsoring is in the block as well.” It is very flexible in that you can take any transaction you want to be in there, any layer 2 pre-signed transaction with whatever fee rate and say “I am going to sponsor this transaction with one of my wallet inputs”. That’s basically the idea.
This needs a soft fork and as you say it is on crack because it is not a CPFP or a RBF. It is literally a transaction that has no relation whatsoever to the transaction you are concerned about. There is no connection whatsoever, that high fee transaction is saying “You can only mine me if you also include this other transaction that is not related.”
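The consensus rule being described reduces to a simple block-level check. A minimal sketch with illustrative names; the actual proposal encodes the sponsored txid inside the sponsor transaction itself:

```python
def block_respects_sponsorship(block_txids, sponsors):
    """sponsors maps a sponsor txid -> the txid it commits to sponsoring.

    A sponsor transaction is only valid in a block that also contains the
    transaction it sponsors; transactions without a sponsor commitment
    are unaffected.
    """
    present = set(block_txids)
    return all(sponsored in present
               for sponsor, sponsored in sponsors.items()
               if sponsor in present)
```

Note the sponsor needs no input, output, or signature relationship to the sponsored transaction, which is exactly the "no connection whatsoever" point above.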
@@ -370,7 +370,7 @@ It would be good to have package relay already deployed and solve some safety ho
# Transaction mutation proposal
-<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019046.html>
+<https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019046.html>
There was an alternative proposed solution that semi failed. The idea was to allow you to change transactions after they’ve been signed, to mutate them. Outputs tend to go towards one party or the other so one party can reduce one of the outputs after it has been signed. If we put a Tapscript in there that says “Under these conditions this output can be reduced with a signature from this key”, that would increase the fee. The problem with that idea is that in layer 2 protocols the outputs are not owned exclusively by one party yet. Or at least as soon as the commitment transaction is broadcast you don’t know who owns those funds even if they are in a `to_self` output or a `to_remote` output. In the `to_self` case you don’t know who they are going to yet. How do you decide where they get the funds from to reduce an output? You have to put some kind of limit in there: you can reduce this output by this amount. If you set that limit too high you can do a griefing attack where you broadcast an old commitment transaction in a channel you no longer have much interest in and burn a large fee to a miner to grief the other person. What is the logic to set the limit of the fees you can reduce? The advantage of this mutation scheme is clearly that you do not need coins from outside the protocol. The coins that went into the channel in the first place, you can use those coins to bump the fee rather than getting them from the wallet. My concern is that it is difficult from a UX perspective to always have coins around. It would be a nice thing to get rid of: you could just use the channel coins to bump the fee but it turns out to be rather involved. I am thinking that one of these two proposals is probably better.
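The limit logic being debated can be sketched as a single check. A hedged sketch with illustrative names; the open question raised above is precisely how `max_reduction` should be chosen:

```python
def mutate_output(output_value, reduce_by, max_reduction):
    """Reduce an already-signed commitment output, the difference going to fees.

    The pre-committed max_reduction bounds how much fee an attacker can
    burn by broadcasting an old state (the griefing concern above).
    Amounts are in satoshis.
    """
    if not 0 <= reduce_by <= max_reduction:
        raise ValueError("reduction outside committed limit")
    if reduce_by > output_value:
        raise ValueError("cannot reduce output below zero")
    return output_value - reduce_by
```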
diff --git a/transcripts/tftc-podcast/2021-02-11-matt-corallo-taproot-activation.mdwn b/transcripts/tftc-podcast/2021-02-11-matt-corallo-taproot-activation.mdwn
index 607eedd..a889d5c 100644
--- a/transcripts/tftc-podcast/2021-02-11-matt-corallo-taproot-activation.mdwn
+++ b/transcripts/tftc-podcast/2021-02-11-matt-corallo-taproot-activation.mdwn
@@ -110,9 +110,9 @@ MC: It has been years. Nothing is really happening, what is going on? Not so muc
MB: I feel like it was this time last year.
-MC: Let me check my email. I sent an [email](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html) on January 10th of last year describing what I think are good technical requirements to have for an activation method and kickstarted a little bit of a discussion. I got some responses, not from many people. That discussion only started around then and I think for a while there were only a few people involved in it and then more recently more developers have got involved in those discussions, Anthony Towns and a few other people leading the charge on that. It has been a slow process and doubly so because there is a lot of acrimony. Over time no one has really wanted to talk about SegWit and that mess because it was a painful time for a lot of people. I think over time different people have started remembering it slightly differently. It has been 3 years, that is a normal thing for human brains to do. And so I think once that conversation got started I quickly realized and I think a few other people realized that people are on some very different pages in terms of how we should think about soft forks. It has taken time but it sounds like there is a little more agreement, a little more “There’s debate over whether we should do flag days and it looks like it won’t even be an issue so maybe we should just do a normal 95 percent miner readiness signaling thing. We’ll do that and if it doesn’t work out we can revisit it and do a flag day, fine, whatever.” Not many people are going to say that is bad. They are just going to say they’d rather do a flag day maybe. There is some legitimate complaints about precedent on the other side saying that miner readiness signaling should not be viewed as a vote for or against a consensus change. Consensus changes aren’t decided by miners and there is an issue with setting precedent that miners decide consensus changes based on this signaling mechanism. That is a very valid point. 
I think these two opposite points about the different options on the table, flag day UASF or not, both have interesting points about precedent and drawbacks of the other side. Thus it has been a long conversation of hashing it out. That is to be expected.
+MC: Let me check my email. I sent an [email](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html) on January 10th of last year describing what I think are good technical requirements to have for an activation method and kickstarted a little bit of a discussion. I got some responses, not from many people. That discussion only started around then and I think for a while there were only a few people involved in it and then more recently more developers have got involved in those discussions, Anthony Towns and a few other people leading the charge on that. It has been a slow process and doubly so because there is a lot of acrimony. Over time no one has really wanted to talk about SegWit and that mess because it was a painful time for a lot of people. I think over time different people have started remembering it slightly differently. It has been 3 years, that is a normal thing for human brains to do. And so I think once that conversation got started I quickly realized and I think a few other people realized that people are on some very different pages in terms of how we should think about soft forks. It has taken time but it sounds like there is a little more agreement, a little more “There’s debate over whether we should do flag days and it looks like it won’t even be an issue so maybe we should just do a normal 95 percent miner readiness signaling thing. We’ll do that and if it doesn’t work out we can revisit it and do a flag day, fine, whatever.” Not many people are going to say that is bad. They are just going to say they’d rather do a flag day maybe. There is some legitimate complaints about precedent on the other side saying that miner readiness signaling should not be viewed as a vote for or against a consensus change. Consensus changes aren’t decided by miners and there is an issue with setting precedent that miners decide consensus changes based on this signaling mechanism. That is a very valid point. 
I think these two opposite points about the different options on the table, flag day UASF or not, both have interesting points about precedent and drawbacks of the other side. Thus it has been a long conversation of hashing it out. That is to be expected.
-MB: It is good to see people meeting on IRC. There is another meeting next Wednesday on IRC about it. I’m pulling up Michael Folkson’s [notes](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018379.html) from the first IRC meeting. Overwhelming consensus that 1 year is the correct timeout period, unanimous support for BIP 8 except for Luke Dashjr (see notes for exact wording), no decision on start time but 2 months was done for SegWit and that didn’t seem too objectionable. It seems like good conversation from Michael’s notes.
+MB: It is good to see people meeting on IRC. There is another meeting next Wednesday on IRC about it. I’m pulling up Michael Folkson’s [notes](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018379.html) from the first IRC meeting. Overwhelming consensus that 1 year is the correct timeout period, unanimous support for BIP 8 except for Luke Dashjr (see notes for exact wording), no decision on start time but 2 months was done for SegWit and that didn’t seem too objectionable. It seems like good conversation from Michael’s notes.
MC: I just have Michael’s notes as well. It seems like people are coming round to just doing the miner readiness signaling thing.
diff --git a/transcripts/wasabi-research-club/2020-06-15-coinswap.mdwn b/transcripts/wasabi-research-club/2020-06-15-coinswap.mdwn
index 2e839d6..bbc5be8 100644
--- a/transcripts/wasabi-research-club/2020-06-15-coinswap.mdwn
+++ b/transcripts/wasabi-research-club/2020-06-15-coinswap.mdwn
@@ -22,7 +22,7 @@ This is a 2020 ongoing GitHub research paper that Chris Belcher has been working
# Wasabi Research Club
-Just a reminder of what we have been doing. Wasabi Research Club is a weekly meetup that tries to focus on interesting philosophical papers, math papers, privacy papers around Bitcoin. We cover different topics. You can see [here](https://github.com/zkSNACKs/WasabiResearchClub) we have covered a lot of different topics in the last few months. We went on a hiatus because I was mostly the one organizing these things. It became a lot less formal in April and May with casual conversations. Most recently we are very happy to say that [Wabisabi](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017969.html), the outcome of all of this discussion and work, we have this new protocol draft that has just been finished by a lot of people on this call. That is very exciting. Now we are getting back into the swing of things with regular discussions on papers. Last week we talked about CoinSwaps as a broad idea. What are CoinSwaps? Why do we want them? What are some ways we could use CoinSwaps? Today we are looking at a specific protocol for CoinSwaps which is Chris Belcher’s 2020 paper. Find out about what we are doing on our [GitHub](https://github.com/zkSNACKs/WasabiResearchClub).
+Just a reminder of what we have been doing. Wasabi Research Club is a weekly meetup that tries to focus on interesting philosophical papers, math papers, privacy papers around Bitcoin. We cover different topics. You can see [here](https://github.com/zkSNACKs/WasabiResearchClub) we have covered a lot of different topics in the last few months. We went on a hiatus because I was mostly the one organizing these things. It became a lot less formal in April and May with casual conversations. Most recently we are very happy to say that [Wabisabi](https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017969.html), the outcome of all of this discussion and work, we have this new protocol draft that has just been finished by a lot of people on this call. That is very exciting. Now we are getting back into the swing of things with regular discussions on papers. Last week we talked about CoinSwaps as a broad idea. What are CoinSwaps? Why do we want them? What are some ways we could use CoinSwaps? Today we are looking at a specific protocol for CoinSwaps which is Chris Belcher’s 2020 paper. Find out about what we are doing on our [GitHub](https://github.com/zkSNACKs/WasabiResearchClub).
# CoinSwaps (Belcher 2020)