Return-Path: <ZmnSCPxj@protonmail.com>
Received: from smtp1.linuxfoundation.org (smtp1.linux-foundation.org
	[172.17.192.35])
	by mail.linuxfoundation.org (Postfix) with ESMTPS id 7D99D120D
	for <bitcoin-dev@lists.linuxfoundation.org>;
	Wed, 21 Mar 2018 07:54:07 +0000 (UTC)
X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6
Received: from mail4.protonmail.ch (mail4.protonmail.ch [185.70.40.27])
	by smtp1.linuxfoundation.org (Postfix) with ESMTPS id CCC4937D
	for <bitcoin-dev@lists.linuxfoundation.org>;
	Wed, 21 Mar 2018 07:54:04 +0000 (UTC)
Date: Wed, 21 Mar 2018 03:53:59 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=protonmail.com;
	s=default; t=1521618841;
	bh=lE3KmJr+snPZAIJIaDtct6c0lWZDomdkhijNVaGoGg4=;
	h=Date:To:From:Reply-To:Subject:In-Reply-To:References:Feedback-ID:
	From;
	b=c8/md8YxljLCJEDcvmfyei8XHrC0vYOERRsErPal3v+Mkn+6gfv7vT6aD8PEnqtUr
	bIZ4kYhdhWaQWxMSygOJ2TpdGdlZD6KS2kigCl7tmHeiof3/fsDx0L/NPODQrfOcU3
	pTAva5hHKDITMqf9QF99+QXb7EI95qYM+s/EFrLA=
To: Anthony Towns <aj@erisian.com.au>,
	Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
From: ZmnSCPxj <ZmnSCPxj@protonmail.com>
Reply-To: ZmnSCPxj <ZmnSCPxj@protonmail.com>
Message-ID: <d_OOMciZ--WI6X8V1PWVCcPGyEFo7AWcNcXls8uUK8itK8pkoUJLRsekBYUdXTRYg_pOinoBQliMFKfzWW48kd3isE6DbkIVoI5frIxOBFo=@protonmail.com>
In-Reply-To: <20180321040618.GA4494@erisian.com.au>
References: <20180321040618.GA4494@erisian.com.au>
Feedback-ID: el4j0RWPRERue64lIQeq9Y2FP-mdB86tFqjmrJyEPR9VAtMovPEo9tvgA0CrTsSHJeeyPXqnoAu6DN-R04uJUg==:Ext:ProtonMail
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Spam-Status: No, score=-2.2 required=5.0 tests=BAYES_00,DKIM_SIGNED,
	DKIM_VALID, DKIM_VALID_AU, FREEMAIL_FROM, FROM_LOCAL_NOVOWEL,
	RCVD_IN_DNSWL_LOW autolearn=ham version=3.3.1
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	smtp1.linux-foundation.org
X-Mailman-Approved-At: Wed, 21 Mar 2018 13:25:48 +0000
Subject: Re: [bitcoin-dev] Soft-forks and schnorr signature aggregation
X-BeenThere: bitcoin-dev@lists.linuxfoundation.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: Bitcoin Protocol Discussion <bitcoin-dev.lists.linuxfoundation.org>
List-Unsubscribe: <https://lists.linuxfoundation.org/mailman/options/bitcoin-dev>,
	<mailto:bitcoin-dev-request@lists.linuxfoundation.org?subject=unsubscribe>
List-Archive: <http://lists.linuxfoundation.org/pipermail/bitcoin-dev/>
List-Post: <mailto:bitcoin-dev@lists.linuxfoundation.org>
List-Help: <mailto:bitcoin-dev-request@lists.linuxfoundation.org?subject=help>
List-Subscribe: <https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev>,
	<mailto:bitcoin-dev-request@lists.linuxfoundation.org?subject=subscribe>
X-List-Received-Date: Wed, 21 Mar 2018 07:54:07 -0000

Good morning aj,

I am probably wrong, but could solution 2 be simplified by using the below opcodes for aggregated signatures?

OP_ADD_AGG_PUBKEY - Adds a public key for verification of an aggregated signature.

OP_CHECK_AGG_SIG[VERIFY] - Check that the gathered public keys match the aggregated signature.

Then:

 pubkey1 OP_ADD_AGG_PUBKEY
 OP_IF
   pubkey2 OP_ADD_AGG_PUBKEY
 OP_ELSE
   cond OP_CHECKCOVENANT
 OP_ENDIF
 OP_CHECK_AGG_SIG

(omitting the existence of buckets)

I imagine that aggregated signatures, being linear, would allow pubkeys to be aggregated also by adding the pubkey points (but note that I am not a mathematician, I only parrot what better mathematicians say), so OP_ADD_AGG_PUBKEY would not require storing all public keys, just adding them linearly.
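As a toy illustration of that linearity (using integers modulo a prime as a stand-in group, not real secp256k1 points; all names here are illustrative), a running sum behaves exactly like an OP_ADD_AGG_PUBKEY accumulator:

```python
# Toy model of "adding pubkeys linearly". A real implementation would use
# secp256k1 point addition; here pubkey = priv * G in the additive group of
# integers mod P, which has the same linearity property.
P = 2**31 - 1   # toy group order (hypothetical, NOT secp256k1)
G = 7           # toy generator

def pubkey(priv):
    return (priv * G) % P

def add_agg_pubkey(agg, pk):
    """What OP_ADD_AGG_PUBKEY would do: fold one more key into a running sum."""
    return (agg + pk) % P

privs = [123456, 987654, 555555]
agg = 0
for priv in privs:
    agg = add_agg_pubkey(agg, pubkey(priv))

# The running sum equals the pubkey of the summed private keys, so the
# verifier never needs to keep the individual keys around.
assert agg == pubkey(sum(privs) % P)
```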

The effect is that in the OP_CHECKCOVENANT case, pre-softfork nodes will not actually do any checking.

OP_CHECK_AGG_SIG might accept the signature on the stack (combined signature of pubkey1 and pubkey2 and from other inputs), or the bucket the signature is stored in.

We might even consider using the altstack: no more OP_ADD_AGG_PUBKEY (one less opcode to reserve!), just push pubkeys on the altstack, and OP_CHECK_AGG_SIG would take the entire altstack as all the public keys to be used in aggregated signature checking.

This way, rather than gathering signatures, we gather public keys for aggregate signature checking.  OP_RETURN_TRUE interacts with that by not performing aggregate signature checking at all if we encounter OP_RETURN_TRUE first (which makes sense: old nodes have no idea what OP_RETURN_TRUE is really doing, and would fail to understand all its details).


I am very probably wrong, but am willing to learn how to break the above. I am probably making a mistake somewhere.

Regards,
ZmnSCPxj

Sent with ProtonMail Secure Email.

------- Original Message -------

On March 21, 2018 12:06 PM, Anthony Towns via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hello world,
>
> There was a lot of discussion on Schnorr sigs and key and signature
> aggregation at the recent core-dev-tech meeting (one relevant conversation
> is transcribed at [0]).
>
> Quick summary, with more background detail in the corresponding footnotes:
> signature aggregation is awesome [1], and the possibility of soft-forking
> in new opcodes via OP_RETURN_VALID opcodes (instead of OP_NOP) is also
> awesome [2].
>
> Unfortunately doing both of these together may turn out to be awful.
>
> RETURN_VALID and Signature Aggregation
> --------------------------------------
>
> Bumping segwit script versions and redefining OP_NOP opcodes are
> fairly straightforward to deal with even with signature aggregation;
> the straightforward implementation of both combined is still a soft-fork.
>
> RETURN_VALID, unfortunately, has a serious potential pitfall: any
> aggregatable signature operations that occur after it have to go into
> separate buckets.
>
> As an example of why this is the case, imagine introducing a covenant
> opcode that pulls a potentially complicated condition from the stack
> (perhaps, "an output pays at least 50000 satoshi to address xyzzy"),
> checks the condition against the transaction, and then pushes 1 (or 0)
> back onto the stack indicating compliance with the covenant (or not).
>
> You might then write a script allowing a single person to spend the coins
> if they comply with the covenant, and allow breaking the covenant with
> someone else's sign-off in addition. You could write this as:
>
>   pubkey1 CHECKSIGVERIFY
>   cond CHECKCOVENANT IFDUP NOTIF pubkey2 CHECKSIG ENDIF
>
> If you pass the covenant, you supply "SIGHASH_ALL|BUCKET_1" and aggregate
> the signature for pubkey1 into bucket1 and you're set; otherwise you supply
> "SIGHASH_ALL|BUCKET_1 SIGHASH_ALL|BUCKET_1" and aggregate signatures for both
> pubkey1 and pubkey2 into bucket1 and you're set. Great!
>
> But this isn't a soft-fork: old nodes would see this script as:
>
>   pubkey1 CHECKSIGVERIFY
>   cond RETURN_VALID IFDUP NOTIF pubkey2 CHECKSIG ENDIF
>
> which they would just interpret as:
>
>   pubkey1 CHECKSIGVERIFY cond RETURN_VALID
>
> which is fine if the covenant was passing; but no good if the covenant
> didn't pass -- they'd be expecting the aggregated sig to just be for
> pubkey1 when it's actually pubkey1+pubkey2, so old nodes would fail the
> tx and new nodes would accept it, making it a hard fork.
>
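A minimal sketch of that divergence, modelling only which signers each node type expects for the script above (the helper name is illustrative, not real validation code):

```python
def expected_signers(covenant_ok, old_node):
    """Which pubkeys a node expects in the aggregate signature."""
    signers = {"pubkey1"}          # pubkey1 CHECKSIGVERIFY always executes
    if old_node:
        return signers             # old node treats CHECKCOVENANT as
                                   # RETURN_VALID and stops evaluating
    if not covenant_ok:
        signers.add("pubkey2")     # new node also requires pubkey2's sig
    return signers

# Covenant passes: old and new nodes expect the same aggregate -- fine.
assert expected_signers(True, old_node=True) == expected_signers(True, old_node=False)

# Covenant fails: new nodes expect pubkey1+pubkey2, old nodes only pubkey1,
# so one aggregate signature cannot satisfy both -- hence the hard fork.
assert expected_signers(False, old_node=True) != expected_signers(False, old_node=False)
```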
> Solution 0a / 0b
> ----------------
>
> There are two obvious solutions here:
>
> 0a) Just be very careful to ensure any aggregated signatures that
>     are conditional on a redefined RETURN_VALID opcode go into later
>     buckets, but be careful about having separate sets of buckets every
>     time a soft-fork introduces a new redefined opcode. Probably very
>     complicated to implement correctly, and essentially doubles the
>     number of buckets you have to potentially deal with every time you
>     soft fork in a new opcode.
>
> 0b) Alternatively, forget about the hope that RETURN_VALID
>     opcodes could be converted to anything, and just reserve OP_NOP
>     opcodes and convert them to CHECK_foo_VERIFY opcodes just as we
>     have been doing, and when we can't do that, bump the segwit witness
>     version for a whole new version of script. Or in twitter speak:
>     "non-verify upgrades should be done with new script versions" [3]
>
> I think with a little care we can actually salvage RETURN_VALID though!
>
> Solution 1
> ----------
>
> You don't actually have to write your scripts in ways that can cause
> this problem, as long as you're careful. In particular, the problem only
> occurs if you do aggregatable CHECKSIG operations after "RETURN_VALID"
> -- if you do all the CHECKSIGs first, then all nodes will be checking
> for the same signatures, and there's no problem. So you could rewrite
> the script above as:
>
>   pubkey1 CHECKSIGVERIFY
>   IF pubkey2 CHECKSIG ENDIF
>   cond CHECKCOVENANT OR
>
> which is redeemable either by:
>
>   sig1 0 [and covenant is met]
>   sig1 1 sig2 [covenant is not checked]
>
> The witness in this case is essentially committing to the execution path
> that would have been taken in the first script by a fully validating node,
> then the new script checks all the signatures, and then validates that the
> committed execution path was in fact the one that was meant to be taken.
>
> If people are clever enough to write scripts this way, I believe you
> can make RETURN_VALID soft-fork safe simply by having every soft-forked
> RETURN_VALID operation set a state flag that makes every subsequent
> CHECKSIG operation require a non-aggregated sig.
>
> The drawback of this approach is that if the script is complicated
> (eg it has multiple IF conditions, some of which are nested), it may be
> difficult to write the script to ensure the signatures are checked in the
> same combination as the later logic actually requires -- you might have
> to store the flag indicating whether you checked particular signatures
> on the altstack, or use DUP and PICK/ROLL to organise it on the stack.
>
> Solution 2
> ----------
>
> We could make that simpler for script authors by making dedicated opcodes
> to help with the "do all the signatures first" and "check the committed
> execution path against reality" steps. I think a reasonable approach
> would be something like:
>
>   0b01 pubkey2 pubkey1 2 CHECK_AGGSIG_VERIFY
>   cond CHECKCOVENANT 0b10 CHECK_AGG_SIGNERS OR
>
> which is redeemed either by:
>
>   sighash1 0 [and passing the covenant cond]
>   sighash2 sighash1 0b10
>
> (I'm using the notation 0b10110 to express numbers as binary bitfields;
> eg, 0b10110 = 22)
>
> That is, two new opcodes, namely:
>
> CHECK_AGGSIG_VERIFY, which takes from the stack:
>
>   - N: a count of pubkeys
>   - pubkey1..pubkeyN: N pubkeys
>   - REQ: a bitmask of which pubkeys are required to sign
>   - OPT: a bitmask of which optional pubkeys have signed
>   - sighashes: M sighashes for the pubkeys corresponding to the set
>     bits of (REQ|OPT)
>
> CHECK_AGGSIG_VERIFY fails if:
>
>   - the stack doesn't have enough elements
>   - the aggregated signature doesn't pass
>   - a redefined RETURN_VALID opcode has already been seen
>   - a previous CHECK_AGGSIG_VERIFY has already been seen in this script
>
> REQ|OPT is stored as state.
>
> CHECK_AGG_SIGNERS takes from the stack:
>
>   - B: a bitmask of which pubkeys are being queried
>
> and it pushes to the stack 1 or 0 based on:
>
>   - (REQ|OPT) & B == B ? 1 : 0
>
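The CHECK_AGG_SIGNERS test is plain bitmask arithmetic; a quick sketch, with REQ/OPT values taken from the example script above:

```python
def check_agg_signers(req, opt, b):
    # push 1 iff every pubkey queried by B actually signed
    # (whether required or optional): (REQ|OPT) & B == B
    return 1 if (req | opt) & b == b else 0

REQ, OPT = 0b01, 0b10   # pubkey1 required; pubkey2 signed optionally

assert check_agg_signers(REQ, OPT, 0b10) == 1    # pubkey2 did sign
assert check_agg_signers(REQ, 0b00, 0b10) == 0   # pubkey2 didn't sign
```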
> A possible way to make sure the "no agg sigs after an upgraded
> RETURN_VALID" behaviour works right might be to have RETURN_VALID
> fail if CHECK_AGGSIG_VERIFY hasn't already been seen. That way, once you
> redefine RETURN_VALID in a soft-fork, if you have a CHECK_AGGSIG_VERIFY
> after a RETURN_VALID you've either already failed (because the
> RETURN_VALID wasn't after a CHECK_AGGSIG_VERIFY), or you automatically
> fail (because you've already seen a CHECK_AGGSIG_VERIFY).
>
> There would be no need to make CHECKSIG, CHECKSIGVERIFY, CHECKMULTISIG
> and CHECKMULTISIGVERIFY do signature aggregation in this case. They could
> be left around to allow script authors to force non-aggregated signatures,
> or could be dropped entirely, I think.
>
> This construct would let you do M-of-N aggregated multisig in a fairly
> straightforward manner without needing an explicit opcode, eg:
>
>   0 pubkey5 pubkey4 pubkey3 pubkey2 pubkey1 5 CHECK_AGGSIG_VERIFY
>   0b10000 CHECK_AGG_SIGNERS
>   0b01000 CHECK_AGG_SIGNERS ADD
>   0b00100 CHECK_AGG_SIGNERS ADD
>   0b00010 CHECK_AGG_SIGNERS ADD
>   0b00001 CHECK_AGG_SIGNERS ADD
>   3 NUMEQUAL
>
> redeemable by, eg:
>
>   0b10110 sighash5 sighash3 sighash2
>
> and a single aggregate signature by the private keys corresponding to
> pubkey{2,3,5}.
>
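The counting logic can be checked directly: with 0b10110 as the signer bitmask, the five CHECK_AGG_SIGNERS/ADD steps sum to exactly 3:

```python
# Simulate the bitmask/ADD/NUMEQUAL sequence for the 3-of-5 example.
SIGNED = 0b10110   # pushed by the redeemer: pubkeys 5, 3 and 2 signed
                   # (REQ is 0 in the script, so REQ|OPT == SIGNED)

masks = [0b10000, 0b01000, 0b00100, 0b00010, 0b00001]

# Each CHECK_AGG_SIGNERS pushes 1 iff that pubkey's bit is in SIGNED;
# the ADDs accumulate the pushed values.
count = sum(1 if SIGNED & m == m else 0 for m in masks)

assert count == 3   # so "3 NUMEQUAL" leaves true on the stack
```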
> Of course, another way of getting M-of-N aggregated multisig is via MAST,
> which brings us to another approach...
>
> Solution 3
> ----------
>
> All we're doing above is committing to an execution path and validating
> signatures for that path before checking the path was the right one. But
> MAST is a great way of committing to an execution path, so another
> approach would just be "don't have alternative execution paths, just have
> MAST and CHECK/VERIFY codes". Taking the example I've been running with,
> that would be:
>
>   branch1: 2 pubkey2 pubkey1 2 CHECKMULTISIG
>   branch2: pubkey1 CHECKSIGVERIFY cond CHECKCOVENANT
>
> So long as MAST is already supported when signature aggregation becomes
> possible, that works fine. The drawback is MAST can end up with lots of
> branches, eg the 3-of-5 multisig check has 10 branches:
>
>   branch1: 3 pubkey3 pubkey2 pubkey1 3 CHECKMULTISIG
>   branch2: 3 pubkey4 pubkey2 pubkey1 3 CHECKMULTISIG
>   branch3: 3 pubkey5 pubkey2 pubkey1 3 CHECKMULTISIG
>   branch4: 3 pubkey4 pubkey3 pubkey1 3 CHECKMULTISIG
>   branch5: 3 pubkey5 pubkey3 pubkey1 3 CHECKMULTISIG
>   branch6: 3 pubkey5 pubkey4 pubkey1 3 CHECKMULTISIG
>   branch7: 3 pubkey4 pubkey3 pubkey2 3 CHECKMULTISIG
>   branch8: 3 pubkey5 pubkey3 pubkey2 3 CHECKMULTISIG
>   branch9: 3 pubkey5 pubkey4 pubkey2 3 CHECKMULTISIG
>   branch10: 3 pubkey5 pubkey4 pubkey3 3 CHECKMULTISIG
>
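Those branch counts are just binomial coefficients, one branch per M-subset of the N keys; a quick check:

```python
from math import comb   # comb(n, k): number of k-subsets of n items

assert comb(5, 3) == 10     # the ten 3-of-5 branches listed above
assert comb(11, 6) == 462   # branches needed for 6-of-11 multisig
```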
> while if you want, say, 6-of-11 multisig you get 462 branches, versus
> just:
>
>   0 pubkey11 pubkey10 pubkey9 pubkey8 pubkey7 pubkey6
>   pubkey5 pubkey4 pubkey3 pubkey2 pubkey1 11 CHECK_AGGSIG_VERIFY
>   0b10000000000 CHECK_AGG_SIGNERS
>   0b01000000000 CHECK_AGG_SIGNERS ADD
>   0b00100000000 CHECK_AGG_SIGNERS ADD
>   0b00010000000 CHECK_AGG_SIGNERS ADD
>   0b00001000000 CHECK_AGG_SIGNERS ADD
>   0b00000100000 CHECK_AGG_SIGNERS ADD
>   0b00000010000 CHECK_AGG_SIGNERS ADD
>   0b00000001000 CHECK_AGG_SIGNERS ADD
>   0b00000000100 CHECK_AGG_SIGNERS ADD
>   0b00000000010 CHECK_AGG_SIGNERS ADD
>   0b00000000001 CHECK_AGG_SIGNERS ADD
>   6 NUMEQUAL
>
> Provided doing lots of hashes to calculate merkle paths is cheaper than
> publishing to the blockchain, MAST will likely still be better though:
> you'd be doing 6 pubkeys and 9 steps in the merkle path for about
> 15*32 bytes in MAST, versus showing off all 11 pubkeys above for 11*(32+4)
> bytes, and the above is roughly the worst case for m-of-11 multisig
> via MAST.
>
> If everyone's happy to use MAST, then it could be the only solution:
> drop OP_IF and friends, and require all the CHECKSIG ops to occur before
> any RETURN_VALID ops: since there's no branching, that's just a matter of
> reordering your script a bit and should be pretty easy for script authors.
>
> I think there's a couple of drawbacks to this approach that mean it
> shouldn't be the only solution:
>
>   a) we don't have a lot of experience with using MAST
>   b) MAST is a bit more complicated than just dealing with branches in
>      a script (probably solvable once (a) is no longer the case)
>   c) some useful scripts might be a bit cheaper expressed with lots
>      of branches and be better expressed without MAST
>
> If other approaches than MAST are still desirable, then MAST works fine
> in combination with either of the earlier solutions as far as I can see.
>
> Summary
> -------
>
> I think something along the lines of solution 2 makes the most sense,
> so I think a good approach for aggregate signatures is:
>
> - introduce a new segwit witness version, which I'll call v2 (but which
>   might actually be v1 or v3 etc, of course)
> - v2 must support Schnorr signature verification.
> - v2 should have a "pay to public key (hash?)" witness format. direct
>   signatures of the transaction via the corresponding private key should
>   be aggregatable.
> - v2 should have a "pay to script hash" witness format: probably via
>   taproot+MAST, possibly via graftroot as well
> - v2 should support MAST scripts: again, probably via taproot+MAST
> - v2 taproot shouldn't have a separate script version (ie,
>   the pubkey shouldn't be P+H(P,version,scriptroot)), as signatures
>   for later-versioned scripts couldn't be aggregated, so there's no
>   advantage over bumping the segwit witness version
> - v2 scripts should have a CHECK_AGGSIG_VERIFY opcode roughly as
>   described above for aggregating signatures, along with CHECK_AGG_SIGNERS
> - CHECK{MULTI,}SIG{VERIFY,} in v2 scripts shouldn't support aggregated
>   signatures, and possibly shouldn't be present at all?
> - v2 signers should be able to specify an aggregation bucket for each
>   signature, perhaps in the range 0-7 or so?
> - v2 scripts should have a bunch of RETURN_VALID opcodes for future
>   soft-forks, constrained so that CHECK_AGGSIG_VERIFY doesn't appear
>   after them. the currently disabled opcodes should be redefined as
>   RETURN_VALID eg.
>
> For soft-fork upgrades from that point:
>
> - introducing new opcodes just means redefining a RETURN_VALID opcode
> - introducing new sighash versions requires bumping the segwit witness
>   version (to v3, etc)
> - if non-interactive half-signature aggregation isn't ready to go, it
>   would likewise need a bump in the segwit witness version when
>   introduced
>
> I think it's worth considering bundling a hard-fork upgrade something
> like:
>
> - ~5 years after v2 scripts are activated, existing p2pk/p2pkh UTXOs
>   (either matching the pre-segwit templates or v0 segwit p2wpkh) can
>   be spent via a v2-aggregated-signature (but not via taproot) [4]
> - core will maintain a config setting that allows users to prevent
>   that hard fork from activating via UASF up until the next release
>   after activation (probably with UASF-enforced miner-signalling that
>   the hard-fork will not go ahead)
>
> This is already very complicated of course, but note that there's still
> more things that need to be considered for signature aggregation:
>
> - whether to use Bellare-Neven or muSig in the consensus-critical
>   aggregation algorithm
> - whether to assign the aggregate sigs to inputs and plunk them in the
>   witness data somewhere, or to add a new structure and commitment and
>   worry about p2p impact
> - whether there are new sighash options that should go in at the same time
> - whether non-interactive half-sig aggregation can go in at the same time
>
> That leads me to think that interactive signature aggregation is going to
> take a lot of time and work, and it would make sense to do a v1-upgrade
> that's "just" Schnorr (and taproot and MAST and re-enabling opcodes and
> ...) in the meantime. YMMV.
>
> Cheers,
> aj
>
>     [0] http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/
>
>     [1] Signature aggregation:
>
>     Signature aggregation is cool because it lets you post a transaction
>     spending many inputs, but only providing a single 64-byte signature
>     that proves authorisation by the holders of all the private keys
>     for all the inputs. So the witnesses for your inputs might be:
>
>     p2wpkh: pubkey1 SIGHASH_ALL
>     p2wpkh: pubkey2 SIGHASH_ALL
>     p2wsh: "3 pubkey1 pubkey3 pubkey4 3 CHECKMULTISIG" SIGHASH_ALL SIGHASH_ALL SIGHASH_ALL
>
>     where instead of including a full 65-byte signature for each CHECKSIG
>     operation in each input witness, you just include the ~1-byte sighash,
>     and provide a single 64-byte signature elsewhere, calculated either
>     according to the Bellare-Neven algorithm or the muSig algorithm.
>
>     In the above case, that means going from about 500 witness bytes
>     for 5 public keys and 5 signatures, to about 240 witness bytes for
>     5 public keys and just 1 signature.
>
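As a quick sanity check of the figures quoted above, the arithmetic works out assuming 33-byte compressed pubkeys and ~65-byte signatures (script bodies and length prefixes are ignored here, which is why the email says "about"):

```python
# Rough witness-size arithmetic for the 5-CHECKSIG example above.
PUBKEY_BYTES = 33    # compressed public key
SIG_BYTES = 65       # full signature, one per CHECKSIG today
SIGHASH_BYTES = 1    # per-CHECKSIG residue under aggregation
AGG_SIG_BYTES = 64   # the single shared aggregate signature

n_checksigs = 5      # pubkey1, pubkey2, plus the 3-of-3 multisig

today = n_checksigs * (PUBKEY_BYTES + SIG_BYTES)
aggregated = n_checksigs * (PUBKEY_BYTES + SIGHASH_BYTES) + AGG_SIG_BYTES

print(today, aggregated)  # 490 vs 234, i.e. roughly 500 vs 240
```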
>     A complication here is that because the signatures are aggregated,
>     in order to validate any signature you have to be able to validate
>     every signature.
>
>     It's possible to limit that a bit, and have aggregation
>     "buckets". This might be something you just choose when signing, eg:
>
>     p2wpkh: pubkey1 SIGHASH_ALL|BUCKET_1
>     p2wpkh: pubkey2 SIGHASH_ALL|BUCKET_2
>     p2wsh: "3 pubkey1 pubkey3 pubkey4 3 CHECKMULTISIG" SIGHASH_ALL|BUCKET_1 SIGHASH_ALL|BUCKET_2 SIGHASH_ALL|BUCKET_2
>
>     bucket1: 64-byte sig for (pubkey1, pubkey1)
>     bucket2: 64-byte sig for (pubkey2, pubkey3, pubkey4)
>
>     That way you get the choice to verify both of the pubkey1 signatures,
>     or all of the pubkey{2,3,4} signatures, or all the signatures (or
>     none of the signatures).
>
>     This might be useful if the private key for pubkey1 is essentially
>     offline, and can't easily participate in an interactive protocol
>     -- with separate buckets the separate signatures can be generated
>     independently at different times, while with only one bucket,
>     everyone has to coordinate to produce the signature.
>
>     (For clarity: each bucket corresponds to many CHECKSIG operations,
>     but only contains a single 64-byte signature.)
>
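As a toy illustration of the bucket idea (the names and structure here are made up for this sketch, not an actual serialisation), each CHECKSIG contributes a (pubkey, message) pair to whichever bucket its sighash byte names, and a verifier can then check any subset of buckets independently:

```python
from collections import defaultdict

# Each CHECKSIG in the example above, tagged with its bucket number.
checksigs = [
    ("pubkey1", "digest-of-input-0", 1),  # SIGHASH_ALL|BUCKET_1
    ("pubkey2", "digest-of-input-1", 2),  # SIGHASH_ALL|BUCKET_2
    ("pubkey1", "digest-of-input-2", 1),  # 3-of-3 multisig, key 1
    ("pubkey3", "digest-of-input-2", 2),  # 3-of-3 multisig, key 2
    ("pubkey4", "digest-of-input-2", 2),  # 3-of-3 multisig, key 3
]

buckets = defaultdict(list)
for pubkey, digest, bucket_id in checksigs:
    buckets[bucket_id].append((pubkey, digest))

# One 64-byte aggregate signature covers each bucket, so a verifier can
# check bucket 1, bucket 2, both, or neither, independently:
print([pk for pk, _ in buckets[1]])  # ['pubkey1', 'pubkey1']
print([pk for pk, _ in buckets[2]])  # ['pubkey2', 'pubkey3', 'pubkey4']
```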
>     Different buckets will also be necessary when dealing with new
>     segwit script versions: if there are any aggregated signatures for
>     v1 addresses that go into bucket X, then aggregate signatures for
>     v2 addresses cannot go into bucket X, as that would prevent nodes
>     that support v1 addresses but not v2 addresses from validating
>     bucket X, which would prevent them from validating the v1 addresses
>     corresponding to that bucket, which would make the v2 upgrade a hard
>     fork rather than a soft fork. So each segwit version will need to
>     introduce a new set of aggregation buckets, which in turn reduces
>     the benefit you get from signature aggregation.
>
>     Note that it's obviously fine to use an aggregated signature in
>     response to CHECKSIGVERIFY or n-of-n CHECKMULTISIGVERIFY -- when
>     processing the script you just assume it succeeds, relying on the
>     fact that the aggregated signature will fail the entire transaction
>     if there was a problem. However it's also fine to use an aggregated
>     signature in response to CHECKSIG for most plausible scripts, since:
>
>     sig key CHECKSIG
>
>     can be treated as equivalent to
>
>     sig DUP IF key CHECKSIGVERIFY OP_1 ENDIF
>
>     provided invalid signatures are supplied as a "false" value. So
>     for the purpose of this email, I'll mostly be treating CHECKSIG and
>     n-of-n CHECKMULTISIG as if they support aggregation.
>
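The "just assume it succeeds" rule can be sketched as a toy evaluator (illustrative names only, not Bitcoin Core code): aggregated CHECKSIGs optimistically push true and record what the final aggregate signature must cover, while an explicitly false signature opts out and pushes false:

```python
deferred = []  # (pubkey, message) pairs the aggregate signature must cover

def eval_checksig(pubkey, message, sig_supplied):
    """Toy CHECKSIG under aggregation. sig_supplied=False means the
    witness provided an explicit "false" signature, opting out."""
    if not sig_supplied:
        return False                  # CHECKSIG leaves false on the stack
    deferred.append((pubkey, message))
    return True                       # assumed valid; settled at the end

def verify_transaction(aggregate_ok):
    # In reality this is one Bellare-Neven/muSig verification of the
    # 64-byte signature against every deferred (pubkey, message) pair.
    # If it fails, the whole transaction fails, CHECKSIGVERIFY included.
    return aggregate_ok

assert eval_checksig("pubkey1", "digest0", True) is True
assert eval_checksig("pubkey2", "digest1", False) is False
assert deferred == [("pubkey1", "digest0")]
```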
>     [2] Soft-forks and RETURN_VALID:
>
>     There are two approaches for soft-forking in new opcodes that are
>     reasonably well understood:
>
>     1.  We can bump the segwit script version, introducing a new class of
>         bc1 bech32 addresses, which behave however we like, but can't be
>         validated at all by existing nodes. This has the downside that it
>         effectively serialises upgrades.
>
>     2.  We can redefine OP_NOP opcodes as OP_CHECK_foo_VERIFY
>         opcodes, along the same lines as OP_CHECKLOCKTIMEVERIFY or
>         OP_CHECKSEQUENCEVERIFY. This has the downside that it's pretty
>         restrictive in what new opcodes you can introduce.
>
>     A third approach seems possible as well though, which would combine
>     the benefits of both approaches: allowing any new opcode to be
>     introduced, and allowing different opcodes to be introduced in
>     concurrent soft-forks. Namely:
>
>     3.  If we introduce some RETURN_VALID opcodes (in script for a new
>         segwit witness version), we can then redefine those as having any
>         behaviour we might want, including ones that manipulate the stack,
>         and have the change simply be a soft-fork. RETURN_VALID would
>         force the script to immediately succeed, in contrast to OP_RETURN
>         which forces the script to immediately fail.
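A toy interpreter (sketch only, not real script semantics) makes the soft-fork direction clear: RETURN_VALID short-circuits to success, so redefining it later can only turn some currently-valid scripts invalid, never the reverse:

```python
def run_script(ops):
    """Toy script evaluator: items are data pushes unless they are one
    of the two early-exit opcodes contrasted in the text."""
    stack = []
    for op in ops:
        if op == "OP_RETURN":
            return False          # immediately fail the script
        if op == "RETURN_VALID":
            return True           # immediately succeed (proposed opcode)
        stack.append(op)
    return bool(stack) and bool(stack[-1])

# RETURN_VALID wins even if failing opcodes follow it, which is what
# leaves room for a later soft-fork to give it stricter semantics:
assert run_script(["RETURN_VALID", "OP_RETURN"]) is True
assert run_script(["OP_RETURN"]) is False
assert run_script([1]) is True
```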
>
>     [3] https://twitter.com/bramcohen/status/972205820275388416
>
>     [4] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015580.html
>
>
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev