From: Gavin Andresen <gavinandresen@gmail.com>
To: Mark Friedenbach <mark@friedenbach.org>
Cc: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Date: Thu, 5 Nov 2015 09:23:44 +0000
Subject: Re: [bitcoin-dev] A validation-cost metric for aggregate limits and
 fee determination
Message-ID: <CABsx9T3K+YfZpD4aiSxfZHpRMGfOjvdPm4fkgD067G6mRyYWbg@mail.gmail.com>
In-Reply-To: <CAOG=w-tR89zm_VDCR-MR_+F9bRNvm4TCZSQcTmYGKRW1JQhkUg@mail.gmail.com>
I have several thoughts:
Weighing CPU validation cost should be reasonably straightforward-- just
pick some arbitrary, commonly-available, recent hardware and then benchmark
the two things that take the bulk of validation time (hashing to create the
signature hash, then ECDSA validation), and weigh the terms in the
validation cost equation appropriately (e.g. if hashing X bytes of data
takes the same amount of CPU time as one libsecp256k1 validation, count the
CPU cost of an OP_CHECKSIG as 1 + actual_bytes_hashed/X).
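A minimal sketch of that weighting, to make the units concrete. The calibration constant X is a made-up placeholder; a real value would come from benchmarking SHA256 throughput against libsecp256k1 verification speed on the reference hardware:

```python
# Hypothetical calibration: hashing this many bytes takes as long as
# one libsecp256k1 ECDSA verification on the reference hardware.
X = 1_000_000

def checksig_cpu_cost(actual_bytes_hashed: int) -> float:
    """CPU cost of one OP_CHECKSIG, in units of 'one signature validation'."""
    return 1.0 + actual_bytes_hashed / X

# A CHECKSIG whose signature hash covers 250,000 bytes of transaction data:
cost = checksig_cpu_cost(250_000)  # 1.25 validation-equivalents
```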
But how should bandwidth cost be counted? There isn't an obvious "Y GB of
bandwidth-per-month equals 1 ECDSA validation" conversion. We need to find
common units for the terms in the validation cost equation for it to make
sense; otherwise we're adding apples and oranges.
I think the only units that will work is "percentage of maximum validation
ability for some reference hardware running with a network connection
capable of some reference bandwidth."
For example, imagine the reference was the typical home computer being sold
today running with some multiple or fraction of the average global
broadband connection speed of 5Mbps. CPU cost to validate a block can then
be expressed as a percentage of maximum capacity, as can bandwidth--
hooray, two metrics with the same units, so they can be added up. If the
result is less than 100%, then the block is valid-- it can be received and
validated in a reasonable amount of time.
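A sketch of that combination, with entirely illustrative reference numbers (the verification rate, link speed, and time target are assumptions, not measurements):

```python
REF_SIGOPS_PER_SECOND = 8_000            # hypothetical reference CPU
REF_BANDWIDTH_BYTES_PER_SEC = 5_000_000 / 8  # 5 Mbps reference link
TARGET_SECONDS = 60                      # "reasonable amount of time" per block

def block_ok(cpu_sigop_equivalents: float, block_bytes: int) -> bool:
    # Express each cost as a fraction of the reference capacity...
    cpu_pct = cpu_sigop_equivalents / (REF_SIGOPS_PER_SECOND * TARGET_SECONDS)
    bw_pct = block_bytes / (REF_BANDWIDTH_BYTES_PER_SEC * TARGET_SECONDS)
    # ...so the two terms share units and can simply be added.
    return cpu_pct + bw_pct < 1.0
```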
Rolling in UTXO growth is harder, for two reasons:
1) UTXO changes per block can be negative or positive, as opposed to
bandwidth/CPU costs.
2) It is not clear how to choose or benchmark "reference UTXO growth".
(1) could be finessed to just treat UTXO shrinkage as zero.
(2) could just be decided by picking a reasonable growth number. Since we
want the UTXO set to fit into main memory, something a bit below the
long-ish term price/performance trend of main memory would be a good target.
So, starting with that growth rate and an initial UTXO size in bytes,
divide by the number of blocks in a year to get a maximum UTXO growth in
bytes per block.
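Worked through with placeholder numbers (the starting size and growth rate below are illustrative, not proposals):

```python
UTXO_SIZE_BYTES = 1_000_000_000   # hypothetical current UTXO set size
ANNUAL_GROWTH = 0.30              # a bit below the memory price/performance trend
BLOCKS_PER_YEAR = 6 * 24 * 365    # ten-minute blocks: 52,560 per year

# Annual byte budget spread evenly across the year's blocks:
max_utxo_growth_per_block = UTXO_SIZE_BYTES * ANNUAL_GROWTH / BLOCKS_PER_YEAR
# roughly 5.7 KB of net new UTXO data per block under these assumptions
```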
When validating a block, take the actual UTXO growth, express it as a
percentage of the maximum allowed (make it zero if it is negative), and
combine with the CPU and bandwidth percentages.
If the total is less than 100%, block is valid. Otherwise, invalid.
----------------
Now... with all of that worked through, I'm not 100% sure it solves the "do
miners or wallets have to solve a bin-packing problem to determine which
transactions to put into their blocks or what fees to attach" question.
I think it mostly works out-- instead of fee-per-kilobyte, it would be
fee-per-validation-cost (which is in the weird units "fraction of 100%
validation cost").
But the UTXO term might be a problem-- transactions that create more UTXOs
than they spend might end up being costly. I'm traveling right now; perhaps
somebody could pick some arbitrary reference points and try to get a rough
idea of what different transactions might pay in fees (e.g. if a
one-input-two-output transaction had a cost of X, a two-input-one-output
transaction would have a cost of X/something).
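One rough stab at that comparison, with every reference point arbitrary (budgets, per-output UTXO footprint, and transaction sizes are all assumed, and one sig check per input is a simplification):

```python
def tx_cost(n_inputs: int, n_outputs: int, tx_bytes: int) -> float:
    """Transaction cost as a fraction of a block's total budget."""
    CPU_BUDGET = 240_000    # arbitrary sigop-equivalents per block
    BW_BUDGET = 2_000_000   # arbitrary bytes per block
    UTXO_BUDGET = 50_000    # arbitrary bytes of UTXO growth per block
    UTXO_BYTES_EACH = 40    # arbitrary per-output UTXO footprint

    cpu = n_inputs / CPU_BUDGET                     # one sig check per input
    bw = tx_bytes / BW_BUDGET
    net_outputs = n_outputs - n_inputs              # UTXO shrinkage clamps to 0
    utxo = max(0, net_outputs * UTXO_BYTES_EACH) / UTXO_BUDGET
    return cpu + bw + utxo

one_in_two_out = tx_cost(1, 2, 226)   # grows the UTXO set, pays the UTXO term
two_in_one_out = tx_cost(2, 1, 340)   # shrinks it, UTXO term is zero
```

Under these (arbitrary) numbers the UTXO term dominates, so the UTXO-creating transaction costs several times more despite being smaller on the wire.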
I'm not convinced that a single validation cost metric is the best
approach-- it might be better to break the cost into three (UTXO growth,
CPU, and bandwidth) and just let miners set reasonable transaction
selection policies that keep each of the three under whatever caps are
imposed on each. If a miner comes up with a clever algorithm that lets them
pack in more transactions and get more fees, good for them!
But I do like the simplicity of a single validation cost metric.
--
Gavin Andresen