Date: Tue, 30 Jun 2015 21:54:47 +0200
Message-ID: <CALqxMTG1=+F8DSeRAThtTSmj4F3YhgUiCbqJ1CfBy9Z-LLZvSQ@mail.gmail.com>
From: Adam Back <adam@cypherspace.org>
To: Michael Naber <mickeybob@gmail.com>
Cc: bitcoin-dev@lists.linuxfoundation.org
Subject: [bitcoin-dev] block-size tradeoffs & hypothetical alternatives (Re:
Block size increase oppositionists: please clearly define what you need
done to increase block size to a static 8MB, and help do it)
Not that I'm arguing against scaling within tech limits - I agree we
can and should - but note block-size is not a free variable. The
system is a balance of factors, interests and incentives.
As Greg said here
https://www.reddit.com/r/Bitcoin/comments/3b0593/to_fork_or_not_to_fork/cshphic?context=3
there are multiple things we should usefully do with increased
bandwidth:
a) improve decentralisation and hence security/policy
neutrality/fungibility (which is quite weak right now by a number of
measures)
b) improve privacy (privacy features tend to consume bandwidth, eg see
the Confidential Transactions feature) or more incremental features.
c) increase throughput
I think some of the bandwidth available within tech limits should be
pre-allocated to decentralisation improvements, given a) above.
And I think that we should also see work to improve decentralisation
with better pooling protocols that people are working on, to remove
some of the artificial centralisation in the system.
Secondly, on the interests and incentives - miners also play an
important part in the ecosystem and have gone through some lean times;
they may not be overjoyed to hear a plan to just whack the block-size
up to 8MB. While it's true (within some limits) that miners could
collectively keep blocks smaller, there is the ongoing reality that
someone can break ranks and accept any fee, however de minimis, if
there is a huge excess of space relative to current demand, and drive
fees to zero for a few years. A major thing currently preserving fees
is wallet defaults, which could be overridden (plus protocol
velocity/fee limits).
I think solutions that see growth scale more smoothly - like Jeff
Garzik's, Greg Maxwell's and Gavin Andresen's (though Gavin's starts
with a step) - are far less likely to create perverse unforeseen
side-effects. Well, we can foresee this particular effect, but the
market and game theory can surprise you, so I think you generally want
the game-theory & market effects to operate within some more smoothly
changing caps, with some user or miner mutual control of the cap.
So to be concrete, here are some hypotheticals (unvalidated numbers):
a) X MB cap with miner policy limits (simple, lasts a while)
b) starting at 1MB and growing to a 2*X MB cap with a 10%/year growth
limiter + policy limits
c) starting at 1MB and growing to a 3*X MB cap with a 15%/year growth
limiter + Jeff Garzik's miner vote
d) starting at 1MB and growing to a 4*X MB cap with a 20%/year growth
limiter + Greg Maxwell's flexcap
I think it would be good to see some tests of achievable network
bandwidth on a range of networks, but as an illustration say X is 2MB.
Rationale being the weaker the signalling mechanism between users and
user demanded size (in most models communicated via miners), the more
risk something will go in an unforeseen direction and hence the lower
the cap and more conservative the growth curve.
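To make the growth-limited hypotheticals in (b)-(d) concrete, here is
a minimal sketch (the function name and the X = 2MB illustration are
my assumptions for exposition, not part of any proposal):

```python
# Hypothetical sketch only: how a growth-limited block-size cap like
# options (b)-(d) above could evolve. Numbers are illustrative.

def capped_growth(start_mb, annual_growth, max_mb, years):
    """Cap after `years`, compounding `annual_growth` per year from
    `start_mb`, never exceeding the final cap `max_mb`."""
    return min(max_mb, start_mb * (1 + annual_growth) ** years)

# Option (b): 1MB growing at 10%/year toward a 2*X cap, with X = 2MB.
for year in (0, 5, 10, 15):
    print(year, round(capped_growth(1.0, 0.10, 4.0, year), 2))
```

Under these illustrative numbers the cap reaches about 1.61MB after 5
years, 2.59MB after 10, and hits the 4MB ceiling around year 15.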
The 15% growth limiter is deliberately not Nielsen's law. Akamai have
data on what they serve, and it's more like 15% per annum, though very
variable by country
http://www.akamai.com/stateoftheinternet/soti-visualizations.html#stoi-graph
Cisco expect home DSL to double in 5 years
(http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/VNI_Hyperconnectivity_WP.html
), which is about the same number.
(Thanks to Rusty for data sources for 15% number).
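As a quick sanity check on that figure (my arithmetic, not from the
sources): 15%/year compounded really is roughly a doubling over five
years, which is why the Akamai and Cisco numbers agree.

```python
# 15% annual growth compounded over 5 years: 1.15**5 ~= 2.01,
# i.e. approximately the "doubles in 5 years" Cisco estimate.
growth_5yr = 1.15 ** 5
print(round(growth_5yr, 2))  # prints 2.01
```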
This also supports the claim I have made a few times here: that it is
not realistic to support massive growth without algorithmic
improvement from Lightning-like or extension-block-like opt-in
systems. People who are proposing that we ramp block-sizes to create
big headroom are, I think from what has been said over time (often
without advertising it clearly), actually assuming and being OK with
the idea that full nodes move into data-centers, period, and that
small-business/power-user validation becomes a thing of the distant
past.
Further the aggressive auto-growth risks seeing that trend continuing
into higher tier data-centers with negative implications for
decentralisation. The odd proponent seems OK with even that too.
Decentralisation is key to Bitcoin's security model and its
differentiating properties. I think those aggressive growth numbers
stray into the zone of losing efficiency. By which I mean: in
scalability or privacy systems, if you make a trade-off too far, it
becomes time to re-assess what you're doing. For example, at that
level of centralisation, alternative designs are more network
efficient while achieving the same effective (weak) decentralisation.
In Bitcoin I see this as a strong argument not to push things to that
extreme: the core functionality must remain for Lightning and other
scaling approaches to remain secure, by using Bitcoin as a secure
anchor. If we heavily centralise and weaken the security of the main
Bitcoin chain, there remains nothing secure to build on.
Therefore I think it's more appropriate for high scale to rely on
Lightning, or for semi-centralised trade-offs to live in the
side-chain model or similar, where the higher risk of centralisation
is opt-in and not exposed back (due to the security firewall) to the
Bitcoin network itself.
People who would like to try the higher-tier data-center,
high-bandwidth throughput route should in my opinion run that
experiment as a layer-2 side-chain or analogous. There are a few ways
to do that, and it would be appropriate to my mind that we discuss
them here also.
An experiment like that could run in parallel with lightning, maybe it
could be done faster, or offer different trade-offs, so could be an
interesting and useful thing to see work on.
> On Tue, Jun 30, 2015 at 12:25 PM, Peter Todd <pete@petertodd.org> wrote:
>> Which of course raises another issue: if that was the plan, then all you
>> can do is double capacity, with no clear way to scaling beyond that.
>> Why bother?
A secondary function can be a market signalling - market evidence
throughput can increase, and there is a technical process that is
effectively working on it. While people may not all understand the
trade-offs and decentralisation work that should happen in parallel,
nor the Lightning protocol's expected properties - they can appreciate
perceived progress and an evidently functioning process. Kind of a
weak rationale, from a purely technical perspective, but it may have
some value, and is certainly less risky than a unilateral fork.
As I recall Gavin has said things about this area before also
(demonstrate throughput progress to the market).
Another factor that people have raised, which I fairly much agree
with, is that if we can choose something conservative that there is
wide-spread support for, it can be safer to do it with moderate lead
time. Then if there is an implied 3-6mo lead time, we are maybe
projecting ahead a bit further on block-size utilisation. Of course
the risk is we overshoot demand, but there probably should be some
balance between that risk and the risk of doing a more rushed change
that requires system-wide upgrade of all non-SPV software, where
stragglers risk losing money.
As well as scaling block-size within tech limits, we should include a
commitment to improve decentralisation, and I think any proposal
should be reasonably well analysed in terms of bandwidth assumptions
and game-theory. eg In IETF documents they have a security
considerations section, and sometimes a privacy section. In BIPs
maybe we need a security, privacy and decentralisation/fungibility
section.
Adam
NB: some new list participants may not be aware that miners are
imposing local policy limits, e.g. at 750kB, and that a 250kB policy
existed in the past; those limits saw utilisation and were
unilaterally and unevenly increased. I'm not sure if anyone has a
clear picture of what limits are imposed by hash-rate even today.
That's why Pieter posed the question - are we already at the policy
limit? Maybe the blocks we're seeing are closely tracking policy
limits; it would be useful if someone mapped that and asked miners by
hash-rate etc.
On 30 June 2015 at 18:35, Michael Naber <mickeybob@gmail.com> wrote:
> Re: Why bother doubling capacity? So that we could have 2x more network
> participants of course.
>
> Re: No clear way to scaling beyond that: Computers are getting more capable
> aren't they? We'll increase capacity along with hardware.
>
> It's a good thing to scale the network if technology permits it. How can you
> argue with that?