In-Reply-To: <CABerxhEwA7Pz0hdSuOf+RwWZiZpY1fSArB+UiyVUwr6S2fr3vQ@mail.gmail.com>
References: <CABerxhEwA7Pz0hdSuOf+RwWZiZpY1fSArB+UiyVUwr6S2fr3vQ@mail.gmail.com>
From: Angel Leon <gubatron@gmail.com>
Date: Mon, 17 Aug 2015 08:38:06 -0400
Message-ID: <CADZB0_a448SROLVLiYb3-zVPaX0+u1FHwZHSkeLgdS1s2BBsOg@mail.gmail.com>
To: Rodney Morris <rodney.morris@gmail.com>
Cc: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Dynamically Controlled Bitcoin Block Size Max Cap
I've been sharing a similar solution for the past 2 weeks. I think 2016
blocks is too long to wait; we should instead look at the mean block size
during the last 60-120 minutes, and avert any crisis caused by
transactional spikes that could well come from organic use of the
network (Madonna sells her next tour tickets on Bitcoin, the OpenBazaar
network starts working as imagined, an XYZ startup really kicks ass and
succeeds in a couple of major cities with a major PR push).
Pseudo code in Python:
https://gist.github.com/gubatron/143e431ee01158f27db4
My idea stems from a simple scalability metric that affects real users and
the desire to use Bitcoin: the waiting time to get your transactions
confirmed on the blockchain. Anything past 45 minutes to an hour should be
unacceptable.
Initially I wanted to measure the mean time for a transaction to go from
being sent by the user (initial broadcast into mempools) to being
effectively confirmed on the blockchain, say for 2 blocks (an acceptable
15-20 minutes).
When blocks get full, people start waiting unacceptable times for their
transactions to come through unless they adjust their fees. The idea is to
avoid that situation at all costs and keep the network churning to the
extent of its capabilities, without pretending a certain size will be
right at some point in time. Nobody can predict the future, and nobody can
predict real organic usage peaks on an open financial network. Not all
sustained spikes will come from spammers; they will come from real-world
use as more and more people think of great uses for Bitcoin.
When I presented this idea of measuring the mean wait time for
transactions, I was told there's no way to reliably measure such a number:
there's no consensus while transactions are still in the mempool, and wait
times could be manipulated. Such an idea would have to add new timestamp
fields to transactions, or include the median wait time in the block
header (too complex, with additional storage costs).
This is an iteration on the next thing I believe we can all agree is 100%
accurately measured: block size. Full blocks are the reason many
transactions end up waiting in the mempool, so we should be able to use
the mean size of recent blocks to determine whether there's a legitimate
need to increase or reduce the maximum block size.
The idea is simple: if blocks start filling up past a certain threshold,
we double the block size limit starting with the next block. If blocks
remain within a healthy bound, transaction wait times should be as
expected for everyone on the network. If blocks are not getting that full
and the mean goes below a certain threshold, we halve the maximum block
size allowed until we reach the level we need. Similar to what we do with
hashing difficulty, it's something you can't predict, so no fixed or
predicted limits should be established.
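A minimal sketch of the adjustment rule described above, in Python. The
function name, the thresholds, and the 1 MB floor are illustrative
assumptions of mine, not values from the gist or from consensus code:

```python
# Illustrative sketch only: adjust the max block size limit from the
# mean size of blocks mined over the last 60-120 minutes. Thresholds
# and the minimum limit are assumed values, not part of the proposal.

FULLNESS_THRESHOLD = 0.9   # mean usage above this -> double the limit
SLACK_THRESHOLD = 0.5      # mean usage below this -> halve the limit
MIN_LIMIT = 1_000_000      # never shrink below the original 1 MB cap

def next_max_block_size(recent_block_sizes, current_limit):
    """Return the max block size for the next block, given the sizes
    (in bytes) of the blocks from the last 60-120 minutes."""
    if not recent_block_sizes:
        return current_limit
    mean_size = sum(recent_block_sizes) / len(recent_block_sizes)
    usage = mean_size / current_limit
    if usage > FULLNESS_THRESHOLD:
        # Blocks nearly full: double the limit starting next block.
        return current_limit * 2
    if usage < SLACK_THRESHOLD:
        # Plenty of slack: halve the limit, but keep the floor.
        return max(current_limit // 2, MIN_LIMIT)
    # Healthy band: leave the limit alone.
    return current_limit
```

For example, with a 1 MB limit and a run of ~950 KB blocks the limit would
double to 2 MB, and a run of lightly used 2 MB blocks would bring it back
down, mirroring how difficulty retargets in both directions.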