Yeah, but increasing the block size is not a long-term solution.
Are you sure? That sort of statement is hard to answer because it doesn't
say what you think long term is, or how much you expect Bitcoin to grow.
Satoshi thought it was a perfectly fine long term solution because he
thought hardware
Hi Adam,
I am still confused about whether you actually support an increase in the
block size limit to happen right now. As you agree that this layer 2 you
speak of doesn't exist yet, and won't within the next 10-12 months (do you
actually agree with that?), can you please state clearly that you will
Or alternatively, fix the reasons why users would have negative
experiences with full blocks
It's impossible, Mark. *By definition* if Bitcoin does not have sufficient
capacity for everyone's transactions, some users who were using it will be
kicked out to make way for the others. Whether
We already removed the footer because it was incompatible with DKIM
signing. Keeping the [Bitcoin-dev] prepend tag in subject is compatible
with DKIM header signing only if the poster manually prepends it in their
subject header.
I still see footers being added to this list by
The new list currently has footers removed during testing. I am not
pleased with the need to remove the subject tag and footer to be more
compatible with DKIM users.
Lists can do what are effectively MITM attacks on people's messages in any
way they like, if they re-sign the messages
And allegations that the project is run like Wikipedia or an edit war
are verifiably untrue.
Check the commit history.
This was a reference to a post by Gregory on Reddit where he said if Gavin
were to do a pull request for the block size change and then merge it, he
would revert it. And I
If you think it's not clear enough, which may explain why you did not even
attempt to follow it for your block size increase, feel free to make
improvements.
As the outcome of a block size BIP would be a code change to Bitcoin Core,
I cannot make improvements, only ask for them. Which is
So then: make a proposal for a better process, post it to this list.
Alright. Here is a first cut of my proposal. It can be inserted into an
amended BIP 1 after "What belongs in a successful BIP?". Let me know what
you think.
The following section applies to BIPs that affect the block chain
Hi Bryan,
Specifically, when Adam mentioned your conversations with non-technical
people, he did not mean "Mike has talked with people who have possibly not
made pull requests to Bitcoin Core, so therefore Mike is a non-programmer."
Yes, my comment was prickly and grumpy. No surprises, I did
How do you plan to deal with security incident response for the
duration you describe where you will have control while you are deploying
the unilateral hard-fork and being in sole maintainership control?
How do we plan to deal with security incident response - exactly the same
way as
are only connected to each other through a slow 2 Mbit/s link.
That's very slow indeed. For comparison, plain old 3G connections routinely
cruise around 7-8 Mbit/sec.
So this simulation is assuming a speed dramatically worse than a mobile
phone can get!
Sure, and you did indeed say that.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
If we assume that transactions are being dropped in an unpredictable way
when blocks are full, knowing the network congestion *right now* is
critical, and even then you just have to hope that someone who wants that
space more than you do doesn't show up after you disconnect.
Yeah, my
Re: dropped in an unpredictable way - transactions would be dropped
lowest fee/KB first, a completely predictable way.
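The eviction policy described here can be sketched in a few lines. This is illustrative only; the names and tuple layout are hypothetical, not Bitcoin Core's actual mempool API:

```python
def evict_for_space(mempool, bytes_needed):
    """Drop transactions lowest fee-per-byte first until enough space is freed.

    `mempool` is a list of (txid, size_bytes, fee_satoshis) tuples.
    """
    by_feerate = sorted(mempool, key=lambda tx: tx[2] / tx[1])
    freed, evicted = 0, []
    while by_feerate and freed < bytes_needed:
        txid, size, fee = by_feerate.pop(0)
        evicted.append(txid)
        freed += size
    return evicted

pool = [("a", 250, 1000), ("b", 250, 5000), ("c", 500, 1500)]
print(evict_for_space(pool, 600))  # ['c', 'a'] -- lowest fee rates go first
```

The policy is deterministic given the mempool's contents; the user-side unpredictability argued below comes from not knowing what everyone else will pay.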
Quite agreed.
No, Aaron is correct. It's unpredictable from the perspective of the user
sending the transaction, and as they are the ones picking the fees, that is
what
1,000 *people* in control vs. 10 is two orders of
magnitude more decentralized.
Yet Bitcoin has got worse by all these metrics: there was a time before
mining pools when there were ~thousands of people mining with their local
CPUs and GPUs. Now the number of full nodes that matter for block
!
On Thu, Apr 9, 2015 at 10:23 PM, Mike Hearn m...@plan99.net wrote:
Next week on April 15th Gavin, Wladimir, Corey and myself will be at
DevCore London:
https://everyeventgives.com/event/devcore-london
If you're in town why not come along?
It's often the case that conferences can be just
But the majority of the hashrate can now perform double spends on your
chain! They can send bitcoins to exchanges, sell them, extract the money and
build a new longer chain to get their bitcoins back.
Obviously if the majority of the mining hash rate is doing double spending
attacks on
It's surprising to see a core dev going to the public to defend a proposal
most other core devs disagree on, and then lobbying the Bitcoin ecosystem.
I agree that it is a waste of time. Many agree. The Bitcoin ecosystem
doesn't really need lobbying - my experience from talking to businesses
Ignorant. You seem not to understand the current situation. We
suffered from orphans a lot when we started in 2013. It is now your
turn.
Then please enlighten me. You're unable to download block templates from a
trusted node outside of the country because the bandwidth requirements are
too
I don't see this as an issue of sensitivity or not. Miners are businesses
that sell a service to Bitcoin users - the service of ordering transactions
chronologically. They aren't charities.
If some miners can't provide the service Bitcoin users need any more, then
OK, they should not/cannot mine.
(at reduced security if it has software that doesn't understand it)
Well, yes. Isn't that rather key to the issue? Whereas by simply
increasing the block size, SPV wallets don't care (same security and
protocol as before) and fully validating wallets can be updated with a very
small code
By the time a hard fork can happen, I expect average block size will be
above 500K.
Yes, possibly.
Would you support a rule that was larger of 1MB or 2x average size ?
That is strictly better than the situation we're in today.
It is, but only by a trivial amount - hitting the limit is
If the plan is a fix once and for all, then that should be changed too.
It could be set so that it is at least some multiple of the max block size
allowed.
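The rule being proposed in this exchange is simple enough to write down. A sketch, with the floor and multiple as assumed parameters:

```python
def max_block_size(recent_sizes, floor=1_000_000, multiple=2):
    """The rule under discussion: the larger of a fixed floor (1 MB)
    and some multiple of the recent average block size."""
    avg = sum(recent_sizes) / len(recent_sizes)
    return max(floor, multiple * avg)

print(max_block_size([400_000, 600_000]))  # 1,000,000 -- the floor binds
print(max_block_size([700_000, 900_000]))  # 1,600,000 -- 2x average takes over
```

How the averaging window is chosen and whether miners can game it by stuffing their own blocks are the open questions such a rule would need to answer.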
Well, but RAM is not infinite :-) Effectively what these caps are doing is
setting the minimum hardware requirements for running a
The measure is miner consensus. How do you intend to measure
exchange/merchant acceptance?
Asking them.
In fact, we already have. I have been talking to well known people and CEOs
in the Bitcoin community for some time now. *All* of them support bigger
blocks, this includes:
- Every
And looking at the version (aka user-agent) strings of publicly reachable
nodes on the network.
(e.g. see the count at https://getaddr.bitnodes.io/nodes/ )
Yeah, though FYI Luke informed me last week that I somehow managed to take
out the change to the user-agent string in Bitcoin XT,
Twenty is scary.
To whom? The only justification for the max size is DoS attacks, right?
Back when Bitcoin had an average block size of 10kb, the max block size was
100x the average. Things worked fine, nobody was scared.
The max block size is really a limit set by hardware capability, which
The prior (and seemingly this) assurance contract proposals pay the
miners who mine a chain supportive of your interests and miners who
mine against your interests identically.
The same is true today - via inflation I pay for blocks regardless of
whether they contain or double spend my
Sequence numbers appear to have been originally intended as a mechanism
for transaction replacement within the context of multi-party transaction
construction, e.g. a micropayment channel.
Yes indeed they were. Satoshi's mechanism was more general than micropayment
channels and could do HFT
Mike, this proposal was purposefully constructed to maintain as well as
possible the semantics of Satoshi's original construction. Higher sequence
numbers -- chronologically later transactions -- are able to hit the chain
earlier, and therefore it can be reasonably argued will be selected by
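The replacement semantics being preserved can be sketched as follows. This is a simplified illustration of the rule as described in the thread, not Satoshi's actual code, which had further conditions:

```python
def should_replace(old_seqs, new_seqs):
    """Sequence-number replacement (sketch): a candidate replaces its
    in-mempool conflict only if no input's sequence number decreases
    and at least one increases -- i.e. it is chronologically later."""
    if len(old_seqs) != len(new_seqs):
        return False
    return all(n >= o for o, n in zip(old_seqs, new_seqs)) and \
           any(n > o for o, n in zip(old_seqs, new_seqs))

print(should_replace([0, 5], [0, 6]))  # True  -- later version wins
print(should_replace([3], [2]))        # False -- cannot roll back
```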
I wrote an article that explains the hashing assurance contract concept:
https://medium.com/@octskyward/hashing-7d04a887acc8
(it doesn't contain an in depth protocol description)
--
Very interesting Matt.
For what it's worth, in future bitcoinj is very likely to bootstrap from
Cartographer nodes (signed HTTP) rather than DNS, and we're also steadily
working towards Tor by default. So this approach will probably stop working
at some point. As breaking PorcFest would kind of
Hi Andrew,
Your belief that Bitcoin has to be constrained by the belief that hardware
will never improve is extremist, but regardless, your concerns are easy to
assuage: there is no requirement that the block chain be stored on hard
disks. As you note yourself the block chain is used for
Hi Thomas,
My problem is that this seems to lack a vision.
Are you aware of my proposal for network assurance contracts?
There is a discussion here:
https://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg07552.html
But I agree with Gavin that attempting to plan for 20
some wallets (e.g., Andreas Schildbach's wallet) don't even allow it - you
can only spend confirmed UTXOs. I can't tell you how aggravating it is to
have to tell a friend, "Oh, oops, I can't pay you yet. I have to wait for
the last transaction I did to confirm first." All the more aggravating
If capacity grows, fewer individuals would be able to run full nodes.
Hardly. Nobody is currently exhausting the CPU capacity of even a normal
computer, and even if we did a 20x increase in load overnight,
that still wouldn't even warm up most machines good enough to be always on.
Wallets are incentivised to do a better job with defragmentation already,
as if you have lots of tiny UTXOs then your fees end up being huge when
trying to make a payment.
The reason they largely don't is just one of manpower. Nobody is working on
it.
As a wallet developer myself, one way I'd
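The cost of fragmentation is easy to put numbers on. A rough sketch using the commonly cited P2PKH size model (~10 bytes of overhead, ~148 bytes per input, ~34 bytes per output); the fee rate is an assumption for illustration:

```python
def estimate_fee(n_inputs, n_outputs=2, sat_per_byte=50):
    """Rough P2PKH transaction size model, showing why spending many
    tiny UTXOs is so much more expensive than spending one large one."""
    size = 10 + 148 * n_inputs + 34 * n_outputs
    return size * sat_per_byte

print(estimate_fee(1))   # 11,300 satoshis for a single-input spend
print(estimate_fee(40))  # 299,900 -- a fragmented wallet pays ~26x more
```

A defragmenting wallet would consolidate inputs during low-fee periods, so that payments later need only one or two inputs.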
Very nice Emin! This could be very useful as a building block for oracle
based services. If only there were opcodes for working with X.509 ;)
I'd suggest at least documenting in the FAQ how to extract the data from
the certificate:
openssl pkcs12 -in virtual-notary-cert-stocks-16070.p12 -nodes
CPFP also solves it just fine.
--
One dashboard for servers and applications across Physical-Virtual-Cloud
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give
* Though there are many proposals floating around which could
significantly decrease block propagation latency, none of them are
implemented today.
With a 20mb cap, miners still have the option of the soft limit.
I would actually be quite surprised if there were no point along the road
Alan argues that 7 tps is a couple orders of magnitude too low
By the way, just to clear this up - the real limit at the moment is more
like 3 tps, not 7.
The 7 transactions/second figure comes from calculations I did years ago,
in 2011. I did them a few months before the sendmany command was
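The arithmetic behind both figures is the same, differing only in the assumed average transaction size:

```python
def max_tps(block_bytes=1_000_000, avg_tx_bytes=250, block_interval_s=600):
    """Max sustained transactions/second under a 1 MB cap. The 250-byte
    average is the assumption behind the original 2011 estimate."""
    return block_bytes / avg_tx_bytes / block_interval_s

print(round(max_tps(), 1))                    # 6.7 -- the oft-quoted "7 tps"
print(round(max_tps(avg_tx_bytes=500), 1))    # 3.3 -- closer to the ~3 tps above
```

As average transactions have grown (more inputs, multisig), the real ceiling has drifted downwards even though the byte limit is unchanged.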
There are certainly arguments to be made for and against all of these
proposals.
The fixed 20mb cap isn't actually my proposal at all, it is from Gavin. I
am supporting it because anything is better than nothing. Gavin originally
proposed the block size be a function of time. That got dropped, I
Looks like a neat solution, Tier.
Hey Matt,
OK, let's get started
However, there hasn't been any discussion on this
mailing list in several years as far as I can tell.
Probably because this list is not a good place for making progress or
reaching decisions. Those are triggered by pull requests (sometimes).
If you're
The only answer to this that anyone with a clue should give is it
will very, very likely be able to support at least 1MB blocks roughly
every 10 minutes on average for the next eleven years, and it seems
likely that a block size increase of some form will happen at some point in
the next
I think you are rubbing against your own presupposition that people must
find an alternative right now. Quite a lot here do not believe there is
any urgency, nor that there is an imminent problem that has to be solved
before the sky falls in.
I have explained why I believe there is some
These statements may even be true, but they're no logical conclusions
even if they seem obvious to you.
I don't think those claims are strictly true, especially because they
involve predictions about what people will do.
But if they're true they require some proof or at least some explanation.
The appropriate method of doing any fork, that we seem to have been
following for a long time, is to get consensus here and on IRC and on
github and *then* go pitch to the general public
So your concern is just about the ordering and process of things, and not
about the change itself?
I
Can you please elaborate on what terrible things will happen if we
don't increase the block size by winter this year?
I was referring to winter next year. 0.12 isn't scheduled until the end of
the year, according to Wladimir. I explained where this figure comes from
in this article:
If his explanation was "I will change my mind after we increase block
size", I guess the community should say "then we will just ignore your
nack because it makes no sense."
Oh good! We can just kick anyone out of the consensus process if we think
they make no sense.
I guess that means me and
What gives Bitcoin value aren't its technical merits but the fact that
people believe in it.
Much of the belief in Bitcoin is that it has a bright future. Certainly the
huge price spikes we've seen were not triggered by equally large spikes in
usage - it's speculation on that future.
I quite
It is an argument against my admittedly vague definition of
non-controversial change.
If it's an argument against something you said, it's not a straw man, right
;)
Consensus has to be defined as agreement between a group of people. Who are
those people? If you don't know, it's impossible to
Dear list,
Apparently my emails are being marked as spam, despite being sent from
GMail's web interface. I've pinged our sysadmin.
It's a problem with the mailing list software, not your setup. BitPay could
disable the phishing protections but that seems like a poor solution. The
only real
It is a trivial *code* change. It is not a trivial change to the
economics of a $3.2B system.
Hmm - again I'd argue the opposite.
Up until now Bitcoin has been unconstrained by the hard block size limit.
If we raise it, Bitcoin will continue to be unconstrained by it. That's the
default
1. There will be a 1:1 relationship between a payment code owner and their
identity.
Bear in mind, the spec defines identity to mean:
*Identity is a particular extended public/private key pair.*
So that's not quite what is meant normally by identity. It's not a
government / real name
Could you maybe write a short bit of text comparing this approach to
extending BIP70 and combining it with a simple Subspace style
store-and-forward network?
Next week on April 15th Gavin, Wladimir, Corey and myself will be at
DevCore London:
https://everyeventgives.com/event/devcore-london
If you're in town why not come along?
It's often the case that conferences can be just talking shops, without
much meat for real developers. So in the
I don't think it's quite a blank check, but it would enable replay attacks
in the form of sending the money to the same place it was sent before if an
address ever receives coins again.
Right, good point. I wonder if this sort of auto forwarding could even be a
useful feature. I can't think
I've written a couple of blog posts on replace by fee and double spending
mitigations. They sum up the last few years (!) worth of discussions on
this list and elsewhere, from my own perspective.
I make no claim to be comprehensive or unbiased but I keep being asked
about these topics so figured
It sounds like the main issue is this is a web wallet server of some kind.
If the clients were SPV then they'd be checking their own balances and
downloading their own tx history, which would mean the coordination tasks
could be done by storing encrypted blobs on the server rather than the
server
As soon as that PaymentRequest leaves the wallet on its way to the hotel
server, it is up for grabs
Is it? I'm assuming TLS is being used here. And the hotel server also has a
copy of the PaymentRequest, as the hotel actually issued it and that's how
they're deciding the receipt is valid. So
That would be rather new and tricky legal territory.
But even putting the legal issues to one side, there are definitional
issues.
For instance if the Chainalysis nodes started following the protocol specs
better and became just regular nodes that happen to keep logs, would that
still be a
I'm not talking about keeping logs, I mean purporting to be a network
peer in order to gain a connection slot and then not behaving as one
(not relaying transactions)
That definition would include all SPV clients?
I get what you are trying to do. It just seems extremely tricky.
Hi Kalle,
I think you're thinking along the right lines, but I am skeptical that this
protocol adds much. A saved payment request is meant to be unique per
transaction e.g. because the destination address is unique for that payment
(for privacy reasons). Where would you store the signed payment
You are killing us Mike! :) We really don't like to think that BWS is
a webwallet. Note
that private keys are not stored (not even encrypted) at the server.
Sure, sorry, by web wallet I meant a blockchain.info/CoPay type setup where
the client has the private keys and signs txns, but
Don't SPV clients announce their intentions by the act of uploading a
filter?
Well they don't set NODE_NETWORK, so they don't claim to be providing
network services. But then I guess the Chainalysis nodes could easily just
clear that bit flag too.
What I'd actually like to see is for
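The service-bit check discussed here amounts to one bitwise test. NODE_NETWORK is bit 1 of the services field in the P2P version message:

```python
NODE_NETWORK = 1  # service bit from the P2P protocol's version message

def claims_full_node(services: int) -> bool:
    """A peer advertising NODE_NETWORK claims to serve the full chain.
    SPV clients leave it unset -- which is why the flag alone cannot
    distinguish an honest SPV wallet from a surveillance node that
    simply clears the bit."""
    return bool(services & NODE_NETWORK)

print(claims_full_node(1))  # True  -- typical full node
print(claims_full_node(0))  # False -- SPV client, or a node hiding itself
```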
b) Creation date is just a short-term hack.
I agree, but we need things to be easy in the short term as well as the
long term :)
The long term solution is clearly to have the 12 word seed be an encryption
key for a wallet backup with all associated metadata. We're heading in that
direction
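BIP39 already stretches the 12 words into key material, and the same construction could feed a backup-encryption key. A sketch of the standard derivation (2048 rounds of PBKDF2-HMAC-SHA512, salted with "mnemonic" plus the passphrase):

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """Standard BIP39 mnemonic-to-seed derivation: 64 bytes out."""
    norm = lambda s: unicodedata.normalize("NFKD", s)
    return hashlib.pbkdf2_hmac(
        "sha512",
        norm(mnemonic).encode(),
        ("mnemonic" + norm(passphrase)).encode(),
        2048,
        dklen=64,
    )

seed = bip39_seed("legal winner thank year wave sausage worth "
                  "useful legal winner thank yellow")
print(seed.hex()[:16])
```

A wallet heading in the direction described above would derive a second, independent key from the same words (e.g. via a different salt) to encrypt the metadata backup, so one set of words restores everything.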
bitcoinj also uses this convention.
--
Dive into the World of Parallel Programming The Go Parallel Website, sponsored
by Intel and developed in partnership with Slashdot Media, is your hub for all
things parallel software
Sigh. The wallet words system is turning into kind of a mess.
I thought the word list is in fact not a fixed part of the spec, because
the entropy is a hash of the words. But perhaps I'm misunderstanding
something.
The main problem regular SPV wallets have with BIP39 is that there is no
birth
I'd like to offer that the best practice for the shared wallet use case
should be multi-device multi-sig.
Sure. But in practice people will want to have a pool of spending money
that they can spend when they are out and about, and also with one click
from their web browser on their primary
Users will want to have wallets shared between devices, it's as simple as
that, especially for mobile/desktop wallets. Trying to stop them from doing
that by making things gratuitously incompatible isn't the right approach:
they'll just find workarounds or wallet apps will learn how to import
Nice, Andrew.
Just one minor point. SPV clients do not need to maintain an ever growing
list of PoW solutions. BitcoinJ uses a ring buffer with 5000 headers and
thus has O(1) disk usage. Re-orgs past the event horizon cannot be
processed but are assumed to be sufficiently rare that manual
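The ring-buffer idea can be sketched with a bounded deque; the class and field names here are illustrative, not bitcoinj's actual API:

```python
from collections import deque

class HeaderRingBuffer:
    """Keep only the last N block headers, so storage is O(1) in chain
    length. Re-orgs deeper than N cannot be processed, but are assumed
    rare enough to handle manually."""
    def __init__(self, capacity=5000):
        self.headers = deque(maxlen=capacity)

    def append(self, header):
        self.headers.append(header)  # oldest header falls off automatically

    def can_handle_reorg(self, depth):
        return depth < len(self.headers)

buf = HeaderRingBuffer(capacity=5000)
for height in range(10_000):
    buf.append({"height": height})
print(len(buf.headers))           # 5000 -- bounded regardless of chain length
print(buf.can_handle_reorg(100))  # True
```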
Congrats Thomas! Glad to see Electrum 2 finally launch.
* New seed derivation method (not compatible with BIP39).
Does this mean a 12 words wallet created by Electrum won't work if
imported into some other wallet that supports BIP39? Vice versa? This seems
unfortunate. I guess if seeds are
Does this not also require the BT publication of the script for a P2SH
address?
You mean if the URI you're serving is like this?
bitcoin:3aBcD?bt=
Yes it would. I guess then, the server would indicate both the script, and
the key within that script that it wanted to use. A
Andreas' wallet supports this, but don't do it. Payment requests can get
larger in future even without signing. Exploding the capacity of a QR code
is very easy.
Instead, take a look at the Bluetooth/NFC discussion happening in a
different thread.
On Tue, Feb 24, 2015 at 4:58 PM, Oleg Andreev
This happened to one of the merchants at the Bitcoin 2013 conference in
San Jose. They sold some T-shirts and accepted zero-confirmation
transactions. The transactions depended on other unconfirmed transactions,
which never confirmed, so this merchant never got their money.
Beyond the fact
DHKE will not improve the situation. Either we use a simple method to
transfer a session key or a complex method.
You're right that just sending the session key is simpler. I originally
suggested doing ECDHE to set up an encrypted channel for the following
reasons:
1. URIs are put in QR
I read from your answer that even if we use ECDHE, we can't use it for
every situation.
Which situations do you mean? I think it can be used in every situation.
It's the opposite way around - a fixed session key in the URI cannot be
used in every situation.
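The shape of the exchange being argued for looks like this. A toy finite-field Diffie-Hellman using the 768-bit Oakley group from RFC 2409, purely to illustrate why each session gets a fresh key; a real wallet would use ECDH over secp256k1 with a vetted library, not this:

```python
import secrets

P = 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF
G = 2

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

wallet_priv, wallet_pub = keypair()
terminal_priv, terminal_pub = keypair()

# Each side combines its own private key with the other's public key;
# both arrive at the same shared secret without it crossing the wire.
shared_wallet = pow(terminal_pub, wallet_priv, P)
shared_terminal = pow(wallet_pub, terminal_priv, P)
print(shared_wallet == shared_terminal)  # True -- a fresh key per session
```

This is the contrast with a fixed session key embedded in the URI: anyone who sees the URI later can decrypt old traffic, whereas an ephemeral exchange cannot be replayed.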
At the moment I'm also modifying BitPay's memo field to contain 'ack', as
Andreas' wallet otherwise reports a failure if I transmit the original via
Bluetooth. :-)
Huh?
--
Download BIRT iHub F-Type - The Free
But via Bluetooth it checks for 'ack' directly:
We need a BIP70 conformance suite really. There are so many deviations from
the spec out there already and it's brand new :(
I don't see how you propose to treat the bitcoin address as a secp256k1
public key, or do you mean something else?
Sorry, I skipped a step. I shouldn't make assumptions about what's obvious.
The server would provide the public key and the client would convert it to
address form then match
Adam seems to be making sense to me. Only querying a single node when an
address in my wallet matches the block filter seems to be pretty efficient.
No, I think it's less efficient (for the client).
Quick sums: blocks with 1500 transactions in them are common today. But
Bitcoin is growing.
For downloading transactions: unless you frequently receive
transactions you won't be fetching every block. Or are you assuming
bloom filters dialled up to the point of huge false positives? You
said otherwise.
Well, what I mean is, bitcoinj already gets criticised for having very low
FP
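The trade-off being argued over follows from the standard Bloom filter false-positive estimate:

```python
from math import exp

def bloom_fp_rate(n_elements, m_bits, k_hashes):
    """Classic Bloom filter false-positive estimate: (1 - e^(-kn/m))^k.
    For SPV privacy it cuts both ways: a low FP rate reveals exactly
    which transactions are yours; a high one forces you to download
    much more of each block."""
    return (1 - exp(-k_hashes * n_elements / m_bits)) ** k_hashes

# 100 wallet elements in a 4096-bit filter with 5 hash functions:
print(bloom_fp_rate(100, 4096, 5))
# The same filter, deliberately overloaded for privacy:
print(bloom_fp_rate(1000, 4096, 5))
```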
Hi Fikret,
This is the wrong mailing list for such questions. Most Bitcoin ATM's are
commercial products anyway and don't accept contributors. If you find one
that is different, it's better to contact them directly.
On Fri, Feb 20, 2015 at 5:19 PM, Fikret AKIN i...@fikretakin.com wrote:
Ah, I see, I didn't catch that this scheme relies on UTXO commitments
(presumably with Mark's PATRICIA tree system?).
If you're doing a binary search over block contents then does that imply
multiple protocol round trips per synced block? I'm still having trouble
visualising how this works.
This is talking about a committed bloom filter. Not a committed UTXO set.
I read the following comment to mean it requires the UTXO commitments.
Otherwise I'm not sure how you prove absence of withholding with just
current block structures+an extra filter included in the block:
but with the
So now they ask a full node for merkle paths + transactions for the
addresses from the UTXO set from the block(s) that it was found in.
This is the part where I get lost. How does this improve privacy? If I have
to specify which addresses are mine in this block, to get the tx data, the
node
It's a straightforward idea: there is a scriptpubkey bitmap per block
which is committed. Users can request the map, and be SPV confident
that they received a faithful one. If there are hits, they can request
the block and be confident there was no censoring.
OK, I see now, thanks Gregory.
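A sketch of the client-side check Gregory describes. The map construction here is illustrative only, not an actual Bitcoin consensus structure; the point is that a committed filter has no false negatives, so a hit means "fetch the block", and the commitment stops the server lying by omission:

```python
import hashlib

M_BITS = 1 << 16  # filter size, an arbitrary choice for this sketch

def scriptpubkey_map(block_scripts):
    """Set one bit per scriptPubKey appearing in the block."""
    bits = bytearray(M_BITS // 8)
    for script in block_scripts:
        i = int.from_bytes(hashlib.sha256(script).digest()[:4], "big") % M_BITS
        bits[i // 8] |= 1 << (i % 8)
    return bytes(bits)

def may_contain(filter_bytes, script):
    """No false negatives: if this returns False, the block cannot
    contain the script. True means download the block and check."""
    i = int.from_bytes(hashlib.sha256(script).digest()[:4], "big") % M_BITS
    return bool(filter_bytes[i // 8] & (1 << (i % 8)))

f = scriptpubkey_map([b"script-a", b"script-b"])
print(may_contain(f, b"script-a"))  # True -- fetch the full block to confirm
```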
He didn't say a project for all possible language bindings, just
Java bindings. Other languages' bindings would be separate projects.
Yes/no/sorta.
Java/JNA bindings can be used from Python, Ruby, Javascript, PHP as well as
dialects of Haskell, Lisp, Smalltalk and a bunch of more obscure
history. Lots of miners have dropped out due to hardware obsolescence,
yet
massive double spending hasn't happened.
How many thousands of BTC must be stolen by miners before you'd agree
that it has, in fact, happened?
(https://bitcointalk.org/index.php?topic=321630.0)
Hmm. I think this
You cannot consider the outcome resulting from replace-by-fee fraudulent,
as it could be the world as observed by some.
Fraudulent in what sense?
If you mean the legal term, then you'd use the legal beyond reasonable
doubt test. You mined a double spend that ~everyone thinks came 5 minutes
1. They won't be attacking Bitcoin, they will attack merchants who accept
payments with 0 confirmations.
Which is basically all of them other than exchanges. Any merchant that uses
BitPay or Coinbase, for instance, or any physical shop.
If you want to play word games and redefine Bitcoin to
You can prove a doublespend instantly by showing two conflicting
transactions both signed by that party. This pair can be distributed as a
proof of malice globally in seconds via a push messaging mechanism.
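The core of such a proof is mechanical to verify. A sketch (signature checks omitted; an outpoint is a (txid, output_index) pair, and the dict layout is hypothetical):

```python
def is_doublespend_proof(tx1, tx2):
    """Two distinct transactions spending at least one common outpoint
    constitute the conflict proof described above: only the key holder
    could have signed both, so their existence proves intent."""
    if tx1["txid"] == tx2["txid"]:
        return False  # same transaction, no conflict
    return bool(set(tx1["inputs"]) & set(tx2["inputs"]))

a = {"txid": "aa", "inputs": [("prev", 0)]}
b = {"txid": "bb", "inputs": [("prev", 0)]}
print(is_doublespend_proof(a, b))  # True -- both spend outpoint ("prev", 0)
```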
There have been lots of e-cash schemes proposed in the academic literature
that work
But, let's say, 5 years from now, some faction of miners who own
soon-to-be-obsolete equipment will decide to boost their profits with a
replace-by-fee pool and a corresponding wallet. They can market it as "1 of
10 hamburgers are free" if they have 10% of the total hashpower.
Yes, like any
Are you not counting collateralized multisignature notaries? It's an
extended version of the Greenaddress.it model.
It makes unconfirmed transactions useless in the classical Bitcoin model.
Obviously if you introduce a trusted third party you can fix things, but
then you're back to having the
So you're just arguing that a notary is different to a miner, without
spelling out exactly why.
I'm afraid I still don't understand why you think notaries would build long
term businesses but miners wouldn't, in this model.
I think you are saying because notaries have identity, brand
So anyway, in my opinion, it is actually great that Bitcoin is still
relatively small: we have an opportunity to analyze and improve things.
But you seem to be hostile to people who do that (and who do not share
your opinion), which is kinda uncool.
To clarify once more, I'm all for people
I see no fundamental difference in outcome from miner collusion in
scorched-fee (which isn't guaranteed to pay the right pool!) and miner
collusion in knowingly mining a doublespend transaction.
Well, they're the same thing. Replace-by-fee *is* miner collusion in
knowingly mining a double
If you're interested in working on mining decentralisation, chipping away
at getblocktemplate support would be a better path forward. It's possible
to have decentralised pooled mining - I know it sounds like a contradiction
but it's not.
I wrote about some of the things that can be done in this
We can certainly imagine many BIP70 extensions, but for things like
auto-filling shipping addresses, is the wallet the best place to do it? My
browser already knows how to fill out this data in credit card forms, it
would make sense to reuse that for Bitcoin.
It sounds like you want a kind of
BLE meets a different use case than regular Bluetooth. BLE is designed to
allow always-on broadcast beacons which are conceptually similar to NFC
tags, with very low power requirements. The tradeoff for this ultra-low
power consumption and always on nature is the same as with NFC tags: you
get