Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Joel Joonatan Kaartinen
such a contract is a possibility, but why would big owners give an
exclusive right to such pools? It seems to me it'd make sense to offer
those to any miner as long as they get paid a little for it, especially
when it's as simple as offering an incomplete transaction with the
appropriate SIGHASH flags.
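As a rough illustration of what such an offer could look like (a toy model, not any real wallet API; the SIGHASH handling is reduced to a label):

```python
# Toy model of a "days-destroyed for hire" offer. SIGHASH_SINGLE with
# ANYONECANPAY is the mode that lets a signature commit to a single
# input/output pair, so a miner can graft the input onto a transaction
# of their own; here it is just a string tag.

def make_offer(txout_value, days_held, fee_asked):
    """An incomplete transaction offering its coin-age to any miner."""
    return {
        "days_destroyed": txout_value * days_held,  # coin-age this spend releases
        "fee_asked": fee_asked,                     # small payment the owner wants
        "sighash": "SINGLE|ANYONECANPAY",           # miner may extend the tx
    }

def offer_rate(offer):
    """Price per bitcoin-day destroyed; a miner would shop for the lowest."""
    return offer["fee_asked"] / offer["days_destroyed"]

offers = [make_offer(50, 200, 10), make_offer(10, 1000, 40)]
best = min(offers, key=offer_rate)
```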

a part of the reason I like this idea is that it would allow stakeholders
a degree of influence over how large the fees are. At least on the surface,
the incentives look well matched: stakeholders have an incentive not to let
fees drop too low, so the network remains usable, and also not to raise
them too high, because that would push users into other systems. There'll
also be competition between stakeholders, which should keep the fees
reasonable.

I think this would at least be preferable to the "let the miner decide"
model.

- Joel

On Fri, May 8, 2015 at 7:51 PM, Peter Todd p...@petertodd.org wrote:

 On Fri, May 08, 2015 at 03:32:00PM +0300, Joel Joonatan Kaartinen wrote:
  Matt,
 
  It seems you missed my suggestion about basing the maximum block size on
  the bitcoin days destroyed in transactions that are included in the
 block.
  I think it has potential for both scaling as well as keeping up a
 constant
  fee pressure. If tuned properly, it should both stop spamming and
 increase
  block size maximum when there are a lot of real transactions waiting for
  inclusion.

 The problem with gating block creation on Bitcoin days destroyed is
 there's a strong potential of giving big mining pools a huge advantage,
 because they can contract with large Bitcoin owners and buy dummy
 transactions with large numbers of Bitcoin days destroyed on demand
 whenever they need more days-destroyed to create larger blocks.
 Similarly, with appropriate SIGHASH flags such contracting can be done
 by modifying *existing* transactions on demand.

 Ultimately bitcoin days destroyed just becomes a very complex version of
 transaction fees, and it's already well known that gating blocksize on
 total transaction fees doesn't work.

 --
 'peter'[:-1]@petertodd.org
 0f53e2d214685abf15b6d62d32453a03b0d472e374e10e94

--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Joel Joonatan Kaartinen
Matt,

It seems you missed my suggestion about basing the maximum block size on
the bitcoin days destroyed in transactions that are included in the block.
I think it has potential for both scaling as well as keeping up a constant
fee pressure. If tuned properly, it should both stop spamming and increase
block size maximum when there are a lot of real transactions waiting for
inclusion.

- Joel

On Fri, May 8, 2015 at 1:30 PM, Clément Elbaz clem...@gmail.com wrote:

 Matt : I think proposal #1 and #3 are a lot better than #2, and #1 is my
 favorite.

 I see two problems with proposal #2.
 The first problem with proposal #2 is that, as we see in democracies,
 there is often a mismatch between people's conscious vote and those same
 people's behavior.

 Relying on an intentional vote made consciously by miners by choosing a
 configuration value can lead to twisted results if their actual behavior
 doesn't correlate with their vote (e.g., they all vote for a small block
 size because it is the default configuration of their software, and then
 they fill it completely all the time and everything crashes).

 The second problem with proposal #2 is that if Gavin and Mike are right,
 there is simply no time to gather a meaningful amount of votes over the
 coinbases, after the fork but before the Bitcoin scalability crash.

 I like proposal #1 because the vote is made using already available
 data. Also there is no possible mismatch between behavior and vote. As a
 miner you vote by choosing to create a big (or small) block, and your
 actions reflect your vote. It is simple and straightforward.

 My feeling on proposal #3 is that it mixes apples and oranges a bit, but
 I may not be seeing all the implications.


 Le ven. 8 mai 2015 à 09:21, Matt Whitlock b...@mattwhitlock.name a
 écrit :

 Between all the flames on this list, several ideas were raised that did
 not get much attention. I hereby resubmit these ideas for consideration and
 discussion.

 - Perhaps the hard block size limit should be a function of the actual
 block sizes over some trailing sampling period. For example, take the
 median block size among the most recent 2016 blocks and multiply it by 1.5.
 This allows Bitcoin to scale up gradually and organically, rather than
 having human beings guessing at what is an appropriate limit.
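A minimal sketch of this rule (the window size and multiplier come from the example above; names are illustrative):

```python
import statistics

def next_size_limit(recent_block_sizes, multiplier=1.5):
    """Hard limit = median block size over the trailing window, times a
    headroom factor. recent_block_sizes would hold the sizes, in bytes,
    of the most recent 2016 blocks."""
    return int(statistics.median(recent_block_sizes) * multiplier)

# If recent blocks hover around 400 kB, the cap lands at 600 kB:
limit = next_size_limit([400_000] * 2016)
```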

 - Perhaps the hard block size limit should be determined by a vote of the
 miners. Each miner could embed a desired block size limit in the coinbase
 transactions of the blocks it publishes. The effective hard block size
 limit would be that size having the greatest number of votes within a
 sliding window of most recent blocks.
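A sketch of the vote tally, assuming each block's desired limit has already been parsed out of its coinbase (tie-breaking between equally popular sizes is ignored here):

```python
from collections import Counter

def effective_limit(coinbase_votes):
    """The size limit with the most votes in the sliding window.
    coinbase_votes holds the desired-limit value embedded in each recent
    block's coinbase transaction, one entry per block."""
    tally = Counter(coinbase_votes)
    limit, _count = tally.most_common(1)[0]
    return limit

# 2016-block window: 900 votes for 1 MB, 1116 votes for 2 MB.
limit = effective_limit([1_000_000] * 900 + [2_000_000] * 1116)
```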

 - Perhaps the hard block size limit should be a function of block-chain
 length, so that it can scale up smoothly rather than jumping immediately to
 20 MB. This function could be linear (anticipating a breakdown of Moore's
 Law) or quadratic.
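A sketch of the linear variant; the fork height and growth constant are made-up tuning values, not a proposal:

```python
def limit_at_height(height, fork_height=400_000, base=1_000_000,
                    growth_per_block=25):
    """Linear schedule: after the fork, the cap grows by a fixed number
    of bytes per block instead of jumping. At 25 bytes per block, the
    cap reaches 20 MB about 760,000 blocks (~14.5 years) past the fork."""
    if height <= fork_height:
        return base
    return base + (height - fork_height) * growth_per_block
```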

 I would be in support of any of the above, but I do not support Mike
 Hearn's proposed jump to 20 MB. Hearn's proposal kicks the can down the
 road without actually solving the problem, and it does so in a
 controversial (step function) way.









Re: [Bitcoin-development] Block Size Increase

2015-05-07 Thread Joel Joonatan Kaartinen
Having observed the customer support nightmare it tends to cause for a
small exchange service when 100% full blocks happen, I've been thinking
that the limit really should be dynamic and respond to demand and the
amount of fees offered. It just doesn't feel right when it takes ages to
burn through the backlog when 100% full is hit for a while. So, while
pondering this, I got an idea that I think has a chance of working that I
can't remember seeing suggested anywhere.

How about basing the maximum valid size for a block on the total bitcoin
days destroyed in that block? That should still stop transaction spam but
naturally expand the block size when there's a backlog of real
transactions. It'd also provide for an indirect mechanism for increasing
the maximum block size based on fees if there's a lot of fees but little
bitcoin days destroyed. In such a situation there'd be incentive to pay
someone to spend an older txout to expand the maximum. I realize this is a
rather half-baked idea, but it seems worth considering.
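A toy sketch of the idea, with made-up tuning constants:

```python
def bitcoin_days_destroyed(tx_inputs):
    """Sum over inputs of (value in BTC) * (age of the spent output in days)."""
    return sum(value * age_days for value, age_days in tx_inputs)

def max_block_size(block_txs, base=1_000_000, bytes_per_day=100):
    """The cap grows with the coin-age a block's transactions destroy.
    base keeps low-activity blocks valid; bytes_per_day is an arbitrary
    tuning constant (bytes of allowance per bitcoin-day destroyed)."""
    total_bdd = sum(bitcoin_days_destroyed(tx) for tx in block_txs)
    return base + int(total_bdd * bytes_per_day)

# One transaction spending a 10 BTC output that sat unspent for 30 days:
cap = max_block_size([[(10, 30)]])
```

Spam made of freshly created outputs destroys almost no bitcoin-days, so it buys almost no extra block space, while a backlog of real transactions naturally expands the cap.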

- Joel

On Thu, May 7, 2015 at 10:31 PM, Alan Reiner etothe...@gmail.com wrote:

  This *is* urgent and needs to be handled right now, and I believe Gavin
 has the best approach to this.  I have heard Gavin's talks on increasing
 the block size, and the two most persuasive points to me were:

 (1) Blocks are essentially nearing "full" now.  And by "full" he means
 that the reliability of the network (from the average user perspective) is
 about to be impacted in a very negative way (I believe it was due to the
 inconsistent time between blocks).  I think Gavin said that his simulations
 showed 400 kB - 600 kB worth of transactions per 10 min (approx 3-4 tps) is
 where things start to behave poorly for certain classes of transactions.
 In other words, we're very close to the effective limit in terms of
 maintaining the current "standard of living", and with a year needed to
 raise the block size limit, this actually is urgent.

 (2) Leveraging fee pressure at 1MB to solve the problem is actually really
 a bad idea.  It's really bad while Bitcoin is still growing, and relying on
 fee pressure at 1 MB severely impacts attractiveness and adoption potential
 of Bitcoin (due to high fees and unreliability).  But more importantly, it
 ignores the fact that 7 tps is pathetic for a global transaction
 system.  It is a couple orders of magnitude too low for any meaningful
 commercial activity to occur.  If we continue with a cap of 7 tps forever,
 Bitcoin *will* fail.  Or at best, it will fail to be useful for the vast
 majority of the world (which probably leads to failure).  We shouldn't be
 talking about fee pressure until we hit 700 tps, which is probably still
 too low.

 You can argue that side chains and payment channels could alleviate this.
 But how far off are they?  We're going to hit effective 1MB limits long
 before we can leverage those in a meaningful way.  Even if everyone used
 them, getting a billion people onto the system just can't happen even at 1
 transaction per year per person to get into a payment channel or move money
 between side chains.

 We get asked all the time by corporate clients about scalability.  A limit
 of 7 tps makes them uncomfortable that they are going to invest all this
 time into a system that has no chance of handling the economic activity
 that they expect it to handle.  We always assure them that 7 tps is not the
 final answer.

 Satoshi didn't believe 1 MB blocks were the correct answer.  I personally
 think this is critical to Bitcoin's long term future.   And I'm not sure
 what else Gavin could've done to push this along in a meaningful way.

 -Alan



 On 05/07/2015 02:06 PM, Mike Hearn wrote:

 I think you are rubbing against your own presupposition that people
 must find an alternative right now. Quite a lot here do not believe there
 is any urgency, nor that there is an imminent problem that has to be solved
 before the sky falls in.


  I have explained why I believe there is some urgency, where by "some
 urgency" I mean: assuming it takes months to implement, merge, test,
 release and for people to upgrade.

  But if it makes you happy, imagine that this discussion happens all over
 again next year and I ask the same question.








Re: [Bitcoin-development] Proposal to address Bitcoin malware

2015-02-02 Thread Joel Joonatan Kaartinen
If the attacker has your desktop computer but not the mobile that's acting
as an independent second factor, how are you then supposed to be able to
tell you're not signing the correct transaction on the mobile? If the
address was replaced with the attacker's address, it'll look like
everything is ok.

- Joel

On Mon, Feb 2, 2015 at 9:58 PM, Brian Erdelyi brian.erde...@gmail.com
wrote:


  Confusing or not, the reliance on multiple signatures as offering
 greater security than single relies on the independence of multiple
 secrets. If the secrets cannot be shown to retain independence in the
 envisioned threat scenario (e.g. a user's compromised operating system)
 then the benefit reduces to making the exploit more difficult to write,
 which, once written, reduces to no benefit. Yet the user still suffers the
 reduced utility arising from greater complexity, while being led to believe
 in a false promise.

 Just trying to make sure I understand what you’re saying.  Are you
 alluding to the idea that if two of the three private keys get compromised
 there is no gain in security?  Although the likelihood of this occurring is
 lower, it is possible.

 As more malware targets bitcoins I think the utility is evident.  Given
 how final Bitcoin transactions are, I think it’s worth trying to find
 methods to help verify those transactions (if a user deems it to be
 high-risk enough) before the transaction is completed.  The balance is
 trying to devise something that users do not find too burdensome.

 Brian Erdelyi




Re: [Bitcoin-development] performance testing for bitcoin

2012-10-04 Thread Joel Joonatan Kaartinen
For script evaluation benchmarking, I don't think just a good
approximation of real-world traffic is enough. You really need to
benchmark the worst case scenarios, otherwise you could be creating a
DoS vulnerability.
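A toy harness makes the point: an evaluator whose cost grows with input size looks fine on average-looking scripts but blows up on the input an attacker would choose. The evaluator below is a stand-in with a deliberately quadratic worst case, not Bitcoin's:

```python
import hashlib
import time

def eval_script(ops):
    # Stand-in evaluator with a quadratic worst case: every op re-hashes
    # all data accumulated so far, so cost grows with the square of
    # script length.
    data = b""
    for op in ops:
        data += op
        hashlib.sha256(data).digest()

def bench(ops):
    start = time.perf_counter()
    eval_script(ops)
    return time.perf_counter() - start

typical = bench([b"x" * 32] * 20)        # average-looking script
adversarial = bench([b"x" * 32] * 2000)  # input a DoS attacker would pick
```

Benchmarking only the first case would report the evaluator as fast; only the second reveals the vulnerability.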

- Joel

On Wed, 2012-10-03 at 13:57 -0400, Ian Miers wrote:
 Script evaluation performance was what I was primarily concerned
 with.  I'm fooling around with adding some new instruction types.
 The tricky part is that to test how that affects performance, you need
 to be able to intersperse transactions with the new instructions with
 existing ones.  For accuracy, you'd like your simulated traffic to at
 least approximate the real world traffic.
 
 
 
 
 Also, is there any benchmarking / instrumentation in bitcoind?
 
 
 Ian
 On Wed, Oct 3, 2012 at 1:43 PM, Jeff Garzik jgar...@exmulti.com
 wrote:
 On Wed, Oct 3, 2012 at 1:38 PM, Ian Miers imie...@jhu.edu
 wrote:
  Whats the best way to get performance numbers for
 modifications to bitcoin ?
  Profiling it while running on testnet might work, but that
 would take a
  rather long time to get data.
  Is there any way to speed this up if we only needed to
 provide  relative
  performance between tests. (in a sense a fast performance
 regression test).
 
 
  You have to be specific about what you're measuring, because
  "performance" is vague.
 
 You can measure many aspects of blockchain performance by
 importing
 blocks via -loadblock=FILE.
 
 Other performance measurements like how fast does a block
 relay
 through the network cannot be as easily measured.
 
 --
 Jeff Garzik
 exMULTI, Inc.
 jgar...@exmulti.com
 
 





Re: [Bitcoin-development] Punishing empty blocks?

2012-05-24 Thread Joel Joonatan Kaartinen
I think the strong verification would go well if you add it along with an
optimization that avoids rechecking transactions that have already been
verified as valid. Any transactions it doesn't have to verify are from the
pool, of course :)

On Thu, May 24, 2012 at 7:33 PM, Jeff Garzik jgar...@exmulti.com wrote:

 There appears to be some non-trivial mining power devoted to mining
 empty blocks.  Even with satoshi's key observation -- hash a fixed
 80-byte header, not the entire block -- some miners still find it
 easier to mine empty blocks, rather than watch the network for new
 transactions.

 Therefore I was wondering what people thought about a client
 implementation change:

  - Do not store or relay empty blocks, if time since last block < X
    (where X = 60 minutes, perhaps)

 or even stronger,

 - Ensure latest block includes at least X percent of mempool
 unconfirmed TXs

 The former is easier to implement, though there is the danger that
 no-TX miners simply include a statically generated transaction or two.

 The latter might be considered problematic, as it might refuse to
 relay quickly found blocks.

 Comments?  It wouldn't be a problem if these no-TX blocks were not
 already getting frequent (1 in 20).
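The first, weaker rule can be sketched as a relay check (the names and the one-hour grace period are illustrative):

```python
def should_relay(block_tx_count, secs_since_last_block, grace_secs=60 * 60):
    """Refuse to relay an empty block unless the chain has stalled.
    A block always carries a coinbase, so "empty" means exactly one tx.
    An empty block found long after the previous one still moves the
    chain along; one found quickly adds nothing."""
    if block_tx_count > 1:
        return True
    return secs_since_last_block > grace_secs
```

As the message notes, this is trivially gamed by a no-TX miner who includes a couple of statically generated transactions, which is what the stronger mempool-coverage rule tries to address.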

 --
 Jeff Garzik
 exMULTI, Inc.
 jgar...@exmulti.com





Re: [Bitcoin-development] Alternative to OP_EVAL

2011-12-31 Thread Joel Joonatan Kaartinen
Wouldn't it work to restrict the number of OP_EVAL executions allowed
per transaction? That way it wouldn't allow unlimited looping. If
there are too many OP_EVAL executions during transaction evaluation,
just consider the transaction invalid. I think 3 would be enough for
the purposes people have been planning for here.
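A toy interpreter shows the effect: with a per-transaction cap on OP_EVAL executions, the `{OP_DUP OP_EVAL} OP_DUP OP_EVAL` loop quoted below terminates with an error instead of running forever. This models only DUP, EVAL, and data pushes, nothing else of Script:

```python
class EvalLimitExceeded(Exception):
    pass

def run_script(script, max_evals=3, _state=None):
    """Minimal stack machine with only DUP and EVAL. A counter shared
    across nested evaluations aborts once EVAL has run max_evals times."""
    state = _state if _state is not None else {"evals": 0, "stack": []}
    for op in script:
        if op == "DUP":
            state["stack"].append(state["stack"][-1])
        elif op == "EVAL":
            state["evals"] += 1
            if state["evals"] > max_evals:
                raise EvalLimitExceeded("too many OP_EVALs; tx invalid")
            run_script(state["stack"].pop(), max_evals, state)
        else:                      # anything else is data: push it
            state["stack"].append(op)
    return state

# The looping construct: push {DUP EVAL}, then DUP, EVAL.
loop = [["DUP", "EVAL"], "DUP", "EVAL"]
```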

- Joel

On Thu, 2011-12-29 at 11:42 -0500, rocon...@theorem.ca wrote:
 On Thu, 29 Dec 2011, theymos wrote:
 
  On Thu, Dec 29, 2011, at 01:55 AM, rocon...@theorem.ca wrote:
  The number of operations executed is still bounded by the number of
  operations occurring in the script.  With the OP_EVAL proposal the
  script language becomes essentially Turing complete, with only an
  artificial limit on recursion depth preventing arbitrary computation
  and there is no way to know what code will run without executing it.
 
  Even if OP_EVAL allowed infinite depth, you'd still need to explicitly
  specify all operations performed, since there is no way of looping.
 
 That's not true.  Gavin himself showed how to use OP_EVAL to loop:
 OP_PUSHDATA {OP_DUP OP_EVAL} OP_DUP OP_EVAL.
 
 Basically OP_DUP lets you duplicate the code on the stack and that is the 
  key to looping.  I'm pretty sure from here we can get Turing completeness. 
 Using the stack operations I expect you can implement the SK-calculus 
 given an OP_EVAL that allows arbitrary depth.
 
 OP_EVAL adds dangerously expressive power to the scripting language.
 





Re: [Bitcoin-development] multisig, op_eval and lock_time/sequence...

2011-11-09 Thread Joel Joonatan Kaartinen
It's probably best to create a separate p2p network for out-of-band
information like this. No need to involve the blockchain in it.

- Joel

On Wed, 2011-11-09 at 16:18 -0500, Gavin Andresen wrote:
 One more thought on putting arbitrary stuff in the scriptSig:
 
 Miners could decide to revolt and remove the extra scriptSig
 information before including the transaction in their blocks. They'd
 still get the full transaction fee, and the transaction would still
 validate so the block would be accepted by everybody else.
 
 Come to think of it, if a node relaying transactions wanted to save
 bandwidth costs or be annoying, it could also strip off the extra
 information before forwarding it, so this isn't a reliable
 communication mechanism. It is probably a much better idea to use
 another protocol to gather signatures.
 





Re: [Bitcoin-development] Determine input addresses of a transaction

2011-10-25 Thread Joel Joonatan Kaartinen
On Tue, 2011-10-25 at 11:45 +0200, Jan Vornberger wrote:
 1) Get something working reasonable fast to detect current green address
 style transactions. It's fine if it is a little bit of a hack, as long as
 it's safe, since I don't expect it to be merged with mainline anyway at
 this point.
 
 2) Rethink how green transactions are created and verified and try to put
 something 'proper' together which has a chance of being merged at some
 point.
 
 For the moment I was going more with 1) because I got the impression that
 green transactions are too controversial at this point to get them
 included in mainline. Criticism ranging from 'unnecessary, as
 0-confirmation transactions are fairly safe today' to 'encourages too much
 centralization and therefore evil'. So how do people on this list feel
 about green transactions? Would people be interested in helping me with
 2)?

One possibility would be to create a peer-sourced green address
implementation. That is, each user could individually decide to trust
certain addresses as green and, optionally, publish this trust. Based
on the published trust, you could evaluate the trustworthiness of each
green address you haven't personally decided to trust dynamically,
rather than through static hierarchies.

This would be a somewhat involved implementation, though, as it would
require heavy statistical calculations.
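A crude sketch of the kind of calculation meant here, far simpler than a real statistical model (the damping weight and all structure are arbitrary illustration):

```python
def trust_score(address, my_trust, peer_trust, peer_weight=0.5):
    """My own verdict wins outright; otherwise average the verdicts
    published by peers I follow, damped by peer_weight so second-hand
    trust never reaches full trust.

    my_trust:   {address: bool} ratings I made myself.
    peer_trust: list of {address: bool} dicts published by peers.
    """
    if address in my_trust:
        return 1.0 if my_trust[address] else 0.0
    votes = [p[address] for p in peer_trust if address in p]
    if not votes:
        return 0.0          # nobody vouches: no trust
    return peer_weight * sum(votes) / len(votes)
```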

- Joel




Re: [Bitcoin-development] Request review: drop misbehaving peers

2011-09-16 Thread Joel Joonatan Kaartinen
 Darn good question. If the protection fails, would it be better for it
 to 'fail hard', leaving people complaining bitcoin won't stay
 connected!
 
 Or fail soft, so you at least have a couple of connections.
 
 I think fail hard is better-- we'll immediately know about the
 problem, and can fix it.  Fail soft makes me nervous because  I think
 that would make it more likely a bug splits the network (and,
 therefore, the blockchain).

My own preference would be to fail hard with detection of the problem
and notification of the user if there's a GUI connected and/or running.

- Joel


