https://www.youtube.com/watch?v=oHT6Dfu7FfI
Doran works fast.
On Thu, Oct 23, 2014 at 10:19 AM, Levi Pearson wrote:
> On Thu, Oct 23, 2014 at 3:33 AM, Dan Egli wrote:
> > This sounds like a wonderful meeting, but there's no way I can attend. May
> > I hope it will be recorded and placed online?
I was there, and Doran did a great job with it. He did record it, so
I imagine it will be placed online, but I can't give a timeline.
This sounds like a wonderful meeting, but there's no way I can attend. May
I hope it will be recorded and placed online?
--- Dan
/*
PLUG: http://plug.org, #utah on irc.freenode.net
Unsubscribe: http://plug.org/mailman/options/plug
Don't fear the penguin.
*/
Date: Tuesday, October 21st
Time: 7:30pm
Location: UVU Business Resource Center
Systemd is a "system management daemon designed for Linux and programmed
exclusively for the Linux API" and is widely regarded as a replacement for the
traditional SysV init architecture used to manage services running on Linux.
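For anyone who hasn't looked at it yet: instead of a SysV-style shell script, systemd describes each service with a small declarative unit file. The sketch below is a hypothetical example (the service name and binary path are made up), but the directives shown are real systemd ones:

```ini
# Hypothetical unit file, e.g. /etc/systemd/system/myapp.service
[Unit]
Description=Example service managed by systemd
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With that in place, `systemctl start myapp` and `systemctl enable myapp` replace the old `/etc/init.d/myapp start` and runlevel-symlink dance.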
On December 15, 2013 S. Dale Morrey wrote:
> Now let's hope my choice of RDS for this project (mysql cuz I'm an idiot and
> a cheap one at that), doesn't choke on the fact that I'm cramming 20+GB of
> data into a single table.
I kind of doubt it, but if it does choke, you could always try out a
different database engine.
So this seems to be working out really well now.
I've got the entire thing operational and the finalized data looks about as
I would expect it to.
Plus...
Total Execution time to this step 892.853 seconds
Total blocks complete: 17099 of 274910
15 minutes for ~17,000 blocks.
That's 68,000 blocks per hour.
Thanks Levi. That's some very sage advice.
To be clear about where I'm coming from: I already wrote an app in node.js
that did exactly what I needed it to do, i.e. stuff the entire tx chain of
bitcoin into an RDS so I can query it later using SQL-style queries (part
of a service I'm working on).
On Thu, Dec 12, 2013 at 2:16 PM, S. Dale Morrey wrote:
> Now I've got to figure out how to slow down the requests and gradually feed
> them to the server. Batching will help somewhat, but the max I can send in
> a batch is 100, and even then it's going to quickly overwhelm the server to
> send them back to back.
On Thu, Dec 12, 2013 at 12:30 PM, S. Dale Morrey wrote:
> The function spewing this out is an iterator that is calling setTimeout to
> run these requests with about 500ms of delay (so as not to swamp the server
> I'm getting the request from). Once it has the result of the block hash it
> should move on to the next request.
Interesting insight! Thank you! For the record, it wasn't doing that when
I finally figured out the problem. I was getting the expected hashes when
I cut the number of requests down. However, it's possible that this was
only because I eliminated the timeout so it was executing in real time. I
will test that and report back.
One issue:

    for (var i = lastblock; i < blockcount; i++) {
        // Multiplex across multiple clients so we don't overwhelm a single server
        blockDB.put("lastblock", i);
        var clientnum = 0;
        if (i >= clients.length) {
            clientnum = i % clients.length;
        }
        console.log("Setting up " + i + " of " + blockcount + " blocks");
    }
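Incidentally (whatever the "one issue" above turned out to be), the guard around the modulo is redundant: for non-negative `i`, `i % clients.length` already equals `i` whenever `i < clients.length`, so round-robin selection needs only the modulo. The names below are illustrative, not the app's real variables:

```javascript
// Round-robin client selection with just the modulo; no `if` needed.
const clients = ["client-a", "client-b", "client-c"];

for (let i = 0; i < 6; i++) {
  const clientnum = i % clients.length; // yields 0,1,2,0,1,2
  console.log(i, clients[clientnum]);
}
```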
Never mind, I got it sorted out. The delay wasn't long enough and the feed
server was treating it as an attack.
Furthermore, there was no case to catch the connection reset, so it was just
silently filling the array with undefined values, since it never reached
the part where it splits the hashes into the array.
So this probably belongs on StackOverflow, but I figured there was a chance
someone on the list might be able to help me figure out what I'm missing
here.
I have an app I refactored to take advantage of asynchronous calls in
node.js.
Now my log is full of this...
Setting up 267176 of 267180 blocks
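For what it's worth, one classic cause of logs like that in callback-heavy node code (not necessarily this app's bug, just a common pitfall) is a `var` loop counter shared by every deferred callback:

```javascript
// Every callback closes over the same `var i`, so by the time the
// timeouts fire they all see the loop's final value.
for (var i = 0; i < 3; i++) {
  setTimeout(function () {
    console.log("var sees", i); // logs 3, three times
  }, 0);
}

// Capturing the value per iteration fixes it (an IIFE in 2013-era
// JavaScript; `let` does the same job in modern code).
for (var j = 0; j < 3; j++) {
  (function (n) {
    setTimeout(function () {
      console.log("capture sees", n); // logs 0, 1, 2
    }, 0);
  })(j);
}
```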