> -----Original Message-----
> From: Travis Savo [mailto:[EMAIL PROTECTED]
> Sent: Thursday, April 15, 2004 4:09 PM
> To: 'Turbine JCS Developers List'
> Subject: RE: remote
>
> Sure.
>
> The changes for the threading are all in CacheEventQueue and
> ElementEventQueue.
>
> The original design spawned one thread for each region immediately, and
> had the thread sleep until there were events in the queue to process.
>
> My change was to have it spawn the thread the first time an item entered
> the queue, and run until the queue was empty. When the queue was empty,
> the thread would sleep for a specified period of time. If another event
> came into the queue while the thread was sleeping, it would wake up and
> resume processing. If the thread sleep period expired without another
> event coming into the queue, the thread would die, leaving a new thread
> to be created when an event came in.
>
> Thus, an active queue would always have a thread available and ready to
> process events. An inactive queue's thread won't ever get spawned. A
> semi-active queue can be tuned for best behavior via the timeout. The
> problem was that assuming you had 1,000 regions, it would instantly spawn
> 1,000 threads, even if only 20 regions were getting used. On some
> operating systems, this would make the box completely unusable. I suspect
> this is no longer as much of a problem with newer kernels like 2.6 on
> Linux, but rest assured it's pretty broken on older machines.
>
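For readers following along, here is a rough sketch (not the actual JCS patch) of the on-demand worker pattern Travis describes: the processor thread is spawned only when the first event arrives, drains the queue, waits out an idle timeout hoping for more work, and then dies so an idle region costs nothing. All names below (OnDemandEventQueue, put, waitToDieMillis) are illustrative, not the real CacheEventQueue API.

    import java.util.LinkedList;

    // Sketch only: illustrative names, not the real CacheEventQueue internals.
    public class OnDemandEventQueue {
        private final LinkedList<Runnable> queue = new LinkedList<Runnable>();
        private final long waitToDieMillis;   // idle time before the worker exits
        private Thread processor;             // null whenever no worker is alive

        public OnDemandEventQueue(long waitToDieMillis) {
            this.waitToDieMillis = waitToDieMillis;
        }

        // Producers call this; the worker is spawned only when the first
        // event arrives, never eagerly per region.
        public synchronized void put(Runnable event) {
            queue.addLast(event);
            if (processor == null) {
                processor = new Thread(new Runnable() {
                    public void run() {
                        processEvents();
                    }
                }, "EventQueueProcessor");
                processor.setDaemon(true);
                processor.start();
            }
            notifyAll(); // wake a worker that is waiting out its idle period
        }

        private void processEvents() {
            while (true) {
                Runnable event;
                synchronized (this) {
                    if (queue.isEmpty()) {
                        try {
                            wait(waitToDieMillis); // idle; hope more work shows up
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                        if (queue.isEmpty()) {
                            processor = null; // timeout expired: let the thread die
                            return;           // the next put() spawns a fresh one
                        }
                    }
                    event = queue.removeFirst();
                }
                event.run(); // run the event outside the lock
            }
        }
    }

With a timeout of a few seconds, an active region effectively keeps one dedicated thread while a quiet region holds none, which is the tuning knob Travis mentions for semi-active queues.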
Interesting. I'll look into it. Do you have a simple patch for the queue
that you could send over for me to try out?

> The other major important change was (and my memory is failing me as to
> where it was exactly) that when a client did a remove to a remote cache,
> the remote cache would send a remove to all the other clients, who would
> in turn send a remove back to the remote cache, who would send a remove
> to all the other clients, ad nauseam, creating (X-1)^2 packets with every
> iteration, where X is the number of clients talking to the remote cache.
> It won't happen with only one client... but it will with 2+.
>
> My fix was a change from a 'remove()' to a 'localRemove()' at a key
> point... now if only I could remember where that point was!

That would be bad. That bug must have crept in. We need to find that spot.

> The final change, which is less important but necessary for long-term
> stability, is to change the cache ID from a byte to an integer. Only
> supporting 256 remote clients is all good and fine, but assuming there
> are 2 clients, and one of them disconnects and reconnects 255 times, it's
> going to break in new and interesting ways when the ID wraps back around
> to 1.

Ya. That should be changed. Again, do you have a simple patch, say the
methods to paste in and where to do it?

More later.

Aaron

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
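A minimal sketch of the remove loop fix discussed above; the class and method names are made up for illustration, and the actual spot in JCS where the change belongs is the one Travis could not recall. The key distinction is that remove() updates the local store and relays the removal to the remote server, while localRemove() only touches the local store; a removal that arrived from the remote server must use the local-only variant, otherwise every client echoes it back and the (X-1)^2 packet storm starts.

    // Sketch only: class and method names are made up for illustration.
    public class ClientCache {
        private final java.util.Map<String, Object> localStore =
                new java.util.HashMap<String, Object>();
        private final RemoteCacheServer remote;

        public ClientCache(RemoteCacheServer remote) {
            this.remote = remote;
        }

        // Application-initiated removal: update the local store AND tell the
        // remote server, which relays it to the other clients.
        public void remove(String key) {
            localRemove(key);
            remote.relayRemove(this, key);
        }

        // Local-only removal: never goes back out on the wire.
        public void localRemove(String key) {
            localStore.remove(key);
        }

        // Callback for removals that ORIGINATED at the remote server.  Using
        // remove() here instead of localRemove() is what echoes the event back
        // and creates the packet storm described in the message above.
        public void handleRemoteRemove(String key) {
            localRemove(key);
        }
    }

    interface RemoteCacheServer {
        void relayRemove(ClientCache source, String key);
    }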

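And a toy illustration of the ID-width problem Travis raises; the field that actually needs widening lives in the JCS remote cache code and is not shown here. A byte identifier repeats after at most 256 assignments, so a client that disconnects and reconnects often can be handed an ID that still belongs to a live client, while an int pushes the wrap out to roughly two billion assignments.

    // Toy illustration only; the field that actually needs widening lives in
    // the JCS remote cache code.
    public class ListenerIdGenerator {
        private byte byteCounter = 0;   // wraps after 256 assignments
        private int intCounter = 0;     // wraps after ~2 billion assignments

        public synchronized byte nextByteId() {
            return ++byteCounter;       // 127 overflows to -128 and IDs repeat
        }

        public synchronized int nextIntId() {
            return ++intCounter;
        }
    }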