Hi all,
Since I got no replies to my previous message (see below), I went ahead and set
tcp_tw_recycle to true. This worked like a charm: the number of sockets in
TIME_WAIT went down from many thousands to just a few tens. Apparently, once
set to true, the recycling happens quite eagerly.
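For anyone who wants to try the same thing, the whole change is one sysctl (a
sketch, assuming a Linux box; note tcp_tw_recycle is known to break clients
that sit behind NAT, so test before rolling it out everywhere):

    # /etc/sysctl.conf -- recycle TIME_WAIT sockets aggressively
    net.ipv4.tcp_tw_recycle = 1

    # apply without a reboot
    sysctl -p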
We're excited to announce Surge, the Scalability and Performance
Conference, to be held in Baltimore on Sept 30 and Oct 1, 2010. The
event focuses on case studies that demonstrate successes (and failures)
in Web applications and Internet architectures.
Our Keynote speakers include John Allspaw an
Have you seen this error when shutting down a region server:
2010-06-14 17:17:39,521 ERROR org.apache.hadoop.hdfs.DFSClient: Exception
closing file /hbase/doc/compaction.dir/1341061960/4230672672594086608 :
java.net.SocketTimeoutException: 9000 millis timeout while waiting for channel
to be ready
Can you post the log from the regionserver that did not ever open the region
(from 12:57 to 13:14)? And actually grab it from a couple minutes before 12:57.
Most likely this is not so much a bug as a current limitation of handling
open/close messages sequentially. It's possible that a long-run
Spoke too soon.. Thanks..
On 6/14/10 12:32 PM, "Todd Lipcon" wrote:
On Mon, Jun 14, 2010 at 12:14 PM, Vidhyashankar Venkataraman <
vidhy...@yahoo-inc.com> wrote:
> >> Most likely you are not appending the correct metadata entries (in
> >> particular the log sequence ID)
> Since I am not creating any logs, the max log sequence ID should be -1,
> isn't it?
On Mon, Jun 14, 2010 at 12:37 PM, Vidhyashankar Venkataraman <
vidhy...@yahoo-inc.com> wrote:
> > In trunk there's a feature whereby the metadata can include a special "this
> > is a bulk load" entry. In 0.20, you have to pick some sequence number - I'd
> > go with something like 0 for a bulk
> In trunk there's a feature whereby the metadata can include a special "this
> is a bulk load" entry. In 0.20, you have to pick some sequence number - I'd
> go with something like 0 for a bulk load. Check out what HFileOutputFormat
> does and copy that :)
I did that initially but it doesn't compi
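For anyone hitting the same wall, a minimal sketch of the advice above against
the 0.20-era API (the key name is the one HFileOutputFormat uses; the 0L is the
arbitrary sequence id suggested above -- treat the details as an assumption and
check what HFileOutputFormat does in your version):

    import java.io.IOException;
    import org.apache.hadoop.hbase.io.hfile.HFile;
    import org.apache.hadoop.hbase.regionserver.StoreFile;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BulkHFiles {
      // Close an HFile.Writer the way HFileOutputFormat does: append the
      // file-info metadata a regionserver expects before closing the file.
      static void closeWithSeqId(HFile.Writer writer) throws IOException {
        // A bulk load writes no WAL entries, so pick an arbitrary
        // max sequence id (0, per the suggestion above).
        writer.appendFileInfo(StoreFile.MAX_SEQ_ID_KEY, Bytes.toBytes(0L));
        writer.close();
      }
    }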
On Mon, Jun 14, 2010 at 12:14 PM, Vidhyashankar Venkataraman <
vidhy...@yahoo-inc.com> wrote:
> >> Most likely you are not appending the correct metadata entries (in
> >> particular the log sequence ID)
> Since I am not creating any logs, the max log sequence ID should be -1,
> isn't it?
>
>
In trunk there's a feature whereby the metadata can include a special "this
is a bulk load" entry. In 0.20, you have to pick some sequence number - I'd
go with something like 0 for a bulk load. Check out what HFileOutputFormat
does and copy that :)
>> Most likely you are not appending the correct metadata entries (in
>> particular the log sequence ID)
Since I am not creating any logs, the max log sequence ID should be -1, isn't it?
On 6/14/10 11:36 AM, "Vidhyashankar Venkataraman" wrote:
>> Most likely you are not appending the correct metadata entries (in
>> particular the log sequence ID)
Please don't email the 'issues' list.
http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A6
2010/6/14 chen peng :
>
> hi, all:
> I ran into a problem after my program had been running for 28+ hours on a
> cluster of three machines with ulimit set to 32K.
> ..
>> Most likely you are not appending the correct metadata entries (in
>> particular the log sequence ID)
Can you elaborate? What additional info do I need to add when I create/close
HFiles?
Thank you
vidhya
On 6/14/10 11:22 AM, "Todd Lipcon" wrote:
On Mon, Jun 14, 2010 at 11:08 AM, Vidhyashankar Venkataraman <
vidhy...@yahoo-inc.com> wrote:
On Mon, Jun 14, 2010 at 11:08 AM, Vidhyashankar Venkataraman <
vidhy...@yahoo-inc.com> wrote:
> I tried dumping my own HFiles (similar to HFileOutputFormat: open an
> HFile.Writer, append the key-value pairs and then close the writer) and
> tried loading them using the ruby script. I had altered
Hi there:
I have found an HBase bug where opening a region takes too long. The
client reported a "no server address" error. For the region
MyOwnEventTable,2010-06-13
10:33:31\x0922f3563bd43a3c3c044bd1db885f1523,1276457581773, here is the
sequence:
Around 12:57, all 8 region ser
Hi everyone,
I am new to NoSQL databases, especially column-oriented databases
like HBase.
I am an information-systems student evaluating a suitable NoSQL
database for a web analytics system. The use case is data like a
webserver logfile.
In an RDBMS there would be a row for every hit in th
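A sketch of how that use case usually maps onto HBase: one row per hit there
too, keyed so a site's hits sort together. The table name, column family, and
key layout below are made-up illustrations against the 0.20 client API:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class LogHit {
      public static void main(String[] args) throws IOException {
        // One row per hit, keyed by site + timestamp so hits scan in order.
        HTable table = new HTable(new HBaseConfiguration(), "weblogs");
        Put hit = new Put(Bytes.toBytes("example.com/20100614171739"));
        hit.add(Bytes.toBytes("hit"), Bytes.toBytes("url"),
            Bytes.toBytes("/index.html"));
        hit.add(Bytes.toBytes("hit"), Bytes.toBytes("status"),
            Bytes.toBytes("200"));
        table.put(hit);
        table.flushCommits();
      }
    }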
I tried dumping my own HFiles (similar to HFileOutputFormat: open an
HFile.Writer, append the key-value pairs and then close the writer) and tried
loading them using the ruby script. I had altered loadtable.rb to modify the
block size for the column family.
The script reported no errors. But
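For context, this is the 0.20 invocation being discussed (TABLENAME and the
output directory are placeholders):

    $ bin/hbase org.jruby.Main bin/loadtable.rb TABLENAME HFILE_OUTPUT_DIR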
Thanks for updating the list Ferdy.
St.Ack
On Mon, Jun 14, 2010 at 3:09 AM, Ferdy wrote:
> After running stably for quite a while (using configured long timeouts), we
> recently noticed regionservers were starting to behave badly again. During
> compaction, regionservers complained that blocks are
After running stably for quite a while (using configured long timeouts),
we recently noticed regionservers were starting to behave badly again.
During compaction, regionservers complained that blocks were unavailable.
Every couple of days, a regionserver decided to terminate itself because
it coul
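The message does not say which timeouts were raised, but the settings usually
meant in this context are the HDFS ones, for example (an assumption, shown as
an hdfs-site.xml snippet):

    <property>
      <name>dfs.datanode.socket.write.timeout</name>
      <value>0</value> <!-- 0 disables the datanode write timeout -->
    </property>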