It looks like the zookeeper ensemble has not started. Is that possible?
2011-01-14 19:42:03,773 WARN org.apache.zookeeper.ClientCnxn:
Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@76e8a7
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.Soc
On Fri, Jan 14, 2011 at 11:36 AM, charan kumar wrote:
> Could this be an effect of hotspotting? Since I am persisting millions of
> keys through MR (which are URLS) and they are already sorted.
>
I don't think so. HBase usually buckles first. For sure you've upped
ulimit and xceivers as per HBa
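(For reference, the two settings being referred to are the ones the HBase documentation of this era calls out: raising the open-file ulimit for the user running the daemons, e.g. via /etc/security/limits.conf, and bumping the DataNode transceiver cap in hdfs-site.xml. Note the property name carries Hadoop's historical misspelling; 4096 was the commonly recommended value, the value here is illustrative:)

```xml
<!-- hdfs-site.xml: raise the DataNode transceiver limit; the
     out-of-the-box default of 256 is far too low for HBase -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
```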
On Wed, Jan 12, 2011 at 10:54 PM, charan kumar wrote:
> Hi Stack,
>
> We use hadoop-0.20.2. I applied the HDFS-630 patch this morning; it didn't
> help.
You ever consider running the apache 0.20-append branch or CDH? Both
have a bunch of fixes to HDFS above 0.20.2 Hadoop.
> This is what I see befor
Tell us more Eric. Which HBase version? What kinda sizes? We should
make it work reliably for at least cells of 1 or 2 MB.
Thanks,
St.Ack
On Sat, Jan 15, 2011 at 2:55 AM, Eric wrote:
> I would also recommend against storing large files in HBase. Your regions
> get filled up very quickly, meani
2011/1/15 明珠刘:
> What does `netstat` look like?
>
Are you asking about the netstat command? To learn about it, type
'man netstat'. Or are you asking something else?
St.Ack
What does `netstat` look like?
2011/1/11 陈加俊
> I set the env as fallows:
>
> $ ulimit -n
> 65535
>
> $ ulimit -a
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size (blocks, -f) unlimited
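(The quoted `ulimit -a` output above cuts off before the open-files line. One way to confirm the limit a running process actually inherited, rather than what the login shell reports, is to query it programmatically; a sketch using Python's stdlib resource module, Unix only:)

```python
import resource

# Per-process open-file limit as the kernel reports it to this process.
# This is what matters to a regionserver or datanode, not the shell's
# ulimit at login, since limits are inherited across fork/exec.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft={soft} hard={hard}")
```

If the soft value printed here is still the stock 1024 despite an edited limits.conf, the daemon was likely started before the change or under a different user.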
On Sat, Jan 15, 2011 at 8:45 AM, Wayne wrote:
> Not everyone is looking for a distributed memcache. Many of us are looking
> for a database that scales up and out, and for that there is only one
> choice. HBase does auto partitioning with regions; this is the genius of the
> original bigtable desi
Not everyone is looking for a distributed memcache. Many of us are looking
for a database that scales up and out, and for that there is only one
choice. HBase does auto partitioning with regions; this is the genius of the
original bigtable design. Regions are logical units small enough to be fast
t
I would also recommend against storing large files in HBase. Your regions
get filled up very quickly, meaning you will get a lot of regions. You can
increase the max region size but I've seen very unstable behaviour while
inserting tens of thousands of small to large files into HBase with both
small
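(For anyone tuning around this: the max-region-size knob being referred to is `hbase.hregion.max.filesize` in hbase-site.xml; in 0.20/0.90-era HBase the default was 256 MB. A sketch, the 1 GB value is illustrative:)

```xml
<property>
  <name>hbase.hregion.max.filesize</name>
  <!-- 1 GB in bytes; default of this era was 268435456 (256 MB) -->
  <value>1073741824</value>
</property>
```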