Re: HBase with opentsdb creates huge .tmp file & runs out of hdfs space

2015-06-02 Thread sathyafmt
FYI: we haven't seen this issue since we turned off tsdb compaction. See GitHub OpenTSDB issue #490 for more info. -- View this message in context: http://apache-hbase.679495.n3.nabble.com/HBase-with-opentsdb-creates-huge-tmp-file-runs-out-of-h
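(For reference: the workaround mentioned above, disabling TSD row compaction, is an OpenTSDB setting. A minimal sketch, assuming OpenTSDB 2.x and the default config file location — restart the TSDs after changing it:)

```properties
# opentsdb.conf -- disable TSD row compaction (the workaround from this thread)
tsd.storage.enable_compaction = false
```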

Re: HBase with opentsdb creates huge .tmp file & runs out of hdfs space

2015-02-25 Thread sathyafmt
Hi John, what's the fix (committed on Jan 11) you mentioned? Is it in opentsdb or hbase? Do you have a JIRA #? Thanks. On Wed, Feb 25, 2015 at 5:49 AM, brady2 [via Apache HBase] <ml-node+s679495n4068627...@n3.nabble.com> wrote:
> Hi Sathya and Nick,
>
> Here are the stack traces of the region

Re: HBase with opentsdb creates huge .tmp file & runs out of hdfs space

2015-02-24 Thread sathyafmt
Find out the regionserver causing this & then take 10 thread dumps (with a delay of 10s) from http://regionserver:60030/dump:

for i in {1..10}; do echo "Dump: $i"; echo "+"; curl http://regionserver:60030/dump; sleep 10; done

On Tue, Feb 24, 2015 at 10:05 AM, brady2 [via A
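(A slightly more durable variant of that one-liner, saving each dump to its own file so nothing scrolls away. The function name, hostname, and output paths are placeholders, not from the thread:)

```shell
#!/bin/sh
# collect_dumps URL COUNT DELAY OUTDIR
# Fetches URL COUNT times, DELAY seconds apart, saving each
# response to OUTDIR/dump-N.txt.
collect_dumps() {
  url=$1; count=$2; delay=$3; outdir=${4:-.}
  i=1
  while [ "$i" -le "$count" ]; do
    echo "Dump: $i"
    # -s silences the progress meter; the redirect creates the file
    # even if the fetch fails, so gaps are visible afterwards
    curl -s "$url" > "$outdir/dump-$i.txt"
    [ "$i" -lt "$count" ] && sleep "$delay"
    i=$((i + 1))
  done
}

# As used in the thread (regionserver host:port is a placeholder):
# collect_dumps "http://regionserver:60030/dump" 10 10 /tmp/rs-dumps
```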

Re: HBase with opentsdb creates huge .tmp file & runs out of hdfs space

2015-02-23 Thread sathyafmt
Nick, look at my reply from 02/06/2015; I have the stack traces on my Google Drive...
===
We ran into this issue again at the customer site & I collected the regionserver dumps (25 of them at 10s intervals). I uploaded them to my Google Drive

Re: HBase with opentsdb creates huge .tmp file & runs out of hdfs space

2015-02-23 Thread sathyafmt
John - no solution yet; I didn't hear anything back from the group. I am still running into this issue. Are you running on a VM or bare metal? Thanks -sathya

Re: HBase with opentsdb creates huge .tmp file & runs out of hdfs space

2015-02-08 Thread sathyafmt
Thanks -sathya On Tue, Jan 13, 2015 at 4:34 PM, sathyafmt [via Apache HBase] <ml-node+s679495n4067601...@n3.nabble.com> wrote:
> yes, painful :-)
>
> This happens at a customer deployment & the next time I hit this I'll get
> you the stack trace. I have the hbase-site.

Re: HBase with opentsdb creates huge .tmp file & runs out of hdfs space

2015-01-13 Thread sathyafmt
Yes, painful :-) This happens at a customer deployment & the next time I hit this I'll get you the stack trace. I have the hbase-site.xml attached to my response to Sean; take a look.

Re: HBase with opentsdb creates huge .tmp file & runs out of hdfs space

2015-01-13 Thread sathyafmt
I have dfs.datanode.data.dir=/var/vcap/store/hadoop/hdfs/data and dfs.datanode.failed.volumes.tolerated=0. (I don't have dfs.datanode.du.reserved set -- thanks for mentioning it, I'll set it to 10G going forward.) The CF has compression=SNAPPY. I have only hbase.cluster.distributed=true, hbase.rootdir=hdfs:
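(For the record, reserving headroom for non-HDFS use is an hdfs-site.xml setting; dfs.datanode.du.reserved takes bytes, per volume. A sketch for the 10G mentioned above:)

```xml
<!-- hdfs-site.xml: keep 10 GB per volume out of HDFS's reach -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value> <!-- bytes: 10 * 1024^3 -->
</property>
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>0</value>
</property>
```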

Re: HBase with opentsdb creates huge .tmp file & runs out of hdfs space

2015-01-13 Thread sathyafmt
Yes, we normally run a 4-node hbase instance with 500G on each of the nodes for HDFS. Here's the hadoop fs -ls from a single-node instance. sathya

user@node1:/var/lib$ hadoop fs -ls -R /hbase
drwxr-xr-x - hbase hbase 0 2015-01-05 10:59 /hbase/.tmp
drwxr-xr-x - hbase hbase

Re: HBase with opentsdb creates huge .tmp file & runs out of hdfs space

2015-01-13 Thread sathyafmt
Thanks Esteban. We do have lots of space, ~2TB. The compaction starts on an around-300MB column and dies after consuming all 2TB of space. sathya

HBase with opentsdb creates huge .tmp file & runs out of hdfs space

2015-01-12 Thread sathyafmt
CDH5.1.2 (hbase 0.98.1), running on a VM (VMware ESX). We use opentsdb (2.1.0RC) with hbase & after ingesting 200-300MB of data, hbase tries to compact the table and ends up creating a .tmp file which grows to fill up the entire hdfs space... and dies eventually. I tried to remove this .tmp file &