+1
I've been using this RC with what will be HBase 0.96.1 on a small cluster
under light load, and all seems to be basically working.
St.Ack
On Tuesday, November 5, 2013 4:56:41 PM UTC+8, tsuna wrote:
>
> Hi all,
> RC1 contained a few bugs that managed to escape, so we're cutting a
> second
What is your distributed hardware/services configuration? Where are your
masters and slaves, and what are the specs of each?
You have compaction set to zero but the issues happen near a major
compaction event, so are you running manual compactions during a heavy put
operation?
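For reference, "compaction set to zero" presumably means time-based major compactions are disabled in hbase-site.xml, something like this (a guess at the setting in play):

```xml
<!-- Disable periodic major compactions; they then run only when
     triggered manually, e.g. via major_compact in the HBase shell. -->
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
</property>
```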
On Tue, Dec 3,
Like Vladimir is saying: do you have any need to store the files themselves
in HBase? 20MB is pretty big. Can you not just store the file in HDFS and
store only the path of the file in HBase?
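The pattern being suggested, sketched with plain dicts standing in for HDFS and an HBase table (all names and paths hypothetical; real code would use the HDFS FileSystem API and HBase client Puts):

```python
# Hypothetical sketch of "store the reference, not the blob".
# Dicts stand in for HDFS and an HBase table here.

hdfs = {}         # path -> bytes (the large payload lives here)
hbase_table = {}  # rowkey -> {column: value} (only small metadata here)

def store_file(file_id, data):
    path = "/data/blobs/%s" % file_id
    hdfs[path] = data                                # big bytes go to HDFS
    hbase_table[file_id] = {"meta:hdfs_path": path}  # HBase keeps the pointer
    return path

def load_file(file_id):
    path = hbase_table[file_id]["meta:hdfs_path"]
    return hdfs[path]

store_file("doc-001", b"x" * (20 * 1024 * 1024))  # a 20MB payload
assert load_file("doc-001")[:1] == b"x"
```

The HBase row then stays tiny regardless of file size, which keeps flushes, compactions, and region splits cheap.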
Do you have the logs from when the servers died? Any GC pauses?
JM
2013/12/3 Vladimir Rodionov
>>Any advice is appreciated.
Do not store your files in HBase, store only references.
Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodio...@carrieriq.com
From: Bill Sanchez [bill.sanchez2...@gmail.com]
Hello,
I am seeking some advice on my HBase issue. I am trying to configure a
system that will eventually load and store approximately 50GB-80GB of data
daily. This data consists of files that are roughly 3MB-5MB each with some
reaching 20MB and some as small as 1MB. The load job does roughly 2
Hi Kim,
I would take a look at:
http://hadoop-hbase.blogspot.com/2013/01/hbase-region-server-memory-sizing.html
On Tue, Dec 3, 2013 at 2:42 PM, Kim Chew wrote:
> I am wondering if there is a limit on the number of regions that a table
> could have. For example, if I have a table that gro
I am wondering if there is a limit on the number of regions that a table
could have. For example, if I have a table that grows very fast so its
regions keep splitting, is it possible that the table keeps adding regions
until all the resources run out?
Thanks.
Kim
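As far as I know there is no hard limit: regions keep splitting as the table grows, and the practical bound is per-server memory, since every region carries memstore overhead on the heap (the point of the blog post Lars linked). A back-of-envelope sketch, with hypothetical sizes:

```python
# Back-of-envelope region math (all numbers hypothetical).
GB = 1024 ** 3

table_size = 2 * 1024 * GB   # a 2TB table
region_max_size = 10 * GB    # hbase.hregion.max.filesize
regions = table_size // region_max_size
print(regions)               # 204 regions at full size

# Memory-side bound: each actively written region holds memstores on heap.
heap = 16 * GB
memstore_fraction = 0.4                   # global memstore upper limit
memstore_flush_size = 128 * 1024 * 1024   # per-region flush threshold
max_active_regions = int(heap * memstore_fraction) // memstore_flush_size
print(max_active_regions)    # ~51 actively written regions per server
```

So the table can keep splitting, but well before "all resources run out" the write path degrades as memstores are forced to flush at tiny sizes.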
Hi James,
it seems like a search problem over non-standardized documents; I think Solr
(or something like it) may meet your requirements.
Good luck.
2013/12/3 James Pettyjohn
> Hi, general strategy and schemata approach question.
>
> I've got a lot of different data in a relational db I'm trying to m
Hi Ted,
Thank you for your response. I use hbase-0.94.6-cdh4.4.0.
For the test, I have just one region in the table, so it is quite certain
that the data is inserted into that region. The RPC server runs on each
RegionServer, one per server.
In my real application, I got the HRegion by HRegion E
Hi Tsuna,
Just wanted to know if there is any way of scanning using a column value,
with filters on the column value.
E.g., suppose I have column values 40, 20, 30, 10, etc. against the
qualifier waterlevel.
If my requirement is that I want to fetch data with a condition like
give me data with w
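One way to do this in the Java client is a Scan with a SingleColumnValueFilter and a CompareOp such as GREATER on the waterlevel qualifier. Its semantics, sketched client-side in Python (in real HBase the filter is evaluated server-side on the RegionServer; column and row names here are hypothetical):

```python
# Client-side sketch of SingleColumnValueFilter semantics:
# keep rows whose 'cf:waterlevel' value compares GREATER than a threshold.

rows = {
    b"r1": {b"cf:waterlevel": 40},
    b"r2": {b"cf:waterlevel": 20},
    b"r3": {b"cf:waterlevel": 30},
    b"r4": {b"cf:waterlevel": 10},
}

def scan_greater_than(rows, column, threshold):
    # Real HBase evaluates this per row on the RegionServer,
    # so only matching rows cross the wire.
    return {k: v for k, v in sorted(rows.items())
            if v.get(column, 0) > threshold}

result = scan_greater_than(rows, b"cf:waterlevel", 25)
print(sorted(result))  # [b'r1', b'r3']
```

One caveat with the real filter: values are compared as bytes, so numbers need a fixed-width encoding (e.g. Bytes.toBytes(long)) for the comparison to be meaningful.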
bq. make an RPC call of my custom RPC server to scan the HRegion
How many regions do you have ?
How do you direct the RPC call to the region where the data was inserted ?
What version of HBase are you using ?
Cheers
On Tue, Dec 3, 2013 at 8:22 AM, Wukang Lin wrote:
> Hi all,
> I got a t
Hi all,
I ran into trouble using HRegion's RegionScanner in the RegionServer.
In my project, a custom RPC server is started in the RegionServer using
the coprocessor mechanism, and it works well. This RPC server receives a
call from a client and scans the HRegion specified by the params set by
The issue was with my code; something to do with the cache I was using to store
the rowkeys.
Thanks anyway!
Regards,
Mrudula
On Tuesday, 3 December 2013 4:34 AM, Ted Yu wrote:
Which HBase release are you using ?
In your while loop, you used the same set of row keys for each attempt ?
Th
I think that error is because of missing .archive files?
We only see this issue when we export the snapshot to the local file system
(file:///tmp/hbase_backup),
but not when we export the snapshot to HDFS.
When we export the snapshot to HDFS we get all the data. It's only not
working when we export snapsho
Please check what happened to the HFileLink mentioned in the exception
below - this would be the cause for snapshot export failure:
Exception in thread "main" java.io.FileNotFoundException: Unable to open
link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://
site.com:54310/data_full_backup