Hello,
I am running a YCSB instance to insert data into HBase. All was well when
this was run against HBase 0.96.1. Now I am trying to run the same program
against another cluster which is configured with HBase 0.98.4. I get the
error below on the client side. Could someone help me with this?
Do you have replication turned on in HBase, and if so, is your slave
consuming the replicated data?
-Nishanth
On Wed, Feb 25, 2015 at 10:19 AM, Madeleine Piffaretti
mpiffare...@powerspace.com wrote:
Hi all,
We are running out of space in our small hadoop cluster so I was checking
disk
Please ignore.
On Fri, Jan 30, 2015 at 10:39 AM, Nishanth S nishanth.2...@gmail.com
wrote:
Hello,
I have a field which is indexed and stored in the Solr schema (4.4, Solr
Cloud). This field is relatively huge, and I plan to only index the field
and not store it. Is there a need to re-index
: the number of client threads. By default, the YCSB Client
uses a single worker thread, but additional threads can be specified.
This is often done to increase the amount of load offered against the
database.
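As a concrete illustration, the thread count is passed on the YCSB command line. The binding name, workload file, and column family below are placeholders for illustration, not details from this thread:

```shell
# Load the data set first, then run the workload with 8 client threads
# to increase the load offered against the database (names are illustrative).
bin/ycsb load hbase -P workloads/workloada -p columnfamily=family
bin/ycsb run hbase -P workloads/workloada -p columnfamily=family -threads 8
```

These commands need a built YCSB checkout and a reachable HBase cluster, so treat them as a sketch of the invocation shape rather than something to paste verbatim.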
2015-01-29 17:27 GMT+01:00 Nishanth S nishanth.2...@gmail.com:
How many instances
Hello,
I have a field which is indexed and stored in the Solr schema (4.4, Solr
Cloud). This field is relatively huge, and I plan to only index the field
and not store it. Is there a need to re-index the documents once this
change is made?
Thanks,
Nishanth
/apurtell/ycsb/tree/new_hbase_client
Cheers
On Wed, Jan 28, 2015 at 12:41 PM, Nishanth S
nishanth.2...@gmail.com
wrote:
You can use YCSB for this purpose. See here:
https://github.com/brianfrankcooper/YCSB/wiki/Getting-Started
-Nishanth
On Wed, Jan 28
You can use YCSB for this purpose. See here:
https://github.com/brianfrankcooper/YCSB/wiki/Getting-Started
-Nishanth
On Wed, Jan 28, 2015 at 1:37 PM, Guillermo Ortiz konstt2...@gmail.com
wrote:
Hi,
I'd like to do some benchmarks of HBase but I don't know what tool I
could use. I started to make
Hi,
We were running an HBase cluster with replication enabled. However, we have
moved away from replication and turned it off. I also went ahead and
removed the peers from the HBase shell. However, the oldWALs directory is
not cleaned up. I am using HBase version 0.96.1. Is it safe enough to delete
Hi All,
I am running a map-reduce job which scans the HBase table for a particular
time period and then creates some files from that. The job runs fine for 10
minutes or so, and only around 10% of the maps complete successfully. Here
is the error that I am getting. Can someone help?
15/01/22
Hey folks,
I am trying to write a map-reduce job in Pig against my HBase table. I have
salting in my rowkey, appended with reverse timestamps, so I guess the best
way is to do a scan over all the dates that I require to pull out the
records. Does anyone know if Pig supports an HBase scan out of the box, or
It doesn't support dealing with salted rowkeys (or reverse timestamps) out
of the box, so you may have to munge with the data a little bit after it's
loaded to get what you want.
Hope this helps.
Pradeep
On Fri Dec 05 2014 at 9:55:04 AM Nishanth S nishanth.2...@gmail.com
wrote:
Hey folks
...@gmail.com wrote:
Nishanth,
What version of HBase are you using?
You can try clearing the ZNode for the region server list in ZooKeeper
under /hbase/ and then restarting the HMaster.
--
yeweichen2...@gmail.com
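As a rough sketch of that ZNode cleanup, assuming the default znode parent and the zkcli bundled with HBase (the server name and timestamp below are made up; double-check the exact path for your version before deleting anything):

```shell
# List the region server znodes (the path depends on zookeeper.znode.parent).
hbase zkcli ls /hbase/rs
# Remove the stale entry for the dead server, then restart the HMaster.
hbase zkcli rmr /hbase/rs/deadhost.example.com,60020,1414515151515
```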
*From:* Nishanth S nishanth.2...@gmail.com
*Date:* 2014-11
Hey folks,
How do I remove a dead region server? I manually failed over the HBase
master, but it is still appearing in the master UI and also in the status
command that I run.
Thanks,
Nishan
the dead region
servers is to restart the master daemon.
-Pere
On Mon, Nov 3, 2014 at 9:49 AM, Nishanth S nishanth.2...@gmail.com
wrote:
Hey folks,
How do I remove a dead region server? I manually failed over the HBase
master, but it is still appearing in the master UI and also in the status
Can you telnet to ports 2181 and 60020 on the remote cluster, if you are
running the default ports? I had a similar issue in the past where there
was a firewall.
Thanks,
Nishanth
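For what it's worth, the connectivity check suggested above is just (the host name is a placeholder; 2181 is the default ZooKeeper client port and 60020 the default region server port on 0.9x-era HBase):

```shell
# Run from the client machine; a hang or refusal suggests a firewall.
telnet remote-cluster-host 2181
telnet remote-cluster-host 60020
```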
On Sun, Oct 26, 2014 at 9:39 AM, Ted Yu yuzhih...@gmail.com wrote:
Is hbase-site.xml corresponding to your cluster on
; no such investigation
has
been undertaken that I am aware of.
Thanks,
Nick
On Monday, October 20, 2014, Nishanth S nishanth.2...@gmail.com
wrote:
Hey folks,
I have been reading a bit about Parquet and how Hive and Impala work well
on data stored in Parquet format. Is
Hey folks,
I have been reading a bit about Parquet and how Hive and Impala work well
on data stored in Parquet format. Is it even possible to do the same with
HBase, to reduce storage, etc.?
Thanks,
Nishanth
Hi Ted ,
Since I am also working on a similar thing, is there a way we can first test
the filter on the client side? You know what I mean: without disrupting
others who are using the same cluster for other work?
Thanks,
Nishanth
On Wed, Oct 15, 2014 at 3:17 PM, Ted Yu yuzhih...@gmail.com wrote:
bq.
unit tests for Filters in hbase code.
Cheers
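One low-impact way to try out a filter against live data, if it is one of the built-in filters, is the HBase shell's filter-string syntax; the table name, value, and LIMIT below are made up for illustration:

```shell
# Scan only a few rows so the experiment does not disturb other users.
echo "scan 'mytable', {FILTER => \"ValueFilter(=, 'substring:ERROR')\", LIMIT => 10}" | hbase shell
```

A custom filter, by contrast, has to be deployed on the region servers' classpath before it can run at all, so this shortcut only helps for the built-in ones.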
On Wed, Oct 15, 2014 at 2:30 PM, Nishanth S nishanth.2...@gmail.com
wrote:
Hi Ted ,
Since I am also working on a similar thing, is there a way we can first
test the filter on the client side? You know what I mean: without
disrupting others who
Hey folks,
I am evaluating loading an HBase table from Parquet files, based on some
rules that would be applied to the Parquet file records. Could someone help
me with what would be the best way to do this?
Thanks,
Nishan
?
Thanks,
-Nishan
On Wed, Oct 8, 2014 at 9:50 AM, Nishanth S nishanth.2...@gmail.com wrote:
Hey folks,
I am evaluating loading an HBase table from Parquet files, based on some
rules that would be applied to the Parquet file records. Could someone help
me with what would be the best way
be configurable and loadable (but not
unloadable, so you need to think about some class loading magic like
ClassWorlds)
For bulk imports you can create HFiles directly and add them incrementally:
http://hbase.apache.org/book/arch.bulk.load.html
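The bulk load described at that link ends with moving the generated HFiles into the table; a sketch of that last step, with a made-up HDFS path and table name:

```shell
# After a MapReduce job has written HFiles (e.g. via HFileOutputFormat),
# hand them over to the table's regions:
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/hfiles mytable
```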
On Wed, Oct 8, 2014 at 8:13 PM, Nishanth S nishanth.2
tool moves files, not copies them. So once they are written you will not do
any additional writes (except for those regions which were split while you
were filtering data). If the imported data is small, that would not be a
problem.
On Wed, Oct 8, 2014 at 8:45 PM, Nishanth S nishanth.2...@gmail.com
wrote
this is
possible or am I doing something wrong.
-Nishan
On Thu, Sep 25, 2014 at 11:56 AM, Ted Yu yuzhih...@gmail.com wrote:
There should not be an impact on HBase write performance for two column
families.
Cheers
On Thu, Sep 25, 2014 at 10:53 AM, Nishanth S nishanth.2...@gmail.com
wrote:
Thank
at 10:49 AM, Ted Yu yuzhih...@gmail.com wrote:
Can you give a bit more detail, such as:
the release of HBase you're using
number of column families where slowdown is observed
size of cluster
release of hadoop you're using
Thanks
On Mon, Sep 29, 2014 at 9:43 AM, Nishanth S nishanth.2
Hi everyone,
This question may have been asked many times, but I would really appreciate
it if someone could help me with how to go about this.
Currently my HBase table consists of about 10 columns per row, which in
total have an average size of 5K. The bulk of that size is held by one
particular
, you can designate the smaller column family as the essential
column family when the smaller columns are queried.
Cheers
On Thu, Sep 25, 2014 at 9:57 AM, Nishanth S nishanth.2...@gmail.com
wrote:
Hi everyone,
This question may have been asked many times but I would really
appreciate
Thank you Ted.
-Nishan
On Thu, Sep 25, 2014 at 11:56 AM, Ted Yu yuzhih...@gmail.com wrote:
There should not be an impact on HBase write performance for two column
families.
Cheers
On Thu, Sep 25, 2014 at 10:53 AM, Nishanth S nishanth.2...@gmail.com
wrote:
Thank you Ted. No, I do not plan
Hi All,
We were using the TTL feature to delete the HBase data, since we were able
to define the retention days at the column family level. But right now we
have a requirement to store data with a different retention period in this
column family, so we would need to do a select and delete. What would be
that needs to be deleted.
-Nishan
On Wed, Sep 24, 2014 at 12:14 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi Nishan,
What you are looking for is HBASE-11764
https://issues.apache.org/jira/browse/HBASE-11764 and not available yet.
JM
2014-09-24 14:12 GMT-04:00 Nishanth S nishanth.2
Hi folks,
We have an HBase table with 4 column families which stores log data. The
columns and the content stored in each of these column families are the
same. The reason for having multiple families is that we needed 4 retention
buckets for messages and were using the TTL feature of HBase
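For context, per-column-family TTL of the kind described above is set from the HBase shell roughly like this (the table name, family names, and retention values are illustrative):

```shell
# Each family acts as a retention bucket; TTL is in seconds.
# Run these inside the HBase shell.
alter 'logs', {NAME => 'cf_7d',  TTL => 604800}    # 7 days
alter 'logs', {NAME => 'cf_30d', TTL => 2592000}   # 30 days
```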
* Cell v)
in your custom Filter.
JFYI
-Anoop-
On Fri, Sep 12, 2014 at 4:36 AM, Nishanth S nishanth.2...@gmail.com
wrote:
Sure Sean. This is much needed.
-Nishan
On Thu, Sep 11, 2014 at 3:57 PM, Sean Busbey bus...@cloudera.com
wrote:
I filed HBASE-11950 to get some details
Hi All,
I have an HBase table with multiple CFs (say c1, c2, c3). Each of these
column families has a column 'message' which is about 5K. What I need to do
is grab only the first 1000 characters of this message when I do a get on
the table using the row key. I was thinking of using filters to do this
might want to look at
RegexStringComparator to match the first 1000 characters of your column
qualifier.
-Dima
On Thu, Sep 11, 2014 at 12:37 PM, Nishanth S nishanth.2...@gmail.com
wrote:
Hi All,
I have an HBase table with multiple CFs (say c1, c2, c3). Each of these
column
Hey All,
I am sorry if this is a naive question. Do we need to generate a proto file
using the protocol buffer compiler when implementing a filter? I did not
see that anywhere in the documentation. Can someone help, please?
On Thu, Sep 11, 2014 at 12:41 PM, Nishanth S nishanth.2...@gmail.com
wrote
]: https://issues.apache.org/jira/browse/HBASE-11950
On Thu, Sep 11, 2014 at 4:40 PM, Ted Yu yuzhih...@gmail.com wrote:
See http://search-hadoop.com/m/DHED4xWh622
On Thu, Sep 11, 2014 at 2:37 PM, Nishanth S nishanth.2...@gmail.com
wrote:
Hey All,
I am sorry if this is a naive
Hi everyone,
We have an HBase implementation where we have a single table which stores
different types of log messages. We have a requirement to notify (send an
email to a mailing list) when we receive a particular type of message. I
will be able to identify this type of message by looking
at HBASE-5416, which introduced the essential column family feature.
Cheers
On Fri, Aug 22, 2014 at 8:41 AM, Nishanth S nishanth.2...@gmail.com
wrote:
Hi everyone,
We have an HBase implementation where we have a single table which stores
different types of log messages. We have a requirement