Hi,
GC logging is not enabled; I only have ZooKeeper's log. Please see below:
2012-10-12 00:14:30,470 - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - Accepted socket
connection from /10.20.16.22:56954
2012-10-12 00:14:30,470 - ERROR [CommitProcessor:1:NIOServerCnxn@445] -
Unexpected E
Maybe I didn't understand your question. Do you mean SAN overshadows some of
the hbase benefits? I thought SAN isn't a great choice for hbase deployments
because of concentration of iops to a single device and thus losing out on
parallelism and redundancy -- but I am no expert on SANs.
Anyw
I want to loop through the records, and if a certain condition (like a field
mismatch) between two records is found, I need to save all the values to an
output table.
Regards,
Shobha M
-Original Message-
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of Stack
Sent
Hi Suresh
> I would like to use startRow() and stopRow() on a scan, but these
> operations
To set the start and stop rows you need to know the rowkey.
> between column values "DEBUG:x" and "y". How can I force scan
> to
> > return all these rows?
Have you set setFilterIfMissing(true) on the SingleColumnValueFilter? By d
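For reference, a minimal sketch of that filter setup (the family "cf" and qualifier "level" are placeholders for your actual column):

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

Scan scan = new Scan();
SingleColumnValueFilter filter = new SingleColumnValueFilter(
    Bytes.toBytes("cf"), Bytes.toBytes("level"),
    CompareFilter.CompareOp.GREATER_OR_EQUAL, Bytes.toBytes("DEBUG:x"));
// Skip rows that do not contain the column at all:
filter.setFilterIfMissing(true);
scan.setFilter(filter);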
If you have a SAN, why would you want to use HBase?
-- Lars
From: "Pamecha, Abhishek"
To: "user@hbase.apache.org"
Sent: Monday, October 15, 2012 3:00 PM
Subject: hbase deployment using VMs for data nodes and SAN for data storage
Hi
We are deciding between usi
I tried that, but it didn't work. I thought the GREATER and LESS operators would not
work with StringComparator.
I would like to use startRow() and stopRow() on a scan, but these
operations
are based on plain Strings and not regular expressions like I want.
Suresh
-Original Message-
From: Norbert
Hi Kuldeep,
If I read your intention behind the question correctly, you
want to know whether you can connect to HBase (which is running on a Linux box
within your Hadoop cluster) from another node, which may be on Windows.
Yes, you can. Your HBase client can be running out
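A minimal client-side sketch (the ZooKeeper hostname and table name below are placeholders; the client only needs network access to ZooKeeper and the region servers):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

Configuration conf = HBaseConfiguration.create();
// Point the client at the cluster's ZooKeeper quorum (placeholder host):
conf.set("hbase.zookeeper.quorum", "zk-host.example.com");
conf.set("hbase.zookeeper.property.clientPort", "2181");
HTable table = new HTable(conf, "mytable");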
It sounds like you want to have the coprocessor expose its own
metrics as part of the HBase metrics? If that's right, can you
describe some of the metrics you might want to expose?
We could possibly provide hooks to publish metrics through the
CoprocessorEnvironment, which could then get pushed
Try changing your CompareOp.EQUAL to CompareOp.GREATER_OR_EQUAL and
CompareOp.LESS_OR_EQUAL, respectively. You want all rows between your
two keys.
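For example, a sketch of the range as a filter list (family "cf" and qualifier "msg" are placeholders for whichever column holds the log value):

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

FilterList range = new FilterList(FilterList.Operator.MUST_PASS_ALL);
// lower bound: column value >= "DEBUG:x"
range.addFilter(new SingleColumnValueFilter(Bytes.toBytes("cf"), Bytes.toBytes("msg"),
    CompareFilter.CompareOp.GREATER_OR_EQUAL, Bytes.toBytes("DEBUG:x")));
// upper bound: column value <= "y"
range.addFilter(new SingleColumnValueFilter(Bytes.toBytes("cf"), Bytes.toBytes("msg"),
    CompareFilter.CompareOp.LESS_OR_EQUAL, Bytes.toBytes("y")));
Scan scan = new Scan();
scan.setFilter(range);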
Norbert
On Mon, Oct 15, 2012 at 7:00 PM, Kumar, Suresh wrote:
> I have a HBase with some apache logs loaded.
>
>
>
> I am trying to retrieve a sect
There's not a special way inside of the coprocessor to get metrics.
- RegionObservers can get the HRegion, through the ObserverContext,
which has lots of metrics hanging off of it that might be useful
- You can get it through the normal jmx system. However that's pretty
verbose.
-
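A rough sketch of the first option against the 0.94-era API (the observer just grabs the HRegion through the ObserverContext; which counters you then read off it is up to you):

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

public class MetricsPeekObserver extends BaseRegionObserver {
  @Override
  public void postPut(ObserverContext<RegionCoprocessorEnvironment> c,
      Put put, WALEdit edit, boolean writeToWAL) {
    // The HRegion is reachable through the coprocessor environment.
    HRegion region = c.getEnvironment().getRegion();
    // ... read whatever region-level counters you need here.
  }
}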
I have an HBase table with some Apache logs loaded.
I am trying to retrieve a section of the logs to analyze using the following
code. I would like all the rows
between column values "DEBUG:x" and "y". How can I force the scan to
return all these rows? I am using
SingleColumnValueFilter and adding a
Lars,
Did you push to the maven repo? I cannot find it on maven central.
Thanks,
Enis
On Thu, Oct 11, 2012 at 10:10 PM, lars hofhansl wrote:
> The HBase Team is pleased to announce the release of HBase 0.94.2.
> Download it from your favorite Apache mirror [1].
>
> HBase 0.94.2 is a bug fix re
Hi
We are deciding between using local disks on bare-metal hosts vs. VMs using a SAN
for data storage. I was wondering if anyone has contrasted performance,
availability and scalability between these two options?
IMO, this is kind of similar to a typical AWS or other cloud deployment.
Thanks,
A
We moved to 0.92.2 some time ago and, with that, increased the max file size
setting to 4GB (from 2GB). Also, an application-triggered cleanup operation
deleted lots of unwanted rows.
These two combined have gotten us to a state where lots of regions are
smaller than the desired size.
Merging regions tw
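(For reference, a sketch of the offline merge tool invocation available in 0.90-0.94; it must be run with the cluster down, and the table and region names below are placeholders:)

./bin/hbase org.apache.hadoop.hbase.util.Merge <tablename> <region1-name> <region2-name>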
No, I don't think so. This is a dedicated testing machine and there is no automatic
cleanup of the /tmp folder...
Thanks,
YuLing
-Original Message-
From: Jimmy Xiang [mailto:jxi...@cloudera.com]
Sent: Monday, October 15, 2012 1:32 PM
To: user@hbase.apache.org
Subject: Re: could not start H
Is your /tmp folder cleaned up automatically, such that some files are gone?
Thanks,
Jimmy
On Mon, Oct 15, 2012 at 12:26 PM, wrote:
> Hi,
>
> I set up a single node HBase server on top of Hadoop and it has been working
> fine with most of my testing scenarios such as creating tables and inserting
>
Hi,
I set up a single-node HBase server on top of Hadoop and it has been working
fine with most of my testing scenarios, such as creating tables and inserting
data. Just during the weekend, I accidentally left a testing script running
that inserts about 67 rows every minute for three days. Today wh
On Mon, Oct 15, 2012 at 12:19 AM, Kuldeep Chitrakar
wrote:
> Hi
>
> To connect to HBase, do we always need to have Pentaho installed on one of the
> Hadoop cluster machines?
>
> Can't we connect to HBase remotely using Pentaho, like
>
> Pentaho on a Windows machine and HBase on some Linux box.
>
> Is
On Mon, Oct 15, 2012 at 3:17 AM, Mahadevappa, Shobha
wrote:
> Hi,
> I want my Reducer to act as IdentityTableReducer based on a certain condition.
> Can you please let me know if there is a neat way to achieve this instead of
> populating the Put objects explicitly in the reduce method after chec
On Mon, Oct 15, 2012 at 8:25 AM, Norbert Burger
wrote:
> Hi folks,
>
> Does anyone have a good working process for renaming tables? From the
> links below, I gather that the bin/rename_table.rb (last included in
> 0.90.x) had a few issues.
>
> http://search-hadoop.com/m/TVnYN1OEdOT/Hbase%253A+Tab
On Mon, Oct 15, 2012 at 9:52 AM, Amit Sela wrote:
> Hi everyone,
>
> I have a cluster running Hadoop 0.20.3-snapshot with HBase 0.90.2.
>
> I want to use bulk loading with HFileOutPutFormat, which works for me when
> writing to 1 CF but fails for more.
>
> I know this is solved in HBase 0.92 and m
On Mon, Oct 15, 2012 at 10:06 AM, Jean-Marc Spaggiari
wrote:
> Hi Lars,
>
> To install it, can we just remove the .jar on the root directory and
> replace it with this one? I'm running 0.94.0 so it might be
> compatible, right?
>
There could be changes other than those bundled in the jar --
suppo
Hi /zahoor:
I don't think it is the same issue.
Did you provide the Scan object with the startkey = prefix?
something like:
Scan scan = new Scan(prefix);
My understanding is that the PrefixFilter does not seek to the key with the
prefix; therefore, the scanner basically starts from the beginning of t
Is this related to HBASE-6757 ?
I use a filter list with
- prefix filter
- filter list of column filters
/zahoor
On Monday, October 15, 2012, J Mohamed Zahoor wrote:
> Hi
>
> My scanner performance is very slow when using a Prefix filter on an
> **Encoded Column** (encoded using FAST_DIFF on
You can change table settings in the shell. To start the shell:
HBASE_HOME/bin/hbase shell
To see some examples for the "alter" command, just type it without any
arguments. Here's an example for this case:
describe 'MyTable'
disable 'MyTable'
alter 'MyTable', METHOD => 'table_att', MEMSTORE_FL
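A complete sequence might look like this (the flush size value is just an example, in bytes):

disable 'MyTable'
alter 'MyTable', METHOD => 'table_att', MEMSTORE_FLUSHSIZE => '268435456'
enable 'MyTable'
describe 'MyTable'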
Hi Lars,
To install it, can we just remove the .jar on the root directory and
replace it with this one? I'm running 0.94.0 so it might be
compatible, right?
Thanks,
JM
2012/10/12, lars hofhansl :
> Oops. Done.
>
>
>
>
> From: Aditya
> To: d...@hbase.apache.org
Hi everyone,
I have a cluster running Hadoop 0.20.3-snapshot with HBase 0.90.2.
I want to use bulk loading with HFileOutPutFormat, which works for me when
writing to 1 CF but fails for more.
I know this is solved in HBase 0.92 and my question is:
Can I upgrade from 0.90.2 to 0.92 without upgrad
Sorry Kevin, English isn't my primary language. I am so sorry.
What I need:
- I have a lot of data, currently in /hbase.old
- new data is saved to /hbase
and now I need to combine all this data, but existing rows in /hbase
must not be overwritten by rows from /hbase.old.
Maybe I can do it like this:
hadoop
Lukas,
I am not sure I understand what you are saying there. What I thought you
did was:
hadoop fs -mv /hbase /tmp/hbase.old
Restart HBase
You should now have a clean HBase
hadoop fs -mv /tmp/hbase.old/
Once you have moved all of the tables
./bin/hbase hbck -fixMeta -fixAssignments
This
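In other words, something like this for each table directory (the table name below is just an example):

hadoop fs -mv /tmp/hbase.old/mytable /hbase/mytable
./bin/hbase hbck -fixMeta -fixAssignments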
Wow, it's a perfect "hack", but what about existing rows?
For example, when I have saved a new version of a row in /hbase, I can't
have it replaced by the one from /hbase.old.
Thanks for your time
Lukas
2012/10/15 Kevin O'dell :
> Lukas,
>
> A little trick you can use is to just copy the table directories into
> your new /h
Lukas,
A little trick you can use is to just copy the table directories into
your new /hbase dir and then use:
hbck -fixMeta -fixAssignments
This will pull the tables into your new empty META. If, once you do this,
regions get stuck in transition again, you have bigger problems.
On Mon, Oct
Hi again.
> Does FSCK come back clean? Are those the regions showing in transition?
> We are not going to be able to get a clear idea of what to do next until
> we gather some more data. At this point running repairs could put your
> data in jeopardy.
Yes, this is in transition, but from hadoo
Hi Dalia,
I would highly encourage you to go through the JUnit test cases and
documentation of coprocessors to get a better understanding. Here is the
link to the HBase code base: http://svn.apache.org/repos/asf/hbase/trunk
R = The data type which was stored in HBase as Value
S = The data type whi
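To make R and S concrete, here is a sketch using the built-in LongColumnInterpreter, where both R and S are Long. It assumes the AggregateImplementation endpoint is loaded on the region servers, and the table and column names are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

Configuration conf = HBaseConfiguration.create();
AggregationClient aggClient = new AggregationClient(conf);
Scan scan = new Scan();
scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("amount"));
// Sums long-encoded cell values across the table through the aggregation endpoint.
long total = aggClient.sum(Bytes.toBytes("mytable"), new LongColumnInterpreter(), scan);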
Lukas,
Looking over what you sent me:
ERROR: Region { meta =>
twitter_users,1ffd1a52913c3fd600dedb97d1b2b8ce,1350294703353.6e3be1c26233ed744479289b00a422dd.,
hdfs =>
hdfs://hadoop-7:9000/hbase/twitter_users/6e3be1c26233ed744479289b00a422dd,
deployed => } not deployed on any region server.
ERRO
Jean-Marc:
What do you think of the approach in patch v5 from HBASE-6942 ?
Here is the sample from patch v5:
private long invokeBulkDeleteProtocol(byte[] tableName, final Scan scan,
    final int rowBatchSize, DeleteType deleteType, Long timeStamp)
    throws Throwable {
  HTable ht = new HTa
Kevin,
It's been more than an hour and the regions are still in transition.
Here is the output from hbck -details: http://pastebin.com/f9wLx9LU
Thanks for your time
Lukas
2012/10/15 Kevin O'dell :
> Lukas,
>
> Sure, how long has it been since the restart? You will need to give the
> regions time to transition
In the past when I have seen regions get locked in transition it is usually
a problem with the HMaster. Seemingly a transition starts and succeeds
between region servers, but the HMaster may miss part of that communication
and think it is still in transition. Then it keeps retrying but the node
i
Lukas,
Sure, how long has it been since the restart? You will need to give the
regions time to transition and the logs time to split. Did the regions
transition properly? Can you please put a pastebin together of the hbck
-details full output so that I can take a look at it. Once I have reviewed
i
Hello Kevin,
thanks for the response. I cleared it now and started the cluster, but now I
have many other regions stuck :(
Here is the output from hbase hbck -fixMeta:
http://pastebin.com/HqsPVLMi
Any further hints?
Thanks a lot
2012/10/15 Kevin O'dell :
> Have you tried clearing out your Znode information? Typically,
I found this: https://blogs.apache.org/hbase/entry/coprocessor_introduction
which, I think, gives all the details for calling the endpoint... So
I will give all of that a try.
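A rough sketch of what such a call looks like with the 0.92/0.94 client API (MyProtocol and getCount() are hypothetical placeholders for whatever endpoint you deploy; table, startRow and stopRow are assumed to exist already):

import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.hbase.client.coprocessor.Batch;

Map<byte[], Long> results = table.coprocessorExec(
    MyProtocol.class,            // hypothetical interface extending CoprocessorProtocol
    startRow, stopRow,           // key range selecting which regions get called
    new Batch.Call<MyProtocol, Long>() {
      public Long call(MyProtocol instance) throws IOException {
        return instance.getCount();   // hypothetical endpoint method
      }
    });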
JM
2012/10/13, Jean-Marc Spaggiari :
> Wow. Seems it's coming right on time ;)
>
> Is there any code example on the w
Have you tried clearing out your Znode information? Typically, when I have
encountered a RIT, we will bring down HBase and go to the ZKcli and clear
out /hbase. What do you see in the logs pertaining to the region? If it
is a region that has bad hfiles or something like that you will not be able
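The sequence is roughly this (a sketch; clearing /hbase in ZooKeeper is only reasonable because HBase rebuilds its znodes on startup):

./bin/stop-hbase.sh
./bin/hbase zkcli
  rmr /hbase
  quit
./bin/start-hbase.sh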
Oh sorry. My HBase version is 0.94.1 and Hadoop is 1.0.3.
I found this in the master log http://pastebin.com/CYt5PZCL and these lines
are repeated for all region servers.
Can someone help me please?
Hi all,
I now have a big problem with one region. This region is always in
transition and I don't know how to fix it.
I run "hbase hbck -repair" and this ended with:
INFO util.HBaseFsckRepair: Region still in transition, waiting for it
to become assigned: {NAME =>
'twitter_tweets,08000e806a7b8ba7d6cd
Here's a key statement:
Again, I am on grid which is used by many others, and the machines in my
cluster are not dedicated to my job. I am mainly looking at scalability
trends when running with various numbers of regionservers.
What do you notice when you monitor the cluster in ganglia?
Sent fro
Hi,
I want my Reducer to act as IdentityTableReducer based on a certain condition.
Can you please let me know if there is a neat way to achieve this instead of
populating the Put objects explicitly in the reduce method after checking the
condition.
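Something like the sketch below is what I have in mind (the condition check is a placeholder, and it assumes the reducer's input values are already Put objects, as with IdentityTableReducer):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;

public class ConditionalIdentityReducer
    extends TableReducer<ImmutableBytesWritable, Put, ImmutableBytesWritable> {
  @Override
  protected void reduce(ImmutableBytesWritable key, Iterable<Put> values, Context context)
      throws IOException, InterruptedException {
    for (Put put : values) {
      if (conditionHolds(key, put)) {      // hypothetical predicate
        context.write(key, put);           // pass through unchanged, like IdentityTableReducer
      }
      // else: skip the record, or build a different Put here
    }
  }

  private boolean conditionHolds(ImmutableBytesWritable key, Put put) {
    return true;  // placeholder for the real field-mismatch check
  }
}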
Regards,
Shobha M
__
Hello anil,
What do you mean by promoted data type?
Sent from my iPhone
On 2012-10-15, at 12:03 AM, "anil gupta" wrote:
> R- Cell value data type
> S- Promoted data type
>
>
> On Sun, Oct 14, 2012 at 3:39 PM, Dalia Sobhy
> wrote:
>
>> Hi anil,
>>
>> Whats ?
>> Sent from my iPhone
>>
>> On
Hi
Sorry if my reply misled you. I meant that you should look at the GC logs, which
should give you an idea of whether a Full GC happened.
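For reference, GC logging can be turned on in hbase-env.sh with something like this (the log path is just an example):

export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc-hbase.log"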
Regards
Ram
> -Original Message-
> From: Xiang Hua [mailto:bea...@gmail.com]
> Sent: Monday, October 15, 2012 12:42 PM
> To: user@hbase.apache.org
> Subject: Re: hmaster an
Hi
To connect to HBase, do we always need to have Pentaho installed on one of the
Hadoop cluster machines?
Can't we connect to HBase remotely using Pentaho, like:
Pentaho on a Windows machine and HBase on some Linux box.
Is it true for HDFS also?
Thanks,
Kuldeep
We will check the zk log.
On Monday, October 15, 2012, Ramkrishna.S.Vasudevan wrote:
> Check your GC configurations. Seems to that a Full GC has happened and the
> Zookeeper thought that to be session expiry.
>
> Regards
> Ram
>
> > -Original Message-
> > From: Xiang Hua [mailto:bea...@g