This is what I was getting at. Then I am wondering whether it will be a security
threat, as I have to expose all the region server IPs to the remote system
from which I will be connecting to the HBase tables.
Regards,
KG
On Wed, Apr 2, 2014 at 5:56 PM, 刘磊 zlen...@gmail.com wrote:
You should specify all
Hi all,
I'm running Hadoop 1.0.4 and HBase 0.94.12.
I'm also running a Hadoop and HBase OSGi client where most modifications
were in the configuration objects (Hadoop and HBase), whose class loader is set
to the bundle's CL instead of the TCCL.
I managed to successfully execute MapReduce jobs writing to
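For illustration, a minimal sketch of the kind of change described above (the
class name is hypothetical; org.apache.hadoop.conf.Configuration does expose a
setClassLoader method):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class OsgiHBaseClient {
    public static Configuration createConf() {
      // In an OSGi container the thread context class loader (TCCL) is not
      // the bundle's class loader, so Hadoop/HBase class lookups can fail
      // unless the configuration is pointed at the bundle's CL explicitly.
      Configuration conf = HBaseConfiguration.create();
      conf.setClassLoader(OsgiHBaseClient.class.getClassLoader());
      return conf;
    }
  }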
After analysing HBASE-10850, I think it would be better to fix this in the 0.98.1
release itself. Also, Phoenix plans to use 0.98.1, and Phoenix uses the essential
CF optimization.
Also, HBASE-10854 can be included in 0.98.1 in such a case.
Considering those, we need a new RC.
-Anoop-
On Tue, Apr 1, 2014 at 10:19
I agree with Anoop's assessment.
Cheers
On Apr 3, 2014, at 2:19 AM, Anoop John anoop.hb...@gmail.com wrote:
After analysing HBASE-10850, I think it would be better to fix this in the 0.98.1
release itself. Also, Phoenix plans to use 0.98.1, and Phoenix uses the essential
CF optimization.
Also HBASE-10854
Can we also include HBASE-10848 in 0.98.1?
This is another issue related to filters, not as critical as HBASE-10850
but it would be great to have it in this release.
On Thu, Apr 3, 2014 at 11:21 AM, Ted Yu yuzhih...@gmail.com wrote:
I agree with Anoop's assessment.
Cheers
On Apr 3, 2014,
This is already committed. So if we have a new RC, this issue will also get
in.
-Anoop-
On Thu, Apr 3, 2014 at 3:26 PM, Fabien LE GALLO flega...@ubikod.com wrote:
Can we also include HBASE-10848 in 0.98.1?
This is another issue related to filters, not as critical as HBASE-10850
but it would
I will sink this RC and roll a new one tomorrow.
However, I may very well release the next RC even if I am the only +1 vote and
testing it causes your workstation to catch fire. So please take the time to
commit whatever you feel is needed to the 0.98 branch or file blockers against
0.98.1 in
Understood, Andy.
I have integrated the fix for HBASE-10850 into 0.98
Cheers
On Thu, Apr 3, 2014 at 3:00 AM, Andrew Purtell andrew.purt...@gmail.com wrote:
I will sink this RC and roll a new one tomorrow.
However, I may very well release the next RC even if I am the only +1 vote
and testing it
Will also target HBASE-10899 by that time, then.
Regards
Ram
On Thu, Apr 3, 2014 at 3:47 PM, Ted Yu yuzhih...@gmail.com wrote:
Understood, Andy.
I have integrated the fix for HBASE-10850 into 0.98
Cheers
On Thu, Apr 3, 2014 at 3:00 AM, Andrew Purtell andrew.purt...@gmail.com
wrote:
I will
Hi All,
I have around 20-30 geographically distant clients that need to
write data to a centralized HBase server. I have a dedicated VPN for the
communication, and hence bandwidth won't be an issue. Is it a good idea to
have the clients send data directly to the centralized server? Or a
Regions hosted by the server may be moved to other servers.
Can you clarify what you meant by directly writing to the server?
Thanks
On Apr 3, 2014, at 5:59 AM, Manthosh Kumar T manth...@gmail.com wrote:
Hi All,
I have around 20-30 geographically distant clients that need to
write
I mean directly interacting with the remote zookeeper. Say I'm able to
access the zk server and hbase server externally.
On 3 April 2014 18:54, Ted Yu yuzhih...@gmail.com wrote:
Regions hosted by the server may be moved to other servers.
Can you clarify what you meant by directly writing to
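For concreteness, a remote client only needs the ZooKeeper quorum in its
client-side configuration, along these lines (host names and table name are
hypothetical, 0.94-era API); note that after the ZooKeeper lookup the client
still opens connections directly to the region servers, which is where the IP
exposure mentioned earlier in the thread comes from:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;

  public class RemoteClient {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Point the client at the remote ZooKeeper ensemble.
      conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com");
      conf.set("hbase.zookeeper.property.clientPort", "2181");
      HTable table = new HTable(conf, "mytable");
      // ... reads and writes then go straight to the region servers ...
      table.close();
    }
  }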
Pardon me if I am missing anything, like any network issues.
On 3 April 2014 18:57, Manthosh Kumar T manth...@gmail.com wrote:
I mean directly interacting with the remote zookeeper. Say I'm able to
access the zk server and hbase server externally.
On 3 April 2014 18:54, Ted Yu yuzhih...@gmail.com
I will say, a remote client connecting to a cluster is fine. But a cluster
spread over multiple physical sites is not at all a good idea.
2014-04-03 9:28 GMT-04:00 Manthosh Kumar T manth...@gmail.com:
Pardon me if I am missing anything, like any network issues.
On 3 April 2014 18:57, Manthosh Kumar
Is that a good idea even if I don't have a VPN? Will it be efficient over a
fairly good connection?
On 3 April 2014 19:25, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote:
I will say, a remote client connecting to a cluster is fine. But a cluster
spread over multiple physical sites is not at
Efficient? Probably not ;) But it's as if you are connecting your client
app to a webserver local to the cluster, and then the webserver connects to
the cluster.
I don't like the idea of having the cluster accessible from the outside and
usually prefer to have some kind of a gateway, but that's your
Hi Jean,
Thanks. I might sound a bit lame. Can you just elaborate on the
gateway part? What is the best practice?
On 3 April 2014 19:30, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote:
Efficient? Probably not ;) But it's as if you are connecting your client
app to a webserver
I need data versioning but want to keep older data in a separate location (to
keep the current data file denser). What would be the best way to do that?
I implore you to stick with releasing RC3. Phoenix 4.0 has no release it
can currently run on. Phoenix doesn't use SingleColumnValueFilter, so it
seems that HBASE-10850 has no impact on Phoenix. Can't we get these
additional bug fixes into 0.98.2? It's one month away [1].
James
[1]
James:
HBASE-10850 is not just about SingleColumnValueFilter. See Anoop's comment:
https://issues.apache.org/jira/browse/HBASE-10850?focusedCommentId=13958668&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13958668
The test case Fabien provided uses
It's just the optimization that's (sometimes) broken, right? The scan
still returns the correct results, no?
On Apr 3, 2014, at 9:13 AM, Ted Yu yuzhih...@gmail.com wrote:
James:
HBASE-10850 is not just about SingleColumnValueFilter. See Anoop's comment:
We are working on a backup/restore solution in
https://issues.apache.org/jira/browse/HBASE-7912, which will use snapshot
and ExportSnapshot for full backups and WALPlayer for incremental
backups. The patches are coming.
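As a rough usage sketch, a full backup along those lines would be a snapshot
taken in the HBase shell (snapshot 'mytable', 'snap1') followed by the stock
ExportSnapshot tool (snapshot name and destination are hypothetical):

  hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot snap1 \
      -copy-to hdfs://backup-cluster:8020/hbase -mappers 16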
For critical data, real-time replication is the way to go:
I don't think so.
Please take a look at the new test, TestSCVFWithMiniCluster.
It exposes the defect Fabien reported. Without the fix, two of the
sub-tests in TestSCVFWithMiniCluster would fail.
Cheers
On Thu, Apr 3, 2014 at 9:20 AM, James Taylor jtay...@salesforce.com wrote:
It's just the
Ted,
you are right. We are targeting HBASE-7912 for 1.0 and 0.98, with 1.0 as
the priority for now. :-)
BTW, we have some code to leverage HBASE-9426 so that we can do a distributed
log roll at the RS level before taking a snapshot. I will open a JIRA to share
that code for discussion purposes.
Demai
This would be my preference also.
Can someone provide a definitive statement on whether a critical/blocker bug
exists for Phoenix or not? If not, we have sufficient votes at this point to carry
the RC and can go forward with the release at the end of the vote period.
On Apr 3, 2014, at 5:57 PM,
+1 to Andrew's suggestion. @Anoop - would you mind verifying whether or not
the TestSCVFWithMiniCluster scenario, written as a Phoenix query, returns the
correct results?
On Thu, Apr 3, 2014 at 9:34 AM, Andrew Purtell andrew.purt...@gmail.com wrote:
This would be my preference also.
Can someone provide
+1 on pushing out 0.98.1. HBASE-10850 is not a blocker.
St.Ack
On Thu, Apr 3, 2014 at 9:34 AM, Andrew Purtell andrew.purt...@gmail.com wrote:
This would be my preference also.
Can someone provide a definitive statement on whether a critical/blocker bug
exists for Phoenix or not? If not, we have
Hey, that's one of the reasons I opened HBASE-10115 but never got a
chance to work on it. Basically, set up a TTL on the column and, with the
hook, move the cells somewhere else.
In the current state, the only thing I see is an MR job that will run daily
and move the older versions. Like,
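A rough sketch of the core of such a daily job (table names and the 30-day
cutoff are hypothetical; shown as a plain client-side scan against the
0.94-era API rather than full MapReduce plumbing):

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.KeyValue;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;

  public class VersionArchiver {
    public static void main(String[] args) throws IOException {
      Configuration conf = HBaseConfiguration.create();
      HTable source = new HTable(conf, "data");            // hypothetical
      HTable archive = new HTable(conf, "data_archive");   // hypothetical
      long cutoff = System.currentTimeMillis() - 30L * 24 * 3600 * 1000;
      Scan scan = new Scan();
      scan.setMaxVersions();         // fetch every stored version
      scan.setTimeRange(0, cutoff);  // but only cells older than the cutoff
      ResultScanner scanner = source.getScanner(scan);
      for (Result r : scanner) {
        Put p = new Put(r.getRow());
        for (KeyValue kv : r.raw()) {
          // carry the original timestamps over to the archive table
          p.add(kv.getFamily(), kv.getQualifier(), kv.getTimestamp(),
              kv.getValue());
        }
        archive.put(p);
        // deleting from the source is left to TTL / max-versions settings
      }
      scanner.close();
      source.close();
      archive.close();
    }
  }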
I have seen that every 10 minutes on my scheduled MR jobs for the last few
months without any issue. I think it should be a WARN and not an ERROR.
You can just ignore it.
JM
2014-04-03 4:53 GMT-04:00 Amit Sela am...@infolinks.com:
Hi all,
I'm running Hadoop 1.0.4 and HBase 0.94.12.
I'm also running
To be fair, Phoenix should not have relied on an unreleased dependency. (I know
there are corporate timing issues, but they really should not force us into
situations like these.)
As far as I understand the issue, it is not just a performance problem but one
that can lead to incorrect results.
Then again, this
I think the right understanding of this is that it will slow down data query
processing. You can think of the RS that hits heavy I/O as a hotspot node. It
will not slow down the whole cluster; it will only slow down the
applications which access data from that RS.
On Thu, Apr 3, 2014 at
I think you can define coprocessors to do this. For example, for every put
command, you can keep the desired number of versions and put the
older versions into another table or HDFS. Finally, either let HBase
delete your stale data or let the coprocessor do that for you. The problem of
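A minimal sketch of that coprocessor idea (0.94-era API; the archive table
name is hypothetical and error handling is omitted): before each Put, the
observer copies the versions currently stored for the touched row into an
archive table, so they survive once VERSIONS or TTL evicts them.

  import java.io.IOException;
  import org.apache.hadoop.hbase.KeyValue;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.HTableInterface;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
  import org.apache.hadoop.hbase.coprocessor.ObserverContext;
  import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
  import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
  import org.apache.hadoop.hbase.util.Bytes;

  public class VersionArchivingObserver extends BaseRegionObserver {
    private static final byte[] ARCHIVE = Bytes.toBytes("data_archive");

    @Override
    public void prePut(ObserverContext<RegionCoprocessorEnvironment> e,
        Put put, WALEdit edit, boolean writeToWAL) throws IOException {
      Get get = new Get(put.getRow());
      get.setMaxVersions();
      Result existing = e.getEnvironment().getRegion().get(get, null);
      if (existing.isEmpty()) return;
      Put archived = new Put(put.getRow());
      for (KeyValue kv : existing.raw()) {
        // keep the original timestamps so the version history is preserved
        archived.add(kv.getFamily(), kv.getQualifier(), kv.getTimestamp(),
            kv.getValue());
      }
      HTableInterface archive = e.getEnvironment().getTable(ARCHIVE);
      try {
        archive.put(archived);
      } finally {
        archive.close();
      }
    }
  }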
"I think having one such slow RS will make the whole cluster work slower
(basically, at its speed)" is not 100% accurate.
The slowness usually affects HDFS and ripples into HBase in many
different ways. I've seen cases where the DN process was not started and
HBase needed to get all the blocks
That is a feasible option.
I have changed the Fix Version of HBASE-10850 to 0.98.2
Cheers
On Thu, Apr 3, 2014 at 12:16 PM, lars hofhansl la...@apache.org wrote:
To be fair, Phoenix should not have relied on an unreleased dependency. (I
know there are corporate timing issues, but they really
I logged HBASE-10906 and attached a patch there.
Cheers
On Thu, Apr 3, 2014 at 12:00 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
I have seen that every 10 minutes on my scheduled MR jobs for the last few
months without any issue. I think it should be a WARN and not an ERROR.
You can just
Hi,
Is it possible to skip unresponsive regions in HBase table export/import? I
am trying to migrate my table from HBase 0.90.5 to HBase 0.94.6. For that, I
am using the HBase export tool. As it internally spins up a MapReduce job
for this, it fails when a few regions do not respond.
Is
bq. reader=hdfs://machine/hbase/table/c4713d144d1fa6bdfc937b570ebc14e2/column/4884593935967971789,
Can you use the HFile tool to check whether there is data corruption in the
file?
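For reference, assuming the 0.94-era options, the tool can be run against the
file from the log line like this (-m prints the metadata, -k checks key
ordering, -v is verbose; the path placeholders stand for the one reported
above):

  hbase org.apache.hadoop.hbase.io.hfile.HFile -v -m -k \
      -f hdfs://machine/hbase/table/<region>/column/<hfile>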
Cheers
On Thu, Apr 3, 2014 at 4:24 PM, sriram vsrira...@gmail.com wrote:
Hi,
Is it possible to skip
HBase-0.96.2 is now available for download:
http://www.apache.org/dyn/closer.cgi/hbase/
HBase-0.96.2 is available in Hadoop 1 and Hadoop 2 bundles. Pick the
package that suits your environment. You can do a rolling upgrade onto this
release from previous 0.96.x releases.
179 issues have been
Hi, mailing list:
I use the following process to clean up old HBase data regions:
1. remove the region info from the .META. table
2. remove the region directory from HDFS
But when I recheck using http://192.168.10.22:60010/master-status, I find
that the region count has not decreased. Why?
Phoenix 4.0 has no release it can currently run on
Can't we get these additional bug fixes into 0.98.2? It's one month away
I was thinking that for the Phoenix 4.0 *release*, 0.98.1 is needed. That's
why I was in favor of fixing the bug in 0.98.1 itself. Yes, 0.98.2 can come
out in a month's time, and at that
+1 on getting this RC3 out as the release and targeting the bug for
0.98.2.
Regards
Ram
On Fri, Apr 4, 2014 at 7:49 AM, Anoop John anoop.hb...@gmail.com wrote:
Phoenix 4.0 has no release it can currently run on
Can't we get these additional bug fixes into 0.98.2? It's one month away
I was
Great!
On Fri, Apr 4, 2014 at 7:44 AM, Stack st...@duboce.net wrote:
HBase-0.96.2 is now available for download:
http://www.apache.org/dyn/closer.cgi/hbase/
HBase-0.96.2 is available in Hadoop 1 and Hadoop 2 bundles. Pick the
package that suits your environment. You can do a rolling
Hi, all
I came across this problem in the early morning several days ago. It
happened when I used the hadoop completebulkload command to bulk load some HDFS
files into an HBase table. Several regions hung, and after three retries
they threw RegionTooBusyExceptions. Fortunately, I caught one of the
Hi,
We are currently on 0.94.2 (CDH 4.2.1) and would likely upgrade to 0.94.15
(CDH 4.6), primarily to use the above fix. We have turned off automatic major
compactions. We load data into an HBase table every 2 minutes. Currently, we
are not using bulk load, since it created compaction issues.
Hi Rahul, you may set hbase.server.thread.wakefrequency.multiplier to a
small number (the default is 1000), so the CompactionChecker will run more often.
2014-04-04 11:42 GMT+08:00 Rahul Ravindran rahu...@yahoo.com:
Hi,
We are currently on 0.94.2(CDH 4.2.1) and would likely upgrade to
0.94.15
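For reference, that suggestion corresponds to an hbase-site.xml entry along
these lines (the value 10 is only an illustrative choice; the property name
and its default of 1000 come from the suggestion above):

  <property>
    <name>hbase.server.thread.wakefrequency.multiplier</name>
    <value>10</value>
  </property>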
Looking below the 'parking to wait for' line, we see:
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4840)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2279)
For HRegionServer.java at the tip of 0.94, line 2279 is in the put() call.
What version of
Hi Ted,
Thanks for your reply. I ran the check and it is not showing any problems.
From the error, I feel it looks more like a region server issue. Any inputs?
Thanks,
V.Sriram
Hi,
I have some questions about the secure configuration of multiple
regionservers. Our cluster uses hbase-0.94 and hadoop-2.2.0, and we use
Kerberos to ensure the security of our cluster. However, when I try to configure
multiple regionservers, the properties of the file called hbase-site.xml