Hi,
I would like to use HBase with distributed filesystems other
than HDFS. Are there any plans for developing filesystem
adapters for these distributed filesystems ? (ceph, glusterfs, ...)
Best,
Bugra
Hi,
What's the best practice to calculate this value for your cluster, if there
is one?
In some situations we saw that some maps take more than the default 60
seconds, which was failing a specific map job (once it failed, it also failed
every other time, up to the number of configured retries).
I
Hello, everyone. I'm new to coprocessors, and I found that all regionservers
would abort when I deployed a broken coprocessor. To avoid this in a production
environment,
should I set hbase.coprocessor.abortonerror to false? I wonder if this option
will have any bad effect on my HBase service,
I wouldn’t call storing attributes in separate columns a ‘rigid schema’.
You are correct that you could write your data as a CLOB/BLOB and store it in a
single cell.
The upside is that it's more efficient.
The downside is that it's really an all-or-nothing fetch, and then you need to
write the
Jeetendra:
Add the following repo in your pom.xml (repositories section):
https://repository.apache.org/content/repositories/orgapachehbase-1076
Then you can use 1.1.0 for hbase version.
Cheers
On Wed, Apr 29, 2015 at 11:06 PM, Jeetendra Gangele gangele...@gmail.com
wrote:
I means how to
Please take a look at HBASE-13485
Cheers
On Apr 30, 2015, at 6:35 AM, Buğra Çakır bugra.ca...@oranteknoloji.com
wrote:
Hi,
I would like to use HBase with distributed filesystems other
than HDFS. Are there any plans for developing filesystem
adapters for these distributed
I would look at a different solution than HBase.
HBase works well because it's tied closely to the HDFS and Hadoop ecosystem.
Going outside of this… too many headaches and you’d be better off with a NoSQL
engine like Cassandra or Riak, or something else.
On Apr 30, 2015, at 8:35 AM, Buğra
${hbase.version}=1.1.0
On Thu, Apr 30, 2015 at 7:30 AM, Ted Yu yuzhih...@gmail.com wrote:
And the following:
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-protocol</artifactId>
  <version>${hbase.version}</version>
</dependency>
<dependency>
I have added the below lines but this is not bringing in the required jars
<repositories>
  <repository>
    <id>Hbase-1.1.0</id>
    <url>https://repository.apache.org/content/repositories/orgapachehbase-1076</url>
  </repository>
</repositories>
On 30 April 2015 at 19:50, Jeetendra Gangele gangele...@gmail.com
most probably something like this:
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>${hbase.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.hbase</groupId>
And the following:
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-protocol</artifactId>
  <version>${hbase.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-hadoop-compat</artifactId>
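Putting the pieces from this thread together, a pom.xml fragment for trying the 1.1.0 RC might look roughly like the following. This is only a sketch: the staging repository URL is the one Ted posted, but the artifact list shown here is assumed from the truncated snippets and may not be complete for your project.

```xml
<!-- Sketch only: repository URL from this thread; artifact list may be incomplete. -->
<properties>
  <hbase.version>1.1.0</hbase.version>
</properties>

<repositories>
  <repository>
    <id>Hbase-1.1.0</id>
    <url>https://repository.apache.org/content/repositories/orgapachehbase-1076</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>${hbase.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-protocol</artifactId>
    <version>${hbase.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-hadoop-compat</artifactId>
    <version>${hbase.version}</version>
  </dependency>
</dependencies>
```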
Guatam,
Michael makes a lot of good points. Especially the importance of analyzing your
use case for determining the row key design. We (Jive) did a talk at HBasecon a
couple years back talking about our row key redesign to vastly improve
performance. It also talks a little about the write
Unsubscribe
---- Original message ----
From: Sean Busbey bus...@cloudera.com
Date: 04/30/2015 12:57 PM (GMT-06:00)
To: dev d...@hbase.apache.org
Subject: Re: Using 1.1.0RC0 in your maven project (was [VOTE] First release candidate for HBase 1.1.0 (RC0)
+dev@hbase
-user@hbase to bcc
Adding thread to dev@hbase and moving user@hbase to bcc. Please keep
discussion on how to test the RC and make use of it in downstream projects
on the dev list.
On Thu, Apr 30, 2015 at 12:48 PM, Nick Dimiduk ndimi...@gmail.com wrote:
I've updated Stack's
The effect of setting this to false is that, if any of your coprocessors
throw unexpected exceptions, instead of aborting, the region server will
log an error and remove the coprocessor from the list of loaded
coprocessors on the region / region server / master.
This allows HBase to continue
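For reference, the setting under discussion would go in hbase-site.xml on the region servers, something like the sketch below (exact behavior depends on your HBase version, so verify against your release's documentation):

```xml
<!-- Sketch: with this set to false, a misbehaving coprocessor is unloaded
     and logged instead of aborting the region server. -->
<property>
  <name>hbase.coprocessor.abortonerror</name>
  <value>false</value>
</property>
```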
On Thu, Apr 30, 2015 at 10:19 AM, Stack st...@duboce.net wrote:
On Thu, Apr 30, 2015 at 6:35 AM, Buğra Çakır
bugra.ca...@oranteknoloji.com
wrote:
Hi,
I would like to use HBase with distributed filesystems other
than HDFS. Are there any plans for developing filesystem
adapters
We've also made HBase run on IBM GPFS.
http://en.wikipedia.org/wiki/IBM_General_Parallel_File_System
We have a Hadoop FileSystem implementation that translates hadoop calls
into GPFS native calls.
Overall it has been running well on live clusters.
Jerry
Heh.. I just did a talk at BDTC in Boston… of course at the end of the last
day… small audience.
Bucketing is a bit different from just hashing the rowkey. If you are doing
get(), then having 480 buckets isn’t a problem.
Doing a range scan over the 480 buckets makes getting your sort ordered
Exactly!
So you don’t need to know whether your table is bucketed or not.
You just put() or get()/scan() like any other table.
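As a sketch of the bucketing/salting idea discussed here (illustrative only, not HBase API; the thread mentions murmur3 with 480 buckets, so a stdlib hash stands in for murmur3 below):

```python
import hashlib

NUM_BUCKETS = 480  # bucket count mentioned in the thread

def bucket_for(key: bytes) -> int:
    # Stand-in for murmur3: any stable hash spreads keys across buckets.
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % NUM_BUCKETS

def salted_rowkey(key: bytes) -> bytes:
    # Prefix the original key with a fixed-width bucket id so writes fan
    # out across NUM_BUCKETS regions instead of hotspotting one region.
    return b"%03d-%s" % (bucket_for(key), key)

# A get() only needs the one computed bucket prefix; a sorted range scan
# must fan out over all NUM_BUCKETS prefixes and merge the results,
# which is why scans over a bucketed table are the expensive case.
print(salted_rowkey(b"message-123"))
```

This is why Michael's point holds: get() is cheap regardless of bucket count, while recovering global sort order across 480 bucket scans takes extra client-side work.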
On Apr 30, 2015, at 3:00 PM, Andrew Mains andrew.ma...@kontagent.com wrote:
Thanks all again for the replies--this is a very interesting discussion :).
I believe HBase also runs directly against Azure Blob Storage. This article
[0] gives some details; I'm not sure if it's hit GA yet.
-n
[0]:
http://azure.microsoft.com/blog/2014/06/06/azure-hdinsight-previewing-hbase-clusters-as-a-nosql-database-on-azure-blobs/
On Thu, Apr 30, 2015 at 11:46 AM,
This is a nice topic. Let's put it on the ref guide.
HBase on Azure FS is GA, and there has already been some work for
supporting HBase on the Hadoop native driver.
From this thread, my gathering is that, HBase should run on HDFS, MaprFS,
IBM GPFS, Azure WASB (and maybe Isilon, etc).
I had a
This thread is starting to sound like a new section for the ref guide. :)
--
Sean
On Apr 30, 2015 1:07 PM, Jerry He jerry...@gmail.com wrote:
We've also made HBase run on IBM GPFS.
http://en.wikipedia.org/wiki/IBM_General_Parallel_File_System
We have a Hadoop FileSystem implementation
Perfect example of why you really don’t want to allow user/homegrown
coprocessors to run.
If you’re running Ranger and a secure cluster… you have no choice: you are
running coprocessors. So you will want to shut down coprocessors that are not
launched from your hbase-site.xml file. (I forget
Thanks all again for the replies--this is a very interesting discussion :).
@Michael HBASE-12853 is definitely an interesting proposition for our
(Upsight's) use case--we've done a moderate amount of work to make our
reads over the bucketed table efficient using hive. In particular, we
added
On Apr 30, 2015 4:11 PM, Enis Söztutar e...@apache.org wrote:
This is a nice topic. Let's put it on the ref guide.
HBase on Azure FS is GA, and there has already been some work for
supporting HBase on the Hadoop native driver.
From this thread, my gathering is that, HBase should run on HDFS,
Hi,
The documentation says:
“Click the Metrics Dump link near the top. The metrics for the region server
are presented as a dump of the JMX bean in JSON format. This will dump out all
metrics names and their values. To include metrics descriptions in the listing
— this can be useful when you
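If it helps, the same dump is usually reachable directly over HTTP from the region server's info port, so you can script it instead of clicking the link. The hostname is a placeholder, and the port and query parameter are assumptions based on typical HBase 1.x defaults, so check your own cluster (this needs a live region server to run against):

```shell
# JSON dump of all region server metrics (default info port 16030 in HBase 1.x)
curl http://regionserver.example.com:16030/jmx
# Include metric descriptions in the listing
curl "http://regionserver.example.com:16030/jmx?description=true"
```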
Please take a look at 98.3 under
http://hbase.apache.org/book.html#trouble.client
BTW what's the value for hbase.hregion.max.filesize ?
Which split policy do you use ?
Cheers
On Thu, Apr 30, 2015 at 6:59 AM, Dejan Menges dejan.men...@gmail.com
wrote:
Basically how I came to this question -
On Thu, Apr 30, 2015 at 4:26 PM, Enis Söztutar e...@apache.org wrote:
The build is broken with Hadoop-2.2 because mini-kdc is not found:
[ERROR] Failed to execute goal on project hbase-server: Could not resolve
dependencies for project org.apache.hbase:hbase-server:jar:1.1.0: Could not
find
The build is broken with Hadoop-2.2 because mini-kdc is not found:
[ERROR] Failed to execute goal on project hbase-server: Could not resolve
dependencies for project org.apache.hbase:hbase-server:jar:1.1.0: Could not
find artifact org.apache.hadoop:hadoop-minikdc:jar:2.2
We are saying that 1.1
There is no single ‘right’ value.
As you pointed out… some of your Mapper.map() iterations are taking longer than
60 seconds.
The first thing is to determine why that happens. (It could be normal, or it
could be bad code on your developers part. We don’t know.)
The other thing is that if
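If the long map iterations turn out to be legitimate, one common way out is raising the task timeout in the job configuration (the other is reporting progress from the mapper so the framework doesn't consider it hung). Property name and default below assume Hadoop 2.x; verify against your distribution:

```xml
<!-- Sketch: milliseconds of silence (no input, output, or status update)
     before the framework kills a task. -->
<property>
  <name>mapreduce.task.timeout</name>
  <value>600000</value>
</property>
```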
Cool, thanks a lot for the very useful information, I really appreciate
it.
I'll take a look in the Hortonworks knowledge base as I have an account there,
and eventually check with them whether this is 'maybe' going to be part of one
of the patch sets for 2.1 - we installed the last one, 2.1.10, which really
Will it be possible for you to give me the below
artifactId, groupId and version?
If this is not possible, how do I add this repository in pom.xml?
On 30 April 2015 at 19:28, Ted Yu yuzhih...@gmail.com wrote:
Jeetendra:
Add the following repo in your pom.xml (repositories section):
On Thu, Apr 30, 2015 at 6:35 AM, Buğra Çakır bugra.ca...@oranteknoloji.com
wrote:
Hi,
I would like to use HBase with distributed filesystems other
than HDFS. Are there any plans for developing filesystem
adapters for these distributed filesystems ? (ceph, glusterfs, ...)
What are you
Nope, you're right. That link should be
https://dist.apache.org/repos/dist/dev/hbase/hbase-1.1.0RC0/
On Wed, Apr 29, 2015 at 10:39 PM, Ashish Singhi
ashish.singhi.apa...@gmail.com wrote:
Hi Nick.
bq. (HBase-1.1.0RC0) is available for download at
I mean, how to include it in pom.xml
On 30 April 2015 at 11:36, Jeetendra Gangele gangele...@gmail.com wrote:
How to include this in project code? Any sample?
On 30 April 2015 at 11:32, Nick Dimiduk ndimi...@gmail.com wrote:
Nope, you're right. That link should be
How to include this in project code? Any sample?
On 30 April 2015 at 11:32, Nick Dimiduk ndimi...@gmail.com wrote:
Nope, you're right. That link should be
https://dist.apache.org/repos/dist/dev/hbase/hbase-1.1.0RC0/
On Wed, Apr 29, 2015 at 10:39 PM, Ashish Singhi
Thanks Guys for responding!
Michael,
I indeed should have elaborated on our current rowkey design. Re:
hotspotting, we're doing exactly what you're suggesting, i.e. fanning out
into buckets where the bucket location is a hash(message_unique_fields)
(we use murmur3). So our write pattern is
This is a VOTE thread. This discussion is highly off topic. Please drop dev@
from the CC and change the subject.
On Apr 30, 2015, at 7:30 AM, Ted Yu yuzhih...@gmail.com wrote:
And the following:
<dependency>
  <groupId>org.apache.hbase</groupId>