Hey Water,
Just as an FYI, LGPL is still considered incompatible with the Apache
License and so will generally be a non-starter for organizations that pay
attention to Category X dependencies (see
https://www.apache.org/legal/resolved.html#category-x).
On Mon, May 28, 2018 at 12:22 AM Water Guo
LGPL is not compatible with ASL, either. See:
https://www.apache.org/legal/resolved.html#category-x
-Dima
On Wed, May 9, 2018 at 10:59 AM, Sanel Zukan wrote:
> What about Connector/J from MariaDB? It is LGPL and (I think) that
> should make it easier to mix with Apache
Hey Muni,
This is probably a better question for Cloudera support. Especially if you
happen to be using Cloudera Manager, the normal command line functionality
may not apply.
-Dima
On Mon, Mar 19, 2018 at 10:41 AM, Muni Adusumalli wrote:
> Hi,
>
> We have a cloudera cluster
I'd think journalctl would have more logs on systemd stuff, no? Can you try
running that against your service (journalctl -u <unit>, if memory serves)?
My guess from the capitalized "Cannot" is that it's running some command
which is failing and then trying to pass that failing arg's output into
the
Yay Mike!
-Dima
On Tue, Aug 1, 2017 at 9:26 AM, Jonathan Hsieh wrote:
> Congrats Mike!
>
> On Tue, Aug 1, 2017 at 8:38 AM, Josh Elser wrote:
>
> > On behalf of the Apache HBase PMC, I'm pleased to announce that Mike Drob
> > has accepted the PMC's
Presplitting tables is typically how this is addressed in production cases.
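From the HBase shell, that's done at table-creation time with something like `create 't1', 'cf', SPLITS => ['10', '20', '30']`. As a sketch of how you might compute those split points when row keys are evenly distributed hex strings (a hypothetical helper, not part of HBase; adjust the key space to your own row-key design):

```python
# Sketch: compute the split keys needed to presplit a table into N regions,
# assuming row keys are uniformly distributed over a two-hex-character prefix.
def hex_split_keys(num_regions):
    """Return num_regions - 1 evenly spaced split points over 00-ff."""
    step = 256 / num_regions
    return ["%02x" % int(step * i) for i in range(1, num_regions)]

print(hex_split_keys(4))  # ['40', '80', 'c0'] -> four regions
```

Those strings can then be dropped into the `SPLITS =>` list (or you can look at HBase's own RegionSplitter utility, which serves a similar purpose).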
On Thu, Jul 27, 2017 at 12:17 PM jeff saremi wrote:
> We haven't done enough testing for me to say this with certainty but as we
> insert data and new regions get created, it could be a while
-Dima
On Tue, Jul 18, 2017 at 9:52 PM, Udbhav Agarwal <udbhav.agar...@syncoms.com>
wrote:
> Okay, at what scale do you have experience?
>
> -----Original Message-----
> From: Dima Spivak [mailto:dimaspi...@apache.org]
> Sent: Monday, July 17, 2017 7:40 PM
> To: user@hb
> -----Original Message-----
> From: Dima Spivak [mailto:dimaspi...@apache.org]
> Sent: Monday, July 17, 2017 6:37 PM
> To: user@hbase.apache.org
> Subject: Re: Hbase on docker container with persistent storage
>
> Hi Udbhav,
>
> How have you containerized HDFS to run
So I will need to use hbase deployed on 70-80
> servers.
> Now can you please let me know how I can containerize hbase
> so as to be able to use hbase backed by hdfs using 70-80 host machines and
> not lose data if the container itself dies due to some reason?
>
there? Is this the best
> way ?
>
>
> Thanks,
> Udbhav
> -----Original Message-----
> From: Dima Spivak [mailto:dimaspi...@apache.org]
> Sent: Friday, July 14, 2017 3:44 AM
> To: hbase-user <user@hbase.apache.org>
> Subject: Re: Hbase on docker container with
Udbhav,
Volumes are Docker's way of having folders or files from the host machine
bypass the union filesystem used within a Docker container. As such, if a
container with a volume is killed, the data from that volume should remain
there. That said, if whatever caused the container to die affects
Yay Allan!
On Thu, Jun 8, 2017 at 8:49 PM Yu Li wrote:
> On behalf of the Apache HBase PMC, I am pleased to announce that Allan Yang
> has accepted the PMC's invitation to become a committer on the
> project. We appreciate all of Allan's generous contributions thus far and
>
>
> >
> >
> > The only place where I mentioned /solr is at the indexer creation using
> :
> >
> >
> > ./hbase-indexer add-indexer -n myindexer -c
> > ../Fred_Indexer/indexdemo-indexer.xml
> -cp solr.zk=MyIpAddress:2181/solr -cp solr.collection=c
h the
> HBase source code.
>
>
> ____
> From: Dima Spivak <dimaspi...@apache.org>
> Sent: Friday, May 26, 2017 9:58:00 AM
> To: hbase-user
> Subject: Re: What is Dead Region Servers and how to clear them up?
>
> Sending this back to the user mail
Sending this back to the user mailing list.
RegionServers can die for many reasons. Looking at your RegionServer log
files should give hints as to why it's happening.
-Dima
On Fri, May 26, 2017 at 9:48 AM, jeff saremi wrote:
> I had posted this to the user mailing
+1
-Dima
On Mon, Apr 10, 2017 at 12:08 PM, Stack wrote:
> I agree we should EOL 0.98.
> St.Ack
>
> On Mon, Apr 10, 2017 at 11:43 AM, Andrew Purtell
> wrote:
>
> > Please speak up if it is incorrect to interpret the lack of responses as
> > indicating
zookeeper.property.clientPort
>
> 2181
>
> The special place is the file /etc/hosts with one IP mapping to two
> hostnames on all nodes, so it will have the message:
>
> ...
>
> the
Hi C R,
Like many Hadoop-like services, HBase is pretty temperamental about
requiring forward and reverse DNS to work properly. FWIW, the configuration
file where you can populate RegionServers doesn't tend to matter as long as
the hbase-site.xml file is populated correctly (it's just used to
so I can start looking more into that area.
>
> I found a few which were recommended in blogs but still I am not sure about
> these.
>
> Can you please help me figure out the best one? Fast reading and writing
> were the major areas.
>
> Than
Hi Mich,
How many files are you looking to store? How often do you need to read
them? What's the total size of all the files you need to serve?
Cheers,
Dima
On Mon, Nov 28, 2016 at 7:04 AM Mich Talebzadeh
wrote:
> Hi,
>
> Storing XML file in Big Data. Are there any
The HBase reference guide [1] has some suggestions with respect to sizing
of hardware and tuning for performance. There is no industry standard
because the best way to configure HBase is dependent upon how you are using
HBase.
As for the ZK question, Google can point you to resources that
Here's a link:
https://www.youtube.com/channel/UCy25rIFxWRBokFg-2Cm83BQ/videos?sort=dd_id=0=0
;) Stack
On Thursday, October 27, 2016, Stack wrote:
> Some good stuff in here:
>
> + BigTable Lead on some interesting tricks done to make the service more
> robust
> + Compare of
Hey Cheyenne,
HBase itself only provides primitives for operations like put, get, and
scan, so you'd need to implement any particular search algorithms at the
application level or seek out existing projects that could add such
functionality. Projects like Giraph and Titan come to mind.
On
o will get what value?
>
> Thanks
> Manjeet
>
> On Mon, Oct 24, 2016 at 12:12 AM, Dima Spivak <dimaspi...@apache.org> wrote:
>
> > Unless told not to, HBase will always write to memory and append to the
> WAL
> > on disk before return
Unless told not to, HBase will always write to memory and append to the WAL
on disk before returning and saying the write succeeded. That's by design
and the same write pattern that companies like Apple and Facebook have
found works for them at scale. So what's there to solve?
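As a toy illustration of that ordering (a sketch of the durability contract, not HBase internals): the WAL append happens before the in-memory write and before the client is acknowledged, so any acknowledged write can be replayed after a crash.

```python
# Toy model of the HBase write path: append to the WAL first, then update
# the memstore, and only then acknowledge the client.
class ToyRegionServer:
    def __init__(self):
        self.wal = []       # durable, append-only log (stand-in for HDFS)
        self.memstore = {}  # in-memory store, flushed to HFiles later

    def put(self, row, value):
        self.wal.append((row, value))  # 1. durability first
        self.memstore[row] = value     # 2. then the in-memory write
        return True                    # 3. only now ack the client

    def recover(self):
        # After a crash, replaying the WAL rebuilds the memstore contents.
        return dict(self.wal)

rs = ToyRegionServer()
rs.put("row1", "v1")
assert rs.recover() == {"row1": "v1"}  # an acked write survives a restart
```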
On Sunday, October
be able to run two isolated(not resource wise) hbase instances.
>
> Again, thanks a lot for the tip. I may give it a try if simple
> configuration change not available.
>
> Demai
>
> Demai
>
> On Thu, Oct 20, 2016 at 3:04 PM, Dima Spivak <dimaspi...@apache.org>
It can be lots of things, Manjeet. You've gotta do a bit of troubleshooting
yourself first; a long dump of your machine specs doesn't change that.
Can you describe what happened before/after the node went down? The log
just says server isn't running, so we can't tell much from that alone.
-Dima
Any reason to not use the container way via clusterdock [1]? I do
replication testing on my Mac for this using it and have had pretty good
results.
1.
http://blog.cloudera.com/blog/2016/08/multi-node-clusters-with-cloudera-quickstart-for-docker/
-Dima
On Thu, Oct 20, 2016 at 2:51 PM, Demai Ni
Hey Alexander,
Could something be amiss in your network settings? Seeing phantom datanodes
could be tripping things up. Are these physical machines or instances in
the cloud?
On Monday, October 17, 2016, Alexander Ilyin wrote:
> Hi,
>
> We have a 7-node HBase cluster
Congrats, Stephen!
-Dima
On Fri, Oct 14, 2016 at 11:27 AM, Enis Söztutar wrote:
> On behalf of the Apache HBase PMC, I am happy to announce that Stephen has
> accepted our invitation to become a PMC member of the Apache HBase project.
>
> Stephen has been working on HBase for
Yeah, just to reinforce what Ted is saying, DO NOT run HDFS's balancer if
you use HBase. Doing so will move blocks in such a way as to destroy data
locality and negatively impact HBase performance (until a major compaction
in HBase is done).
On Friday, October 7, 2016, Ted Yu
I think you might have better luck with Phoenix questions on the Phoenix
user mailing list. :)
-Dima
On Wed, Oct 5, 2016 at 7:34 AM, Mich Talebzadeh
wrote:
> Thanks John.
>
> 0: jdbc:phoenix:rhes564:2181> select "Date","volume" from "tsco" where
> "Date" =
Hey Kumar,
The ref guide section on enabling security for the Thrift gateway [1] is a
good place to start. Have you gone through that?
1. http://hbase.apache.org/book.html#security.gateway.thrift.doas
-Dima
On Tue, Oct 4, 2016 at 4:59 AM, kumar r wrote:
> Hi,
>
> I need
You hit the HBase user list instead of the Trafodion one. :) Moving
user@hbase.apache.org to bcc.
-Dima
On Tue, Oct 4, 2016 at 8:21 AM, Dave Birdsall
wrote:
> Forwarding this to the Trafodion user list.
>
>
>
> *From:* helloHuiW [mailto:notificati...@github.com]
>
Sounds like the problem Ted Malaska was trying to solve with his
multicluster client [1], though not sure that has gone anywhere for a
while.
1. https://github.com/tmalaska/HBase.MCC
On Monday, September 26, 2016, Sreeram wrote:
> Dear All,
>
> Please let me know your
Hey Deepak,
Assuming I understand your question, I think you'd be better served
reaching out to MapR directly. Our community isn't involved in M7 so the
average user (or dev) wouldn't know about the ins and outs of that
offering.
On Wednesday, September 21, 2016, Deepak Khandelwal <
Hey Karthik,
This blog post [1] by our very own JD Cryans is a good place to start
understanding bulk load.
1.
http://blog.cloudera.com/blog/2013/09/how-to-use-hbase-bulk-loading-and-why/
On Wednesday, September 21, 2016, karthi keyan
wrote:
> Can any one please
At what rate are you ingesting data, Viswa?
On Monday, September 19, 2016, Viswanathan J
wrote:
> Thanks Eric.
>
> So addition of new region servers will not impact the regions which I
> splitted as 3 while writing and reading.
>
> After adding 2 region servers I
I'd worry about doing this from both the client-server compatibility side
as well as for when it comes to upgrades. Having to go between Java
versions is way scarier for ops people than just swapping JARs.
On Thursday, September 8, 2016, Duo Zhang wrote:
> The main reason
Yay Duo!
On Tuesday, September 6, 2016, Stack wrote:
> On behalf of the Apache HBase PMC I am pleased to announce that 张铎
> has accepted our invitation to become a PMC member on the Apache
> HBase project. Duo has healthy notions on where the project should be
> headed and
-master.$NETWORK:$(docker inspect -f
> "{{with index .NetworkSettings.Networks \"${NETWORK}\"}}{{.IPAddress}}{{end}}"
> hadoop-master.$NETWORK) \
> --add-host zk.$NETWORK:$(docker inspect -f "{{with index
> .NetworkSettings.Networks \"${NETWOR
think this is
> >> the main difference compared to what I am doing which is creating a
> bridge
> >> network for the hadoop cluster.
> >> I have only 3 machines: hadoop-master, hadoop-slave1, hadoop-slave2.
> >>
> >> Why do those strange hadoop-slav
art of the hostname.
> Any idea what it is happening in my case?
>
> Pierre
>
> > On 5 Sep 2016, at 16:48, Dima Spivak <dimaspi...@apache.org> wrote:
> >
> > You should try the Apache HBase topology for clusterdock that was
> committed
>
You should try the Apache HBase topology for clusterdock that was committed
a few months back. See HBASE-12721 for details.
On Sunday, September 4, 2016, Pierre Caserta
wrote:
> Hi,
> I am building a fully distributed hbase cluster with unmanaged zookeeper.
> I pretty
Perhaps it'd be useful to have performance benchmarks with an Isilon setup
vs. a non-Isilon setup? I saw a graphic promising 50x better performance
for [ostensibly un-released] "Project Nitro," but don't get any sense of
how much faster things would be for a user today. There's also been a lot
of
tlook.com>
> Sent: Wednesday, August 31, 2016, 9:48:07 AM
> To: user@hbase.apache.org
> Subject: RE: How to deal OutOfOrderScannerNextException
>
> Thanks for your help.
>
> I have to upgrade my client dependencies.
>
>
> I will
wrote:
> Because I have a number of hbase cluster.
>
> They are different version.
>
>
> Legacy Hbase cluster version is 0.96.2-hadoop2.
>
> So I have to maintain 0.96.2-hadoop2.
>
> ________
> From: Dima Spivak <dspi...@cloudera.com
Any reason to not use the 1.2.2 client library? You're likely hitting a
compatibility issue.
On Tuesday, August 30, 2016, Kang Minwoo <minwoo.k...@outlook.com> wrote:
> Hi Dima Spivak,
>
>
> Thanks for interesting my problem.
>
>
> Hbase server version is 1.2.2
>
If you're trying to unsubscribe from HBase's mailing lists, go to
https://hbase.apache.org/mail-lists.html and follow the instructions.
On Tuesday, August 30, 2016, Mark Prakash wrote:
> I am no longer involved in Hadoop, can someone please tell me how to un
>
Hey Minwoo,
What version of HBase are you running? Also, can you post an excerpt of the
code you're trying to run when you get this Exception?
On Tuesday, August 30, 2016, Kang Minwoo wrote:
> Hello Hbase users.
>
>
> While I used the HBase client library in Java, I got
>
Moving dev@ to bcc. Please don't email the developer mailing list with
questions about how to set up HBase for your use case.
On Monday, August 29, 2016, Manjeet Singh
wrote:
> I want ot add few more points
>
> I am using Java native Api for Hbase get/put
>
> and
(Though if it is only 7 GB, why not just store it in memory?)
On Sunday, August 28, 2016, Dima Spivak <dspi...@cloudera.com> wrote:
> If your data can all fit on one machine, HBase is not the best choice. I
> think you'd be better off using a simpler solution for small data and l
dont want to invest into another DB like Dynamo, Cassandra and Already
> are in the Hadoop Stack. Managing another DB would be a pain. Why HBase
> over RDMS, is because we call HBase via Spark Streaming to lookup the keys.
>
> Manish
>
> On Mon, Aug 29, 2016 at 1:47 PM, Dima S
Hey Manish,
Just to ask the naive question, why use HBase if the data fits into such a
small table?
On Sunday, August 28, 2016, Manish Maheshwari wrote:
> Hi,
>
> We have a scenario where HBase is used like a Key Value Database to map
> Keys to Regions. We have over 5
And what kind of performance do you see vs. what you expect to see? How big
is your cluster in production/how much total data will you be storing in
production?
On Sunday, August 28, 2016, Manjeet Singh
wrote:
> Hi
> I performed this testing on 2 node cluster where
Can you give us more specifics about what kind of performance you're
expecting, Manjeet, and what kind of performance you're actually seeing?
Also, how big is your cluster (i.e. number of nodes, amount of RAM/CPU per
node)? It's also important realize that performance can be impacted by the
write
0.94 in order not to
> provide down time
>
> what is the best way to copy data from a 0.94 cluster to a new cluster of
> different hbase major versions ?
>
> can you give me some link ?
>
> Thanks
> Enrico
>
>
> Il giorno ven, 26/08/2016 alle 00.04 -0700, Dima Spivak ha
I would say no; 0.94 is not wire compatible with 1.2.2 because the former
uses Hadoop IPC and the latter uses protocol buffers. Sorry, Enrico.
On Friday, August 26, 2016, Enrico Olivelli - Diennea <
enrico.olive...@diennea.com> wrote:
> Hi,
> I would like to connect to both a 0.94 hbase cluster
Hi George,
All rows? Or just a handful of rows? How big is your table?
On Thursday, August 25, 2016, GEORGE, MURALIDHARAN wrote:
> Hello Support Team
>
> Greetings!
>
> I have a question in HBase:
>
> I want to copy data from some rows in one HBase table to another HBase
>
That HDFS didn't have the change. See HADOOP-6857 for details.
On Thursday, August 25, 2016, marjana wrote:
> Hm I only see one number:
>
> 2.4 M /apps/hbase/data/data/default/FACT_AMERICAN
>
> This is 2.3.0 version.
>
Hey Manjeet,
How much data are you actually trying to get the last 1000 records for? If
you're dealing at the scale of only millions of rows, HBase may not be the
best choice for this type of problem.
On Wed, Aug 24, 2016 at 12:05 PM, Manjeet Singh
wrote:
> Hi all
>
> Thanks,
> Anil
>
> On Sat, Aug 20, 2016 at 10:48 PM, Dima Spivak <dspi...@cloudera.com> wrote:
>
> > Nope, you'd be in uncharted territory there, my friend, and definitely
> not
> > in a place that would be production-ready. Sor
for HBase 2.0,
> this is the reason why I want to try out MOB now in HBase 1.2.2 in my test
> environment, any steps and guide to do the backport?
>
>
> On Sun, Aug 21, 2016 at 12:44 PM, Dima Spivak <dspi...@cloudera.com> wrote:
>
> > Hi Asc
Hi Ascot,
MOB won't be backported into any pre-2.0 HBase branch. HBASE-15370 tracked
the effort and an email thread on the dev list ("[DISCUSS] Criteria for
including MOB feature backport in branch-1" started by Ted Yu on March 3rd
of this year) has additional rationale as to why that is.
r year) (640 TB with
> replication). 13 R+W ops/sec. Each message 100 bytes or 1024 bytes.
> Is it possible to handle such load with hbase?
>
> Sincerely,
> Alexandr
>
> On Sat, Aug 20, 2016 at 8:44 AM, Dima Spivak <dspi...@cloudera.com> wrote
5 billion
> messages with 1024 bytes and 120 billion messages with 100 bytes per month.
>
> I thought that they used only hbase to handle such a huge data If they used
> their own implementation of hbase then I haven't questions.
>
> Sincerely,
> Alexandr
>
> On Sat, Aug 20
> > > >> hbase to store huge amount of messages (client to client) (40%
> writes,
> > > 60%
> > > >> reads).
> > >
> > > Any particular reason for federated cluster? How huge is huge amount
> and
> > > what is the message si
As far as I know, HBase doesn't support spreading tables across namespaces;
you'd have to point it at one namenode at a time. I've heard of people
trying to run multiple HBase instances in order to get access to all their
HDFS data, but it doesn't tend to be much fun.
-Dima
On Fri, Aug 19, 2016
Row locks on the client side were deprecated in 0.94 (see HBASE-7341) and
removed in 0.96 (see HBASE-7315). As you note, they could lead to deadlocks
and also had problems when region moves or splits occurred.
Is there a specific reason you're looking for this functionality, Manjeet?
-Dima
On
Hey Yang,
Looks like HDFS is having trouble with a block. Have you tried running
hadoop fsck?
-Dima
On Thursday, August 11, 2016, Ming Yang wrote:
> The cluster enabled shortCircuitLocalReads.
>
> dfs.client.read.shortcircuit
> true
>
>
> When enabled
Hey Manjeet,
Let me move dev@ to bcc and add user@ as the main recipient. You're most
likely to get good advice from HBase users who might well have faced this
question themselves. :)
Cheers,
Dima
On Saturday, August 6, 2016, Manjeet Singh
wrote:
> Hi All
>
> I
nstack.org/show/550398/
> > /usr/hbase/logs/hbase-hadoop-master-hadoopActiveMaster.log -
> > http://paste.openstack.org/show/550402/
> >
> > Also I have put in "/usr/hbase/conf/hbase-env.sh":
> > export HBASE_MANAGES_ZK=false
> >
> > Best rega
Hey Alexandr,
What does your hbase-site and hdfs-site look like? Wanna upload them to
Gist or something similar and then paste a link?
-Dima
On Friday, August 5, 2016, Alexandr Porunov
wrote:
> Hello Dima,
>
> Thank you for advice. But the problem haven't
Hey Alexandr,
In that case, you'd use what you have set in your hdfs-site.xml for
the dfs.nameservices property (followed by the HBase directory under HDFS).
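For example, with a nameservice named `mycluster` (the name is a placeholder; use whatever your dfs.nameservices is actually set to), hbase-site.xml would point at the logical name rather than a single NameNode host:port:

```xml
<!-- hbase-site.xml sketch: hbase.rootdir uses the HDFS nameservice.
     "mycluster" is a placeholder for your dfs.nameservices value. -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://mycluster/hbase</value>
</property>
```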
-Dima
On Thu, Aug 4, 2016 at 12:54 PM, Alexandr Porunov <
alexandr.poru...@gmail.com> wrote:
> Hello,
>
> I don't understand one
Hey Yang,
Looks like HDFS is having trouble with a block. Have you tried running
hadoop fsck?
-Dima
On Tuesday, August 2, 2016, Ming Yang wrote:
> When enabled replication,we found a large number of error logs.Is the
> cluster configuration incorrect?
>
> 2016-08-03
Hm, not sure what to say. The error seems to be pointing at not having a
TGT...
-Dima
On Tue, Aug 2, 2016 at 12:45 AM, Aneela Saleem <ane...@platalytics.com>
wrote:
> Yes, I have kinit'd as the service user. But still getting error
>
> On Tue, Aug 2, 2016 at 3:05 AM, Di
ached is the hbase-site.xml file, please have a look. What's wrong there?
>
> On Thu, Jul 28, 2016 at 11:58 PM, Dima Spivak <dspi...@cloudera.com>
> wrote:
>
>> I haven't looked in detail at your hbase-site.xml, but if you're running
>> Apache HBase (and not a CD
Hey Ankit,
Moving the dev list to bcc and adding the user mailing list as the
recipient. Maybe a fellow user can offer some suggestions.
All the best,
Dima
On Thursday, July 28, 2016, ankit beohar wrote:
> Hi Hbase,
>
> My use case is :- I am getting files and I
com>
wrote:
> Hi Dima,
>
> I'm running Hbase version 1.2.2
>
> On Thu, Jul 28, 2016 at 8:35 PM, Dima Spivak <dspi...@cloudera.com> wrote:
>
> > Hi Aneela,
> >
> > What version of HBase are you running?
> >
> > -Dima
> >
> > On
Hi Aneela,
What version of HBase are you running?
-Dima
On Thursday, July 28, 2016, Aneela Saleem wrote:
> Hi,
>
> I have successfully configured Zookeeper with Kerberos authentication. Now
> i'm facing issue while configuring HBase with Kerberos authentication. I
>
Hey Kanagha,
What kind of scale are you looking at for running your application? How big
do you envision the cluster needing to be to handle your use case?
-Dima
On Mon, Jul 25, 2016 at 1:09 PM, Dave Birdsall
wrote:
> Hi,
>
> For ecommerce, you'll likely want solid
Hello,
No, the tests in test folders under most modules are unit tests that spin
up miniclusters and/or use internal hooks to test small sections of code in
isolation. For end-to-end tests that can be run on clusters, check out the
tests in the hbase-it module. Details on running those can be
Hi Anil,
Have you tried the steps documented in the HBase ref guide [1] yet? If so,
can you describe your environment a bit more? How many machines are you
trying to run HBase across? Do you have an HDFS cluster set up already?
Cheers,
Dima
[1] https://hbase.apache.org/book.html
On Thursday,
TimestampsFilter*. My bad. :)
-Dima
On Friday, July 15, 2016, Dima Spivak <dspi...@cloudera.com> wrote:
> If you have a Thrift server running, you can use HappyBase (
> https://happybase.readthedocs.io/en/stable/) to get a pretty nifty Python
> API. That along wi
If you have a Thrift server running, you can use HappyBase (
https://happybase.readthedocs.io/en/stable/) to get a pretty nifty Python
API. That along with using a scan with the TimestampFilter should get you
what you want.
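A sketch of what building that filter could look like (assumptions: a Thrift gateway on localhost:9090 and a table named 'mytable'; `build_timestamps_filter` is a hypothetical helper, not part of HappyBase, that emits an HBase filter-language string):

```python
# Sketch: scan rows at specific timestamps through HappyBase's Thrift API.
def build_timestamps_filter(timestamps):
    """Build an HBase filter-language string for TimestampsFilter."""
    return "TimestampsFilter (%s)" % ", ".join(str(t) for t in timestamps)

print(build_timestamps_filter([1468454400000, 1468540800000]))
# TimestampsFilter (1468454400000, 1468540800000)

# Against a live cluster it might be used like this (not run here):
# import happybase
# conn = happybase.Connection('localhost', port=9090)
# table = conn.table('mytable')
# for key, data in table.scan(filter=build_timestamps_filter([1468454400000])):
#     print(key, data)
```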
-Dima
On Thursday, July 14, 2016, Mahesh Sankaran
)
> at
>
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
> at
> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1449)
> at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1040)
>
>
>
>
Hey Raja,
We'll need more details about your setup (HBase version, size/topology of
cluster, server specs, etc.) and the applications you're running before we
can even start giving ideas of things to try. Wanna pass those along?
-Dima
On Monday, July 11, 2016, Raja.Aravapalli
Hi Mahesha,
1.) HBase stores all values as byte arrays, so there's no typing to speak
of. ImportTsv is simply ingesting what it sees, quotes included (or not).
2.) ImportTsv doesn't support escaping, if I'm reading the code correctly. (
ing thing is that if I load it via
> spark-shell --jars, it seems to work. However, if I load it via
> spark.driver.extraClassPath in the config file, it seems to fail.
> What is the difference between --jars (command line) and
> spark.driver.extraClassPath (config)?
>
> O
Hey Robert,
HBaseConfiguration is part of the hbase-common module of the HBase project.
Are you using Maven to provide dependencies or just running java -cp?
-Dima
On Monday, July 4, 2016, Robert James wrote:
> When trying to load HBase via Spark, I get
Hey M.,
Just to follow up on what JMS said, this was fixed in April 2014 (details
at https://issues.apache.org/jira/browse/HBASE-10118), so running a version
of HBase in which the patch went in is probably your best option.
-Dima
On Sunday, June 26, 2016, Jean-Marc Spaggiari
Please email user-unsubscr...@hbase.apache.org, Fateme.
Cheers,
Dima
On Thursday, June 23, 2016, fateme Abiri
wrote:
>
> unsubscribe me please from this mailing list
> Thanks & Regards
>
>
>
> Fateme Abiri
> Software Engineer
> M.Sc. Degree
> Ferdowsi
You weren't setting the classpath. In Bash, you can't put a $ in front of
the variable name when you're assigning a value to it.
-Dima
On Wednesday, June 22, 2016, Mahesha999 wrote:
> hey thanks. That worked. Seems that my lack of experience with Linux
> causing
> trouble.
someone more seasoned than me can explain the justification…
Things like replication_scope can be seen by using 'describe' from the
HBase shell. Other metadata (e.g. modified time) is not kept for HBase
tables as far as I'm aware.
-Dima
On Wed, Jun 8, 2016 at 12:44 AM, Dima Spivak <dspi...@cloudera.
Hi Kumar,
Which version of HBase do you run? Recent releases have moved to ACLs for
table permissions in place of an "owner" construct.
-Dima
On Wednesday, June 8, 2016, kumar r wrote:
> Hi,
>
> Is there any command to get complete description about hbase table such as
>
FWIW, some engineers at Cloudera who worked on adding encryption at rest to
HDFS wrote a blog post on this where they describe negligible performance
impacts on write and only a slight performance degradation on large reads (
Hey Pranavan,
You’ll likely have more luck on the user@hbase.apache.org mailing list.
Cheers,
Dima
On Wed, Jun 1, 2016 at 9:39 AM, Pranavan Theivendiram <
pranavan...@cse.mrt.ac.lk> wrote:
> Hi Devs,
>
> I am Pranavan from Sri Lanka. I am doing a GSoC project for apache
> pheonix. Please
Probably better off asking on the Hadoop user mailing list (
u...@hadoop.apache.org) than the HBase one… :)
-Dima
On Mon, Apr 18, 2016 at 2:57 AM, Henning Blohm
wrote:
> Hi,
>
> in our Hadoop 2.6.0 cluster, we need to pass some properties to all Hadoop
> processes so
/configuration
Any help on this is appreciated.
Thank you!
On Sat, Jul 18, 2015 at 12:20 AM, Dima Spivak dspi...@cloudera.com
wrote:
+user@, dev@ to bcc
Pubudu,
I think you'll get more help on an issue like this on the users list.
-Dima
-- Forwarded message --
From: Ted Yu yuzhih...@gmail.com
Date: Fri, Jul 17, 2015 at 5:40 AM
Subject: Re: Hbase Fully distribution mode - Cannot resolve regionserver
hostname