Hello
Maybe you should look at the Export tool's source code, as it can export
HBase data to a remote HDFS (by setting a full hdfs:// URL in the
outputdir command line option):
https://github.com/apache/hbase/blob/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Export.java
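For example, a minimal invocation might look like this (the table name,
NameNode host, and output path are placeholders, and the remote NameNode
must be reachable from the source cluster):

  hbase org.apache.hadoop.hbase.mapreduce.Export \
      MyTable hdfs://remote-namenode:8020/backups/MyTable

The Export job runs on the source cluster but writes its output files
directly into the remote cluster's HDFS.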
Cheers,
Thanks Ted for the response. But the issue is that I want to read from one
cluster and write to another. If I have two clients, then how will they
communicate with each other? Essentially, what I am trying to do here is
inter-cluster data copy/exchange. Any other ideas or suggestions? Even if
bot
Hi, but I don't understand what you mean. Did I miss a step
in the tutorial?
On Fri, Apr 26, 2013 at 4:26 PM, Leonid Fedotov wrote:
> Looks like your zookeeper configuration is incorrect in HBase.
>
> Check it out.
>
> Thank you!
>
> Sincerely,
> Leonid Fedotov
> Technical Support Engineer
>
>
Hi,
I presume you have read the Percolator paper. The design there uses a
single timestamp (TS) oracle, and BigTable itself as the transaction
manager. In Omid, they also have a TS oracle, but I do not know how
scalable it is. But using ZK as the TS oracle would not work, since ZK can
only scale up to 40-50K requests per second.
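To make the TS oracle concrete, here is a minimal, hypothetical sketch of
one: a single process handing out a strictly increasing counter, as in
Percolator/Omid (the class and method names are made up for illustration):

  import java.util.concurrent.atomic.AtomicLong;

  public class TimestampOracle {
      // Strictly increasing counter; a production oracle would also
      // persist a high-water mark so timestamps survive restarts.
      private final AtomicLong lastTs = new AtomicLong(0);

      public long next() {
          return lastTs.incrementAndGet();
      }
  }

The scalability question above is exactly how many next() calls per second
a single such process (or ZK, if it played this role) could serve.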
Hi,
Interesting use case. I think it depends on how many jobIds you expect to
have. If it is on the order of thousands, I would caution against going the
one-table-per-jobId approach, since for every table there is some master
overhead, as well as file structures in HDFS. If jobIds are manageable
Looks like the easiest solution is to use separate clients, one for each
cluster you want to connect to.
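A minimal sketch of that approach, assuming the 0.94-era client API
(cluster names and ZooKeeper quorums below are placeholders):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;

  public class TwoClusters {
      public static void main(String[] args) throws Exception {
          // One Configuration per cluster, so the settings of one can
          // never overwrite the other.
          Configuration confA = HBaseConfiguration.create();
          confA.set("hbase.zookeeper.quorum", "zk1.clusterA,zk2.clusterA");

          Configuration confB = HBaseConfiguration.create();
          confB.set("hbase.zookeeper.quorum", "zk1.clusterB,zk2.clusterB");

          HTable source = new HTable(confA, "mytable"); // read from A
          HTable sink = new HTable(confB, "mytable");   // write to B
          // ... scan `source` and put the Results into `sink` ...
          source.close();
          sink.close();
      }
  }

The two clients never need to talk to each other; your application reads
from one connection and writes to the other.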
Cheers
On Sat, Apr 27, 2013 at 6:51 AM, Shahab Yunus wrote:
> Hello,
>
> This is a follow-up to my previous post a few days back. I am trying to
> connect to 2 different Hadoop clusters' setups
Hello,
This is a follow-up to my previous post a few days back. I am trying to
connect to 2 different Hadoop clusters' setups through the same client, but
I am running into the issue that the config of one overwrites the other.
The scenario is that I want to read data from an HBase table in one
cluster and write it to another.
My understanding of your use case is that data for different jobIds would
be continuously loaded into the underlying table(s).
Looks like you can have one table per job. This way you can drop the table
after the map reduce job is complete. In the single-table approach, you
would instead have to delete many rows in the table.
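A hypothetical sketch of that cleanup step, assuming the 0.94-era admin
API (the "job_" + jobId naming scheme is made up):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HBaseAdmin;

  public class DropJobTable {
      public static void main(String[] args) throws Exception {
          String jobId = args[0];            // hypothetical jobId argument
          Configuration conf = HBaseConfiguration.create();
          HBaseAdmin admin = new HBaseAdmin(conf);
          String tableName = "job_" + jobId; // hypothetical naming scheme
          admin.disableTable(tableName);     // must disable before delete
          admin.deleteTable(tableName);
          admin.close();
      }
  }

Dropping a table is a cheap metadata operation, whereas deleting rows
writes a tombstone per cell that is only cleaned up at the next major
compaction.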
Hey Sean,
could you provide us with the full stack trace of the FileNotFoundException,
and also the output of:

  hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -files -stats \
      -snapshot SNAPSHOT_NAME

to give us a better idea of the state of the snapshot.
Thanks!
On Fri, Ap
Hi Jon,
I've actually discovered another issue with snapshot export. If you have a
region that has recently split and you take a snapshot of that table and try to
export it while the children still have references to the files in the split
parent, the files will not be transferred and will be
Looks like your zookeeper configuration is incorrect in HBase.
Check it out.
Thank you!
Sincerely,
Leonid Fedotov
Technical Support Engineer
On Apr 26, 2013, at 9:59 AM, Yves S. Garret wrote:
> Hi, thanks for your reply.
>
> I did [ hostname ] in my linux OS and this is what I have for a
> hostname [ ysg.connect ].
Hi
I am new to HBase. I have been trying to build a POC application and have a
design question.
Currently we have a single table with the following key design:
jobId_batchId_bundleId_uniquefileId
This is an offline processing system so data would be bulk loaded into
HBase via map/reduce jobs. We onl
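Since the jobId is the leading component of the row key above, one job's
rows can be read back with a prefix scan; a minimal sketch, assuming the
0.94-era client API (the method and variable names are placeholders):

  import java.io.IOException;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.PrefixFilter;
  import org.apache.hadoop.hbase.util.Bytes;

  public class JobScan {
      public static void scanJob(HTable table, String jobId) throws IOException {
          // All rows of one job share the "jobId_" prefix, so the scan
          // starts at the prefix and stops matching once past it.
          byte[] prefix = Bytes.toBytes(jobId + "_");
          Scan scan = new Scan(prefix);
          scan.setFilter(new PrefixFilter(prefix));
          ResultScanner scanner = table.getScanner(scan);
          try {
              for (Result r : scanner) {
                  // process one row belonging to this job
              }
          } finally {
              scanner.close();
          }
      }
  }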
Hi, thanks for your reply.
I did [ hostname ] in my linux OS and this is what I have for a
hostname [ ysg.connect ].
This is what my hosts file looks like:

  127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
  127.0.0.1    localhost
  192.168.1.6  ysg.connect
  ::1          local
Ok thanks for the clarification.
I tried that (removing ruby but not yet re-installing it) and I got the
same error message.
On Thu, Apr 25, 2013 at 5:02 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi Robin,
>
> No, the idea is to run yum remove, and then test the HBase shell.
>
Hi Yves,
You need to add an entry with your host name and your local IP.
As an example, here is mine:
127.0.0.1 localhost
192.168.23.2  buldo
My host name is buldo.
JM
2013/4/25 Yves S. Garret :
> Hi Jean, this is my /etc/hosts.
>
> 127.0.0.1 localhost localhost.localdomain localhost4