I have JDK 1.5 configured. By the way, in this link I can't find the tricks
you mentioned.
On Thu, 2011-03-24 at 09:51 -0700, Stack wrote:
> Building hadoop forrest is a pain at the best of times. Requires java
> 1.5. See the Hadoop build instructions referenced from hbase doc:
> http://wiki.apache.org/ha
Thanks for the reply.
On Thu, Mar 24, 2011 at 10:00 PM, Jean-Daniel Cryans wrote:
> It's basic REST with php. Google both together and you will see pages
> that show how it's done like:
>
> http://www.sematopia.com/2006/10/how-to-making-a-php-rest-client-to-call-rest-resources/
>
> And that's j
There is no native support for secondary indices in HBase (currently).
You will have to manage it yourself.
St.Ack
On Thu, Mar 24, 2011 at 10:47 PM, sreejith P. K. wrote:
> I have tried secondary indexing. It seems I miss some points. Could you
> please explain how it is possible using secondary
I have tried secondary indexing. It seems I am missing some points. Could you
please explain how it is possible using secondary indexing?
I have tried a layout like:
row1   Columnfamily1:kwd1
       Columnfamily1:kwd2
       Columnfamily1:kwd3
       Columnfamily1:kwd2
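For reference, the usual manual pattern (a sketch; the table, family, and qualifier names below are made up to match the layout above, not anything the poster wrote) is to maintain a second table as an inverted index: every time a keyword is written to the main table, a mirror row keyed by the keyword is written to the index table, with the main row key as the qualifier:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class KeywordIndexer {
  // Write the keyword under the main row, then mirror it into an index
  // table keyed by the keyword, so lookups by keyword become a single get.
  public static void index(Configuration conf, String rowKey, String keyword)
      throws IOException {
    HTable main = new HTable(conf, "main_table");
    HTable index = new HTable(conf, "keyword_index");
    try {
      Put p = new Put(Bytes.toBytes(rowKey));
      p.add(Bytes.toBytes("Columnfamily1"), Bytes.toBytes(keyword),
          Bytes.toBytes(""));
      main.put(p);
      Put idx = new Put(Bytes.toBytes(keyword));
      idx.add(Bytes.toBytes("refs"), Bytes.toBytes(rowKey), Bytes.toBytes(""));
      index.put(idx);
    } finally {
      main.close();
      index.close();
    }
  }
}

Looking up all rows for kwd2 is then a single get on "keyword_index"; the application is responsible for keeping the two tables consistent.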
Thanks Stack. Just in case, I'll run the following to keep things nice and
clean:
HBaseAdmin admin = new HBaseAdmin(config);
admin.disableTable("old_indexed_table");
HTableDescriptor tableDesc =
    admin.getTableDescriptor(Bytes.toBytes("old_indexed_table"));
tableDesc.remove(Bytes.toBytes("INDEXES"));
// push the modified descriptor back and re-enable
admin.modifyTable(Bytes.toBytes("old_indexed_table"), tableDesc);
admin.enableTable("old_indexed_table");
Dear Buddies,
I need to re-calculate the entries in an HBase table every day, like let x = 0.9x
every day, so that time has an impact on the entry values.
So I wrote a TableMapper to get the entry, recalculate the result, and
use Context.write(key, put) to put the update operation in the context, and the
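A minimal sketch of what such a mapper could look like (the family "cf" and qualifier "score" are made-up names; the job would be wired up with TableMapReduceUtil.initTableMapperJob and TableOutputFormat):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;

public class DecayMapper extends TableMapper<ImmutableBytesWritable, Put> {
  @Override
  protected void map(ImmutableBytesWritable row, Result columns,
      Context context) throws IOException, InterruptedException {
    byte[] raw = columns.getValue(Bytes.toBytes("cf"), Bytes.toBytes("score"));
    if (raw == null) {
      return; // nothing to decay in this row
    }
    double decayed = Bytes.toDouble(raw) * 0.9; // x = 0.9x
    Put put = new Put(row.get());
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("score"),
        Bytes.toBytes(decayed));
    context.write(row, put); // TableOutputFormat applies this as an HBase put
  }
}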
Something is just wrong. You should be able to do 17,000 records a second
from a few nodes with multiple threads against a fairly small cluster. You
should be able to come close to that from a single node into a dozen region
servers.
On Thu, Mar 24, 2011 at 5:32 PM, Vivek Krishna wrote:
> I have a total
I have a total of 10 client nodes with 3-10 threads running on each node.
Record size ~1K
Viv
On Thu, Mar 24, 2011 at 8:28 PM, Ted Dunning wrote:
> Are you putting this data from a single host? Is your sender
> multi-threaded?
>
> I note that (20 GB / 20 minutes < 20 MB / s) so you aren't p
Are you putting this data from a single host? Is your sender
multi-threaded?
I note that 20 GB in 20 minutes is only about 17 MB/s (20,480 MB / 1,200 s),
below 20 MB/s, so you aren't particularly stressing the network. You would
likely be stressing a single-threaded client pretty severely.
What is your record size? It may be that you are b
Data Size - 20 GB. It took about an hour with default hbase settings and
after varying several parameters, we were able to get this done in ~20
minutes. This is slow and we are trying to improve.
We wrote a java client which would essentially `put` to hbase tables in
batches. Our fine-tuning par
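For what it's worth, the usual shape of such a batched client (a sketch under assumptions, not the poster's actual code; the table name, column layout, record count, and buffer size are all made up) disables auto-flush and sends puts in batches:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchLoader {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "test_table");
    table.setAutoFlush(false);                  // buffer puts client-side
    table.setWriteBufferSize(12 * 1024 * 1024); // flush roughly every 12 MB

    List<Put> batch = new ArrayList<Put>(1000);
    for (int i = 0; i < 1000000; i++) {
      Put put = new Put(Bytes.toBytes(String.format("row-%09d", i)));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), new byte[1024]); // ~1K record
      batch.add(put);
      if (batch.size() == 1000) {
        table.put(batch); // hand a batch to the client-side write buffer
        batch.clear();
      }
    }
    table.put(batch);
    table.flushCommits(); // push whatever is left in the write buffer
    table.close();
  }
}

With ~1K records, a 12 MB write buffer flushes roughly every 12,000 rows; running several of these loaders in parallel threads is what spreads the load across region servers.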
Ok so this is the same old DNS issue...
This is the important message in the log:
Master passed us address to use. Was=hadoop1-s02:60020,
Now=hadoop1-s02.farm-ny.not-a-spammer.com:60020
This means that when the RS tries to resolve itself it gets its
hostname, but when the master resolves the RS
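A quick way to see the two names a box answers to (plain JDK calls, nothing HBase-specific; just a diagnostic sketch):

import java.net.InetAddress;

public class DnsCheck {
  public static void main(String[] args) throws Exception {
    InetAddress local = InetAddress.getLocalHost();
    // If these two disagree (short name vs. FQDN), the RS and the
    // master are resolving the same machine to different names.
    System.out.println("hostname:  " + local.getHostName());
    System.out.println("canonical: " + local.getCanonicalHostName());
  }
}

If the first prints the short name while the second prints the FQDN, the region server and the master can end up registering the same machine under two different names, which is exactly the Was=/Now= mismatch in the log above.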
It is the full region name - something like this:
TestTable,262335,1300510101703.372a66d40705a4f2338b0219767602d3.
If you go to the master web UI and click on the table name, you will
see all the regions for that table. It is the string under the Table
Regions / Name column.
The other thing is
Now it doesn't like the email because it was in HTML format... As I
said, not a very smart piece of software.
On Fri, Mar 25, 2011 at 00:07, Eran Kutner wrote:
>
> You make it sound like it's a bad thing :)
> But seriously, SpamAssassin is really not the brightest anti spam software on
> the pla
George:
Gary's digging would explain why things would work on an untarnished
hbase (thanks Gary). HTD and HCD both have buckets that you can dump
any schema key/value into. That's what ITHBase used, it looks like.
St.Ack
On Thu, Mar 24, 2011 at 1:52 PM, George P. Stathis wrote:
> CLASSPATH is
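Those buckets are just arbitrary byte[] key/value maps on the descriptors. A quick sketch of how such metadata rides along with the schema (the key and value names here are made up):

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorBuckets {
  public static void main(String[] args) {
    HTableDescriptor htd = new HTableDescriptor("my_table");
    HColumnDescriptor hcd = new HColumnDescriptor("cf");
    // Arbitrary metadata travels with the schema; ITHBase stored its
    // serialized index definitions in the table descriptor this way.
    htd.setValue(Bytes.toBytes("MY_APP_KEY"), Bytes.toBytes("some-value"));
    hcd.setValue(Bytes.toBytes("MY_CF_KEY"), Bytes.toBytes("other-value"));
    htd.addFamily(hcd);
    System.out.println(Bytes.toString(htd.getValue(Bytes.toBytes("MY_APP_KEY"))));
  }
}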
Thanks Gary. Good to know.
On Thu, Mar 24, 2011 at 4:46 PM, Gary Helmling wrote:
> Hi George,
>
> Looking at the IndexedTableDescriptor code on github
>
>
> https://github.com/hbase-trx/hbase-transactional-tableindexed/blob/master/src/main/java/org/apache/hadoop/hbase/client/tableindexed/Indexed
CLASSPATH is pristine:
George-Stathiss-MacBook-Pro logs gstathis$ hbase classpath
/opt/servers/hbase-current/conf
/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home/lib/tools.jar
/opt/servers/hbase-current
/opt/servers/hbase-current/hbase-0.90.1-CDH3B4-tests.jar
/opt/servers/hbas
Ah yeah that's the issue, the mixup of FQDNs and hostnames. I wonder
how it got into that state... but that explains why the environment
looked so weird! Let me have a quick look at the code to figure out why
it's different and hopefully I can get you a patch just in time for
0.90.2
J-D
> One thing s
Hi George,
Looking at the IndexedTableDescriptor code on github
(https://github.com/hbase-trx/hbase-transactional-tableindexed/blob/master/src/main/java/org/apache/hadoop/hbase/client/tableindexed/IndexedTableDescriptor.java)
it seems to just store the serialized index definition to the
HTableDe
I'm not sure. If it came up -- 'seems to work' -- then it looks like
we just ignore the extra stuff (though, that seems a little odd... I'd
expect the deserializing of these 'exotics' to throw an exception).
Test more I'd say. The shell you are using below is for sure from an
untarnished hbase --
Ah, it seems to work, yes. I was thinking all along that it didn't because I
had set up a simple unit test that kept throwing this:
java.io.IOException: Unknown protocol to name node:
org.apache.hadoop.hbase.ipc.IndexedRegionInterface
at
org.apache.hadoop.hbase.regionserver.HRegionServer.getProtoc
> So does this mean that if it's unable to replicate after some number of
> sleeps,
> as the ones you've listed above, it gives up trying to replicate?
No, it continues to sleep for 10 seconds forever (until it can replicate).
> OK. So sequentially restarting each RS on the master cluster shoul
Hi,
> > 1. How long does it take for edits to be propagated to a slave cluster?
> >
> > As far as I understand from HBase Replication page
> > (http://hbase.apache.org/replication.html) there's a separate buffer held by
> > each region server which accumulates data (edits which should be
>
Do you have disk space to spare? I'd think that all that is different
about indexed hbase is the WAL format. If you had an hbase.rootdir
that was the product of a clean shutdown with no WALs to process, I'd
think you could just run 0.90.x on top of it. If you had the disk space
you could give it a g
Hey folks,
What would be the best approach for migrating away from a given region
server implementation back to the default out-of-the box one? My goal here
is to upgrade our cluster to 0.90 and migrate away from IndexedRegionServer
back to the default HRegionServer.
The only options that I know
Inline.
Also if you think any of my answers should be part of the
documentation, feel free to open a jira with a patch :)
J-D
On Thu, Mar 24, 2011 at 11:05 AM, Otis Gospodnetic
wrote:
> Hello,
>
> We are looking into HBase replication to separate our client-facing HBase
> cluster and the one
Hello,
We are looking into HBase replication to separate our client-facing HBase
cluster and the one we need to run analytics against (likely heavy MR jobs +
potentially big scans).
1. How long does it take for edits to be propagated to a slave cluster?
As far as I understand from HBase Repl
Oh, if this is the only lib that is different, and your xml parsing is
seemingly working, I'd not worry too much about it.
St.Ack
On Thu, Mar 24, 2011 at 10:01 AM, Stack wrote:
> Hey Zhou:
>
> It's odd that they are not the same. Do you think it's possible that we
> downloaded from different reposi
Hey Zhou:
It's odd that they are not the same. Do you think it's possible that we
downloaded from different repositories? Can you check the jar's
signature in the repository you've downloaded from against other
locations?
Here is the md5 from my machine for that jar:
9534ce6506dc96bac3944423d804be30
Building hadoop forrest is a pain at the best of times. Requires java
1.5. See the Hadoop build instructions referenced from hbase doc:
http://wiki.apache.org/hadoop/HowToRelease It talks explicitly about
the tricks for getting the forrest build to work.
Hopefully this helps,
St.Ack
(I'll fix the doc.
What you are asking for is a secondary index, and it doesn't exist at
the moment in HBase (let alone REST). Googling a bit for "hbase
secondary indexing" will show you how people usually do it.
J-D
On Thu, Mar 24, 2011 at 6:18 AM, sreejith P. K. wrote:
> Is it possible using stargate interface t
It's basic REST with php. Google both together and you will see pages
that show how it's done like:
http://www.sematopia.com/2006/10/how-to-making-a-php-rest-client-to-call-rest-resources/
And that's just an example. When you understand how the basics work,
applying it to HBase should be piece of c
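The same recipe works from any language with an HTTP client. A hedged Java sketch against the HBase REST gateway (the host, port, and table/row/column names are assumptions for illustration):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestGet {
  public static void main(String[] args) throws Exception {
    // GET a single cell: /<table>/<row>/<column> on the REST gateway
    URL url = new URL("http://localhost:8080/my_table/row1/cf1:kwd1");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()));
    String line;
    while ((line = in.readLine()) != null) {
      System.out.println(line); // cell value comes back base64-encoded in JSON
    }
    in.close();
  }
}

A PHP client does the same thing with curl or file_get_contents against the same URL.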
I have a question about expected times for the importtsv program.
My cluster has four nodes. All four machines are datanodes / regionservers /
tasktrackers, with one node also acting as namenode, jobtracker, and
hmaster. I'm running on Red Hat 5.5 with 64 GB RAM and 8 CPUs at 2.8 GHz. I'm using the
hadoop 0.2
Thanks, that error is gone, but another error comes up:
---
BUILD FAILED
/home/alex/Software/branch-0.20-append/build.xml:927: The following error
occurred while executing this line:
/home/alex/Software/bran
On Thu, Mar 24, 2011 at 7:14 PM, Alex Luya wrote:
> I am not sure whether they are same.
The doc link is a viewable link (links to a webservice that has nifty
version control browsing features). Your deduced link is the right one
for svn checkouts instead.
> clean docs package-libhdfs
> Target "
Assume you know 0.91 needs hadoop branch-0.20-append to run, and its doc
gives a link:
http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/
to check out hadoop, but when I tried to check out the source code from this
link, I got this error:
-
Thank you, St. Ack. We'll take the jstack next time.
On 03/23/2011 04:17 PM, Stack wrote:
Please take a jstack next time. What version of JVM are you using?
If older, < u21, and there's no apparent reason for the RegionServer lockup,
then it might be a JVM issue. Try adding -XX:+UseMembar to your
HBA
Hi,
I compiled hbase_0.90.1 with Maven 3.0.3 successfully, but found that the
jaxb-api-2.1.jar in "hbase_0.90.1 (downloaded from http://hbase.apache.org)\lib"
and in "hbase_0.90.1 (built by myself)\lib" are not the same. Is that all right?
Has anyone else encountered the same problem?
Thanks
Zh
Can anybody illustrate a simple PHP application which reads from and writes
into HBase? How can I make use of this package in my PHP application?
On Thu, Mar 24, 2011 at 10:46 AM, Stack wrote:
> Yes.
>
> On Wed, Mar 23, 2011 at 10:08 PM, Robert J Berger
> wrote:
> > Does this replace stargate for 0