Hi Capwell,
Stack said HBASE-1861 is one "legitimate" approach to support
multi-family bulkload. Multi-family support is not integrated into
current released versions.
On Wed, Aug 3, 2011 at 9:46 AM, David Capwell wrote:
> Looking at trunk it says that HFileOutputFormat doesn't support multi-families
Hi Seigal,
The importtsv tool is not applicable to your case. For advanced usage of
bulkload, please dig into ImportTsv.java and check the JavaDoc for
HFileOutputFormat. And
https://issues.apache.org/jira/browse/HBASE-1861 is helpful if
multi-family support is required.
On Wed, Aug 3, 2011 at 8:13 AM,
Looking at trunk it says that HFileOutputFormat doesn't support multi-families
http://svn.apache.org/viewvc/hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java?revision=1152947&view=markup#l65
but the bug https://issues.apache.org/jira/browse/HBASE-1861 is marked fi
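The gist of the multi-family limitation, and of what HBASE-1861 changes, is that the output format must route each incoming cell to a writer for its own column family instead of assuming a single family per job. This is not the real HFileOutputFormat code — just a toy Java illustration of that per-family partitioning idea, with made-up family and cell names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Miniature of the multi-family idea in HBASE-1861: route each incoming
// cell to a per-family "writer" (here, just a per-family list) instead
// of assuming one column family per job. Names are made up.
public class PerFamilyWriters {
    public static void main(String[] args) {
        String[][] cells = {
            {"f1", "row1/q1"}, {"f2", "row1/q2"}, {"f1", "row2/q1"}
        };
        Map<String, List<String>> writers = new TreeMap<>();
        for (String[] cell : cells) {
            // open (or reuse) the writer for this cell's family
            writers.computeIfAbsent(cell[0], k -> new ArrayList<>()).add(cell[1]);
        }
        // one output "HFile" (list) per family
        System.out.println("families: " + writers.keySet());
        System.out.println("f1 cells: " + writers.get("f1").size());
    }
}
```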
I just noticed that all keyvalues written from a single map instance for
importtsv have the same version timestamp. This, I think, will not produce
multiple versions if the same row keys are located in the same mapper chunk.
Why not use a new version timestamp for every put? Is there a specific
reason?
Hi All,
I am using the importtsv tool to load some data into an hbase cluster. Some
of the row keys + cf:qualifier might occur more than once with a different
value in the files I have generated. I would expect this to just create two
versions of the record with the different values. However, I am
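The behavior both posters describe follows from how HBase keys versions: a cell's versions are distinguished by timestamp, so two puts that share row, column, and timestamp collapse into a single version. A toy Java model (not HBase API — just a map keyed by timestamp, with made-up values) shows the collision:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Toy model of one HBase cell: versions are keyed by timestamp, so two
// puts sharing row, column, AND timestamp collapse into one version
// (the later write overwrites the earlier one). Values are made up.
public class VersionCollision {
    public static void main(String[] args) {
        // versions for a single (row, cf:qualifier) cell, newest first
        NavigableMap<Long, String> cell =
            new TreeMap<Long, String>().descendingMap();

        long mapperTs = 1312268202071L;    // importtsv: one ts per map task
        cell.put(mapperTs, "value-A");
        cell.put(mapperTs, "value-B");     // same ts -> overwrite, no new version
        System.out.println("versions with shared ts: " + cell.size());

        cell.put(mapperTs + 1, "value-C"); // distinct ts -> second version kept
        System.out.println("versions with distinct ts: " + cell.size());
        System.out.println("newest value: " + cell.firstEntry().getValue());
    }
}
```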
Given the hearty reviews and timing, we recently shifted from 90.3 (apache)
to 90.4rc2 (the July 24th one that Stack posted -- 0.90.4, r1150278).
We had a network switch go down last night which caused an apparent network
partition between two of our region servers and one or more zk nodes.
(We're
If you mean Karthick Sankarachary, he does not hang around the mailing list
much. Let me go and get him.
On Tue, Aug 2, 2011 at 3:06 PM, Ted Yu wrote:
> I created https://issues.apache.org/jira/browse/HBASE-4150
>
> On Tue, Aug 2, 2011 at 3:02 PM, lars hofhansl wrote:
>
> > Yep, the RR pool has a
I created https://issues.apache.org/jira/browse/HBASE-4150
On Tue, Aug 2, 2011 at 3:02 PM, lars hofhansl wrote:
> Yep, the RR pool has a similar issue. Maybe the maximum number of
> connections should be related to the number of cores on the client.
> Something like 2 x #cores?
>
> For the thread
Yep, the RR pool has a similar issue. Maybe the maximum number of connections
should be related to the number of cores on the client.
Something like 2 x #cores?
For the ThreadLocal case that I mentioned below, it would be hard to find a
useful hard limit, since it is hard to foresee the number of
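The 2 x #cores heuristic floated above is easy to sketch. This is just the sizing rule in isolation, not actual HBase pool code; the class name is made up:

```java
// Sketch of the sizing rule from the thread: cap the client-side
// connection pool at roughly 2 x the number of cores, rather than
// letting a ThreadLocal-per-thread scheme grow without bound.
public class PoolSizing {
    static int maxConnections() {
        int cores = Runtime.getRuntime().availableProcessors();
        return 2 * cores;
    }

    public static void main(String[] args) {
        int cap = maxConnections();
        // cap is always even and at least 2 (there is at least one core)
        System.out.println("cap is valid: " + (cap >= 2 && cap % 2 == 0));
    }
}
```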
xyz:
Can you provide a patch based on the discussion on HBASE-4155 ?
Thanks
On Tue, Aug 2, 2011 at 11:57 AM, Jean-Daniel Cryans wrote:
> I commented in the jira you ended up opening:
> https://issues.apache.org/jira/browse/HBASE-4155
>
> J-D
>
> On Tue, Aug 2, 2011 at 5:59 AM, xyz wrote:
> > hi
>
When it's done :)
On Tue, Aug 2, 2011 at 1:16 PM, Ioan Eugen Stan wrote:
> 2011/8/2 Jean-Daniel Cryans :
>> Not in the current release, but you could do that with 0.92 when it
>> gets released.
>>
>> J-D
>
> Thanks J-D,
>
> This means I will have to copy things manually until the next version
> of HBase. Do you know when that is going to be?
2011/8/2 Jean-Daniel Cryans :
> Not in the current release, but you could do that with 0.92 when it
> gets released.
>
> J-D
Thanks J-D,
This means I will have to copy things manually until the next version
of HBase. Do you know when that is going to be?
--
Ioan Eugen Stan
http://ieugen.blogspot.com/
I commented in the jira you ended up opening:
https://issues.apache.org/jira/browse/HBASE-4155
J-D
On Tue, Aug 2, 2011 at 5:59 AM, xyz wrote:
> hi
> all
>
> I want to scan rows by a specified timestamp. I use the following hbase
> shell command:
>
> scan 'testcrawl',{TIMESTAMP=>1312268202071}
> ROW
Good question Eric, AFAIK that would be custom.
J-D
On Tue, Aug 2, 2011 at 11:13 AM, Eric Charles
wrote:
> Hi J-D,
>
> Are you referring to a custom coprocessor, or is it a built-in new copy API?
>
> Thx.
>
> On 02/08/11 19:01, Jean-Daniel Cryans wrote:
>>
>> Not in the current release, but you
Hi J-D,
Are you referring to a custom coprocessor, or is it a built-in new copy API?
Thx.
On 02/08/11 19:01, Jean-Daniel Cryans wrote:
Not in the current release, but you could do that with 0.92 when it
gets released.
J-D
On Tue, Aug 2, 2011 at 2:12 AM, Ioan Eugen Stan wrote:
Hello,
Is there a way to copy all the information one row contains to another
row without taking all data through the client?
Could it be that the version that's being started during boot doesn't
use the same hbase-env.sh file?
J-D
On Mon, Aug 1, 2011 at 1:54 PM, Bill Wacek wrote:
> Hi All!
>
> I am hoping this is a simple problem, as I can't find documentation on this
> anywhere.
>
> I have HBase 0.90.3 installed with
Like it says, the backup master is waiting for the primary one to
create the znode; in other words, it considers the primary not to be
running yet. If your primary master is indeed running correctly, it
could mean that they aren't using the same ZooKeeper ensemble or root
znode. Check that first.
J-D
On Tue, Aug 2,
Not in the current release, but you could do that with 0.92 when it
gets released.
J-D
On Tue, Aug 2, 2011 at 2:12 AM, Ioan Eugen Stan wrote:
> Hello,
>
> Is there a way to copy all the information one row contains to another
> row without taking all data through the client?
>
> Thanks,
>
> --
>
Mmmm I really need to learn Scala...
I would love to see a Java version of it, as past discussions on this
list show that people would use it.
Thanks,
J-D
On Tue, Aug 2, 2011 at 7:45 AM, Leif Wickland wrote:
> I ended up writing an HFileInputFormat in Scala.
> https://gist.github.com/1120311
I ended up writing an HFileInputFormat in Scala.
https://gist.github.com/1120311 Feel free to take a look and tell me if I
did something obviously wrong. Would you be interested in having a Java
analog to add to the project?
Hi All,
I have a problem with the backup master in my fully distributed HBase
setup. HBase in fully distributed mode starts and works properly. The
active master is working, and my backup master is waiting for the active
master's termination. If I kill the active master proce
Hi all,
I want to scan rows by a specified timestamp. I use the following hbase
shell command:
scan 'testcrawl',{TIMESTAMP=>1312268202071}
ROW COLUMN+CELL
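Worth noting about the shell command above: TIMESTAMP matches only cells whose timestamp equals the given value exactly, so an off-by-one millisecond returns nothing (a range can be expressed with the shell's TIMERANGE option instead). The exact-match semantics can be modeled in plain Java — this is a simulation with made-up rows, not HBase client code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Models the exact-match semantics of scan 'table', {TIMESTAMP => ts}:
// only cells whose timestamp equals ts are returned. Rows/columns are
// made up for illustration.
public class TimestampScan {
    public static void main(String[] args) {
        Map<String, Long> cellTimestamps = new LinkedHashMap<>();
        cellTimestamps.put("row1/f:q", 1312268202071L);
        cellTimestamps.put("row2/f:q", 1312268202072L);  // 1 ms later

        long wanted = 1312268202071L;
        int matches = 0;
        for (Map.Entry<String, Long> e : cellTimestamps.entrySet()) {
            if (e.getValue() == wanted) {   // exact match only
                System.out.println("matched: " + e.getKey());
                matches++;
            }
        }
        System.out.println("matches: " + matches);
    }
}
```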
Hello,
Is there a way to copy all the information one row contains to another
row without taking all data through the client?
Thanks,
--
Ioan Eugen Stan
http://ieugen.blogspot.com/
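As J-D's replies in this thread indicate, before 0.92 the only route is the client-side workaround: read the whole row (a Get) and write every cell back under the new key (a Put). A toy model of that round trip with plain maps — not HBase API; table, row, and column names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the pre-0.92 workaround: copy a row by pulling every
// cell through the client (the "Get") and re-writing it under the new
// row key (the "Put"). All names here are made up.
public class RowCopy {
    public static void main(String[] args) {
        // table as row -> (column -> value)
        Map<String, Map<String, String>> table = new HashMap<>();
        Map<String, String> src = new HashMap<>();
        src.put("cf:name", "alice");
        src.put("cf:city", "oslo");
        table.put("row-src", src);

        // "Get" the source row, then "Put" each cell under the new key
        Map<String, String> fetched = table.get("row-src");
        table.computeIfAbsent("row-dst", k -> new HashMap<>())
             .putAll(fetched);

        System.out.println("copied cells: " + table.get("row-dst").size());
        System.out.println("city: " + table.get("row-dst").get("cf:city"));
    }
}
```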
Ted Yu writes:
>
> Remove hadoop-*.jar from $HBASE_HOME/lib.
> Copy hadoop-0.20.2 jar to hbase lib directory.
>
> This needs to be done on every node in the cluster.
Thanks for your reply. I'm testing it on a single-node cluster. I did that and
received this error:
2011-08-02 13:30:41,211 WARN org