Any reason why 0.99-SNAPSHOT is installed and not deployed by jenkins [1]
? We don't have snapshots available atm in the maven repo [2].
Maybe to gain time on each trunk build? What about an additional deploy
daily build?
[1]
/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/Ma
ts of the master not being up will be added to the book.
re: #2. "RS monitoring"
Good question, I'm not familiar with the specifics on that. (and more
info will be added to the book too.)
On 2/12/12 4:00 AM, "Eric Charles" wrote:
Hi Doug,
I was indeed thinking
ster -
> there's an asterisk.
>
>
>
>
> On 2/11/12 11:31 AM, "Eric Charles" wrote:
>
>> Funky, because HBase is often presented as a 3-layer server model
>> zookeeper/master/region (root/meta in the regions bringing still more
>> fog).
>
I remember having read valid arguments against that model).
client<->zk<->(master)<->region(root|master)
or
client<->zk<->region (+admin)
the latter looks simpler, doesn't it?
Thx,
Eric
On 11/02/12 16:45, Stack wrote:
2012/2/11 Eric Charles:
Hypothetical case (probably asked a number of times here, sorry...):
Can a client correctly put, get, scan (no admin tasks such as create
table,...) with an HBase cluster having its HMaster process down?
(if really the case, 'master' could be renamed to 'admin' to make it
clear that it is 'option
hbase and hadoop (yarn-based) trunks are trying to live together (see
https://issues.apache.org/jira/browse/HBASE-5361)
The 'yarn' is a new beast and hbase needs to learn to ride it (starting
with the unit tests).
Eric
On 08/02/12 16:53, Stack wrote:
On Wed, Feb 8, 2012 at 1:51 AM, raghaven
from miniHbaseCluster as well
as with the newly created one).
Thx a lot,
Eric
On 09/10/11 18:01, Eugene Koontz wrote:
On 10/8/11 5:31 AM, Eric Charles wrote:
Sorry, fallback situation is
https://svn.apache.org/repos/asf/james/mailbox/trunk/spring/src/main/resources/META-INF/org/apache/james/s
Hi,
We use HBaseTestingUtility to create a MiniHBaseCluster to test the Apache
James mailbox project. One of the deployment options is Spring; in that
case, the wiring/injection is done via an XML file.
We began instantiating the MiniHBaseCluster in the test class before
loading the spring context f
Sorry, fallback situation is
https://svn.apache.org/repos/asf/james/mailbox/trunk/spring/src/main/resources/META-INF/org/apache/james/spring-mailbox-hbase.xml
The link [1] in previous mail is what we want to achieve but we get the
ZooKeeperConnectionException.
Eric
On 08/10/11 14:26, Eric
ClassLoader.java:247)
Could not find the main class:
org.apache.hadoop.hdfs.server.namenode.NameNode. Program will exit.
vamshi@vamshi-laptop:/usr/local/hadoop-append$
please help, this setup is taking most of my time..!!
On Mon, Sep 12, 2011 at 10:25 PM, Eric Charles
wrote:
(answ
Hi,
Depending on the HBase version you use, you may or may not have to return
the table to the pool (see
https://issues.apache.org/jira/browse/HBASE-4054). Only with the latest
version are you no longer obliged to return it.
Maybe not relevant, but to load bulk data, there's also
http://hbase.apache.o
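The return-to-pool contract discussed above can be sketched generically. This is a toy model: `SimpleTablePool` and its method names are invented for illustration and are not the real HTablePool API.

```python
import queue

class SimpleTablePool:
    """Toy stand-in for an HTablePool-style pool (illustrative names only)."""
    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def get_table(self):
        return self._pool.get()

    def put_table(self, table):
        # Older clients had to return the table explicitly;
        # forgetting this leaks a pool slot.
        self._pool.put(table)

pool = SimpleTablePool(lambda: object(), size=2)
t = pool.get_table()
try:
    pass  # use the table here
finally:
    pool.put_table(t)  # always return it in a finally block
print(pool._pool.qsize())  # -> 2, the slot came back
```

The try/finally is the point: with versions that require an explicit return, skipping it eventually exhausts the pool.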
On 12/09/11 17:05, Jean-Daniel Cryans wrote:
Also you referred to "George Lars", I don't understand those crazy
Europeans that give their child two first names either (joking) but
I'm 99% sure it's the other way around :)
... and what to say about those Europeans who inherit a first name a
On 13/09/11 05:26, Doug Meil wrote:
Hi there-
Regarding EC2, see this in the Hbase book...
http://hbase.apache.org/book.html#trouble.ec2
btw, there's also the Whirr project (http://whirr.apache.org/) that
allows you to deploy HBase on Amazon without trouble.
I can submit a patch if it makes
Yep, commons-configuration-1.6.jar is shipped in recent hadoop-0.20.203
and not in previous hadoop-0.20.2.
Vamshi, I bet on hadoop throwing a remote exception, with hbase being
unable to read it because it does not have the commons-configuration jar
(that's what can happen when remote throws exc
Hi,
It may be common sense to say that datamodels depend on your application
and search patterns...
I just want to take the opportunity to point out that even for a very
standard application such as email, there is no unique datamodel:
- The facebook messaging platform where they use a column p
Hi JD,
I tried without success to hack our test architecture to evaluate the
effect of those configs. I stopped to avoid diverging into a kind of
'hbase performance testing'.
Definitely good to note that hbase.client.pause and
hbase.master.event.waiting.time can be tuned for tests speed.
Thx
On 16/08/11 05:33, Stack wrote:
On Mon, Aug 15, 2011 at 5:12 PM, Garrett Wu wrote:
I have a bunch of integration tests that spend a lot of time creating and
deleting tables in a mini hbase cluster started using HBaseTestingUtility.
Disabling and deleting a table seems to take a second or two
Hi,
I opened https://issues.apache.org/jira/browse/HBASE-4205 for this.
Eric
On 16/08/11 16:28, Eric Charles wrote:
Rethinking about it, it would be better to have a clear indication and
simply state that the class is not thread-safe (the client of this class
should know what it means and take
's an edge-case where it
works, otherwise it's a very bad idea).
I'll take care of it.
Thanks!
On 8/16/11 8:42 AM, "Eric Charles" wrote:
If HTable is not thread-safe for R/W, should the javadoc be updated to
something like:
"This class is not thread safe for any ope
If HTable is not thread-safe for R/W, should the javadoc be updated to
something like:
"This class is not thread safe for any operation (Read or Write). If the
same HTable instance is used among multiple threads, you need to
carefully synchronize access to the single resource in your code,
othe
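One common way around a non-thread-safe table class is to give each thread its own instance via thread-local storage instead of synchronizing one shared object. A language-neutral sketch (`FakeTable` is a made-up stand-in, not the real HTable API):

```python
import threading

class FakeTable:
    """Illustrative non-thread-safe client (NOT the real HTable API)."""
    def __init__(self, name):
        self.name = name
        self.buffer = []  # unsynchronized internal state: unsafe to share

    def put(self, row, value):
        self.buffer.append((row, value))

_local = threading.local()

def get_table(name="t1"):
    # One instance per thread: no shared mutable state, no locking needed.
    if not hasattr(_local, "table"):
        _local.table = FakeTable(name)
    return _local.table

tables = []
lock = threading.Lock()

def worker(i):
    t = get_table()
    t.put("row%d" % i, b"v")
    with lock:
        tables.append(t)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(len({id(t) for t in tables}))  # 4: each thread got its own instance
```

The same idea applies to the Java client: construct one table instance per thread (or per request) rather than sharing one across threads.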
there a way to copy all the information one row contains to another
row without taking all data through the client?
Thanks,
--
Ioan Eugen Stan
http://ieugen.blogspot.com/
--
Eric Charles
http://about.echarles.net
's definitely more work to be done in terms of testing and tweaking
the code.
Hi Mike,
Thx for the answer and have fun with the prototype :)
Eric
Sent from a remote device. Please excuse any typos...
Mike Segel
On Jul 16, 2011, at 3:38 AM, Eric Charles wrote:
On 15/07/11 16:48, Micha
e the
system is in use ... Anybody tried different outage scenarios with
HBase?
Thanks!
Thomas
Hi Andrew,
Thx for your replies.
I may give this indirection layer a try one day if someone doesn't pick
it up before me :)
On 01/07/11 18:34, Andrew Purtell wrote:
One reasonable way to handle native storage of large objects in HBase would
be to introduce a layer of indirection.
Do you se
On 01/07/11 10:23, Andrew Purtell wrote:
From: Stack
3. The size of them varies like this:
70% of them have length < 1 MB
29% of them have length between 1 MB and 10 MB
1% of them have length > 10 MB (they can also reach 100 MB)
What Da
Hi,
Although kundera-examples declares Lucene as a dependency [1], I didn't
find usage of Lucene indexing functions in Kundera.
I was more under the impression that Kundera is a JPA ORM on top of
key-value stores such as HBase.
Maybe I'm missing a clue.
Tks, Eric
[1]
https://github.com/impetus-opensource/Kunde
I guess organizations will rely on CDH with full doc, support...
Individuals will prefer the Apache releases, and especially
snapshots/trunk, to get the latest functionality and insight into the
internals.
In this specific append/security case, it may be a bit more tricky
The CDH release soun
btw, I'm running maven 3.0.3, which may impact dependency resolution
compared to 2.x
mvn -v
Apache Maven 3.0.3 (r1075438; 2011-02-28 18:31:09+0100)
On 15/06/11 17:30, Eric Charles wrote:
Hi Lars,
Trying the hbase-book project on github, Ioan and I were missing the
famous hadoop-core
On 15/06/11 17:42, Patrick Angeles wrote:
I think both modes are useful.
Certainly for unit tests, embedded HBase is a great option.
We can probably do something to make it easy to switch back-and-forth.
Yes, there are good facets to both options (Embedded and Separate Cluster).
We can imag
ke me to Pull Request via github, or maybe you
have another solution?
Tks,
- Eric
[1]
https://repository.apache.org/content/groups/snapshots/org/apache/hbase/hbase/0.90.1-SNAPSHOT/hbase-0.90.1-20110215.213202-4.pom
On 14/06/11 10:18, Eric Charles wrote:
Yep, tks for the pointer.
- Eric
On
Yep, tks for the pointer.
- Eric
On 14/06/11 10:16, Lars George wrote:
Hi Eric,
Agreed. I put the run scripts into each chapter's "bin" directory, for
example:
https://github.com/larsgeorge/hbase-book/blob/master/ch03/bin/run.sh
Does that help?
Lars
On Tue, Jun 14, 2011 at
Hi Lars,
A base unit test class which sets up the embedded HBase cluster with
HBaseTestingUtility, and the chapter tests gathered in a JUnit Suite, is
an option (you can tell maven to only run the *IntegrationTest
class for example; the user would invoke "mvn test" for all tests or "mvn
test
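For the Maven side of that idea, a sketch of restricting which test classes run, using the standard Surefire plugin; the include pattern here is the only assumption, adjust it to the project's naming convention:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <!-- only classes matching *IntegrationTest run under "mvn test" -->
      <include>**/*IntegrationTest.java</include>
    </includes>
  </configuration>
</plugin>
```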
ted HRegion openHRegion(final Progressable reporter) throws
> IOException {
>
>
> @Override
> protected long replayRecoveredEditsIfAny(final Path regiondir, final
> long minSeqId, final Progressable reporter)
>
>
>
> Fleming Chiu(邱宏明)
> Ext: 707-2260
Hi,
This is the place for the source
https://github.com/hbase-trx/hbase-transactional-tableindexed
You will have to build it (mvn package) and validate it is still
compatible with 0.90.1-cdh3u0.
About the installation, they say "Drop the jar in the classpath of your
application". I guess addi
This is now maintained for 0.90.RC3 on
https://github.com/hbase-trx/hbase-transactional-tableindexed/
I guess it is compatible with 0.90.3 (I plan to test it one of these days).
Tks,
Eric
On 13/06/11 03:25, Something Something wrote:
What's the best way of implementing transaction management i
Hi Lars,
I've given your hbase-book link on github [1] to Ioan (GSoC2011, see
previous mail I just sent) to help him dig into the HBase API.
I've also checked-out your repo to learn more, the basic stuff, and the
less basic such as coprocessors...
Great work! Thanks a bunch :) - Tks also to
Hi HBasers,
Just to inform you we are working at Apache James (Mail Server) on
porting our mailbox component [1] to HBase. We already support JPA, JCR
and Maildir implementations, and hopefully soon HBase.
The "we" is Ioan Eugen Stan as student, and Robert Burrell
Donkin and I as c
Hi,
From http://s.apache.org/x4, there are 178 open issues (40%).
Btw, it would be cool if the Atlassian guys provided a graph
showing the evolution of that number over time. Are we more on the rising
or the descending side?
Tks,
- Eric
On 09/06/11 02:58, Otis Gospodnetic wrote:
I wouldn't
Oops, sorry, I confused MapR (the company) with Map Reduce (MR, the
technology).
Time for me to update my knowledge on Hadoop ecosystem.
Tks,
- Eric
On 09/06/11 09:49, Ted Dunning wrote:
On Thu, Jun 9, 2011 at 9:12 AM, Eric Charles wrote:
Good news!
I suppose there's a risk of "incoherent" b
Good news!
I suppose there's a risk of "incoherent" backup.
I mean, with classical SQL databases, online backups ensure that the
dataset can be restored to a state where all open transactions are
committed. Even if the backup takes hours, the initially backed-up data is
finally updated to
Seems like your previous hbase is not really stopped (see the "Address
already in use" message).
Retry stop-hbase.sh or "ps -ef | grep hbase" and "kill -9 pid" the
remaining processes.
Tks,
- Eric
On 28/04/2011 16:48, Kerry wang wrote:
I cannot start my hbase master after shutting it down.
H
ex Baranau
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - Hadoop - HBase
On Thu, Mar 17, 2011 at 11:52 AM, Eric Charles
wrote:
Hi Lars,
Many tks for your reply.
For now, I just rely on random or hashed keys and don't need any range
queries.
I will have to choose a nice solution one
Hi Hari,
I'm just beginning with hbase and can't give any feedback on potential
impact.
I had bookmarked http://markmail.org/message/bx2nsg7m4dser6yx post where
the conclusion was not crystal clear to me.
It seems a recurring and complicated topic on ml, but I hope hbase will
soon rely on
On 5/04/2011 10:34, 陈加俊 wrote:
another question: which version is used in hbase 0.90.2?
I just downloaded hbase 0.90.2 from
http://people.apache.org/~stack/hbase-0.90.2-candidate-0/.
It ships with hadoop-core-0.20-append-r1056497.jar, exactly the same as
in hbase 0.90.1.
For the record
Hi Ted,
Tks for pointing that HBASE-3488 is about CellCounter.
This will bring better visibility on the stored cells.
Vishal's initial question was about having numerous versions for the same
rowid/key.
I know datamodel design depends on the use case, but from a technical
point of view (read/write perfor
https://issues.apache.org/jira/browse/HBASE-3729
Get cells via shell with a time range predicate
Tks,
- Eric
On 4/04/2011 16:09, Ted Yu wrote:
Please file a JIRA.
On Mon, Apr 4, 2011 at 2:50 AM, Eric Charles wrote:
Hi,
The shell allows you to specify a timestamp to get a value
- get 't1', 'r1', {
Hi,
The shell allows you to specify a timestamp to get a value
- get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
If you don't give the exact timestamp, you get nothing...
I didn't find a way to get a list of values (different versions) via a
command such as
- get 't1', 'r1', {COLUMN => 'c1', TIMER
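The semantics being asked for above (exact-timestamp match-or-nothing, newest-first multi-version reads, and a time-range read) can be sketched as a toy model; `VersionedCell` is invented for illustration and is not HBase code:

```python
class VersionedCell:
    """Toy model of an HBase cell holding multiple timestamped versions."""
    def __init__(self):
        self.versions = {}  # timestamp -> value

    def put(self, value, ts):
        self.versions[ts] = value

    def get(self, ts=None, max_versions=1):
        # Exact timestamp: match-or-nothing, like `get ... TIMESTAMP => ts`.
        if ts is not None:
            return [self.versions[ts]] if ts in self.versions else []
        # Otherwise newest-first, up to max_versions.
        return [v for _, v in sorted(self.versions.items(), reverse=True)][:max_versions]

    def get_range(self, min_ts, max_ts):
        # Time-range read over [min_ts, max_ts), newest first.
        return [v for t, v in sorted(self.versions.items(), reverse=True)
                if min_ts <= t < max_ts]

c = VersionedCell()
c.put("v1", 100); c.put("v2", 200); c.put("v3", 300)
print(c.get(ts=150))          # [] : no exact match, nothing returned
print(c.get(max_versions=2))  # ['v3', 'v2']
print(c.get_range(100, 300))  # ['v2', 'v1'] (300 excluded, newest first)
```

This mirrors why an inexact TIMESTAMP returns nothing while a range predicate (what HBASE-3729 asks for in the shell) returns every version that falls inside the window.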
; in there somewhere...
But yes, not too big, not too small. There probably isn't a reasonable
setting here; I'm guessing 1 isn't quite right either.
It's one aspect of your data modelling, so people should probably be
setting this value on larger tables.
-ryan
On Sun, Apr 3, 2011 at 11:49 PM,
Hi,
Is there a particular reason to have chosen 3 for
HTableDescriptor.DEFAULT_VERSIONS?
("not too low, not too big"? - I didn't find discussions about this).
Tks,
- Eric
1.- On my side, I could imagine using the versions to store the history
of a key (without the need to add an extra index table). It really depends
on requirements and the datamodel, I think, but many versions can
sometimes make sense.
2.- HBASE-3488 is related to the hadoop rowcounter job. To get version
would for example see an interleaved
distribution of row keys to regions. Region 1 holds 1, 8, 15,... while
region 2 holds 2, 9, 16,... and so on. I do not think performance is a
big issue. And yes, this is currently all client side driven :(
Lars
On Wed, Mar 16, 2011 at 2:57 PM, Eric Charles
wrote
...and probably the additional hashing doesn't help the performance.
Eric
On 16/03/2011 19:17, Eric Charles wrote:
A new laptop is definitely on my investment plan :)
Tks,
Eric
On 16/03/2011 18:56, Harsh J wrote:
On Wed, Mar 16, 2011 at 8:36 PM, Eric Charles
wrote:
Cool.
Everythi
A new laptop is definitely on my investment plan :)
Tks,
Eric
On 16/03/2011 18:56, Harsh J wrote:
On Wed, Mar 16, 2011 at 8:36 PM, Eric Charles
wrote:
Cool.
Everything is already available.
Great!
1 row(s) in 0.0840 seconds
1 row(s) in 0.0420 seconds
Interesting, how your test's get
String.new('row1')
COLUMN CELL
f:a timestamp=1300170063837, value=val4
1 row(s) in 0.0420 seconds
On Wed, Mar 16, 2011 at 4:26 PM, Eric Charles
wrote:
Hi,
I understand from your answer that it's possible but n
rom scanner 4
5. row "4001" -> kv from scanner 5
6. row "5002" -> kv from scanner 6
7. row "6000" -> kv from scanner 7
Notice how you always only have a list with N elements on the client
side, each representing the next value the scanners offer. Since the
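The per-scanner merge described above, where the client only ever holds one head element per scanner, is a classic heap merge. A sketch with plain sorted lists standing in for region scanners (the helper name is illustrative, not the client's actual code):

```python
import heapq

def merge_scanners(scanners):
    """Merge sorted per-region scanners, holding only one head per scanner."""
    heads = []
    iters = [iter(s) for s in scanners]
    # Seed the heap with the first row from each non-empty scanner.
    for i, it in enumerate(iters):
        first = next(it, None)
        if first is not None:
            heapq.heappush(heads, (first, i))
    while heads:
        row, i = heapq.heappop(heads)  # smallest current head wins
        yield row
        # Refill from the scanner we just consumed, keeping N heads at most.
        nxt = next(iters[i], None)
        if nxt is not None:
            heapq.heappush(heads, (nxt, i))

# Region scanners each return rows in sorted order:
s1, s2, s3 = ["1000", "4001"], ["2000", "5002"], ["3000", "6000"]
print(list(merge_scanners([s1, s2, s3])))
# ['1000', '2000', '3000', '4001', '5002', '6000']
```

Memory on the client stays proportional to the number of scanners, not the number of rows, which is the point being made in the thread.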
roach in code. Though since this is JRuby
you could write that code in Ruby and add it to you local shell giving
you what you need.
Lars
On Wed, Mar 16, 2011 at 9:01 AM, Eric Charles
wrote:
Oops, forget my first question about range query (if keys are hashed, they
can not be queried based on a
k the hash function should work in the shell if it
returns a string type (like what '' defines in-place).
On Wed, Mar 16, 2011 at 2:22 PM, Eric Charles
wrote:
Hi,
To help avoid hotspots, I'm planning to use hashed keys in some tables.
1. I wonder if this strategy is advised for
Oops, forget my first question about range query (if keys are hashed,
they can not be queried based on a range...)
Still curious to have info on the hash function in the shell (2.) and
advice on md5/jenkins/sha1 (3.)
Tks,
Eric
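A minimal sketch of one such hashing scheme. The choices here are assumptions for illustration, not an HBase convention: MD5 (one of the functions asked about), a 4-character hex prefix, and a '-' separator.

```python
import hashlib

def salted_key(row_key: str, prefix_len: int = 4) -> str:
    """Prefix the key with part of its MD5 digest to spread writes
    across regions (at the cost of meaningful range scans)."""
    digest = hashlib.md5(row_key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}-{row_key}"

# Sequential keys now land at unrelated points in the key space,
# so a range scan over the original ordering is no longer possible.
for k in ["user0001", "user0002", "user0003"]:
    print(salted_key(k))
```

Keeping the original key as a readable suffix means a point get only needs the client to re-hash the key; only from/to range queries are lost, which matches the trade-off raised in the question.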
On 16/03/2011 09:52, Eric Charles wrote:
Hi,
To help avoid hotspots
Hi,
To help avoid hotspots, I'm planning to use hashed keys in some tables.
1. I wonder if this strategy is advised for range query (from/to key)
use cases, because the rows will be randomly distributed in different
regions. Will it cause some performance loss?
2. Is it possible to query fro