on’t I get back only
the single, most-recently-inserted value of the cell as I do when I use
MUST_PASS_ALL? Note that if I don't use a FilterList at all and instead just
set the get's filter to the paginationFilter, I get the result I would expect
(i.e. the single "OFFSET = 0:family,name,Jane,10"). Logically, I want to select
(cf1 OR (cf2 && prefix(cf2) == "MYPREFIX")). Is this possible with a Filter?
--
Warmest Regards,
Jason Tokayer, PhD
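For the OR-of-AND selection asked about above, FilterLists can be nested, since a FilterList is itself a Filter: an outer MUST_PASS_ONE list holding a filter for cf1, plus an inner MUST_PASS_ALL list holding a cf2 filter and the prefix filter. A snippet of HBase client code would not be runnable in isolation, so the sketch below is plain Java (no HBase dependency) modeling the intended predicate; the family names and "MYPREFIX" are taken from the question.

```java
// Plain-Java sketch of the intended filter logic:
// keep a cell if (family == "cf1") OR (family == "cf2" AND qualifier starts with "MYPREFIX").
public class FilterSketch {
    static boolean keep(String family, String qualifier) {
        boolean cf1Branch = family.equals("cf1");              // outer MUST_PASS_ONE, branch 1
        boolean cf2Branch = family.equals("cf2")
                && qualifier.startsWith("MYPREFIX");           // inner MUST_PASS_ALL, branch 2
        return cf1Branch || cf2Branch;                         // outer MUST_PASS_ONE
    }

    public static void main(String[] args) {
        System.out.println(keep("cf1", "anything"));   // true: cf1 always passes
        System.out.println(keep("cf2", "MYPREFIX_x")); // true: cf2 with the prefix
        System.out.println(keep("cf2", "other"));      // false: cf2 without the prefix
    }
}
```

In actual client code this would look something like `new FilterList(Operator.MUST_PASS_ONE, cf1Filter, new FilterList(Operator.MUST_PASS_ALL, cf2Filter, prefixFilter))`, with FamilyFilter and ColumnPrefixFilter instances as appropriate.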
> On Aug 30, 2016, at 10:03 AM, Rich Bowen wrote:
>
> It's traditional. We wait for the last minute to get our talk proposals
> in for conferences.
>
> Well, the last minute has arrived. The CFP for ApacheCon Seville closes
> on September 9th, which is less than 2 weeks away. It's
't see how I can use that
approach.
Thanks,
Jason
From: ramkrishna vasudevan
Sent: Thursday, May 5, 2016 4:03:48 AM
To: user@hbase.apache.org
Subject: Re: Hbase ACL
I verified the above behaviour using a test case as the cluste
oning for table overrides on existing cells?
--
Warmest Regards,
Jason Tokayer, PhD
On 5/4/16, 12:30 PM, "ramkrishna vasudevan"
wrote:
>Superuser:
>grant 'ns1:t1', {'userX' => 'R' }, { COLUMNS => 'cf1', FILTER =>
>"(PrefixFilt
'ns1:t1', {'userX' => 'R' }, { COLUMNS => 'cf1', FILTER =>
"(PrefixFilter ('r2'))" }
userX:
put 'ns1:t1', 'r2', 'cf1:q1', 'v2',1462364682267 #WORKS, BUT SHOULD IT???
Any help/guidance
trunk? Can you verify via tests that
CHECK_CELL_DEFAULT is (a) used by default and (b) working properly? I
don't see any unit tests in the codebase for this feature.
--
Warmest Regards,
Jason Tokayer, PhD
On 5/3/16, 1:41 PM, "ramkrishna vasudevan"
wrote:
>Hi Jason
>Which vers
s it possible to set the
strategy to cell-level prioritization, preferably in hbase-site.xml? This
feature is critical to our cell-level access control.
--
Warmest Regards,
Jason Tokayer, PhD
Original message From: Sean Busbey
Date: 04/30/2015 12:57 PM (GMT-06:00)
To: dev Subject: Re: Using
1.1.0RC0 in your maven project (was [VOTE] First release candidate for HBase
1.1.0 (RC0) is available)
+dev@hbase
-user@hbase to bcc
Adding thread to dev@hbase
OD = 0? In
/conf/hbase-site.xml?
thanks!
Jason
Premal,
So this should be set at /conf/hbase-site.xml?
thanks,
Jason
On Fri, Sep 20, 2013 at 4:15 PM, Premal Shah wrote:
>
> <property>
>   <name>hbase.hregion.majorcompaction</name>
>   <value>0</value>
> </property>
>
> On Fri, Sep 20, 2013 at 1:06 PM, Jason Huang
> wrote:
>
> >
Thanks Ted and JM.
Jason
On Wed, Sep 18, 2013 at 6:46 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> But...
>
> if you can't update, then you will have to checkout the 0.94.3 version from
> SVN, apply the patch manually, build and re-deploy. Patch might be
for 0.94.3 from
somewhere and apply this patch manually, then rebuild the jars? Does this
patch have any other dependency code that's not in 0.94.3?
thanks,
Jason
thanks for all these valuable comments.
Jason
On Mon, Jul 8, 2013 at 12:25 PM, Michael Segel wrote:
> Where is murmur?
>
> In your app?
> So then every app that wants to fetch that row must now use murmur.
>
> Added to Hadoop/HBase?
> Then when you do upgrades you have
hat this
separator/delimiter shouldn't appear elsewhere in the rowkey, are there any
other requirements? Are there any special separator/delimiters that are
better/worse than the average ones?
thanks!
Jason
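The separator question above can be made concrete with a small sketch. One common convention (an illustration, not the only option; the helper names are hypothetical) is to join components with a byte that is guaranteed never to occur in the data, such as 0x00, which also sorts before every other byte under HBase's unsigned byte comparison, so prefix scans on the leading component behave sensibly.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class RowKeyUtil {
    static final byte SEP = 0x00; // sorts before every other byte (unsigned order)

    // Join components with a separator byte that must not occur in the data.
    static byte[] compose(String... parts) {
        int len = parts.length - 1; // one separator between each pair
        for (String p : parts) len += p.getBytes(StandardCharsets.UTF_8).length;
        byte[] out = new byte[len];
        int pos = 0;
        for (int i = 0; i < parts.length; i++) {
            byte[] b = parts[i].getBytes(StandardCharsets.UTF_8);
            System.arraycopy(b, 0, out, pos, b.length);
            pos += b.length;
            if (i < parts.length - 1) out[pos++] = SEP;
        }
        return out;
    }

    // Split a composed key back into its components.
    static String[] split(byte[] key) {
        return new String(key, StandardCharsets.UTF_8).split("\u0000", -1);
    }

    public static void main(String[] args) {
        byte[] key = compose("smith", "19800101", "12345");
        System.out.println(Arrays.toString(split(key))); // [smith, 19800101, 12345]
    }
}
```

The assumption is that components are printable strings, so 0x00 can never collide with real data; a visible character like '|' works too, but only if it is validated out of every component.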
makes a lot of sense.
thanks Dave,
Jason
On Thu, Jun 27, 2013 at 10:26 AM, Dave Wang wrote:
> Jason,
>
> HBase replication is for between two HBase clusters as you state.
>
> What you are seeing is merely the expected behavior within a single
> cluster. DFS replicati
n configs
(1) and (2) above?
thanks!
Jason
cool.
thanks Stack.
On Wed, Jun 26, 2013 at 10:52 AM, Stack wrote:
> On Wed, Jun 26, 2013 at 6:57 AM, Jason Huang
> wrote:
>
> > My question is - is this kind of "heartbeat" expected and useful? Our
> > normal use case involves fetching data to HBase table every
on is - is this kind of "heartbeat" expected and useful? Our
normal use case involves fetching data to HBase table every 60 seconds or
so. Could we stop that heartbeat and re-connect to zookeeper on the fly
only when we need to read from HBase?
thanks!
Jason
thanks for the good comments!
Jason
On Tue, Oct 2, 2012 at 7:36 PM, Pamecha, Abhishek wrote:
> For 1. I wouldn't worry about that problem until it really happens. Just my
> opinion. If you really want to solve it you will need to generate a unique id
> per row-key 'put'
Thanks Mohammad.
The issue with phone numbers is that they tend to change over time, and
we think name and DOB are more reliable. SSN is more unique, but the
issue is that we can't force the user to provide it. Basically we have
limited information that can be used.
thanks,
Jason
On Tue,
ches.
Does anyone ever use names as part of the row key and encounter this
type of issue?
(3) The row key seems to be long (30+ chars), will this affect our
read/write performance? Maybe it will increase the storage a bit (say
we have 3 million rows per month)? In other words, does the length of
the row key matter a lot?
thanks!
Jason
Thanks all for the responses!
Now I have a much better idea.
thanks!
Jason
On Fri, Sep 28, 2012 at 5:34 AM, Bruno Dumon wrote:
> Hi,
>
> On Thu, Sep 27, 2012 at 10:58 PM, Jason Huang wrote:
>
>> Hello,
>>
>> I am exploring HBase & Lily and I have a few start
our APIs back to them? We love sharing but some of our
APIs may be under different agreements and can't be shared.
thanks!
Jason
Andrew - thanks for the quick response!
Jason
On Thu, Sep 20, 2012 at 3:43 PM, Andrew Purtell wrote:
> The issue with the patch on HBASE-3529 is it relies on modifications
> to HDFS that the author of HBASE-3529 proposed to the HDFS project as
> https://issues.apache.org/jira/browse/
at
haven't been resolved yet? It appears that no one has actively worked
on this project for a while. Does anyone in this mail list know the
most recent status?
thanks!
Jason
I see. Thanks for bringing the utility class to my attention.
Jason
On Thu, Sep 20, 2012 at 2:32 AM, anil gupta wrote:
> Hi Jason,
>
> AFAIK, you need to write custom serialization and deserialization code for
> this kind of stuff. For any primitive data type and some others like
Just to report back (in case someone else ran into similar issues
during install) - I noticed that one of my friends uses Sun's JDK (and
I was using OpenJDK). I then replaced my JDK and started a new install
with Hadoop 1.0.3 + HBase 0.94.1. Now it works on my MacBook!
Jason
On Wed, Sep 19,
I think I should clarify my question - is there any existing tool to
help convert these type of nested entities to byte arrays so I can
easily write them to the column, or do I need to write my own
serialization/deserialization?
thanks!
Jason
On Wed, Sep 19, 2012 at 12:53 PM, Jason Huang wrote
ed objects but I am not sure how to write these
data to the table. I tried to search online but couldn't find any
good post talking about how to implement a nested entity (there are
some posts out there talking about using this type of design, though).
Thanks!
Jason
will work.
thanks again for all your time,
Jason
On Tue, Sep 18, 2012 at 1:34 PM, Jean-Daniel Cryans wrote:
> On Tue, Sep 18, 2012 at 10:21 AM, Jason Huang wrote:
>> I am using hadoop 1.0.3 - I was using dfs.datanode.data.dir last week
>> but that had already been updated (someo
Hi J-D,
I am using hadoop 1.0.3 - I was using dfs.datanode.data.dir last week
but that had already been updated (someone else pointed that out)
before I ran this test today.
thanks,
Jason
On Tue, Sep 18, 2012 at 1:05 PM, Jean-Daniel Cryans wrote:
> Which Hadoop version are you using exac
9?9? .META.,,1.META.+???$.META.,,1infov9?9?
Sorry for the lengthy email. Any help will be greatly appreciated!
Jason
On Thu, Sep 13, 2012 at 6:42 PM, Jason Huang wrote:
> Hello,
>
> I am trying to set up HBase at pseudo-distributed mode on my Macbook.
> I was able to insta
Hi Andrew,
See my comments below (I have also replied at
https://issues.apache.org/jira/browse/HBASE-6800#comment-13457508).
Thanks,
-Jason
>>>> coprocessor based applications should begin as independent code
>>>> contributions, perhaps hosted in a GitHub repository
OK
Status: HEALTHY
However, the file mentioned in the error log
hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/0212db15465842b38cc63eb9ef8b73d2
doesn't seem to exist in my fsck report (not sure if that matters).
I have no idea where to go next. Any suggestions?
thanks!
Jason
On Fri, Sep 1
something wrong with my Hadoop setup. I will do more
research and see if I can find out why.
thanks,
Jason
On Thu, Sep 13, 2012 at 7:56 PM, Marcos Ortiz wrote:
>
> Regards, Jason.
> Answers in line
>
>
> On 09/13/2012 06:42 PM, Jason Huang wrote:
>
> Hello,
>
> I am tr
Hello,
I am trying to set up HBase at pseudo-distributed mode on my Macbook.
I was able to install hadoop and HBase and started the nodes.
$ jps
5417 TaskTracker
5083 NameNode
5761 HRegionServer
5658 HMaster
6015 Jps
5613 HQuorumPeer
5171 DataNode
5327 JobTracker
5262 SecondaryNameNode
However,
I see now.
Thanks for the quick response and clear explanation.
Jason
On Wed, Sep 12, 2012 at 5:28 PM, Adrien Mogenet
wrote:
> I think you misunderstand the concept of "memstore". That's just the name of
> the temporary in-memory storage. Each region has its own memstore, an
find that data in Memstore?
thanks,
Jason
On Wed, Sep 12, 2012 at 5:19 PM, Adrien Mogenet
wrote:
> WAL is just there for recovery. Reads will meet the Memstore on their read
> path; that's how LSM trees work.
>
> On Wed, Sep 12, 2012 at 11:15 PM, Jason Huang wrote:
>
>
it going to the Region servers
and try to get the previous version of this data, or is it smart
enough to go to the MemStore or WAL to get the most recent version of
data?
thanks!
Jason
Yes! This is it!
Thanks Shrijeet!
On Tue, Sep 11, 2012 at 2:47 PM, Shrijeet Paliwal
wrote:
> Your HDFS server is listening on a different port than the one you
> configured in hbase-site (9000 != 8020).
>
>
>
> On Tue, Sep 11, 2012 at 11:44 AM, Jason Huang wrote:
>> Hel
Hello,
I am trying to set up HBase at pseudo-distributed mode on my Macbook.
I've installed Hadoop 1.0.3 in pseudo-distributed mode and was able to
successfully start the nodes:
$ bin/start-all.sh
$ jps
1002 NameNode
1246 JobTracker
1453 Jps
1181 SecondaryNameNode
1335 TaskTracker
1091 DataNode
T
Abhishek,
Setting your column family's bloom filter to ROWCOL will include qualifiers:
http://hbase.apache.org/book.html#schema.bloom
-Jason
On Wed, Aug 22, 2012 at 1:49 PM, Pamecha, Abhishek wrote:
> Can I enable bloom filters per block at column qualifier levels too? That
> way,
Lin,
Looks like your questions may already be answered, but you might find the
following link comparing "traditional" columnar databases against
HBase/BigTable interesting:
http://dbmsmusings.blogspot.com/2010/03/distinguishing-two-major-types-of_29.html
-Jason
On Sun, Aug 5, 2012
hbase-dev/201201.mbox/%3c1326684100.80142.yahoomail...@web121706.mail.ne1.yahoo.com%3E
https://issues.apache.org/jira/browse/HBASE-5241
-Jason
On Wed, Jul 18, 2012 at 10:32 AM, Zoltán Tóth-Czifra <
zoltan.tothczi...@softonic.com> wrote:
> Hi,
>
> Thanks for the quick answer! So I
Yet another approach is to transform your keys into byte comparable values
that preserve your desired sort order, and store that instead. The ICU
library has the ability to do this for various collations of UTF strings:
http://userguide.icu-project.org/collation/architecture#TOC-Sort-Keys
So for
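The JDK's own java.text.Collator does the same trick as ICU on a smaller scale (no ICU dependency assumed here): getCollationKey(...).toByteArray() yields bytes whose unsigned lexicographic order matches the collation order, so they can be stored directly as a sortable key. A minimal sketch:

```java
import java.text.Collator;
import java.util.Locale;

public class SortKeys {
    // Unsigned lexicographic comparison, the order HBase uses for row keys.
    static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // True if a sorts before b when stored as collation-key bytes.
    static boolean collatesBefore(String a, String b) {
        Collator c = Collator.getInstance(Locale.US);
        return compareBytes(c.getCollationKey(a).toByteArray(),
                            c.getCollationKey(b).toByteArray()) < 0;
    }

    public static void main(String[] args) {
        // Raw UTF-8 bytes would put "Banana" first ('B' = 0x42 < 'a' = 0x61);
        // collation keys restore the linguistic order: apple before Banana.
        System.out.println(collatesBefore("apple", "Banana")); // true
    }
}
```

The trade-off, as with ICU sort keys, is that the transform is one-way: the original string must be stored elsewhere (e.g. in a column) if it needs to be recovered.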
html
This does mean that range scan is only efficient when it stays within a
hash prefix, though.
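The hash-prefix idea can be sketched as follows (bucket count and helper names are illustrative, not from any HBase API): prepend hash(key) mod N to the key so writes spread across regions; point reads recompute the same prefix, but a range scan over the original key order turns into N per-bucket scans, which is the inefficiency noted above.

```java
public class SaltedKeys {
    static final int BUCKETS = 16; // illustrative bucket count

    // Prepend a stable hash-derived prefix so writes spread across regions.
    static String salt(String key) {
        int bucket = Math.floorMod(key.hashCode(), BUCKETS);
        return String.format("%02d-%s", bucket, key);
    }

    public static void main(String[] args) {
        // A reader must derive the same prefix to find the row again;
        // a scan over all "user*" rows needs BUCKETS separate scans,
        // one per prefix: "00-user...", "01-user...", and so on.
        System.out.println(salt("user123").endsWith("-user123")); // true
    }
}
```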
4) Still not clear why I can't have 10 column families in HBase and why
> only 2 or 3 of them according to this link (1)?
>
> (1) - http://hbase.apache.org/book/number.of.cfs.html
>
See
Laurent,
This could be implemented with Lucene, eg, HBASE-3529. Contact me
offline if you are interested in pursuing that angle.
Cheers.
On Wed, Aug 10, 2011 at 10:02 AM, Laurent Hatier
wrote:
> Hi all,
>
> I would like to know why MongoDB is faster than HBase to select items.
> I explain my c
>> Subject: Re: hbase + lucene?
>>
>> I wasn't at the day-after presentation, but I believe these are the
>> slides?
>>
>> https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B2c-F
>> WyLSJBCN2E5MTdmOGMtY2U5NS00NmEwLWE2
e indexes would remain reasonably
> sized because each index indexes only the data of a single region
Yes, this is the design of HBase Search.
Jason
On Thu, Jul 21, 2011 at 10:18 AM, Geoff Hendrey wrote:
> Thanks for the pointer. If I understand correctly, the index
> partitioning strategy would
Fri, Jul 15, 2011 at 10:58 AM, Jason Chuong <
> jason.chu...@cbsinteractive.com> wrote:
>
> > Hi Dave,
> >
> > Yes we are, and on hbase version 0.90. I've also verified that the
> > zookeeper nodes are responding via the zk shell, and the logs look normal.
> > Just
rs, root-region-server, table, shutdown]
On Fri, Jul 15, 2011 at 9:54 AM, Buttler, David wrote:
> You really don't need 3 zookeeper nodes for a 5 node cluster. 1 is
> sufficient.
> Are you managing zookeeper with hbase or independently?
>
> Dave
>
>
> ---
Hi All,
I have a 5-node cluster setup with 3 nodes as part of the zookeeper quorum.
When I restart the hbase master, the server tries to connect to an unknown
host and then crashes.
Has anyone seen this error message before, or does anyone know how to resolve this? Thanks.
2011-07-15 05:10:49,158 INFO org.apache.hadoop.i
Todd,
Can you define what a Locality Group is for HBase and how it would
function? Eg, it sounds like the same thing as a column-family, and
it's not clear how one would benefit from the usage of an LG.
Jason
On Mon, Jun 13, 2011 at 8:45 PM, Todd Lipcon wrote:
> Keep in mind that BigT
Mark Kerzner has a synopsis of a recent discussion:
http://shmsoft.blogspot.com/2011/06/search-in-ediscovery.html
I think there will be query and index performance degradation if the
index is stored in HBase as for example a term per column.
For HBASE-3529 I took the approach that Lucene is heav
> Table 2 provides some actual CF/table numbers. One of the crawl tables has
> 16 CFs and one of the Google Base tables had 29 CFs
What's Google doing in BigTable that enables so many CFs?
Is the cost in HBase the seek to each individual key in the CFs, or is
it the cost of loading each block in
be tricky on the slave side. We're assuming
that the slave RS has the same regions as the master RS.
With the Coprocessor system, each RS writes it's own HLogs, or perhaps
there's no need for the slave to write logs?
On Mon, Jun 13, 2011 at 7:21 AM, Andrew Purtell wrote:
>> Fr
HBase cluster replication, so that's why I
> gave you this link.
>
> What you are now referring to is something like
> https://issues.apache.org/jira/browse/HBASE-2357
>
> J-D
>
> On Sat, Jun 11, 2011 at 11:31 AM, Jason Rutherglen
> wrote:
>> I think that's
adoop/hbase/replication/package-summary.html#status
>
> Make sure you use 0.90.3
>
> J-D
>
> On Sat, Jun 11, 2011 at 11:06 AM, Jason Rutherglen
> wrote:
>> I see a lot of resolved issues under HBASE-1295, however I'm not sure
>> what the state of replica
I see a lot of resolved issues under HBASE-1295, however I'm not sure
what the state of replication is. Eg, can one implement live
MySQL'ish streaming master -> slave replication today?
That doesn't look like it's open source though? Isn't it a SaaS?
On Sat, Jun 4, 2011 at 11:15 AM, M. C. Srivas wrote:
> There's also DrawnToScale http://www.drawntoscale.com/.
>
> Don't know if its released or not.
>
> On Sat, Jun 4, 2011 at 11:07 AM, Steven Noels wrote:
>
>> On Sat, Jun 4, 2011
, am interested in learning more about elasticsearch with HBase
> after reading the article over at StumbleUpon (
> http://www.stumbleupon.com/devblog/searching-for-serendipity/)
>
> Intriguing that it is relatively easy to set up. Anyone else using
> elasticsearch?
>
> -Matt
Mark,
'Add search to HBase' - HBASE-3529 is in development.
On Fri, Jun 3, 2011 at 5:57 PM, Mark Kerzner wrote:
> Hi,
>
> I need to store, say, 10M-100M documents, with each document having say 100
> fields, like author, creation date, access date, etc., and then I want to
> ask questions like
>
ta. I'm
not sure how that would be implemented.
On Wed, Jun 1, 2011 at 6:54 AM, Lars George wrote:
> Hi Jason,
>
> This was discussed in the past, using the HFileInputFormat. The issue
> is that you somehow need to flush all in-memory data *and* perform a
> major compaction -
empted to open one.
On Tue, May 31, 2011 at 5:18 PM, Jason Rutherglen
wrote:
>> The Hive-HBase integration allows you to create Hive tables that are backed
>> by HBase
>
> In addition, HBase can be made to go faster for MapReduce jobs, if the
> HFile's could be used
oin). One reason to prefer a map join is
> you avoid the shuffle phase which potentially involves several trips to disk
> for the intermediate records due to spills, and also once through the
> network to get each intermediate KV pair to the right reducer. With a map
> join, everything is l
Doesn't Hive for HBase enable joins?
On Tue, May 31, 2011 at 5:06 AM, Eran Kutner wrote:
> Hi,
> I need to join two HBase tables. The obvious way is to use a M/R job for
> that. The problem is that the few references to that question I found
> recommend pulling one table to the mapper and then do
Hi all,
I have the Cloudera Hadoop CDH3 pseudo-distributed environment. I am
trying to upload bulk data into hbase-0.89 using a Map Reduce
program. I am not interested in the command line tools in hbase (importtsv &
completebulkload). I got the Sample Map Reduce program fr
It'd be great to fill in something here
http://hbase.apache.org/book.html#master about the HMaster?
for Solr, so the
> grid should be robust.
> But I'm just a doc reader so far...
>
> Le 13/05/11 19:07, Jason Rutherglen a écrit :
>>
>> I think Lily implements search using Solr distributed search (eg,
>> external to HBase), so it's possible for a server
think that it's the purpose of lily (http://www.lilyproject.org/), though
> you have to use lily's API instead of HBase client API. I mean it's not
> intended to be plugged to existing HBase data. Am I wrong ?
>
> Le 13/05/11 18:35, highpointe a écrit :
>>
I probably need to delete the patch that's up there, and I'm not sure
it needs to be updated regularly given the other dependencies.
On Fri, May 13, 2011 at 9:35 AM, highpointe wrote:
> Jason,
>
> Where could one find that patch. Also your implementation of Solr to Hba
. Is this what you're looking for?
Jason
On Fri, May 13, 2011 at 8:52 AM, Sterk, Paul (Contractor)
wrote:
> Hi,
>
>
>
> I am looking for a web based query tool that is able to query Hbase
> tables and columns. Can someone point me to any? I want to be able to
> crea
will be updates in the memory store that
aren't
> yet written to HFiles. You'll miss these.
>
> On Fri, May 6, 2011 at 12:27 PM, Jason Rutherglen <
> jason.rutherg...@gmail.com> wrote:
>
>> Is there an issue open or any particular reason that an MR job needs to
typical
MR jobs that execute against files in HDFS.
Jason
> Sorry if this is a naive question but can you explain why you consider
> that ElasticSearch isn't a distributed solution for realtime search?
I wasn't referring just to ES, mainly to Katta and Solr. Taking a
step back, RT in Lucene should enable immediate consistency making it
symmetrical with
I'm not sure... I copied code from another unit test, maybe
something's changed? Here's the pastebin of the log:
http://pastebin.com/TvaG7q1X
I'll check if something changed.
On Mon, Apr 18, 2011 at 9:38 PM, Stack wrote:
> On Mon, Apr 18, 2011 at 9:26 PM, Jason Rut
It sometimes closes, most of the time not, I can commit the code and
post a link?
On Mon, Apr 18, 2011 at 8:37 PM, Stack wrote:
> So, this region won't close in your test Jason?
> St.Ack
>
> On Mon, Apr 18, 2011 at 8:26 PM, Ted Yu wrote:
>> Should be this:
>>
Well, I don't mind the logging, however the message is repeating and
part of the time the test isn't returning (because something's
looping).
On Mon, Apr 18, 2011 at 7:58 PM, Stack wrote:
> Trunk is logging each rpc. Needs to be quietened
>
>
>
> On Apr 18, 20
This log message seems to repeat. I wonder if this is common?
2011-04-18 19:40:07,420 DEBUG
[RegionServer:0;j-laptop,53922,1303180713318]
regionserver.HRegionServer(583): Waiting on
7e3bf5836e8a07608039a5a357579213
2011-04-18 19:40:07,423 DEBUG
[RegionServer:0;j-laptop,53922,1303180713318]
ipc.Wr
ghest availability with lowest latency over other concerns with read
> replicas updated best effort from the write path
This one sounds like it's the simplest to implement and would cover
the most common use case (eg, scaling reads)?
On Mon, Apr 18, 2011 at 5:10 PM, Andrew Purtell w
wrote:
>> From: Jason Rutherglen
>> > With the new replication feature
>> > of 0.92 edits are streamed from one cluster
>> > to another
>>
>> Interesting, what does 'cluster' mean in this context?
>
> Cluster in this context is a typi
replication?
On Sun, Apr 17, 2011 at 12:46 PM, Andrew Purtell wrote:
> Jason,
>
>> Andrew, when you say this:
>>
>> > Because HBase is a DOT it can provide strongly consistent
>> > and atomic operations on rows, because rows exist in only
>> > one plac
Andrew, when you say this:
> Because HBase is a DOT it can provide strongly consistent and atomic
> operations on rows, because rows exist in only one place at a time.
This excludes the use of HBase replication? I'm curious as to where
HBase replication places the duplicate(?) region blocks in
Previously in this thread there was concern about the indexing speed
of Lucene vs. HBase. While the throughput will certainly not be as
high when building a search index in conjunction with HBase, it should
be quite good nonetheless.
Here's a link to a discussion on this:
http://bit.ly/dGxlEp
He
Ted thanks!
On Thu, Apr 14, 2011 at 7:41 PM, Ted Yu wrote:
> Jason:
> I logged https://issues.apache.org/jira/browse/HBASE-3786
> Feel free to comment there.
>
> On Thu, Apr 14, 2011 at 6:18 PM, Jason Rutherglen <
> jason.rutherg...@gmail.com> wrote:
>
>> Since
Since posting this I started working on HBASE-3529, the goal of which
is to integrate Lucene into HBase, with an eye towards fully
integrating realtime search when it's available in Lucene. RT'll give
immediate consistency of HBase put's into the search index. The first
challenge has been how to
-D
>
> On Tue, Apr 12, 2011 at 4:35 PM, Jason Rutherglen
> wrote:
>> Hmm... There's no physical limitation, is there an artificial setting?
>>
>> On Tue, Apr 12, 2011 at 4:27 PM, Jean-Daniel Cryans
>> wrote:
>>> It says:
>>>
>>> 2011-04
7.0.0.1:22967 is not chosen because the node does not
> have enough space
>
> J-D
>
> On Tue, Apr 12, 2011 at 4:24 PM, Jason Rutherglen
> wrote:
>> Ah, I had changed conf/log4j.properties. So I changed
>> src/test/resources/log4j.properties, and now the -output file&
12, 2011 at 3:38 PM, Stack wrote:
> You changed the src/test/resources/log4j.properties?
>
> Not sure why changing the block size would make a difference, why it
> would even care.
>
> St.Ack
>
> On Tue, Apr 12, 2011 at 2:38 PM, Jason Rutherglen
> wrote:
>> Thanks, I
.apache.hadoop.hbase-output.txt
>
>
>
> On Tue, Apr 12, 2011 at 12:43 PM, Jason Rutherglen <
> jason.rutherg...@gmail.com> wrote:
>
>> Where does MiniDFSCluster store the logs? I don't see a location,
>> assuming it's different than stdout/err.
Where does MiniDFSCluster store the logs? I don't see a location,
assuming it's different than stdout/err.
On Tue, Apr 12, 2011 at 11:26 AM, Stack wrote:
> The datanodes are not starting? Anything about that in the log?
> St.Ack
>
> On Tue, Apr 12, 2011 at 11:13 AM, Jas
I'm running into an error when setting the DFS block size to be larger
than the default. The following code is used to create the test
cluster:
// Hadoop 0.20/1.x API: start a two-datanode mini cluster, formatting HDFS.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

Configuration conf = new Configuration();
MiniDFSCluster cluster = new MiniDFSCluster(conf, 2, true, null);
FileSystem fileSys = cluster.getFileSystem();
vc/hadoop/common/branches/branch-0.20-append/build.xml)
>
> - Andy
>
>> From: Jason Rutherglen
>> Subject: Re: Hadoop 0.20.3 Append branch?
>> To: apurt...@apache.org, hbase-u...@hadoop.apache.org
>> Date: Monday, April 11, 2011, 2:16 PM
>>
>> Well, just the
-0.20-append/
>
> ?
>
> - Andy
>
>
> --- On Mon, 4/11/11, Jason Rutherglen wrote:
>
>> From: Jason Rutherglen
>> Subject: Hadoop 0.20.3 Append branch?
>> To: hbase-u...@hadoop.apache.org
>> Date: Monday, April 11, 2011, 12:09 PM
>> In the HBase pom.xml,
In the HBase pom.xml, the Hadoop branch is 0.20. Will HBase work with
the Hadoop 0.20.3 append branch?
wrote:
> That last change on github was for trunk, not the append branch. The
> last one I see in that branch is:
>
> HDFS-1554. New semantics for recoverLease. Contributed by Hairong Kuang.
> Hairong Kuang (author)
> January 10, 2011
>
> Same as in SVN.
>
> J-D
>
n Thu, Apr 7, 2011 at 2:05 PM, Jason Rutherglen
wrote:
> How did you compare?
>
> On Thu, Apr 7, 2011 at 1:37 PM, Jean-Daniel Cryans
> wrote:
>> As far as I can tell, they are at the same revision.
>>
>> J-D
>>
>> On Thu, Apr 7, 2011 at 1:19 PM, Jason Rut
How did you compare?
On Thu, Apr 7, 2011 at 1:37 PM, Jean-Daniel Cryans wrote:
> As far as I can tell, they are at the same revision.
>
> J-D
>
> On Thu, Apr 7, 2011 at 1:19 PM, Jason Rutherglen
> wrote:
>> Is http://svn.apache.org/repos/asf/hadoop/common/bra
they different or
is there some uniqueness involved with Github, or something simple I'm
missing.
On Thu, Apr 7, 2011 at 11:10 AM, Stack wrote:
> That one looks dead Jason. There was a bulk upload in December and
> nought since.
> St.Ack
>
>
> On Thu, Apr 7, 2011 at 11: