Re: Question about dead datanode
I can submit a Jira for this if you feel that's appropriate.

On Feb 18, 2014 8:49 PM, "Stack" wrote:
> On Sat, Feb 15, 2014 at 8:01 PM, Jack Levin wrote:
> > Looks like I patched it in DFSClient.java, here is the patch:
> > https://gist.github.com/anonymous/9028934
> >
> > I moved the 'deadNodes' list out to a global field that is accessible by
> > all running threads, so the moment a datanode goes down, every
> > thread is informed that the datanode _is_ down.
>
> We need to add something like this to current versions of DFSClient, a
> global status, so each stream does not have to discover bad DNs for itself.
> St.Ack
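The shared dead-node idea can be sketched with plain Java concurrency. The class and method names below (`DeadNodeTracker`, `markDead`, `isDead`) are hypothetical illustrations of the pattern, not the actual DFSClient patch:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the idea in the patch: one process-wide set of dead datanodes,
// shared by all reader threads, instead of a per-stream list that every
// stream must rebuild by timing out against the same dead node.
public class DeadNodeTracker {
    // Concurrent set: any thread can mark a node dead; all threads see it.
    private static final Set<String> DEAD_NODES = ConcurrentHashMap.newKeySet();

    public static void markDead(String datanode) {
        DEAD_NODES.add(datanode);
    }

    public static boolean isDead(String datanode) {
        return DEAD_NODES.contains(datanode);
    }

    public static void markAlive(String datanode) {
        DEAD_NODES.remove(datanode); // e.g. after a re-check succeeds
    }
}
```

A reader thread would consult `isDead()` before choosing a replica, skipping known-dead nodes without waiting for its own connect timeout.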
RE: windows client and ignoring winutils.exe ??!!
You can go through these links and adopt the one you need:
http://www.srccodes.com/p/article/39/error-util-shell-failed-locate-winutils-binary-hadoop-binary-path
http://www.srccodes.com/p/article/38/build-install-configure-run-apache-hadoop-2.2.0-microsoft-windows-os

Regards,
Pankaj

-----Original Message-----
From: Enis Söztutar [mailto:e...@apache.org]
Sent: 22 February 2014 05:21
To: hbase-user
Subject: Re: windows client and ignoring winutils.exe ??!!

winutils.exe is the native implementation of some of the utilities that the
hdfs and hadoop clients require. For the hbase client and hadoop client to
work properly, you have to have it installed. You can build it locally:
http://wiki.apache.org/hadoop/Hadoop2OnWindows.

Enis

On Fri, Feb 21, 2014 at 11:00 AM, shapoor wrote:
> hello everyone,
> I installed "hbase-0.96.1.1-hadoop2" using "hadoop-2.2.0". The
> installation is on a Linux machine. I would, however, like to access it
> from a Windows client using Java and Eclipse. It always worked with
> hbase-0.94, but now I get the following exception and my HBaseAdmin
> cannot be initialized; it stays null.
>
> "Could not locate executable
> C:\Users\shapoor\sources\hadoop-2.2.0\bin\winutils.exe in the Hadoop
> binaries."
>
> I don't have an installation on Windows. How can I ignore this?
>
> regards,
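A workaround often cited alongside the links above is to obtain a prebuilt winutils.exe, place it under a local directory's `bin` folder, and set the `hadoop.home.dir` system property (which the Hadoop `Shell` class checks, alongside the `HADOOP_HOME` environment variable) before any Hadoop class loads. A minimal sketch; the path used here is an example, not a required location:

```java
// Point the Hadoop client at a local directory that contains
// bin\winutils.exe, before any Hadoop/HBase class is loaded.
public class WinutilsSetup {
    public static void configure(String hadoopHome) {
        // Checked by Hadoop's Shell class when resolving winutils.exe;
        // takes the place of the HADOOP_HOME environment variable.
        System.setProperty("hadoop.home.dir", hadoopHome);
    }

    public static void main(String[] args) {
        configure("C:\\hadoop-2.2.0"); // example path; must contain bin\winutils.exe
        // ... create Configuration / HBaseAdmin only after this point ...
        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```

This only silences the client-side lookup; it does not give you a full Hadoop installation on Windows.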
Re: Why the put without column in batch doesn't generate any error
In 0.94, HConnectionManager#HConnectionImplementation#processBatch() performs
no validity check on the individual elements of the list. Trunk has the same
issue.

Cheers

On Sun, Feb 23, 2014 at 7:25 PM, java8964 wrote:
> Hi,
> I found some inconsistent behavior in HBase, and wonder why.
>
> In a simple Put API call, if there is no content at all in the put, the
> client side throws "IllegalArgumentException: No columns to insert" and
> fails the put, as shown in Lars George's book "HBase: The Definitive
> Guide", page 92, Example 3-6. This is reasonable behavior.
>
> What confuses me is that if this happens in a batch operation, no
> exception is thrown; worse, the corresponding result contains the NONE
> keyvalue instance, which represents a successful 'Put' operation. This is
> inconsistent.
>
> The following example code runs on HBase 0.94.16:
>
> hbase(main):003:0> create 'mydevtable', 'colfam1', 'colfam2'
> 0 row(s) in 1.1620 seconds
>
> List<Row> batch = new ArrayList<Row>();
>
> Put put = new Put(Bytes.toBytes("row2"));
> put.add(Bytes.toBytes("colfam2"), Bytes.toBytes("qual1"),
>     Bytes.toBytes("val5"));
> batch.add(put);
>
> Get get1 = new Get(Bytes.toBytes("row1"));
> get1.addColumn(Bytes.toBytes("colfam1"), Bytes.toBytes("qual1"));
> batch.add(get1);
>
> Delete delete = new Delete(Bytes.toBytes("row1"));
> delete.deleteColumns(Bytes.toBytes("colfam1"), Bytes.toBytes("qual2"));
> batch.add(delete);
>
> Put put2 = new Put(Bytes.toBytes("row2"));
> batch.add(put2);
>
> Object[] results = new Object[batch.size()];
> try {
>     table.batch(batch, results);
> } catch (Exception e) {
>     System.err.println("Error: " + e);
> }
> for (int i = 0; i < results.length; i++) {
>     System.out.println("Result[" + i + "]: " + results[i]);
> }
> table.close();
>
> For put2, I expected an exception to be thrown, but there is none. At the
> least, the corresponding Result object in the array should tell me this is
> an invalid Put, but still nothing.
>
> Here is the output of running the above code:
>
> Result[0]: keyvalues=NONE
> Result[1]: keyvalues=NONE
> Result[2]: keyvalues=NONE
> Result[3]: keyvalues=NONE
>
> Process finished with exit code 0
>
> Any thoughts?
> Thanks
> Yong
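Until the client performs this check itself, callers can guard the batch before submitting it. The sketch below models the idea with a toy `Put` (just a row key plus a column map, standing in for `org.apache.hadoop.hbase.client.Put`) so it runs without a cluster; with the real 0.94 client, the guard condition would be the real `Put.isEmpty()`:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy stand-in for org.apache.hadoop.hbase.client.Put. The real
// Put.isEmpty() is true when no column was ever added, which is the case
// HTable.put() rejects ("No columns to insert") but batch() lets through.
class ToyPut {
    final String row;
    final Map<String, byte[]> columns = new HashMap<>();

    ToyPut(String row) { this.row = row; }

    ToyPut add(String column, byte[] value) {
        columns.put(column, value);
        return this;
    }

    boolean isEmpty() { return columns.isEmpty(); }
}

public class BatchGuard {
    // Apply the single-Put check across a whole batch, before table.batch().
    static void rejectEmptyPuts(List<ToyPut> batch) {
        for (ToyPut put : batch) {
            if (put.isEmpty()) {
                throw new IllegalArgumentException(
                        "No columns to insert for row " + put.row);
            }
        }
    }
}
```

Running this guard over the example batch would fail fast on put2 instead of silently reporting keyvalues=NONE.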
Why the put without column in batch doesn't generate any error
Hi,

I found some inconsistent behavior in HBase, and wonder why.

In a simple Put API call, if there is no content at all in the put, the
client side throws "IllegalArgumentException: No columns to insert" and
fails the put, as shown in Lars George's book "HBase: The Definitive Guide",
page 92, Example 3-6. This is reasonable behavior.

What confuses me is that if this happens in a batch operation, no exception
is thrown; worse, the corresponding result contains the NONE keyvalue
instance, which represents a successful 'Put' operation. This is
inconsistent.

The following example code runs on HBase 0.94.16:

hbase(main):003:0> create 'mydevtable', 'colfam1', 'colfam2'
0 row(s) in 1.1620 seconds

List<Row> batch = new ArrayList<Row>();

Put put = new Put(Bytes.toBytes("row2"));
put.add(Bytes.toBytes("colfam2"), Bytes.toBytes("qual1"),
    Bytes.toBytes("val5"));
batch.add(put);

Get get1 = new Get(Bytes.toBytes("row1"));
get1.addColumn(Bytes.toBytes("colfam1"), Bytes.toBytes("qual1"));
batch.add(get1);

Delete delete = new Delete(Bytes.toBytes("row1"));
delete.deleteColumns(Bytes.toBytes("colfam1"), Bytes.toBytes("qual2"));
batch.add(delete);

Put put2 = new Put(Bytes.toBytes("row2"));
batch.add(put2);

Object[] results = new Object[batch.size()];
try {
    table.batch(batch, results);
} catch (Exception e) {
    System.err.println("Error: " + e);
}
for (int i = 0; i < results.length; i++) {
    System.out.println("Result[" + i + "]: " + results[i]);
}
table.close();

For put2, I expected an exception to be thrown, but there is none. At the
least, the corresponding Result object in the array should tell me this is
an invalid Put, but still nothing.

Here is the output of running the above code:

Result[0]: keyvalues=NONE
Result[1]: keyvalues=NONE
Result[2]: keyvalues=NONE
Result[3]: keyvalues=NONE

Process finished with exit code 0

Any thoughts?
Thanks
Yong
Re: Hbase shell - deletall doesnt remove records
Here is the definition from deleteall.rb:

def command(table, row, column = nil,
            timestamp = org.apache.hadoop.hbase.HConstants::LATEST_TIMESTAMP)

This is what I did:

hbase(main):003:0> deleteall 'IntegrationTestMTTR', '050ux', nil, 1393161606402
0 row(s) in 0.0770 seconds

hbase(main):004:0> get 'IntegrationTestMTTR', '050ux'
COLUMN    CELL
0 row(s) in 0.0110 seconds

Cheers

On Sun, Feb 23, 2014 at 1:06 AM, Ron Sher wrote:
> But you can only use a timestamp on a specific column, not the whole row.
> Here's the help:
>
> Delete all cells in a given row; pass a table name, row, and optionally
> a column and timestamp. Examples:
>
> hbase> deleteall 't1', 'r1'
> hbase> deleteall 't1', 'r1', 'c1'
> hbase> deleteall 't1', 'r1', 'c1', ts1
Re: Hbase shell - deletall doesnt remove records
But you can only use a timestamp on a specific column, not the whole row.
Here's the help:

Delete all cells in a given row; pass a table name, row, and optionally
a column and timestamp. Examples:

hbase> deleteall 't1', 'r1'
hbase> deleteall 't1', 'r1', 'c1'
hbase> deleteall 't1', 'r1', 'c1', ts1