[jira] [Created] (HBASE-20327) When qualifier is not specified, append and incr operations do not work (shell)

2018-04-01 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20327:
--

 Summary: When qualifier is not specified, append and incr operations do not work (shell)
 Key: HBASE-20327
 URL: https://issues.apache.org/jira/browse/HBASE-20327
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.3.1, 3.0.0, 2.0.0
Reporter: Nihal Jain


Running the example commands specified in the shell docs for "append" and "incr" 
throws the following errors:
{code:java}
ERROR: Failed to provide both column family and column qualifier for 
append{code}
{code:java}
ERROR: Failed to provide both column family and column qualifier for incr{code}
Running the same operation via the Java API, however, does not require the user 
to provide both the column family and qualifier, and works smoothly.
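The shell-side failure is consistent with the column spec being split on ':' and the qualifier part being required. Below is a minimal Java sketch of that parsing (the helper name is hypothetical; the real shell code is JRuby): a bare family name yields no qualifier, which append/incr currently reject, so passing the family with an explicit empty qualifier, e.g. 'c1:', may serve as a workaround.

```java
// Hypothetical sketch of shell-side column-spec parsing (not the actual JRuby code):
// a spec without ':' yields a null qualifier, which append/incr currently reject.
class ColumnSpecSketch {
  static String[] splitColumnSpec(String spec) {
    int idx = spec.indexOf(':');
    if (idx < 0) {
      // family only: no qualifier supplied, the shell errors out here
      return new String[] { spec, null };
    }
    // 'c1:' yields family "c1" and an empty (but non-null) qualifier
    return new String[] { spec.substring(0, idx), spec.substring(idx + 1) };
  }
}
```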

 

Steps to reproduce:

1) APPEND
{code:java}
hbase(main):002:0> create 't1', 'c1', 'c2'
Created table t1
Took 0.8151 seconds 
   
hbase(main):003:0> append 't1', 'r1', 'c1', 'value'

ERROR: Failed to provide both column family and column qualifier for append

Appends a cell 'value' at specified table/row/column coordinates.

  hbase> append 't1', 'r1', 'c1', 'value', ATTRIBUTES=>{'mykey'=>'myvalue'}
  hbase> append 't1', 'r1', 'c1', 'value', {VISIBILITY=>'PRIVATE|SECRET'}

The same commands also can be run on a table reference. Suppose you had a 
reference
t to table 't1', the corresponding command would be:

  hbase> t.append 'r1', 'c1', 'value', ATTRIBUTES=>{'mykey'=>'myvalue'}
  hbase> t.append 'r1', 'c1', 'value', {VISIBILITY=>'PRIVATE|SECRET'}

Took 0.0326 seconds 
 {code}
The same operation succeeds if we run the following Java code:
{code:java}
try (Connection connection = ConnectionFactory.createConnection(config);
    Table table = connection.getTable(TableName.valueOf("t1"))) {
  Append a = new Append(Bytes.toBytes("r1"));
  // qualifier is null: the Java API does not require one
  a.addColumn(Bytes.toBytes("c1"), null, Bytes.toBytes("value"));
  table.append(a);
}{code}
Scan result after executing the Java code:
{code:java}
hbase(main):005:0> scan 't1'
ROW  COLUMN+CELL
   
 r1  column=c1:, timestamp=1522649623090, 
value=value  
1 row(s)
Took 0.0188 seconds
{code}
 

2) INCREMENT:

Similarly, in the case of increment, we get the following error in the shell:
{code:java}
hbase(main):006:0> incr 't1', 'r2', 'c1', 111

ERROR: Failed to provide both column family and column qualifier for incr

Increments a cell 'value' at specified table/row/column coordinates.
To increment a cell value in table 'ns1:t1' or 't1' at row 'r1' under column
'c1' by 1 (can be omitted) or 10 do:

  hbase> incr 'ns1:t1', 'r1', 'c1'
  hbase> incr 't1', 'r1', 'c1'
  hbase> incr 't1', 'r1', 'c1', 1
  hbase> incr 't1', 'r1', 'c1', 10
  hbase> incr 't1', 'r1', 'c1', 10, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
  hbase> incr 't1', 'r1', 'c1', {ATTRIBUTES=>{'mykey'=>'myvalue'}}
  hbase> incr 't1', 'r1', 'c1', 10, {VISIBILITY=>'PRIVATE|SECRET'}

The same commands also can be run on a table reference. Suppose you had a 
reference
t to table 't1', the corresponding command would be:

  hbase> t.incr 'r1', 'c1'
  hbase> t.incr 'r1', 'c1', 1
  hbase> t.incr 'r1', 'c1', 10, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
  hbase> t.incr 'r1', 'c1', 10, {VISIBILITY=>'PRIVATE|SECRET'}

Took 0.0103 seconds 
   
hbase(main):007:0> scan 't1'
ROW  COLUMN+CELL
   
 r1  column=c1:, timestamp=1522649623090, 
value=value  
1 row(s)
Took 0.0062 seconds  
{code}
The same operation succeeds if we run the following Java code:
{code:java}
try (Connection connection = ConnectionFactory.createConnection(config);
    Table table = connection.getTable(TableName.valueOf("t1"))) {
  Increment incr = new Increment(Bytes.toBytes("r2"));
  // qualifier is null: the Java API does not require one
  incr.addColumn(Bytes.toBytes("c1"), null, 111);
  table.increment(incr);
  scan(table); // helper that scans and logs the table contents
}
{code}
Scan result after executing the Java code:
{code:java}
hbase(main):008:0> scan 't1'
ROW  COLUMN+CELL
   
 r1
{code}
[jira] [Created] (HBASE-20394) HBase overrides the value of HBASE_OPTS (if any) set by client

2018-04-11 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20394:
--

 Summary: HBase overrides the value of HBASE_OPTS (if any) set by client
 Key: HBASE-20394
 URL: https://issues.apache.org/jira/browse/HBASE-20394
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain


Currently HBase will override the value of HBASE_OPTS (if any) set by the client:

{code:java}
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"{code}
[See hbase-env.sh L43|https://github.com/apache/hbase/blob/68726b0ee3ef3eb52d32481444e64236c5a9e733/conf/hbase-env.sh#L43]

For example, a client may have the following set in their environment:
{code:java}
HBASE_OPTS="-Xmn512m"{code}
 

While starting the processes, HBase will internally override the existing 
HBASE_OPTS value with the one set in hbase-env.sh.

 

Instead of overriding it, we can have the following in hbase-env.sh:
{code:java}
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20450) Provide metrics for number of total active, priority and replication rpc handlers

2018-04-18 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20450:
--

 Summary: Provide metrics for number of total active, priority and 
replication rpc handlers
 Key: HBASE-20450
 URL: https://issues.apache.org/jira/browse/HBASE-20450
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Reporter: Nihal Jain


Currently hbase provides a metric for [number of total active rpc 
handlers|https://github.com/apache/hbase/blob/f4f2b68238a094d7b1931dc8b7939742ccbb2b57/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java#L187]
 which is a sum of the following:
 * number of active general rpc handlers
 * number of active priority rpc handlers
 * number of active replication rpc handlers

I think we can have 3 different metrics corresponding to the above-mentioned 
handlers, which would give us detailed information about the number of active 
handlers of each type.

We can add the following new metrics:
 * numActiveGeneralHandler
 * numActivePriorityHandler
 * numActiveReplicationHandler
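A minimal sketch of the proposed split (class and field names here are illustrative, not the actual HBase metrics source): the existing aggregate metric stays derivable as the sum of the three new per-type counts.

```java
// Illustrative sketch (not the actual MetricsHBaseServerSource): per-type
// active-handler counts, with the existing aggregate derived as their sum.
class HandlerMetricsSketch {
  int numActiveGeneralHandler;
  int numActivePriorityHandler;
  int numActiveReplicationHandler;

  // the existing numActiveHandler metric remains the sum of the three
  int numActiveHandler() {
    return numActiveGeneralHandler + numActivePriorityHandler + numActiveReplicationHandler;
  }
}
```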

 

 

 





[jira] [Created] (HBASE-20452) Master UI: Table merge button should validate required fields before submit

2018-04-18 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20452:
--

 Summary: Master UI: Table merge button should validate required 
fields before submit
 Key: HBASE-20452
 URL: https://issues.apache.org/jira/browse/HBASE-20452
 Project: HBase
  Issue Type: Improvement
  Components: UI
Reporter: Nihal Jain
Assignee: Nihal Jain


In the HBase master UI, the merge form should validate that the required fields 
are provided before the request is submitted. Also, it should avoid giving a 
false message that the merge request has been submitted even if the 
[validation|https://github.com/apache/hbase/blob/80cbc0d1fefdba1492d7ec6e580ad54a2960cbdb/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L181]
 fails later.
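A hedged sketch of the required-field condition (method and parameter names are hypothetical; on the actual page this check would run in JavaScript before the form submits):

```java
// Hypothetical sketch of the merge form's required-field validation:
// both region identifiers must be present and non-blank before submit.
class MergeFormSketch {
  static boolean canSubmitMerge(String leftRegion, String rightRegion) {
    return leftRegion != null && !leftRegion.trim().isEmpty()
        && rightRegion != null && !rightRegion.trim().isEmpty();
  }
}
```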





[jira] [Created] (HBASE-20469) Directory used for sidelining old recovered edits files should be made configurable

2018-04-20 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20469:
--

 Summary: Directory used for sidelining old recovered edits files 
should be made configurable
 Key: HBASE-20469
 URL: https://issues.apache.org/jira/browse/HBASE-20469
 Project: HBase
  Issue Type: Improvement
Reporter: Nihal Jain
Assignee: Nihal Jain


 Currently the directory used for sidelining old recovered edits files is 
hardcoded to "/tmp":
{code:java}
Path tmp = new Path("/tmp");
{code}
 [See L484 WALSplitter.java|https://github.com/apache/hbase/blob/273d252838e96c4b4af2401743d84e482c4ec565/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java#L484]

Instead, we can use a configurable directory in the following manner:
{code:java}
String tmpDirName = conf.get(HConstants.TEMPORARY_FS_DIRECTORY_KEY, 
HConstants.DEFAULT_TEMPORARY_HDFS_DIRECTORY); 
.
.

Path tmp = new Path(tmpDirName);
{code}
 

 





[jira] [Created] (HBASE-20472) InfoServer does not honour any acl set by the admin

2018-04-21 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20472:
--

 Summary: InfoServer does not honour any acl set by the admin
 Key: HBASE-20472
 URL: https://issues.apache.org/jira/browse/HBASE-20472
 Project: HBase
  Issue Type: Bug
  Components: security, UI
Reporter: Nihal Jain


The adminsAcl property can be used to restrict access to certain sections of 
the web UI to a particular set of users/groups. But in HBase, the adminsAcl 
variable for InfoServer is always null, so it does not honour any ACL set 
by the admin. In fact, I could not find any property in HBase for specifying 
an ACL list for the web server.

*Analysis*:
 * The *InfoServer* object never sets an *adminsAcl* on the builder object 
for the HTTP server.

{code:java}
public InfoServer(String name, String bindAddress, int port, boolean findPort,
final Configuration c) {
.
.
   
HttpServer.Builder builder =
new org.apache.hadoop.hbase.http.HttpServer.Builder();
.
.

this.httpServer = builder.build();
}{code}
[See InfoServer 
constructor|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/InfoServer.java#L55]
 * The HTTP server retrieves a null value and sets it as adminsAcl, which is passed 
to the *createWebAppContext()* method.

{code:java}
private HttpServer(final Builder b) throws IOException {
.
.
.

this.adminsAcl = b.adminsAcl;
this.webAppContext = createWebAppContext(b.name, b.conf, adminsAcl, appDir);

.
.
}{code}
[See L527 
HttpServer.java|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java#L527]

 

*Analysis (contd.):*

 * This method sets the *ADMINS_ACL* attribute for the servlet context to *null*:

{code:java}
private static WebAppContext createWebAppContext(String name,
Configuration conf, AccessControlList adminsAcl, final String appDir) {
WebAppContext ctx = new WebAppContext();
.
.

ctx.getServletContext().setAttribute(ADMINS_ACL, adminsAcl);

.
.
}
{code}
 * Now any page using *HttpServer.hasAdministratorAccess*() will allow access 
to everyone, rendering the check useless. 

{code:java}
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response
) throws ServletException, IOException {

// Do the authorization
if (!HttpServer.hasAdministratorAccess(getServletContext(), request,
response)) {
return;
}

.
.
}{code}
[For example See L104 
LogLevel.java|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java#L104]
 * *hasAdministratorAccess()* checks the following and returns true in every 
case, since *ADMINS_ACL* is always *null*:

{code:java}
public static boolean hasAdministratorAccess(
ServletContext servletContext, HttpServletRequest request,
HttpServletResponse response) throws IOException {
.
.

if (servletContext.getAttribute(ADMINS_ACL) != null &&
!userHasAdministratorAccess(servletContext, remoteUser)) {
  response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "User "
 + remoteUser + " is unauthorized to access this page.");
   return false;
}
return true;
}{code}
[See line 1196 in 
HttpServer|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java#L1196]
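The failure mode above can be condensed into a self-contained sketch: with a null ACL attribute the guard clause never fires, so every request is treated as authorized (names below are stand-ins for the servlet-context lookup and user check).

```java
// Self-contained sketch of the guard's failure mode: a null ACL attribute
// short-circuits the check, so every user is treated as authorized.
class AclCheckSketch {
  static boolean hasAdministratorAccess(Object adminsAcl, boolean userInAcl) {
    if (adminsAcl != null && !userInAcl) {
      return false; // only rejects when an ACL was actually configured
    }
    return true;    // adminsAcl == null: everyone passes
  }
}
```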

 





[jira] [Created] (HBASE-20499) Replication/Priority executors can use specific max queue length as default value instead of general maxQueueLength

2018-04-27 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20499:
--

 Summary: Replication/Priority executors can use specific max queue 
length as default value instead of general maxQueueLength
 Key: HBASE-20499
 URL: https://issues.apache.org/jira/browse/HBASE-20499
 Project: HBase
  Issue Type: Improvement
Reporter: Nihal Jain
Assignee: Nihal Jain


In *SimpleRpcScheduler*'s constructor, instead of using *maxQueueLength* as the 
default value for the *priorityExecutor*/*replicationExecutor*'s max queue 
length:
{code:java}
int maxQueueLength = conf.getInt(RpcScheduler.IPC_SERVER_MAX_CALLQUEUE_LENGTH,
handlerCount * RpcServer.DEFAULT_MAX_CALLQUEUE_LENGTH_PER_HANDLER);
int maxPriorityQueueLength =
conf.getInt(RpcScheduler.IPC_SERVER_PRIORITY_MAX_CALLQUEUE_LENGTH, 
maxQueueLength);
.
.
this.replicationExecutor = replicationHandlerCount > 0 ? new 
FastPathBalancedQueueRpcExecutor(
"replication.FPBQ", replicationHandlerCount, 
RpcExecutor.CALL_QUEUE_TYPE_FIFO_CONF_VALUE,
maxQueueLength, priority, conf, abortable) : null;{code}
[See 
SimpleRpcScheduler|https://github.com/apache/hbase/blob/96ed407c691ac0686fb14cdcd8680d1849e24ae8/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java#L97]

We can do the following:
{code:java}
int maxQueueLength = conf.getInt(RpcScheduler.IPC_SERVER_MAX_CALLQUEUE_LENGTH,
handlerCount * RpcServer.DEFAULT_MAX_CALLQUEUE_LENGTH_PER_HANDLER);
int maxPriorityQueueLength =
conf.getInt(RpcScheduler.IPC_SERVER_PRIORITY_MAX_CALLQUEUE_LENGTH, 
priorityHandlerCount * RpcServer.DEFAULT_MAX_CALLQUEUE_LENGTH_PER_HANDLER);
.
.
int maxQueueLengthForReplication = 
conf.getInt(RpcScheduler.IPC_SERVER_MAX_CALLQUEUE_LENGTH, 
replicationHandlerCount * RpcServer.DEFAULT_MAX_CALLQUEUE_LENGTH_PER_HANDLER);
.
.
this.replicationExecutor = replicationHandlerCount > 0 ? new 
FastPathBalancedQueueRpcExecutor(
"replication.FPBQ", replicationHandlerCount, 
RpcExecutor.CALL_QUEUE_TYPE_FIFO_CONF_VALUE,
maxQueueLengthForReplication, priority, conf, abortable) : null;
{code}
 

Also, we can make the maximum replication call queue length configurable, 
similar to the general and priority call queue lengths.
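The proposed default-derivation pattern can be sketched with java.util.Properties standing in for HBase's Configuration (the key name and per-handler constant are illustrative): each executor derives its default queue length from its own handler count rather than the general one.

```java
import java.util.Properties;

// Sketch with java.util.Properties in place of HBase's Configuration:
// the configured value wins, otherwise the default is derived from the
// executor's own handler count times a per-handler constant.
class QueueLengthSketch {
  static int queueLength(Properties conf, String key, int handlerCount, int perHandlerDefault) {
    String v = conf.getProperty(key);
    return v != null ? Integer.parseInt(v) : handlerCount * perHandlerDefault;
  }
}
```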





[jira] [Created] (HBASE-20577) Make Log Level page design consistent with the design of other pages in UI

2018-05-13 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20577:
--

 Summary: Make Log Level page design consistent with the design of 
other pages in UI
 Key: HBASE-20577
 URL: https://issues.apache.org/jira/browse/HBASE-20577
 Project: HBase
  Issue Type: Improvement
  Components: UI, Usability
Reporter: Nihal Jain
Assignee: Nihal Jain


The Log Level page in the web UI seems out of place. I think we should make it 
look consistent with the design of other pages in the HBase web UI.





[jira] [Created] (HBASE-20614) REST scan API with incorrect filter text file throws HTTP 503 Service Unavailable error

2018-05-22 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20614:
--

 Summary: REST scan API with incorrect filter text file throws HTTP 
503 Service Unavailable error
 Key: HBASE-20614
 URL: https://issues.apache.org/jira/browse/HBASE-20614
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


The HBase REST server returns a {{503 Service Unavailable}} when it fails to 
generate a scanner object through the REST interface.

The error code returned by the REST server is incorrect and may mislead the 
user.





[jira] [Created] (HBASE-20633) Dropping a table containing a disable violation policy fails to remove the quota upon table delete

2018-05-23 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20633:
--

 Summary: Dropping a table containing a disable violation policy 
fails to remove the quota upon table delete
 Key: HBASE-20633
 URL: https://issues.apache.org/jira/browse/HBASE-20633
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


 
{code:java}
  private void setQuotaAndThenDropTable(SpaceViolationPolicy policy) throws Exception {
    Put put = new Put(Bytes.toBytes("to_reject"));
    put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), Bytes.toBytes("to"),
      Bytes.toBytes("reject"));

    // Do puts until we violate space policy
    final TableName tn = writeUntilViolationAndVerifyViolation(policy, put);

    // Now, drop the table
    TEST_UTIL.deleteTable(tn);
    LOG.debug("Successfully deleted table {}", tn);

    // Now re-create the table
    TEST_UTIL.createTable(tn, Bytes.toBytes(SpaceQuotaHelperForTests.F1));
    LOG.debug("Successfully re-created table {}", tn);

// Put some rows now: should not violate as table/quota was dropped
verifyNoViolation(policy, tn, put);
  }
{code}

 * When we drop a table, upon completion the quota chore triggers removal of the 
disable policy, causing the system to try to enable the table
{noformat}
2018-05-18 18:08:58,189 DEBUG [PEWorker-13] 
procedure.DeleteTableProcedure(130): delete 
'testSetQuotaAndThenDropTableWithDisable19' completed
2018-05-18 18:08:58,191 INFO  [PEWorker-13] procedure2.ProcedureExecutor(1265): 
Finished pid=328, state=SUCCESS; DeleteTableProcedure 
table=testSetQuotaAndThenDropTableWithDisable19 in 271msec
2018-05-18 18:08:58,321 INFO  [regionserver/ba4cba1aa13d:0.Chore.1] 
client.HBaseAdmin$14(844): Started enable of 
testSetQuotaAndThenDropTableWithDisable19{noformat}

 * But, since the table has already been dropped, the enable procedure rolls 
back
{noformat}
2018-05-18 18:08:58,427 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46443] 
procedure2.ProcedureExecutor(884): Stored pid=329, 
state=RUNNABLE:ENABLE_TABLE_PREPARE; EnableTableProcedure 
table=testSetQuotaAndThenDropTableWithDisable19
2018-05-18 18:08:58,430 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46443] 
master.MasterRpcServices(1141): Checking to see if procedure is done pid=329
2018-05-18 18:08:58,451 INFO  [PEWorker-10] procedure2.ProcedureExecutor(1359): 
Rolled back pid=329, state=ROLLEDBACK, 
exception=org.apache.hadoop.hbase.TableNotFoundException via 
master-enable-table:org.apache.hadoop.hbase.TableNotFoundException: 
testSetQuotaAndThenDropTableWithDisable19; EnableTableProcedure 
table=testSetQuotaAndThenDropTableWithDisable19 exec-time=124msec
2018-05-18 18:08:58,533 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46443] 
master.MasterRpcServices(1141): Checking to see if procedure is done pid=329
2018-05-18 18:08:58,535 INFO  [regionserver/ba4cba1aa13d:0.Chore.1] 
client.HBaseAdmin$TableFuture(3652): Operation: ENABLE, Table Name: 
default:testSetQuotaAndThenDropTableWithDisable19 failed with 
testSetQuotaAndThenDropTableWithDisable19{noformat}

 * Since the quota manager fails to enable the table (i.e. to lift the disable 
violation policy), it does not remove the policy, causing problems if the table 
is re-created
{noformat}
2018-05-18 18:08:58,536 ERROR [regionserver/ba4cba1aa13d:0.Chore.1] 
quotas.RegionServerSpaceQuotaManager(210): Failed to disable space violation 
policy for testSetQuotaAndThenDropTableWithDisable19. This table will remain in 
violation.
 org.apache.hadoop.hbase.TableNotFoundException: 
testSetQuotaAndThenDropTableWithDisable19
 at 
org.apache.hadoop.hbase.master.procedure.EnableTableProcedure.prepareEnable(EnableTableProcedure.java:323)
 at 
org.apache.hadoop.hbase.master.procedure.EnableTableProcedure.executeFromState(EnableTableProcedure.java:98)
 at 
org.apache.hadoop.hbase.master.procedure.EnableTableProcedure.executeFromState(EnableTableProcedure.java:49)
 at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
 at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
 at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1472)
 at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1240)
 at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:75)
 at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1760){noformat}





[jira] [Created] (HBASE-20662) Increasing space quota on a violated table does not remove SpaceViolationPolicy.DISABLE enforcement

2018-05-30 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20662:
--

 Summary: Increasing space quota on a violated table does not 
remove SpaceViolationPolicy.DISABLE enforcement
 Key: HBASE-20662
 URL: https://issues.apache.org/jira/browse/HBASE-20662
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain


*Steps to reproduce*
* Create a table and set a quota with {{SpaceViolationPolicy.DISABLE}} having 
limit = 2MB
* Put rows until the space quota is violated and the table gets disabled
* Next, increase the space quota limit to 4MB on the table
* Now put a row into the table
{code:java}
  private void testSetQuotaThenViolateAndFinallyIncreaseQuota() throws Exception {
SpaceViolationPolicy policy = SpaceViolationPolicy.DISABLE;
Put put = new Put(Bytes.toBytes("to_reject"));
put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), 
Bytes.toBytes("to"),
  Bytes.toBytes("reject"));

// Do puts until we violate space policy
final TableName tn = writeUntilViolationAndVerifyViolation(policy, put);

// Now, increase limit
setQuotaLimit(tn, policy, 4L);

// Put some row now: should not violate as quota limit increased
verifyNoViolation(policy, tn, put);
  }
{code}

*Expected*
We should be able to put data as long as newly set quota limit is not reached

*Actual*
We fail to put any new row even after increasing limit

*Root cause*
Increasing the quota on a violated table triggers the table to be enabled, but 
because the table is already in violation, the system does not allow it to be 
enabled (perhaps assuming that a user is trying to enable it)
{noformat}
2018-05-31 00:34:27,563 INFO  [regionserver/root1-ThinkPad-T440p:0.Chore.1] 
client.HBaseAdmin$14(844): Started enable of 
testSetQuotaAndThenIncreaseQuotaWithDisable0
2018-05-31 00:34:27,571 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=42525] ipc.CallRunner(142): 
callId: 11 service: MasterService methodName: EnableTable size: 104 connection: 
127.0.0.1:38030 deadline: 1527707127568, 
exception=org.apache.hadoop.hbase.security.AccessDeniedException: Enabling the 
table 'testSetQuotaAndThenIncreaseQuotaWithDisable0' is disallowed due to a 
violated space quota.
2018-05-31 00:34:27,571 ERROR [regionserver/root1-ThinkPad-T440p:0.Chore.1] 
quotas.RegionServerSpaceQuotaManager(210): Failed to disable space violation 
policy for testSetQuotaAndThenIncreaseQuotaWithDisable0. This table will remain 
in violation.
org.apache.hadoop.hbase.security.AccessDeniedException: 
org.apache.hadoop.hbase.security.AccessDeniedException: Enabling the table 
'testSetQuotaAndThenIncreaseQuotaWithDisable0' is disallowed due to a violated 
space quota.
at org.apache.hadoop.hbase.master.HMaster$6.run(HMaster.java:2275)
at 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131)
at org.apache.hadoop.hbase.master.HMaster.enableTable(HMaster.java:2258)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.enableTable(MasterRpcServices.java:725)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:360)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:348)
at 
org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3061)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3053)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.enableTableAsync(HBaseAdmin.java:839)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.enableTable(HBaseAdmin.java:833)
at 
org.apache
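The circular dependency described under *Root cause* can be reduced to a toy sketch (names are illustrative, not HBase APIs): the violation flag blocks the very enable call that would clear the flag.

```java
// Toy sketch of the cycle: enable is refused while in violation, but
// clearing the violation requires enabling the table first.
class ViolationCycleSketch {
  boolean inViolation = true;

  void enableTable() {
    if (inViolation) {
      throw new IllegalStateException("enable disallowed while in violation");
    }
  }

  void clearViolationPolicy() {
    enableTable();       // quota manager enables the table first...
    inViolation = false; // ...so this line is never reached
  }
}
```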

[jira] [Created] (HBASE-20685) Relook region state transition documentation

2018-06-05 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20685:
--

 Summary: Relook region state transition documentation
 Key: HBASE-20685
 URL: https://issues.apache.org/jira/browse/HBASE-20685
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Nihal Jain


HBASE-11961 added docs for region state transitions. I have a few points in 
mind:
 * The RST graph says {{OFFLINE}} is a terminal state (i.e. regions of disabled 
tables are set to offline), but with master branch I see table regions in 
{{CLOSED}} state. 
 * The CLOSED state has been marked as transient but as I understand it is a 
terminal state with AMv2 design (unassigning a region puts it into CLOSED state)

I think this diagram is outdated with respect to master and needs to be revisited.





[jira] [Created] (HBASE-20693) Log Level page may throw FNFE if no header.jsp or footer.jsp is present

2018-06-06 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20693:
--

 Summary: Log Level page may throw FNFE if no header.jsp or 
footer.jsp is present
 Key: HBASE-20693
 URL: https://issues.apache.org/jira/browse/HBASE-20693
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


The Log Level page design was changed in HBASE-20577 to include a header and 
footer. Not all web UIs in HBase have header/footer JSPs. Also, it would be 
wrong to assume that some new project would have its JSPs refactored into a 
header/footer.

To mitigate this, we can do the following:
 1) Fall back to the original Log Level design if 'header.jsp'/'footer.jsp' is 
not present
 2) Next, refactor the JSP files of other projects (like REST, Thrift) to have 
'header.jsp' and 'footer.jsp'





[jira] [Reopened] (HBASE-20577) Make Log Level page design consistent with the design of other pages in UI

2018-06-06 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain reopened HBASE-20577:


> Make Log Level page design consistent with the design of other pages in UI
> --
>
> Key: HBASE-20577
> URL: https://issues.apache.org/jira/browse/HBASE-20577
> Project: HBase
>  Issue Type: Improvement
>  Components: UI, Usability
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20577.master.001.patch, 
> HBASE-20577.master.002.patch, HBASE-20577.master.ADDENDUM.patch, 
> after_patch_LogLevel_CLI.png, after_patch_get_log_level.png, 
> after_patch_require_field_validation.png, after_patch_set_log_level_bad.png, 
> after_patch_set_log_level_success.png, 
> before_patch_no_validation_required_field.png, rest_after_addendum_patch.png
>
>
> The Log Level page in web UI seems out of the place. I think we should make 
> it look consistent with design of other pages in HBase web UI.
> Also, validation of required fields should be done, otherwise user should not 
> be allowed to click submit button.





[jira] [Created] (HBASE-20699) QuotaCache should cancel the QuotaRefresherChore service inside its stop()

2018-06-07 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20699:
--

 Summary: QuotaCache should cancel the QuotaRefresherChore service inside its stop()
 Key: HBASE-20699
 URL: https://issues.apache.org/jira/browse/HBASE-20699
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


*ANALYSIS*
 * The quota manager is stopped from {{HRegionServer.run()}} in case the RS is 
aborted for some reason:

{code:java}
// Stop the quota manager
if (rsQuotaManager != null) {
  rsQuotaManager.stop();
}
{code}
 * Inside {{RegionServerRpcQuotaManager.stop()}}:

{code:java}
  public void stop() {
if (isQuotaEnabled()) {
  quotaCache.stop("shutdown");
}
  }
{code}
 * {{QuotaCache}} starts the {{QuotaRefresherChore}} in {{QuotaCache.start()}}:

{code:java}
  public void start() throws IOException {
stopped = false;

// TODO: This will be replaced once we have the notification bus ready.
Configuration conf = rsServices.getConfiguration();
int period = conf.getInt(REFRESH_CONF_KEY, REFRESH_DEFAULT_PERIOD);
refreshChore = new QuotaRefresherChore(period, this);
rsServices.getChoreService().scheduleChore(refreshChore);
  }
{code}
 * {{QuotaCache}} should cancel the {{refreshChore}} inside {{QuotaCache.stop()}}, but currently it only flips the flag:

{code:java}
  @Override
  public void stop(final String why) {
stopped = true;
  }
{code}

*IMPACT:*
An uncancelled QuotaRefresherChore may keep retrying operations and delay the RS abort.





[jira] [Created] (HBASE-20712) HBase eclipse formatter should not format the ASF license header

2018-06-09 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20712:
--

 Summary: HBase eclipse formatter should not format the ASF license 
header
 Key: HBASE-20712
 URL: https://issues.apache.org/jira/browse/HBASE-20712
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


Whenever we add a new class along with the ASF license header, we cannot select 
all ({{Ctrl+A}}) and format the whole file, as the formatter also reformats the 
header text.

IMO we should disable formatting of headers in our code formatter.





[jira] [Created] (HBASE-20714) Document REST curl commands for supported (but missing in docs) operations

2018-06-11 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20714:
--

 Summary: Document REST curl commands for supported (but missing in 
docs) operations
 Key: HBASE-20714
 URL: https://issues.apache.org/jira/browse/HBASE-20714
 Project: HBase
  Issue Type: Task
  Components: REST
Reporter: Nihal Jain


While going through some recent REST JIRAs, I found that REST supports row 
operations like delete, append, increment etc.

I think we should document all such missing ops.

Others can give some input regarding other missing ops apart from the 
above-mentioned ones.





[jira] [Created] (HBASE-20715) REST curl command for create namespace with "Content-Type: text/xml" specified (mistakenly) results in a 400 Bad Request

2018-06-11 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20715:
--

 Summary: REST curl command for create namespace with 
"Content-Type: text/xml" specified (mistakenly) results in a 400 Bad Request
 Key: HBASE-20715
 URL: https://issues.apache.org/jira/browse/HBASE-20715
 Project: HBase
  Issue Type: Task
  Components: documentation, REST
Reporter: Nihal Jain


I have a habit of setting "Content-Type: text/xml" for curl commands. Today, I 
had a bad time debugging the 400 error for the command below. Finally, the 
discussion in HBASE-14147 helped me resolve the problem.

As pointed out by [~mwarhaftig] and [~misty] in their discussion in 
HBASE-14147, a curl request of the following form will throw a {{400 BAD 
REQUEST}}
{code:java}
curl -vi -X POST -H "Accept: text/xml" -H "Content-Type: text/xml" "http://my_server:20550/namespaces/new_ns2/"
{code}
I think we should document this in the description for "Create a new 
namespace", explicitly mentioning that users should take care that 
"Content-Type" is not set for this particular command.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20820) Removing SpaceQuota from a namespace does not remove it completely

2018-06-28 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20820:
--

 Summary: Removing SpaceQuota from a namespace does not remove it 
completely
 Key: HBASE-20820
 URL: https://issues.apache.org/jira/browse/HBASE-20820
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


As demonstrated in 
[HBASE-20662.master.002.patch|https://issues.apache.org/jira/secure/attachment/12927187/HBASE-20662.master.002.patch],
 setting a quota on a namespace and then removing it does not remove the quota 
setting from the tables in the namespace (which were added automatically due to 
the namespace quota settings).

*Relevant code:*
{code:java}
public void setQuotaAndThenRemove(final String namespace, SpaceViolationPolicy 
policy)
throws Exception {
  Put put = new Put(Bytes.toBytes("to_reject"));
  put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), Bytes.toBytes("to"),
Bytes.toBytes("reject"));
  // Set quota and do puts until we violate space policy
  final TableName tn = writeUntilNSSpaceViolationAndVerifyViolation(namespace, 
policy, put);
  // Now, remove the quota
  removeQuotaFromNamespace(namespace);
  // Put some rows now: should not violate as quota settings removed
  verifyNoViolation(policy, tn, put);
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20821) Re-creating a dropped namespace and contained table inherits previously set space quota settings

2018-06-29 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20821:
--

 Summary: Re-creating a dropped namespace and contained table 
inherits previously set space quota settings
 Key: HBASE-20821
 URL: https://issues.apache.org/jira/browse/HBASE-20821
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


As demonstrated in 
[HBASE-20662.master.002.patch|https://issues.apache.org/jira/secure/attachment/12927187/HBASE-20662.master.002.patch],
 re-creating a dropped namespace and a contained table inherits the previously 
set space quota settings.

*Steps*
 * Create a namespace and a table in it
 * Set space quota on the namespace
 * Drop the table and then the namespace
 * Re-create the same namespace and the same table

{code:java}
private void setQuotaAndThenDropNamespace(final String namespace, 
SpaceViolationPolicy policy)
throws Exception {
  Put put = new Put(Bytes.toBytes("to_reject"));
  put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), Bytes.toBytes("to"),
Bytes.toBytes("reject"));
  createNamespaceIfNotExist(TEST_UTIL.getAdmin(), namespace);
  // Do puts until we violate space policy
  final TableName tn = writeUntilNSSpaceViolationAndVerifyViolation(namespace, 
policy, put);
  // Now, drop the table
  TEST_UTIL.deleteTable(tn);
  LOG.debug("Successfully deleted table {}", tn);
  // Now, drop the namespace
  TEST_UTIL.getAdmin().deleteNamespace(namespace);
  LOG.debug("Successfully deleted the namespace {}", namespace);
  // Now re-create the namespace
  createNamespaceIfNotExist(TEST_UTIL.getAdmin(), namespace);
  LOG.debug("Successfully re-created the namespace {}", namespace);
  TEST_UTIL.createTable(tn, Bytes.toBytes(SpaceQuotaHelperForTests.F1));
  LOG.debug("Successfully re-created table {}", tn);
  // Put some rows now: should not violate as namespace/quota was dropped
  verifyNoViolation(policy, tn, put);
}
{code}

*Expected*: SpaceQuota settings should not exist on the newly re-created table

*Actual:* SpaceQuota settings (systematically created due to previously added 
namespace space quota) exist on table



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21021) Result returned by Append operation should be ordered

2018-08-07 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21021:
--

 Summary: Result returned by Append operation should be ordered
 Key: HBASE-21021
 URL: https://issues.apache.org/jira/browse/HBASE-21021
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Nihal Jain
Assignee: Nihal Jain


*Problem:*

The result returned by the append operation should be ordered. Currently, it 
returns an unordered list, which may cause problems: for example, if the user 
calls Result.getValue(byte[] family, byte[] qualifier), the method may return 
null even when the returned result contains a value for (family, qualifier), 
because it performs a binary search over the unsorted result (which should 
actually have been sorted).

*Actual:* The returned result is unordered

*Expected:* Similar to increment op, the returned result should be ordered.
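The effect can be seen with a plain array (an illustration only, not HBase code): a binary search over unsorted data may report a present element as missing, which is exactly how Result.getValue ends up returning null.
{code:java}
import java.util.Arrays;

// Illustration of why a binary search over unsorted data can miss an
// element that is actually present, mirroring how Result.getValue may
// return null when the backing cells are not sorted.
public class UnsortedSearchDemo {
    public static void main(String[] args) {
        int[] unsorted = {5, 1, 4, 2, 3};
        // Arrays.binarySearch requires sorted input; on unsorted input
        // the result is undefined, and here 1 is not found at all
        System.out.println(Arrays.binarySearch(unsorted, 1) < 0); // true: "not found"

        int[] sorted = unsorted.clone();
        Arrays.sort(sorted);
        System.out.println(Arrays.binarySearch(sorted, 1)); // 0: found once sorted
    }
}
{code}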



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21030) Correct javadoc for append operation

2018-08-09 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21030:
--

 Summary: Correct javadoc for append operation
 Key: HBASE-21030
 URL: https://issues.apache.org/jira/browse/HBASE-21030
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 1.5.0
Reporter: Nihal Jain


The javadoc for the {{append}} operation is incorrect (see {{@param append}} in 
the code snippet below, or 
[Table.java#L566|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L566])
{code:java}
  /**
   * Appends values to one or more columns within a single row.
   * 
   * This operation guaranteed atomicity to readers. Appends are done
   * under a single row lock, so write operations to a row are synchronized, and
   * readers are guaranteed to see this operation fully completed.
   *
   * @param append object that specifies the columns and amounts to be used
   *  for the increment operations
   * @throws IOException e
   * @return values of columns after the append operation (maybe null)
   */
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21067) Backport HBASE-17519 (Rollback the removed cells) to branch-1.3

2018-08-17 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21067:
--

 Summary: Backport HBASE-17519 (Rollback the removed cells) to 
branch-1.3
 Key: HBASE-21067
 URL: https://issues.apache.org/jira/browse/HBASE-21067
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.3.3
Reporter: Nihal Jain
Assignee: Nihal Jain


Backport HBASE-17519 (Rollback the removed cells) to branch-1.3, which handles 
rollback of append/increment completely in case of failure



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20922) java.lang.ArithmeticException: / by zero in hbase shell

2018-08-24 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-20922.

Resolution: Duplicate

> java.lang.ArithmeticException: / by zero in hbase shell
> ---
>
> Key: HBASE-20922
> URL: https://issues.apache.org/jira/browse/HBASE-20922
> Project: HBase
>  Issue Type: Bug
>  Components: shell
> Environment: # hbase version
> HBase 1.2.0-cdh5.13.0
> Source code repository 
> file:///data/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hbase-1.2.0-cdh5.13.0
>  revision=Unknown
> Compiled by jenkins on Wed Oct 4 11:16:18 PDT 2017
> From source with checksum d217698686f8f4b97ae57e07264f65a8
>Reporter: Sergey Alaev
>Priority: Major
>
> # run `hbase shell`
>  # type a
>  # java.lang.ArithmeticException: will occur
>  
> ```
>  # hbase shell
>  2018-07-23 17:21:53,994 INFO [main] Configuration.deprecation: 
> hadoop.native.lib is deprecated. Instead, use io.native.lib.available
>  HBase Shell; enter 'help' for list of supported commands.
>  Type "exit" to leave the HBase Shell
>  Version 1.2.0-cdh5.13.0, rUnknown, Wed Oct 4 11:16:18 PDT 2017
> hbase(main):001:0> aConsoleReader.java:1414:in `backspace': 
> java.lang.ArithmeticException: / by zero
>  from ConsoleReader.java:1436:in `backspace'
>  from ConsoleReader.java:628:in `readLine'
>  from ConsoleReader.java:457:in `readLine'
>  from Readline.java:237:in `s_readline'
>  from Readline$s$s_readline.gen:65535:in `call'
>  from CachingCallSite.java:332:in `cacheAndCall'
>  from CachingCallSite.java:203:in `call'
>  from FCallTwoArgNode.java:38:in `interpret'
>  from LocalAsgnNode.java:123:in `interpret'
>  from IfNode.java:111:in `interpret'
>  from NewlineNode.java:104:in `interpret'
>  from ASTInterpreter.java:74:in `INTERPRET_METHOD'
>  from InterpretedMethod.java:147:in `call'
>  from DefaultMethod.java:183:in `call'
>  from CachingCallSite.java:292:in `cacheAndCall'
>  from CachingCallSite.java:135:in `call'
>  from CallNoArgNode.java:63:in `interpret'
>  from DAsgnNode.java:110:in `interpret'
>  from IfNode.java:111:in `interpret'
>  from NewlineNode.java:104:in `interpret'
>  from BlockNode.java:71:in `interpret'
>  from ASTInterpreter.java:111:in `INTERPRET_BLOCK'
>  from InterpretedBlock.java:374:in `evalBlockBody'
>  from InterpretedBlock.java:295:in `yield'
>  from InterpretedBlock.java:229:in `yieldSpecific'
>  from Block.java:99:in `yieldSpecific'
>  from ZYieldNode.java:25:in `interpret'
>  from NewlineNode.java:104:in `interpret'
>  from EnsureNode.java:96:in `interpret'
>  from BeginNode.java:83:in `interpret'
>  from NewlineNode.java:104:in `interpret'
>  from BlockNode.java:71:in `interpret'
>  from ASTInterpreter.java:74:in `INTERPRET_METHOD'
>  from InterpretedMethod.java:212:in `call'
>  from DefaultMethod.java:207:in `call'
>  from CachingCallSite.java:322:in `cacheAndCall'
>  from CachingCallSite.java:178:in `callBlock'
>  from CachingCallSite.java:187:in `callIter'
>  from FCallOneArgBlockNode.java:34:in `interpret'
>  from NewlineNode.java:104:in `interpret'
>  from ASTInterpreter.java:111:in `INTERPRET_BLOCK'
>  from InterpretedBlock.java:374:in `evalBlockBody'
>  from InterpretedBlock.java:328:in `yield'
>  from BlockBody.java:73:in `call'
>  from Block.java:89:in `call'
>  from RubyProc.java:270:in `call'
>  from RubyProc.java:220:in `call'
>  from RubyProc$i$0$0$call.gen:65535:in `call'
>  from DynamicMethod.java:203:in `call'
>  from DynamicMethod.java:199:in `call'
>  from CachingCallSite.java:292:in `cacheAndCall'
>  from CachingCallSite.java:135:in `call'
>  from CallNoArgNode.java:63:in `interpret'
>  from LocalAsgnNode.java:123:in `interpret'
>  from NewlineNode.java:104:in `interpret'
>  from BlockNode.java:71:in `interpret'
>  from ASTInterpreter.java:74:in `INTERPRET_METHOD'
>  from InterpretedMethod.java:147:in `call'
>  from DefaultMethod.java:183:in `call'
>  from CachingCallSite.java:292:in `cacheAndCall'
>  from CachingCallSite.java:135:in `call'
>  from VCallNode.java:86:in `interpret'
>  from IfNode.java:111:in `interpret'
>  from NewlineNode.java:104:in `interpret'
>  from WhileNode.java:131:in `interpret'
>  from NewlineNode.java:104:in `interpret'
>  from BlockNode.java:71:in `interpret'
>  from ASTInterpreter.java:74:in `INTERPRET_METHOD'
>  from InterpretedMethod.java:147:in `call'
>  from DefaultMethod.java:183:in `call'
>  from CachingCallSite.java:292:in `cacheAndCall'
>  from CachingCallSite.java:135:in `call'
>  from CallNoArgNode.java:63:in `interpret'
>  from LocalAsgnNode.java:123:in `interpret'
>  from NewlineNode.java:104:in `interpret'
>  from BlockNode.java:71:in `interpret'
>  from IfNode.java:117:in `interpret'
>  from NewlineNode.java:104:in `interpret'
>  from BlockNode.java:71:in `interpret'
>  from ASTInterpreter.java:7

[jira] [Reopened] (HBASE-20922) java.lang.ArithmeticException: / by zero in hbase shell

2018-08-24 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain reopened HBASE-20922:


> java.lang.ArithmeticException: / by zero in hbase shell
> ---
>
> Key: HBASE-20922
> URL: https://issues.apache.org/jira/browse/HBASE-20922
> Project: HBase
>  Issue Type: Bug
>  Components: shell
> Environment: # hbase version
> HBase 1.2.0-cdh5.13.0
> Source code repository 
> file:///data/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hbase-1.2.0-cdh5.13.0
>  revision=Unknown
> Compiled by jenkins on Wed Oct 4 11:16:18 PDT 2017
> From source with checksum d217698686f8f4b97ae57e07264f65a8
>Reporter: Sergey Alaev
>Priority: Major
>
> 

[jira] [Created] (HBASE-21135) Build fails on windows as it fails to parse windows path during license check

2018-08-31 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21135:
--

 Summary: Build fails on windows as it fails to parse windows path 
during license check
 Key: HBASE-21135
 URL: https://issues.apache.org/jira/browse/HBASE-21135
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0, 2.1.1
Reporter: Nihal Jain
Assignee: Nihal Jain


The license check via the enforcer plugin throws the following error during a 
build on Windows:
 
{code:java}
Sourced file: inline evaluation of: ``File license = new 
File("D:\DS\HBase_2\hbase\hbase-shaded\target/maven-shared-ar . . . '' Token 
Parsing Error: Lexical error at line 1, column 29.  Encountered: "D" (68), 
after : "\"D:\\": {code}
Complete stacktrace with command
{code:java}
mvn clean install -DskipTests -X
{code}
is as follows:
{noformat}
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (check-aggregate-license) @ 
hbase-shaded ---
[DEBUG] Configuring mojo 
org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce from plugin 
realm 
ClassRealm[plugin>org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1, 
parent: sun.misc.Launcher$AppClassLoader@55f96302]
[DEBUG] Configuring mojo 
'org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce' with basic 
configurator -->
[DEBUG] (s) fail = true
[DEBUG] (s) failFast = false
[DEBUG] (f) ignoreCache = false
[DEBUG] (f) mojoExecution = 
org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce {execution: 
check-aggregate-license}
[DEBUG] (s) project = MavenProject: 
org.apache.hbase:hbase-shaded:2.1.1-SNAPSHOT @ 
D:\DS\HBase_2\hbase\hbase-shaded\pom.xml
[DEBUG] (s) condition = File license = new 
File("D:\DS\HBase_2\hbase\hbase-shaded\target/maven-shared-archive-resources/META-INF/LICENSE");

// Beanshell does not support try-with-resources,
// so we must close this scanner manually
Scanner scanner = new Scanner(license);

while (scanner.hasNextLine()) {
if (scanner.nextLine().startsWith("ERROR:")) {
scanner.close();
return false;
}
}
scanner.close();
return true;
[DEBUG] (s) message = License errors detected, for more detail find ERROR in
D:\DS\HBase_2\hbase\hbase-shaded\target/maven-shared-archive-resources/META-INF/LICENSE
[DEBUG] (s) rules = 
[org.apache.maven.plugins.enforcer.EvaluateBeanshell@7e307087]
[DEBUG] (s) session = org.apache.maven.execution.MavenSession@5e1218b4
[DEBUG] (s) skip = false
[DEBUG] -- end configuration --
[DEBUG] Executing rule: org.apache.maven.plugins.enforcer.EvaluateBeanshell
[DEBUG] Echo condition : File license = new 
File("D:\DS\HBase_2\hbase\hbase-shaded\target/maven-shared-archive-resources/META-INF/LICENSE");

// Beanshell does not support try-with-resources,
// so we must close this scanner manually
Scanner scanner = new Scanner(license);

while (scanner.hasNextLine()) {
if (scanner.nextLine().startsWith("ERROR:")) {
scanner.close();
return false;
}
}
scanner.close();
return true;
[DEBUG] Echo script : File license = new 
File("D:\DS\HBase_2\hbase\hbase-shaded\target/maven-shared-archive-resources/META-INF/LICENSE");

// Beanshell does not support try-with-resources,
// so we must close this scanner manually
Scanner scanner = new Scanner(license);

while (scanner.hasNextLine()) {
if (scanner.nextLine().startsWith("ERROR:")) {
scanner.close();
return false;
}
}
scanner.close();
return true;
[DEBUG] Adding failure due to exception
org.apache.maven.enforcer.rule.api.EnforcerRuleException: Couldn't evaluate 
condition: File license = new 
File("D:\DS\HBase_2\hbase\hbase-shaded\target/maven-shared-archive-resources/META-INF/LICENSE");

// Beanshell does not support try-with-resources,
// so we must close this scanner manually
Scanner scanner = new Scanner(license);

while (scanner.hasNextLine()) {
if (scanner.nextLine().startsWith("ERROR:")) {
scanner.close();
return false;
}
}
scanner.close();
return true;
at 
org.apache.maven.plugins.enforcer.EvaluateBeanshell.evaluateCondition(EvaluateBeanshell.java:107)
at 
org.apache.maven.plugins.enforcer.EvaluateBeanshell.execute(EvaluateBeanshell.java:72)
at org.apache.maven.plugins.enforcer.EnforceMojo.execute(EnforceMojo.java:202)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apa
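The underlying problem is a Java/BeanShell lexical rule, not anything HBase-specific: inside a string literal a lone backslash starts an escape sequence, so an unescaped Windows path fails to parse. A minimal illustration (hypothetical paths):
{code:java}
// In Java/BeanShell source, "\D" is an invalid escape sequence, so a raw
// Windows path inside a string literal fails to lex, exactly as the
// enforcer error above shows.
public class WindowsPathDemo {
    public static void main(String[] args) {
        // String bad = "D:\DS\HBase";    // would not compile: invalid escape sequences
        String escaped = "D:\\DS\\HBase"; // backslashes doubled
        String forward = "D:/DS/HBase";   // java.io.File also accepts forward slashes on Windows
        System.out.println(escaped.length()); // 11: each "\\" is a single character
        System.out.println(forward);
    }
}
{code}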

[jira] [Created] (HBASE-21196) HTableMultiplexer clears the meta cache after every put operation

2018-09-13 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21196:
--

 Summary: HTableMultiplexer clears the meta cache after every put 
operation
 Key: HBASE-21196
 URL: https://issues.apache.org/jira/browse/HBASE-21196
 Project: HBase
  Issue Type: Bug
  Components: Performance
Affects Versions: 3.0.0, 1.3.3, 2.2.0
Reporter: Nihal Jain
Assignee: Nihal Jain
 Attachments: HTableMultiplexer1000Puts.UT.txt

Operations that use the 
{{AsyncRequestFutureImpl.receiveMultiAction(MultiAction, ServerName, 
MultiResponse, int)}} API with the table name set to null reset the meta cache 
of the corresponding server after each call. One such operation is the put 
operation of HTableMultiplexer (it might not be the only one). This may 
severely impact the performance of the system: the cache becomes empty after 
every HTableMultiplexer put, so all new ops directed to that server first have 
to go to zk to get the meta table address and then read meta to get the 
location of the table region.

From the logs below, one can see that after every other put the cached region 
locations are cleared. As a side effect, before every put the client needs to 
contact zk to get the meta table location and then read meta to get the region 
locations of the table.

{noformat}
2018-09-13 22:21:15,467 TRACE [htable-pool11-t1] client.MetaCache(283): Removed 
all cached region locations that map to root1-thinkpad-t440p,35811,1536857446588
2018-09-13 22:21:15,467 DEBUG [HTableFlushWorker-5] 
client.HTableMultiplexer$FlushWorker(632): Processed 1 put requests for 
root1-ThinkPad-T440p:35811 and 0 failed, latency for this send: 5
2018-09-13 22:21:15,515 TRACE 
[RpcServer.reader=1,bindAddress=root1-ThinkPad-T440p,port=35811] 
ipc.RpcServer$Connection(1954): RequestHeader call_id: 218 method_name: "Get" 
request_param: true priority: 0 timeout: 6 totalRequestSize: 137 bytes
2018-09-13 22:21:15,515 TRACE 
[RpcServer.FifoWFPBQ.default.handler=3,queue=0,port=35811] ipc.CallRunner(105): 
callId: 218 service: ClientService methodName: Get size: 137 connection: 
127.0.0.1:42338 executing as root1
2018-09-13 22:21:15,515 TRACE 
[RpcServer.FifoWFPBQ.default.handler=3,queue=0,port=35811] ipc.RpcServer(2356): 
callId: 218 service: ClientService methodName: Get size: 137 connection: 
127.0.0.1:42338 param: region= 
testHTableMultiplexer_1,,1536857451720.304d914b641a738624937c7f9b4d684f., 
row=\x00\x00\x00\xC4 connection: 127.0.0.1:42338, response result { 
associated_cell_count: 1 stale: false } queueTime: 0 processingTime: 0 
totalTime: 0
2018-09-13 22:21:15,516 TRACE 
[RpcServer.FifoWFPBQ.default.handler=3,queue=0,port=35811] 
io.BoundedByteBufferPool(106): runningAverage=16384, totalCapacity=0, count=0, 
allocations=1
2018-09-13 22:21:15,516 TRACE [main] ipc.AbstractRpcClient(236): Call: Get, 
callTime: 2ms
2018-09-13 22:21:15,516 TRACE [main] client.ClientScanner(122): Scan 
table=hbase:meta, 
startRow=testHTableMultiplexer_1,\x00\x00\x00\xC5,99
2018-09-13 22:21:15,516 TRACE [main] client.ClientSmallReversedScanner(179): 
Advancing internal small scanner to startKey at 
'testHTableMultiplexer_1,\x00\x00\x00\xC5,99'
2018-09-13 22:21:15,517 TRACE [main] client.ZooKeeperRegistry(59): Looking up 
meta region location in ZK, 
connection=org.apache.hadoop.hbase.client.ZooKeeperRegistry@599f571f
{noformat}
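The failure mode can be modeled with a toy cache (assumed structure, not HBase code): an eviction call that receives a null table name drops every cached location for the server instead of a single entry.
{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy model of a per-server region location cache where a null table
// name wipes the whole cache, analogous to the multiplexer put path
// described above.
public class MetaCacheDemo {
    final Map<String, String> locations = new HashMap<>();

    void cacheLocation(String table, String region) {
        locations.put(table, region);
    }

    void clearCache(String tableName) {
        if (tableName == null) {
            locations.clear();          // every cached location is dropped
        } else {
            locations.remove(tableName);
        }
    }

    int size() {
        return locations.size();
    }

    public static void main(String[] args) {
        MetaCacheDemo cache = new MetaCacheDemo();
        cache.cacheLocation("t1", "region-a");
        cache.cacheLocation("t2", "region-b");
        cache.clearCache(null);           // null table name: all entries gone
        System.out.println(cache.size()); // 0
    }
}
{code}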




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-21218) TableStateNotFoundException thrown from RSGroupAdminEndpoint#postCreateTable when creating table

2018-09-27 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-21218.

Resolution: Duplicate

> TableStateNotFoundException thrown from RSGroupAdminEndpoint#postCreateTable 
> when creating table
> 
>
> Key: HBASE-21218
> URL: https://issues.apache.org/jira/browse/HBASE-21218
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21218.master.001.patch
>
>
> Similar to HBASE-19509, I found the following logs in master log when 
> creating table
> {code}
> 2018-09-21 15:14:47,476 ERROR 
> [RpcServer.default.FPBQ.Fifo.handler=296,queue=26,port=16000] 
> master.TableStateManager: Unable to get table t3 state
> org.apache.hadoop.hbase.master.TableStateManager$TableStateNotFoundException: 
> t3
> at 
> org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:215)
> at 
> org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:147)
> at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.isTableDisabled(AssignmentManager.java:344)
> at 
> org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveTables(RSGroupAdminServer.java:412)
> at 
> org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.assignTableToGroup(RSGroupAdminEndpoint.java:471)
> at 
> org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.postCreateTable(RSGroupAdminEndpoint.java:494)
> at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost$12.call(MasterCoprocessorHost.java:335)
> at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost$12.call(MasterCoprocessorHost.java:332)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
> at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost.postCreateTable(MasterCoprocessorHost.java:332)
> at org.apache.hadoop.hbase.master.HMaster$3.run(HMaster.java:1929)
> at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131)
> at 
> org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1911)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:628)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {code}
>   
> In fact, we only need to change the information of rsgroup without moving 
> region.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21243) Correct java-doc for the method RpcServer.getRemoteAddress()

2018-09-27 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21243:
--

 Summary: Correct java-doc for the method 
RpcServer.getRemoteAddress()
 Key: HBASE-21243
 URL: https://issues.apache.org/jira/browse/HBASE-21243
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0, 3.0.0
Reporter: Nihal Jain


Correct the java-doc for the method {{RpcServer.getRemoteAddress()}}.
 Currently it looks like this:
{code:java}
  /**
   * @return Address of remote client if a request is ongoing, else null
   */
  public static Optional<InetAddress> getRemoteAddress() {
return getCurrentCall().map(RpcCall::getRemoteAddress);
  }
{code}
Contrary to the doc, the method will never return null; rather, it may return 
an empty Optional.
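The distinction can be shown with plain java.util.Optional (an illustration, not the HBase code path): mapping over an empty Optional yields another empty Optional, never null.
{code:java}
import java.util.Optional;

// Mapping over an empty Optional never produces null; it produces an
// empty Optional, which is what getRemoteAddress() returns when no
// request is ongoing.
public class OptionalDemo {
    static Optional<String> getCurrentCall() {
        return Optional.empty(); // no request ongoing
    }

    static Optional<Integer> getRemoteAddressLength() {
        return getCurrentCall().map(String::length);
    }

    public static void main(String[] args) {
        Optional<Integer> result = getRemoteAddressLength();
        System.out.println(result != null);     // true: never null
        System.out.println(result.isPresent()); // false: empty instead
    }
}
{code}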



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HBASE-20472) InfoServer doesnot honour any acl set by the admin

2018-10-03 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain reopened HBASE-20472:


> InfoServer doesnot honour any acl set by the admin
> --
>
> Key: HBASE-20472
> URL: https://issues.apache.org/jira/browse/HBASE-20472
> Project: HBase
>  Issue Type: Bug
>  Components: security, UI
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HBASE-20472.master.001.patch
>
>
> The adminsAcl property can be used to restrict access to certain sections of 
> the web UI only to a particular set of users/groups. But in hbase,  adminAcl 
> variable for InfoServer is always null, rendering it to not honour any acl 
> set by the admin. In fact I could not find any property in hbase to specify 
> acl list for web server.
> *Analysis*:
>  * *InfoSever* object forgets(?) to set any *adminAcl* in the builder object 
> for http server.
> {code:java}
> public InfoServer(String name, String bindAddress, int port, boolean findPort,
> final Configuration c) {
> .
> .
>
> HttpServer.Builder builder =
> new org.apache.hadoop.hbase.http.HttpServer.Builder();
> .
> .
> this.httpServer = builder.build();
> }{code}
> [See InfoServer 
> constructor|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/InfoServer.java#L55]
>  * The http server retrieves a null value and sets it as adminsAcl, which is 
> passed to the *createWebAppContext()* method
> {code:java}
> private HttpServer(final Builder b) throws IOException {
> .
> .
> .
> this.adminsAcl = b.adminsAcl;
> this.webAppContext = createWebAppContext(b.name, b.conf, adminsAcl, 
> appDir);
> 
> .
> .
> }{code}
> [See L527 
> HttpServer.java|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java#L527]
>  * This method next sets the *ADMINS_ACL* attribute for the servlet context 
> to *null*
> {code:java}
> private static WebAppContext createWebAppContext(String name,
> Configuration conf, AccessControlList adminsAcl, final String appDir) {
> WebAppContext ctx = new WebAppContext();
> .
> .
> ctx.getServletContext().setAttribute(ADMINS_ACL, adminsAcl);
> .
> .
> }
> {code}
>  * Now any page having *HttpServer.hasAdministratorAccess*() will allow 
> access to everyone, making this check useless. 
> {code:java}
> @Override
> public void doGet(HttpServletRequest request, HttpServletResponse response
> ) throws ServletException, IOException {
> // Do the authorization
> if (!HttpServer.hasAdministratorAccess(getServletContext(), request,
> response)) {
> return;
> }
> .
> .
> }{code}
> [For example See L104 
> LogLevel.java|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java#L104]
>  * *hasAdministratorAccess()* performs the following check and always 
> returns true, since *ADMINS_ACL* is always *null*
> {code:java}
> public static boolean hasAdministratorAccess(
> ServletContext servletContext, HttpServletRequest request,
> HttpServletResponse response) throws IOException {
> .
> .
> if (servletContext.getAttribute(ADMINS_ACL) != null &&
> !userHasAdministratorAccess(servletContext, remoteUser)) {
>   response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "User "
>  + remoteUser + " is unauthorized to access this page.");
>return false;
> }
> return true;
> }{code}
> [See line 1196 in 
> HttpServer|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java#L1196]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20472) InfoServer does not honour any acl set by the admin

2018-10-03 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-20472.

Resolution: Duplicate

> InfoServer does not honour any acl set by the admin
> --
>
> Key: HBASE-20472
> URL: https://issues.apache.org/jira/browse/HBASE-20472
> Project: HBase
>  Issue Type: Bug
>  Components: security, UI
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HBASE-20472.master.001.patch
>
>
> The adminsAcl property can be used to restrict access to certain sections of 
> the web UI to a particular set of users/groups. But in HBase, the adminsAcl 
> variable for InfoServer is always null, so it does not honour any ACL set by 
> the admin. In fact, I could not find any property in HBase for specifying an 
> ACL list for the web server.
> *Analysis*:
>  * The *InfoServer* object never sets an *adminsAcl* on the builder object 
> for the HTTP server.
> {code:java}
> public InfoServer(String name, String bindAddress, int port, boolean findPort,
> final Configuration c) {
> .
> .
>
> HttpServer.Builder builder =
> new org.apache.hadoop.hbase.http.HttpServer.Builder();
> .
> .
> this.httpServer = builder.build();
> }{code}
> [See InfoServer 
> constructor|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/InfoServer.java#L55]
>  * The HTTP server retrieves this null value, stores it as adminsAcl, and 
> passes it to the *createWebAppContext*() method
> {code:java}
> private HttpServer(final Builder b) throws IOException {
> .
> .
> .
> this.adminsAcl = b.adminsAcl;
> this.webAppContext = createWebAppContext(b.name, b.conf, adminsAcl, 
> appDir);
> 
> .
> .
> }{code}
> [See L527 
> HttpServer.java|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java#L527]
>  * This method then sets the *ADMINS_ACL* attribute of the servlet context 
> to *null*
> {code:java}
> private static WebAppContext createWebAppContext(String name,
> Configuration conf, AccessControlList adminsAcl, final String appDir) {
> WebAppContext ctx = new WebAppContext();
> .
> .
> ctx.getServletContext().setAttribute(ADMINS_ACL, adminsAcl);
> .
> .
> }
> {code}
>  * Now any page guarded by *HttpServer.hasAdministratorAccess*() will allow 
> access to everyone, making the check useless. 
> {code:java}
> @Override
> public void doGet(HttpServletRequest request, HttpServletResponse response
> ) throws ServletException, IOException {
> // Do the authorization
> if (!HttpServer.hasAdministratorAccess(getServletContext(), request,
> response)) {
> return;
> }
> .
> .
> }{code}
> [For example See L104 
> LogLevel.java|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java#L104]
>  * *hasAdministratorAccess()* performs the following check and always 
> returns true, since *ADMINS_ACL* is always *null*
> {code:java}
> public static boolean hasAdministratorAccess(
> ServletContext servletContext, HttpServletRequest request,
> HttpServletResponse response) throws IOException {
> .
> .
> if (servletContext.getAttribute(ADMINS_ACL) != null &&
> !userHasAdministratorAccess(servletContext, remoteUser)) {
>   response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "User "
>  + remoteUser + " is unauthorized to access this page.");
>return false;
> }
> return true;
> }{code}
> [See line 1196 in 
> HttpServer|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java#L1196]
>  
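The broken chain above can be sketched end to end in a few lines. This is a minimal, self-contained illustration of the direction of a fix (building an AccessControlList from a configuration property and handing it to the HTTP server builder, so ADMINS_ACL is no longer null). It is not HBase's actual API: the property name `hbase.http.admins.acl` and the simplified `AccessControlList` class are hypothetical stand-ins.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical, simplified stand-in for
// org.apache.hadoop.security.authorize.AccessControlList.
class AccessControlList {
  private final Set<String> users;
  AccessControlList(String aclSpec) {
    users = new HashSet<>(Arrays.asList(aclSpec.split(",")));
  }
  boolean isUserAllowed(String user) { return users.contains(user); }
}

public class InfoServerAclSketch {
  // Hypothetical property name, for illustration only.
  static final String ADMIN_ACL_KEY = "hbase.http.admins.acl";

  // What InfoServer could do before calling builder.build(): construct the
  // ACL from configuration instead of leaving builder.adminsAcl null.
  static AccessControlList buildAdminsAcl(Map<String, String> conf) {
    String spec = conf.get(ADMIN_ACL_KEY);
    // A missing property keeps today's open behaviour (null ACL).
    return spec == null ? null : new AccessControlList(spec);
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put(ADMIN_ACL_KEY, "alice,bob");
    AccessControlList acl = buildAdminsAcl(conf);
    System.out.println(acl.isUserAllowed("alice"));   // true
    System.out.println(acl.isUserAllowed("mallory")); // false
  }
}
```

Once a non-null ACL reaches the servlet context, the `hasAdministratorAccess()` check quoted above would actually enforce it instead of short-circuiting on null.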





[jira] [Created] (HBASE-21297) ModifyTableProcedure can throw TNDE instead of IOE in case of REGION_REPLICATION change

2018-10-11 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21297:
--

 Summary: ModifyTableProcedure can throw TNDE instead of IOE in 
case of REGION_REPLICATION change
 Key: HBASE-21297
 URL: https://issues.apache.org/jira/browse/HBASE-21297
 Project: HBase
  Issue Type: Improvement
Reporter: Nihal Jain
Assignee: Nihal Jain


Currently {{ModifyTableProcedure}} throws an {{IOException}} (See 
[ModifyTableProcedure.java#L252|https://github.com/apache/hbase/blob/924d183ba0e67b975e998f6006c993f457e03c20/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ModifyTableProcedure.java#L252])
 when a user tries to modify REGION_REPLICATION for an enabled table. Instead, 
it can throw a more specific {{TableNotDisabledException}}.
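The proposed change can be sketched as follows. This is a self-contained illustration, with `TableNotDisabledException` stubbed in place of the real org.apache.hadoop.hbase.TableNotDisabledException (which already extends IOException in the code base) and the pre-check reduced to its essentials:

```java
import java.io.IOException;

// Stub standing in for org.apache.hadoop.hbase.TableNotDisabledException,
// which already extends IOException in the real code base.
class TableNotDisabledException extends IOException {
  TableNotDisabledException(String msg) { super(msg); }
}

public class ModifyTableCheck {
  // Simplified version of the pre-check: throw the more specific type when
  // REGION_REPLICATION is changed while the table is still enabled.
  static void checkRegionReplicationChange(boolean tableEnabled,
      int oldReplication, int newReplication) throws IOException {
    if (oldReplication != newReplication && tableEnabled) {
      throw new TableNotDisabledException(
          "REGION_REPLICATION change requires the table to be disabled");
    }
  }

  public static void main(String[] args) {
    try {
      checkRegionReplicationChange(true, 1, 3);
    } catch (IOException e) {
      // Existing callers catching IOException keep working; new callers can
      // catch the specific TableNotDisabledException.
      System.out.println(e.getClass().getSimpleName());
    }
  }
}
```

Since the specific exception subclasses IOException, the change stays compatible with existing callers.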





[jira] [Created] (HBASE-21404) Master/RS navbar active state does not work

2018-10-29 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21404:
--

 Summary: Master/RS navbar active state does not work
 Key: HBASE-21404
 URL: https://issues.apache.org/jira/browse/HBASE-21404
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: Nihal Jain
 Attachments: master_after.png, master_before.png







[jira] [Created] (HBASE-21475) Put mutation (having TTL set) added via co-processor is retrieved even after TTL expires

2018-11-13 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21475:
--

 Summary: Put mutation (having TTL set) added via co-processor is 
retrieved even after TTL expires
 Key: HBASE-21475
 URL: https://issues.apache.org/jira/browse/HBASE-21475
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 2.1.1, 3.0.0
Reporter: Nihal Jain
Assignee: Nihal Jain








[jira] [Created] (HBASE-21621) Reversed scan does not return expected number of rows

2018-12-19 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21621:
--

 Summary: Reversed scan does not return expected number of rows
 Key: HBASE-21621
 URL: https://issues.apache.org/jira/browse/HBASE-21621
 Project: HBase
  Issue Type: Bug
  Components: scan
Affects Versions: 2.1.1, 3.0.0
Reporter: Nihal Jain








[jira] [Created] (HBASE-21629) draining_servers.rb is broken

2018-12-21 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21629:
--

 Summary: draining_servers.rb is broken
 Key: HBASE-21629
 URL: https://issues.apache.org/jira/browse/HBASE-21629
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.1.1, 3.0.0, 2.1.2
Reporter: Nihal Jain
Assignee: Nihal Jain


1) Handle missing methods and implementation changes in core code.
 * In 
[ZKWatcher.java|https://github.com/apache/hbase/blob/12786f80c14c6f2c3c111a55bbf431fb2e81e828/hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKWatcher.java#L79],
 the variable znodePaths has been changed from public to private (see 
HBASE-19761). The script directly references znodePaths, which now results in 
an exception.
 * Also, the joinZNode method has been moved to ZNodePaths and removed from 
ZKUtil (see HBASE-19200). The script relies on the non-existent 
ZKUtil.joinZNode().

2) Close the ZK watcher while listing draining servers: the list functionality 
does not close the zkw instance.





[jira] [Created] (HBASE-21636) Enhance the shell scan command to support missing scanner specifications like ReadType, IsolationLevel etc.

2018-12-23 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21636:
--

 Summary: Enhance the shell scan command to support missing scanner 
specifications like ReadType, IsolationLevel etc.
 Key: HBASE-21636
 URL: https://issues.apache.org/jira/browse/HBASE-21636
 Project: HBase
  Issue Type: Improvement
  Components: shell
Affects Versions: 2.0.0, 3.0.0, 2.1.2
Reporter: Nihal Jain
Assignee: Nihal Jain








[jira] [Created] (HBASE-21644) Modify table procedure runs infinitely for a table having region replication > 1

2018-12-26 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21644:
--

 Summary: Modify table procedure runs infinitely for a table having 
region replication > 1
 Key: HBASE-21644
 URL: https://issues.apache.org/jira/browse/HBASE-21644
 Project: HBase
  Issue Type: Bug
  Components: Admin
Affects Versions: 2.1.1, 3.0.0, 2.1.2
Reporter: Nihal Jain
Assignee: Nihal Jain








[jira] [Created] (HBASE-21645) Perform sanity check and disallow table creation/modification with region replication < 1

2018-12-26 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21645:
--

 Summary: Perform sanity check and disallow table 
creation/modification with region replication < 1
 Key: HBASE-21645
 URL: https://issues.apache.org/jira/browse/HBASE-21645
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.1.1, 3.0.0, 1.5.0, 2.1.2
Reporter: Nihal Jain
Assignee: Nihal Jain


We should perform a sanity check and disallow creating a table with region 
replication < 1, or modifying an existing table with a new region replication 
value < 1.
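A minimal sketch of such a check; this is illustrative only, and in HBase the validation would sit alongside the master's other table-descriptor sanity checks (e.g. those gated by hbase.table.sanity.checks):

```java
// Illustrative-only sketch of the proposed sanity check.
public class RegionReplicationSanityCheck {
  static void checkRegionReplication(int regionReplication) {
    // Region replication counts the primary, so anything below 1 is invalid.
    if (regionReplication < 1) {
      throw new IllegalArgumentException(
          "REGION_REPLICATION must be >= 1, got " + regionReplication);
    }
  }

  public static void main(String[] args) {
    checkRegionReplication(3); // a valid value passes silently
    try {
      checkRegionReplication(0); // rejected at create/modify time
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```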





[jira] [Created] (HBASE-21672) Allow skipping HDFS block distribution computation

2019-01-04 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21672:
--

 Summary: Allow skipping HDFS block distribution computation
 Key: HBASE-21672
 URL: https://issues.apache.org/jira/browse/HBASE-21672
 Project: HBase
  Issue Type: Improvement
Reporter: Nihal Jain
Assignee: Nihal Jain


We should have a configuration to skip HDFS block distribution calculation in 
HBase. For example, on file systems that do not surface locality, such as S3, 
calculating the block distribution is not useful, so we should have a way to 
skip it.

For this, we can provide a new configuration key, say 
{{hbase.block.distribution.skip.computation}}, which would be {{false}} by 
default. Users on file systems such as S3 may choose to set it to {{true}}, 
thus skipping block distribution computation.
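How the proposed key could gate the computation, as a minimal sketch; only the key name comes from the proposal above, and Hadoop's Configuration is stood in for by a plain Map so the example is self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only: the real change would guard the HDFSBlocksDistribution
// computation in the region/store code paths.
public class BlockDistributionSketch {
  // Proposed key from the description above; false (i.e. compute) by default.
  static final String SKIP_KEY = "hbase.block.distribution.skip.computation";

  static boolean shouldComputeBlockDistribution(Map<String, String> conf) {
    // Keep computing the distribution unless the user explicitly opts out.
    return !Boolean.parseBoolean(conf.getOrDefault(SKIP_KEY, "false"));
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    System.out.println(shouldComputeBlockDistribution(conf)); // true by default

    conf.put(SKIP_KEY, "true"); // e.g. on S3, where locality is meaningless
    System.out.println(shouldComputeBlockDistribution(conf)); // false: skip
  }
}
```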





[jira] [Created] (HBASE-21681) Procedure V2 based enableTableReplication

2019-01-06 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21681:
--

 Summary: Procedure V2 based enableTableReplication 
 Key: HBASE-21681
 URL: https://issues.apache.org/jira/browse/HBASE-21681
 Project: HBase
  Issue Type: Improvement
  Components: Admin, proc-v2
Reporter: Nihal Jain
Assignee: Nihal Jain


We should take advantage of the procedure v2 framework and reimplement/refactor 
the {{enableTableReplication()}} API to make it more robust. 
Currently it does not handle failover scenarios, even though it may perform 
many create-table ops (and thus run for quite a while) when there are many 
peers.





[jira] [Created] (HBASE-21749) RS UI may throw NPE and make rs-status page inaccessible with multiwal and replication

2019-01-20 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21749:
--

 Summary: RS UI may throw NPE and make rs-status page inaccessible 
with multiwal and replication
 Key: HBASE-21749
 URL: https://issues.apache.org/jira/browse/HBASE-21749
 Project: HBase
  Issue Type: Bug
  Components: Replication, UI
Reporter: Nihal Jain
Assignee: Nihal Jain


Sometimes the RS UI fails to open because of an NPE; this happens because 
{{shipper.getCurrentPath()}} may return null.

We should have a null check @ 
[ReplicationSource.java#L331|https://github.com/apache/hbase/blob/a2f6768acdc30b789c7cb8482b9f4352803f60a1/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L331]
 
{code:java}
  Path currentPath = shipper.getCurrentPath();
  try {
fileSize = getFileSize(currentPath);
  } catch (IOException e) {
LOG.warn("Ignore the exception as the file size of HLog only affects 
the web ui", e);
fileSize = -1;
  }{code}
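The guard could look like the following self-contained sketch; `getFileSize` here is a stand-in for the private helper in ReplicationSource, and returning -1 mirrors the existing IOException branch quoted above:

```java
// Self-contained illustration of the proposed null guard. In the real code,
// currentPath comes from shipper.getCurrentPath(), which may return null
// before the shipper has picked up its first WAL.
public class CurrentPathGuard {
  // Stand-in for ReplicationSource's private file-size helper: here we just
  // pretend the path length is the file size so the sketch is runnable.
  static long getFileSize(String path) {
    return path.length();
  }

  static long safeFileSize(String currentPath) {
    if (currentPath == null) {
      return -1; // no WAL yet: report unknown size instead of hitting an NPE
    }
    try {
      return getFileSize(currentPath);
    } catch (RuntimeException e) {
      return -1; // the size only feeds the web UI, so failures are non-fatal
    }
  }

  public static void main(String[] args) {
    System.out.println(safeFileSize(null));        // -1, no NPE
    System.out.println(safeFileSize("wal-00001")); // 9
  }
}
```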
 

!0b8e95c7-6715-42bf-88d2-b2edc9215022.png!





[jira] [Created] (HBASE-21755) RS aborts while performing replication with wal dir on s3, root dir on hdfs

2019-01-22 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21755:
--

 Summary: RS aborts while performing replication with wal dir on 
s3, root dir on hdfs
 Key: HBASE-21755
 URL: https://issues.apache.org/jira/browse/HBASE-21755
 Project: HBase
  Issue Type: Bug
  Components: Filesystem Integration, Replication
Affects Versions: 2.1.3
Reporter: Nihal Jain
Assignee: Nihal Jain


*Environment/Configuration*
 - _hbase.wal.dir_ : Configured to be on s3
 - _hbase.rootdir_ : Configured to be on hdfs

In a replication scenario, while trying to get the archived log dir (in method 
[WALEntryStream.java#L315|https://github.com/apache/hbase/blob/b0131e19f4b9ced05f501c61596191cb8a86b660/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/WALEntryStream.java#L315]),
 we get the following exception:
{code:java}
2019-01-21 17:43:55,440 ERROR 
[RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2]
 regionserver.ReplicationSource: Unexpected exception in 
RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2
 
currentPath=hdfs://dummy_path/hbase/WALs/host2,2,1548063439555/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594
java.lang.IllegalArgumentException: Wrong FS: 
s3a://xx/hbase128/oldWALs/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594,
 expected: hdfs://dummy_path
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:781)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:246)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1622)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1619)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1634)
at 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:465)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.getArchivedLog(WALEntryStream.java:319)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:404)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset(WALEntryStream.java:161)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:148)
2019-01-21 17:43:55,444 ERROR 
[RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2]
 regionserver.HRegionServer: * ABORTING region server 
host2,2,1548063439555: Unexpected exception in 
RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2
 *
java.lang.IllegalArgumentException: Wrong FS: 
s3a://xx/hbase128/oldWALs/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594,
 expected: hdfs://dummy_path
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:781)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:246)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1622)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1619)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1634)
at 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:465)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.getArchivedLog(WALEntryStream.java:319)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:404)
at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset(WALEntryStream.java:161)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:148)


{code}
 
Current code is:
{code}
  private Path getArchivedLog(Path path) throws IOException {
Path rootDir = FSUtils.getRootDir(conf);

// Try found the log in old dir
Path oldLogDir = new

[jira] [Created] (HBASE-21756) Backport HBASE-21279 (Split TestAdminShell into several tests) to branch-2

2019-01-22 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21756:
--

 Summary: Backport HBASE-21279 (Split TestAdminShell into several 
tests) to branch-2
 Key: HBASE-21756
 URL: https://issues.apache.org/jira/browse/HBASE-21756
 Project: HBase
  Issue Type: Test
Reporter: Nihal Jain
Assignee: Nihal Jain








[jira] [Created] (HBASE-21795) Client application may get stuck (time bound) if a table modify op is called immediately after split op

2019-01-28 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21795:
--

 Summary: Client application may get stuck (time bound) if a table 
modify op is called immediately after split op
 Key: HBASE-21795
 URL: https://issues.apache.org/jira/browse/HBASE-21795
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


*Steps:*
 * Create a table
 * Split the table
 * Modify the table immediately after splitting

*Expected*: 
The modify table procedure completes and control returns to the client.

*Actual:* 
The modify table procedure completes, but control does not return to the 
client until the catalog janitor runs and deletes the parent region, or the 
future times out.





[jira] [Resolved] (HBASE-21755) RS aborts while performing replication with wal dir on hdfs, root dir on s3

2019-01-28 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-21755.

Resolution: Duplicate

> RS aborts while performing replication with wal dir on hdfs, root dir on s3
> ---
>
> Key: HBASE-21755
> URL: https://issues.apache.org/jira/browse/HBASE-21755
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration, Replication, wal
>Affects Versions: 1.5.0, 2.1.3
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Critical
>  Labels: s3
>
> *Environment/Configuration*
>  - _hbase.wal.dir_ : Configured to be on hdfs
>  - _hbase.rootdir_ : Configured to be on s3
> In a replication scenario, while trying to get the archived log dir (in 
> method 
> [WALEntryStream.java#L314|https://github.com/apache/hbase/blob/da92b3e0061a7c67aa9a3e403d68f3b56bf59370/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/WALEntryStream.java#L314]),
>  we get the following exception:
> {code:java}
> 2019-01-21 17:43:55,440 ERROR 
> [RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2]
>  regionserver.ReplicationSource: Unexpected exception in 
> RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2
>  
> currentPath=hdfs://dummy_path/hbase/WALs/host2,2,1548063439555/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594
> java.lang.IllegalArgumentException: Wrong FS: 
> s3a://xx/hbase128/oldWALs/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594,
>  expected: hdfs://dummy_path
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:781)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:246)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1622)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1619)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1634)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:465)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.getArchivedLog(WALEntryStream.java:319)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:404)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset(WALEntryStream.java:161)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:148)
> 2019-01-21 17:43:55,444 ERROR 
> [RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2]
>  regionserver.HRegionServer: * ABORTING region server 
> host2,2,1548063439555: Unexpected exception in 
> RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2
>  *
> java.lang.IllegalArgumentException: Wrong FS: 
> s3a://xx/hbase128/oldWALs/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594,
>  expected: hdfs://dummy_path
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:781)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:246)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1622)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1619)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1634)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:465)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.getArchivedLog(WALEntryStream.java:319)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:404)
>   at 
> org.apache.hadoop.hbase.replication.regionserver

[jira] [Reopened] (HBASE-12947) Replicating DDL statements like create from one cluster to another

2019-01-31 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain reopened HBASE-12947:


> Replicating DDL statements like create  from one cluster to another
> ---
>
> Key: HBASE-12947
> URL: https://issues.apache.org/jira/browse/HBASE-12947
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Prabhu Joseph
>Priority: Critical
>
> Problem:
>   When tables are created dynamically in an HBase cluster, the Replication 
> feature can't be used because the new table does not exist in the peer 
> cluster. To use replication, we need to create the same table in the peer 
> cluster as well.
> Having an API to replicate the create table statement at the peer cluster 
> would be helpful in such cases.
> Solution:
> create 'table','cf',replication => true , peerFlag => true
> if peerFlag = true, the table with the column family has to be created at 
> the peer cluster.
> Special cases, like also enabling replication at the peer cluster for cyclic 
> replication, have to be considered.





[jira] [Resolved] (HBASE-12947) Replicating DDL statements like create from one cluster to another

2019-01-31 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-12947.

Resolution: Duplicate

> Replicating DDL statements like create  from one cluster to another
> ---
>
> Key: HBASE-12947
> URL: https://issues.apache.org/jira/browse/HBASE-12947
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Prabhu Joseph
>Priority: Critical
>
> Problem:
>   When tables are created dynamically in an HBase cluster, the Replication 
> feature can't be used because the new table does not exist in the peer 
> cluster. To use replication, we need to create the same table in the peer 
> cluster as well.
> Having an API to replicate the create table statement at the peer cluster 
> would be helpful in such cases.
> Solution:
> create 'table','cf',replication => true , peerFlag => true
> if peerFlag = true, the table with the column family has to be created at 
> the peer cluster.
> Special cases, like also enabling replication at the peer cluster for cyclic 
> replication, have to be considered.





[jira] [Created] (HBASE-21830) Backport HBASE-20577 (Make Log Level page design consistent with the design of other pages in UI) to branch-2

2019-02-02 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21830:
--

 Summary: Backport HBASE-20577 (Make Log Level page design 
consistent with the design of other pages in UI) to branch-2
 Key: HBASE-21830
 URL: https://issues.apache.org/jira/browse/HBASE-21830
 Project: HBase
  Issue Type: Bug
  Components: UI, Usability
Reporter: Nihal Jain
Assignee: Nihal Jain








[jira] [Created] (HBASE-21881) Use Forbidden API Checker to prevent future usages of forbidden api's

2019-02-12 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21881:
--

 Summary: Use Forbidden API Checker to prevent future usages of 
forbidden api's
 Key: HBASE-21881
 URL: https://issues.apache.org/jira/browse/HBASE-21881
 Project: HBase
  Issue Type: Improvement
  Components: build
Reporter: Nihal Jain








[jira] [Resolved] (HBASE-21891) New space quota policy doesn't take effect if quota policy is changed after violation

2019-03-26 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-21891.

Resolution: Duplicate

Based on [~a00408367]'s 
[testing|https://issues.apache.org/jira/browse/HBASE-20662?focusedCommentId=16785275&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16785275],
 resolving this as a duplicate. This issue is fixed by HBASE-20662.

> New space quota policy doesn't take effect if quota policy is changed after 
> violation
> -
>
> Key: HBASE-21891
> URL: https://issues.apache.org/jira/browse/HBASE-21891
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Priority: Minor
>
> *Steps to reproduce*
>  1: set_quota TYPE => SPACE, TABLE => 'test25', LIMIT => '2M', POLICY => 
> NO_WRITES
> 2: ./hbase pe --table="test25" --nomapred --rows=300  sequentialWrite  10
> 3: Observe that after some time data usage is 3 MB and the policy is in violation
> 4: now try to insert some data again in the table and observe that operation 
> fails due to NoWritesViolationPolicyEnforcement 
> 5: Now change the quota policy 
>  set_quota TYPE => SPACE, TABLE => 'test25', LIMIT => '2M', POLICY => 
> NO_WRITES_COMPACTIONS
> 6: Now again try to insert data once new policy takes effect
> 7: Observe that the operation still fails, but because of the old policy and 
> not the new one.
>  





[jira] [Reopened] (HBASE-20662) Increasing space quota on a violated table does not remove SpaceViolationPolicy.DISABLE enforcement

2019-03-27 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain reopened HBASE-20662:


> Increasing space quota on a violated table does not remove 
> SpaceViolationPolicy.DISABLE enforcement
> ---
>
> Key: HBASE-20662
> URL: https://issues.apache.org/jira/browse/HBASE-20662
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-20662.branch-2.1.001.patch, 
> HBASE-20662.master.001.patch, HBASE-20662.master.002.patch, 
> HBASE-20662.master.003.patch, HBASE-20662.master.004.patch, 
> HBASE-20662.master.004.patch, HBASE-20662.master.005.patch, 
> HBASE-20662.master.006.patch, HBASE-20662.master.007.patch, 
> HBASE-20662.master.008.patch, HBASE-20662.master.008.patch, 
> HBASE-20662.master.009.patch, HBASE-20662.master.009.patch, 
> HBASE-20662.master.010.patch, screenshot.png
>
>
> *Steps to reproduce*
>  * Create a table and set quota with {{SpaceViolationPolicy.DISABLE}} having 
> limit say 2MB
>  * Now put rows until space quota is violated and table gets disabled
>  * Next, increase space quota with limit say 4MB on the table
>  * Now try putting a row into the table
> {code:java}
>  private void testSetQuotaThenViolateAndFinallyIncreaseQuota() throws 
> Exception {
> SpaceViolationPolicy policy = SpaceViolationPolicy.DISABLE;
> Put put = new Put(Bytes.toBytes("to_reject"));
> put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), 
> Bytes.toBytes("to"),
>   Bytes.toBytes("reject"));
> // Do puts until we violate space policy
> final TableName tn = writeUntilViolationAndVerifyViolation(policy, put);
> // Now, increase limit
> setQuotaLimit(tn, policy, 4L);
> // Put some row now: should not violate as quota limit increased
> verifyNoViolation(policy, tn, put);
>   }
> {code}
> *Expected*
> We should be able to put data as long as newly set quota limit is not reached
> *Actual*
> We fail to put any new row even after increasing limit
> *Root cause*
> Increasing the quota on a violated table triggers the table to be enabled, 
> but since the table is already in violation, the system does not allow it to 
> be enabled (presumably assuming that a user is trying to enable it)
> *Relevant exception trace*
> {noformat}
> 2018-05-31 00:34:27,563 INFO  [regionserver/root1-ThinkPad-T440p:0.Chore.1] 
> client.HBaseAdmin$14(844): Started enable of 
> testSetQuotaAndThenIncreaseQuotaWithDisable0
> 2018-05-31 00:34:27,571 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=42525] 
> ipc.CallRunner(142): callId: 11 service: MasterService methodName: 
> EnableTable size: 104 connection: 127.0.0.1:38030 deadline: 1527707127568, 
> exception=org.apache.hadoop.hbase.security.AccessDeniedException: Enabling 
> the table 'testSetQuotaAndThenIncreaseQuotaWithDisable0' is disallowed due to 
> a violated space quota.
> 2018-05-31 00:34:27,571 ERROR [regionserver/root1-ThinkPad-T440p:0.Chore.1] 
> quotas.RegionServerSpaceQuotaManager(210): Failed to disable space violation 
> policy for testSetQuotaAndThenIncreaseQuotaWithDisable0. This table will 
> remain in violation.
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Enabling the table 
> 'testSetQuotaAndThenIncreaseQuotaWithDisable0' is disallowed due to a 
> violated space quota.
>   at org.apache.hadoop.hbase.master.HMaster$6.run(HMaster.java:2275)
>   at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131)
>   at org.apache.hadoop.hbase.master.HMaster.enableTable(HMaster.java:2258)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.enableTable(MasterRpcServices.java:725)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasExceptio

[jira] [Created] (HBASE-22129) Rewrite TestSpaceQuotas as parameterized tests

2019-03-29 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-22129:
--

 Summary: Rewrite TestSpaceQuotas as parameterized tests
 Key: HBASE-22129
 URL: https://issues.apache.org/jira/browse/HBASE-22129
 Project: HBase
  Issue Type: Improvement
Reporter: Nihal Jain
Assignee: Nihal Jain


In {{TestSpaceQuotas}}, for a particular test scenario we have a new method for 
each quota type. This calls for rewriting the tests as {{Parameterized}} tests. 

In this Jira I plan to split {{TestSpaceQuotas}} into:
 * *{{SpaceQuotasTestBase}}*: Base class for tests
 * *{{TestSpaceQuotas}}*: Non-parameterized tests
 * *{{TestSpaceQuotasOnTables}}*: Parameterized table space quota tests
 * *{{TestSpaceQuotasOnNamespaces}}*: Parameterized namespace space quota tests

Mostly need to do what was done in [HBASE-20662 Patch 2|#file-9].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-27811) Enable cache control for logs endpoint and set max age as 0

2023-05-28 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-27811.

Fix Version/s: 3.0.0-alpha-4
 Hadoop Flags: Reviewed
   Resolution: Fixed

Will reopen if a backport Jira is raised.

> Enable cache control for logs endpoint and set max age as 0
> ---
>
> Key: HBASE-27811
> URL: https://issues.apache.org/jira/browse/HBASE-27811
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yash Dodeja
>Assignee: Yash Dodeja
>Priority: Minor
> Fix For: 3.0.0-alpha-4
>
>
> Not setting the proper header values may cause browsers to store pages within 
> their respective caches. On public, shared, or any other non-private 
> computers, a malicious person may search through the browser cache to locate 
> sensitive information cached during another user's session.
> The /logs endpoint contains sensitive information that an attacker can exploit.
> Any page with sensitive information needs to have the following headers in 
> response:
> Cache-Control: no-cache, no-store, max-age=0
> Pragma: no-cache
> Expires: -1
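One way such headers could be enforced is a handler or filter that stamps every response. Below is a minimal self-contained sketch using the JDK's built-in HTTP server as a stand-in for the /logs endpoint; the `NoCacheDemo` class and its endpoint are illustrative only, not the actual HBase servlet code:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.LinkedHashMap;
import java.util.Map;

public class NoCacheDemo {

    // The headers the issue asks for on any page with sensitive information.
    static Map<String, String> noCacheHeaders() {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("Cache-Control", "no-cache, no-store, max-age=0");
        h.put("Pragma", "no-cache");
        h.put("Expires", "-1");
        return h;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in server for the /logs endpoint (not HBase's actual servlet).
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/logs", exchange -> {
            // Headers must be set before sendResponseHeaders().
            noCacheHeaders().forEach((k, v) -> exchange.getResponseHeaders().set(k, v));
            byte[] body = "log output".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/logs").openConnection();
            System.out.println("Cache-Control: " + conn.getHeaderField("Cache-Control"));
            System.out.println("Pragma: " + conn.getHeaderField("Pragma"));
            System.out.println("Expires: " + conn.getHeaderField("Expires"));
        } finally {
            server.stop(0);
        }
    }
}
```

Browsers seeing these headers will neither cache the page nor serve it from a shared cache after the session ends.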



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27961) [HBCK2] Running assigns/unassigns command with large number of files/regions throws CallTimeoutException

2023-07-04 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-27961:
--

 Summary: [HBCK2] Running assigns/unassigns command with large 
number of files/regions throws CallTimeoutException
 Key: HBASE-27961
 URL: https://issues.apache.org/jira/browse/HBASE-27961
 Project: HBase
  Issue Type: Bug
  Components: hbck2
Reporter: Nihal Jain
Assignee: Nihal Jain






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27966) HBase Master/RS JVM metrics populated incorrectly

2023-07-07 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-27966:
--

 Summary: HBase Master/RS JVM metrics populated incorrectly
 Key: HBASE-27966
 URL: https://issues.apache.org/jira/browse/HBASE-27966
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.0.0-alpha-4
Reporter: Nihal Jain
Assignee: Nihal Jain


HBase Master/RS JVM metrics are populated incorrectly due to a regression, 
causing the Ambari metrics system to be unable to capture them.

Based on my analysis the issue affects all releases after 2.0.0-alpha-4 and 
seems to be caused by HBASE-18846.

I have been able to compare the JVM metrics across 3 versions of HBase and am 
attaching the results below:

HBase: 1.1.2
{code:java}
{
"name" : "Hadoop:service=HBase,name=JvmMetrics",
"modelerType" : "JvmMetrics",
"tag.Context" : "jvm",
"tag.ProcessName" : "RegionServer",
"tag.SessionId" : "",
"tag.Hostname" : "HOSTNAME",
"MemNonHeapUsedM" : 196.05664,
"MemNonHeapCommittedM" : 347.60547,
"MemNonHeapMaxM" : 4336.0,
"MemHeapUsedM" : 7207.315,
"MemHeapCommittedM" : 66080.0,
"MemHeapMaxM" : 66080.0,
"MemMaxM" : 66080.0,
"GcCount" : 3953,
"GcTimeMillis" : 662520,
"ThreadsNew" : 0,
"ThreadsRunnable" : 214,
"ThreadsBlocked" : 0,
"ThreadsWaiting" : 626,
"ThreadsTimedWaiting" : 78,
"ThreadsTerminated" : 0,
"LogFatal" : 0,
"LogError" : 0,
"LogWarn" : 0,
"LogInfo" : 0
  },
{code}
HBase 2.0.2
{code:java}
{
"name" : "Hadoop:service=HBase,name=JvmMetrics",
"modelerType" : "JvmMetrics",
"tag.Context" : "jvm",
"tag.ProcessName" : "IO",
"tag.SessionId" : "",
"tag.Hostname" : "HOSTNAME",
"MemNonHeapUsedM" : 203.86688,
"MemNonHeapCommittedM" : 740.6953,
"MemNonHeapMaxM" : -1.0,
"MemHeapUsedM" : 14879.477,
"MemHeapCommittedM" : 31744.0,
"MemHeapMaxM" : 31744.0,
"MemMaxM" : 31744.0,
"GcCount" : 75922,
"GcTimeMillis" : 5134691,
"ThreadsNew" : 0,
"ThreadsRunnable" : 90,
"ThreadsBlocked" : 3,
"ThreadsWaiting" : 158,
"ThreadsTimedWaiting" : 36,
"ThreadsTerminated" : 0,
"LogFatal" : 0,
"LogError" : 0,
"LogWarn" : 0,
"LogInfo" : 0
  },
{code}
HBase: 2.5.2
{code:java}
{
  "name": "Hadoop:service=HBase,name=JvmMetrics",
  "modelerType": "JvmMetrics",
  "tag.Context": "jvm",
  "tag.ProcessName": "IO",
  "tag.SessionId": "",
  "tag.Hostname": "HOSTNAME",
  "MemNonHeapUsedM": 192.9798,
  "MemNonHeapCommittedM": 198.4375,
  "MemNonHeapMaxM": -1.0,
  "MemHeapUsedM": 773.23584,
  "MemHeapCommittedM": 1004.0,
  "MemHeapMaxM": 1024.0,
  "MemMaxM": 1024.0,
  "GcCount": 2048,
  "GcTimeMillis": 25440,
  "ThreadsNew": 0,
  "ThreadsRunnable": 22,
  "ThreadsBlocked": 0,
  "ThreadsWaiting": 121,
  "ThreadsTimedWaiting": 49,
  "ThreadsTerminated": 0,
  "LogFatal": 0,
  "LogError": 0,
  "LogWarn": 0,
  "LogInfo": 0
 },
{code}
It can be observed that from 2.0.x onwards the field "tag.ProcessName" is 
populated as "IO" instead of the expected "RegionServer" or "Master".

Ambari relies on this process name field to create metrics such as 
'jvm.RegionServer.JvmMetrics.GcTimeMillis'. See 
[code.|https://github.com/apache/ambari/blob/2ec4b055d99ec84c902da16dd57df91d571b48d6/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/AMSPropertyProvider.java#L722]

But post 2.0.x the field is populated as 'IO', and hence a metric named 
'jvm.JvmMetrics.GcTimeMillis' is created instead of the expected 
'jvm.RegionServer.JvmMetrics.GcTimeMillis', thus mixing the metric up with 
various other metrics coming from the RS, Master, Spark executors etc. running 
on the same host.

*Expected*
Field "tag.ProcessName" should be populated as "RegionServer" or "Master" 
instead of "IO".

*Actual*
Field "tag.ProcessName" is populated as "IO" instead of the expected 
"RegionServer" or "Master", causing incorrect metrics to be published by Ambari, 
thus mixing up all metrics and raising various alerts around JVM metrics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27976) [hbase-operator-tools] Add spotless for hbase-operator-tools

2023-07-15 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-27976:
--

 Summary: [hbase-operator-tools] Add spotless for 
hbase-operator-tools
 Key: HBASE-27976
 URL: https://issues.apache.org/jira/browse/HBASE-27976
 Project: HBase
  Issue Type: Task
Reporter: Nihal Jain
Assignee: Nihal Jain






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27977) [hbase-operator-tools] Add spotless plugin to hbase-operator-tools pom

2023-07-15 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-27977:
--

 Summary: [hbase-operator-tools] Add spotless plugin to 
hbase-operator-tools pom
 Key: HBASE-27977
 URL: https://issues.apache.org/jira/browse/HBASE-27977
 Project: HBase
  Issue Type: Sub-task
Reporter: Nihal Jain
Assignee: Nihal Jain






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27978) [hbase-operator-tools] Add spotless in hbase-operator-tools pre-commit build

2023-07-15 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-27978:
--

 Summary: [hbase-operator-tools] Add spotless in 
hbase-operator-tools pre-commit build
 Key: HBASE-27978
 URL: https://issues.apache.org/jira/browse/HBASE-27978
 Project: HBase
  Issue Type: Sub-task
Reporter: Nihal Jain






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27906) Fix the javadoc for SyncFutureCache

2023-07-16 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-27906.

Fix Version/s: 4.0.0-alpha-1
 Hadoop Flags: Reviewed
   Resolution: Fixed

Thanks for your first contribution [~dimitrios.efthymiou]. The PR has been 
merged to the codebase.

> Fix the javadoc for SyncFutureCache
> ---
>
> Key: HBASE-27906
> URL: https://issues.apache.org/jira/browse/HBASE-27906
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Dimitrios Efthymiou
>Priority: Minor
> Fix For: 4.0.0-alpha-1
>
>
> It does not have any html markers so spotless messed it up...
> We should add html markers so it could keep the format after 'spotless:apply'
> {code}
> /**
>  * A cache of {@link SyncFuture}s. This class supports two methods
>  * {@link SyncFutureCache#getIfPresentOrNew()} and {@link 
> SyncFutureCache#offer()}.
>  * 
>  * Usage pattern:
>  * 
>  * 
>  *   SyncFuture sf = syncFutureCache.getIfPresentOrNew();
>  *   sf.reset(...);
>  *   // Use the sync future
>  *   finally: syncFutureCache.offer(sf);
>  * 
>  * 
>  * Offering the sync future back to the cache makes it eligible for reuse 
> within the same thread
>  * context. Cache keyed by the accessing thread instance and automatically 
> invalidated if it remains
>  * unused for {@link SyncFutureCache#SYNC_FUTURE_INVALIDATION_TIMEOUT_MINS} 
> minutes.
>  */
> {code}
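For illustration, a version of that javadoc with HTML markers added (a sketch of the intent only; the exact markup in the committed fix may differ) that would survive 'spotless:apply' could look like:

```java
/**
 * A cache of {@link SyncFuture}s. This class supports two methods,
 * {@link SyncFutureCache#getIfPresentOrNew()} and {@link SyncFutureCache#offer()}.
 * <p>
 * Usage pattern:
 *
 * <pre>
 *   SyncFuture sf = syncFutureCache.getIfPresentOrNew();
 *   sf.reset(...);
 *   // Use the sync future
 *   finally: syncFutureCache.offer(sf);
 * </pre>
 * <p>
 * Offering the sync future back to the cache makes it eligible for reuse within
 * the same thread context. Cache keyed by the accessing thread instance and
 * automatically invalidated if it remains unused for
 * {@link SyncFutureCache#SYNC_FUTURE_INVALIDATION_TIMEOUT_MINS} minutes.
 */
```

The `<p>` and `<pre>` tags tell spotless (and javadoc) where paragraph breaks and preformatted blocks are, so reflowing no longer destroys the layout.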



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27980) Sync the hbck2 README page with hbck2 command help output

2023-07-18 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-27980:
--

 Summary: Sync the hbck2 README page with hbck2 command help output
 Key: HBASE-27980
 URL: https://issues.apache.org/jira/browse/HBASE-27980
 Project: HBase
  Issue Type: Task
  Components: hbase-operator-tools, hbck2
Reporter: Nihal Jain
Assignee: Nihal Jain


There are major differences between the hbck2 
[README.md|https://github.com/apache/hbase-operator-tools/blob/master/hbase-hbck2/README.md]
 and the command help output, hence we should sync them across all commands.

The README should be the same as the output of the hbck2 help command for ease 
of maintenance.

Also, a few newer commands like {{recoverUnknown}} and {{regionInfoMismatch}} 
are missing, leaving users unaware of their existence.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28006) [hbase-connectors] Run spotless:apply on code base

2023-08-04 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28006:
--

 Summary: [hbase-connectors] Run spotless:apply on code base
 Key: HBASE-28006
 URL: https://issues.apache.org/jira/browse/HBASE-28006
 Project: HBase
  Issue Type: Sub-task
Reporter: Nihal Jain
Assignee: Nihal Jain






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28007) [hbase-connectors] Manually fix javadoc messed due to spotless

2023-08-04 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28007:
--

 Summary: [hbase-connectors] Manually fix javadoc messed due to 
spotless
 Key: HBASE-28007
 URL: https://issues.apache.org/jira/browse/HBASE-28007
 Project: HBase
  Issue Type: Sub-task
Reporter: Nihal Jain
Assignee: Nihal Jain






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28026) DefaultMetricsSystemInitializer should be called during HMaster or HRegionServer creation

2023-08-16 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28026:
--

 Summary: DefaultMetricsSystemInitializer should be called during 
HMaster or HRegionServer creation
 Key: HBASE-28026
 URL: https://issues.apache.org/jira/browse/HBASE-28026
 Project: HBase
  Issue Type: Bug
  Components: metrics
Reporter: Nihal Jain






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28032) Fix ChaosMonkey documentation code block rendering

2023-08-18 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28032:
--

 Summary: Fix ChaosMonkey documentation code block rendering
 Key: HBASE-28032
 URL: https://issues.apache.org/jira/browse/HBASE-28032
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Nihal Jain
Assignee: Nihal Jain


The code blocks in the ChaosMonkey documentation are not rendered correctly. Fix 
them and also add a few more examples. See 
[https://hbase.apache.org/book.html#_chaosmonkey_without_ssh]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28034) Rewrite hbck2 documentation using ChatGPT

2023-08-19 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28034:
--

 Summary: Rewrite hbck2 documentation using ChatGPT
 Key: HBASE-28034
 URL: https://issues.apache.org/jira/browse/HBASE-28034
 Project: HBase
  Issue Type: Improvement
Reporter: Nihal Jain
Assignee: Nihal Jain


Just a thought, could we re-write the operator tools 
[README.md|https://github.com/apache/hbase-operator-tools/blob/master/README.md]
 using ChatGPT and make it better?

A sample paragraph re-written by ChatGPT is as follows:

Original:
{quote}
h3. Some General Principals

When making repair, make sure hbase:meta is consistent first before you go 
about fixing any other issue type such as a filesystem deviance. Deviance in 
the filesystem or problems with assign should be addressed after the hbase:meta 
has been put in order. If hbase:meta is out of whack, the Master cannot make 
proper placements when adopting orphan filesystem data or making region 
assignments.

Other general principles to keep in mind include a Region can not be assigned 
if it is in _CLOSING_ state (or the inverse, unassigned if in _OPENING_ state) 
without first transitioning via {_}CLOSED{_}: Regions must always move from 
{_}CLOSED{_}, to {_}OPENING{_}, to {_}OPEN{_}, and then to {_}CLOSING{_}, 
{_}CLOSED{_}.

When making repair, do fixup of a table-at-a-time.

Also, if a table is {_}DISABLED{_}, you cannot assign a Region. In the Master 
logs, you will see that the Master will report that the assign has been skipped 
because the table is {_}DISABLED{_}. You may want to assign a Region because it 
is currently in the _OPENING_ state and you want it in the _CLOSED_ state so it 
agrees with the table's _DISABLED_ state. In this situation, you may have to 
temporarily set the table status to {_}ENABLED{_}, just so you can do the 
assign, and then set it back again after the unassign. _HBCK2_ has facility to 
allow you do this. See the _HBCK2_ usage output.

What follows is a mix of notes and prescription that comes of experience 
running hbase-2.x so far. The root issues that brought on states described 
below has been fixed in later versions of hbase so upgrade if you can so as to 
avoid scenarios described.
{quote}
Rephrased:
{quote}*Some Basic Principles*

Ensure hbase:meta's consistency before attempting to fix any other type of 
problem, such as filesystem deviations. Issues related to filesystem or 
assignment should be addressed once hbase:meta is sorted out. If hbase:meta is 
awry, the Master cannot accurately allocate orphan filesystem data or region 
assignments.

Remember that a Region cannot be assigned if it is in a CLOSING state (or 
conversely, unassigned if in an OPENING state) without going through a CLOSED 
state first. Regions must always progress from CLOSED, to OPENING, to OPEN, to 
CLOSING, and then back to CLOSED.

Make repairs table-by-table.

Additionally, if a table is DISABLED, you cannot assign a Region. The Master 
logs will indicate that the assignment has been bypassed due to the table's 
DISABLED status. You might want to assign a Region because it is currently in 
the OPENING state, but you want it in the CLOSED state to match the DISABLED 
state of the table. In such cases, you might need to briefly change the table 
status to ENABLED to make the assignment, then switch it back after the 
unassignment. HBCK2 provides a facility for this. Please refer to the HBCK2 
usage output.

The following notes and instructions come from the experience of running 
hbase-2.x so far. The underlying issues causing the states described below have 
been resolved in later versions of hbase, so upgrading is recommended to avoid 
these scenarios.
{quote}
 

Is this worth the effort? Or do others feel current doc is good and does not 
need any refinement?

It may require some effort: we may only start with a first commit containing the 
untouched document generated by ChatGPT, but the draft would then need to be 
worked on, based on proofreading by the contributor and reviewers.

Curious to know how others feel.

Also, Apache has some guidelines around using of generative ai tools at 
[https://www.apache.org/legal/generative-tooling.html]

 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28035) ConnectionFactory.createConnection does not work with anything except ThreadPoolExecutor

2023-08-21 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28035:
--

 Summary: ConnectionFactory.createConnection does not work with 
anything except ThreadPoolExecutor
 Key: HBASE-28035
 URL: https://issues.apache.org/jira/browse/HBASE-28035
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain


This looks like a regression: 
org.apache.hadoop.hbase.client.ConnectionFactory#createConnection(org.apache.hadoop.conf.Configuration,
 java.util.concurrent.ExecutorService), even though it accepts an 
`ExecutorService`, has stopped working for `ForkJoinPool` since HBASE-22244 and 
throws `java.lang.ClassCastException: java.util.concurrent.ForkJoinPool cannot 
be cast to java.util.concurrent.ThreadPoolExecutor`.

I have been able to write a UT to verify the same and ran it on a branch not 
having the above change, i.e. branch-2.1, where the test passes, while for 
branch-2, which has the change, the test fails. It is also worth noting that the 
issue does not exist on master; I think that is because of HBASE-21723, which 
removed `ConnectionImplementation` from master.
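The failure mode can be reproduced with plain JDK classes, without HBase itself. The sketch below (`ForkJoinPoolCastDemo` is an illustrative name) isolates the problematic cast: `ForkJoinPool` extends `AbstractExecutorService` but is not a `ThreadPoolExecutor`, so a blind downcast of the supplied `ExecutorService` throws:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ThreadPoolExecutor;

public class ForkJoinPoolCastDemo {

    // A ForkJoinPool is an ExecutorService but never a ThreadPoolExecutor.
    static boolean isThreadPoolExecutor(ExecutorService pool) {
        return pool instanceof ThreadPoolExecutor;
    }

    public static void main(String[] args) {
        ExecutorService pool = new ForkJoinPool();
        try {
            // The blind downcast fails for any non-ThreadPoolExecutor pool.
            ThreadPoolExecutor tpe = (ThreadPoolExecutor) pool;
            System.out.println("cast ok: " + tpe);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: " + e.getMessage());
        } finally {
            pool.shutdown();
        }
    }
}
```

A fix would be to use the pool through the `ExecutorService` interface (or guard the cast with `instanceof`) rather than downcasting.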

 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28054) [hbase-connectors] Add spotless in hbase-connectors pre commit build

2023-08-30 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28054:
--

 Summary: [hbase-connectors] Add spotless in hbase-connectors pre 
commit build
 Key: HBASE-28054
 URL: https://issues.apache.org/jira/browse/HBASE-28054
 Project: HBase
  Issue Type: Sub-task
Reporter: Nihal Jain
Assignee: Nihal Jain






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28057) [hbase-operator-tools] Run spotless:apply and fix any existing spotless issues

2023-08-31 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28057:
--

 Summary: [hbase-operator-tools] Run spotless:apply and fix any 
existing spotless issues
 Key: HBASE-28057
 URL: https://issues.apache.org/jira/browse/HBASE-28057
 Project: HBase
  Issue Type: Sub-task
  Components: build, hbase-operator-tools
Reporter: Nihal Jain
Assignee: Nihal Jain






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27976) [hbase-operator-tools] Add spotless for hbase-operator-tools

2023-09-05 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-27976.

Fix Version/s: hbase-operator-tools-1.3.0
 Release Note: Before creating a PR for the hbase-operator-tools repo, 
developers can now run 'mvn spotless:apply' to fix code formatting issues.
   Resolution: Fixed

All the sub-tasks are done, marking the Jira as resolved.

> [hbase-operator-tools] Add spotless for hbase-operator-tools
> 
>
> Key: HBASE-27976
> URL: https://issues.apache.org/jira/browse/HBASE-27976
> Project: HBase
>  Issue Type: Umbrella
>  Components: build, hbase-operator-tools
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: hbase-operator-tools-1.3.0
>
>
> The HBase code repo has a spotless plugin to check and fix spotless issues 
> seamlessly, making it easier for developers to fix issues in case the build 
> fails due to code formatting.
> The goal of this Jira is to integrate spotless with hbase-operator-tools.
>  * As a 1st step, will try to add a plugin to run spotless check via maven
>  * Next, will fix all spotless issues as part of the same task or another (as 
> the community suggests)
>  * Finally, will integrate the same into the pre-commit build to not let PRs with 
> spotless issues get in. (Would need some support/direction on how to do this 
> as I am not much familiar with Jenkins and the related code.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28066) Move TestShellRSGroups.java inside /src/test/java

2023-09-06 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28066:
--

 Summary: Move TestShellRSGroups.java inside /src/test/java
 Key: HBASE-28066
 URL: https://issues.apache.org/jira/browse/HBASE-28066
 Project: HBase
  Issue Type: Test
Reporter: Nihal Jain
Assignee: Nihal Jain


Just noticed that {{TestShellRSGroups.java}} is at 
{{hbase-shell/src/test/rsgroup/org/apache/hadoop/hbase/client/rsgroup/TestShellRSGroups.java}},
 but ideally it should be at 
{{hbase-shell/src/test/java/org/apache/hadoop/hbase/client/rsgroup/TestShellRSGroups.java}}
 instead.
Because of this misplacement, spotless skipped this file; we also need to run 
spotless on it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28089) Upgrade BouncyCastle to fix CVE-2023-33201

2023-09-16 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28089:
--

 Summary: Upgrade BouncyCastle to fix CVE-2023-33201
 Key: HBASE-28089
 URL: https://issues.apache.org/jira/browse/HBASE-28089
 Project: HBase
  Issue Type: Task
Reporter: Nihal Jain
Assignee: Nihal Jain


HBase has a dependency on BouncyCastle 1.70 which is vulnerable with 
[CVE-2023-33201|https://nvd.nist.gov/vuln/detail/CVE-2023-33201]

Advisory: [https://github.com/bcgit/bc-java/wiki/CVE-2023-33201]

This JIRA's goal is to fix the following:
 * Upgrade to v1.76, the latest version.
 ** This requires bcprov-jdk15on to be replaced with bcprov-jdk18on
 ** See [https://www.bouncycastle.org/latest_releases.html]
 *** 
{quote}*Java Version Details* With the arrival of Java 15. jdk15 is not quite 
as unambiguous as it was. The *jdk18on* jars are compiled to work with 
*anything* from Java 1.8 up. They are also multi-release jars so do support 
some features that were introduced in Java 9, Java 11, and Java 15. If you have 
issues with multi-release jars see the jdk15to18 release jars below.

*Packaging Change (users of 1.70 or earlier):* BC 1.71 changed the jdk15on jars 
to jdk18on so the base has now moved to Java 8. For earlier JVMs, or 
containers/applications that cannot cope with multi-release jars, you should 
now use the jdk15to18 jars.
{quote}
 * Exclude bcprov-jdk15on from everywhere else to avoid conflicts with 
bcprov-jdk18on
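The exclusion step might look like the following pom fragment; `some.group:some-artifact` is a placeholder for whichever dependency transitively pulls in bcprov-jdk15on, not an actual HBase dependency:

```xml
<dependency>
  <groupId>some.group</groupId>            <!-- placeholder for the offending dependency -->
  <artifactId>some-artifact</artifactId>
  <exclusions>
    <exclusion>
      <!-- keep the old artifact off the classpath so it cannot conflict
           with the new bcprov-jdk18on -->
      <groupId>org.bouncycastle</groupId>
      <artifactId>bcprov-jdk15on</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Running `mvn dependency:tree -Dincludes=org.bouncycastle` afterwards is a quick way to confirm only the jdk18on artifacts remain.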



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28102) [hbase-thirdparty] Bump hbase.stable.version to 2.5.5 in hbase-noop-htrace

2023-09-20 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28102:
--

 Summary: [hbase-thirdparty] Bump hbase.stable.version to 2.5.5 in 
hbase-noop-htrace
 Key: HBASE-28102
 URL: https://issues.apache.org/jira/browse/HBASE-28102
 Project: HBase
  Issue Type: Sub-task
Reporter: Nihal Jain
Assignee: Nihal Jain






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28106) TestShadeSaslAuthenticationProvider fails for branch-2.5 and branch-2.4

2023-09-21 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28106:
--

 Summary: TestShadeSaslAuthenticationProvider fails for branch-2.5 
and branch-2.4
 Key: HBASE-28106
 URL: https://issues.apache.org/jira/browse/HBASE-28106
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-28160) Build fails with Hadoop 3.3.5 and higher

2023-10-17 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-28160.

Resolution: Duplicate

Hi [~larsfrancke], this looks like a duplicate issue and has already been fixed 
by HBASE-27860. The fix was released as part of 2.4.18.

I could not reproduce the master failure: did you run with 
{{-Dhadoop.profile=3.0}} by any chance? Could you try running the below for master: 
{code:java}
mvn clean install -DskipTests -Phadoop-3.0 -Dhadoop-three.version=3.3.5
{code}
Feel free to create another JIRA if {{(Found Banned Dependency: 
org.bouncycastle:bcprov-jdk15on:jar:1.52)}} is still thrown.

> Build fails with Hadoop 3.3.5 and higher
> 
>
> Key: HBASE-28160
> URL: https://issues.apache.org/jira/browse/HBASE-28160
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.17
>Reporter: Lars Francke
>Priority: Minor
>
> https://issues.apache.org/jira/browse/HADOOP-15983 changed dependencies and 
> that makes our {{check-jar-contents-for-stuff-with-hadoop}} check fail:
> Excerpt:
> {noformat}
> [INFO] --- exec-maven-plugin:1.6.0:exec 
> (check-jar-contents-for-stuff-with-hadoop) @ 
> hbase-shaded-with-hadoop-check-invariants ---
> [ERROR] Found artifact with unexpected contents: 
> '/home/lars/Downloads/hbase/hbase-2.4.17-src/hbase-shaded/hbase-shaded-client/target/hbase-shaded-client-2.4.17.jar'
> Please check the following and either correct the build or update
> the allowed list with reasoning.
> com/
> com/sun/
> com/sun/jersey/
> com/sun/jersey/json/
> com/sun/jersey/json/impl/
> com/sun/jersey/json/impl/reader/
> com/sun/jersey/json/impl/reader/JsonXmlEvent$Attribute.class
> com/sun/jersey/json/impl/reader/JsonXmlStreamReader$1.class
> com/sun/jersey/json/impl/reader/XmlEventProvider$1.class
> com/sun/jersey/json/impl/reader/NaturalNotationEventProvider.class
> com/sun/jersey/json/impl/reader/XmlEventProvider.class
> com/sun/jersey/json/impl/reader/XmlEventProvider$ProcessingInfo.class
> com/sun/jersey/json/impl/reader/StartElementEvent.class
> com/sun/jersey/json/impl/reader/CharactersEvent.class
> com/sun/jersey/json/impl/reader/JacksonRootAddingParser$1.class
> com/sun/jersey/json/impl/reader/EndElementEvent.class
> com/sun/jersey/json/impl/reader/JsonXmlStreamReader.class
> com/sun/jersey/json/impl/reader/StaxLocation.class
> com/sun/jersey/json/impl/reader/JsonNamespaceContext.class
> com/sun/jersey/json/impl/reader/JsonXmlEvent.class
> com/sun/jersey/json/impl/reader/JacksonRootAddingParser.class
> com/sun/jersey/json/impl/reader/StartDocumentEvent.class
> com/sun/jersey/json/impl/reader/MappedNotationEventProvider.class
> com/sun/jersey/json/impl/reader/EndDocumentEvent.class
> com/sun/jersey/json/impl/reader/JsonFormatException.class
> com/sun/jersey/json/impl/reader/XmlEventProvider$CachedJsonParser.class
> com/sun/jersey/json/impl/reader/JacksonRootAddingParser$State.class
> com/sun/jersey/json/impl/JaxbRiXmlStructure.class
> com/sun/jersey/json/impl/ImplMessages.class
> com/sun/jersey/json/impl/JSONMarshallerImpl.class
> com/sun/jersey/json/impl/NameUtil.class
> com/sun/jersey/json/impl/FilteringInputStream.class
> com/sun/jersey/json/impl/JaxbProvider.class
> []
> {noformat}
> I'm afraid I'm a bit at a loss with the current Maven build system as to what 
> the actual fix would be.
> I tested it against 2.4.17 as well as master as of today. Master already 
> fails in an earlier step ({{Found Banned Dependency: 
> org.bouncycastle:bcprov-jdk15on:jar:1.52}}) which I assume is a separate 
> issue but I further assume that it would also fail at this step if it were to 
> get this far.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28173) Make use of assertThrows in TestShadeSaslAuthenticationProvider

2023-10-21 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28173:
--

 Summary: Make use of assertThrows in 
TestShadeSaslAuthenticationProvider
 Key: HBASE-28173
 URL: https://issues.apache.org/jira/browse/HBASE-28173
 Project: HBase
  Issue Type: Task
  Components: security, test
Reporter: Duo Zhang
Assignee: Nihal Jain


The testNegativeAuthentication method is completely different between 
master/branch-3 and branch-2.x, we should try to align the test for these 
branches.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28243) Bump jackson version to 2.15.2

2023-12-06 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28243:
--

 Summary:  Bump jackson version to 2.15.2 
 Key: HBASE-28243
 URL: https://issues.apache.org/jira/browse/HBASE-28243
 Project: HBase
  Issue Type: Improvement
Reporter: Nihal Jain
Assignee: Nihal Jain


We should bump jackson to 2.15.2, as hbase-thirdparty already moved to this 
version in HBASE-28093.

Also, 2.14.1 is affected by 
[sonatype-2022-6438.|https://github.com/FasterXML/jackson-core/issues/861]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28245) Sync internal protobuf version for hbase to be same as hbase-thirdparty

2023-12-06 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28245:
--

 Summary: Sync internal protobuf version for hbase to be same as 
hbase-thirdparty
 Key: HBASE-28245
 URL: https://issues.apache.org/jira/browse/HBASE-28245
 Project: HBase
  Issue Type: Task
Reporter: Nihal Jain
Assignee: Nihal Jain








[jira] [Created] (HBASE-28249) Bump jruby to 9.3.13.0 and related joni and jcodings to 2.2.1 and 1.0.58 respectively

2023-12-07 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28249:
--

 Summary: Bump jruby to 9.3.13.0 and related joni and jcodings to 
2.2.1 and 1.0.58 respectively
 Key: HBASE-28249
 URL: https://issues.apache.org/jira/browse/HBASE-28249
 Project: HBase
  Issue Type: Task
Reporter: Nihal Jain
Assignee: Nihal Jain


Given branch-2 including branch-2.6 is already on 9.3.9.0, we should bump to at 
least 9.3.13.0. This will fix the bundled *org.bouncycastle : bcprov-jdk18on : 
1.71* having CVE-2023-33201, removing it from our classpath at the very least.

As a follow-up, we can try to bump to the latest 9.4.x line, if others are fine 
with this. Please let me know what others think.





[jira] [Created] (HBASE-28250) Bump jruby to 9.4.5.0 and related joni and jcodings

2023-12-07 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28250:
--

 Summary: Bump jruby to 9.4.5.0 and related joni and jcodings
 Key: HBASE-28250
 URL: https://issues.apache.org/jira/browse/HBASE-28250
 Project: HBase
  Issue Type: Task
Reporter: Nihal Jain
Assignee: Nihal Jain


Given branch-2 including branch-2.6 is already on 9.3.9.0, we should bump to at 
least 9.3.13.0. This will fix the bundled *org.bouncycastle : bcprov-jdk18on : 
1.71* having [CVE-2023-33201|https://nvd.nist.gov/vuln/detail/CVE-2023-33201], 
removing it from our classpath at the very least.

As a follow-up, we can try to bump to the latest 9.4.x line, if others are fine 
with this. Please let me know what others think.





[jira] [Created] (HBASE-28269) Ruby scripts are broken as they reference classes which do not exist

2023-12-18 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28269:
--

 Summary: Ruby scripts are broken as they reference classes which do 
not exist
 Key: HBASE-28269
 URL: https://issues.apache.org/jira/browse/HBASE-28269
 Project: HBase
  Issue Type: Bug
Affects Versions: 3.0.0-alpha-4
Reporter: Nihal Jain
Assignee: Nihal Jain


Some of the ruby scripts are broken in 3.x as they are referencing non-existent 
classes:
 * {{org.apache.hadoop.hbase.client.HBaseAdmin}}
 * {{org.apache.hadoop.hbase.HTableDescriptor}}

The following 4 scripts are failing:
{code:java}
NameError: missing class name org.apache.hadoop.hbase.client.HBaseAdmin
  method_missing at org/jruby/javasupport/JavaPackage.java:253
   at region_status.rb:50
{code}
{code:java}
NameError: missing class name org.apache.hadoop.hbase.HTableDescriptor
  method_missing at org/jruby/javasupport/JavaPackage.java:253
   at replication/copy_tables_desc.rb:30
{code}
{code:java}
NameError: missing class name org.apache.hadoop.hbase.client.HBaseAdmin
  method_missing at org/jruby/javasupport/JavaPackage.java:253
   at draining_servers.rb:28
{code}
{code:java}
NameError: missing class name org.apache.hadoop.hbase.client.HBaseAdmin
  method_missing at org/jruby/javasupport/JavaPackage.java:253
   at shutdown_regionserver.rb:27
{code}





[jira] [Created] (HBASE-28273) region_status.rb is broken

2023-12-19 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28273:
--

 Summary: region_status.rb is broken
 Key: HBASE-28273
 URL: https://issues.apache.org/jira/browse/HBASE-28273
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 2.5.7, 3.0.0-alpha-4, 2.6.0
Reporter: Nihal Jain
Assignee: Nihal Jain


{{region_status.rb}} is broken on all active branches.

It needs a thorough fix, as it has multiple errors.

Not sure who uses it though, as it is broken in branch-2 as well. We should 
maybe deprecate and remove it.

CC: [~zhangduo]





[jira] [Created] (HBASE-28275) Flaky test: Fix 'list decommissioned regionservers' in admin2_test.rb

2023-12-20 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28275:
--

 Summary: Flaky test: Fix 'list decommissioned regionservers' in 
admin2_test.rb
 Key: HBASE-28275
 URL: https://issues.apache.org/jira/browse/HBASE-28275
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain








[jira] [Created] (HBASE-28295) Few tests are failing due to NCDFE: org/bouncycastle/operator/OperatorCreationException

2024-01-07 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28295:
--

 Summary: Few tests are failing due to NCDFE: 
org/bouncycastle/operator/OperatorCreationException
 Key: HBASE-28295
 URL: https://issues.apache.org/jira/browse/HBASE-28295
 Project: HBase
  Issue Type: Improvement
Reporter: Nihal Jain
Assignee: Nihal Jain
 Fix For: 2.6.0, 3.0.0-beta-2


See [https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/] for 
branch-2.6
 * [Test 
Result|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/]
 (6 failures / +4)
 ** [health checks / yetus jdk11 hadoop3 checks / 
org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk11_hadoop3_checks___testMRYarnConfigsPopulation/]
 ** [health checks / yetus jdk11 hadoop3 checks / 
org.apache.hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.replication/TestVerifyReplicationCrossDiffHdfs/health_checks___yetus_jdk11_hadoop3_checks__/]
 ** [health checks / yetus jdk11 hadoop3 checks / 
org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.snapshot/TestMobSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/]
 ** [health checks / yetus jdk11 hadoop3 checks / 
org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.snapshot/TestSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/]

See [https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/] for 
branch-2
 * [Test 
Result|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/]
 (8 failures / +7)
 ** [health checks / yetus jdk11 hadoop3 checks / 
org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk11_hadoop3_checks___testMRYarnConfigsPopulation/]
 ** [health checks / yetus jdk11 hadoop3 checks / 
org.apache.hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.replication/TestVerifyReplicationCrossDiffHdfs/health_checks___yetus_jdk11_hadoop3_checks__/]
 ** [health checks / yetus jdk11 hadoop3 checks / 
org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestMobSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/]
 ** [health checks / yetus jdk11 hadoop3 checks / 
org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/]
 ** [health checks / yetus jdk8 hadoop3 checks / 
org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk8_hadoop3_checks___testMRYarnConfigsPopulation/]
 ** [health checks / yetus jdk8 hadoop3 checks / 
org.apache.hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.replication/TestVerifyReplicationCrossDiffHdfs/health_checks___yetus_jdk8_hadoop3_checks__/]
 ** [health checks / yetus jdk8 hadoop3 checks / 
org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestMobSecureExportSnapshot/health_checks___yetus_jdk8_hadoop3_checks__/]
 ** [health checks / yetus jdk8 hadoop3 checks / 
org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestSecureExportSnapshot/health_checks___yetus_jdk8_hadoop3_checks__/]

Also fails locally for me on master.
{code:java}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.repli

[jira] [Reopened] (HBASE-28295) Few tests are failing due to NCDFE: org/bouncycastle/operator/OperatorCreationException

2024-01-07 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain reopened HBASE-28295:


The earlier reported tests have passed, but a new failure is showing up in the 
latest nightly build. Not sure why this was not reported in the last build though: 
[https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/24/]

[Test 
Result|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/24/testReport/]
 (2 failures / -4)
 * [health checks / yetus jdk11 hadoop3 checks / 
org.apache.hadoop.hbase.backup.TestBackupSmallTests.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/24/testReport/junit/org.apache.hadoop.hbase.backup/TestBackupSmallTests/health_checks___yetus_jdk11_hadoop3_checks__/]

Reopening for an addendum fix!

 

> Few tests are failing due to NCDFE: 
> org/bouncycastle/operator/OperatorCreationException
> ---
>
> Key: HBASE-28295
> URL: https://issues.apache.org/jira/browse/HBASE-28295
> Project: HBase
>  Issue Type: Bug
>  Components: build, dependencies, hadoop3
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 2.6.0, 3.0.0-beta-2
>
>
> See [https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/] for 
> branch-2.6
>  * [Test 
> Result|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/]
>  (6 failures / +4)
>  ** [health checks / yetus jdk11 hadoop3 checks / 
> org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk11_hadoop3_checks___testMRYarnConfigsPopulation/]
>  ** [health checks / yetus jdk11 hadoop3 checks / 
> org.apache.hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.replication/TestVerifyReplicationCrossDiffHdfs/health_checks___yetus_jdk11_hadoop3_checks__/]
>  ** [health checks / yetus jdk11 hadoop3 checks / 
> org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.snapshot/TestMobSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/]
>  ** [health checks / yetus jdk11 hadoop3 checks / 
> org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.snapshot/TestSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/]
> See [https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/] for 
> branch-2
>  * [Test 
> Result|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/]
>  (8 failures / +7)
>  ** [health checks / yetus jdk11 hadoop3 checks / 
> org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk11_hadoop3_checks___testMRYarnConfigsPopulation/]
>  ** [health checks / yetus jdk11 hadoop3 checks / 
> org.apache.hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.replication/TestVerifyReplicationCrossDiffHdfs/health_checks___yetus_jdk11_hadoop3_checks__/]
>  ** [health checks / yetus jdk11 hadoop3 checks / 
> org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestMobSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/]
>  ** [health checks / yetus jdk11 hadoop3 checks / 
> org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/]
>  ** [health checks / yetus jdk8 hadoop3 checks / 
> org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk8_hadoop3_checks___testMRYarnConfigsPopulation/]
>  ** [health checks / yetus jdk8 hadoop3 checks / 
> org.apache.hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branc

[jira] [Created] (HBASE-28297) IntegrationTestImportTsv is broken

2024-01-08 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28297:
--

 Summary: IntegrationTestImportTsv is broken
 Key: HBASE-28297
 URL: https://issues.apache.org/jira/browse/HBASE-28297
 Project: HBase
  Issue Type: Bug
  Components: integration tests, test
Reporter: Nihal Jain
Assignee: Nihal Jain


While trying to fix HBASE-28295, I found issues in IntegrationTestImportTsv:
{code:java}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.mapreduce.IntegrationTestFileBasedSFTBulkLoad
[INFO] Running org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList
[INFO] Running org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv
[INFO] Running org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify
[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.78 s 
<<< FAILURE! - in org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv
[ERROR] org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv  Time 
elapsed: 0.772 s  <<< ERROR!
java.lang.ExceptionInInitializerError
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
        at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
        at 
org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
        at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
        at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
        at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:316)
        at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:240)
        at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:214)
        at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:155)
        at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
        at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
        at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507)
        at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
        at 
org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv$1.(IntegrationTestImportTsv.java:90)
        at 
org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.(IntegrationTestImportTsv.java:83)
        ... 20 more
[ERROR] 
org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv  Time elapsed: 0.772 
s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
        at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
        at 
org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
        at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
        at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
        at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:316)
        at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:240)
        at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:214)
        at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:155)
        at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
        at 
org.apache.maven.surefire.booter.ForkedBooter.execute(

[jira] [Created] (HBASE-28299) Set proper error in response for all usages of HttpServer.isInstrumentationAccessAllowed()

2024-01-10 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28299:
--

 Summary: Set proper error in response for all usages of 
HttpServer.isInstrumentationAccessAllowed()
 Key: HBASE-28299
 URL: https://issues.apache.org/jira/browse/HBASE-28299
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


During review of https://github.com/apache/hbase/pull/5215, it was found that we 
simply return 200 even if instrumentation access is not allowed, while at some 
places we do set a proper error. This JIRA is to fix all usages of the method so 
that they set the proper response code.
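The two behaviors can be contrasted in a minimal sketch. ResponseStub is a hypothetical stand-in for javax.servlet's HttpServletResponse, and the handler method names are illustrative only; this is not HBase's actual servlet code, just the before/after shape of the fix.

```java
public class InstrumentationCheckSketch {
    // Hypothetical stand-in for HttpServletResponse; only what the sketch needs.
    static class ResponseStub {
        int status = 200;
        String message;
        void sendError(int code, String msg) { status = code; message = msg; }
    }

    // The buggy pattern: the handler returns silently when access is denied,
    // so the response still reports the default 200 to the client.
    static void handleWithoutError(boolean accessAllowed, ResponseStub resp) {
        if (!accessAllowed) {
            return;
        }
        // ... render instrumentation page ...
    }

    // The fixed pattern: set an explicit 403 before returning.
    static void handleWithError(boolean accessAllowed, ResponseStub resp) {
        if (!accessAllowed) {
            resp.sendError(403, "Instrumentation access is not allowed");
            return;
        }
        // ... render instrumentation page ...
    }

    public static void main(String[] args) {
        ResponseStub bad = new ResponseStub();
        handleWithoutError(false, bad);
        ResponseStub good = new ResponseStub();
        handleWithError(false, good);
        System.out.println(bad.status + " vs " + good.status); // prints "200 vs 403"
    }
}
```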

CC: [~ndimiduk]





[jira] [Created] (HBASE-28300) Refactor GarbageCollectorMXBean instantiation in process*.jsp

2024-01-10 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28300:
--

 Summary: Refactor GarbageCollectorMXBean instantiation in 
process*.jsp
 Key: HBASE-28300
 URL: https://issues.apache.org/jira/browse/HBASE-28300
 Project: HBase
  Issue Type: Improvement
Reporter: Nihal Jain
Assignee: Nihal Jain


During review of https://github.com/apache/hbase/pull/5215/ we saw that the 
beans are instantiated based on assumptions about the JVM; it is a good idea to 
refactor the code so that we don't get errors when those JVM assumptions change 
in the future.
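For reference, the JDK already exposes GC beans in a vendor-neutral way through java.lang.management; a minimal sketch of reading them without casting to implementation-specific (e.g. com.sun.management) types, which is presumably the direction such a refactor would take:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class GcBeansSketch {
    public static void main(String[] args) {
        // Vendor-neutral: works on any JVM, no cast to com.sun internal types.
        List<GarbageCollectorMXBean> gcBeans = ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gc : gcBeans) {
            System.out.println(gc.getName()
                + " count=" + gc.getCollectionCount()
                + " timeMs=" + gc.getCollectionTime());
        }
    }
}
```

Because the interface is part of the Java platform, the JSPs would not break when a collector name or vendor bean class changes in a future JVM.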

Review comment: https://github.com/apache/hbase/pull/5215/files#r1318304462

CC: [~ndimiduk]





[jira] [Created] (HBASE-28301) IntegrationTestImportTsv fails with UnsupportedOperationException

2024-01-10 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28301:
--

 Summary: IntegrationTestImportTsv fails with 
UnsupportedOperationException
 Key: HBASE-28301
 URL: https://issues.apache.org/jira/browse/HBASE-28301
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


IntegrationTestImportTsv fails with UnsupportedOperationException
{code:java}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 337.526 
s <<< FAILURE! - in org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv
[ERROR] 
org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad  
Time elapsed: 279.783 s  <<< ERROR!
java.lang.UnsupportedOperationException: Unable to find suitable constructor 
for class org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv$2
at 
org.apache.hadoop.hbase.util.ReflectionUtils.findConstructor(ReflectionUtils.java:133)
at 
org.apache.hadoop.hbase.util.ReflectionUtils.newInstance(ReflectionUtils.java:98)
at 
org.apache.hadoop.hbase.client.RawAsyncTableImpl.getScanner(RawAsyncTableImpl.java:628)
at 
org.apache.hadoop.hbase.client.RawAsyncTableImpl.getScanner(RawAsyncTableImpl.java:90)
at 
org.apache.hadoop.hbase.client.TableOverAsyncTable.getScanner(TableOverAsyncTable.java:198)
at 
org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.doLoadIncrementalHFiles(IntegrationTestImportTsv.java:156)
at 
org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.generateAndLoad(IntegrationTestImportTsv.java:206)
at 
org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:187)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:316)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:240)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:214)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:155)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)

[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   
IntegrationTestImportTsv.testGenerateAndLoad:187->generateAndLoad:206->doLoadIncrementalHFiles:156
 » UnsupportedOperation Unable to find suitable constructor for class 
org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv$2

{code}




[jira] [Created] (HBASE-28311) Few ITs (using MiniMRYarnCluster on hadoop-2) are failing due to NCDFE: com/sun/jersey/core/util/FeaturesAndProperties

2024-01-13 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28311:
--

 Summary: Few ITs (using MiniMRYarnCluster on hadoop-2) are failing 
due to NCDFE: com/sun/jersey/core/util/FeaturesAndProperties
 Key: HBASE-28311
 URL: https://issues.apache.org/jira/browse/HBASE-28311
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


Found this while trying to run tests for HBASE-28301 locally. On branch-2, 
where hadoop 2 is the default, the specified tests don't even run, as 
MiniMRYarnCluster itself fails to start.

For example saw this while trying to run IntegrationTestImportTsv:
{code:java}
2024-01-12T01:10:13,486 ERROR [Thread-221 {}] log.Slf4jLog(87): Error starting 
handlers 
java.lang.NoClassDefFoundError: com/sun/jersey/core/util/FeaturesAndProperties
    at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_381]
    at java.lang.ClassLoader.defineClass(ClassLoader.java:756) ~[?:1.8.0_381]
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) 
~[?:1.8.0_381]
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:473) 
~[?:1.8.0_381]
    at java.net.URLClassLoader.access$100(URLClassLoader.java:74) ~[?:1.8.0_381]
    at java.net.URLClassLoader$1.run(URLClassLoader.java:369) ~[?:1.8.0_381]
    at java.net.URLClassLoader$1.run(URLClassLoader.java:363) ~[?:1.8.0_381]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_381]
    at java.net.URLClassLoader.findClass(URLClassLoader.java:362) ~[?:1.8.0_381]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_381]
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) 
~[?:1.8.0_381]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_381]
    at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_381]
    at java.lang.ClassLoader.defineClass(ClassLoader.java:756) ~[?:1.8.0_381]
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) 
~[?:1.8.0_381]
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:473) 
~[?:1.8.0_381]
    at java.net.URLClassLoader.access$100(URLClassLoader.java:74) ~[?:1.8.0_381]
    at java.net.URLClassLoader$1.run(URLClassLoader.java:369) ~[?:1.8.0_381]
    at java.net.URLClassLoader$1.run(URLClassLoader.java:363) ~[?:1.8.0_381]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_381]
    at java.net.URLClassLoader.findClass(URLClassLoader.java:362) ~[?:1.8.0_381]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_381]
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) 
~[?:1.8.0_381]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_381]
    at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_381]
    at java.lang.ClassLoader.defineClass(ClassLoader.java:756) ~[?:1.8.0_381]
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) 
~[?:1.8.0_381]
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:473) 
~[?:1.8.0_381]
    at java.net.URLClassLoader.access$100(URLClassLoader.java:74) ~[?:1.8.0_381]
    at java.net.URLClassLoader$1.run(URLClassLoader.java:369) ~[?:1.8.0_381]
    at java.net.URLClassLoader$1.run(URLClassLoader.java:363) ~[?:1.8.0_381]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_381]
    at java.net.URLClassLoader.findClass(URLClassLoader.java:362) ~[?:1.8.0_381]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_381]
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) 
~[?:1.8.0_381]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_381]
    at java.lang.Class.getDeclaredConstructors0(Native Method) ~[?:1.8.0_381]
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671) 
~[?:1.8.0_381]
    at java.lang.Class.getDeclaredConstructors(Class.java:2020) ~[?:1.8.0_381]
    at 
com.google.inject.spi.InjectionPoint.forConstructorOf(InjectionPoint.java:243) 
~[guice-3.0.jar:?]
    at 
com.google.inject.internal.ConstructorBindingImpl.create(ConstructorBindingImpl.java:96)
 ~[guice-3.0.jar:?]
    at 
com.google.inject.internal.InjectorImpl.createUninitializedBinding(InjectorImpl.java:629)
 ~[guice-3.0.jar:?]
    at 
com.google.inject.internal.InjectorImpl.createJustInTimeBinding(InjectorImpl.java:845)
 ~[guice-3.0.jar:?]
    at 
com.google.inject.internal.InjectorImpl.createJustInTimeBindingRecursive(InjectorImpl.java:772)
 ~[guice-3.0.jar:?]
    at 
com.google.inject.internal.InjectorImpl.getJustInTimeBinding(InjectorImpl.java:256)
 ~[guice-3.0.jar:?]
    at 
com.google.inject.internal.InjectorImpl.getBindingOrThrow(InjectorImpl.java:205)
 ~[guice-3.0.jar:?]
    at 
com.google.inject.internal.InjectorImpl.getBinding(InjectorImpl.java:146) 
~[guice-3.0.jar:?]
    at com.google.inject.internal.InjectorImpl.getBinding(InjectorImpl.java:66) 
~[guice-3.0.

[jira] [Created] (HBASE-28367) Backport "HBASE-27811 Enable cache control for logs endpoint and set max age as 0" to branch-2

2024-02-14 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28367:
--

 Summary: Backport "HBASE-27811  Enable cache control for logs 
endpoint and set max age as 0" to branch-2
 Key: HBASE-28367
 URL: https://issues.apache.org/jira/browse/HBASE-28367
 Project: HBase
  Issue Type: Improvement
Reporter: Yash Dodeja
Assignee: Yash Dodeja
 Fix For: 3.0.0-alpha-4


Not setting the proper header values may cause browsers to store pages within 
their respective caches. On public, shared, or any other non-private computers, 
a malicious person may search through the browser cache to locate sensitive 
information cached during another user's session.

/logs endpoint contains sensitive information that an attacker can exploit.

Any page with sensitive information needs to have the following headers in 
response:
Cache-Control: no-cache, no-store, max-age=0
Pragma: no-cache
Expires: -1





[jira] [Created] (HBASE-28368) Backport "HBASE-27693 Support for Hadoop's LDAP Authentication mechanism (Web UI only)" to branch-2

2024-02-14 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28368:
--

 Summary: Backport "HBASE-27693 Support for Hadoop's LDAP 
Authentication mechanism (Web UI only)" to branch-2
 Key: HBASE-28368
 URL: https://issues.apache.org/jira/browse/HBASE-28368
 Project: HBase
  Issue Type: New Feature
Reporter: Yash Dodeja
Assignee: Yash Dodeja
 Fix For: 3.0.0-alpha-4


Hadoop's AuthenticationFilter has changed and now supports an LDAP 
mechanism too. HBase still uses an older version tightly coupled with Kerberos 
and SPNEGO as the only auth mechanisms. HADOOP-12082 added support for 
multiple auth handlers, including LDAP. On trying to use Hadoop's 
AuthenticationFilterInitializer in hbase.http.filter.initializers, there is a 
casting exception, as HBase requires it to extend 
org.apache.hadoop.hbase.http.FilterInitializer.





[jira] [Created] (HBASE-28375) Build HBase Operator tool with hbase 2.6.0

2024-02-16 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28375:
--

 Summary: Build HBase Operator tool with hbase 2.6.0
 Key: HBASE-28375
 URL: https://issues.apache.org/jira/browse/HBASE-28375
 Project: HBase
  Issue Type: Task
Reporter: Nihal Jain
Assignee: Nihal Jain


HBase Operator Tools fails to compile with hbase 2.6.0.
{code:java}
[ERROR] 
/Users/nihjain/code/visa/hbase-operator-tools/hbase-hbck2/src/main/java/org/apache/hbase/hbck1/ReplicationChecker.java:[59,49]
 method getReplicationPeerStorage in class 
org.apache.hadoop.hbase.replication.ReplicationStorageFactory cannot be applied 
to given types;
[ERROR]   required: 
org.apache.hadoop.fs.FileSystem,org.apache.hadoop.hbase.zookeeper.ZKWatcher,org.apache.hadoop.conf.Configuration
[ERROR]   found: 
org.apache.hadoop.hbase.zookeeper.ZKWatcher,org.apache.hadoop.conf.Configuration
[ERROR]   reason: actual and formal argument lists differ in length {code}
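One way to keep a single hbck2 build working against both branch-2.5 and branch-2.6 is a reflection-based fallback over the two method signatures. A minimal sketch of that pattern, using stand-in nested types (the real classes live in org.apache.hadoop.hbase.*, and the real method returns a ReplicationPeerStorage, not a String):

```java
import java.lang.reflect.Method;

// Sketch of a signature-fallback pattern; all types here are stand-ins.
public class SignatureFallback {
    static class FileSystem {}
    static class ZKWatcher {}
    static class Configuration {}

    static class ReplicationStorageFactory {
        // Stand-in for the branch-2.6 signature, where a FileSystem
        // parameter was added.
        public static String getReplicationPeerStorage(FileSystem fs, ZKWatcher zk, Configuration conf) {
            return "peer-storage";
        }
    }

    // Try the branch-2.6 signature first; if it does not exist, fall
    // back to the branch-2.5 (ZKWatcher, Configuration) signature.
    static String getPeerStorage(FileSystem fs, ZKWatcher zk, Configuration conf) throws Exception {
        try {
            Method m = ReplicationStorageFactory.class.getMethod("getReplicationPeerStorage",
                    FileSystem.class, ZKWatcher.class, Configuration.class);
            return (String) m.invoke(null, fs, zk, conf);
        } catch (NoSuchMethodException e) {
            Method m = ReplicationStorageFactory.class.getMethod("getReplicationPeerStorage",
                    ZKWatcher.class, Configuration.class);
            return (String) m.invoke(null, zk, conf);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(getPeerStorage(new FileSystem(), new ZKWatcher(), new Configuration()));
    }
}
```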





[jira] [Resolved] (HBASE-28375) HBase Operator Tools fails to compile with hbase 2.6.0

2024-02-17 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-28375.

Hadoop Flags: Reviewed
  Resolution: Fixed

> HBase Operator Tools fails to compile with hbase 2.6.0
> --
>
> Key: HBASE-28375
> URL: https://issues.apache.org/jira/browse/HBASE-28375
> Project: HBase
>  Issue Type: Bug
>  Components: hbase-operator-tools
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: hbase-operator-tools-1.3.0
>
>
> HBase Operator Tools fails to compile with hbase 2.6.0.
> {code:java}
> [ERROR] 
> /file_path/hbase-operator-tools/hbase-hbck2/src/main/java/org/apache/hbase/hbck1/ReplicationChecker.java:[59,49]
>  method getReplicationPeerStorage in class 
> org.apache.hadoop.hbase.replication.ReplicationStorageFactory cannot be 
> applied to given types;
> [ERROR]   required: 
> org.apache.hadoop.fs.FileSystem,org.apache.hadoop.hbase.zookeeper.ZKWatcher,org.apache.hadoop.conf.Configuration
> [ERROR]   found: 
> org.apache.hadoop.hbase.zookeeper.ZKWatcher,org.apache.hadoop.conf.Configuration
> [ERROR]   reason: actual and formal argument lists differ in length {code}
> Seems there is a breaking change between 
> [https://github.com/apache/hbase/blob/branch-2.5/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationStorageFactory.java]
>  and 
> [https://github.com/apache/hbase/blob/branch-2.6/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationStorageFactory.java]
>  where a public method used by operator tools has been dropped, and hence the 
> build fails. See 
> [https://github.com/apache/hbase-operator-tools/blob/master/hbase-hbck2/src/main/java/org/apache/hbase/hbck1/ReplicationChecker.java#L58]
>  where the affected method is invoked.
> Since ReplicationStorageFactory is @InterfaceAudience.Private, maybe it is 
> fine.
> Will try to fix this by making changes in hbase-operator-tools to fall back to 
> the new method when building with branch-2.6.
> CC: [~zhangduo]  





[jira] [Resolved] (HBASE-28142) Region Server Logs getting spammed with warning when storefile has no reader

2024-02-18 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain resolved HBASE-28142.

Hadoop Flags: Reviewed
  Resolution: Fixed

Pushed to branch-2.6+. Thanks for the PR [~anchalk1]. Thanks for the review 
[~chrajeshbab...@gmail.com]. Thanks for reporting [~nikitapande]!

> Region Server Logs getting spammed with warning when storefile has no reader
> 
>
> Key: HBASE-28142
> URL: https://issues.apache.org/jira/browse/HBASE-28142
> Project: HBase
>  Issue Type: Improvement
>Reporter: Nikita Pande
>Assignee: Anchal Kejriwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.7.0, 3.0.0-beta-2
>
>
> For HBase tables which have IS_MOB set to TRUE and table metrics enabled, 
> warning logs are generated on the HBase region server: "StoreFile  has a 
> null Reader". 
> After setting IS_MOB to false for a table, these logs are no longer visible. 





[jira] [Created] (HBASE-28380) Build hbase-thirdparty with JDK17

2024-02-19 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28380:
--

 Summary: Build hbase-thirdparty with JDK17
 Key: HBASE-28380
 URL: https://issues.apache.org/jira/browse/HBASE-28380
 Project: HBase
  Issue Type: Task
  Components: java, thirdparty
Reporter: Nihal Jain
Assignee: Nihal Jain








[jira] [Created] (HBASE-28381) Build hbase-operator-tools with JDK17

2024-02-19 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28381:
--

 Summary: Build hbase-operator-tools with JDK17
 Key: HBASE-28381
 URL: https://issues.apache.org/jira/browse/HBASE-28381
 Project: HBase
  Issue Type: Improvement
  Components: java, thirdparty
Reporter: Nihal Jain
Assignee: Nihal Jain








[jira] [Created] (HBASE-28382) Build hbase-connectors with JDK17

2024-02-19 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28382:
--

 Summary: Build hbase-connectors with JDK17
 Key: HBASE-28382
 URL: https://issues.apache.org/jira/browse/HBASE-28382
 Project: HBase
  Issue Type: Improvement
  Components: java, thirdparty
Reporter: Nihal Jain
Assignee: Nihal Jain








[jira] [Created] (HBASE-28383) Update hbase-env.sh with alternates to JVM flags which are no longer supported with JDK17

2024-02-20 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28383:
--

 Summary: Update hbase-env.sh with alternates to JVM flags which 
are no longer supported with JDK17
 Key: HBASE-28383
 URL: https://issues.apache.org/jira/browse/HBASE-28383
 Project: HBase
  Issue Type: Improvement
Reporter: Nihal Jain
Assignee: Nihal Jain


Some JVM flags like {{-XX:+PrintGCDetails}}, {{-XX:+PrintGCDateStamps}}, 
etc. are no longer supported with JDK 17, and HBase would fail to start if they 
are passed. 
We should do an audit and update 
[https://github.com/apache/hbase/blob/master/conf/hbase-env.sh] to capture 
the alternatives/fixes. 

Will refer to the following for a fix/replacement:
 * 
[https://stackoverflow.com/questions/54144713/is-there-a-replacement-for-the-garbage-collection-jvm-args-in-java-11]
 * 
[https://docs.oracle.com/javase/9/tools/java.htm#GUID-BE93ABDC-999C-4CB5-A88B-1994AAAC74D5__CONVERTGCLOGGINGFLAGSTOXLOG-A5046BD1]
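For the GC-logging flags specifically, the JDK 9+ replacement is unified logging (JEP 271). A sketch of how the mapping could look in hbase-env.sh (the log file path and decorator list below are illustrative choices, not the audited values):

```shell
# JDK 8 style, rejected by JDK 17:
#   -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/gc.log
#
# JDK 9+ unified-logging equivalent: -Xlog:gc* enables detailed GC
# logging; time,uptime add the timestamp decorations PrintGCDateStamps
# used to provide.
export HBASE_OPTS="$HBASE_OPTS -Xlog:gc*:file=/tmp/gc.log:time,uptime,level,tags"
echo "$HBASE_OPTS"
```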





[jira] [Created] (HBASE-28388) Field sorting is broken in HBase Web UI

2024-02-21 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-28388:
--

 Summary: Field sorting is broken in HBase Web UI
 Key: HBASE-28388
 URL: https://issues.apache.org/jira/browse/HBASE-28388
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Nihal Jain
Assignee: Nihal Jain







