[jira] [Created] (HBASE-15054) Allow 0.94 to compile with JDK8

2015-12-29 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-15054:
-

 Summary: Allow 0.94 to compile with JDK8
 Key: HBASE-15054
 URL: https://issues.apache.org/jira/browse/HBASE-15054
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl


Fix the following two problems:
# PoolMap
# InputSampler




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


One last 0.94 release?

2015-12-29 Thread larsh
Hi All,
thinking about a last 0.94 release (0.94.28), after which I think we should 
"officially" EOL 0.94. The blurb on the downloads page still mentions monthly 
updates for 0.94, which hasn't been true.

One change I'm planning is to make it compilable with JDK 8.

Comments?

-- Lars



[jira] [Created] (HBASE-15053) hbase.client.max.perregion.tasks can affect get or scan operation?

2015-12-29 Thread JIRA
胡托 created HBASE-15053:
--

 Summary: hbase.client.max.perregion.tasks can affect get or scan operation?
 Key: HBASE-15053
 URL: https://issues.apache.org/jira/browse/HBASE-15053
 Project: HBase
  Issue Type: Test
Reporter: 胡托


In the Reference Guide, hbase.client.max.perregion.tasks is described as follows: 
if there are already hbase.client.max.perregion.tasks writes in progress for this 
region, new puts won't be sent to this region until some writes finish.

Can this setting also affect read (get or scan) operations? I want to know, thanks!






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


IndentationCheck of checkstyle

2015-12-29 Thread Ted Yu

> Hi,
> I noticed that there are a lot of checkstyle warnings in the following form:
> 
>  source="com.puppycrawl.tools.checkstyle.checks.indentation.IndentationCheck"/>
> 
> To my knowledge, we use two spaces for each tab. Not sure why all of a sudden 
> we have so many IndentationCheck warnings:
> 
> grep 'hild have incorrect indentati' trunkCheckstyle.xml | wc
> 3133   52645  678294
> 
> If there is no objection, I will create a JIRA and relax the IndentationCheck 
> warning.
> 
> Cheers


[jira] [Created] (HBASE-15052) Use EnvironmentEdgeManager in ReplicationSource

2015-12-29 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-15052:
---

 Summary: Use EnvironmentEdgeManager in ReplicationSource 
 Key: HBASE-15052
 URL: https://issues.apache.org/jira/browse/HBASE-15052
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial


ReplicationSource is passing System.currentTimeMillis() to 
MetricsSource.setAgeOfLastShippedOp(), which subtracts that value from 
EnvironmentEdgeManager.currentTime().
{code}
// if there was nothing to ship and it's not an error
// set "ageOfLastShippedOp" to 0 to indicate that we're current
metrics.setAgeOfLastShippedOp(System.currentTimeMillis(), walGroupId);

public void setAgeOfLastShippedOp(long timestamp, String walGroup) {
  long age = EnvironmentEdgeManager.currentTime() - timestamp;
{code}
We should just use EnvironmentEdgeManager.currentTime() in ReplicationSource.
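As a minimal, self-contained sketch (class and method names here are hypothetical stand-ins, not the actual HBase classes), this shows why the timestamp handed to the metric must come from the same injectable clock the metric subtracts from: when both sides share one time source, passing "now" yields an age of exactly 0, which is what "we're current" means.

```java
// Hypothetical stand-ins (names are assumptions, not the real HBase classes)
// showing why the timestamp passed to the metric must come from the same
// clock the metric subtracts from.
import java.util.function.LongSupplier;

public class AgeMetricSketch {
    // Injectable clock, playing the role of EnvironmentEdgeManager.currentTime()
    static LongSupplier clock = System::currentTimeMillis;

    static long lastShippedAge;

    // Mirrors the shape of MetricsSource.setAgeOfLastShippedOp: age = now - timestamp
    static void setAgeOfLastShippedOp(long timestamp) {
        lastShippedAge = clock.getAsLong() - timestamp;
    }

    public static void main(String[] args) {
        // Pin the clock, as an injected EnvironmentEdge allows in tests.
        clock = () -> 1_000L;
        // Passing the same clock's "now" yields age 0, i.e. "we're current".
        setAgeOfLastShippedOp(clock.getAsLong());
        System.out.println("age=" + lastShippedAge); // prints "age=0"
    }
}
```

With a mixed setup (System.currentTimeMillis() on one side, an injected test clock on the other), the two sources can disagree and the reported age becomes meaningless, which is the bug this issue fixes.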



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15051) Refactor HFileReaderImpl HFileContext usage

2015-12-29 Thread Jonathan Hsieh (JIRA)
Jonathan Hsieh created HBASE-15051:
--

 Summary: Refactor HFileReaderImpl HFileContext usage
 Key: HBASE-15051
 URL: https://issues.apache.org/jira/browse/HBASE-15051
 Project: HBase
  Issue Type: Improvement
Reporter: Jonathan Hsieh


In discussion on HBASE-15035, [~ram_krish] noted a different approach for 
guaranteeing that HFileContext settings are passed in bulk load. This patch 
will take that idea and extend it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Build failed in Jenkins: HBase-1.2 » latest1.8,Hadoop #480

2015-12-29 Thread Stack
The below hung:

Hanging test : org.apache.hadoop.hbase.regionserver.TestHRegion

St.Ack

On Tue, Dec 29, 2015 at 9:28 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/HBase-1.2/jdk=latest1.8,label=Hadoop/480/changes
> >
>
> Changes:
>
> [tedyu] HBASE-14867 SimpleRegionNormalizer needs to have better heuristics
> to
>
> --
> [...truncated 45955 lines...]
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
> Running org.apache.hadoop.hbase.TestInfoServers
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.6 sec - in org.apache.hadoop.hbase.TestInfoServers
> Running org.apache.hadoop.hbase.filter.TestMultiRowRangeFilter
> Running org.apache.hadoop.hbase.filter.TestScanRowPrefix
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.556 sec - in org.apache.hadoop.hbase.filter.TestScanRowPrefix
> Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.965 sec - in org.apache.hadoop.hbase.filter.TestMultiRowRangeFilter
> Running org.apache.hadoop.hbase.filter.TestFilterWrapper
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.628 sec - in org.apache.hadoop.hbase.filter.TestFilterWrapper
> Running org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 96.137 sec - in org.apache.hadoop.hbase.namespace.TestNamespaceAuditor
> Running org.apache.hadoop.hbase.filter.TestFuzzyRowAndColumnRangeFilter
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.66 sec - in org.apache.hadoop.hbase.TestRegionRebalancing
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.758 sec - in org.apache.hadoop.hbase.filter.TestFuzzyRowAndColumnRangeFilter
> Running org.apache.hadoop.hbase.filter.TestColumnRangeFilter
> Running org.apache.hadoop.hbase.io.TestFileLink
> Running org.apache.hadoop.hbase.filter.TestFilterWithScanLimits
> Tests run: 1, Failures: 0, 

Successful: HBase Generate Website

2015-12-29 Thread Apache Jenkins Server
Build status: Successful

If the build succeeded, the website and docs have been generated. If it failed, 
skip to the bottom of this email.

Use the following commands to download the patch and apply it to a clean branch 
based on origin/asf-site. If you prefer to keep the hbase-site repo around 
permanently, you can skip the clone step.

  git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git
  cd hbase-site
  wget -O- https://builds.apache.org/job/hbase_generate_website/84/artifact/website.patch.zip | funzip > 1e4992c6eccb81166cdda842a68644fa962a3fdc.patch
  git fetch
  git checkout -b asf-site-1e4992c6eccb81166cdda842a68644fa962a3fdc origin/asf-site
  git am 1e4992c6eccb81166cdda842a68644fa962a3fdc.patch

At this point, you can preview the changes by opening index.html or any of the 
other HTML pages in your local 
asf-site-1e4992c6eccb81166cdda842a68644fa962a3fdc branch, and you can review 
the differences by running:

  git diff origin/asf-site

There are lots of spurious changes, such as timestamps and CSS styles in 
tables. To see a list of files that have been added, deleted, renamed, changed 
type, or are otherwise interesting, use the following command:

  git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 10 or more lines changed:

  git diff --stat origin/asf-site | grep -Ev "\|\s+\ [1-9]\ [\+-]+$"

When you are satisfied, publish your changes to origin/asf-site using this 
command:

  git push origin asf-site-1e4992c6eccb81166cdda842a68644fa962a3fdc:asf-site

Changes take a couple of minutes to be propagated. You can then remove your 
asf-site-1e4992c6eccb81166cdda842a68644fa962a3fdc branch:

  git checkout asf-site && git branch -d asf-site-1e4992c6eccb81166cdda842a68644fa962a3fdc



If the build failed, see https://builds.apache.org/job/hbase_generate_website/84/console

[jira] [Created] (HBASE-15050) Block Ref counting does not work in Region Split cases.

2015-12-29 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-15050:
--

 Summary: Block Ref counting does not work in Region Split cases.
 Key: HBASE-15050
 URL: https://issues.apache.org/jira/browse/HBASE-15050
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 2.0.0


The reference counting on the blocks does not work correctly when the 
HalfStoreFileReader is used for compactions/scans. 
The reason is that the getFirstKey and getLastKey APIs create a new scanner but 
do not make the needed close() call, and because of that we do not decrement the 
count on the blocks. The same impact will also be observed on the ref count 
that we maintain on the reader. Issue found when I was trying to test another 
feature with lots of evictions. 
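The fix pattern can be sketched with hypothetical stand-ins (not the real HFile reader classes): any scanner that pins blocks on open must be closed so the ref count is decremented again, which try-with-resources guarantees on every code path.

```java
// Hypothetical stand-ins (not the real HFile reader classes) for the leak
// pattern described above: a scanner pins a block while open, so every code
// path that creates one must close it to decrement the ref count again.
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountSketch {
    static final AtomicInteger blockRefCount = new AtomicInteger();

    // Stand-in for an HFile scanner that pins a block on creation.
    static class Scanner implements AutoCloseable {
        Scanner() { blockRefCount.incrementAndGet(); }
        byte[] firstKey() { return new byte[] { 0 }; }
        @Override public void close() { blockRefCount.decrementAndGet(); }
    }

    // Buggy shape: scanner created but never closed, so the ref count leaks.
    static byte[] getFirstKeyLeaky() {
        return new Scanner().firstKey();
    }

    // Fixed shape: try-with-resources guarantees the decrement.
    static byte[] getFirstKeySafe() {
        try (Scanner s = new Scanner()) {
            return s.firstKey();
        }
    }

    public static void main(String[] args) {
        getFirstKeyLeaky();
        System.out.println("after leaky: " + blockRefCount.get()); // prints "after leaky: 1"
        blockRefCount.set(0);
        getFirstKeySafe();
        System.out.println("after safe: " + blockRefCount.get());  // prints "after safe: 0"
    }
}
```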



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15049) AuthTypes.NONE cause exception after HS2 start

2015-12-29 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen resolved HBASE-15049.
---
Resolution: Invalid

> AuthTypes.NONE cause exception after HS2 start
> --
>
> Key: HBASE-15049
> URL: https://issues.apache.org/jira/browse/HBASE-15049
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>
> I set {{hive.server2.authentication}} to {{NONE}}.
> After HS2 starts, I see the exception below in the log:
> {code}
> 2015-12-29 16:58:42,339 ERROR [HiveServer2-Handler-Pool: Thread-31]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
> java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
>     at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>     at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
>     at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
>     at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>     at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>     ... 4 more
> {code}
> IMO the problem is that we use the SASL transport when authType is NONE:
> {code:title=HiveAuthFactory.java}
>   public TTransportFactory getAuthTransFactory() throws LoginException {
>     TTransportFactory transportFactory;
>     if (authTypeStr.equalsIgnoreCase(AuthTypes.KERBEROS.getAuthName())) {
>       try {
>         transportFactory = saslServer.createTransportFactory(getSaslProperties());
>       } catch (TTransportException e) {
>         throw new LoginException(e.getMessage());
>       }
>     } else if (authTypeStr.equalsIgnoreCase(AuthTypes.NONE.getAuthName())) {
>       transportFactory = PlainSaslHelper.getPlainTransportFactory(authTypeStr);
>     } else if (authTypeStr.equalsIgnoreCase(AuthTypes.LDAP.getAuthName())) {
>       transportFactory = PlainSaslHelper.getPlainTransportFactory(authTypeStr);
>     } else if (authTypeStr.equalsIgnoreCase(AuthTypes.PAM.getAuthName())) {
>       transportFactory = PlainSaslHelper.getPlainTransportFactory(authTypeStr);
>     } else if (authTypeStr.equalsIgnoreCase(AuthTypes.NOSASL.getAuthName())) {
>       transportFactory = new TTransportFactory();
>     } else if (authTypeStr.equalsIgnoreCase(AuthTypes.CUSTOM.getAuthName())) {
>       transportFactory = PlainSaslHelper.getPlainTransportFactory(authTypeStr);
>     } else {
>       throw new LoginException("Unsupported authentication type " + authTypeStr);
>     }
>     return transportFactory;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15049) AuthTypes.NONE cause exception after HS2 start

2015-12-29 Thread Heng Chen (JIRA)
Heng Chen created HBASE-15049:
-

 Summary: AuthTypes.NONE cause exception after HS2 start
 Key: HBASE-15049
 URL: https://issues.apache.org/jira/browse/HBASE-15049
 Project: HBase
  Issue Type: Bug
Reporter: Heng Chen


I set {{hive.server2.authentication}} to {{NONE}}.

After HS2 starts, I see the exception below in the log:
{code}
2015-12-29 16:58:42,339 ERROR [HiveServer2-Handler-Pool: Thread-31]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
    at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
    ... 4 more
{code}

IMO the problem is that we use the SASL transport when authType is NONE:
{code:title=HiveAuthFactory.java}
  public TTransportFactory getAuthTransFactory() throws LoginException {
    TTransportFactory transportFactory;
    if (authTypeStr.equalsIgnoreCase(AuthTypes.KERBEROS.getAuthName())) {
      try {
        transportFactory = saslServer.createTransportFactory(getSaslProperties());
      } catch (TTransportException e) {
        throw new LoginException(e.getMessage());
      }
    } else if (authTypeStr.equalsIgnoreCase(AuthTypes.NONE.getAuthName())) {
      transportFactory = PlainSaslHelper.getPlainTransportFactory(authTypeStr);
    } else if (authTypeStr.equalsIgnoreCase(AuthTypes.LDAP.getAuthName())) {
      transportFactory = PlainSaslHelper.getPlainTransportFactory(authTypeStr);
    } else if (authTypeStr.equalsIgnoreCase(AuthTypes.PAM.getAuthName())) {
      transportFactory = PlainSaslHelper.getPlainTransportFactory(authTypeStr);
    } else if (authTypeStr.equalsIgnoreCase(AuthTypes.NOSASL.getAuthName())) {
      transportFactory = new TTransportFactory();
    } else if (authTypeStr.equalsIgnoreCase(AuthTypes.CUSTOM.getAuthName())) {
      transportFactory = PlainSaslHelper.getPlainTransportFactory(authTypeStr);
    } else {
      throw new LoginException("Unsupported authentication type " + authTypeStr);
    }
    return transportFactory;
  }
{code}
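As a sketch only (hypothetical names, not Hive's actual classes), the same dispatch rewritten as a switch makes it plain that NONE, LDAP, PAM and CUSTOM all share the plain SASL transport, and only NOSASL skips the SASL handshake entirely.

```java
// Sketch with hypothetical names (not Hive's real classes): the dispatch
// above rewritten as a switch, making it plain that only NOSASL bypasses
// SASL entirely, while NONE still goes through the plain SASL transport.
public class AuthDispatchSketch {
    enum AuthType { KERBEROS, NONE, LDAP, PAM, NOSASL, CUSTOM }

    static String transportFor(AuthType type) {
        switch (type) {
            case KERBEROS: return "kerberos-sasl"; // saslServer transport factory
            case NOSASL:   return "raw";           // bare TTransportFactory
            default:       return "plain-sasl";    // NONE, LDAP, PAM, CUSTOM
        }
    }

    public static void main(String[] args) {
        System.out.println(transportFor(AuthType.NONE));   // prints "plain-sasl"
        System.out.println(transportFor(AuthType.NOSASL)); // prints "raw"
    }
}
```

This is why a client that connects without sending SASL data under hive.server2.authentication=NONE hits the TSaslTransportException shown earlier; NOSASL is the mode that bypasses the SASL handshake.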




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15048) Add NORMALIZATION_ENABLED constant to hbase shell.

2015-12-29 Thread Appy (JIRA)
Appy created HBASE-15048:


 Summary: Add NORMALIZATION_ENABLED constant to hbase shell.
 Key: HBASE-15048
 URL: https://issues.apache.org/jira/browse/HBASE-15048
 Project: HBase
  Issue Type: Bug
Reporter: Appy
Assignee: Appy
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)