[jira] [Created] (HDFS-12946) Add a tool to check rack configuration against EC policies

2017-12-19 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-12946:


 Summary: Add a tool to check rack configuration against EC policies
 Key: HDFS-12946
 URL: https://issues.apache.org/jira/browse/HDFS-12946
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Reporter: Xiao Chen
Assignee: Xiao Chen


From testing we have seen setups with problematic racks / datanodes that cannot support basic EC usage. These problems are usually discovered only after the tests have failed.

We should provide a way to check this beforehand.

Some scenarios:
- not enough datanodes compared to the EC policy's largest data+parity number
- not enough racks to satisfy BPPRackFaultTolerant
- racks too uneven to satisfy BPPRackFaultTolerant
- highly uneven racks (so that the BPP's considerLoad logic may exclude some busy nodes on a rack, resulting in the second scenario above)
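Roughly, the checks such a tool could perform can be sketched as below. Class and method names are illustrative, not actual HDFS APIs, and the per-rack bound only approximates BPPRackFaultTolerant's placement rule (roughly ceil((data+parity)/racks) blocks per rack):

```java
// Hypothetical sketch of the pre-flight checks such a tool could run;
// these are illustrative helpers, not actual HDFS APIs.
public class EcReadinessCheck {

    /** Enough live datanodes for the widest EC policy (data + parity). */
    static boolean hasEnoughNodes(int liveDataNodes, int data, int parity) {
        return liveDataNodes >= data + parity;
    }

    /**
     * BPPRackFaultTolerant spreads block groups across racks: with r racks
     * it places roughly ceil((data + parity) / r) blocks per rack, so the
     * racks together must be able to absorb the whole block group.
     */
    static boolean racksCanHoldPolicy(int[] nodesPerRack, int data, int parity) {
        int racks = nodesPerRack.length;
        if (racks == 0) {
            return false;
        }
        int total = data + parity;
        int perRack = (total + racks - 1) / racks;  // ceil(total / racks)
        int placeable = 0;
        for (int nodes : nodesPerRack) {
            placeable += Math.min(nodes, perRack);
        }
        return placeable >= total;
    }

    public static void main(String[] args) {
        // RS-6-3 on three single-node racks cannot place 9 blocks.
        System.out.println(racksCanHoldPolicy(new int[]{1, 1, 1}, 6, 3));
        // RS-6-3 on three racks of 3 nodes each can.
        System.out.println(racksCanHoldPolicy(new int[]{3, 3, 3}, 6, 3));
    }
}
```

A check like the last one would also catch the "highly uneven racks" case, since a rack's surplus nodes beyond its per-rack share cannot compensate for racks that are too small.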



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-12-19 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/627/

[Dec 18, 2017 7:36:22 PM] (shv) HDFS-12818. Support multiple storages in 
DataNodeCluster /
[Dec 18, 2017 9:19:06 PM] (stevel) HADOOP-13974. S3Guard CLI to support 
list/purge of pending multipart
[Dec 18, 2017 9:20:06 PM] (jlowe) YARN-7661. NodeManager metrics return wrong 
value after update node
[Dec 18, 2017 9:25:47 PM] (cliang) HADOOP-15109. TestDFSIO -read -random 
doesn't work on file sized 4GB.
[Dec 19, 2017 2:02:30 AM] (szetszwo) HDFS-12347. 
TestBalancerRPCDelay#testBalancerRPCDelay fails very
[Dec 19, 2017 3:23:16 AM] (yqlin) HDFS-12930. Remove the extra space in 
HdfsImageViewer.md. Contributed by
[Dec 19, 2017 6:39:01 AM] (junping_du) Add 2.8.3 release jdiff files.
[Dec 19, 2017 7:31:34 AM] (yqlin) HDFS-12937. RBF: Add more unit tests for 
router admin commands.
[Dec 19, 2017 2:57:25 PM] (sunilg) YARN-7620. Allow node partition filters on 
Queues page of new YARN UI.




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Possible null pointer dereference of replication in
   org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
   Short, Byte); dereferenced at INodeFile.java:[line 210]

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose
   internal representation by returning Resource.resources at
   Resource.java:[line 234]

FindBugs :

   module:hadoop-tools/hadoop-fs2img 
   new org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
   may fail to clean up java.io.OutputStream on checked exception; the
   obligation to clean up the resource created at ImageWriter.java:[line 184]
   is not discharged

Failed junit tests :

   hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem 
   hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestHdfsAdmin 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 
   hadoop.hdfs.TestDistributedFileSystemWithECFile 
   hadoop.hdfs.TestDFSStripedInputStream 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 
   hadoop.cli.TestErasureCodingCLI 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.TestReplication 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestErasureCodingPolicies 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 
   hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 
   hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart 
   hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator 
   hadoop.mapreduce.v2.TestUberAM 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/627/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/627/artifact/out/diff-compile-javac-root.txt
  [280K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/627/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/627/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/627/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/627/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/627/artifact/out/whitespace-eol.txt
  [9.2M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/627/artifact/out/whitespace-tabs.txt
  [292K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/627/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/627/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
 

[jira] [Created] (HDFS-12945) Switch to ClientProtocol instead of NamenodeProtocols in NamenodeWebHdfsMethods

2017-12-19 Thread Wei Yan (JIRA)
Wei Yan created HDFS-12945:
--

 Summary: Switch to ClientProtocol instead of NamenodeProtocols in 
NamenodeWebHdfsMethods
 Key: HDFS-12945
 URL: https://issues.apache.org/jira/browse/HDFS-12945
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor


In HDFS-12512, which adds WebHDFS support to Router-based Federation, we found it would be good to switch from NamenodeProtocols to ClientProtocol in NamenodeWebHdfsMethods, to make the code sharable between NameNode WebHDFS and Router WebHDFS. We would like to get some feedback on this refactor. Any concerns?
cc [~elgoiri] [~szetszwo] [~sanjay.radia]






[jira] [Created] (HDFS-12944) Update NOTICE for AssertJ dependency

2017-12-19 Thread Chris Douglas (JIRA)
Chris Douglas created HDFS-12944:


 Summary: Update NOTICE for AssertJ dependency
 Key: HDFS-12944
 URL: https://issues.apache.org/jira/browse/HDFS-12944
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Chris Douglas


HDFS-12665 added a dependency on the ALv2 
[AssertJ|https://github.com/joel-costigliola/assertj-core] library. We should 
update the notice.






[jira] [Created] (HDFS-12943) Consistent Reads from Standby Node

2017-12-19 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-12943:
--

 Summary: Consistent Reads from Standby Node
 Key: HDFS-12943
 URL: https://issues.apache.org/jira/browse/HDFS-12943
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs
Reporter: Konstantin Shvachko


The StandbyNode in HDFS is a replica of the active NameNode. The states of the NameNodes are coordinated via the journal, so it is natural to treat the StandbyNode as a read-only replica. As with any replicated distributed system, the problem of stale reads must be resolved. Our main goal is to provide reads from the standby in a consistent way in order to support the wide range of existing applications running on top of HDFS.
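One common way to resolve stale reads in such a design is to fence each read on journal progress: the client remembers the last transaction id it saw from the active NameNode, and the standby serves the read only once it has applied at least that transaction. A minimal sketch under that assumption (illustrative only, not necessarily the design this JIRA will adopt):

```java
// Minimal sketch of txid-based read fencing on a standby replica.
// Illustrative names; not the actual HDFS implementation.
public class StandbyReadGate {
    // Last journal transaction applied locally by the standby.
    private volatile long appliedTxId;

    StandbyReadGate(long appliedTxId) {
        this.appliedTxId = appliedTxId;
    }

    /** The standby tails the journal and advances its applied txid. */
    void applyJournalUpTo(long txId) {
        if (txId > appliedTxId) {
            appliedTxId = txId;
        }
    }

    /**
     * A read carrying the client's last-seen txid is consistent only once
     * the standby has applied at least that much of the journal; otherwise
     * the client must wait or fall back to the active.
     */
    boolean canServeRead(long clientLastSeenTxId) {
        return appliedTxId >= clientLastSeenTxId;
    }
}
```

The key property is that a client never observes state older than what it has already seen, even when its reads move from the active to the standby.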






Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released

2017-12-19 Thread Jonathan Kelly
Thanks, Andrew!

On Mon, Dec 18, 2017 at 4:54 PM Andrew Wang 
wrote:

> Thanks for the spot, I just pushed a correct tag. I can't delete the bad
> tag myself, will ask ASF infra for help.
>
> On Mon, Dec 18, 2017 at 4:46 PM, Jonathan Kelly 
> wrote:
>
>> Congrats on the huge release!
>>
>> I just noticed, though, that the Github repo does not appear to have the
>> correct tag for 3.0.0. I see a new tag called "rel/release-" that points to
>> the same commit as "release-3.0.0-RC1"
>> (c25427ceca461ee979d30edd7a4b0f50718e6533). I assume that should have
>> actually been called "rel/release-3.0.0" to match the pattern for prior
>> releases.
>>
>> Thanks,
>> Jonathan Kelly
>>
>> On Thu, Dec 14, 2017 at 10:45 AM Andrew Wang 
>> wrote:
>>
>>> Hi all,
>>>
>>> I'm pleased to announce that Apache Hadoop 3.0.0 is generally available
>>> (GA).
>>>
>>> 3.0.0 GA consists of 302 bug fixes, improvements, and other enhancements
>>> since 3.0.0-beta1. This release marks a point of quality and stability
>>> for
>>> the 3.0.0 release line, and users of earlier 3.0.0-alpha and -beta
>>> releases
>>> are encouraged to upgrade.
>>>
>>> Looking back, 3.0.0 GA is the culmination of over a year of work on the
>>> 3.0.0 line, starting with 3.0.0-alpha1 which was released in September
>>> 2016. Altogether, 3.0.0 incorporates 6,242 changes since 2.7.0.
>>>
>>> Users are encouraged to read the overview of major changes
>>>  in 3.0.0. The GA
>>> release
>>> notes
>>> <
>>> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/RELEASENOTES.3.0.0.html
>>> >
>>>  and changelog
>>> <
>>> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html
>>> >
>>> detail
>>> the changes since 3.0.0-beta1.
>>>
>>> The ASF press release provides additional color and highlights some of
>>> the
>>> major features:
>>>
>>>
>>> https://globenewswire.com/news-release/2017/12/14/1261879/0/en/The-Apache-Software-Foundation-Announces-Apache-Hadoop-v3-0-0-General-Availability.html
>>>
>>> Let me end by thanking the many, many contributors who helped with this
>>> release line. We've only had three major releases in Hadoop's 10 year
>>> history, and this is our biggest major release ever. It's an incredible
>>> accomplishment for our community, and I'm proud to have worked with all
>>> of
>>> you.
>>>
>>> Best,
>>> Andrew
>>>
>>
>


[jira] [Created] (HDFS-12941) Ozone: ConfServlet does not trim values during the description parsing

2017-12-19 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12941:
---

 Summary: Ozone: ConfServlet does not trim values during the 
description parsing
 Key: HDFS-12941
 URL: https://issues.apache.org/jira/browse/HDFS-12941
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


In ozone-default.xml a stray space was added to the ozone.open.key.expire.threshold key name.

It causes an NPE in ConfServlet:

{code}
Server Error
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.conf.ConfServlet.lambda$processConfigTagRequest$0(ConfServlet.java:147)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at 
java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at 
java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at 
org.apache.hadoop.conf.ConfServlet.processConfigTagRequest(ConfServlet.java:143)
at org.apache.hadoop.conf.ConfServlet.doGet(ConfServlet.java:101)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
at 
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1578)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
{code}

The problem is here:

{code}
Properties properties = config.getAllPropertiesByTags(tagList);
if (propertyMap == null) {
  loadDescriptions();
}

List filteredProperties = new ArrayList<>();

properties.stringPropertyNames().stream().forEach(key -> {
  if (config.get(key) != null) {
    propertyMap.get(key).setValue(config.get(key));
    filteredProperties.add(propertyMap.get(key));
  }
});
{code}

We iterate over the keys of the loaded configuration (which config.getAllPropertiesByTags returns trimmed) and try to look up each key's description, but the keys in the key->description map were never trimmed, so the lookup returns null.
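One possible fix is to trim the key names when the description map is loaded, so that lookups with the already-trimmed configuration keys succeed. A sketch with illustrative names (not the actual ConfServlet code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the mismatch and a defensive fix: rebuild the key->description
// map with whitespace-trimmed keys. Illustrative names only.
public class TrimmedDescriptions {

    /** Rebuild a key->description map with whitespace-trimmed keys. */
    static Map<String, String> trimKeys(Map<String, String> raw) {
        Map<String, String> out = new HashMap<>();
        raw.forEach((key, desc) -> out.put(key.trim(), desc));
        return out;
    }

    /** Convenience used by the demo below. */
    static Map<String, String> mapOf(String key, String desc) {
        Map<String, String> m = new HashMap<>();
        m.put(key, desc);
        return m;
    }

    public static void main(String[] args) {
        // The untrimmed key, as loaded from the broken ozone-default.xml.
        Map<String, String> raw = mapOf("ozone.open.key.expire.threshold ", "expiry");
        // Lookup with the trimmed configuration key fails against the raw
        // map (this null is what the lambda later dereferences)...
        System.out.println(raw.get("ozone.open.key.expire.threshold"));
        // ...but succeeds once the map keys are trimmed.
        System.out.println(trimKeys(raw).get("ozone.open.key.expire.threshold"));
    }
}
```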




[jira] [Created] (HDFS-12940) Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails occasionally

2017-12-19 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-12940:
--

 Summary: Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails 
occasionally
 Key: HDFS-12940
 URL: https://issues.apache.org/jira/browse/HDFS-12940
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Nanda kumar


{{TestKeySpaceManager#testExpiredOpenKey}} is flaky.

In {{testExpiredOpenKey}} we open four keys for writing and wait for them to expire (without committing). Verification is done by querying {{MiniOzoneCluster}} and matching the count. Since the {{cluster}} instance of {{MiniOzoneCluster}} is shared between test cases in {{TestKeySpaceManager}}, we should not rely on the count; verification should match the key names instead.
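The suggested assertion style can be sketched as below: verify by key-name membership rather than by a count, so keys left over from other test cases on the shared cluster cannot break the check (names are illustrative, not the actual test code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of a membership-based assertion for the expired-open-key check.
// Illustrative helpers; not the actual TestKeySpaceManager code.
public class ExpiredKeyAssertion {

    static Set<String> setOf(String... keys) {
        return new HashSet<>(Arrays.asList(keys));
    }

    /** True iff every key this test opened is reported as expired. */
    static boolean allExpired(Set<String> reportedExpired, String... openedKeys) {
        return reportedExpired.containsAll(Arrays.asList(openedKeys));
    }

    public static void main(String[] args) {
        // The shared cluster also reports a stale key from an earlier test
        // case; a count-based assertion (4 == 5) would fail here, while the
        // membership check still passes.
        Set<String> reported = setOf("k1", "k2", "k3", "k4", "stale-key");
        System.out.println(allExpired(reported, "k1", "k2", "k3", "k4"));
    }
}
```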






[jira] [Created] (HDFS-12939) Ozone: KSM crashes on jmx call

2017-12-19 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-12939:
--

 Summary: Ozone: KSM crashes on jmx call
 Key: HDFS-12939
 URL: https://issues.apache.org/jira/browse/HDFS-12939
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nanda kumar
Priority: Critical


When we issue a {{jmx}} REST call (or click the {{jmx}} option in the KSM UI), the KSM daemon crashes with the following exception:
{noformat}
2017-12-19 16:01:17,002 DEBUG jmx.JMXJsonServlet: getting attribute 
UsageThresholdCount of java.lang:type=MemoryPool,name=PS Survivor Space threw 
an exception
javax.management.RuntimeMBeanException: 
java.lang.UnsupportedOperationException: Usage threshold is not supported
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:344)
at 
org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:322)
at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:216)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
at 
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1578)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsupportedOperationException: Usage threshold is not 
supported
at 
sun.management.MemoryPoolImpl.getUsageThresholdCount(MemoryPoolImpl.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at