[GitHub] [hadoop] susheel-gupta opened a new pull request, #5320: YARN-11416. FS2CS should use CapacitySchedulerConfiguration in FSQueueConverterBuilder
susheel-gupta opened a new pull request, #5320:
URL: https://github.com/apache/hadoop/pull/5320

…CapacitySchedulerConfiguration object

Change-Id: Ifdab821bee6f0f6db4f8b17208d01cf3901820b7

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16876) KMS delegation tokens are memory expensive
[ https://issues.apache.org/jira/browse/HADOOP-16876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679681#comment-17679681 ]

Bhavik Patel commented on HADOOP-16876:
---------------------------------------

I think this is handled in HADOOP-16828 (https://issues.apache.org/jira/browse/HADOOP-16828) and HDFS-15383 (https://issues.apache.org/jira/browse/HDFS-15383). [~weichiu] [~xyao] Can you confirm?

> KMS delegation tokens are memory expensive
> ------------------------------------------
>
>                 Key: HADOOP-16876
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16876
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: kms
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>         Attachments: Screen Shot 2020-02-20 at 5.04.12 PM.png
>
> We recently saw a number of users reporting high memory consumption in KMS, partly because they were running without HADOOP-14445. Without that fix, the number of KMS delegation tokens that ZooKeeper stores is proportional to the number of KMS servers. This causes two problems:
> (1) the data exceeds ZooKeeper's jute buffer length and operations fail;
> (2) KMS uses more heap memory to store KMS DTs.
> But even with HADOOP-14445, KMS DTs are still expensive. In a heap dump from KMS, the majority of the heap is occupied by znode and KMS DT objects. With the growing number of encrypted clusters and use cases, this is increasingly a problem our users encounter.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
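The proportionality argument above can be made concrete with a back-of-the-envelope estimate. All numbers below except ZooKeeper's default jute buffer size (about 1 MiB, `0xfffff` bytes) are illustrative assumptions, not measured KMS values:

```java
public class JuteBufferEstimate {
    public static void main(String[] args) {
        // ZooKeeper's default jute.maxbuffer is 0xfffff bytes (~1 MiB).
        final long juteMaxBuffer = 0xfffff;
        // Assumed values for the estimate: per-token serialized size and
        // number of outstanding client tokens are hypothetical.
        final long bytesPerToken = 500;
        final long outstandingTokens = 1000;
        for (int kmsServers : new int[] {1, 4, 16}) {
            // Without HADOOP-14445 the stored token count grows with the
            // number of KMS servers, so the serialized token state that a
            // single ZooKeeper operation must carry grows with it too.
            long total = bytesPerToken * outstandingTokens * kmsServers;
            System.out.println(kmsServers + " servers -> " + total + " bytes"
                    + (total > juteMaxBuffer ? " (exceeds jute buffer)" : ""));
        }
    }
}
```

With these assumed numbers, a single KMS server stays under the buffer while 4 or 16 servers exceed it, which matches the failure mode described in the issue.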
[jira] [Commented] (HADOOP-17912) ABFS: Support for Encryption Context
[ https://issues.apache.org/jira/browse/HADOOP-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679670#comment-17679670 ]

Pranav Saxena commented on HADOOP-17912:
----------------------------------------

[~mehakmeet] [~mthakur], requesting you to kindly review the PR. Thanks.

> ABFS: Support for Encryption Context
> ------------------------------------
>
>                 Key: HADOOP-17912
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17912
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.1
>            Reporter: Sumangala Patki
>            Assignee: Pranav Saxena
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> Support for customer-provided encryption keys at the file level, superseding the global (account-level) key use in HADOOP-17536.
> The ABFS driver will support an "EncryptionContext" plugin for retrieving encryption information; the implementation is to be provided by the client. The keys/context retrieved will be sent via request headers to the server, which will store the encryption context. Subsequent REST calls to the server that access data or user metadata of the file will require fetching the encryption context through a GetFileProperties call and retrieving the key from the custom provider before sending the request.
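The create/read flow described in the issue can be sketched as a client-supplied plugin. This is a minimal illustration of the idea only: the interface name `EncryptionContextProvider` and its method names are assumptions for this sketch, not the actual ABFS API surface.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Hypothetical shape of the client-supplied encryption-context plugin.
interface EncryptionContextProvider {
    /** On file create: return the encryption context that the driver sends
     *  in a request header for the server to store alongside the file. */
    String createEncryptionContext(String path);

    /** Before data/metadata access: resolve the per-file key from the
     *  context previously fetched via a GetFileProperties call. */
    byte[] getEncryptionKey(String path, String encryptionContext);
}

// Toy in-memory provider; a real implementation would call the client's
// key-management service instead of a local map.
class InMemoryContextProvider implements EncryptionContextProvider {
    private final Map<String, byte[]> keysByContext = new HashMap<>();
    private int counter = 0;

    @Override
    public String createEncryptionContext(String path) {
        String context = "ctx-" + (++counter);
        keysByContext.put(context,
                ("key-for-" + path).getBytes(StandardCharsets.UTF_8));
        return context;
    }

    @Override
    public byte[] getEncryptionKey(String path, String encryptionContext) {
        return keysByContext.get(encryptionContext);
    }
}

public class EncryptionContextDemo {
    public static void main(String[] args) {
        EncryptionContextProvider provider = new InMemoryContextProvider();
        // Create: the driver asks the plugin for a context to send upstream.
        String ctx = provider.createEncryptionContext("/container/file1");
        // Read: the driver resolves the key from the stored context.
        byte[] key = provider.getEncryptionKey("/container/file1", ctx);
        System.out.println("context=" + ctx + " keyLength=" + key.length);
    }
}
```

The design point is that the server only ever stores the opaque context string; key material stays with the client's provider.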
[jira] [Commented] (HADOOP-18592) Sasl connection failure should log remote address
[ https://issues.apache.org/jira/browse/HADOOP-18592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679630#comment-17679630 ]

Viraj Jasani commented on HADOOP-18592:
---------------------------------------

[~ste...@apache.org] [~mthakur] I believe the new RC will make progress once the mvn site generation issue is fixed (HADOOP-18598). If that is correct, and if the PR for this Jira is merged before the new RC is built, could you please include this in 3.3.5?

> Sasl connection failure should log remote address
> -------------------------------------------------
>
>                 Key: HADOOP-18592
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18592
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.3.4
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If a Sasl connection fails with some generic error, we miss logging the remote server that the client was trying to connect to.
> Sample log:
> {code:java}
> 2023-01-12 00:22:28,148 WARN [20%2C1673404849949,1] ipc.Client - Exception encountered while connecting to the server
> java.io.IOException: Connection reset by peer
>     at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>     at sun.nio.ch.IOUtil.read(IOUtil.java:197)
>     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
>     at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
>     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:141)
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>     at java.io.FilterInputStream.read(FilterInputStream.java:133)
>     at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
>     at java.io.DataInputStream.readInt(DataInputStream.java:387)
>     at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1950)
>     at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367)
>     at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:623)
>     at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:414)
>     ...
>     ... {code}
> We should log the remote server address.
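One way to realize the improvement is to wrap the caught exception so the logged message carries the remote address. This is a minimal sketch of that idea, not the actual patch; the helper name `withRemoteAddress` and the host `nn1.example.com` are made up for illustration.

```java
import java.io.IOException;
import java.net.InetSocketAddress;

public class SaslRemoteAddressDemo {
    // Hypothetical helper: wraps a low-level I/O failure so the WARN line
    // identifies which server the client was connecting to, while keeping
    // the original exception as the cause for the stack trace.
    static IOException withRemoteAddress(IOException cause,
                                         InetSocketAddress server) {
        return new IOException(
                "Exception encountered while connecting to the server "
                        + server, cause);
    }

    public static void main(String[] args) {
        IOException cause = new IOException("Connection reset by peer");
        // createUnresolved avoids a DNS lookup for this illustrative host.
        InetSocketAddress server =
                InetSocketAddress.createUnresolved("nn1.example.com", 8020);
        IOException wrapped = withRemoteAddress(cause, server);
        System.out.println(wrapped.getMessage());
    }
}
```

With a wrapper like this, the sample log above would name the peer on its first line instead of the bare "connecting to the server", while the "Connection reset by peer" trace is preserved as the cause.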