[jira] [Resolved] (HDFS-17533) RBF: Unit tests that use embedded SQL failing in CI
[ https://issues.apache.org/jira/browse/HDFS-17533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Simbarashe Dzinamarira resolved HDFS-17533.
-------------------------------------------
    Resolution: Fixed

> RBF: Unit tests that use embedded SQL failing in CI
> ---------------------------------------------------
>
>                 Key: HDFS-17533
>                 URL: https://issues.apache.org/jira/browse/HDFS-17533
>             Project: Hadoop HDFS
>          Issue Type: Test
>            Reporter: Simbarashe Dzinamarira
>            Assignee: Simbarashe Dzinamarira
>            Priority: Major
>
> In the CI runs for RBF, the following two tests are failing:
> {noformat}
> [ERROR] Failures:
> [ERROR] org.apache.hadoop.hdfs.server.federation.router.security.token.TestSQLDelegationTokenSecretManagerImpl.null
> [ERROR] Run 1: TestSQLDelegationTokenSecretManagerImpl Multiple Failures (2 failures)
> java.sql.SQLException: No suitable driver found for jdbc:derby:memory:TokenStore;create=true
> java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:derby:memory:TokenStore;drop=true
> [ERROR] Run 2: TestSQLDelegationTokenSecretManagerImpl Multiple Failures (2 failures)
> java.sql.SQLException: No suitable driver found for jdbc:derby:memory:TokenStore;create=true
> java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:derby:memory:TokenStore;drop=true
> [ERROR] Run 3: TestSQLDelegationTokenSecretManagerImpl Multiple Failures (2 failures)
> java.sql.SQLException: No suitable driver found for jdbc:derby:memory:TokenStore;create=true
> java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:derby:memory:TokenStore;drop=true
> [INFO]
> [ERROR] org.apache.hadoop.hdfs.server.federation.store.driver.TestStateStoreMySQL.null
> [ERROR] Run 1: TestStateStoreMySQL Multiple Failures (2 failures)
> java.sql.SQLException: No suitable driver found for jdbc:derby:memory:StateStore;create=true
> java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:derby:memory:StateStore;drop=true
> [ERROR] Run 2: TestStateStoreMySQL Multiple Failures (2 failures)
> java.sql.SQLException: No suitable driver found for jdbc:derby:memory:StateStore;create=true
> java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:derby:memory:StateStore;drop=true
> [ERROR] Run 3: TestStateStoreMySQL Multiple Failures (2 failures)
> java.sql.SQLException: No suitable driver found for jdbc:derby:memory:StateStore;create=true
> java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:derby:memory:StateStore;drop=true
> {noformat}
> [https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6804/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt]
>
> I believe the fix is first registering the driver:
> [https://dev.mysql.com/doc/connector-j/en/connector-j-usagenotes-connect-drivermanager.html]
> [https://stackoverflow.com/questions/22384710/java-sql-sqlexception-no-suitable-driver-found-for-jdbcmysql-localhost3306]

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
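For context on the suggested fix: a `java.sql.SQLException: No suitable driver found` means no registered JDBC driver accepted the URL. JDBC drivers register themselves with `DriverManager` in a static initializer when their class is loaded, so the classic workaround is loading the driver class explicitly before `getConnection`. A minimal sketch of that pattern follows; the Derby class name mentioned in the comment (`org.apache.derby.jdbc.EmbeddedDriver`) only loads when Derby is on the classpath, so the `main` demo uses stand-in class names that are guaranteed present or absent.

```java
// Sketch of explicit JDBC driver loading, the workaround proposed in the
// issue. Loading the class runs its static initializer, which (for a JDBC
// driver) registers it with java.sql.DriverManager.
public class DriverRegistration {

    // Returns true if the class could be loaded; for a JDBC driver this
    // side-effect registers it with DriverManager.
    public static boolean tryLoadDriver(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // In the failing tests the argument would be
        // "org.apache.derby.jdbc.EmbeddedDriver"; the names below are
        // stand-ins to demonstrate the load-or-fail behavior.
        if (tryLoadDriver("org.example.NoSuchDriver")) throw new AssertionError();
        if (!tryLoadDriver("java.sql.DriverManager")) throw new AssertionError();
    }
}
```

Note that modern JDBC normally auto-loads drivers through `ServiceLoader`, which is why a dependency-version change (as in the eventual fix) can make the explicit `Class.forName` unnecessary.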
[jira] [Commented] (HDFS-17533) RBF: Unit tests that use embedded SQL failing in CI
[ https://issues.apache.org/jira/browse/HDFS-17533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851725#comment-17851725 ]

Simbarashe Dzinamarira commented on HDFS-17533:
-----------------------------------------------

This was resolved by rolling back the Derby version: https://github.com/apache/hadoop/pull/6841
[jira] [Commented] (HDFS-13603) Warmup NameNode EDEK thread retries continuously if there's an invalid key
[ https://issues.apache.org/jira/browse/HDFS-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851721#comment-17851721 ]

ASF GitHub Bot commented on HDFS-13603:
---------------------------------------

simbadzina commented on PR #6860:
URL: https://github.com/apache/hadoop/pull/6860#issuecomment-2145635848

   I've merged https://github.com/apache/hadoop/pull/6860. Could you add an empty commit on this PR so that the tests are run against it merged with the latest trunk?

> Warmup NameNode EDEK thread retries continuously if there's an invalid key
> ---
>
>                 Key: HDFS-13603
>                 URL: https://issues.apache.org/jira/browse/HDFS-13603
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: encryption, namenode
>    Affects Versions: 2.8.0
>            Reporter: Antony Jay
>            Priority: Major
>              Labels: pull-request-available
>
> https://issues.apache.org/jira/browse/HDFS-9405 adds a background thread to pre-warm the EDEK cache.
> However, this fails and retries continuously if key retrieval fails for one encryption zone. In our use case, we have temporarily removed keys for certain encryption zones. Currently the NameNode and KMS logs are filled with errors from the background thread retrying the warmup forever.
> The pre-warm thread should:
> * Continue to refresh other encryption zones even if it fails for one
> * Retry only if it fails for all encryption zones, which will be the case when the KMS is down
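The two bullet points in the description amount to a simple retry policy: warm each encryption zone independently, tolerate per-zone failures, and retry the whole pass only when every zone failed (e.g. when the KMS is unreachable). A minimal sketch under those assumptions; the `fetchKey` callback and zone names are hypothetical stand-ins, not the actual NameNode or KMS API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of the per-zone warm-up policy requested in the issue, not the
// real Hadoop implementation.
public class EdekWarmupSketch {

    // Attempts to warm every zone independently and returns the zones that
    // failed; a failure in one zone never stops the loop.
    public static List<String> warmUp(List<String> zones,
                                      Function<String, Boolean> fetchKey) {
        List<String> failed = new ArrayList<>();
        for (String zone : zones) {
            try {
                if (!fetchKey.apply(zone)) {
                    failed.add(zone);  // invalid/removed key: record and keep going
                }
            } catch (RuntimeException e) {
                failed.add(zone);      // transient KMS error: record and keep going
            }
        }
        return failed;
    }

    // Retry the whole warm-up only if no zone succeeded (the likely
    // "KMS is down" case from the description).
    public static boolean shouldRetry(int zoneCount, List<String> failed) {
        return zoneCount > 0 && failed.size() == zoneCount;
    }

    public static void main(String[] args) {
        List<String> failed = warmUp(List.of("/zone/a", "/zone/b"),
                                     zone -> zone.endsWith("a"));
        if (!failed.equals(List.of("/zone/b"))) throw new AssertionError(failed);
        if (shouldRetry(2, failed)) throw new AssertionError();
    }
}
```

The design point is that a partial failure leaves the cache warm for every healthy zone, while a total failure still gets retried.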
[jira] [Commented] (HDFS-13603) Warmup NameNode EDEK thread retries continuously if there's an invalid key
[ https://issues.apache.org/jira/browse/HDFS-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851710#comment-17851710 ]

ASF GitHub Bot commented on HDFS-13603:
---------------------------------------

simbadzina merged PR #6860:
URL: https://github.com/apache/hadoop/pull/6860
[jira] [Resolved] (HDFS-17538) Add transfer priority queue for decommissioning datanode
[ https://issues.apache.org/jira/browse/HDFS-17538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuanbo Liu resolved HDFS-17538.
-------------------------------
    Resolution: Duplicate

> Add transfer priority queue for decommissioning datanode
> ---
>
>                 Key: HDFS-17538
>                 URL: https://issues.apache.org/jira/browse/HDFS-17538
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Yuanbo Liu
>            Priority: Major
>         Attachments: image-2024-05-29-16-24-45-601.png, image-2024-05-29-16-26-58-359.png, image-2024-05-29-16-27-35-886.png
>
> When decommissioning a datanode, blocks are checked disk by disk and then sent to the DN to trigger transfer work. This makes one disk of the decommissioning DN very busy, leaves CPUs stuck in io-wait under high load, and sometimes even leads to OOM, as below:
> !image-2024-05-29-16-24-45-601.png|width=909,height=170!
> !image-2024-05-29-16-26-58-359.png|width=909,height=228!
> !image-2024-05-29-16-27-35-886.png|width=930,height=218!
> Proposal: add a priority queue for transferring blocks when decommissioning a datanode.
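The idea in the proposal can be illustrated with `java.util.PriorityQueue`: instead of draining one disk's blocks first, order pending transfers so that the disk with the fewest in-flight transfers is served next, spreading I/O across volumes. This is a hypothetical sketch of the queuing policy only; `PendingTransfer` and `inFlightOnDisk` are made-up names, not Hadoop code.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Illustrative priority-queue ordering for decommission block transfers:
// least-loaded disk first, so a single volume is never saturated.
public class TransferQueueSketch {

    static final class PendingTransfer {
        final String blockId;
        final int inFlightOnDisk;  // transfers already running on this block's disk

        PendingTransfer(String blockId, int inFlightOnDisk) {
            this.blockId = blockId;
            this.inFlightOnDisk = inFlightOnDisk;
        }
    }

    // Drains the queue in priority order (blocks on less busy disks first).
    public static List<String> drainOrder(List<PendingTransfer> pending) {
        PriorityQueue<PendingTransfer> queue = new PriorityQueue<>(
            Comparator.comparingInt((PendingTransfer t) -> t.inFlightOnDisk));
        queue.addAll(pending);
        List<String> order = new ArrayList<>();
        while (!queue.isEmpty()) {
            order.add(queue.poll().blockId);
        }
        return order;
    }

    public static void main(String[] args) {
        List<PendingTransfer> pending = List.of(
            new PendingTransfer("blk_1", 3),
            new PendingTransfer("blk_2", 0),
            new PendingTransfer("blk_3", 1));
        // The block on the idle disk (blk_2) is scheduled first.
        if (!drainOrder(pending).equals(List.of("blk_2", "blk_3", "blk_1"))) {
            throw new AssertionError();
        }
    }
}
```

In a real implementation the priority would be recomputed as transfers complete; this sketch only shows the ordering that avoids hammering a single disk.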