[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately
[ https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jepson updated PHOENIX-4319:

Description:

*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum", zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()

for (a <- 1 to 100) {
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
    "DW.wms_do",
    Array("WAREHOUSE_NO", "DO_NO"),
    predicate = Some(
      """
        |MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
        |and MOD_TIME < TO_DATE('end_day', '-MM-dd')
      """.stripMargin
        .replaceAll("begin_day", "2017-10-01")
        .replaceAll("end_day", "2017-10-25")),
    conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum number of client connections from a single host to be reached (we have maxClientCnxns set to 500 in the ZooKeeper config).

*Zookeeper connections:*
[https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]

> Zookeeper connection should be closed immediately
> -------------------------------------------------
>
>                 Key: PHOENIX-4319
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4319
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.10.0
>        Environment: phoenix4.10, hbase1.2.0
>           Reporter: Jepson
>             Labels: patch
>  Original Estimate: 48h
> Remaining Estimate: 48h

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
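To make the reproduction easier to follow, the predicate string that the loop passes to phoenixTableAsDataFrame can be expanded in isolation; it is pure string manipulation and runs without Spark or Phoenix. The format literal '-MM-dd' is copied verbatim from the report.

```scala
// Copied from the reproduction code: the begin_day/end_day placeholders are
// substituted into the predicate string before it is handed to Phoenix.
val predicate =
  """
    |MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
    |and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin
    .replaceAll("begin_day", "2017-10-01")
    .replaceAll("end_day", "2017-10-25")

// The resulting WHERE-clause fragment:
println(predicate)
```

Each of the 100 loop iterations builds the same predicate and triggers a fresh table load, which is what makes the connection build-up visible.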
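The fix direction the report asks for is the standard close-after-use pattern: release each ZooKeeper connection as soon as the per-iteration work completes, rather than letting connections accumulate toward the maxClientCnxns limit. The sketch below illustrates that pattern with a hypothetical counting stand-in class, FakeZkConnection; it is not Phoenix's actual API or internals.

```scala
// Hypothetical stand-in for a ZooKeeper client connection, used only to
// illustrate the close-after-use pattern; NOT Phoenix's real API.
object ConnTracker { var open = 0 }

final class FakeZkConnection {
  ConnTracker.open += 1
  def close(): Unit = ConnTracker.open -= 1
}

// The reproduction loop runs 100 iterations. If each iteration opened a
// connection and never closed it, 100 connections would stay open and the
// server-side limit (maxClientCnxns = 500) would eventually be hit.
// Closing in a finally block releases each connection immediately.
for (_ <- 1 to 100) {
  val conn = new FakeZkConnection
  try {
    // ... per-iteration query work would go here ...
  } finally {
    conn.close()
  }
}

println(s"connections still open: ${ConnTracker.open}")
```

With the try/finally in place the open count returns to zero after the loop; without the close() call it would sit at 100, matching the build-up shown in the linked ZooKeeper connections screenshot.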