[ https://issues.apache.org/jira/browse/ATLAS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385023#comment-15385023 ]

Suma Shivaprasad commented on ATLAS-639:
----------------------------------------

Also added enablePath() to the Gremlin loop queries, since this is recommended 
for loop closures - http://gremlindocs.spmallette.documentup.com/#pipeenablepath
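For context, the reproduction steps below create circular lineage (j13 feeds j14 and j14 feeds j13), so a loop traversal without path awareness can revisit the same vertices forever. Gremlin's enablePath() exposes the traversal path to the loop closure so it can stop on a cycle; the following is a minimal Python sketch of that idea (not Atlas code — the graph shape and function names are illustrative only):

```python
# Sketch (illustrative, not Atlas code): walking cyclic lineage such as
# j13 <-> j14 from the reproduction steps. The cycle guard plays the role
# that enablePath() enables in a Gremlin loop closure: the traversal can
# inspect its own path and refuse to revisit a vertex.

def lineage(graph, start):
    """Collect every table reachable from `start`, skipping any table
    already on the current traversal path (cycle guard)."""
    results = []

    def walk(node, path):
        for neighbor in graph.get(node, []):
            if neighbor in path:   # cycle detected: j13 -> j14 -> j13
                continue           # skip instead of looping forever
            results.append(neighbor)
            walk(neighbor, path | {neighbor})

    walk(start, {start})
    return results

# The circular inputs from the bug report: each table feeds the other.
graph = {"j13": ["j14"], "j14": ["j13"]}
print(lineage(graph, "j13"))  # ['j14'] -- terminates despite the cycle
```

Without the `neighbor in path` check, the recursion never terminates on this graph, which matches the "non-responsive" lineage API symptom reported below.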

> Exception for lineage request
> -----------------------------
>
>                 Key: ATLAS-639
>                 URL: https://issues.apache.org/jira/browse/ATLAS-639
>             Project: Atlas
>          Issue Type: Bug
>    Affects Versions: trunk
>            Reporter: Ayub Khan
>            Assignee: Vimal Sharma
>            Priority: Critical
>             Fix For: trunk
>
>         Attachments: ATLAS-639.1.patch, ATLAS-639.patch
>
>
> Exception in log for lineage request
> Steps to reproduce:
>       create table j13 (col4 String);
>       create table j14 (col5 String);
>       insert into table j13 select * from j14;
>       insert into table j14 select * from j13;
> The lineage API call, i.e. 
> http://localhost:21000/api/atlas/lineage/"guid"/inputs/graph, is 
> non-responsive. The following exception is observed in the log file:
> 2016-07-18 14:46:20,841 WARN  - [ZkClient-EventThread-99-localhost:9026:] ~ 
> [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf], no brokers found when 
> trying to rebalance. (Logging$class:83)
> 2016-07-18 14:46:20,848 WARN  - [ZkClient-EventThread-99-localhost:9026:] ~ 
> [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf], Topic for path 
> /brokers/topics/ATLAS_HOOK gets deleted, which should not happen at this time 
> (Logging$class:83)
> 2016-07-18 14:48:27,697 WARN  - 
> [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught end of stream 
> exception (NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x155fd3803b20033, likely client has closed socket
>       at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
>       at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
>       at java.lang.Thread.run(Thread.java:745)
> 2016-07-18 14:48:27,715 WARN  - 
> [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught end of stream 
> exception (NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x155fd3803b20031, likely client has closed socket
>       at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
>       at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
>       at java.lang.Thread.run(Thread.java:745)
> 2016-07-18 14:48:34,551 ERROR - [main-EventThread:] ~ Background operation 
> retry gave up (CuratorFrameworkImpl:537)
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss
>       at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>       at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:708)
>       at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:499)
>       at 
> org.apache.curator.framework.imps.BackgroundSyncImpl$1.processResult(BackgroundSyncImpl.java:50)
>       at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:609)
>       at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-07-18 14:48:34,561 ERROR - [main-EventThread:] ~ Background operation 
> retry gave up (CuratorFrameworkImpl:537)
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss
>       at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>       at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:708)
>       at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:499)
>       at 
> org.apache.curator.framework.imps.BackgroundSyncImpl$1.processResult(BackgroundSyncImpl.java:50)
>       at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:609)
>       at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-07-18 14:48:34,566 WARN  - 
> [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ Exception causing close of 
> session 0x155fd3803b20032 due to java.nio.channels.AsynchronousCloseException 
> (NIOServerCnxn:362)
> 2016-07-18 14:50:50,513 WARN  - [main-EventThread:] ~ Session expired event 
> received (ConnectionState:288)
> 2016-07-18 14:52:52,577 WARN  - 
> [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught end of stream 
> exception (NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x155fd3803b20034, likely client has closed socket
>       at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
>       at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
>       at java.lang.Thread.run(Thread.java:745)
> 2016-07-18 14:52:52,589 WARN  - 
> [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf_watcher_executor:] ~ 
> [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf], no brokers found when 
> trying to rebalance. (Logging$class:83)
> 2016-07-18 14:52:52,608 WARN  - [ZkClient-EventThread-99-localhost:9026:] ~ 
> [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf], no brokers found when 
> trying to rebalance. (Logging$class:83)
> 2016-07-18 14:52:52,612 WARN  - [ZkClient-EventThread-99-localhost:9026:] ~ 
> [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf], Topic for path 
> /brokers/topics/ATLAS_HOOK gets deleted, which should not happen at this time 
> (Logging$class:83)
> 2016-07-18 14:52:52,635 WARN  - 
> [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf_watcher_executor:] ~ 
> [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf], no brokers found when 
> trying to rebalance. (Logging$class:83)
> 2016-07-18 14:52:52,657 WARN  - 
> [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf_watcher_executor:] ~ 
> [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf], no brokers found when 
> trying to rebalance. (Logging$class:83)
> 2016-07-18 14:54:59,478 WARN  - 
> [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught end of stream 
> exception (NIOServerCnxn:357)
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x155fd3803b20035, likely client has closed socket
>       at 
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
>       at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
>       at java.lang.Thread.run(Thread.java:745)
> 2016-07-18 14:54:59,487 WARN  - [Curator-Framework-0:] ~ Connection attempt 
> unsuccessful after 126898 (greater than max timeout of 20000). Resetting 
> connection and trying again with a new connection. (ConnectionState:191)
> 2016-07-18 14:55:12,221 ERROR - [ZkClient-EventThread-65-localhost:9026:] ~ 
> Controller 1 epoch 299 initiated state change for partition 
> [ATLAS_ENTITIES,0] from OfflinePartition to OnlinePartition failed 
> (Logging$class:103)
> kafka.common.NoReplicaOnlineException: No replica for partition 
> [ATLAS_ENTITIES,0] is alive. Live brokers are: [Set()], Assigned replicas 
> are: [List(1)]
>       at 
> kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:75)
>       at 
> kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:345)
>       at 
> kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:205)
>       at 
> kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120)
>       at 
> kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117)
>       at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>       at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>       at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>       at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>       at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>       at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
>       at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>       at 
> kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117)
>       at 
> kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:70)
>       at 
> kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:335)
>       at 
> kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:166)
>       at 
> kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:84)
>       at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply$mcZ$sp(KafkaController.scala:1175)
>       at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1173)
>       at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1173)
>       at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
>       at 
> kafka.controller.KafkaController$SessionExpirationListener.handleNewSession(KafkaController.scala:1173)
>       at org.I0Itec.zkclient.ZkClient$6.run(ZkClient.java:735)
>       at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)