Sounds good. 

Jean, do you mean you will drive the release?

Regards,
Shwetha

On 17/10/16, 5:40 PM, "Jean-Baptiste Onofré" <j...@nanthrax.net> wrote:

>Hi guys,
>
>As 0.7.0 is not very useful without this fix, I would propose releasing
>0.7.1 at least to address it.
>
>I have a couple of other Jira issues in my bucket that I would like to
>include in a 0.7.1 release as well.
>
>Thoughts?
>
>Regards
>JB
>
>On 10/17/2016 10:25 AM, Keval Bhatt wrote:
>> Hi Ismaël
>>
>> Recently the Atlas UI was not loading after a fresh build due to
>> jquery-asBreadcrumbs plugin changes; this is now fixed on master.
>>
>> ATLAS-1199  <https://issues.apache.org/jira/browse/ATLAS-1199>
>>
>> Please check out the latest code from master
>> <https://github.com/apache/incubator-atlas>
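>>
>> Roughly something like this should pick up the fix (using the same build
>> profile as in your mail below; adjust to your setup):
>>
>>     git clone https://github.com/apache/incubator-atlas
>>     cd incubator-atlas
>>     mvn clean package -Pdist,embedded-hbase-solr -DskipTests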
>>
>>
>> Thanks,
>> Keval Bhatt
>>
>> On Mon, Oct 17, 2016 at 1:15 PM, Ismaël Mejía <ieme...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> I used Atlas 0.5 and some of the earlier versions of Atlas 0.7 with no
>>> issues. However, I am now trying to use the released version of Atlas 0.7
>>> and I am having some problems. I built the binary distribution with
>>> embedded HBase/Solr following the instructions from the website:
>>>
>>>     mvn clean package -Pdist,embedded-hbase-solr -DskipTests
>>>
>>> Then I start Atlas like this:
>>>
>>>     export MANAGE_LOCAL_SOLR=true
>>>     export MANAGE_LOCAL_HBASE=true
>>>     bin/atlas_start.py
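>>>
>>> As a quick sanity check I also probe the ports that show up in the logs
>>> further down (just a rough sketch, assuming nc is available; 21000 is the
>>> web UI, and 9026/9027 are the embedded ZooKeeper and Kafka broker ports
>>> that appear in the logs):
>>>
>>>     # each check prints OK only if something is listening on the port
>>>     nc -z localhost 21000 && echo "web UI OK"
>>>     nc -z localhost 9026  && echo "zookeeper OK"
>>>     nc -z localhost 9027  && echo "kafka broker OK"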
>>>
>>> If I go to the initial webpage http://localhost:21000/
>>>
>>> I see the login/password page, and once I log in with the admin user I
>>> get a blank page.
>>>
>>> I considered that maybe I was missing some basic data so I ran the
>>> quickstart:
>>>
>>>     bin/quick_start.py
>>>
>>> Then I log in again, but I still can't see any data. Am I missing
>>> something?
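>>>
>>> For reference, this is the kind of REST call I use to check whether the
>>> quick_start data is actually in the repository, independently of the UI
>>> (a rough check, assuming the 0.7 legacy endpoints and the default admin
>>> credentials; the "DB" type name comes from the quick_start sample):
>>>
>>>     curl -u admin:admin http://localhost:21000/api/atlas/types
>>>     curl -u admin:admin "http://localhost:21000/api/atlas/entities?type=DB"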
>>>
>>> The weird thing is that I don't see any exception from the web app; the
>>> only exceptions in the logs come from atlas_start:
>>>
>>> 2016-10-17 09:06:48,174 INFO  - [main:] ~ Guice modules loaded (GuiceServletConfig:120)
>>> 2016-10-17 09:06:48,177 INFO  - [main:] ~ Starting services (GuiceServletConfig:140)
>>> 2016-10-17 09:06:48,224 WARN  - [main-SendThread(localhost:9026):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:1102)
>>> java.net.ConnectException: Connection refused
>>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>>>     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
>>>     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
>>> 2016-10-17 09:06:48,255 INFO  - [main:] ~ HA is disabled. Hence creating table on startup. (HBaseBasedAuditRepository:287)
>>> 2016-10-17 09:06:48,256 INFO  - [main:] ~ Checking if table apache_atlas_entity_audit exists (HBaseBasedAuditRepository:249)
>>> 2016-10-17 09:06:48,263 INFO  - [main:] ~ Creating table apache_atlas_entity_audit (HBaseBasedAuditRepository:251)
>>> 2016-10-17 09:06:49,326 WARN  - [main-SendThread(localhost:9026):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:1102)
>>> java.net.ConnectException: Connection refused
>>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>>>     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
>>>     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
>>>
>>> I also found this exception when the quick_start script creates the
>>> first entity:
>>>
>>> 2016-10-17 09:21:40,566 WARN  - [qtp161960012-16 - 888beb21-202d-4e96-9750-25d5ebe3bcac:] ~ The configuration auto.commit.enable = false was supplied but isn't a known config. (AbstractConfig:186)
>>> 2016-10-17 09:21:40,700 WARN  - [kafka-producer-network-thread | producer-1:] ~ Error while fetching metadata with correlation id 0 : {ATLAS_ENTITIES=LEADER_NOT_AVAILABLE} (NetworkClient$DefaultMetadataUpdater:600)
>>> 2016-10-17 09:21:40,732 WARN  - [org.apache.atlas.kafka.KafkaNotification:Controller-1-to-broker-1-send-thread:] ~ [org.apache.atlas.kafka.KafkaNotification:Controller-1-to-broker-1-send-thread], Controller 1 epoch 1 fails to send request {controller_id=1,controller_epoch=1,partition_states=[{topic=ATLAS_ENTITIES,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_leaders=[{id=1,host=localhost,port=9027}]} to broker localhost:9027 (id: 1 rack: null). Reconnecting to broker. (Logging$class:89)
>>> java.io.IOException: Connection to 1 was disconnected before the response was read
>>>     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:87)
>>>     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:84)
>>>     at scala.Option.foreach(Option.scala:236)
>>>     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:84)
>>>     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:80)
>>>     at kafka.utils.NetworkClientBlockingOps$.recursivePoll$2(NetworkClientBlockingOps.scala:137)
>>>     at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollContinuously$extension(NetworkClientBlockingOps.scala:143)
>>>     at kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:80)
>>>     at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:189)
>>>     at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:180)
>>>     at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
>>> 2016-10-17 09:21:40,809 WARN  - [kafka-producer-network-thread | producer-1:] ~ Error while fetching metadata with correlation id 1 : {ATLAS_ENTITIES=LEADER_NOT_AVAILABLE} (NetworkClient$DefaultMetadataUpdater:600)
>>> 2016-10-17 09:21:40,915 WARN  - [kafka-producer-network-thread | producer-1:] ~ Error while fetching metadata with correlation id 2 : {ATLAS_ENTITIES=LEADER_NOT_AVAILABLE} (NetworkClient$DefaultMetadataUpdater:600)
>>> 2016-10-17 09:21:41,018 WARN  - [kafka-producer-network-thread | producer-1:] ~ Error while fetching metadata with correlation id 3 : {ATLAS_ENTITIES=LEADER_NOT_AVAILABLE} (NetworkClient$DefaultMetadataUpdater:600)
>>>
>>> Note that the quickstart script still reports success for the entity
>>> creation.
>>>
>>> Am I perhaps missing some additional configuration that prevents the UI
>>> from showing data? Is there something else I should do?
>>>
>>> I am really blocked by this issue. It is nice that I can still use the
>>> REST API, but my goal is to be able to visualize the lineage as well.
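>>>
>>> For completeness, over REST I can get at lineage with something like the
>>> call below; I am quoting the 0.7 hive lineage path from memory, and
>>> "sales_fact_daily_mv" is just one of the quick_start sample tables, so
>>> treat both as assumptions. What I am missing is the same graph rendered
>>> in the UI:
>>>
>>>     # hedged sketch: verify the lineage path and table name against your setup
>>>     curl -u admin:admin \
>>>       "http://localhost:21000/api/atlas/lineage/hive/table/sales_fact_daily_mv/inputs/graph"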
>>>
>>> Thanks in advance for your help,
>>> Ismaël
>>>
>>
>
>-- 
>Jean-Baptiste Onofré
>jbono...@apache.org
>http://blog.nanthrax.net
>Talend - http://www.talend.com
>
