Daniel,  
I did some more digging and discovered that I can get the Atlas server (with 
the UI at port 21000) to come up if I make a change to my Mac's network 
preferences.  In particular, in the DNS section (Apple menu | System 
Preferences | Network | Wi-Fi | Advanced | DNS), I removed the IP address of 
my local wifi router and replaced it with 8.8.8.8.  In fact, 8.8.8.8 is now 
the only DNS entry.  After doing this, both atlas_start.py and quick_start.py 
work.  You can log into the server at localhost:21000 using the default 
username and password, and also run curl-based queries.  The Atlas login 
screen appears, and then you go into the UI itself within the browser, as 
expected.
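
If you'd rather make the DNS change from a terminal, something like the 
following should be equivalent (assuming the network service really is named 
"Wi-Fi" -- the first command lists the actual names on your machine):

networksetup -listallnetworkservices
networksetup -getdnsservers Wi-Fi
networksetup -setdnsservers Wi-Fi 8.8.8.8

And as a quick curl sanity check against the server (with the default 
admin/admin credentials), something like this should return the version info:

curl -u admin:admin http://localhost:21000/api/atlas/admin/version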

The issue seems to be that the local router is acting as a DNS server and 
returning "bad" resolutions for names like localhost to the Java calls coming 
from Atlas.  As I mentioned earlier, for some reason the servers were being 
mapped to the router's IP instead of the local Mac's.  I don't fully 
understand why that would be the case, but for now, Atlas works.

I hope this provides some direction in your situation.
Thanks,
-Anthony

> On Oct 6, 2017, at 12:23 PM, Anthony Daniell <[email protected]> wrote:
> 
> Daniel,
> Thanks for the additional steps you noted.  I think I found a possible 
> direction.  It appears that "localhost" is not being resolved consistently 
> (at least on my system).  The SOLR server is being mapped to the router's 
> IP address instead of the Mac's address in some cases.  I manually 
> specified the IP address using the -h option when starting Solr, rather 
> than relying on the default "localhost" interpretation.  Not sure where it 
> picks that up.
> In any case, I was able to get the SOLR tutorial (Tutorial 1) running, so I 
> think this might be helpful.
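> 
> For reference, the start command was along these lines (the host and port 
> here are just examples; adjust them to your setup):
> 
> solr/bin/solr start -h 127.0.0.1 -p 8983
> 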
> I hope it helps with your case.
> Thanks,
> -Anthony
> 
> 
>> On Oct 6, 2017, at 10:20 AM, Daniel Lee <[email protected]> wrote:
>> 
>> Many thanks for the reply, Anthony.  The above variables were set to true 
>> in the settings for most of my failures.
>> 
>> In an attempt to get a stable version, I went to the 0.8 branch and tried 
>> that, but ended up in the same place.  I do have it running now and 
>> accepting queries, but I had to do quite a bit of mucking around.
>> 
>> I ended up starting the embedded HBase and Solr by hand, doing the startup 
>> manually via hbase/bin/start-hbase.sh and solr/bin/solr start.  Adding the 
>> vertex_index, edge_index, and fulltext_index manually seems to have 
>> cleared the final blockage.  I'll have to double-check my notes to make 
>> sure that's all I did.  However, import-hive.sh still borks out on me, 
>> although that could be a problem with my Hive setup.
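>> 
>> In case it helps, the index creation followed the pattern in the Atlas 
>> install docs (SOLR_CONF should point at the directory holding Atlas's 
>> solrconfig.xml; the exact path depends on your install):
>> 
>> solr/bin/solr create -c vertex_index -d $SOLR_CONF
>> solr/bin/solr create -c edge_index -d $SOLR_CONF
>> solr/bin/solr create -c fulltext_index -d $SOLR_CONF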
>> 
>> Thanks again!
>> 
>> Daniel Lee
>> 
>> On Fri, Oct 6, 2017 at 9:23 AM, Anthony Daniell <[email protected]> wrote:
>> Daniel/All,
>> 
>> Not sure if this is the full solution, but I found an interesting post 
>> that might push things along:
>> http://coheigea.blogspot.com/2017/04/securing-apache-hadoop-distributed-file_21.html
>> In particular, two additional environment variables need to be set to get 
>> local standalone versions of HBase and Solr started (I presume with their 
>> own zookeeper instances):
>> export MANAGE_LOCAL_HBASE=true
>> export MANAGE_LOCAL_SOLR=true
>> 
>> If you read through the atlas_start.py script, you can see that local mode 
>> should be enabled when these are set to true.
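>> 
>> So, as I understand it, the whole startup sequence should just be:
>> 
>> export MANAGE_LOCAL_HBASE=true
>> export MANAGE_LOCAL_SOLR=true
>> bin/atlas_start.py
>> 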
>> I hope this helps.
>> Thanks,
>> -Anthony
>> 
>> 
>>> On Oct 5, 2017, at 3:18 PM, Daniel Lee <[email protected]> wrote:
>>> 
>>> As a follow-on, I tried doing the following:
>>> 
>>> Uncommented
>>> export HBASE_MANAGES_ZK=true
>>> 
>>> in hbase/conf/hbase-env.sh
>>> 
>>> and set
>>> 
>>> atlas.server.run.setup.on.start=true
>>> 
>>> in conf/atlas-application.properties
>>> 
>>> Thanks
>>> 
>>> Daniel Lee
>>> 
>>> 
>>> On Thu, Oct 5, 2017 at 2:52 PM, Daniel Lee <[email protected]> wrote:
>>> Hey guys,
>>> 
>>> Still running into problems starting up a fully functional 
>>> standalone-mode Atlas instance.  This is all on macOS 10.12.6.  After 
>>> borking out with the lock error running against BerkeleyDB, I followed 
>>> the instructions at
>>> 
>>> http://atlas.apache.org/InstallationSteps.html
>>> 
>>> to try the embedded-hbase-solr profile.
>>> 
>>> mvn clean package -Pdist,embedded-hbase-solr
>>> works fine and even starts and runs a local instance during the testing 
>>> phases.
>>> 
>>> The line:
>>> 
>>> "Using the embedded-hbase-solr profile will configure Atlas so that an 
>>> HBase instance and a Solr instance will be started and stopped along with 
>>> the Atlas server by default."
>>> 
>>> implies I should be able to start the whole shebang with
>>> 
>>> bin/atlas_start.py
>>> 
>>> But I get some pretty ugly error messages in both application.log and 
>>> *.out.  I won't post them all, but they should be easy to replicate.  The 
>>> relevant portions of application.log are:
>>> 
>>> 2017-10-05 14:45:35,349 WARN  - [main-SendThread(localhost:2181):] ~ 
>>> Session 0x0 for server null, unexpected error, closing socket connection 
>>> and attempting reconnect (ClientCnxn$SendThread:1102)
>>> 
>>> java.net.ConnectException: Connection refused
>>> 
>>>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>> 
>>>         at 
>>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>>> 
>>>         at 
>>> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
>>> 
>>>         at 
>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
>>> 
>>> and
>>> 
>>> 2017-10-05 14:45:52,059 WARN  - [main:] ~ hconnection-0x5e9f73b0x0, 
>>> quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode 
>>> (/hbase/hbaseid) (ZKUtil:544)
>>> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException: 
>>> KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
>>> 
>>>         at 
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>> 
>>>         at 
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>> 
>>> 
>>>         at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
>>> 
>>> All of which point to a failure on the zookeeper node. Do I need to start 
>>> up my own zookeeper instance locally?
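>>> 
>>> (A quick check for whether anything is listening on 2181 at all:
>>> 
>>> echo ruok | nc localhost 2181
>>> 
>>> A running zookeeper should answer "imok".)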
>>> 
>>> Thanks!
>>> 
>>> Daniel Lee
>>> 
>>> 
>> 
>> 
> 
