[jira] [Commented] (ATLAS-1512) Hive Hook fails due to - Table not found exception

2017-03-28 Thread Russell Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945775#comment-15945775
 ] 

Russell Anderson commented on ATLAS-1512:
-

This issue still occurs in the 0.8 release candidates.

> Hive Hook fails due to - Table not found exception
> --
>
> Key: ATLAS-1512
> URL: https://issues.apache.org/jira/browse/ATLAS-1512
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core, atlas-intg
> Environment: newly built, and configured Apache Atlas .7.1rc3
> BigInsights 4.2.0.0 -
> The example worked fine using Apache Atlas .7.0rc2
>    Reporter: Russell Anderson
>Priority: Critical
>
> After configuring the Hive Hook, I ran the following Hive SQL command:
> Create table sysibm.sparktest as select * from sysibm.sparktable;
> - the table sysibm.sparktable exists
> - the table sysibm.sparktest is created as a result of this command
> - the table is created successfully within the schema sysibm, but the Hive 
> Hook is not able to deal with this correctly.
> 2017-01-31 07:49:56,979 INFO  metastore.HiveMetaStore 
> (HiveMetaStore.java:logIn\
> fo(746)) - 6: get_table : db=sysibm tbl=sparktest
> 2017-01-31 07:49:56,980 INFO  HiveMetaStore.audit 
> (HiveMetaStore.java:logAuditE\
> vent(371)) - ugi=hiveip=unknown-ip-addr  cmd=get_table : db=sysibm 
> tbl=\
> sparktest
> 2017-01-31 07:49:56,984 ERROR metadata.Hive (Hive.java:getTable(1119)) - 
> Table \
> sparktest not found: sysibm.sparktest table not found
> 2017-01-31 07:49:56,984 ERROR hook.HiveHook (HiveHook.java:run(207)) - Atlas 
> ho\
> ok failed due to error
> java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInform\
> ation.java:1672)
> at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:197)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:5\
> 11)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor\
> .java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecuto\
> r.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table 
> not \
> found sparktest
> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1120)
> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1090)
> at 
> org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.\
> java:559)
> at 
> org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.\
> java:581)
> at 
> org.apache.atlas.hive.hook.HiveHook.processHiveEntity(HiveHook.java:\
> 669)
> at 
> org.apache.atlas.hive.hook.HiveHook.registerProcess(HiveHook.java:64\
> 9)
> at org.apache.atlas.hive.hook.HiveHook.collect(HiveHook.java:270)
> at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:85)
> at org.apache.atlas.hive.hook.HiveHook$2$1.run(HiveHook.java:200)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInform\
> ation.java:1657)
> ... 6 more
> 2017-01-31 07:49:57,029 INFO  log.PerfLogger 
> (PerfLogger.java:PerfLogEnd(148)) \



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [jira] [Commented] (ATLAS-1410) V2 Glossary API

2017-03-19 Thread Russell Anderson


The points that Mandy raises need to be addressed.

Russ

Sent from my iPhone

> On Feb 19, 2017, at 6:37 AM, Mandy Chessell (JIRA) 
wrote:
>
>
>
[ 
https://issues.apache.org/jira/browse/ATLAS-1410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15873650#comment-15873650
 ]

>
> Mandy Chessell commented on ATLAS-1410:
> ---
>
> Comments on V1.0
>
> - Page numbers would help to tie these comments to the document.
> - Page 2 - Asset type - defined in terms of itself.  How are asset types used?
Or is this not relevant to this paper?
> - Page 2 - Why do we need to know about V1 and V2?  I think it is because
the current interface works with V1 and the new one will work with V2 - it
would be helpful to state this explicitly.
> - Page 4 - bullets 4-5 - has-a and is-a relationships are semantic
relationships.
> - Page 4 - missing from list - ability to associate a semantic meaning to
a classification (v2), trait (v1)?
> - Page 4 - Missing from the list - "typed-by" relationship to associate
terms that include meaning in context with terms that describe more pure
objects.  For example Home Address is typed by Address.
> - Page 5 - Figure 1 - I am not comfortable with terms being owned by
categories.  I think each term should be owned by a glossary and linked
into 0, 1 or more categories as appropriate.  This creates a much simpler
deletion rule for the API/end user - particularly when you look at Figure 2,
where terms are owned by multiple categories - i.e. delete a term from its
glossary and it is deleted.  In the proposed design, it raises such
questions as "Is the term deleted when unlinked from all categories - or
from the first category it is linked to?"
> - Page 6 - Figure 3 - I need more detail to understand the "classifies"
relationship and how it relates to a classification.  It seems redundant.
Would you not relate a term to a classification which is in itself
semantically classified by its definition term?
> - Page 6 - Bullet 6) - What is the alternative to using Gremlin queries?
> - Page 6 - Bullet 7) - is this an incomplete sentence - or is the
paragraph that follows supposed to be a nested bullet list?  Assuming it is
a follow-on point, my confusion is that I do not understand why the
term/category hierarchy is relevant to the enhancement of classifications.
The Classification object defines the type of classification and its
meaning comes from the term?  Is this suggesting that the relationships
between classifications come from the term relationships, in the same
way we do this in IGC today?  If so, it may help to show an example.
> - Page 7 - Figure 4 and 5 - what is the difference between
"Classification" and "Classification Relationship"?
> - Page 7 - Maybe strange examples - the glossaries would be for different
subject areas - for example, there may be a marketing glossary, a customer
care glossary, a banking glossary.  These may be used for associating
meaning with data assets (i.e. data assets).  There may also be glossaries for
different regulations, or standard governance approaches, and these may
include terms that can be used to describe classifications for data that
drive operational governance.
> - Page 8 - I am not sure what the proposed enhancements are - it just
seems to list the problems with the current model.  All relationships in
metadata are bi-directional; that should be the default.  This mechanism
seems complicated.  We really need to define relationships independently of
entities so we can define attributes on these relationships.  The
Classification is actually an example of an independently defined
relationship that includes the GUIDs of the two entities it connects.  This
should be the common style of relationship.
> - Page 9 - on discussion point - a Taxonomy is a hierarchy of categories
that the terms are placed in - I thought this was included in the proposal
and we do need this for organising terms so that people can find them - and
the category hierarchies (taxonomies) help to provide context to terms too.
Also, the semantic relationships discussed would mean we could support a
simple ontology.
> - Page 9 - Fully-qualified name - what about a grandparent or parent term?
What does a fully qualified name mean and when is it used?  The unique name
is its GUID.  Its path name (there may be many) is the navigation to the
term through the category hierarchies.
> - Page 9 - why do Atlas terms need to follow the schema defined at
this link -
https://www.ibm.com/support/knowledgecenter/en/SSN364_8.8.0/com.ibm.ima.using/comp/vocab/terms_prop.html?
   It seems to imply a lifecycle that is not included in this proposal, and
a very specific modelling of the IBM industry models, which have mandatory
fields that are not always applicable to all glossaries.  I think this doc
should describe the schema of the glossary term explicitly and explain the
fields.
> - page 10 - Figure 7 shows the navigation relationships and 1 

[jira] [Commented] (ATLAS-1512) Hive Hook fails due to - Table not found exception

2017-02-01 Thread Russell Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848409#comment-15848409
 ] 

Russell Anderson commented on ATLAS-1512:
-

ATLAS-1274 - a table-not-found exception with temporary tables, reported and not 
resolved - may also be related.

> Hive Hook fails due to - Table not found exception
> --
>
> Key: ATLAS-1512
> URL: https://issues.apache.org/jira/browse/ATLAS-1512
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core, atlas-intg
> Environment: newly built, and configured Apache Atlas .7.1rc3
> BigInsights 4.2.0.0 -
> The example worked fine using Apache Atlas .7.0rc2
>    Reporter: Russell Anderson
>Priority: Critical
>
> After configuring the Hive Hook, I ran the following Hive SQL command:
> Create table sysibm.sparktest as select * from sysibm.sparktable;
> - the table sysibm.sparktable exists
> - the table sysibm.sparktest is created as a result of this command
> - the table is created successfully within the schema sysibm, but the Hive 
> Hook is not able to deal with this correctly.
> 2017-01-31 07:49:56,979 INFO  metastore.HiveMetaStore 
> (HiveMetaStore.java:logIn\
> fo(746)) - 6: get_table : db=sysibm tbl=sparktest
> 2017-01-31 07:49:56,980 INFO  HiveMetaStore.audit 
> (HiveMetaStore.java:logAuditE\
> vent(371)) - ugi=hiveip=unknown-ip-addr  cmd=get_table : db=sysibm 
> tbl=\
> sparktest
> 2017-01-31 07:49:56,984 ERROR metadata.Hive (Hive.java:getTable(1119)) - 
> Table \
> sparktest not found: sysibm.sparktest table not found
> 2017-01-31 07:49:56,984 ERROR hook.HiveHook (HiveHook.java:run(207)) - Atlas 
> ho\
> ok failed due to error
> java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInform\
> ation.java:1672)
> at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:197)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:5\
> 11)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor\
> .java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecuto\
> r.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table 
> not \
> found sparktest
> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1120)
> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1090)
> at 
> org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.\
> java:559)
> at 
> org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.\
> java:581)
> at 
> org.apache.atlas.hive.hook.HiveHook.processHiveEntity(HiveHook.java:\
> 669)
> at 
> org.apache.atlas.hive.hook.HiveHook.registerProcess(HiveHook.java:64\
> 9)
> at org.apache.atlas.hive.hook.HiveHook.collect(HiveHook.java:270)
> at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:85)
> at org.apache.atlas.hive.hook.HiveHook$2$1.run(HiveHook.java:200)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInform\
> ation.java:1657)
> ... 6 more
> 2017-01-31 07:49:57,029 INFO  log.PerfLogger 
> (PerfLogger.java:PerfLogEnd(148)) \



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] (ATLAS-1512) Hive Hook fails due to - Table not found exception

2017-01-31 Thread Russell Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847448#comment-15847448
 ] 

Russell Anderson commented on ATLAS-1512:
-

The following change (ATLAS-1364) appears to be related:

> HiveHook : Fix Auth issue with doAs
> ---
>
> Key: ATLAS-1364
> URL: https://issues.apache.org/jira/browse/ATLAS-1364
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.7-incubating, 0.8-incubating
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Fix For: 0.8-incubating, 0.7.1-incubating
>
> Attachments: ATLAS-1364.1.patch, ATLAS-1364.patch
>
>
> HiveHook currently is not using the passed-in UGI (UserGroupInformation) from 
> Hive's HookContext; because of this, when file system operations are invoked, 
> they are performed with the wrong user credentials.
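For readers unfamiliar with the doAs pattern the issue above refers to, here is a
minimal, hypothetical sketch of running the hook's work under the UGI passed in
Hive's HookContext instead of the process user. This is an illustration of the idea
only, not the ATLAS-1364 patch; collectAndNotify is a stand-in name.

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.hive.ql.hooks.HookContext;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiAwareHookSketch {
    // Illustrative only: run the hook's metadata collection as the user who
    // issued the Hive query (the UGI supplied by Hive in HookContext), rather
    // than as the HiveServer2 process user.
    void fireAndForget(HookContext hookContext) throws Exception {
        UserGroupInformation ugi = hookContext.getUgi();  // UGI passed in by Hive
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            collectAndNotify(hookContext);                // hypothetical helper
            return null;
        });
    }

    void collectAndNotify(HookContext hookContext) {
        // stand-in for the hook's actual entity collection / notification logic
    }
}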

> Hive Hook fails due to - Table not found exception
> --
>
> Key: ATLAS-1512
> URL: https://issues.apache.org/jira/browse/ATLAS-1512
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core, atlas-intg
> Environment: newly built, and configured Apache Atlas .7.1rc3
> BigInsights 4.2.0.0 -
> The example worked fine using Apache Atlas .7.0rc2
>Reporter: Russell Anderson
>Priority: Critical
>
> After configuring the Hive Hook, I ran the following Hive SQL command:
> Create table sysibm.sparktest as select * from sysibm.sparktable;
> - the table sysibm.sparktable exists
> - the table sysibm.sparktest is created as a result of this command
> - the table is created successfully within the schema sysibm, but the Hive 
> Hook is not able to deal with this correctly.
> 2017-01-31 07:49:56,979 INFO  metastore.HiveMetaStore 
> (HiveMetaStore.java:logIn\
> fo(746)) - 6: get_table : db=sysibm tbl=sparktest
> 2017-01-31 07:49:56,980 INFO  HiveMetaStore.audit 
> (HiveMetaStore.java:logAuditE\
> vent(371)) - ugi=hiveip=unknown-ip-addr  cmd=get_table : db=sysibm 
> tbl=\
> sparktest
> 2017-01-31 07:49:56,984 ERROR metadata.Hive (Hive.java:getTable(1119)) - 
> Table \
> sparktest not found: sysibm.sparktest table not found
> 2017-01-31 07:49:56,984 ERROR hook.HiveHook (HiveHook.java:run(207)) - Atlas 
> ho\
> ok failed due to error
> java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInform\
> ation.java:1672)
> at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:197)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:5\
> 11)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor\
> .java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecuto\
> r.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table 
> not \
> found sparktest
> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1120)
> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1090)
> at 
> org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.\
> java:559)
> at 
> org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.\
> java:581)
> at 
> org.apache.atlas.hive.hook.HiveHook.processHiveEntity(HiveHook.java:\
> 669)
> at 
> org.apache.atlas.hive.hook.HiveHook.registerProcess(HiveHook.java:64\
> 9)
> at org.apache.atlas.hive.hook.HiveHook.collect(HiveHook.java:270)
> at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:85)
> at org.apache.atlas.hive.hook.HiveHook$2$1.run(HiveHook.java:200)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInform\
> ation.java:1657)
> ... 6 more
> 2017-01-31 07:49:57,029 INFO  log.PerfLogger 
> (PerfLogger.java:PerfLogEnd(148)) \



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] (ATLAS-1512) Hive Hook fails due to - Table not found exception

2017-01-31 Thread Russell Anderson (JIRA)
Russell Anderson created ATLAS-1512:
---

 Summary: Hive Hook fails due to - Table not found exception
 Key: ATLAS-1512
 URL: https://issues.apache.org/jira/browse/ATLAS-1512
 Project: Atlas
  Issue Type: Bug
  Components:  atlas-core, atlas-intg
 Environment: newly built, and configured Apache Atlas .7.1rc3
BigInsights 4.2.0.0 -

The example worked fine using Apache Atlas .7.0rc2
Reporter: Russell Anderson
Priority: Critical


After configuring the Hive Hook, I ran the following Hive SQL command:
Create table sysibm.sparktest as select * from sysibm.sparktable;

- the table sysibm.sparktable exists
- the table sysibm.sparktest is created as a result of this command
- the table is created successfully within the schema sysibm, but the Hive Hook 
is not able to deal with this correctly.



2017-01-31 07:49:56,979 INFO  metastore.HiveMetaStore (HiveMetaStore.java:logIn\
fo(746)) - 6: get_table : db=sysibm tbl=sparktest
2017-01-31 07:49:56,980 INFO  HiveMetaStore.audit (HiveMetaStore.java:logAuditE\
vent(371)) - ugi=hiveip=unknown-ip-addr  cmd=get_table : db=sysibm tbl=\
sparktest
2017-01-31 07:49:56,984 ERROR metadata.Hive (Hive.java:getTable(1119)) - Table \
sparktest not found: sysibm.sparktest table not found
2017-01-31 07:49:56,984 ERROR hook.HiveHook (HiveHook.java:run(207)) - Atlas ho\
ok failed due to error
java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInform\
ation.java:1672)
at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:197)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:5\
11)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor\
.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecuto\
r.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not \
found sparktest
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1120)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1090)
at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.\
java:559)
at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.\
java:581)
at org.apache.atlas.hive.hook.HiveHook.processHiveEntity(HiveHook.java:\
669)
at org.apache.atlas.hive.hook.HiveHook.registerProcess(HiveHook.java:64\
9)
at org.apache.atlas.hive.hook.HiveHook.collect(HiveHook.java:270)
at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:85)
at org.apache.atlas.hive.hook.HiveHook$2$1.run(HiveHook.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInform\
ation.java:1657)
... 6 more
2017-01-31 07:49:57,029 INFO  log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) \





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] (ATLAS-1511) Hive Hook fails due to - Table not found exception

2017-01-31 Thread Russell Anderson (JIRA)
Russell Anderson created ATLAS-1511:
---

 Summary: Hive Hook fails due to - Table not found exception
 Key: ATLAS-1511
 URL: https://issues.apache.org/jira/browse/ATLAS-1511
 Project: Atlas
  Issue Type: Bug
  Components:  atlas-core, atlas-intg
 Environment: newly built, and configured Apache Atlas .7.1rc3
BigInsights 4.2.0.0 -

The example worked fine using Apache Atlas .7.0rc2
Reporter: Russell Anderson
Priority: Critical


After configuring the Hive Hook, I ran the following Hive SQL command:
Create table sysibm.sparktest as select * from sysibm.sparktable;

- the table sysibm.sparktable exists
- the table sysibm.sparktest is created as a result of this command
- the table is created successfully within the schema sysibm, but the Hive Hook 
is not able to deal with this correctly.



2017-01-31 07:49:56,979 INFO  metastore.HiveMetaStore (HiveMetaStore.java:logIn\
fo(746)) - 6: get_table : db=sysibm tbl=sparktest
2017-01-31 07:49:56,980 INFO  HiveMetaStore.audit (HiveMetaStore.java:logAuditE\
vent(371)) - ugi=hiveip=unknown-ip-addr  cmd=get_table : db=sysibm tbl=\
sparktest
2017-01-31 07:49:56,984 ERROR metadata.Hive (Hive.java:getTable(1119)) - Table \
sparktest not found: sysibm.sparktest table not found
2017-01-31 07:49:56,984 ERROR hook.HiveHook (HiveHook.java:run(207)) - Atlas ho\
ok failed due to error
java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInform\
ation.java:1672)
at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:197)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:5\
11)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor\
.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecuto\
r.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not \
found sparktest
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1120)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1090)
at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.\
java:559)
at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.\
java:581)
at org.apache.atlas.hive.hook.HiveHook.processHiveEntity(HiveHook.java:\
669)
at org.apache.atlas.hive.hook.HiveHook.registerProcess(HiveHook.java:64\
9)
at org.apache.atlas.hive.hook.HiveHook.collect(HiveHook.java:270)
at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:85)
at org.apache.atlas.hive.hook.HiveHook$2$1.run(HiveHook.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInform\
ation.java:1657)
... 6 more
2017-01-31 07:49:57,029 INFO  log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) \





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Using .7.1rc3 with import-hive.sh - Lineage is produced from EXTERNAL tables only - not MANAGED tables : by desing or bug ?

2017-01-30 Thread Russell Anderson
Hi,

Upon further examination of the code in
atlas/hive/bridge/HiveMetaStoreBridge.java - in the public method importTable,
line 270 - there is a check:

if (table.getTableType() == TableType.EXTERNAL_TABLE)

By commenting out this check (and the corresponding bracket), I am now getting
lineage for tables created from within the HiveView instance.
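For illustration, a rough sketch of the shape of that change in importTable - this
is my reconstruction under assumed names (registerTableLineage is a stand-in), not
the actual HiveMetaStoreBridge code:

import org.apache.hadoop.hive.metastore.TableType;
import org.apache.hadoop.hive.ql.metadata.Table;

class ImportTableSketch {
    // Reconstruction of the idea only: register lineage for every imported
    // table instead of restricting it to EXTERNAL tables.
    void importTable(Table table) {
        boolean isExternal = table.getTableType() == TableType.EXTERNAL_TABLE;
        // The original guard registered lineage only when isExternal was true;
        // with the check commented out, managed tables get lineage as well.
        registerTableLineage(table);
    }

    void registerTableLineage(Table table) {
        // stand-in for the bridge's actual lineage-registration logic
    }
}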

http://192.168.1.132:21000/api/atlas/lineage/hive/table/bigsql.branch_intersect@primary/inputs/graph

So I contend there is value in getting this lineage, because all Hive tables
have an inherent hdfs_path coming from the HiveView - meaning a fully
qualified path from which no duplicates can result. This is assured because
the 'schema', together with the path to where the HDFS directory resides,
ensures uniqueness.
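For anyone wanting to verify the same thing, a small sketch of fetching that inputs
graph from the REST endpoint quoted above (plain java.net; the admin:admin
credentials are placeholders for whatever your Atlas server uses):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class LineageInputsGraphFetch {
    public static void main(String[] args) throws Exception {
        // Endpoint as quoted above; host and table name are from the test system.
        String endpoint = "http://192.168.1.132:21000/api/atlas/lineage/hive/table/"
                + "bigsql.branch_intersect@primary/inputs/graph";
        // Placeholder basic-auth credentials.
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));

        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Accept", "application/json");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            // Print the raw JSON lineage graph returned by Atlas.
            reader.lines().forEach(System.out::println);
        }
    }
}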




This results in a valid lineage picture such as :






From:   Russell Anderson/Worcester/IBM
To: dev@atlas.incubator.apache.org
Cc: "Ashutosh Mestry" <ames...@hortonworks.com>, Barry
Rosen/Worcester/IBM@ibmus, David Radley/UK/IBM@ibmgb, "Madhan
Neethiraj" <mad...@apache.org>, "Apoorv Naik"
<naik.apo...@gmail.com>, "Sarath Subramanian"
<sarath.ku...@gmail.com>
Date:   01/30/2017 06:22 AM
Subject:Re: Using .7.1rc3 with import-hive.sh - Lineage is produced
from EXTERNAL tables only - not MANAGED tables : by design or
bug ?



Hi Vimal

This is very helpful in understanding why lineage is not created.

I appreciate your explanation and will follow through with the creation of
a bug/enhancement against the import-hive

I suggest at least two things be done immediately:

1) document this as a limitation in the limitations section

2) the example that is online for branch_intersect by Hortonworks be
modified to use only external tables

Regards

Russ

Sent from my iPhone

On Jan 30, 2017, at 12:04 AM, Vimal Sharma <visha...@hortonworks.com>
wrote:

  Hi Russell,
  I responded to this question on HCC at
  
https://community.hortonworks.com/questions/66547/hivemetastorebridge-code-only-creating-lineage-for.html#answer-70360
  .

  When using import_hive.sh, lineage is created only for external
  tables. This is indeed by design. For external tables, it makes sense
  to mark the source HDFS path as the “source” node in lineage diagram.

  For MANAGED tables, I am not sure how much value it adds to create
  lineage diagram since the source HDFS path will inherently be
  {HIVE_DATA_ROOT}/{TABLENAME}.


  For managed tables created using CTAS as shown below:

  > create table dest as select * from source;

  We don’t have corresponding lineage after import_hive.sh

  source —>  CTAS Process —> dest

  This is because we don’t process the tables present in Hive metastore
  in a specific order which is necessary to get the above lineage. It
  would be a good improvement to the import_hive.sh utility and you can
      raise a bug to track it.

  Hope this helps
  - Vimal


  From: Russell Anderson <r...@us.ibm.com>
  Date: Sunday, January 29, 2017 at 11:14 PM
  To: "dev@atlas.incubator.apache.org" <dev@atlas.incubator.apache.org>
  Cc: Ashutosh Mestry <ames...@hortonworks.com>, Barry Rosen <
  rose...@us.ibm.com>, David Radley <david_rad...@uk.ibm.com>, Madhan
  Neethiraj <mad...@apache.org>, Apoorv Naik <naik.apo...@gmail.com>,
  Sarath Subramanian <sarath.ku...@gmail.com>, default <
  visha...@hortonworks.com>, Russell Anderson <r...@us.ibm.com>
  Subject: Using .7.1rc3 with import-hive.sh - Lineage is produced from
EXTERNAL tables only - not MANAGED tables : by design or bug ?



  Hi,

  Using the latest .7.1rc3 source - after building and installing on
  test system I have found that 'lineage' is only generated from
  EXTERNAL tables and not from MANAGED tables.

  I repeat 'lineage' - meaning the left to right flow. I get Metadata
  of the assets from MANAGED table but not left to right lineage.

  I do get lineage from External tables.

  Is this by design or is this a P1 bug?

  In a prior release there was a code fix around that area of the Hive
  Bridge that checks this, and I am wondering whether it has been
  re-introduced.

  If no one responds I will assume it is a bug, and will create one.

  Regards,

  Russ.


  From: Russell Anderson/Worcester/IBM
  To: dev@atlas.incubator.apache.org
  Cc: Ashutosh Mestry <ames...@hortonworks.com>, Barry
  Rosen/Worcester/IBM@IBMUS, David Radley <david_rad...@uk.ibm.com>,
  Madhan Neethiraj <mad...@apa

Re: Using .7.1rc3 with import-hive.sh - Lineage is produced from EXTERNAL tables only - not MANAGED tables : by design or bug ?

2017-01-30 Thread Russell Anderson
Hi Vimal

This is very helpful in understanding why lineage is not created.

I appreciate your explanation and will follow through with the creation of a 
bug/enhancement against the import-hive

I suggest at least two things be done immediately:

1) document this as a limitation in the limitations section

2) the example that is online for branch_intersect by Hortonworks be modified 
to use only external tables 

Regards 

Russ

Sent from my iPhone

> On Jan 30, 2017, at 12:04 AM, Vimal Sharma <visha...@hortonworks.com> wrote:
> 
> Hi Russell,
> I responded to this question on HCC at 
> https://community.hortonworks.com/questions/66547/hivemetastorebridge-code-only-creating-lineage-for.html#answer-70360.
> 
> When using import_hive.sh, lineage is created only for external tables. This 
> is indeed by design. For external tables, it makes sense to mark the source 
> HDFS path as the “source” node in lineage diagram.
> 
> For MANAGED tables, I am not sure how much value it adds to create lineage 
> diagram since the source HDFS path will inherently be 
> {HIVE_DATA_ROOT}/{TABLENAME}. 
> 
> 
> For managed tables created using CTAS as shown below:
> 
> > create table dest as select * from source;
> 
> We don’t have corresponding lineage after import_hive.sh 
> 
> source —>  CTAS Process —> dest
> 
> This is because we don’t process the tables present in Hive metastore in a 
> specific order which is necessary to get the above lineage. It would be a 
> good improvement to the import_hive.sh utility and you can raise a bug to 
> track it.
> 
> Hope this helps
> - Vimal
> 
> 
> From: Russell Anderson <r...@us.ibm.com>
> Date: Sunday, January 29, 2017 at 11:14 PM
> To: "dev@atlas.incubator.apache.org" <dev@atlas.incubator.apache.org>
> Cc: Ashutosh Mestry <ames...@hortonworks.com>, Barry Rosen 
> <rose...@us.ibm.com>, David Radley <david_rad...@uk.ibm.com>, Madhan 
> Neethiraj <mad...@apache.org>, Apoorv Naik <naik.apo...@gmail.com>, Sarath 
> Subramanian <sarath.ku...@gmail.com>, default <visha...@hortonworks.com>, 
> Russell Anderson <r...@us.ibm.com>
> Subject: Using .7.1rc3 with import-hive.sh - Lineage is produced from 
> EXTERNAL tables only - not MANAGED tables : by design or bug ?
> 
> Hi,
> 
> Using the latest .7.1rc3 source - after building and installing on test 
> system I have found that 'lineage' is only generated from EXTERNAL tables and 
> not from MANAGED tables.
> 
> I repeat 'lineage' - meaning the left to right flow. I get Metadata of the 
> assets from MANAGED table but not left to right lineage.
> 
> I do get lineage from External tables.
> 
> Is this by design or is this a P1 bug?
> 
> In a prior release there was a code fix around that area of the Hive Bridge 
> that checks this, and I am wondering whether it has been re-introduced.
> 
> If no one responds I will assume it is a bug, and will create one.
> 
> Regards,
> 
> Russ.
> 
> 
> From: Russell Anderson/Worcester/IBM
> To: dev@atlas.incubator.apache.org
> Cc: Ashutosh Mestry <ames...@hortonworks.com>, Barry 
> Rosen/Worcester/IBM@IBMUS, David Radley <david_rad...@uk.ibm.com>, Madhan 
> Neethiraj <mad...@apache.org>, Apoorv Naik <naik.apo...@gmail.com>, Sarath 
> Subramanian <sarath.ku...@gmail.com>, "Vimal Sharma" 
> <visha...@hortonworks.com>
> Date: 01/24/2017 04:05 PM
> Subject: Caused by: org.apache.hadoop.hive.ql.metadata.InvalidTableException 
> - Table not found in Atlas .7.1rc3
> 
> 
> Hi,
> 
> What used to work in .7rc2 no longer seems to work with the Hive Hook: [ see 
> stack trace below from hiveserver2.log]
> 
> Looking at the code it cannot find the new table 'russ88' - this simple test 
> case worked in the .7rc2 version.
> 
> I have complete permission to make this happen in the HIVEVIEW but somehow 
> the Hive Hook cannot deal with it.
> 
> Any ideas?
> 
> 
> 
> 
> 
> 2017-01-24 12:30:25,917 INFO bridge.HiveMetaStoreBridge 
> (HiveMetaStoreBridge.java:createOrUpdate\
> DBInstance(166)) - Importing objects from databaseName : bigsql
> 2017-01-24 12:30:25,917 INFO metastore.HiveMetaStore 
> (HiveMetaStore.java:logInfo(746)) - 5: get_\
> table : db=bigsql tbl=russ88
> 2017-01-24 12:30:25,917 INFO HiveMetaStore.audit 
> (HiveMetaStore.java:logAuditEvent(371)) - ugi=h\
> ive ip=unknown-ip-addr cmd=get_table : db=bigsql tbl=russ88
> 2017-01-24 12:30:25,919 ERROR metadata.Hive (Hive.java:getTable(1119)) - 

Using .7.1rc3 with import-hive.sh - Lineage is produced from EXTERNAL tables only - not MANAGED tables : by design or bug ?

2017-01-29 Thread Russell Anderson

Hi,

Using the latest .7.1rc3 source - after building and installing on test
system I have found that 'lineage' is only generated from EXTERNAL tables
and not from MANAGED tables.

I repeat 'lineage' - meaning the left to right flow. I get Metadata of the
assets from MANAGED table but not left to right lineage.

I do get lineage from External tables.

Is this by design or is this a P1 bug?

In a prior release there was a code fix around that area of the Hive Bridge
that checks this, and I am wondering whether it has been re-introduced.

If no one responds I will assume it is a bug, and will create one.

Regards,

Russ.



From:   Russell Anderson/Worcester/IBM
To: dev@atlas.incubator.apache.org
Cc: Ashutosh Mestry <ames...@hortonworks.com>, Barry
Rosen/Worcester/IBM@IBMUS, David Radley
<david_rad...@uk.ibm.com>, Madhan Neethiraj
<mad...@apache.org>, Apoorv Naik <naik.apo...@gmail.com>,
Sarath Subramanian <sarath.ku...@gmail.com>, "Vimal Sharma"
<visha...@hortonworks.com>
Date:   01/24/2017 04:05 PM
Subject:Caused by:
org.apache.hadoop.hive.ql.metadata.InvalidTableException -
Table not found in Atlas .7.1rc3


Hi,

What used to work in .7rc2 no longer seems to work with the Hive Hook:
[ see stack trace below from hiveserver2.log]

Looking at the code it cannot find the new table 'russ88' - this simple
test case worked in the .7rc2 version.

I have complete permission to make this happen in the HIVEVIEW but somehow
the Hive Hook cannot deal with it.

Any ideas?





2017-01-24 12:30:25,917 INFO  bridge.HiveMetaStoreBridge
(HiveMetaStoreBridge.java:createOrUpdate\
DBInstance(166)) - Importing objects from databaseName : bigsql
2017-01-24 12:30:25,917 INFO  metastore.HiveMetaStore
(HiveMetaStore.java:logInfo(746)) - 5: get_\
table : db=bigsql tbl=russ88
2017-01-24 12:30:25,917 INFO  HiveMetaStore.audit
(HiveMetaStore.java:logAuditEvent(371)) - ugi=h\
iveip=unknown-ip-addr  cmd=get_table : db=bigsql tbl=russ88
2017-01-24 12:30:25,919 ERROR metadata.Hive (Hive.java:getTable(1119)) -
Table russ88 not found: \
bigsql.russ88 table not found
2017-01-24 12:30:25,920 ERROR hook.HiveHook (HiveHook.java:run(207)) -
Atlas hook failed due to e\
rror
java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs
(UserGroupInformation.java:1672)
at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:197)
at java.util.concurrent.Executors$RunnableAdapter.call
(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker
(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run
(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table
not found russ88
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1120)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1090)
at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities
(HiveHook.java:559)
at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities
(HiveHook.java:581)
at org.apache.atlas.hive.hook.HiveHook.processHiveEntity
(HiveHook.java:669)
at org.apache.atlas.hive.hook.HiveHook.registerProcess
(HiveHook.java:649)
at org.apache.atlas.hive.hook.HiveHook.collect(HiveHook.java:270)
at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:85)
at org.apache.atlas.hive.hook.HiveHook$2$1.run(HiveHook.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs
(UserGroupInformation.java:1657)
... 6 more





From:   Hemanth Yamijala <hyamij...@hortonworks.com>
To: "dev@atlas.incubator.apache.org"
<dev@atlas.incubator.apache.org>
Cc: Apoorv Naik <naik.apo...@gmail.com>, Madhan Neethiraj
<mad...@apache.org>, Ashutosh Mestry <ames...@hortonworks.com>,
Sarath Subramanian <sarath.ku...@gmail.com>, David Radley
<david_rad...@uk.ibm.com>, "Vimal Sharma"
<visha...@hortonworks.com>, Barry Rosen/Worcester/IBM@IBMUS
Date:   01/23/2017 11:06 PM
Subject:Re: .7.1 rc3 - change in requirements at Run Time ?



Hi Russell,


I am unable to see the exact error that you are facing - in case you
attached an image or message.


AFAIK, there is no change in requirement for 0.7.1 from 0.7.0.
Specifically, BerkeleyDB jars are not bundled with Atlas due to Apache
licensing restrictions. The default profile (when we build with -Pdist)
expects a setup of external HBase and So

Caused by: org.apache.hadoop.hive.ql.metadata.InvalidTableException - Table not found in Atlas .7.1rc3

2017-01-24 Thread Russell Anderson

Hi,

What used to work in .7rc2 no longer seems to work with the Hive Hook:
[ see stack trace below from hiveserver2.log]

Looking at the code it cannot find the new table 'russ88' - this simple
test case worked in the .7rc2 version.

I have complete permission to make this happen in the HIVEVIEW but somehow
the Hive Hook cannot deal with it.

Any ideas?






2017-01-24 12:30:25,917 INFO  bridge.HiveMetaStoreBridge
(HiveMetaStoreBridge.java:createOrUpdate\
DBInstance(166)) - Importing objects from databaseName : bigsql
2017-01-24 12:30:25,917 INFO  metastore.HiveMetaStore
(HiveMetaStore.java:logInfo(746)) - 5: get_\
table : db=bigsql tbl=russ88
2017-01-24 12:30:25,917 INFO  HiveMetaStore.audit
(HiveMetaStore.java:logAuditEvent(371)) - ugi=h\
iveip=unknown-ip-addr  cmd=get_table : db=bigsql tbl=russ88
2017-01-24 12:30:25,919 ERROR metadata.Hive (Hive.java:getTable(1119)) -
Table russ88 not found: \
bigsql.russ88 table not found
2017-01-24 12:30:25,920 ERROR hook.HiveHook (HiveHook.java:run(207)) -
Atlas hook failed due to e\
rror
java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs
(UserGroupInformation.java:1672)
at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:197)
at java.util.concurrent.Executors$RunnableAdapter.call
(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker
(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run
(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table
not found russ88
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1120)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1090)
at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities
(HiveHook.java:559)
at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities
(HiveHook.java:581)
at org.apache.atlas.hive.hook.HiveHook.processHiveEntity
(HiveHook.java:669)
at org.apache.atlas.hive.hook.HiveHook.registerProcess
(HiveHook.java:649)
at org.apache.atlas.hive.hook.HiveHook.collect(HiveHook.java:270)
at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:85)
at org.apache.atlas.hive.hook.HiveHook$2$1.run(HiveHook.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs
(UserGroupInformation.java:1657)
... 6 more




From:   Hemanth Yamijala <hyamij...@hortonworks.com>
To: "dev@atlas.incubator.apache.org"
<dev@atlas.incubator.apache.org>
Cc: Apoorv Naik <naik.apo...@gmail.com>, Madhan Neethiraj
<mad...@apache.org>, Ashutosh Mestry <ames...@hortonworks.com>,
Sarath Subramanian <sarath.ku...@gmail.com>, David Radley
<david_rad...@uk.ibm.com>, "Vimal Sharma"
<visha...@hortonworks.com>, Barry Rosen/Worcester/IBM@IBMUS
Date:   01/23/2017 11:06 PM
Subject:Re: .7.1 rc3 - change in requirements at Run Time ?



Hi Russell,


I am unable to see the exact error that you are facing - in case you
attached an image or message.


AFAIK, there is no change in requirement for 0.7.1 from 0.7.0.
Specifically, BerkeleyDB jars are not bundled with Atlas due to Apache
licensing restrictions. The default profile (when we build with -Pdist)
expects a setup of external HBase and Solr, which is the preferred
deployment mode. If you need to build with BerkeleyDB and ElasticSearch,
you should use a specific profile and also manually get the dependent jars
copied to the deployment. This is documented in the 0.7 documentation here:
http://atlas.incubator.apache.org/0.7.0-incubating/InstallationSteps.html?.
Please lookup for the profile "berkeley-elasticsearch" and let us know if
that gives you information required.


Thanks

Hemanth


From: Russell Anderson <r...@us.ibm.com>
Sent: Tuesday, January 24, 2017 6:48 AM
To: dev@atlas.incubator.apache.org
Cc: Apoorv Naik; Madhan Neethiraj; Ashutosh Mestry; Sarath Subramanian;
David Radley; Vimal Sharma; Russell Anderson; Barry Rosen
Subject: .7.1 rc3 - change in requirements at Run Time ?



Can someone please tell me if there is a new run time required library for
Atlas .7.1 rc3 versus .7.0 rc2 ?

Atlas will not start up without this class - this appears to be a specific
Berkeley DB java class. What version of the JDK is required ?

Regards,

Russ.



Re: .7.1 rc3 - change in requirements at Run Time ?

2017-01-24 Thread Russell Anderson


Hi

You are 100% correct - I had forgotten to create the extlib directory as
documented.

My apologies

Russ

Sent from my iPhone

> On Jan 23, 2017, at 11:06 PM, Hemanth Yamijala
<hyamij...@hortonworks.com> wrote:
>
> Hi Russell,
>
>
> I am unable to see the exact error that you are facing - in case you
attached an image or message.
>
>
> AFAIK, there is no change in requirement for 0.7.1 from 0.7.0.
Specifically, BerkeleyDB jars are not bundled with Atlas due to Apache
licensing restrictions. The default profile (when we build with -Pdist)
expects a setup of external HBase and Solr, which is the preferred
deployment mode. If you need to build with BerkeleyDB and ElasticSearch,
you should use a specific profile and also manually get the dependent jars
copied to the deployment. This is documented in the 0.7 documentation here:
http://atlas.incubator.apache.org/0.7.0-incubating/InstallationSteps.html?.
Please lookup for the profile "berkeley-elasticsearch" and let us know if
that gives you information required.
>
>
> Thanks
>
> Hemanth
>
> 
> From: Russell Anderson <r...@us.ibm.com>
> Sent: Tuesday, January 24, 2017 6:48 AM
> To: dev@atlas.incubator.apache.org
> Cc: Apoorv Naik; Madhan Neethiraj; Ashutosh Mestry; Sarath Subramanian;
David Radley; Vimal Sharma; Russell Anderson; Barry Rosen
> Subject: .7.1 rc3 - change in requirements at Run Time ?
>
>
>
> Can someone please tell me if there is a new run time required library
for Atlas .7.1 rc3 versus .7.0 rc2 ?
>
> Atlas will not start up without this class - this appears to be a
specific Berkeley DB java class. What version of the JDK is required ?
>
> Regards,
>
> Russ.
>
>
>
> From: Sarath Subramanian <sarath.ku...@gmail.com>
> To: Apoorv Naik <naik.apo...@gmail.com>, Madhan Neethiraj
<mad...@apache.org>, Ashutosh Mestry <ames...@hortonworks.com>
> Cc: Sarath Subramanian <sarath.ku...@gmail.com>, atlas
<dev@atlas.incubator.apache.org>, David Radley <david_rad...@uk.ibm.com>,
Vimal Sharma <visha...@hortonworks.com>
> Date: 01/23/2017 07:56 PM
> Subject: Re: Review Request 55358: [ATLAS-1312] Update QuickStart to use
the new APIs for type and entities creation
> Sent by: Sarath Subramanian <nore...@reviews.apache.org>
>
> 
>
>
>
>
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/55358/
> ---
>
> (Updated Jan. 23, 2017, 4:55 p.m.)
>
>
> Review request for atlas, Apoorv Naik, Ashutosh Mestry, Madhan Neethiraj,
and Suma Shivaprasad.
>
>
> Bugs: ATLAS-1312
>   https://issues.apache.org/jira/browse/ATLAS-1312
>
>
> Repository: atlas
>
>
> Description
> ---
>
> The quick start currently uses old APIs to create types and entities.
This needs to be updated to use the v2 APIs for types and entities.
>
>
> Diffs (updated)
> -
>
> client/src/main/java/org/apache/atlas/AtlasBaseClient.java d055b78
> client/src/main/java/org/apache/atlas/AtlasLineageClientV2.java
PRE-CREATION
> distro/src/bin/quick_start.py 14c8464
> distro/src/bin/quick_start_v1.py PRE-CREATION
> intg/src/main/java/org/apache/atlas/type/AtlasTypeUtil.java c866946
> webapp/src/main/java/org/apache/atlas/examples/QuickStart.java 8322bc6
> webapp/src/main/java/org/apache/atlas/examples/QuickStartV2.java
PRE-CREATION
> webapp/src/test/java/org/apache/atlas/examples/QuickStartV2IT.java
PRE-CREATION
> webapp/src/test/java/org/apache/atlas/web/resources/BaseResourceIT.java
51be64c
>
> Diff: https://reviews.apache.org/r/55358/diff/
>
>
> Testing
> ---
>
> Tested using Postman REST Client and new ITs added
>
>
> Thanks,
>
> Sarath Subramanian
>
>
>
>


.7.1 rc3 - change in requirements at Run Time ?

2017-01-23 Thread Russell Anderson




Can someone please tell me if there is a new run time required library for
Atlas .7.1 rc3 versus .7.0 rc2 ?

Atlas will not start up without this class - this appears to be a specific
Berkeley DB java class. What version of the JDK is required ?

Regards,

Russ.




From:   Sarath Subramanian 
To: Apoorv Naik , Madhan Neethiraj
, Ashutosh Mestry 
Cc: Sarath Subramanian , atlas
, David Radley
, Vimal Sharma

Date:   01/23/2017 07:56 PM
Subject:Re: Review Request 55358: [ATLAS-1312] Update QuickStart to use
the new APIs for type and entities creation
Sent by:Sarath Subramanian 




---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/55358/
---

(Updated Jan. 23, 2017, 4:55 p.m.)


Review request for atlas, Apoorv Naik, Ashutosh Mestry, Madhan Neethiraj,
and Suma Shivaprasad.


Bugs: ATLAS-1312
https://issues.apache.org/jira/browse/ATLAS-1312


Repository: atlas


Description
---

The quick start currently uses old APIs to create types and entities. This
needs to be updated to use the v2 APIs for types and entities.


Diffs (updated)
-

  client/src/main/java/org/apache/atlas/AtlasBaseClient.java d055b78
  client/src/main/java/org/apache/atlas/AtlasLineageClientV2.java
PRE-CREATION
  distro/src/bin/quick_start.py 14c8464
  distro/src/bin/quick_start_v1.py PRE-CREATION
  intg/src/main/java/org/apache/atlas/type/AtlasTypeUtil.java c866946
  webapp/src/main/java/org/apache/atlas/examples/QuickStart.java 8322bc6
  webapp/src/main/java/org/apache/atlas/examples/QuickStartV2.java
PRE-CREATION
  webapp/src/test/java/org/apache/atlas/examples/QuickStartV2IT.java
PRE-CREATION
  webapp/src/test/java/org/apache/atlas/web/resources/BaseResourceIT.java
51be64c

Diff: https://reviews.apache.org/r/55358/diff/


Testing
---

Tested using Postman REST Client and new ITs added


Thanks,

Sarath Subramanian





Re: [VOTE] Release Apache Atlas 0.7.1 (incubating) - release candidate 3 (dev group vote)

2017-01-22 Thread Russell Anderson

+1 - approve

Russ

Sent from my iPhone

> On Jan 20, 2017, at 5:15 PM, Venkat Ranganathan
 wrote:
>
> +1
>
> Venkat
>
> On 1/19/17, 8:45 AM, "Madhan Neethiraj"  wrote:
>
>Atlas team,
>
>
>
>Apache Atlas 0.7.1 (incubating) release candidate #3 is now available
for a vote within dev community. Links to the release artifacts are given
below. Can you please review and vote?
>
>
>
>I apologize for yet another release-candidate. Only change in “release
candidate 3” is the update to build instructions in README.txt. There are
no other changes.
>
>
>
>We currently have 7 binding votes and 7 non-binding votes for the
earlier release candidates. Thank you everyone for validating the release
candidates, your feedback and vote.
>
>  +1 (binding):  7 votes
>
> - Shwetha Shivalingamurthy
>
> - Venkat Ranganathan
>
> - Keval Bhatt
>
> - Vimal Sharma
>
> - Suma Shivaprasad
>
> - Hemanth Yamijala
>
> - Madhan Neethiraj
>
>
>
>  +1 (non-binding): 7 votes
>
> - David Radley
>
> - Sarath Kumar Subramanian
>
> - Ismaël Mejía
>
> - Jean-Baptiste Onofré
>
> - Ayub Khan Pathan
>
> - Nixon Rodrigues
>
> - Apoorv Naik
>
>
>
>
>
>Changes since last release-candidate:
>
>  - updated the build instructions in README.txt (ATLAS-1000)
>
>
>
>The vote will be open for at least 72 hours or until necessary votes
are reached.
>
>[ ] +1  approve
>
>[ ] +0  no opinion
>
>[ ] -1  disapprove (and reason why)
>
>
>
>Thanks,
>
>Madhan
>
>
>
>
>
>List of issues addressed in this release:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20Atlas%20AND%20fixVersion%20%3D%200.7.1-incubating%20ORDER%20BY%20key%20DESC

>
>
>
>Git tag for the release:
https://github.com/apache/incubator-atlas/tree/release-0.7.1-rc3
>
>
>
>Sources for the release:
https://dist.apache.org/repos/dist/dev/incubator/atlas/0.7.1-incubating-rc3/apache-atlas-0.7.1-incubating-sources.tar.gz

>
>
>
>Source release verification:
>
>  PGP Signature:
https://dist.apache.org/repos/dist/dev/incubator/atlas/0.7.1-incubating-rc3/apache-atlas-0.7.1-incubating-sources.tar.gz.asc

>
>  MD5 Hash:
https://dist.apache.org/repos/dist/dev/incubator/atlas/0.7.1-incubating-rc3/apache-atlas-0.7.1-incubating-sources.tar.gz.mds

>
>  Keys to verify the signature of the release artifacts are
available at: https://dist.apache.org/repos/dist/dev/incubator/atlas/KEYS
>
>
>
>
>
>
>
>
>
>
>
>
>


Re: Hive Hook - Create table xxx AS - Hive Hook BUG

2016-12-31 Thread Russell Anderson

Hi dev list,

After analyzing the changes made as part of ATLAS-1364, it appears that these
two issues may be related, although no one has responded to my email here.

I have made the changes as described in this patch, will rebuild, and
re-test.

If anyone believes that there is yet another problem, please respond now.

Regards,

Russ.




From:   Russell Anderson/Worcester/IBM
To: dev@atlas.incubator.apache.org
Cc: Barry Rosen/Worcester/IBM@IBMUS, Russell
Anderson/Worcester/IBM@IBMUS
Date:   12/29/2016 03:03 PM
Subject:Re: Hive Hook - Create table xxx AS - Hive Hook BUG


The following SQL causes the Hive Hook to fail with the error below:

create table bigsql.missy999 as select * from bigsql.brancha;

Question: Is this a new SEVERE BUG in Atlas .7rc2? Or has this bug been fixed
in .8 but not in .7rc2?


Excerpt from /var/log/hive/hiveserver2.log
==

2016-12-29 11:52:10,787 INFO  bridge.HiveMetaStoreBridge
(HiveMetaStoreBridge.ja\
va:createOrUpdateDBInstance(162)) - Importing objects from databaseName :
bigsql
2016-12-29 11:52:10,787 INFO  metastore.HiveMetaStore
(HiveMetaStore.java:logInf\
o(746)) - 6: get_table : db=bigsql tbl=missy999
2016-12-29 11:52:10,787 INFO  HiveMetaStore.audit
(HiveMetaStore.java:logAuditEv\
ent(371)) - ugi=admin   ip=unknown-ip-addr  cmd=get_table : db=bigsql
tbl=mi\
ssy999
2016-12-29 11:52:10,918 ERROR metadata.Hive (Hive.java:getTable(1119)) -
Table m\
issy999 not found: bigsql.missy999 table not found
2016-12-29 11:52:10,919 ERROR hook.HiveHook (HiveHook.java:run(184)) -
Atlas hoo\
k failed due to error
org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found
missy9\
99
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1120)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1090)
at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities
(HiveHook.j\
ava:503)
at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities
(HiveHook.j\
ava:524)
at org.apache.atlas.hive.hook.HiveHook.processHiveEntity
(HiveHook.java:6\
05)
at org.apache.atlas.hive.hook.HiveHook.registerProcess
(HiveHook.java:585\
)
at org.apache.atlas.hive.hook.HiveHook.fireAndForget
(HiveHook.java:223)
at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:78)
at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:182)
at java.util.concurrent.Executors$RunnableAdapter.call
(Executors.java:51\
1)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker
(ThreadPoolExecutor.\
java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run
(ThreadPoolExecutor\
.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-12-29 11:52:11,221 INFO  log.PerfLogger (PerfLogger.java:PerfLogEnd
(148)) -\
 
2016-12-29 11:52:11,348 ERROR mr.ExecDriver (ExecDriver.java:execute(400))
- yar\
n
2016-12-29 11:52:13,528 INFO  impl.TimelineClientImpl
(TimelineClientImpl.java:s\
erviceInit(296)) - Timeline service address:
http://biginsights.ibm.com:8188/ws/\
v1/timeline/
2016-12-29 11:52:13,641 INFO  client.RMProxy (RMProxy.java:createRMProxy
(98)) - \
Connecting to ResourceManager at biginsights.ibm.com/192.168.1.132:8050
2016-12-29 11:52:14,269 INFO  fs.FSStatsPublisher
(FSStatsPublisher.java:init(49\
)) - created :
hdfs://biginsights.ibm.com:8020/apps/hive/warehouse/bigsql.db/.hi\
ve-staging_hive_2016-12-29_11-52-08_283_558364161490028907-1/-ext-10002


Russell G. Anderson
 Senior Technical Consultant





From:   Hemanth Yamijala <yhema...@gmail.com>
To: dev@atlas.incubator.apache.org
Cc: Russell Anderson/Worcester/IBM@IBMUS, Barry
Rosen/Worcester/IBM@IBMUS
Date:   12/29/2016 11:17 AM
Subject:Re: Hive Hook - what is missing ? [ RESOLVED ]



No worries. Glad it helped.

On 27-Dec-2016 21:18, "Russell Anderson" <r...@us.ibm.com> wrote:

Hi Hemanth Yamijala,

The magic of your thoughts !!!

So I did as you suggested but then I noticed a couple of things that hadn't
seemed important at the time :

1) In my Hadoop there were multiple entries for PRE and FAILURE for the Hive
Hook. These were left as default - so I changed these to match the one
recommended in the documentation.

2) I re-copied the libraries I had built to where my AUX_PATH was pointing.

3) I changed the permissions to match all the other jars in the hive/lib
directory - which were 'root'.

4) I then carefully examined the var/log/hive/hiveserver2.log for Hive Hook
messages

I now have the Hive Hook working as desired !!!

Thank you for getting me to look and think about things!!!

Russ.




From: Hemanth Yamijala <yhema...@gmail.com>
To: de

Re: Hive Hook - Create table xxx AS - Hive Hook BUG

2016-12-29 Thread Russell Anderson

The following SQL causes the Hive Hook to fail with the error below:

create table bigsql.missy999 as select * from bigsql.brancha;

Question: Is this a new SEVERE BUG in Atlas .7rc2? Or has this bug been fixed
in .8 but not in .7rc2?


Excerpt from /var/log/hive/hiveserver2.log
==

2016-12-29 11:52:10,787 INFO  bridge.HiveMetaStoreBridge (HiveMetaStoreBridge.java:createOrUpdateDBInstance(162)) - Importing objects from databaseName : bigsql
2016-12-29 11:52:10,787 INFO  metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 6: get_table : db=bigsql tbl=missy999
2016-12-29 11:52:10,787 INFO  HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=admin   ip=unknown-ip-addr  cmd=get_table : db=bigsql tbl=missy999
2016-12-29 11:52:10,918 ERROR metadata.Hive (Hive.java:getTable(1119)) - Table missy999 not found: bigsql.missy999 table not found
2016-12-29 11:52:10,919 ERROR hook.HiveHook (HiveHook.java:run(184)) - Atlas hook failed due to error
org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found missy999
        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1120)
        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1090)
        at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.java:503)
        at org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.java:524)
        at org.apache.atlas.hive.hook.HiveHook.processHiveEntity(HiveHook.java:605)
        at org.apache.atlas.hive.hook.HiveHook.registerProcess(HiveHook.java:585)
        at org.apache.atlas.hive.hook.HiveHook.fireAndForget(HiveHook.java:223)
        at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:78)
        at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:182)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
2016-12-29 11:52:11,221 INFO  log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) -
2016-12-29 11:52:11,348 ERROR mr.ExecDriver (ExecDriver.java:execute(400)) - yarn
2016-12-29 11:52:13,528 INFO  impl.TimelineClientImpl (TimelineClientImpl.java:serviceInit(296)) - Timeline service address: http://biginsights.ibm.com:8188/ws/v1/timeline/
2016-12-29 11:52:13,641 INFO  client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at biginsights.ibm.com/192.168.1.132:8050
2016-12-29 11:52:14,269 INFO  fs.FSStatsPublisher (FSStatsPublisher.java:init(49)) - created : hdfs://biginsights.ibm.com:8020/apps/hive/warehouse/bigsql.db/.hive-staging_hive_2016-12-29_11-52-08_283_558364161490028907-1/-ext-10002


Russell G. Anderson
 Senior Technical Consultant




From:   Hemanth Yamijala <yhema...@gmail.com>
To: dev@atlas.incubator.apache.org
Cc: Russell Anderson/Worcester/IBM@IBMUS, Barry
Rosen/Worcester/IBM@IBMUS
Date:   12/29/2016 11:17 AM
Subject:Re: Hive Hook - what is missing ? [ RESOLVED ]



No worries. Glad it helped.

On 27-Dec-2016 21:18, "Russell Anderson" <r...@us.ibm.com> wrote:

Hi Hemanth Yamijala,

The magic of your thoughts !!!

So I did as you suggested, and then I noticed a few things that hadn't
seemed important at the time:

1) In my Hadoop configuration there were multiple entries for the PRE and
FAILURE hooks for the Hive Hook. These had been left at their defaults, so I
changed them to match the one recommended in the documentation.

2) I re-copied the libraries I had built to the location my AUX_PATH was pointing to.

3) I changed the permissions to match all the other jars in the hive/lib
directory, which were owned by 'root'.

4) I then carefully examined /var/log/hive/hiveserver2.log for Hive Hook
messages.

I now have the Hive Hook working as desired !!!

Thank you for getting me to look and think about things!!!

Russ.




From: Hemanth Yamijala <yhema...@gmail.com>
To: dev@atlas.incubator.apache.org
Date: 12/26/2016 08:11 PM
Subject: Re: Hive Hook - what is missing ?
--



Hi,

A few questions to help debug:

1) Are you using Hive CLI or Beeline? Best results would be to
configure these hooks with HiveServer2 and use Beeline.

2) Could you please check hive logs and look for AtlasHook related
messages?

Thanks
Hemanth

On Mon, Dec 26, 2016 at 8:59 PM, Russell Anderson <r...@us.ibm.com> wrote:
>
>
> Hi dev list,
>
> I have built the Atlas7rc2 - have working the Hive Import, the Dashboard,
> and generally thi

Re: Hive Hook - what is missing ? [ RESOLVED ]

2016-12-27 Thread Russell Anderson

Hi Hemanth Yamijala,

The magic of your thoughts !!!

So I did as you suggested, and then I noticed a few things that hadn't
seemed important at the time:

1) In my Hadoop configuration there were multiple entries for the PRE and
FAILURE hooks for the Hive Hook. These had been left at their defaults, so I
changed them to match the one recommended in the documentation.

2) I re-copied the libraries I had built to the location my AUX_PATH was pointing to.

3) I changed the permissions to match all the other jars in the hive/lib
directory, which were owned by 'root'.

4) I then carefully examined /var/log/hive/hiveserver2.log for Hive Hook
messages.

I now have the Hive Hook working as desired !!!

Thank you for getting me to look and think about things!!!

Russ.





From:   Hemanth Yamijala <yhema...@gmail.com>
To: dev@atlas.incubator.apache.org
Date:   12/26/2016 08:11 PM
Subject:Re: Hive Hook - what is missing ?



Hi,

A few questions to help debug:

1) Are you using Hive CLI or Beeline? Best results would be to
configure these hooks with HiveServer2 and use Beeline.

2) Could you please check hive logs and look for AtlasHook related
messages?

Thanks
Hemanth

On Mon, Dec 26, 2016 at 8:59 PM, Russell Anderson <r...@us.ibm.com> wrote:
>
>
> Hi dev list,
>
> I have built the Atlas7rc2 - have working the Hive Import, the Dashboard,
> and generally things appear to be working as expected.
>
> However, I have followed the instruction below but cannot seem to get the
> Hive Hook to detect table hive creations to be added to the metadata /
> lineage.
>
> Can anyone suggest how to debug what could be missing regardless of
> following the instructions below ? Any and all help appreciated !!!
>
> Regards,
>
> Russ.
>
>
=

>
> Hive Hook
>
>
> Hive supports listeners on hive command execution using hive hooks. This is
> used to add/update/remove entities in Atlas using the model defined in
> org.apache.atlas.hive.model.HiveDataModelGenerator. The hook submits the
> request to a thread pool executor to avoid blocking the command execution.
> The thread submits the entities as message to the notification server and
> atlas server reads these messages and registers the entities. Follow these
> instructions in your hive set-up to add hive hook for Atlas:
>   Set-up atlas hook in hive-site.xml of your hive configuration:
>
>     <property>
>       <name>hive.exec.post.hooks</name>
>       <value>org.apache.atlas.hive.hook.HiveHook</value>
>     </property>
>
>     <property>
>       <name>atlas.cluster.name</name>
>       <value>primary</value>
>     </property>
>
>   Add 'export HIVE_AUX_JARS_PATH=<atlas package>/hook/hive' in
>   hive-env.sh of your hive configuration
>   Copy <atlas-conf>/atlas-application.properties to the hive conf
>   directory.
>
> The following properties in <atlas-conf>/atlas-application.properties
> control the thread pool and notification details:
>   atlas.hook.hive.synchronous - boolean, true to run the hook
>   synchronously. default false. Recommended to be set to false to avoid
>   delays in hive query completion.
>   atlas.hook.hive.numRetries - number of retries for notification
>   failure. default 3
>   atlas.hook.hive.minThreads - core number of threads. default 5
>   atlas.hook.hive.maxThreads - maximum number of threads. default 5
>   atlas.hook.hive.keepAliveTime - keep alive time in msecs. default 10
>   atlas.hook.hive.queueSize - queue size for the threadpool. default 10000





Hive Hook - what is missing ?

2016-12-26 Thread Russell Anderson


Hi dev list,

I have built Atlas 0.7rc2 - the Hive Import and the Dashboard are working,
and generally things appear to be working as expected.

However, I have followed the instructions below but cannot seem to get the
Hive Hook to detect Hive table creations and add them to the metadata /
lineage.

Can anyone suggest how to debug what could be missing despite following the
instructions below? Any and all help appreciated!!!

Regards,

Russ.

=

Hive Hook


Hive supports listeners on hive command execution using hive hooks. This is
used to add/update/remove entities in Atlas using the model defined in
org.apache.atlas.hive.model.HiveDataModelGenerator. The hook submits the
request to a thread pool executor to avoid blocking the command execution.
The thread submits the entities as message to the notification server and
atlas server reads these messages and registers the entities. Follow these
instructions in your hive set-up to add hive hook for Atlas:
  Set-up atlas hook in hive-site.xml of your hive configuration:

    <property>
      <name>hive.exec.post.hooks</name>
      <value>org.apache.atlas.hive.hook.HiveHook</value>
    </property>

    <property>
      <name>atlas.cluster.name</name>
      <value>primary</value>
    </property>

  Add 'export HIVE_AUX_JARS_PATH=<atlas package>/hook/hive' in
  hive-env.sh of your hive configuration
  Copy <atlas-conf>/atlas-application.properties to the hive conf
  directory.
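
As a concrete illustration of the last two steps - a minimal sketch only,
assuming Atlas was unpacked to /opt/atlas and the Hive configuration lives in
/etc/hive/conf (both paths are assumptions):

    # append the aux-jars export to hive-env.sh (assumed locations)
    echo 'export HIVE_AUX_JARS_PATH=/opt/atlas/hook/hive' >> /etc/hive/conf/hive-env.sh
    # copy the Atlas client properties next to the Hive configuration
    cp /opt/atlas/conf/atlas-application.properties /etc/hive/conf/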


The following properties in <atlas-conf>/atlas-application.properties
control the thread pool and notification details:
  atlas.hook.hive.synchronous - boolean, true to run the hook
  synchronously. default false. Recommended to be set to false to avoid
  delays in hive query completion.
  atlas.hook.hive.numRetries - number of retries for notification
  failure. default 3
  atlas.hook.hive.minThreads - core number of threads. default 5
  atlas.hook.hive.maxThreads - maximum number of threads. default 5
  atlas.hook.hive.keepAliveTime - keep alive time in msecs. default 10
  atlas.hook.hive.queueSize - queue size for the threadpool. default 10000
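
Restating the hook tuning properties above as a single sketch of the hook
section of <atlas-conf>/atlas-application.properties (the values below simply
repeat the documented defaults; whether any of them need tuning depends on
the workload):

    # Atlas Hive Hook thread pool / notification settings (documented defaults)
    atlas.hook.hive.synchronous=false
    atlas.hook.hive.numRetries=3
    atlas.hook.hive.minThreads=5
    atlas.hook.hive.maxThreads=5
    atlas.hook.hive.keepAliveTime=10
    atlas.hook.hive.queueSize=10000
    atlas.cluster.name=primary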


Re: [jira] [Resolved] (ATLAS-1298) UI never finishes loading

2016-12-21 Thread Russell Anderson


This works!!! After clearing the cache

Great work

Russ

Sent from my iPhone

> On Dec 20, 2016, at 11:56 PM, Keval Bhatt (JIRA)  wrote:
>
>
>
[ 
https://issues.apache.org/jira/browse/ATLAS-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

>
> Keval Bhatt resolved ATLAS-1298.
> 
>Resolution: Fixed
>
> Committed to 0.7-incubating
(https://github.com/apache/incubator-atlas/commit/19f491d95c805b568aa5b1e2fa9af21f4e0062e9)

>
>> UI never finishes loading
>> -
>>
>>Key: ATLAS-1298
>>URL: https://issues.apache.org/jira/browse/ATLAS-1298
>>Project: Atlas
>> Issue Type: Bug
>>   Affects Versions: 0.7-incubating
>>Environment: Atlas 0.7 - Profile Berkeley/ElasticSearch
>>   Reporter: PJ Van Aeken
>>   Assignee: Keval Bhatt
>>Fix For: 0.7.1-incubating
>>
>>Attachments: ATLAS-1298.patch, graycol.gif
>>
>>
>> The REST API  works fine, and I can see the data from the quickstart
script but when I go to the UI, I only get the login screen, followed by
the two-tone background and a "loading circle" which never completes
loading. There are no errors in application.log.
>> Note: I am running inside a docker container but port 21000 is exposed
to the host. I am querying the API from my browser on the host machine, so
not from inside the docker container. It works with both localhost and the
internal docker ip.
>> Any help debugging/fixing this would be greatly appreciated.
>> *EDIT:* Running the master branch outside of docker,
"mylaptopname:21000" works just fine but "localhost:21000" suffers from the
same issue.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>


Re: [jira] [Updated] (ATLAS-1298) UI never finishes loading

2016-12-20 Thread Russell Anderson

Hi,

I made the code changes identified in the workaround from ATLAS-1298.patch.

However, I started with Atlas 0.7rc2 as the code base.

My build was run with 'mvn clean package -Pdist,berkeley-elasticsearch
-DskipTests -DskipCheck=true'

All completed successfully - however, the same behavior results: the loading
circle never completes.

The import-hive.sh works fine, and the REST API returns lineage and works
as expected.
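
Roughly, the rebuild amounts to the following (a sketch only; the source
directory name and the use of git apply are assumptions - patch -p1 would
work equally well):

    cd apache-atlas-0.7-rc2-sources      # assumed directory name for the 0.7rc2 source tree
    git apply ATLAS-1298.patch           # or: patch -p1 < ATLAS-1298.patch
    mvn clean package -Pdist,berkeley-elasticsearch -DskipTests -DskipCheck=true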

Regards,

Russ.




From:   "Keval Bhatt (JIRA)" 
To: dev@atlas.incubator.apache.org
Date:   12/20/2016 12:38 AM
Subject:[jira] [Updated] (ATLAS-1298) UI never finishes loading




 [
https://issues.apache.org/jira/browse/ATLAS-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keval Bhatt updated ATLAS-1298:
---
Fix Version/s: 0.7.1-incubating

> UI never finishes loading
> -
>
> Key: ATLAS-1298
> URL: https://issues.apache.org/jira/browse/ATLAS-1298
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.7-incubating
> Environment: Atlas 0.7 - Profile Berkeley/ElasticSearch
>Reporter: PJ Van Aeken
>Assignee: Keval Bhatt
> Fix For: 0.7.1-incubating
>
> Attachments: ATLAS-1298.patch, graycol.gif
>
>
> The REST API  works fine, and I can see the data from the quickstart
script but when I go to the UI, I only get the login screen, followed by
the two-tone background and a "loading circle" which never completes
loading. There are no errors in application.log.
> Note: I am running inside a docker container but port 21000 is exposed to
the host. I am querying the API from my browser on the host machine, so not
from inside the docker container. It works with both localhost and the
internal docker ip.
> Any help debugging/fixing this would be greatly appreciated.
> *EDIT:* Running the master branch outside of docker, "mylaptopname:21000"
works just fine but "localhost:21000" suffers from the same issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)





[jira] [Updated] (ATLAS-1298) UI never finishes loading

2016-12-17 Thread Russell Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Anderson updated ATLAS-1298:

Attachment: graycol.gif


Hi again,

I just realized that this is NOT resolved for 0.7 - so I will apply the patch
once it becomes available.

It is already declared as a MAJOR bug - created on November 16, 2016.

Does anyone have a time frame as to when a fix might be developed ?

Regards,

Russ.



From:   Russell Anderson/Worcester/IBM
To: "Keval Bhatt (JIRA)" <j...@apache.org>
Cc: Russell Anderson/Worcester/IBM@IBMUS, Barry
Rosen/Worcester/IBM@IBMUS
Date:   12/16/2016 03:44 PM
Subject:Fw: [jira] [Commented] (ATLAS-1298) UI never finishes loading


*** THIS IS EXACTLY THE BEHAVIOR I AM SEEING *** I have Apache Atlas 0.7rc2

Question: In order to fix my Atlas 0.7rc2 build, am I required to pull the
changes myself and rebuild?

Thank you in advance,

Russ.


----- Forwarded by Russell Anderson/Worcester/IBM on 12/16/2016 03:33 PM
-

From:   "Keval Bhatt (JIRA)" <j...@apache.org>
To: dev@atlas.incubator.apache.org
Date:   11/17/2016 01:57 AM
Subject:[jira] [Commented] (ATLAS-1298) UI never finishes loading




[
https://issues.apache.org/jira/browse/ATLAS-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672981#comment-15672981
 ]

Keval Bhatt commented on ATLAS-1298:


Hi [~vanaepi]
Recently Atlas UI was not loading after fresh build due to
jquery-asBreadcrumbs plugin changes and this is fixed on master
( [ATLAS-1199 | https://issues.apache.org/jira/browse/ATLAS-1199] ) but the
issue is open for 0.7.

[Please checkout the latest code from master |
https://github.com/apache/incubator-atlas ]





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)






> UI never finishes loading
> -
>
> Key: ATLAS-1298
> URL: https://issues.apache.org/jira/browse/ATLAS-1298
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.7-incubating
> Environment: Atlas 0.7 - Profile Berkeley/ElasticSearch
>Reporter: PJ Van Aeken
> Attachments: graycol.gif
>
>
> The REST API  works fine, and I can see the data from the quickstart script 
> but when I go to the UI, I only get the login screen, followed by the 
> two-tone background and a "loading circle" which never completes loading. 
> There are no errors in application.log.
> Note: I am running inside a docker container but port 21000 is exposed to the 
> host. I am querying the API from my browser on the host machine, so not from 
> inside the docker container. It works with both localhost and the internal 
> docker ip.
> Any help debugging/fixing this would be greatly appreciated.
> *EDIT:* Running the master branch outside of docker, "mylaptopname:21000" 
> works just fine but "localhost:21000" suffers from the same issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ATLAS-1298) UI never finishes loading

2016-12-16 Thread Russell Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755469#comment-15755469
 ] 

Russell Anderson commented on ATLAS-1298:
-



*** THIS IS EXACTLY THE BEHAVIOR I AM SEEING *** I have Apache Atlas 0.7rc2

Question: In order to fix my Atlas 0.7rc2 build, am I required to pull the
changes myself and rebuild?

Thank you in advance,

Russ.


- Forwarded by Russell Anderson/Worcester/IBM on 12/16/2016 03:33 PM
-

From:   "Keval Bhatt (JIRA)" <j...@apache.org>
To: dev@atlas.incubator.apache.org
Date:   11/17/2016 01:57 AM
Subject:[jira] [Commented] (ATLAS-1298) UI never finishes loading




[
https://issues.apache.org/jira/browse/ATLAS-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672981#comment-15672981
 ]

Keval Bhatt commented on ATLAS-1298:


Hi [~vanaepi]
Recently Atlas UI was not loading after fresh build due to
jquery-asBreadcrumbs plugin changes and this is fixed on master
( [ATLAS-1199 | https://issues.apache.org/jira/browse/ATLAS-1199] ) but the
issue is open for 0.7.

[Please checkout the latest code from master |
https://github.com/apache/incubator-atlas ]





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




> UI never finishes loading
> -
>
> Key: ATLAS-1298
> URL: https://issues.apache.org/jira/browse/ATLAS-1298
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.7-incubating
> Environment: Atlas 0.7 - Profile Berkeley/ElasticSearch
>Reporter: PJ Van Aeken
>
> The REST API  works fine, and I can see the data from the quickstart script 
> but when I go to the UI, I only get the login screen, followed by the 
> two-tone background and a "loading circle" which never completes loading. 
> There are no errors in application.log.
> Note: I am running inside a docker container but port 21000 is exposed to the 
> host. I am querying the API from my browser on the host machine, so not from 
> inside the docker container. It works with both localhost and the internal 
> docker ip.
> Any help debugging/fixing this would be greatly appreciated.
> *EDIT:* Running the master branch outside of docker, "mylaptopname:21000" 
> works just fine but "localhost:21000" suffers from the same issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Atlas server will not start up correctly when using solr and hbase

2016-11-02 Thread Russell Anderson


(See attached file: application.log) - my application log after having
manually created the Solr indexes as seen below.

https://www.mail-archive.com/dev@atlas.incubator.apache.org/msg07653.html
from Hemanth Yamijala, dated Jun 22, 2016

The person reporting this in the above URL had exactly the same error as I
did.

The recommendation was to follow these steps and create the Solr indices -
which I did, however there was no change in startup behavior.

QUESTION 1: Am I wasting my time using this type of configuration OR is
there something else required for this type of configuration not
documented?

QUESTION 2 - Hemanth Yamijala noted that when using berkeleydb_elasticsearch
the Atlas server starts up without error - IS THIS the only way to get Atlas
to run?

Take note that I am using (or trying to use) Atlas 0.7rc2, and Hemanth tried
both this version and version 0.8-incubating-SNAPSHOT.

Any help or advice is appreciated.

Regards,

Russell G. Anderson
Center of Excellence
 Senior Technical Architect

- Forwarded by Russell Anderson/Worcester/IBM on 11/02/2016 05:39 PM
-

From:   Russell Anderson/Worcester/IBM
To: Russell Anderson/Worcester/IBM@IBMUS
Date:   11/02/2016 05:33 PM
Subject:Create the indices required by solr5




TO MAKE IT WORK RIGHT I HAD TO REMOVE the -d option, since it defaults to
the correct directory.

The following was found at:
http://atlas.incubator.apache.org/InstallationSteps.html

  For example, to bring up a Solr node listening on port 8983 on a
  machine, you can use the command:
  $SOLR_HOME/bin/solr start -c -z <zookeeper_host:port> -p 8983


  Run the following commands from SOLR_BIN (e.g. $SOLR_HOME/bin)
  directory to create collections in Solr corresponding to the indexes
  that Atlas uses. In the case that the ATLAS and SOLR instance are on
  2 different hosts,
first copy the required configuration files from ATLAS_HOME/conf/solr on
the ATLAS instance host to the Solr instance host. SOLR_CONF in the below
mentioned commands refer to the directory where the solr configuration
files have been copied to on Solr host:
  $SOLR_BIN/solr create -c vertex_index -d SOLR_CONF -shards #numShards
-replicationFactor #replicationFactor
  $SOLR_BIN/solr create -c edge_index -d SOLR_CONF -shards #numShards
-replicationFactor #replicationFactor
  $SOLR_BIN/solr create -c fulltext_index -d SOLR_CONF -shards #numShards
-replicationFactor #replicationFactor




Note: If numShards and replicationFactor are not specified, they default to
1 which suffices if you are trying out solr with ATLAS on a single node
instance. Otherwise specify numShards according to the number of hosts that
are in the Solr cluster and the maxShardsPerNode configuration. The number
of shards cannot exceed the total number of Solr nodes in your SolrCloud
cluster.


The number of replicas (replicationFactor) can be set according to the
redundancy required.


Also note that solr will automatically be called to create the indexes when
the Atlas server is started if the SOLR_BIN and SOLR_CONF environment
variables are set and the search indexing backend is set to 'solr5'.
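
For reference, a minimal sketch of the backend selection this note refers to,
in atlas-application.properties, assuming a single-node SolrCloud with
ZooKeeper on localhost:2181 (the host and port are assumptions):

    # select the Solr 5 index backend in SolrCloud mode (ZooKeeper address assumed)
    atlas.graph.index.search.backend=solr5
    atlas.graph.index.search.solr.mode=cloud
    atlas.graph.index.search.solr.zookeeper-url=localhost:2181

On a single node the three collections above can be created with -shards 1
and -replicationFactor 1, or with both flags omitted, since they default to 1.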

Russell G. Anderson
 Senior Technical Consultant