[jira] [Created] (ATLAS-2017) Import API: Make Request Parameter Optional
Ashutosh Mestry created ATLAS-2017:
--
Summary: Import API: Make Request Parameter Optional
Key: ATLAS-2017
URL: https://issues.apache.org/jira/browse/ATLAS-2017
Project: Atlas
Issue Type: Improvement
Components: atlas-core
Affects Versions: 0.8-incubating
Reporter: Ashutosh Mestry
Assignee: Ashutosh Mestry
Priority: Minor
Fix For: trunk

The existing Import API requires a _request_ parameter, which is an empty JSON object most of the time. Making this parameter optional would improve usability.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2047) NotificationHookConsumer: Exception Thrown by Kafka Consumer Ends up Filling Logs Due to Incorrect Handling
[ https://issues.apache.org/jira/browse/ATLAS-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ashutosh Mestry updated ATLAS-2047:
---
Attachment: ATLAS-2047-HookConsumer-ExceptionHandling.patch

> NotificationHookConsumer: Exception Thrown by Kafka Consumer Ends up Filling
> Logs Due to Incorrect Handling
>
> Key: ATLAS-2047
> URL: https://issues.apache.org/jira/browse/ATLAS-2047
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: trunk, 0.8-incubating
> Reporter: Ashutosh Mestry
> Assignee: Ashutosh Mestry
> Fix For: trunk
>
> Attachments: ATLAS-2047-HookConsumer-ExceptionHandling.patch
>
> *Background*
> _KafkaConsumer_ is abstracted by _AtlasKafkaConsumer_. This is run using
> _HookConsumer_, which is derived from _kafka.utils.ShutdownableThread_.
> The _ShutdownableThread_ manages the thread. It handles exceptions thrown in
> the _doWork_ method and logs them.
> *Problem*
> The exception reported in this bug is thrown by _KafkaConsumer_ and handled by
> _HookConsumer_. The exception is logged, but the thread keeps running. In cases
> where Kafka is in an irrecoverable state, this behavior ends up filling up the
> logs while providing no value in return.
> *Solution*
> Let _ShutdownableThread_ handle exceptions thrown by _KafkaConsumer_. Stop the
> thread if _KafkaConsumer_ suffers an irrecoverable error. This avoids the
> situation described in the problem section.
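The solution described above can be sketched as follows. This is a hypothetical, simplified stand-in for _kafka.utils.ShutdownableThread_ (the class name and the choice of exception types are illustrative, not the actual Atlas or Kafka code): an irrecoverable error stops the polling loop instead of being logged and retried forever.

```java
// Hypothetical sketch: stop the polling thread on irrecoverable errors
// instead of swallowing them and spinning (names are illustrative).
abstract class ShutdownableLoop extends Thread {
    private volatile boolean running = true;

    // Called repeatedly by run(); may throw.
    protected abstract void doWork();

    public void shutdown() { running = false; }

    public boolean isRunning() { return running; }

    @Override
    public void run() {
        while (running) {
            try {
                doWork();
            } catch (IllegalStateException e) {
                // Irrecoverable (e.g. broker permanently unreachable):
                // stop the thread rather than fill the logs.
                running = false;
            } catch (RuntimeException e) {
                // Recoverable: log once and keep polling.
                System.err.println("retrying after: " + e.getMessage());
            }
        }
    }
}
```

The key design point is the distinction between the two catch blocks: only the recoverable case loops, so an irrecoverable Kafka failure terminates the consumer thread cleanly.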
[jira] [Commented] (ATLAS-2047) NotificationHookConsumer: Exception Thrown by Kafka Consumer Ends up Filling Logs Due to Incorrect Handling
[ https://issues.apache.org/jira/browse/ATLAS-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127514#comment-16127514 ]
Ashutosh Mestry commented on ATLAS-2047:
CC: [~nixonrodrigues]
CC: [~madhan.neethiraj]
[jira] [Commented] (ATLAS-1716) Export API is successfully exporting metadata when deleted entity is given as starting point.
[ https://issues.apache.org/jira/browse/ATLAS-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126235#comment-16126235 ]
Ashutosh Mestry commented on ATLAS-1716:
[~ayubkhan] This behavior is by design. We don't have plans to change this.

> Export API is successfully exporting metadata when deleted entity is given as
> starting point.
> -
>
> Key: ATLAS-1716
> URL: https://issues.apache.org/jira/browse/ATLAS-1716
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.8-incubating
> Reporter: Ayub Pathan
> Assignee: Ashutosh Mestry
> Priority: Critical
> Fix For: 0.9-incubating
>
> The Export API should not honor the metadata export operation when a deleted
> entity is given as the starting point. But this is not the case: the API is
> exporting the deleted entity's metadata as well. Is this a new requirement?
[jira] [Assigned] (ATLAS-1948) Importing hive_table in a database which is a CTAS of another table in different database throws exception due to export order.
[ https://issues.apache.org/jira/browse/ATLAS-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ashutosh Mestry reassigned ATLAS-1948:
--
Assignee: Ashutosh Mestry

> Importing hive_table in a database which is a CTAS of another table in
> different database throws exception due to export order.
> ---
>
> Key: ATLAS-1948
> URL: https://issues.apache.org/jira/browse/ATLAS-1948
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.9-incubating
> Reporter: Sharmadha Sainath
> Assignee: Ashutosh Mestry
> Priority: Critical
> Fix For: 0.9-incubating
>
> Attachments: ImportTransformsErrorOnCTASonDiffDB.txt
>
> 1. Created 2 databases db1, db2 in cluster1.
> 2. Created 2 tables:
>    1. db1.t1
>    2. db2.t2 as select * from db1.t1
> 3. Exported db1.t1 into a zip file.
> 4. Imported the zip file into cluster2 with the transforms option:
> {code}
> {
>   "options": {
>     "transforms": "{ \"hive_column\": { \"qualifiedName\": [ \"replace:cl1:cl2\" ]} }"
>   }
> }
> {code}
> 5. Import fails with:
> {code}
> {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException:
> ObjectId is not valid
> AtlasObjectId{guid='51c77c1e-265e-46ab-bbb5-5316cf80a53c',
> typeName='hive_column', uniqueAttributes={}}"}
> {code}
> Only db1.t1 is imported into Atlas, without any lineage.
> Attached the exception stack trace.
> After this, exporting db2.t2 and importing completes successfully.
> That is, the first import of either db1.t1 or db2.t2 is unsuccessful with an
> exception; the next import is successful.
> The exception *doesn't* happen, and the tables are successfully imported, if
> both tables are in a single database. The export order if the tables are in
> the same db is:
> 1. table1
> 2. db
> 3. table2
> 4. hive_process
> 5. hive_column_lineage
> If the tables are in different dbs, the order is:
> 1. table1
> 2. db1
> 3. hive_process
> 4. hive_column_lineage
> 5. ctas table
> 6. db2
> which is possibly causing the issue.
> When cluster2 starts importing, it imports table1 and db1, and when it comes
> to hive_column_lineage, it finds that the column specified in
> hive_column_lineage is not in cluster2 yet, since the ctas table comes after
> hive_column_lineage in the import order, and it throws "ObjectId is not valid
> AtlasObjectId{guid='51c77c1e-265e-46ab-bbb5-5316cf80a53c',
> typeName='hive_column' ".
> Thanks [~ayubkhan] for the analysis.
--
This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-1942) Export/Import - Transforms option doesn't work for datatypes other than string
[ https://issues.apache.org/jira/browse/ATLAS-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086651#comment-16086651 ]
Ashutosh Mestry commented on ATLAS-1942:
I have updated the documentation to say that these transforms apply only to string types.

> Export/Import - Transforms option doesn't work for datatypes other than string
> --
>
> Key: ATLAS-1942
> URL: https://issues.apache.org/jira/browse/ATLAS-1942
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.9-incubating
> Reporter: Sharmadha Sainath
> Assignee: Ashutosh Mestry
> Fix For: 0.9-incubating
>
> Attachments: Import-API-Transform-Minor-update.patch
>
> Import with the transforms option on any string attribute works, and the
> string attribute is updated. Other data types like int, date, boolean etc.
> are not updated.
> transforms.json file:
> The following works:
> {code}
> {
>   "options": {
>     "transforms": "{ \"hive_table\": { \"qualifiedName\": [ \"replace:@cl1:@cl2\" ] }}"
>   }
> }
> {code}
> The following doesn't work:
> {code}
> {
>   "options": {
>     "transforms": "{ \"hive_table\": { \"retention\": [ \"replace:0:1\" ] }}"
>   }
> }
> {code}
> In both cases, the import is successful and there are no exceptions in the
> application logs.
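The string-only behavior documented in this comment can be illustrated with a small hypothetical sketch (the class and method names are illustrative, not the actual Atlas transform code): a replace transform rewrites string attribute values and leaves every other type untouched, which is why `qualifiedName` is updated but `retention` is not.

```java
// Hypothetical illustration of the documented behavior: the replace
// transform only affects string attribute values; other types (int,
// boolean, date) pass through unchanged.
class ReplaceTransform {
    static Object applyReplace(Object value, String from, String to) {
        if (value instanceof String) {
            return ((String) value).replace(from, to);
        }
        return value; // non-string attributes such as 'retention' are not updated
    }
}
```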
[jira] [Updated] (ATLAS-1942) Export/Import - Transforms option doesn't work for datatypes other than string
[ https://issues.apache.org/jira/browse/ATLAS-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ashutosh Mestry updated ATLAS-1942:
---
Attachment: Import-API-Transform-Minor-update.patch
[jira] [Updated] (ATLAS-1919) Export of hive_table with fetchType "connected" fails with duplicate entry. Seems like, same edge(asset) is traversed twice.
[ https://issues.apache.org/jira/browse/ATLAS-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ashutosh Mestry updated ATLAS-1919:
---
Attachment: ATLAS-1919.patch

> Export of hive_table with fetchType "connected" fails with duplicate entry.
> Seems like, same edge(asset) is traversed twice.
>
> Key: ATLAS-1919
> URL: https://issues.apache.org/jira/browse/ATLAS-1919
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.9-incubating
> Reporter: Ayub Pathan
> Priority: Critical
> Fix For: 0.9-incubating
>
> Attachments: ATLAS-1919.patch
>
> Export of hive_table with fetchType "connected" fails with duplicate entry.
> Seems like the same edge (asset) is traversed twice. Possibly a loop condition.
> Export request payload:
> {noformat}
> {
>   "itemsToExport": [
>     {
>       "typeName": "hive_table",
>       "uniqueAttributes": {
>         "qualifiedName": "any_random_table@cluster"
>       }
>     }
>   ],
>   "options": {
>     "fetchType" : "connected"
>   }
> }
> {noformat}
> stacktrace:
> {noformat}
> 2017-07-04 08:44:19,159 INFO - [pool-2-thread-9 -
> 8bc8d958-e650-4351-acf0-24c28d15627a:] ~
> export(item=AtlasObjectId{guid='null', typeName='hive_table',
> uniqueAttributes={qualifiedName:db1.table1@cl1}}; matchType=null,
> fetchType=CONNECTED): found 1 entities (ExportService:280)
> 2017-07-04 08:44:19,794 ERROR - [pool-2-thread-9 -
> 8bc8d958-e650-4351-acf0-24c28d15627a:] ~ Fetching entity failed for:
> AtlasObjectId{guid='null', typeName='hive_table',
> uniqueAttributes={qualifiedName:db1.table1@cl1}} (ExportService:207)
> org.apache.atlas.exception.AtlasBaseException: Error writing file
> 75a90231-3eca-4ba7-9e21-93e7ec2b0df2.
> at org.apache.atlas.repository.impexp.ZipSink.saveToZip(ZipSink.java:91) > at org.apache.atlas.repository.impexp.ZipSink.add(ZipSink.java:50) > at > org.apache.atlas.repository.impexp.ExportService.addEntity(ExportService.java:433) > at > org.apache.atlas.repository.impexp.ExportService.processEntity(ExportService.java:297) > at > org.apache.atlas.repository.impexp.ExportService.processObjectId(ExportService.java:198) > at > org.apache.atlas.repository.impexp.ExportService.processItems(ExportService.java:162) > at > org.apache.atlas.repository.impexp.ExportService.run(ExportService.java:95) > at > org.apache.atlas.web.resources.AdminResource.export(AdminResource.java:325) > at sun.reflect.GeneratedMethodAccessor238.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) > at > com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) > at > com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) > at > com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) > at > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) > at > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) > at > 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) > at > com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:286) > at >
[jira] [Commented] (ATLAS-1939) Export/Import Regression : NPE during import
[ https://issues.apache.org/jira/browse/ATLAS-1939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082249#comment-16082249 ]
Ashutosh Mestry commented on ATLAS-1939:
[~ssainath] The Import API now accepts multi-part data. Please see the updated documentation. This is what will work:
{code:java}
curl -g -X POST -u admin:admin -H "Content-Type: multipart/form-data" -H "Cache-Control: no-cache" -F data=@salesNewTypeAttrs.zip "http://localhost:21000/api/atlas/admin/import"
{code}

> Export/Import Regression : NPE during import
>
> Key: ATLAS-1939
> URL: https://issues.apache.org/jira/browse/ATLAS-1939
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.9-incubating
> Reporter: Sharmadha Sainath
> Priority: Blocker
> Attachments: Import_NPE.txt
>
> Exported a hive_table, created zip file t5.zip, and tried to import it into
> another cluster using
> {code}
> curl -v -X POST -u admin:admin -H "Content-Type: multipart/form-data" -H
> "Cache-Control: no-cache" --data-binary @t5.zip
> "http://host2:21000/api/atlas/admin/import"
> {code}
> The import request failed with a 500 internal server error, with an NPE in
> the application logs.
> Attached the exception stack trace.
[jira] [Comment Edited] (ATLAS-1939) Export/Import Regression : NPE during import
[ https://issues.apache.org/jira/browse/ATLAS-1939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082249#comment-16082249 ]
Ashutosh Mestry edited comment on ATLAS-1939 at 7/11/17 2:26 PM:
-
[~ssainath] The Import API now accepts multi-part data. Please see the updated documentation. This is what will work:
{code:java}
curl -g -X POST -u admin:admin -H "Content-Type: multipart/form-data" -H "Cache-Control: no-cache" -F request=@importOptions.json -F data=@salesNewTypeAttrs.zip "http://localhost:21000/api/atlas/admin/import"
{code}

was (Author: ashutoshm):
[~ssainath] The Import API now accepts multi-part data. Please see the updated documentation. This is what will work:
{code:java}
curl -g -X POST -u admin:admin -H "Content-Type: multipart/form-data" -H "Cache-Control: no-cache" -F data=@salesNewTypeAttrs.zip "http://localhost:21000/api/atlas/admin/import"
{code}
[jira] [Commented] (ATLAS-1939) Export/Import Regression : NPE during import
[ https://issues.apache.org/jira/browse/ATLAS-1939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087923#comment-16087923 ]
Ashutosh Mestry commented on ATLAS-1939:
I think it's a good fix!
[jira] [Updated] (ATLAS-1950) Import Transform option using supertype instead of a specific type
[ https://issues.apache.org/jira/browse/ATLAS-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ashutosh Mestry updated ATLAS-1950:
---
Attachment: ATLAS-1950.patch

> Import Transform option using supertype instead of a specific type
> --
>
> Key: ATLAS-1950
> URL: https://issues.apache.org/jira/browse/ATLAS-1950
> Project: Atlas
> Issue Type: Improvement
> Components: atlas-core
> Affects Versions: 0.9-incubating
> Reporter: Sharmadha Sainath
> Assignee: Ashutosh Mestry
> Attachments: ATLAS-1950.patch
>
> Users can provide a transforms option during import to replace @cl1 in
> hive_table with @cl2 using the following JSON:
> {code}
> {
>   "options": {
>     "transforms": "{ \"hive_table\": { \"qualifiedName\": [ \"replace:@cl1:@cl2\" ] } }"
>   }
> }
> {code}
> It would be easier to specify a supertype like 'Asset' to transform all types
> in the exported items which inherit from the supertype, so that they have
> "@cl1" replaced with "@cl2", like:
> {code}
> {
>   "options": {
>     "transforms": "{ \"Asset\": { \"qualifiedName\": [ \"replace:@cl1:@cl2\" ] } }"
>   }
> }
> {code}
[jira] [Assigned] (ATLAS-1919) Export of hive_table with fetchType "connected" fails with duplicate entry. Seems like, same edge(asset) is traversed twice.
[ https://issues.apache.org/jira/browse/ATLAS-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ashutosh Mestry reassigned ATLAS-1919:
--
Assignee: Ashutosh Mestry
[jira] [Updated] (ATLAS-1968) Export/Import - Exception thrown when Import fired with zip file found on server
[ https://issues.apache.org/jira/browse/ATLAS-1968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ashutosh Mestry updated ATLAS-1968:
---
Attachment: ATLAS-1968-Importfile-option.patch

> Export/Import - Exception thrown when Import fired with zip file found on
> server
> -
>
> Key: ATLAS-1968
> URL: https://issues.apache.org/jira/browse/ATLAS-1968
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.9-incubating, 0.8.1-incubating
> Reporter: Sharmadha Sainath
> Assignee: Ashutosh Mestry
> Priority: Blocker
> Attachments: ATLAS-1968-Importfile-option.patch,
> ImportExceptionWhenZipFileOnServer.txt
>
> 1. Exported an entity on cluster1, created a zip file, and placed it in the
> /tmp location of cluster2.
> 2. Fired the following import command against cluster2:
> {code}
> curl -v -X POST -u admin:admin
> "http://cluster2:21000/api/atlas/admin/importFile?FILENAME=/tmp/entity.zip"
> {code}
> The import command failed with a 500 internal server error.
> Attached the exception stack trace found in cluster2's application logs.
[jira] [Closed] (ATLAS-1889) Failure in simultaneous updates to an entity tag association
[ https://issues.apache.org/jira/browse/ATLAS-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ashutosh Mestry closed ATLAS-1889.
--

> Failure in simultaneous updates to an entity tag association
>
> Key: ATLAS-1889
> URL: https://issues.apache.org/jira/browse/ATLAS-1889
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.8-incubating
> Reporter: Ashutosh Mestry
> Assignee: Ashutosh Mestry
> Fix For: trunk, 0.8.1-incubating
>
> Attachments: ATLAS-1889-Handling-multiple-entities.patch
>
> *Background*
> When multiple requests simultaneously attempt to change the tags associated
> with an entity, the current implementation might result in APIs returning
> fewer tags for the entity than are actually associated.
> This causes the UI not to behave correctly.
>
> *Solution*
> Implement a synchronization mechanism.
>
> Steps to duplicate the problem:
> * Create several tags, e.g. tag1, tag2, tag3, tag4.
> * Attach these tags to an entity, say 'testtable1'.
> * Use the following curl commands to delete the tags:
> {code}
> curl -X DELETE -u admin:admin
> 'http://localhost:21000/api/atlas/entities/a2d0b023-aafe-4ddd-bb90-65164af8796c/traits/tag1'
> -H 'Content-Type: application/json' &
> curl -X DELETE -u admin:admin
> 'http://localhost:21000/api/atlas/entities/a2d0b023-aafe-4ddd-bb90-65164af8796c/traits/tag2'
> -H 'Content-Type: application/json' &
> curl -X DELETE -u admin:admin
> 'http://localhost:21000/api/atlas/entities/a2d0b023-aafe-4ddd-bb90-65164af8796c/traits/tag3'
> -H 'Content-Type: application/json' &
> curl -X DELETE -u admin:admin
> 'http://localhost:21000/api/atlas/entities/a2d0b023-aafe-4ddd-bb90-65164af8796c/traits/tag4'
> -H 'Content-Type: application/json'
> {code}
> You will notice that entity _testtable1_ ends up with some tags not deleted.
> Additional querying with gremlin will show that the edges are deleted
> correctly.
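The issue describes the synchronization mechanism only at a high level. One common shape for it is serializing tag updates per entity GUID, so concurrent requests for the same entity cannot overwrite each other's results. A hypothetical sketch (class and method names are illustrative, not the actual patch):

```java
// Hypothetical sketch: funnel all tag updates for a given entity GUID
// through one lock object, so simultaneous add/delete requests for the
// same entity are applied one at a time.
class EntityLockRegistry {
    private final java.util.concurrent.ConcurrentHashMap<String, Object> locks =
            new java.util.concurrent.ConcurrentHashMap<>();

    public void withEntityLock(String guid, Runnable update) {
        // One lock object per GUID; updates to different entities
        // still proceed in parallel.
        Object lock = locks.computeIfAbsent(guid, g -> new Object());
        synchronized (lock) {
            update.run();
        }
    }
}
```

With this in place, the four concurrent DELETE requests in the repro steps would each acquire the same per-GUID lock before mutating the entity's tag list.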
[jira] [Closed] (ATLAS-1734) Import API: Add Support to Update Attributes of Existing Types During Import
[ https://issues.apache.org/jira/browse/ATLAS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ashutosh Mestry closed ATLAS-1734.
--

> Import API: Add Support to Update Attributes of Existing Types During Import
>
> Key: ATLAS-1734
> URL: https://issues.apache.org/jira/browse/ATLAS-1734
> Project: Atlas
> Issue Type: Improvement
> Components: atlas-core
> Affects Versions: trunk, 0.8-incubating
> Reporter: Ashutosh Mestry
> Assignee: Ashutosh Mestry
> Labels: patch
> Fix For: trunk
>
> Attachments: ATLAS-1734-Import-with-additional-attribute-processi.patch
>
> *Background*
> The existing version of the Import API allows importing types that are not
> already present in the system being imported into. Import fails in cases
> where the data being imported has additional attributes on a type that
> already exists.
> *Solution*
> During import, existing types are checked to determine if the types being
> imported have additional attributes. If additional attributes exist, the
> existing type is updated with the new attributes. The import then proceeds.
> *Impact Assessment*
> - Import API:
> -- Type import: additional capability (mentioned above).
> -- Entity creation and processing: no impact.
> - Export API: no impact.
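The solution step ("existing type is updated with the new attributes") amounts to a union of attribute definitions keyed by attribute name. A hypothetical sketch, with attribute definitions simplified to name-to-typename maps (the method name and the `replicatedTo` attribute are illustrative, not Atlas API):

```java
// Hypothetical sketch of the 'update existing type with new attributes'
// step: attributes present in the imported type definition but missing
// from the existing one are appended; existing definitions win.
class TypeAttributeMerge {
    static java.util.Map<String, String> mergeAttributes(
            java.util.Map<String, String> existing,
            java.util.Map<String, String> imported) {
        java.util.Map<String, String> merged = new java.util.LinkedHashMap<>(existing);
        for (java.util.Map.Entry<String, String> e : imported.entrySet()) {
            merged.putIfAbsent(e.getKey(), e.getValue()); // keep existing definitions
        }
        return merged;
    }
}
```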
[jira] [Assigned] (ATLAS-1960) Import command fired on passive server throws Exception
[ https://issues.apache.org/jira/browse/ATLAS-1960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ashutosh Mestry reassigned ATLAS-1960:
--
Assignee: Ashutosh Mestry

> Import command fired on passive server throws Exception
> ---
>
> Key: ATLAS-1960
> URL: https://issues.apache.org/jira/browse/ATLAS-1960
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.9-incubating
> Reporter: Sharmadha Sainath
> Assignee: Ashutosh Mestry
> Priority: Critical
> Attachments: ImportOnPassiveHost.txt
>
> 1. Fired the import command on an ACTIVE host, which resulted in a successful
> import.
> 2. On firing the import command with a zip file containing the exported items
> of a hive_table on a PASSIVE host, the following exception is thrown:
> {code}
> {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException:
> org.apache.atlas.typesystem.exception.TypeNotFoundException: Unknown
> datatype: hive_table"}
> {code}
> Expected a 30X response with a redirection URL.
> Attached the complete exception stack trace found in the PASSIVE host's
> application logs.
[jira] [Commented] (ATLAS-1982) Update references to "incubator" in website
[ https://issues.apache.org/jira/browse/ATLAS-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16097888#comment-16097888 ]
Ashutosh Mestry commented on ATLAS-1982:
I tried:
* mvn clean package
* mvn clean package -DskipTests
* ./docs: mvn site:run
The .JAR & .WAR generated still have "incubating" in them. Hope that is expected.
Thanks for painstakingly arranging all the developer names. Changes look good. +1 on the patch.

> Update references to "incubator" in website
> ---
>
> Key: ATLAS-1982
> URL: https://issues.apache.org/jira/browse/ATLAS-1982
> Project: Atlas
> Issue Type: Task
> Affects Versions: trunk
> Reporter: Madhan Neethiraj
> Assignee: Madhan Neethiraj
> Fix For: trunk
>
> Attachments: ATLAS-1982.patch
>
> Now that Atlas has graduated to a top-level project, update the website to
> remove incubator-related references (git repo, emails, URLs).
[jira] [Updated] (ATLAS-1996) split the atlas application log file if it grows big
[ https://issues.apache.org/jira/browse/ATLAS-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ashutosh Mestry updated ATLAS-1996:
---
Attachment: ATLAS-1996.patch

> split the atlas application log file if it grows big
>
> Key: ATLAS-1996
> URL: https://issues.apache.org/jira/browse/ATLAS-1996
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.9-incubating
> Reporter: Deepak Sharma
> Assignee: Ashutosh Mestry
> Fix For: 0.9-incubating
>
> Attachments: ATLAS-1996.patch
>
> The application log size for Atlas is growing into the GBs; sometimes it goes
> up to 100 GB. It is better to split it if it grows beyond around 100-200 MB.
> For example, some instances are seen with a 20 GB log file.
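A standard way to cap log growth with log4j 1.x (which Atlas configures via an XML file) is a size-based RollingFileAppender. The sketch below is illustrative only; the appender name, file path, backup count, and conversion pattern are assumptions, not the contents of the actual ATLAS-1996.patch:

```xml
<!-- Hedged sketch: roll the application log at ~200MB and keep a bounded
     number of backups, so the log can never grow to tens of GB.
     Appender name, path, and pattern here are illustrative. -->
<appender name="FILE" class="org.apache.log4j.RollingFileAppender">
    <param name="File" value="${atlas.log.dir}/application.log"/>
    <param name="Append" value="true"/>
    <param name="MaxFileSize" value="200MB"/>
    <param name="MaxBackupIndex" value="20"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p - [%t:%x] ~ %m (%C{1}:%L)%n"/>
    </layout>
</appender>
```

When `MaxFileSize` is reached, log4j renames the current file to `application.log.1` (shifting older backups up to `MaxBackupIndex`) and starts a fresh file, bounding total disk use.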
[jira] [Assigned] (ATLAS-1996) split the atlas application log file if it grows big
[ https://issues.apache.org/jira/browse/ATLAS-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry reassigned ATLAS-1996: -- Assignee: Ashutosh Mestry > split the atlas application log file if it grows big > > > Key: ATLAS-1996 > URL: https://issues.apache.org/jira/browse/ATLAS-1996 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Deepak Sharma >Assignee: Ashutosh Mestry > Fix For: 0.9-incubating > > > The Atlas application log size grows into GBs; > sometimes it goes up to 100 GB. > It is better to split the log once it grows beyond around 100-200MB. > For example, some instances are seen with a 20GB log file. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
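Size-based rotation of the kind requested above is usually handled by the logging framework rather than by Atlas code. A hypothetical Log4j 1.x fragment (the file name, size limit, and backup count are illustrative; this is not Atlas's shipped configuration):

```xml
<!-- Hypothetical log4j fragment: roll application.log when it reaches
     100MB, keeping at most 20 archived files (~2GB total on disk). -->
<appender name="FILE" class="org.apache.log4j.RollingFileAppender">
    <param name="File" value="${atlas.log.dir}/application.log"/>
    <param name="MaxFileSize" value="100MB"/>
    <param name="MaxBackupIndex" value="20"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p - [%t:%x] ~ %m (%C{1}:%L)%n"/>
    </layout>
</appender>
```

With such a configuration the log is capped regardless of how long the server runs, which also addresses the 20GB instances mentioned in the report.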
[jira] [Commented] (ATLAS-1995) Performance of Entity Creation Can Be Improved By Using Index Query to Fetch Entity Using Unique Attributes
[ https://issues.apache.org/jira/browse/ATLAS-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102717#comment-16102717 ] Ashutosh Mestry commented on ATLAS-1995: Preliminary analysis of the implementation seems to show a 50% improvement. The improvement depends on the amount of data present in the database. For databases with little data, the two approaches yield comparable fetch times. For databases with a large amount of data, the improvement is upwards of 50%. > Performance of Entity Creation Can Be Improved By Using Index Query to Fetch > Entity Using Unique Attributes > > > Key: ATLAS-1995 > URL: https://issues.apache.org/jira/browse/ATLAS-1995 > Project: Atlas > Issue Type: Improvement > Components: atlas-core >Affects Versions: trunk, 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > > *Background* > On profiling the entity creation flow, it was observed that several calls are > made to _AtlasGraphUtilsV1.getVertexByUniqueAttributes_. > These calls result in querying the database using a graph query. There is > potential for improving this if an index query were used. > *Analysis* > Upon experimentation, it was found that there is a 50% improvement in > performance of entity creation if this method is replaced with an equivalent > that uses _indexQuery_. > Also, when a large number of entities are created (typically using > _import_hive.sh_), the CPU usage on Atlas was reduced, as Solr was being > used for doing some of the work. > *Implementation Guidance* > * Add a new method, _AtlasGraphUtilsV1.getAtlasVertexFromIndexQuery_, that > will use _AtlasGraphProvider.indexQuery_ to fetch vertices. > * Ensure that the query created is 'escaped' appropriately. > * Include logic to fall back to a graph query if the property being queried > for is not indexed. > Since this is a high-impact change, it will be worthwhile to verify other > dependent modules. > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
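The implementation guidance in the issue boils down to a small dispatch: use the (Solr-backed) index query when the attribute is indexed, escaping the value first, otherwise fall back to the graph query. A rough Python sketch of that logic (the function names, the escaping character set, and the backends are illustrative; the real code would live in _AtlasGraphUtilsV1_):

```python
# Characters with special meaning in Lucene/Solr query syntax (illustrative subset).
SPECIAL = set('+-&|!(){}[]^"~*?:\\/')

def escape(value):
    """Backslash-escape special characters so the value is safe in an index query."""
    return "".join("\\" + ch if ch in SPECIAL else ch for ch in value)

def get_vertex_by_unique_attribute(attr, value, indexed_attrs,
                                   index_query, graph_query):
    """Prefer the index query; fall back to a graph query when the
    attribute is not indexed -- the fallback called out in the issue."""
    if attr in indexed_attrs:
        return index_query(f'{attr}:"{escape(value)}"')
    return graph_query(attr, value)

# Stub backends standing in for AtlasGraphProvider.indexQuery / a graph query:
hits = get_vertex_by_unique_attribute(
    "qualifiedName", "db.table@cluster",
    indexed_attrs={"qualifiedName"},
    index_query=lambda q: [("vertex-1", q)],
    graph_query=lambda a, v: [("vertex-2", a, v)],
)
```

The dispatch keeps callers unchanged while routing indexed lookups to the faster path, which is consistent with the reported CPU reduction (Solr does part of the work).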
[jira] [Assigned] (ATLAS-1716) Export API is successfully exporting metadata when deleted entity is given as starting point.
[ https://issues.apache.org/jira/browse/ATLAS-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry reassigned ATLAS-1716: -- Assignee: Ashutosh Mestry > Export API is successfully exporting metadata when deleted entity is given as > starting point. > - > > Key: ATLAS-1716 > URL: https://issues.apache.org/jira/browse/ATLAS-1716 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8-incubating >Reporter: Ayub Pathan >Assignee: Ashutosh Mestry >Priority: Critical > Fix For: 0.9-incubating > > > The export API should not honor the metadata export operation when a deleted > entity is given as the starting point. But this is not the case; the API is > exporting the deleted entity's metadata as well. Is this a new requirement? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2033) AbstractNotification Message Serializer Uses Pretty JSON
[ https://issues.apache.org/jira/browse/ATLAS-2033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2033: --- Attachment: ATLAS-2033.patch > AbstractNotification Message Serializer Uses Pretty JSON > > > Key: ATLAS-2033 > URL: https://issues.apache.org/jira/browse/ATLAS-2033 > Project: Atlas > Issue Type: Improvement > Components: atlas-core >Affects Versions: trunk, 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry >Priority: Minor > Fix For: trunk > > Attachments: ATLAS-2033.patch > > > _InstanceSerialization.scala_ uses _writePretty_ instead of _write_. This > causes prettified JSON to be generated. > Pretty JSON is about 30% larger than compact JSON. > This change will help in dealing with Kafka's message size limit of 1MB. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2033) AbstractNotification Message Serializer Uses Pretty JSON
Ashutosh Mestry created ATLAS-2033: -- Summary: AbstractNotification Message Serializer Uses Pretty JSON Key: ATLAS-2033 URL: https://issues.apache.org/jira/browse/ATLAS-2033 Project: Atlas Issue Type: Improvement Components: atlas-core Affects Versions: 0.8-incubating, trunk Reporter: Ashutosh Mestry Assignee: Ashutosh Mestry Priority: Minor Fix For: trunk _InstanceSerialization.scala_ uses _writePretty_ instead of _write_. This causes prettified JSON to be generated. Pretty JSON is about 30% larger than compact JSON. This change will help in dealing with Kafka's message size limit of 1MB. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
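The size difference between pretty and compact serialization is easy to see with any JSON library; a small Python illustration (the entity shape is made up, not an actual Atlas notification message):

```python
import json

# A made-up nested entity, standing in for an Atlas notification payload.
entity = {
    "typeName": "hive_table",
    "attributes": {
        "name": "customers",
        "qualifiedName": "default.customers@cluster",
        "columns": [{"name": f"col_{i}", "type": "string"} for i in range(20)],
    },
}

pretty = json.dumps(entity, indent=4)                # writePretty-style output
compact = json.dumps(entity, separators=(",", ":"))  # write-style output

# Indentation and newlines make the pretty form substantially larger,
# while both forms decode to the identical object.
savings = 1 - len(compact) / len(pretty)
print(f"pretty={len(pretty)}B compact={len(compact)}B saved={savings:.0%}")
```

The exact savings depend on nesting depth and key lengths, which is why the issue's "about 30%" figure is an estimate rather than a constant.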
[jira] [Updated] (ATLAS-2037) Unit Test Failure: NotificationHookConsumerTest.testConsumersAreStoppedWhenInstanceBecomesPassive
[ https://issues.apache.org/jira/browse/ATLAS-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2037: --- Attachment: ATLAS-2037-HookConsumer.patch > Unit Test Failure: > NotificationHookConsumerTest.testConsumersAreStoppedWhenInstanceBecomesPassive > - > > Key: ATLAS-2037 > URL: https://issues.apache.org/jira/browse/ATLAS-2037 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: trunk >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry >Priority: Minor > Fix For: trunk, 0.8.1-incubating > > Attachments: ATLAS-2037-HookConsumer.patch > > > *Analysis* > - The test has a few mocks, but we don't mock _HookConsumer_ (derived from > _ShutdownableThread_). Hence, the object needs to be used in line with its > usage pattern. > - The _ShutdownableThread_ has a _CountDownLatch_ that is checked during > shutdown. > In the test, the _HookConsumer_ was not being started at all. This caused > _shutdownLatch_ (of type _CountDownLatch_) not to decrement, since no work > was performed even though work was anticipated. The test thus never > completed, since the thread was in a perpetual wait. > *Solution* > The _HookConsumer_ should be used such that it is started and shut down, so > the test passes. > _HookConsumer_ should check _shouldRun_ in the _shutdown_ method, so that the > case where _shutdown_ is called without _start_ is handled. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2037) Unit Test Failure: NotificationHookConsumerTest.testConsumersAreStoppedWhenInstanceBecomesPassive
Ashutosh Mestry created ATLAS-2037: -- Summary: Unit Test Failure: NotificationHookConsumerTest.testConsumersAreStoppedWhenInstanceBecomesPassive Key: ATLAS-2037 URL: https://issues.apache.org/jira/browse/ATLAS-2037 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: trunk Reporter: Ashutosh Mestry Assignee: Ashutosh Mestry Priority: Minor Fix For: trunk, 0.8.1-incubating *Analysis* - The test has a few mocks, but we don't mock _HookConsumer_ (derived from _ShutdownableThread_). Hence, the object needs to be used in line with its usage pattern. - The _ShutdownableThread_ has a _CountDownLatch_ that is checked during shutdown. In the test, the _HookConsumer_ was not being started at all. This caused _shutdownLatch_ (of type _CountDownLatch_) not to decrement, since no work was performed even though work was anticipated. The test thus never completed, since the thread was in a perpetual wait. *Solution* The _HookConsumer_ should be used such that it is started and shut down, so the test passes. _HookConsumer_ should check _shouldRun_ in the _shutdown_ method, so that the case where _shutdown_ is called without _start_ is handled. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
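The hang and the proposed fix can be sketched with a minimal consumer (Python threading here stands in for _ShutdownableThread_; this illustrates the pattern only, not Atlas's actual code): shutdown waits on a latch that only the worker thread releases, so calling shutdown on a never-started consumer must short-circuit on the shouldRun flag.

```python
import threading

class HookConsumerSketch:
    """Illustrative stand-in for HookConsumer/ShutdownableThread."""

    def __init__(self):
        self.should_run = False
        self.shutdown_latch = threading.Event()  # plays the CountDownLatch role
        self.worker = threading.Thread(target=self._run)

    def start(self):
        self.should_run = True
        self.worker.start()

    def _run(self):
        # The doWork() loop would go here; release the latch on exit.
        self.shutdown_latch.set()

    def shutdown(self):
        # The fix: if start() was never called, nothing will ever release
        # the latch, so waiting on it would block forever.
        if not self.should_run:
            return False
        finished = self.shutdown_latch.wait(timeout=5)
        self.worker.join(timeout=5)
        return finished

consumer = HookConsumerSketch()
assert consumer.shutdown() is False  # returns immediately, no perpetual wait

started = HookConsumerSketch()
started.start()
assert started.shutdown() is True    # normal start/shutdown path completes
```

Without the `should_run` guard, the first `shutdown()` call above would block until the timeout, which is exactly the perpetual wait the test was hitting.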
[jira] [Created] (ATLAS-2063) Compressed HiveHook Messages
Ashutosh Mestry created ATLAS-2063: -- Summary: Compressed HiveHook Messages Key: ATLAS-2063 URL: https://issues.apache.org/jira/browse/ATLAS-2063 Project: Atlas Issue Type: Improvement Components: atlas-intg Affects Versions: 0.8-incubating Reporter: Ashutosh Mestry Assignee: Ashutosh Mestry Fix For: trunk *Background* Messages posted by hooks to Atlas Kafka topics are in JSON format. Kafka imposes a 1MB limit on the message size. Occasionally, depending on the operations performed, this threshold is reached. This results in messages being dropped. The entities are thus not reflected in Atlas. *Solution* Applying compression to these messages will alleviate the problem. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2064) Compressed HiveHook Messages
Ashutosh Mestry created ATLAS-2064: -- Summary: Compressed HiveHook Messages Key: ATLAS-2064 URL: https://issues.apache.org/jira/browse/ATLAS-2064 Project: Atlas Issue Type: Improvement Components: atlas-intg Affects Versions: 0.8-incubating Reporter: Ashutosh Mestry Assignee: Ashutosh Mestry Fix For: trunk *Background* Messages posted by hooks to Atlas Kafka topics are in JSON format. Kafka imposes a 1MB limit on the message size. Occasionally, depending on the operations performed, this threshold is reached. This results in messages being dropped. The entities are thus not reflected in Atlas. *Solution* Applying compression to these messages will alleviate the problem. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2064) Compress Hook Messages Posted to Kafka Atlas Topic
[ https://issues.apache.org/jira/browse/ATLAS-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2064: --- Summary: Compress Hook Messages Posted to Kafka Atlas Topic (was: Compress Hook Messages Posted to Kafka) > Compress Hook Messages Posted to Kafka Atlas Topic > -- > > Key: ATLAS-2064 > URL: https://issues.apache.org/jira/browse/ATLAS-2064 > Project: Atlas > Issue Type: Improvement > Components: atlas-intg >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2064-compressed-hook-message-envelope.patch > > > *Background* > Messages posted by hooks to Atlas Kafka topics are in JSON format. > Kafka imposes a 1MB limit on the message size. > Occasionally, depending on the operations performed, this threshold is reached. > This results in messages being dropped. > The entities are thus not reflected in Atlas. > *Solution* > Applying compression to these messages will alleviate the problem. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2064) Compress Hook Messages Posted to Kafka
[ https://issues.apache.org/jira/browse/ATLAS-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2064: --- Summary: Compress Hook Messages Posted to Kafka (was: Compressed HiveHook Messages) > Compress Hook Messages Posted to Kafka > -- > > Key: ATLAS-2064 > URL: https://issues.apache.org/jira/browse/ATLAS-2064 > Project: Atlas > Issue Type: Improvement > Components: atlas-intg >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2064-compressed-hook-message-envelope.patch > > > *Background* > Messages posted by hooks to Atlas Kafka topics are in JSON format. > Kafka imposes a 1MB limit on the message size. > Occasionally, depending on the operations performed, this threshold is reached. > This results in messages being dropped. > The entities are thus not reflected in Atlas. > *Solution* > Applying compression to these messages will alleviate the problem. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2064) Compressed HiveHook Messages
[ https://issues.apache.org/jira/browse/ATLAS-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2064: --- Attachment: ATLAS-2064-compressed-hook-message-envelope.patch > Compressed HiveHook Messages > > > Key: ATLAS-2064 > URL: https://issues.apache.org/jira/browse/ATLAS-2064 > Project: Atlas > Issue Type: Improvement > Components: atlas-intg >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2064-compressed-hook-message-envelope.patch > > > *Background* > Messages posted by hooks to Atlas Kafka topics are in JSON format. > Kafka imposes a 1MB limit on the message size. > Occasionally, depending on the operations performed, this threshold is reached. > This results in messages being dropped. > The entities are thus not reflected in Atlas. > *Solution* > Applying compression to these messages will alleviate the problem. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
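The effect of the proposed compression is easy to demonstrate; a Python sketch with a synthetic (made-up) batch of entity JSON standing in for a hook message:

```python
import gzip
import json

# Synthetic hook message: a large batch of repetitive entity JSON,
# the kind of payload that can exceed Kafka's 1MB default message limit.
entities = [
    {"typeName": "hive_column",
     "attributes": {"name": f"col_{i}",
                    "qualifiedName": f"db.tbl.col_{i}@cluster"}}
    for i in range(5000)
]
raw = json.dumps(entities).encode("utf-8")
compressed = gzip.compress(raw)

# Repetitive JSON compresses very well, and decompression is lossless.
assert len(compressed) < len(raw)
print(f"raw={len(raw)}B gzip={len(compressed)}B "
      f"ratio={len(compressed)/len(raw):.2f}")
```

Since JSON entity batches repeat the same keys and prefixes thousands of times, general-purpose compression shrinks them dramatically; the trade-off is that both producer (hook) and consumer must agree on the message envelope, which is what the attached patch's "compressed hook message envelope" name suggests.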
[jira] [Assigned] (ATLAS-2132) Error code during invalid file path/unreadable file provided during import
[ https://issues.apache.org/jira/browse/ATLAS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry reassigned ATLAS-2132: -- Assignee: Ashutosh Mestry > Error code during invalid file path/unreadable file provided during import > -- > > Key: ATLAS-2132 > URL: https://issues.apache.org/jira/browse/ATLAS-2132 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Assignee: Ashutosh Mestry >Priority: Minor > > When firing an import command using > {code} > /api/atlas/admin/importfile > {code} > and the file provided in the import_options.json doesn't have read permission or > is not present, Atlas throws a 500 Internal Server Error, though with a proper > error message. > Example: > {code} > {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: > invalid parameters: /exportimport/db5.zip: file not found"} > {code} > {code} > {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: > java.io.IOException: File '/exportimport/db6.zip' cannot be read"} > {code} > Expected that Atlas would throw 400 Bad Request instead of 500 Internal > Server Error. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
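The expected behavior amounts to validating the caller-supplied path up front and mapping validation failures to a client error. A hypothetical sketch of that mapping (the function and exception names are illustrative, not Atlas's actual code):

```python
import os

class BadRequest(Exception):
    """Maps to HTTP 400: the client supplied an unusable file path."""

def validate_import_file(path):
    # Failures caused by the caller's input should surface as 400 Bad
    # Request, not as a generic 500 Internal Server Error.
    if not os.path.isfile(path):
        raise BadRequest(f"invalid parameters: {path}: file not found")
    if not os.access(path, os.R_OK):
        raise BadRequest(f"File '{path}' cannot be read")
    return path

try:
    validate_import_file("/exportimport/db5.zip")  # path from the bug report
except BadRequest as e:
    print(f"400 Bad Request: {e}")
```

Doing the check before entering the import pipeline also keeps the existing error messages intact; only the status code classification changes.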
[jira] [Updated] (ATLAS-2129) Atlas shutdown during progress of bulk import throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/ATLAS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2129: --- Attachment: ATLAS-2129-bulkImport-UnitTest-Fix.patch > Atlas shutdown during progress of bulk import throws > ConcurrentModificationException > > > Key: ATLAS-2129 > URL: https://issues.apache.org/jira/browse/ATLAS-2129 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Assignee: Ashutosh Mestry >Priority: Blocker > Fix For: 0.9-incubating, 0.8.2 > > Attachments: ATLAS-2129-bulkImport-Refactor.patch, > ATLAS-2129-bulkImport-UnitTest-Fix.patch > > > 1. Exported an hive_db containing 300 hive_table entities from cluster1 into > a zip file . > 2. Tried to import into cluster2. > 3. When the import was in progress ( at 34%) , stopped Atlas. > 4. Following exception was seen in application logs of cluster2 : > {code} > 2017-09-11 10:17:09,192 INFO - [pool-2-thread-9 - > 83fe24a2-ff2b-4add-a94e-a54b09090912:] ~ bulkImport(): progress: 34% (of 301) > - > entity:last-imported:hive_table:[102]:(2d629a8e-5c94-40e8-b83f-8a9c91c6de8d) > (AtlasEntityStoreV1:238) > 2017-09-11 10:17:09,340 ERROR - [SIGTERM handler:] ~ Could not commit > transaction [1] due to exception (StandardTitanGraph:731) > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901) > at java.util.ArrayList$Itr.next(ArrayList.java:851) > at > atlas.shaded.titan.guava.common.collect.Iterators$7.computeNext(Iterators.java:701) > at > atlas.shaded.titan.guava.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > atlas.shaded.titan.guava.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.prepareCommit(StandardTitanGraph.java:473) > at > 
com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.commit(StandardTitanGraph.java:654) > at > com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.commit(StandardTitanTx.java:1337) > at > com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsGraph.shutdown(TitanBlueprintsGraph.java:120) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.shutdownInternal(StandardTitanGraph.java:171) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.access$700(StandardTitanGraph.java:75) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph$ShutdownThread.start(StandardTitanGraph.java:756) > at > java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:102) > at > java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46) > at java.lang.Shutdown.runHooks(Shutdown.java:123) > at java.lang.Shutdown.sequence(Shutdown.java:167) > at java.lang.Shutdown.exit(Shutdown.java:212) > at java.lang.Terminator$1.handle(Terminator.java:52) > at sun.misc.Signal$1.run(Signal.java:212) > at java.lang.Thread.run(Thread.java:748) > {code} > Other operations that are called during Atlas shut down such as Shutdown hook > , Stopping KafkaNotification service , NotificationHookConsumer , > HBaseBasedAuditRepository are not called. > 5.After that , restarted Atlas. Atlas functioned properly. > 6.Resumed import with import option , startGuid= %. Atlas was stopped when Import was going on at 34% > > 7. Import completed successfully. > 8.Post import, only entities from 33% to 100% were imported . Initial 32% of > the entities were not imported. > 9.Fired an import command again without any interruption and without > specifying the startGUID. All 300 hive_table entities and 1 hive_db were > imported successfully. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2090) UI - Cache busting for static content (css, js)
[ https://issues.apache.org/jira/browse/ATLAS-2090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168095#comment-16168095 ] Ashutosh Mestry commented on ATLAS-2090: [~kevalbhatt] This is a valuable addition. > UI - Cache busting for static content (css, js) > --- > > Key: ATLAS-2090 > URL: https://issues.apache.org/jira/browse/ATLAS-2090 > Project: Atlas > Issue Type: Bug >Affects Versions: 0.9-incubating >Reporter: Nixon Rodrigues >Assignee: Keval Bhatt > Fix For: 0.9-incubating, 0.8.2 > > Attachments: ATLAS-2090.1.patch, ATLAS-2090.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ATLAS-2129) Atlas shutdown during progress of bulk import throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/ATLAS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry reassigned ATLAS-2129: -- Assignee: Ashutosh Mestry > Atlas shutdown during progress of bulk import throws > ConcurrentModificationException > > > Key: ATLAS-2129 > URL: https://issues.apache.org/jira/browse/ATLAS-2129 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Assignee: Ashutosh Mestry >Priority: Blocker > > 1. Exported an hive_db containing 300 hive_table entities from cluster1 into > a zip file . > 2. Tried to import into cluster2. > 3. When the import was in progress ( at 34%) , stopped Atlas. > 4. Following exception was seen in application logs of cluster2 : > {code} > 2017-09-11 10:17:09,192 INFO - [pool-2-thread-9 - > 83fe24a2-ff2b-4add-a94e-a54b09090912:] ~ bulkImport(): progress: 34% (of 301) > - > entity:last-imported:hive_table:[102]:(2d629a8e-5c94-40e8-b83f-8a9c91c6de8d) > (AtlasEntityStoreV1:238) > 2017-09-11 10:17:09,340 ERROR - [SIGTERM handler:] ~ Could not commit > transaction [1] due to exception (StandardTitanGraph:731) > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901) > at java.util.ArrayList$Itr.next(ArrayList.java:851) > at > atlas.shaded.titan.guava.common.collect.Iterators$7.computeNext(Iterators.java:701) > at > atlas.shaded.titan.guava.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > atlas.shaded.titan.guava.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.prepareCommit(StandardTitanGraph.java:473) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.commit(StandardTitanGraph.java:654) > at > com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.commit(StandardTitanTx.java:1337) > at > 
com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsGraph.shutdown(TitanBlueprintsGraph.java:120) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.shutdownInternal(StandardTitanGraph.java:171) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.access$700(StandardTitanGraph.java:75) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph$ShutdownThread.start(StandardTitanGraph.java:756) > at > java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:102) > at > java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46) > at java.lang.Shutdown.runHooks(Shutdown.java:123) > at java.lang.Shutdown.sequence(Shutdown.java:167) > at java.lang.Shutdown.exit(Shutdown.java:212) > at java.lang.Terminator$1.handle(Terminator.java:52) > at sun.misc.Signal$1.run(Signal.java:212) > at java.lang.Thread.run(Thread.java:748) > {code} > Other operations that are called during Atlas shut down such as Shutdown hook > , Stopping KafkaNotification service , NotificationHookConsumer , > HBaseBasedAuditRepository are not called. > 5.After that , restarted Atlas. Atlas functioned properly. > 6.Resumed import with import option , startGuid= %. Atlas was stopped when Import was going on at 34% > > 7. Import completed successfully. > 8.Post import, only entities from 33% to 100% were imported . Initial 32% of > the entities were not imported. > 9.Fired an import command again without any interruption and without > specifying the startGUID. All 300 hive_table entities and 1 hive_db were > imported successfully. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2132) Error code during invalid file path/unreadable file provided during import
[ https://issues.apache.org/jira/browse/ATLAS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170218#comment-16170218 ] Ashutosh Mestry commented on ATLAS-2132: [~bpgergo] Can you please open a review request for this? > Error code during invalid file path/unreadable file provided during import > -- > > Key: ATLAS-2132 > URL: https://issues.apache.org/jira/browse/ATLAS-2132 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Assignee: Péter Gergő Barna >Priority: Minor > Attachments: ATLAS-2132.patch > > > When firing an import command using > {code} > /api/atlas/admin/importfile > {code} > and the file provided in the import_options.json doesn't have read permission or > is not present, Atlas throws a 500 Internal Server Error, though with a proper > error message. > Example: > {code} > {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: > invalid parameters: /exportimport/db5.zip: file not found"} > {code} > {code} > {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: > java.io.IOException: File '/exportimport/db6.zip' cannot be read"} > {code} > Expected that Atlas would throw 400 Bad Request instead of 500 Internal > Server Error. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2145) Errors during build
[ https://issues.apache.org/jira/browse/ATLAS-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170147#comment-16170147 ] Ashutosh Mestry commented on ATLAS-2145: [~bpgergo] Thanks for addressing this! +1 for the patch. > Errors during build > --- > > Key: ATLAS-2145 > URL: https://issues.apache.org/jira/browse/ATLAS-2145 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Affects Versions: 0.8-incubating, 0.9-incubating >Reporter: Péter Gergő Barna >Assignee: Péter Gergő Barna > Attachments: ATLAS-2145.patch > > > Compilation error while building atlas-intg > atlas-intg error: > {noformat} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 8.358 s > [INFO] Finished at: 2017-09-18T14:38:00+02:00 > [INFO] Final Memory: 70M/888M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project atlas-intg: Compilation failure: Compilation failure: > [ERROR] > /Users/pbarna/Desktop/apache-source/atlas/intg/src/main/java/org/apache/atlas/model/SearchFilter.java:[20,32] > package com.sun.jersey.core.util does not exist > {noformat} > dashboardv2 npm error: > {noformat} > [ERROR] npm ERR! path > /Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js > [ERROR] npm ERR! code ENOENT > [ERROR] npm ERR! errno -2 > [ERROR] npm ERR! syscall chmod > [ERROR] > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent This is most likely not a problem with npm itself > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2146) Remove Powermock Usage
Ashutosh Mestry created ATLAS-2146: -- Summary: Remove Powermock Usage Key: ATLAS-2146 URL: https://issues.apache.org/jira/browse/ATLAS-2146 Project: Atlas Issue Type: Improvement Reporter: Ashutosh Mestry Assignee: Ashutosh Mestry Priority: Minor *Background* Some time back, the _Powermock_ library was added with the intention of being able to test private methods in unit tests. The shortcoming of this is that it does not do so in a type-safe way. *Solution* The existing _TestNG_ framework supports the _VisibleForTesting_ annotation, which allows for a type-safe way to address this. The shortcoming here is that _VisibleForTesting_ works only within a given package. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (ATLAS-2145) Errors during build
[ https://issues.apache.org/jira/browse/ATLAS-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170352#comment-16170352 ] Ashutosh Mestry edited comment on ATLAS-2145 at 9/18/17 5:37 PM: - The 2 POM file changes don't seem necessary. In _intg_: The _jersey.core_ dependency seems to be getting resolved from the parent. In _client_: What is the purpose of _hadoop.cli_ dependency? was (Author: ashutoshm): The 2 POM file changes don't seem necessary. > Errors during build > --- > > Key: ATLAS-2145 > URL: https://issues.apache.org/jira/browse/ATLAS-2145 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Affects Versions: 0.8-incubating, 0.9-incubating >Reporter: Péter Gergő Barna >Assignee: Péter Gergő Barna > Attachments: ATLAS-2141-2.patch, ATLAS-2145.patch > > > Compilation error while building atlas-intg > atlas-intg error: > {noformat} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 8.358 s > [INFO] Finished at: 2017-09-18T14:38:00+02:00 > [INFO] Final Memory: 70M/888M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project atlas-intg: Compilation failure: Compilation failure: > [ERROR] > /Users/pbarna/Desktop/apache-source/atlas/intg/src/main/java/org/apache/atlas/model/SearchFilter.java:[20,32] > package com.sun.jersey.core.util does not exist > {noformat} > dashboardv2 npm error: > {noformat} > [ERROR] npm ERR! path > /Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js > [ERROR] npm ERR! code ENOENT > [ERROR] npm ERR! errno -2 > [ERROR] npm ERR! syscall chmod > [ERROR] > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! 
enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent This is most likely not a problem with npm itself > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2145) Errors during build
[ https://issues.apache.org/jira/browse/ATLAS-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170315#comment-16170315 ] Ashutosh Mestry commented on ATLAS-2145: [~sarath.ku...@gmail.com] Thanks for your help! [~bpgergo] Commits: * Master: https://github.com/apache/atlas/commit/dc358fab2dc5fcfefd8aacb5252a671be316c328 * Branch 0.8: https://github.com/apache/atlas/commit/bad66bc665f9433ffedc7c3395b034b4e85a86d0 > Errors during build > --- > > Key: ATLAS-2145 > URL: https://issues.apache.org/jira/browse/ATLAS-2145 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Affects Versions: 0.8-incubating, 0.9-incubating >Reporter: Péter Gergő Barna >Assignee: Péter Gergő Barna > Attachments: ATLAS-2145.patch > > > Compilation error while building atlas-intg > atlas-intg error: > {noformat} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 8.358 s > [INFO] Finished at: 2017-09-18T14:38:00+02:00 > [INFO] Final Memory: 70M/888M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project atlas-intg: Compilation failure: Compilation failure: > [ERROR] > /Users/pbarna/Desktop/apache-source/atlas/intg/src/main/java/org/apache/atlas/model/SearchFilter.java:[20,32] > package com.sun.jersey.core.util does not exist > {noformat} > dashboardv2 npm error: > {noformat} > [ERROR] npm ERR! path > /Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js > [ERROR] npm ERR! code ENOENT > [ERROR] npm ERR! errno -2 > [ERROR] npm ERR! syscall chmod > [ERROR] > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! 
enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent This is most likely not a problem with npm itself > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2145) Errors during build
[ https://issues.apache.org/jira/browse/ATLAS-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170352#comment-16170352 ] Ashutosh Mestry commented on ATLAS-2145: The 2 POM file changes don't seem necessary. > Errors during build > --- > > Key: ATLAS-2145 > URL: https://issues.apache.org/jira/browse/ATLAS-2145 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Affects Versions: 0.8-incubating, 0.9-incubating >Reporter: Péter Gergő Barna >Assignee: Péter Gergő Barna > Attachments: ATLAS-2141-2.patch, ATLAS-2145.patch > > > Compilation error while building atlas-intg > atlas-intg error: > {noformat} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 8.358 s > [INFO] Finished at: 2017-09-18T14:38:00+02:00 > [INFO] Final Memory: 70M/888M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project atlas-intg: Compilation failure: Compilation failure: > [ERROR] > /Users/pbarna/Desktop/apache-source/atlas/intg/src/main/java/org/apache/atlas/model/SearchFilter.java:[20,32] > package com.sun.jersey.core.util does not exist > {noformat} > dashboardv2 npm error: > {noformat} > [ERROR] npm ERR! path > /Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js > [ERROR] npm ERR! code ENOENT > [ERROR] npm ERR! errno -2 > [ERROR] npm ERR! syscall chmod > [ERROR] > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent This is most likely not a problem with npm itself > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2145) Errors during build
[ https://issues.apache.org/jira/browse/ATLAS-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2145: --- Attachment: ATLAS-2141-2.patch > Errors during build > --- > > Key: ATLAS-2145 > URL: https://issues.apache.org/jira/browse/ATLAS-2145 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Affects Versions: 0.8-incubating, 0.9-incubating >Reporter: Péter Gergő Barna >Assignee: Péter Gergő Barna > Attachments: ATLAS-2141-2.patch, ATLAS-2145.patch > > > Compilation error while building atlas-intg > atlas-intg error: > {noformat} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 8.358 s > [INFO] Finished at: 2017-09-18T14:38:00+02:00 > [INFO] Final Memory: 70M/888M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project atlas-intg: Compilation failure: Compilation failure: > [ERROR] > /Users/pbarna/Desktop/apache-source/atlas/intg/src/main/java/org/apache/atlas/model/SearchFilter.java:[20,32] > package com.sun.jersey.core.util does not exist > {noformat} > dashboardv2 npm error: > {noformat} > [ERROR] npm ERR! path > /Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js > [ERROR] npm ERR! code ENOENT > [ERROR] npm ERR! errno -2 > [ERROR] npm ERR! syscall chmod > [ERROR] > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent This is most likely not a problem with npm itself > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2146) Remove Powermock Usage
[ https://issues.apache.org/jira/browse/ATLAS-2146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2146: --- Attachment: ATLAS-2146.patch > Remove Powermock Usage > -- > > Key: ATLAS-2146 > URL: https://issues.apache.org/jira/browse/ATLAS-2146 > Project: Atlas > Issue Type: Improvement >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry >Priority: Minor > Attachments: ATLAS-2146.patch > > > *Background* > Some time back the _Powermock_ library was added with the intention of being > able to test private methods in unit tests. The shortcoming of this is that it does > not do so in a type-safe way. > *Solution* > The existing _TestNG_ framework supports the _VisibleForTesting_ annotation, which > allows for a type-safe way to address this. > The limitation of _VisibleForTesting_ is that it is supported only within a > given package. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
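The type-safe alternative described in ATLAS-2146 can be sketched as follows. The class and method names are hypothetical (not Atlas's actual code), and the annotation is defined locally for the sketch rather than taken from a library:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Locally defined marker annotation standing in for a @VisibleForTesting
// annotation: it documents that the member is package-private only so that
// tests in the same package can call it directly.
@Retention(RetentionPolicy.SOURCE)
@interface VisibleForTesting { }

class ImportSizeEstimator {
    // Package-private, so a test class in the same package can invoke it
    // type-safely -- no reflection, no Powermock.
    @VisibleForTesting
    static int chunkCount(int entityCount, int chunkSize) {
        return (entityCount + chunkSize - 1) / chunkSize;
    }
}
```

A test in the same package simply calls `ImportSizeEstimator.chunkCount(...)` directly, which the compiler checks; the package-scope restriction mentioned above is the trade-off.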
[jira] [Updated] (ATLAS-2145) Errors during build
[ https://issues.apache.org/jira/browse/ATLAS-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2145: --- Attachment: ATLAS-2145-2.patch > Errors during build > --- > > Key: ATLAS-2145 > URL: https://issues.apache.org/jira/browse/ATLAS-2145 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Affects Versions: 0.8-incubating, 0.9-incubating >Reporter: Péter Gergő Barna >Assignee: Péter Gergő Barna > Attachments: ATLAS-2145-2.patch, ATLAS-2145.patch > > > Compilation error while building atlas-intg > atlas-intg error: > {noformat} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 8.358 s > [INFO] Finished at: 2017-09-18T14:38:00+02:00 > [INFO] Final Memory: 70M/888M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project atlas-intg: Compilation failure: Compilation failure: > [ERROR] > /Users/pbarna/Desktop/apache-source/atlas/intg/src/main/java/org/apache/atlas/model/SearchFilter.java:[20,32] > package com.sun.jersey.core.util does not exist > {noformat} > dashboardv2 npm error: > {noformat} > [ERROR] npm ERR! path > /Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js > [ERROR] npm ERR! code ENOENT > [ERROR] npm ERR! errno -2 > [ERROR] npm ERR! syscall chmod > [ERROR] > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent This is most likely not a problem with npm itself > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2145) Errors during build
[ https://issues.apache.org/jira/browse/ATLAS-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2145: --- Attachment: (was: ATLAS-2141-2.patch) > Errors during build > --- > > Key: ATLAS-2145 > URL: https://issues.apache.org/jira/browse/ATLAS-2145 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Affects Versions: 0.8-incubating, 0.9-incubating >Reporter: Péter Gergő Barna >Assignee: Péter Gergő Barna > Attachments: ATLAS-2145-2.patch, ATLAS-2145.patch > > > Compilation error while building atlas-intg > atlas-intg error: > {noformat} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 8.358 s > [INFO] Finished at: 2017-09-18T14:38:00+02:00 > [INFO] Final Memory: 70M/888M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project atlas-intg: Compilation failure: Compilation failure: > [ERROR] > /Users/pbarna/Desktop/apache-source/atlas/intg/src/main/java/org/apache/atlas/model/SearchFilter.java:[20,32] > package com.sun.jersey.core.util does not exist > {noformat} > dashboardv2 npm error: > {noformat} > [ERROR] npm ERR! path > /Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js > [ERROR] npm ERR! code ENOENT > [ERROR] npm ERR! errno -2 > [ERROR] npm ERR! syscall chmod > [ERROR] > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent ENOENT: no such file or directory, chmod > '/Users/pbarna/Desktop/apache-source/atlas/dashboardv2/target/node_modules/js-beautify/js/bin/css-beautify.js' > [ERROR] npm ERR! enoent This is most likely not a problem with npm itself > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2132) Error code during invalid file path/unreadable file provided during import
[ https://issues.apache.org/jira/browse/ATLAS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2132: --- Attachment: ATLAS-2132-2.patch > Error code during invalid file path/unreadable file provided during import > -- > > Key: ATLAS-2132 > URL: https://issues.apache.org/jira/browse/ATLAS-2132 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Assignee: Péter Gergő Barna >Priority: Minor > Attachments: ATLAS-2132-2.patch, ATLAS-2132.patch > > > When firing an import command using > {code} > /api/atlas/admin/importfile > {code} > and the file provided in import_options.json doesn't have read permission or > is not present, Atlas throws a 500 Internal Server Error, though with a proper > error message. > Example : > {code} > {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: > invalid parameters: /exportimport/db5.zip: file not found"} > {code} > {code} > {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: > java.io.IOException: File '/exportimport/db6.zip' cannot be read"} > {code} > Atlas is expected to throw a 400 Bad Request instead of a 500 Internal > Server Error. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2132) Error code during invalid file path/unreadable file provided during import
[ https://issues.apache.org/jira/browse/ATLAS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170851#comment-16170851 ] Ashutosh Mestry commented on ATLAS-2132: I would prefer the other patch (attached). > Error code during invalid file path/unreadable file provided during import > -- > > Key: ATLAS-2132 > URL: https://issues.apache.org/jira/browse/ATLAS-2132 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Assignee: Péter Gergő Barna >Priority: Minor > Attachments: ATLAS-2132-2.patch, ATLAS-2132.patch > > > When firing an import command using > {code} > /api/atlas/admin/importfile > {code} > and the file provided in import_options.json doesn't have read permission or > is not present, Atlas throws a 500 Internal Server Error, though with a proper > error message. > Example : > {code} > {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: > invalid parameters: /exportimport/db5.zip: file not found"} > {code} > {code} > {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: > java.io.IOException: File '/exportimport/db6.zip' cannot be read"} > {code} > Atlas is expected to throw a 400 Bad Request instead of a 500 Internal > Server Error. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
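The fix requested in ATLAS-2132 amounts to classifying a bad input file as a client error before the import pipeline runs. A minimal sketch of that idea follows; the class and method names are assumptions for illustration, not Atlas's actual implementation:

```java
import java.io.File;

// Hedged sketch (not Atlas's actual code): validate the import file up front
// and map a missing or unreadable file to 400 Bad Request (a client error)
// instead of letting the failure surface as 500 Internal Server Error.
class ImportFileValidator {
    static int statusFor(String path) {
        File f = new File(path);
        if (!f.exists()) {
            return 400; // Bad Request: file not found
        }
        if (!f.canRead()) {
            return 400; // Bad Request: file cannot be read
        }
        return 200; // OK: proceed with the import
    }
}
```

The key design point is that the check runs before any server-side processing, so the error code correctly attributes the fault to the request rather than the server.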
[jira] [Updated] (ATLAS-2148) Saved Search: Introduce BASIC and ADVANCE Search Types During Save
[ https://issues.apache.org/jira/browse/ATLAS-2148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2148: --- Description: *Background* Existing implementation of _SavedSearch_ does not have a way to differentiate saving of basic and advanced searches. Basic searches are the ones where the user constructs the query using the new search UI. Advanced queries are where the user types a DSL query into the search box. Having this notion should allow for better differentiation of the 2 types of queries. If the type is used in specifying the unique attribute, a user could create queries with the same name. *Implementation* Additional attribute added to _AtlasSavedSearch_. REST API updated to include the query type parameter. was: *Background* Existing implementation of _SavedSearch_ does not have a way to differentiate saving of basic and advanced searches. Basic searches are the ones where user constructs the query using the new search UI. Advanced queries are where the user types a DSL query into the search box. Having this notion should allow for better differentiation of the 2 types of queries. If the type is used in specifying unique attribute, user could create query with same name. *Implementation* Additional attribute added to __AtlasSavedSearch. REST API updated to include the query type parameter. > Saved Search: Introduce BASIC and ADVANCE Search Types During Save > -- > > Key: ATLAS-2148 > URL: https://issues.apache.org/jira/browse/ATLAS-2148 > Project: Atlas > Issue Type: Improvement > Components: atlas-core, atlas-webui >Affects Versions: trunk >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry >Priority: Minor > > *Background* > Existing implementation of _SavedSearch_ does not have a way to differentiate > saving of basic and advanced searches. > Basic searches are the ones where user constructs the query using the new > search UI. > Advanced queries are where the user types a DSL query into the search box. 
> Having this notion should allow for better differentiation of the 2 types of > queries. If the type is used in specifying the unique attribute, a user could > create queries with the same name. > *Implementation* > Additional attribute added to _AtlasSavedSearch_. REST API updated to > include the query type parameter. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2148) Saved Search: Introduce BASIC and ADVANCE Search Types During Save
Ashutosh Mestry created ATLAS-2148: -- Summary: Saved Search: Introduce BASIC and ADVANCE Search Types During Save Key: ATLAS-2148 URL: https://issues.apache.org/jira/browse/ATLAS-2148 Project: Atlas Issue Type: Improvement Components: atlas-core, atlas-webui Affects Versions: trunk Reporter: Ashutosh Mestry Assignee: Ashutosh Mestry Priority: Minor *Background* Existing implementation of _SavedSearch_ does not have a way to differentiate saving of basic and advanced searches. Basic searches are the ones where the user constructs the query using the new search UI. Advanced queries are where the user types a DSL query into the search box. Having this notion should allow for better differentiation of the 2 types of queries. If the type is used in specifying the unique attribute, a user could create queries with the same name. *Implementation* Additional attribute added to _AtlasSavedSearch_. REST API updated to include the query type parameter. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
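The uniqueness behavior proposed in ATLAS-2148 — the search type participating in the unique key, so the same name can exist once per type — can be sketched like this. The enum values come from the issue; the class and method names are assumptions, not the actual Atlas model:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: the unique key combines name and search type,
// so "hiveTables" may be saved once as a BASIC search and once as an
// ADVANCED (DSL) search without colliding.
class SavedSearchStore {
    enum SavedSearchType { BASIC, ADVANCED }

    private final Set<String> uniqueKeys = new HashSet<>();

    // Returns true if saved; false if a search with the same name and
    // type already exists.
    boolean save(String name, SavedSearchType type) {
        return uniqueKeys.add(name + ":" + type);
    }
}
```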
[jira] [Updated] (ATLAS-2160) Saved Search: Introduce REST API to Execute Saved Search
[ https://issues.apache.org/jira/browse/ATLAS-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2160: --- Attachment: ATLAS-2160-SearchWithSavedSearch.patch > Saved Search: Introduce REST API to Execute Saved Search > > > Key: ATLAS-2160 > URL: https://issues.apache.org/jira/browse/ATLAS-2160 > Project: Atlas > Issue Type: Improvement > Components: atlas-core >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry >Priority: Minor > Fix For: trunk > > Attachments: ATLAS-2160-SearchWithSavedSearch.patch > > > The existing APIs for Saved Search provide rich functionality for management > of saved searches. > It would be a worthwhile addition to have an API that takes a saved search name, > executes the corresponding search, and returns the search results. > Inputs should include: > * Saved search using name. > * Saved search using guid. > Should support BASIC and ADVANCED search. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2160) Saved Search: Introduce REST API to Execute Saved Search
Ashutosh Mestry created ATLAS-2160: -- Summary: Saved Search: Introduce REST API to Execute Saved Search Key: ATLAS-2160 URL: https://issues.apache.org/jira/browse/ATLAS-2160 Project: Atlas Issue Type: Improvement Components: atlas-core Affects Versions: 0.8-incubating Reporter: Ashutosh Mestry Assignee: Ashutosh Mestry Priority: Minor Fix For: trunk The existing APIs for Saved Search provide rich functionality for management of saved searches. It would be a worthwhile addition to have an API that takes a saved search name, executes the corresponding search, and returns the search results. Inputs should include: * Saved search using name. * Saved search using guid. Should support BASIC and ADVANCED search. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
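The proposed API surface — resolve a saved search by either name or guid, then run it according to its stored type — can be sketched as below. All class, method, and field names here are hypothetical, not the actual Atlas REST implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical dispatcher: a saved search can be looked up by name or by
// guid, then executed according to its saved type (BASIC or ADVANCED).
class SavedSearchExecutor {
    static class SavedSearch {
        final String guid, name, type, query;
        SavedSearch(String guid, String name, String type, String query) {
            this.guid = guid; this.name = name; this.type = type; this.query = query;
        }
    }

    private final Map<String, SavedSearch> byName = new HashMap<>();
    private final Map<String, SavedSearch> byGuid = new HashMap<>();

    void register(SavedSearch s) {
        byName.put(s.name, s);
        byGuid.put(s.guid, s);
    }

    // Accepts either identifier; guid takes precedence when both are given.
    String execute(String guid, String name) {
        SavedSearch s = (guid != null) ? byGuid.get(guid) : byName.get(name);
        if (s == null) {
            throw new IllegalArgumentException("saved search not found");
        }
        // Stand-in for dispatching to the BASIC (structured) or
        // ADVANCED (DSL) search engine with the saved query.
        return s.type + ":" + s.query;
    }
}
```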
[jira] [Updated] (ATLAS-2158) ZipFileResourceTestUtils: Need Improvement to Handle Case for Failure in Loading
[ https://issues.apache.org/jira/browse/ATLAS-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2158: --- Attachment: ATLAS-2158-Master-ZipFileResourceTestUtils.patch > ZipFileResourceTestUtils: Need Improvement to Handle Case for Failure in > Loading > > > Key: ATLAS-2158 > URL: https://issues.apache.org/jira/browse/ATLAS-2158 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: trunk, 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry >Priority: Trivial > Fix For: trunk > > Attachments: ATLAS-2158-Master-ZipFileResourceTestUtils.patch, > ATLAS-2158-ZipFileResourceTestUtils.patch > > > _ZipFileResourceTestUtils_ should handle case where model file loading fails. > A NULL check and/or exception thrown should be sufficient. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2134) Code Improvement To Follow Best Practices
Ashutosh Mestry created ATLAS-2134: -- Summary: Code Improvement To Follow Best Practices Key: ATLAS-2134 URL: https://issues.apache.org/jira/browse/ATLAS-2134 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.8-incubating Reporter: Ashutosh Mestry Assignee: Ashutosh Mestry Fix For: trunk Code Improvement To Follow Best Practices -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2134) Code Improvement To Follow Best Practices
[ https://issues.apache.org/jira/browse/ATLAS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2134: --- Attachment: ATLAS-2134-CodeImprovements.patch > Code Improvement To Follow Best Practices > - > > Key: ATLAS-2134 > URL: https://issues.apache.org/jira/browse/ATLAS-2134 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2134-CodeImprovements.patch > > > Code Improvement To Follow Best Practices -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2134) Code Improvement To Follow Best Practices
[ https://issues.apache.org/jira/browse/ATLAS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2134: --- Attachment: (was: ATLAS-2134-ZipSource-printStack.patch) > Code Improvement To Follow Best Practices > - > > Key: ATLAS-2134 > URL: https://issues.apache.org/jira/browse/ATLAS-2134 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2134-CodeImprovements.patch > > > Code Improvement To Follow Best Practices -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2134) Code Improvement To Follow Best Practices
[ https://issues.apache.org/jira/browse/ATLAS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2134: --- Attachment: ATLAS-2134-ZipSource-printStack.patch > Code Improvement To Follow Best Practices > - > > Key: ATLAS-2134 > URL: https://issues.apache.org/jira/browse/ATLAS-2134 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2134-ZipSource-printStack.patch > > > Code Improvement To Follow Best Practices -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2141) Regression : Disassociating tag from Entity and Editing tag attributes associated to an entity throw NPE
[ https://issues.apache.org/jira/browse/ATLAS-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169382#comment-16169382 ] Ashutosh Mestry commented on ATLAS-2141: [~madhan.neethiraj] Thanks for taking care of this! +1 for the patch. > Regression : Disassociating tag from Entity and Editing tag attributes > associated to an entity throw NPE > > > Key: ATLAS-2141 > URL: https://issues.apache.org/jira/browse/ATLAS-2141 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8.2 >Reporter: Sharmadha Sainath >Assignee: Madhan Neethiraj > Fix For: 0.8.2 > > Attachments: ATLAS-2141.patch, NPE_during_tag_delete.txt, > NPE_during_tag_edit.txt > > > Editing tag attribute value associated to an entity throws 500 Internal > server error with NPE and with Error Notification "Tag could not be > added". > Disassociating a tag from an entity also throws 500 Internal server error > with NPE , and with Error Notification "Tag could not be deleted". > Tag addition works fine. > This is a regression caused after > [ATLAS-2100|https://issues.apache.org/jira/browse/ATLAS-2100]. > Attached the exception stack trace seen for both cases. > CC :[~ashutoshm] [~madhan.neethiraj] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2134) Code Improvement To Follow Best Practices
[ https://issues.apache.org/jira/browse/ATLAS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2134: --- Attachment: (was: ATLAS-2134-CodeImprovements.patch) > Code Improvement To Follow Best Practices > - > > Key: ATLAS-2134 > URL: https://issues.apache.org/jira/browse/ATLAS-2134 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > > Code Improvement To Follow Best Practices -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2134) Code Improvement To Follow Best Practices
[ https://issues.apache.org/jira/browse/ATLAS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2134: --- Attachment: (was: ATLAS-2134-CodeImprovements.patch) > Code Improvement To Follow Best Practices > - > > Key: ATLAS-2134 > URL: https://issues.apache.org/jira/browse/ATLAS-2134 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > > Code Improvement To Follow Best Practices -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2134) Code Improvement To Follow Best Practices
[ https://issues.apache.org/jira/browse/ATLAS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2134: --- Attachment: ATLAS-2134-CodeImprovements.patch > Code Improvement To Follow Best Practices > - > > Key: ATLAS-2134 > URL: https://issues.apache.org/jira/browse/ATLAS-2134 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2134-CodeImprovements.patch > > > Code Improvement To Follow Best Practices -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2129) Atlas shutdown during progress of bulk import throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/ATLAS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2129: --- Attachment: ATLAS-2129-bulkImport-Refactor.patch > Atlas shutdown during progress of bulk import throws > ConcurrentModificationException > > > Key: ATLAS-2129 > URL: https://issues.apache.org/jira/browse/ATLAS-2129 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Assignee: Ashutosh Mestry >Priority: Blocker > Attachments: ATLAS-2129-bulkImport-Refactor.patch > > > 1. Exported an hive_db containing 300 hive_table entities from cluster1 into > a zip file . > 2. Tried to import into cluster2. > 3. When the import was in progress ( at 34%) , stopped Atlas. > 4. Following exception was seen in application logs of cluster2 : > {code} > 2017-09-11 10:17:09,192 INFO - [pool-2-thread-9 - > 83fe24a2-ff2b-4add-a94e-a54b09090912:] ~ bulkImport(): progress: 34% (of 301) > - > entity:last-imported:hive_table:[102]:(2d629a8e-5c94-40e8-b83f-8a9c91c6de8d) > (AtlasEntityStoreV1:238) > 2017-09-11 10:17:09,340 ERROR - [SIGTERM handler:] ~ Could not commit > transaction [1] due to exception (StandardTitanGraph:731) > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901) > at java.util.ArrayList$Itr.next(ArrayList.java:851) > at > atlas.shaded.titan.guava.common.collect.Iterators$7.computeNext(Iterators.java:701) > at > atlas.shaded.titan.guava.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > atlas.shaded.titan.guava.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.prepareCommit(StandardTitanGraph.java:473) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.commit(StandardTitanGraph.java:654) > at > 
com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.commit(StandardTitanTx.java:1337) > at > com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsGraph.shutdown(TitanBlueprintsGraph.java:120) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.shutdownInternal(StandardTitanGraph.java:171) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.access$700(StandardTitanGraph.java:75) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph$ShutdownThread.start(StandardTitanGraph.java:756) > at > java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:102) > at > java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46) > at java.lang.Shutdown.runHooks(Shutdown.java:123) > at java.lang.Shutdown.sequence(Shutdown.java:167) > at java.lang.Shutdown.exit(Shutdown.java:212) > at java.lang.Terminator$1.handle(Terminator.java:52) > at sun.misc.Signal$1.run(Signal.java:212) > at java.lang.Thread.run(Thread.java:748) > {code} > Other operations that are called during Atlas shut down such as Shutdown hook > , Stopping KafkaNotification service , NotificationHookConsumer , > HBaseBasedAuditRepository are not called. > 5.After that , restarted Atlas. Atlas functioned properly. > 6.Resumed import with import option , startGuid= %. Atlas was stopped when Import was going on at 34% > > 7. Import completed successfully. > 8.Post import, only entities from 33% to 100% were imported . Initial 32% of > the entities were not imported. > 9.Fired an import command again without any interruption and without > specifying the startGUID. All 300 hive_table entities and 1 hive_db were > imported successfully. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2158) ZipFileResourceTestUtils: Need Improvement to Handle Case for Failure in Loading
[ https://issues.apache.org/jira/browse/ATLAS-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2158: --- Attachment: ATLAS-2158-ZipFileResourceTestUtils.patch > ZipFileResourceTestUtils: Need Improvement to Handle Case for Failure in > Loading > > > Key: ATLAS-2158 > URL: https://issues.apache.org/jira/browse/ATLAS-2158 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: trunk, 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry >Priority: Trivial > Fix For: trunk > > Attachments: ATLAS-2158-ZipFileResourceTestUtils.patch > > > _ZipFileResourceTestUtils_ should handle case where model file loading fails. > A NULL check and/or exception thrown should be sufficient. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2158) ZipFileResourceTestUtils: Need Improvement to Handle Case for Failure in Loading
Ashutosh Mestry created ATLAS-2158: -- Summary: ZipFileResourceTestUtils: Need Improvement to Handle Case for Failure in Loading Key: ATLAS-2158 URL: https://issues.apache.org/jira/browse/ATLAS-2158 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.8-incubating, trunk Reporter: Ashutosh Mestry Assignee: Ashutosh Mestry Priority: Trivial Fix For: trunk _ZipFileResourceTestUtils_ should handle case where model file loading fails. A NULL check and/or exception thrown should be sufficient. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
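The guard proposed in ATLAS-2158 is a simple fail-fast check when a model file cannot be loaded. A minimal sketch, with class and method names assumed for illustration:

```java
import java.io.InputStream;

// Sketch of the proposed guard: throw a descriptive exception when a model
// file is missing from the classpath, instead of returning null and letting
// the failure surface later in the test.
class ModelFileLoader {
    static InputStream openModel(String resourceName) {
        InputStream is = ModelFileLoader.class.getResourceAsStream(resourceName);
        if (is == null) {
            throw new IllegalStateException("Model file not found: " + resourceName);
        }
        return is;
    }
}
```

The immediate exception pinpoints the missing file by name, which is far easier to diagnose than a NullPointerException deep inside the import test.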
[jira] [Commented] (ATLAS-2186) Atlas not responding for long time after startup
[ https://issues.apache.org/jira/browse/ATLAS-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190258#comment-16190258 ] Ashutosh Mestry commented on ATLAS-2186: Review: https://reviews.apache.org/r/62755/ > Atlas not responding for long time after startup > > > Key: ATLAS-2186 > URL: https://issues.apache.org/jira/browse/ATLAS-2186 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8.2 >Reporter: Ayub Pathan >Priority: Blocker > Fix For: 0.8.2 > > Attachments: atlas.20171003-122531.out > > > Atlas with latest bits is not responding for any query, the request times > out. This behavior can be seen for more than ~15 minutes after atlas startup > and everything will be back to normal after this period. > I suspect one of the threads is looping or hanging which is resulting in this > state. Atlas logs does not show any suspicious information (checked with > debug logs as well). I took a thread dump of Atlas, attaching the dump file > and report below. I strongly suspect, this might be an issue with newer jetty > version introduced recently. > Thread dump report: > http://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTcvMTAvMy8tLWF0bGFzLjIwMTcxMDAzLTEyMjUzMS5vdXQtLTEyLTE2LTQ2 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2186) Atlas not responding for long time after startup
[ https://issues.apache.org/jira/browse/ATLAS-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2186: --- Attachment: ATLAS-2186-Jetty-dowgrade.patch > Atlas not responding for long time after startup > > > Key: ATLAS-2186 > URL: https://issues.apache.org/jira/browse/ATLAS-2186 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8.2 >Reporter: Ayub Pathan >Priority: Blocker > Fix For: 0.8.2 > > Attachments: atlas.20171003-122531.out, > ATLAS-2186-Jetty-dowgrade.patch > > > Atlas with latest bits is not responding for any query, the request times > out. This behavior can be seen for more than ~15 minutes after atlas startup > and everything will be back to normal after this period. > I suspect one of the threads is looping or hanging which is resulting in this > state. Atlas logs does not show any suspicious information (checked with > debug logs as well). I took a thread dump of Atlas, attaching the dump file > and report below. I strongly suspect, this might be an issue with newer jetty > version introduced recently. > Thread dump report: > http://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTcvMTAvMy8tLWF0bGFzLjIwMTcxMDAzLTEyMjUzMS5vdXQtLTEyLTE2LTQ2 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2190) Remove duplicate dependency blocks with different versions, resulting in atlas startup failure with signature mismatch exception
[ https://issues.apache.org/jira/browse/ATLAS-2190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191612#comment-16191612 ] Ashutosh Mestry commented on ATLAS-2190: +1 for the patch. > Remove duplicate dependency blocks with different versions, resulting in > atlas startup failure with signature mismatch exception > > > Key: ATLAS-2190 > URL: https://issues.apache.org/jira/browse/ATLAS-2190 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8.2 >Reporter: Ayub Pathan >Assignee: Ayub Pathan >Priority: Blocker > Fix For: 0.8.2 > > Attachments: ATLAS-2190-branch-0.8.patch > > > In the current pom, below two are duplicate entries resulting in atlas > startup failure > {noformat} > - > -com.sun.jersey > -jersey-servlet > -${jersey.version} > - > - > - > -javax.servlet.jsp > -jsp-api > -2.0 > - > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ATLAS-2186) Atlas not responding for long time after startup
[ https://issues.apache.org/jira/browse/ATLAS-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry reassigned ATLAS-2186: -- Assignee: Ashutosh Mestry > Atlas not responding for long time after startup > > > Key: ATLAS-2186 > URL: https://issues.apache.org/jira/browse/ATLAS-2186 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8.2 >Reporter: Ayub Pathan >Assignee: Ashutosh Mestry >Priority: Blocker > Fix For: 0.8.2 > > Attachments: atlas.20171003-122531.out, > ATLAS-2186-Jetty-dowgrade.patch > > > Atlas with latest bits is not responding for any query, the request times > out. This behavior can be seen for more than ~15 minutes after atlas startup > and everything will be back to normal after this period. > I suspect one of the threads is looping or hanging which is resulting in this > state. Atlas logs does not show any suspicious information (checked with > debug logs as well). I took a thread dump of Atlas, attaching the dump file > and report below. I strongly suspect, this might be an issue with newer jetty > version introduced recently. > Thread dump report: > http://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTcvMTAvMy8tLWF0bGFzLjIwMTcxMDAzLTEyMjUzMS5vdXQtLTEyLTE2LTQ2 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2195) Stale transaction eviction errors observed in atlas application logs
[ https://issues.apache.org/jira/browse/ATLAS-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2195: --- Attachment: ATLAS-2195-2.patch > Stale transaction eviction errors observed in atlas application logs > > > Key: ATLAS-2195 > URL: https://issues.apache.org/jira/browse/ATLAS-2195 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8.2 >Reporter: Ayub Pathan >Assignee: Ashutosh Mestry >Priority: Critical > Fix For: 0.8.2 > > Attachments: ATLAS-2195-2.patch, ATLAS-2195.patch > > > {noformat} > 2017-10-09 20:18:20,728 ERROR - [Thread-371:] ~ Evicted > [258@ac1b1980131887-ctr-e134-1499953498516-209460-01-04-hwx-site1] from > cache but waiting too long for transactions to close. Stale transaction alert > on: [standardtitantx[null], standardtitantx[null]] (ManagementLogger:189) > 2017-10-09 20:18:20,738 ERROR - [Thread-370:] ~ Evicted > [257@ac1b1980131887-ctr-e134-1499953498516-209460-01-04-hwx-site1] from > cache but waiting too long for transactions to close. Stale transaction alert > on: [standardtitantx[null], standardtitantx[null]] (ManagementLogger:189) > 2017-10-09 20:18:20,808 ERROR - [Thread-372:] ~ Evicted > [259@ac1b1980131887-ctr-e134-1499953498516-209460-01-04-hwx-site1] from > cache but waiting too long for transactions to close. Stale transaction alert > on: [standardtitantx[null], standardtitantx[null]] (ManagementLogger:189) > 2017-10-09 20:18:25,658 ERROR - [Thread-373:] ~ Evicted > [260@ac1b1980131887-ctr-e134-1499953498516-209460-01-04-hwx-site1] from > cache but waiting too long for transactions to close. Stale transaction alert > on: [standardtitantx[null], standardtitantx[null]] (ManagementLogger:189) > 2017-10-09 20:18:25,698 ERROR - [Thread-374:] ~ Evicted > [261@ac1b1980131887-ctr-e134-1499953498516-209460-01-04-hwx-site1] from > cache but waiting too long for transactions to close. 
Stale transaction alert > on: [standardtitantx[null], standardtitantx[null]] (ManagementLogger:189) > 2017-10-09 20:18:25,721 ERROR - [Thread-375:] ~ Evicted > [262@ac1b1980131887-ctr-e134-1499953498516-209460-01-04-hwx-site1] from > cache but waiting too long for transactions to close. Stale transaction alert > on: [standardtitantx[null], standardtitantx[null]] (ManagementLogger:189) > 2017-10-09 20:18:25,739 ERROR - [Thread-376:] ~ Evicted > [263@ac1b1980131887-ctr-e134-1499953498516-209460-01-04-hwx-site1] from > cache but waiting too long for transactions to close. Stale transaction alert > on: [standardtitantx[null], standardtitantx[null]] (ManagementLogger:189) > 2017-10-09 20:18:25,778 ERROR - [Thread-377:] ~ Evicted > [264@ac1b1980131887-ctr-e134-1499953498516-209460-01-04-hwx-site1] from > cache but waiting too long for transactions to close. Stale transaction alert > on: [standardtitantx[null], standardtitantx[null]] (ManagementLogger:189) > 2017-10-09 20:18:30,661 ERROR - [Thread-378:] ~ Evicted > [265@ac1b1980131887-ctr-e134-1499953498516-209460-01-04-hwx-site1] from > cache but waiting too long for transactions to close. Stale transaction alert > on: [standardtitantx[null], standardtitantx[null]] (ManagementLogger:189) > 2017-10-09 20:18:30,687 ERROR - [Thread-379:] ~ Evicted > [266@ac1b1980131887-ctr-e134-1499953498516-209460-01-04-hwx-site1] from > cache but waiting too long for transactions to close. Stale transaction alert > on: [standardtitantx[null], standardtitantx[null]] (ManagementLogger:189) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ATLAS-2120) Inconsistency in Importing already existing types on backup cluster with new definition.
[ https://issues.apache.org/jira/browse/ATLAS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry reassigned ATLAS-2120: -- Assignee: Ashutosh Mestry
> Inconsistency in Importing already existing types on backup cluster with new definition.
>
> Key: ATLAS-2120
> URL: https://issues.apache.org/jira/browse/ATLAS-2120
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.9-incubating
> Reporter: Sharmadha Sainath
> Assignee: Ashutosh Mestry
> Priority: Critical
>
> 1. Created a tag tag1 on cluster1 with attributes:
> * attrib1: string
> * attrib2: integer
> 2. Created a tag with the same name on cluster2 with attributes:
> * attrib1: date
> * attrib3: integer
> (Note: the tag names are the same, and attrib1 exists in both, but the datatypes of attrib1 differ between the two clusters.)
> 3. On cluster1, created an entity, associated tag1 with the entity with attribute values
> * attrib1: "randstr"
> * attrib2: 5
> and exported the entity into a zip file.
> 4. Tried to import the entity into cluster2. The import failed with a 500 Internal Server Error and the following exception:
> {code}
> {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: org.apache.atlas.repository.graphdb.AtlasSchemaViolationException: com.thinkaurelius.titan.core.SchemaViolationException: Value [rand_str] is not an instance of the expected data type for property key [tag1.attrib1] and cannot be converted. Expected: class java.lang.Long, found: class java.lang.String"}
> {code}
> The inconsistency observed: the entity is not imported into cluster2, but the type definition of tag1 in cluster2 now has 3 attributes (attrib1: date, attrib2: integer, attrib3: integer), and a 500 Internal Server Error is thrown.
> Normally, when an attempt is made to update the datatype of an attribute, Atlas throws the following exception and the type is not updated:
> {code}
> {"errorCode":"ATLAS-400-00-029","errorMessage":"Data type update for attribute is not supported"}
> {code}
> The same is expected to happen while importing, i.e. the import failing with a Bad Request and a proper error message. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2120) Inconsistency in Importing already existing types on backup cluster with new definition.
[ https://issues.apache.org/jira/browse/ATLAS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2120: --- Attachment: ATLAS-2120-Import-Failure-Test.patch -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2120) Inconsistency in Importing already existing types on backup cluster with new definition.
[ https://issues.apache.org/jira/browse/ATLAS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159363#comment-16159363 ] Ashutosh Mestry commented on ATLAS-2120: [~ssainath] I am able to duplicate this problem on my unit test. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (ATLAS-2120) Inconsistency in Importing already existing types on backup cluster with new definition.
[ https://issues.apache.org/jira/browse/ATLAS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159363#comment-16159363 ] Ashutosh Mestry edited comment on ATLAS-2120 at 9/8/17 9:56 PM: [~ssainath] I am able to duplicate this problem on my unit test. However, this is not a scenario we are supporting. We make a best effort to merge the attributes, but for the case (like the one you have) where the same attribute is of a different type, we fail the import. I think it will be good to close this bug. CC: [~madhan.neethiraj] was (Author: ashutoshm): [~ssainath] I am able to duplicate this problem on my unit test. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2120) Inconsistency in Importing already existing types on backup cluster with new definition.
[ https://issues.apache.org/jira/browse/ATLAS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2120: --- Attachment: ATLAS-2120-Import-Failure-Test.patch -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2120) Inconsistency in Importing already existing types on backup cluster with new definition.
[ https://issues.apache.org/jira/browse/ATLAS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2120: --- Attachment: (was: ATLAS-2120-Import-Failure-Test.patch) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2120) Inconsistency in Importing already existing types on backup cluster with new definition.
[ https://issues.apache.org/jira/browse/ATLAS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159973#comment-16159973 ] Ashutosh Mestry commented on ATLAS-2120: [~madhan.neethiraj] As of now, we add _attrib2_ since it is absent. We don't modify _attrib1_ since it is already present. I have updated the unit test (attached) to reflect this. Should I change the behavior so that we don't update if there is any conflict? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
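The merge rule described in the comment above can be sketched in code. This is an illustrative model only: the class, method, and string-based type representation are hypothetical, not the actual Atlas type-registry API. Attributes absent on the target are added (attrib2 in the scenario), a matching attribute is left untouched, and a conflicting datatype fails the merge up front, which a caller could map to the ATLAS-400-00-029 error instead of a 500:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of attribute merging during type import.
// Attribute datatypes are modeled as plain strings for brevity.
public class TypeMergeSketch {
    public static Map<String, String> merge(Map<String, String> existing,
                                            Map<String, String> incoming) {
        Map<String, String> merged = new LinkedHashMap<>(existing);
        for (Map.Entry<String, String> e : incoming.entrySet()) {
            String current = merged.get(e.getKey());
            if (current == null) {
                // Absent on the target: add it (e.g. attrib2 in the scenario).
                merged.put(e.getKey(), e.getValue());
            } else if (!current.equals(e.getValue())) {
                // Same attribute, different datatype: fail before touching the store,
                // mirroring "Data type update for attribute is not supported".
                throw new IllegalArgumentException(
                        "Data type update for attribute is not supported: " + e.getKey());
            }
            // Same datatype: nothing to do, keep the existing definition.
        }
        return merged;
    }
}
```

Checking for conflicts before applying any change is what makes the failure reportable as a Bad Request rather than a half-applied update followed by a 500.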
[jira] [Updated] (ATLAS-1996) split the atlas application log file if it grows big
[ https://issues.apache.org/jira/browse/ATLAS-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-1996: --- Attachment: (was: ATLAS-1996.patch)
> split the atlas application log file if it grows big
>
> Key: ATLAS-1996
> URL: https://issues.apache.org/jira/browse/ATLAS-1996
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.9-incubating
> Reporter: Deepak Sharma
> Assignee: Ashutosh Mestry
> Fix For: 0.9-incubating
>
> The Atlas application log grows to multiple GBs; at times it reaches 100 GB. It would be better to split the file once it grows beyond roughly 100-200 MB. For example, some instances have been seen with a 20 GB log file. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
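Size-based rollover of this kind is usually configured with a log4j 1.x `RollingFileAppender`. The fragment below is a sketch only; the appender name, file path, and layout pattern are assumptions for illustration, not the contents of the attached patch or of the actual Atlas log4j configuration:

```xml
<!-- Sketch: cap the application log at ~200 MB per file and keep a
     bounded number of rolled backups instead of one unbounded file. -->
<appender name="FILE" class="org.apache.log4j.RollingFileAppender">
    <param name="File" value="${atlas.log.dir}/application.log"/>
    <param name="MaxFileSize" value="200MB"/>
    <param name="MaxBackupIndex" value="20"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p - [%t:%x] ~ %m (%C{1}:%L)%n"/>
    </layout>
</appender>
```

With `MaxBackupIndex` set, total disk usage is bounded (here roughly 20 × 200 MB), which addresses the 20-100 GB single-file growth described in the report.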
[jira] [Updated] (ATLAS-1996) split the atlas application log file if it grows big
[ https://issues.apache.org/jira/browse/ATLAS-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-1996: --- Attachment: ATLAS-1996.patch -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2129) Atlas shutdown during progress of bulk import throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/ATLAS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2129: --- Attachment: import-nested-txn.patch
> Atlas shutdown during progress of bulk import throws ConcurrentModificationException
>
> Key: ATLAS-2129
> URL: https://issues.apache.org/jira/browse/ATLAS-2129
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Affects Versions: 0.9-incubating
> Reporter: Sharmadha Sainath
> Assignee: Ashutosh Mestry
> Priority: Blocker
> Attachments: import-nested-txn.patch
>
> 1. Exported a hive_db containing 300 hive_table entities from cluster1 into a zip file.
> 2. Tried to import it into cluster2.
> 3. When the import was in progress (at 34%), stopped Atlas.
> 4. The following exception was seen in the application logs of cluster2:
> {code}
> 2017-09-11 10:17:09,192 INFO - [pool-2-thread-9 - 83fe24a2-ff2b-4add-a94e-a54b09090912:] ~ bulkImport(): progress: 34% (of 301) - entity:last-imported:hive_table:[102]:(2d629a8e-5c94-40e8-b83f-8a9c91c6de8d) (AtlasEntityStoreV1:238)
> 2017-09-11 10:17:09,340 ERROR - [SIGTERM handler:] ~ Could not commit transaction [1] due to exception (StandardTitanGraph:731)
> java.util.ConcurrentModificationException
> at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
> at java.util.ArrayList$Itr.next(ArrayList.java:851)
> at atlas.shaded.titan.guava.common.collect.Iterators$7.computeNext(Iterators.java:701)
> at atlas.shaded.titan.guava.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at atlas.shaded.titan.guava.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.prepareCommit(StandardTitanGraph.java:473)
> at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.commit(StandardTitanGraph.java:654)
> at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.commit(StandardTitanTx.java:1337)
> at com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsGraph.shutdown(TitanBlueprintsGraph.java:120)
> at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.shutdownInternal(StandardTitanGraph.java:171)
> at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.access$700(StandardTitanGraph.java:75)
> at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph$ShutdownThread.start(StandardTitanGraph.java:756)
> at java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:102)
> at java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
> at java.lang.Shutdown.runHooks(Shutdown.java:123)
> at java.lang.Shutdown.sequence(Shutdown.java:167)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Other operations that run during Atlas shutdown, such as the shutdown hook and the stopping of the KafkaNotification service, NotificationHookConsumer, and HBaseBasedAuditRepository, are not invoked.
> 5. After that, restarted Atlas; it functioned properly.
> 6. Resumed the import with the import option startGuid= %. Atlas had been stopped when the import was at 34%.
> 7. The import completed successfully.
> 8. Post import, only entities from 33% to 100% were imported; the initial 32% of the entities were not imported.
> 9. Fired an import command again without any interruption and without specifying the startGUID. All 300 hive_table entities and 1 hive_db were imported successfully. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
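The fail-fast iteration in the stack trace above can be reproduced in isolation. The following is a minimal, hypothetical model of the failure mode, one thread (here, standing in for the SIGTERM shutdown path) iterating the open-transaction list while the import path is still appending to it; it is not Atlas or Titan code:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

// ArrayList iterators are fail-fast: any structural modification made
// outside the iterator while iteration is in progress triggers
// ConcurrentModificationException on the next iterator call, the same
// exception class seen in the Atlas shutdown log above.
public class CmeDemo {
    public static boolean reproduces() {
        List<String> openTxns = new ArrayList<>();
        openTxns.add("txn-1");
        openTxns.add("txn-2");
        try {
            for (Iterator<String> it = openTxns.iterator(); it.hasNext(); ) {
                it.next();
                // Simulates the import thread adding a transaction while
                // the shutdown path is iterating the same list.
                openTxns.add("txn-added-during-iteration");
            }
        } catch (ConcurrentModificationException e) {
            return true;
        }
        return false;
    }
}
```

This is why serializing shutdown against in-flight import work (as the attached nested-transaction patch aims to do) avoids the exception: the commit loop never observes the list changing underneath it.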
[jira] [Commented] (ATLAS-2129) Atlas shutdown during progress of bulk import throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/ATLAS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163480#comment-16163480 ] Ashutosh Mestry commented on ATLAS-2129: [~ssainath] I don't think this should be a blocker. At best it can be marked as _Major_. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2129) Atlas shutdown during progress of bulk import throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/ATLAS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2129: --- Attachment: (was: import-nested-txn.patch) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2129) Atlas shutdown during progress of bulk import throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/ATLAS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2129: --- Attachment: ATLAS-2129-bulkImport-nested-txn.patch > Atlas shutdown during progress of bulk import throws > ConcurrentModificationException > > > Key: ATLAS-2129 > URL: https://issues.apache.org/jira/browse/ATLAS-2129 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Assignee: Ashutosh Mestry >Priority: Blocker > Attachments: ATLAS-2129-bulkImport-nested-txn.patch > > > 1. Exported a hive_db containing 300 hive_table entities from cluster1 into > a zip file. > 2. Tried to import into cluster2. > 3. When the import was in progress (at 34%), stopped Atlas. > 4. The following exception was seen in the application logs of cluster2: > {code} > 2017-09-11 10:17:09,192 INFO - [pool-2-thread-9 - > 83fe24a2-ff2b-4add-a94e-a54b09090912:] ~ bulkImport(): progress: 34% (of 301) > - > entity:last-imported:hive_table:[102]:(2d629a8e-5c94-40e8-b83f-8a9c91c6de8d) > (AtlasEntityStoreV1:238) > 2017-09-11 10:17:09,340 ERROR - [SIGTERM handler:] ~ Could not commit > transaction [1] due to exception (StandardTitanGraph:731) > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901) > at java.util.ArrayList$Itr.next(ArrayList.java:851) > at > atlas.shaded.titan.guava.common.collect.Iterators$7.computeNext(Iterators.java:701) > at > atlas.shaded.titan.guava.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > atlas.shaded.titan.guava.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.prepareCommit(StandardTitanGraph.java:473) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.commit(StandardTitanGraph.java:654) > at > 
com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.commit(StandardTitanTx.java:1337) > at > com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsGraph.shutdown(TitanBlueprintsGraph.java:120) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.shutdownInternal(StandardTitanGraph.java:171) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.access$700(StandardTitanGraph.java:75) > at > com.thinkaurelius.titan.graphdb.database.StandardTitanGraph$ShutdownThread.start(StandardTitanGraph.java:756) > at > java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:102) > at > java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46) > at java.lang.Shutdown.runHooks(Shutdown.java:123) > at java.lang.Shutdown.sequence(Shutdown.java:167) > at java.lang.Shutdown.exit(Shutdown.java:212) > at java.lang.Terminator$1.handle(Terminator.java:52) > at sun.misc.Signal$1.run(Signal.java:212) > at java.lang.Thread.run(Thread.java:748) > {code} > Other operations that normally run during Atlas shutdown, such as the shutdown hook, stopping the KafkaNotification service, NotificationHookConsumer and HBaseBasedAuditRepository, are not called. > 5. After that, restarted Atlas. Atlas functioned properly. > 6. Resumed import with the import option startGuid= %. Atlas was stopped when the import was going on at 34%. > 7. Import completed successfully. > 8. Post import, only entities from 33% to 100% were imported. The initial 32% of the entities were not imported. > 9. Fired an import command again, without any interruption and without specifying the startGUID. All 300 hive_table entities and 1 hive_db were imported successfully. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
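The ConcurrentModificationException in the trace above is Java's fail-fast iterator check: a collection is structurally modified while it is being iterated, here because the SIGTERM shutdown hook commits the transaction while the import path is still mutating its state. A minimal, self-contained illustration (hypothetical, not Atlas code) of the failure mode, and of how a snapshot-based collection sidesteps it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ComodificationDemo {
    // Iterate a list and, partway through, structurally modify it.
    // Returns true if the iteration survives, false on ConcurrentModificationException.
    static boolean iterateWhileMutating(List<Integer> list) {
        try {
            for (Integer i : list) {
                if (i == 1) {
                    list.add(99); // structural change during iteration
                }
            }
            return true;
        } catch (java.util.ConcurrentModificationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // ArrayList's iterator is fail-fast: the modCount check in next() throws.
        assert !iterateWhileMutating(new ArrayList<>(List.of(1, 2, 3)));

        // CopyOnWriteArrayList iterates over a snapshot, so the add is invisible
        // to the in-flight iterator and no exception is raised.
        assert iterateWhileMutating(new CopyOnWriteArrayList<>(List.of(1, 2, 3)));
    }
}
```

The attached patch (ATLAS-2129-bulkImport-nested-txn.patch) presumably addresses this at the transaction level rather than the collection level; the sketch only demonstrates the exception mechanics visible in the stack trace.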
[jira] [Assigned] (ATLAS-2127) Import of an entity associated with a tag into backup cluster with updateTypeDefinition options set to false
[ https://issues.apache.org/jira/browse/ATLAS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry reassigned ATLAS-2127: -- Assignee: Ashutosh Mestry > Import of an entity associated with a tag into backup cluster with > updateTypeDefinition options set to false > > > Key: ATLAS-2127 > URL: https://issues.apache.org/jira/browse/ATLAS-2127 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Assignee: Ashutosh Mestry > Attachments: ImportExceptionWithTag.txt > > > 1. On cluster1, created an entity and associated it with a tag. > 2. Exported the entity into a zip file. > 3. cluster2 is in a clean state (i.e. no entity/tag is created). > 4. Tried to import the entity zip into cluster2 with the import option > "updateTypeDefinition" set to false. > 5. Import failed with an NPE. > Since updateTypeDefinition is set to false, an NPE is thrown when an attempt is > made to associate the entity with the tag on the backup cluster. > This is the expected behavior, but the cause of the issue is not explicit to the > end user when Atlas throws an NPE. > The exception stack trace is attached. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
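A descriptive precondition check, rather than letting the missing type definition surface as an NPE, would make the cause explicit to the end user. A hypothetical sketch of such a guard (the method name, exception type, and message are illustrative, not the actual Atlas code path):

```java
public class TagAssociationCheck {
    // Hypothetical guard: when the tag's type definition was not imported
    // (updateTypeDefinition=false on a clean cluster), fail with a clear
    // message instead of dereferencing a null type definition.
    static String associateTag(Object tagTypeDef, String tagName) {
        if (tagTypeDef == null) {
            throw new IllegalStateException(
                "Type definition for tag '" + tagName + "' not found on target cluster; "
                + "re-run import with updateTypeDefinition=true or create the tag first");
        }
        return "associated:" + tagName; // stand-in for the real association logic
    }

    public static void main(String[] args) {
        boolean threw = false;
        try {
            associateTag(null, "PII"); // missing typedef: explicit error, not NPE
        } catch (IllegalStateException e) {
            threw = e.getMessage().contains("PII");
        }
        assert threw;
        assert associateTag(new Object(), "PII").equals("associated:PII");
    }
}
```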
[jira] [Commented] (ATLAS-2063) Compressed HiveHook Messages
[ https://issues.apache.org/jira/browse/ATLAS-2063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161966#comment-16161966 ] Ashutosh Mestry commented on ATLAS-2063: Duplicate of: [ATLAS-2064|https://issues.apache.org/jira/browse/ATLAS-2064] > Compressed HiveHook Messages > > > Key: ATLAS-2063 > URL: https://issues.apache.org/jira/browse/ATLAS-2063 > Project: Atlas > Issue Type: Improvement > Components: atlas-intg >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > > *Background* > Messages posted by hooks to Atlas Kafka topics are in JSON format. > Kafka imposes a 1MB limit on the message size. > Occasionally, depending on the operations performed, this threshold is reached. > This results in messages being dropped. > The entities are thus not reflected in Atlas. > *Solution* > Applying compression to these messages will alleviate the problem. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
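Since Kafka payloads are byte arrays, compressing the JSON before posting is straightforward with the JDK's GZIP streams. A small sketch of the idea (not the actual hook code; Atlas may choose a different codec):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class MessageGzip {
    // Compress a JSON message to bytes suitable for a Kafka record value.
    static byte[] compress(String json) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(json.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e); // in-memory streams should not fail
        }
        return bos.toByteArray();
    }

    // Inverse operation on the consumer side.
    static String decompress(byte[] data) {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // Repetitive entity JSON, as produced by hooks, compresses very well.
        String json = "{\"entities\":[" + "{\"typeName\":\"hive_table\"},".repeat(1000) + "{}]}";
        byte[] packed = compress(json);
        assert packed.length < json.length();   // well under the 1MB threshold now
        assert decompress(packed).equals(json); // round-trip is lossless
    }
}
```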
[jira] [Commented] (ATLAS-2120) Inconsistency in Importing already existing types on backup cluster with new definition.
[ https://issues.apache.org/jira/browse/ATLAS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161968#comment-16161968 ] Ashutosh Mestry commented on ATLAS-2120: [~ssainath] We have decided to address this by checking the attribute type of attributes that already exist. If an attribute exists and its type differs, we throw an exception. I will update the documentation accordingly. > Inconsistency in Importing already existing types on backup cluster with new > definition. > > > Key: ATLAS-2120 > URL: https://issues.apache.org/jira/browse/ATLAS-2120 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Assignee: Ashutosh Mestry >Priority: Critical > Attachments: ATLAS-2120-Import-Failure-Test.patch, hdfs_path1.zip > > > 1. Created a tag tag1 on cluster1 with attributes: > * attrib1: string > * attrib2: integer > 2. Created a tag with the same name on cluster2 with attributes: > * attrib1: date > * attrib3: integer > (Note: the tag names are the same, and attrib1 exists in both, but the datatypes of attrib1 > differ between the clusters.) > 3. On cluster1, created an entity and associated tag1 with the entity, with > attribute values > * attrib1: "randstr" > * attrib2: 5 > and exported the entity into a zip file. > 4. Tried to import the entity into cluster2. > Import failed with a 500 Internal Server Error and the following exception: > {code} > {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: > org.apache.atlas.repository.graphdb.AtlasSchemaViolationException: > com.thinkaurelius.titan.core.SchemaViolationException: Value [rand_str] is > not an instance of the expected data type for property key [tag1.attrib1] and > cannot be converted. 
Expected: class java.lang.Long, found: class > java.lang.String"} > {code} > The following inconsistency is observed: > the entity is not imported into cluster2, but the type definition of tag1 on > cluster2 now has 3 attributes (attrib1: date, attrib2: integer, > attrib3: integer), and a 500 Internal Server Error is thrown. > Normally, when an attempt is made to update the datatype of an attribute, Atlas > throws the following exception and the type is not updated: > {code} > {"errorCode":"ATLAS-400-00-029","errorMessage":"Data type update for > attribute is not supported"} > {code} > Expected the same to happen while importing, i.e. the import failing with a Bad > Request and a proper error message. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
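The fix described in the comment, rejecting the import when an existing attribute's type differs, can be sketched as a simple pre-import compatibility check. Names here are hypothetical; the real fix would reject the import with the ATLAS-400-00-029 error rather than return a boolean:

```java
import java.util.Map;

public class AttributeTypeCheck {
    // Hypothetical pre-import check: an attribute that already exists on the
    // target cluster with a different datatype makes the incoming type
    // definition incompatible. Genuinely new attributes are allowed.
    static boolean isCompatible(Map<String, String> existing, Map<String, String> incoming) {
        for (Map.Entry<String, String> e : incoming.entrySet()) {
            String current = existing.get(e.getKey());
            if (current != null && !current.equals(e.getValue())) {
                return false; // same attribute name, different datatype
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // cluster2 already has attrib1:date; cluster1's export carries attrib1:string
        Map<String, String> cluster2 = Map.of("attrib1", "date", "attrib3", "integer");
        assert !isCompatible(cluster2, Map.of("attrib1", "string", "attrib2", "integer"));
        // adding only a genuinely new attribute passes the check
        assert isCompatible(cluster2, Map.of("attrib2", "integer"));
    }
}
```

Running the check before mutating any type definition also avoids the partial-update inconsistency described in the bug, where tag1 ends up with three attributes even though the import fails.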
[jira] [Commented] (ATLAS-2089) Upgrade Jetty version for Atlas
[ https://issues.apache.org/jira/browse/ATLAS-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184395#comment-16184395 ] Ashutosh Mestry commented on ATLAS-2089: [~nixonrodrigues] I agree with what you are saying. Some of the fixes related to _SharedBlockingCallback.BlockerTimeoutException_ are only in v9.3. > Upgrade Jetty version for Atlas > --- > > Key: ATLAS-2089 > URL: https://issues.apache.org/jira/browse/ATLAS-2089 > Project: Atlas > Issue Type: Bug >Reporter: Nixon Rodrigues >Assignee: Nixon Rodrigues > Attachments: ATLAS-2089.patch > > > Jetty version in Atlas pom.xml is ‘9.2.12.v20150709’, which is about 2 years > old (released date 2015-07-09). Most recent one in maven central is > ‘9.4.6.v20170531’. We should try this version. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2089) Upgrade Jetty version for Atlas
[ https://issues.apache.org/jira/browse/ATLAS-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186056#comment-16186056 ] Ashutosh Mestry commented on ATLAS-2089: [~nixonrodrigues] I have added your patch to the review. > Upgrade Jetty version for Atlas > --- > > Key: ATLAS-2089 > URL: https://issues.apache.org/jira/browse/ATLAS-2089 > Project: Atlas > Issue Type: Bug >Reporter: Nixon Rodrigues >Assignee: Nixon Rodrigues > Attachments: ATLAS-2089.patch, jetty-upgrade.patch > > > Jetty version in Atlas pom.xml is ‘9.2.12.v20150709’, which is about 2 years > old (released date 2015-07-09). Most recent one in maven central is > ‘9.4.6.v20170531’. We should try this version. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2075) Support Arbitrarily Large Size Messages from Hooks
[ https://issues.apache.org/jira/browse/ATLAS-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2075: --- Attachment: ATLAS-2075-message-split-combine.patch > Support Arbitrarily Large Size Messages from Hooks > -- > > Key: ATLAS-2075 > URL: https://issues.apache.org/jira/browse/ATLAS-2075 > Project: Atlas > Issue Type: Improvement > Components: atlas-core >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2075-message-split-combine.patch > > > *Background* > Messages produced by Hooks have the potential to be larger than 1MB, which is the > size threshold imposed by Kafka. > Although compressing the messages (see > [ATLAS-2064|https://issues.apache.org/jira/browse/ATLAS-2064]) alleviates > the problem, it is not a complete solution. It is possible even for > compressed messages to exceed the size threshold. > *Solution* > If the compressed message produced exceeds the size threshold, split the > message. Accumulate the parts at the consumer end. > Account for cases such as: > - Messages are not received in the order they are produced. > - The Atlas server is shut down before it can consume all the split messages. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2075) Support Arbitrarily Large Size Messages from Hooks
[ https://issues.apache.org/jira/browse/ATLAS-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2075: --- Description: *Background* Messages produced by Hooks have the potential to be larger than 1MB, which is the size threshold imposed by Kafka. Although compressing the messages (see [ATLAS-2064|https://issues.apache.org/jira/browse/ATLAS-2064]) alleviates the problem, it is not a complete solution. It is possible even for compressed messages to exceed the size threshold. *Solution* If the compressed message produced exceeds the size threshold, split the message. Accumulate the parts at the consumer end. Account for cases such as: - Messages are not received in the order they are produced. - The Atlas server is shut down before it can consume all the split messages. was: *Background* Messages produced by Hooks have the potential to be larger than 1MB, which is the size threshold imposed by Kafka. Although compressing the messages (see [ATLAS-2064](https://issues.apache.org/jira/browse/ATLAS-2064)) alleviates the problem, it is not a complete solution. It is possible even for compressed messages to exceed the size threshold. *Solution* If the compressed message produced exceeds the size threshold, split the message. Accumulate the parts at the consumer end. Account for cases such as: - Messages are not received in the order they are produced. - The Atlas server is shut down before it can consume all the split messages. > Support Arbitrarily Large Size Messages from Hooks > -- > > Key: ATLAS-2075 > URL: https://issues.apache.org/jira/browse/ATLAS-2075 > Project: Atlas > Issue Type: Improvement > Components: atlas-core >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2075-message-split-combine.patch > > > *Background* > Messages produced by Hooks have the potential to be larger than 1MB, which is the > size threshold imposed by Kafka. 
> Although compressing the messages (see > [ATLAS-2064|https://issues.apache.org/jira/browse/ATLAS-2064]) alleviates > the problem, it is not a complete solution. It is possible even for > compressed messages to exceed the size threshold. > *Solution* > If the compressed message produced exceeds the size threshold, split the > message. Accumulate the parts at the consumer end. > Account for cases such as: > - Messages are not received in the order they are produced. > - The Atlas server is shut down before it can consume all the split messages. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2075) Support Messages from Hooks of Arbitrarily Large Size
[ https://issues.apache.org/jira/browse/ATLAS-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2075: --- Summary: Support Messages from Hooks of Arbitrarily Large Size (was: Support Arbitrarily Large Size Messages from Hooks) > Support Messages from Hooks of Arbitrarily Large Size > - > > Key: ATLAS-2075 > URL: https://issues.apache.org/jira/browse/ATLAS-2075 > Project: Atlas > Issue Type: Improvement > Components: atlas-core >Affects Versions: 0.8-incubating >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2075-message-split-combine.patch > > > *Background* > Messages produced by Hooks have the potential to be larger than 1MB, which is the > size threshold imposed by Kafka. > Although compressing the messages (see > [ATLAS-2064|https://issues.apache.org/jira/browse/ATLAS-2064]) alleviates > the problem, it is not a complete solution. It is possible even for > compressed messages to exceed the size threshold. > *Solution* > If the compressed message produced exceeds the size threshold, split the > message. Accumulate the parts at the consumer end. > Account for cases such as: > - Messages are not received in the order they are produced. > - The Atlas server is shut down before it can consume all the split messages. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
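The split/combine scheme described above can be sketched as follows: each chunk carries a message id, its index, and the total chunk count, so the consumer can reorder late arrivals and detect incompleteness. This is a hypothetical illustration of the idea, not the Atlas wire format:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MessageSplitter {
    // Split a payload into numbered chunks no larger than maxSize characters.
    // Header format (illustrative): id|index|total|data
    static List<String> split(String msgId, String payload, int maxSize) {
        List<String> chunks = new ArrayList<>();
        int total = (payload.length() + maxSize - 1) / maxSize;
        for (int i = 0; i < total; i++) {
            int end = Math.min(payload.length(), (i + 1) * maxSize);
            chunks.add(msgId + "|" + i + "|" + total + "|" + payload.substring(i * maxSize, end));
        }
        return chunks;
    }

    // Reassemble received chunks; tolerates out-of-order delivery and
    // returns null while parts are still missing (the "shutdown mid-stream" case).
    static String combine(List<String> received) {
        if (received.isEmpty()) return null;
        int total = Integer.parseInt(received.get(0).split("\\|", 4)[2]);
        if (received.size() < total) return null; // incomplete; keep waiting
        List<String> sorted = new ArrayList<>(received);
        sorted.sort(Comparator.comparingInt(c -> Integer.parseInt(c.split("\\|", 4)[1])));
        StringBuilder sb = new StringBuilder();
        for (String c : sorted) sb.append(c.split("\\|", 4)[3]);
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> parts = split("m1", "abcdefghij", 4); // "abcd", "efgh", "ij"
        assert parts.size() == 3;
        // out-of-order delivery still reassembles correctly
        List<String> shuffled = List.of(parts.get(2), parts.get(0), parts.get(1));
        assert "abcdefghij".equals(combine(new ArrayList<>(shuffled)));
        // a lone middle chunk is recognized as incomplete
        assert combine(new ArrayList<>(List.of(parts.get(1)))) == null;
    }
}
```

A real implementation would also expire partial accumulations after a timeout, so chunks orphaned by a producer crash do not pile up at the consumer.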
[jira] [Updated] (ATLAS-2100) Support for Saving Searches
[ https://issues.apache.org/jira/browse/ATLAS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2100: --- Attachment: ATLAS-2100-SavedSearch-EntityBased.patch > Support for Saving Searches > --- > > Key: ATLAS-2100 > URL: https://issues.apache.org/jira/browse/ATLAS-2100 > Project: Atlas > Issue Type: Improvement > Components: atlas-core >Affects Versions: trunk >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2100-SavedSearch-EntityBased.patch > > > *Background* > The current set of features around search allows users to construct complex > queries. These queries, once constructed, cannot be persisted and re-used > later. Users have to reconstruct them each time they want to run them. > *Solution* > The ability to save constructed queries and retrieve them later will benefit > the user. > *Implementation* > A logged-in user should be able to: > * Save a constructed query. > * View all constructed queries. > * Edit an already-constructed query. > * Execute an already-constructed query. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2100) Support for Saving Searches
[ https://issues.apache.org/jira/browse/ATLAS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2100: --- Attachment: (was: ATLAS-2100-SavedSearch.patch) > Support for Saving Searches > --- > > Key: ATLAS-2100 > URL: https://issues.apache.org/jira/browse/ATLAS-2100 > Project: Atlas > Issue Type: Improvement > Components: atlas-core >Affects Versions: trunk >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > > *Background* > The current set of features around search allows users to construct complex > queries. These queries, once constructed, cannot be persisted and re-used > later. Users have to reconstruct them each time they want to run them. > *Solution* > The ability to save constructed queries and retrieve them later will benefit > the user. > *Implementation* > A logged-in user should be able to: > * Save a constructed query. > * View all constructed queries. > * Edit an already-constructed query. > * Execute an already-constructed query. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2037) Unit Test Failure: NotificationHookConsumerTest.testConsumersAreStoppedWhenInstanceBecomesPassive
[ https://issues.apache.org/jira/browse/ATLAS-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151308#comment-16151308 ] Ashutosh Mestry commented on ATLAS-2037: 0.8-incubating commit: 31991ee46e6de438fb75dd6be85f033b16b98baa master commit: 6f9684b4fb0a1c96993df900305d0c45c9a4e32f The commit incorrectly has ATLAS-2033 in its message. > Unit Test Failure: > NotificationHookConsumerTest.testConsumersAreStoppedWhenInstanceBecomesPassive > - > > Key: ATLAS-2037 > URL: https://issues.apache.org/jira/browse/ATLAS-2037 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: trunk >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry >Priority: Minor > Fix For: trunk, 0.8.1-incubating > > Attachments: ATLAS-2037-HookConsumer.patch > > > *Analysis* > - The test has a few mocks, but we don't mock _HookConsumer_ (derived from > _ShutdownableThread_). Hence, the object needs to be used in line with its usage > pattern. > - The _ShutdownableThread_ has a _CountDownLatch_ that is checked during > shutdown. > In the test, the _HookConsumer_ was not being started at all. This caused > _shutdownLatch_ (of type _CountDownLatch_) not to be decremented: no work > was performed, yet the latch anticipated that work would be done. The test thus > never completed, since the thread waited forever. > *Solution* > The _HookConsumer_ should be used such that it is started and shut down, so > that the test passes. > _HookConsumer_ should check _shouldRun_ in the _shutdown_ method, so that the > case where _shutdown_ is called without _start_ is handled. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
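The proposed guard, checking _shouldRun_ in _shutdown_ so that shutdown-without-start does not wait on a latch that will never be counted down, can be sketched roughly like this. It is a simplification: the real class derives from kafka.utils.ShutdownableThread, and the field names here are illustrative:

```java
import java.util.concurrent.CountDownLatch;

public class GuardedConsumer extends Thread {
    private final CountDownLatch shutdownLatch = new CountDownLatch(1);
    private volatile boolean shouldRun = false;
    volatile boolean stopped = false; // observable for the demo only

    @Override public synchronized void start() {
        shouldRun = true;        // only a started consumer owes a countdown
        super.start();
    }

    @Override public void run() {
        try {
            while (shouldRun) Thread.sleep(5); // stand-in for doWork()
        } catch (InterruptedException ignored) {
        } finally {
            stopped = true;
            shutdownLatch.countDown(); // decremented only if run() actually executed
        }
    }

    // The fix sketched in the issue: skip the latch wait if never started.
    void shutdown() {
        if (!shouldRun) return;  // shutdown() without start(): nothing to wait for
        shouldRun = false;
        try {
            shutdownLatch.await(); // without the guard above, this waits forever
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        GuardedConsumer never = new GuardedConsumer();
        never.shutdown();        // returns immediately; the unguarded version hangs here
        assert !never.stopped;

        GuardedConsumer c = new GuardedConsumer();
        c.start();
        c.shutdown();            // loop observes shouldRun=false and exits
        assert c.stopped;
    }
}
```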
[jira] [Updated] (ATLAS-2092) Failures following concurrent updates
[ https://issues.apache.org/jira/browse/ATLAS-2092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2092: --- Attachment: import-nested-txn.patch > Failures following concurrent updates > - > > Key: ATLAS-2092 > URL: https://issues.apache.org/jira/browse/ATLAS-2092 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Reporter: Graham Wallis > Attachments: import-nested-txn.patch, Investigations and findings > relating to concurrent updates in Atlas.pdf > > > There is a race condition that causes duplication of schema vertices as a > result of concurrent graph updates. This in turn leads to failure of queries > that specify a type such as an edge label used in an attribute that > references another entity. This problem is known to affect Atlas entity refs > – which create graph edges that use edge label schema vertices. It is likely > that it also affects other types in Atlas. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (ATLAS-2092) Failures following concurrent updates
[ https://issues.apache.org/jira/browse/ATLAS-2092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16149210#comment-16149210 ] Ashutosh Mestry edited comment on ATLAS-2092 at 8/31/17 4:45 PM: - [~grahamwallis] I am reading the PDF now. The analysis is comprehensive and I find it an interesting read. Thanks for putting this together! One thing we have noticed is that the version of _Berkeley DB_ we are using has a few problems, mostly around transaction commits. Only yesterday we were contemplating the effort required to move our tests to use embedded HBASE and embedded Solr. We have not reached any conclusion yet. Doing this would help us get around some of the intermittent failures we see in test environments. It will also make our unit & IT tests run in an environment that is close to production. How easy is it for you to duplicate this problem on an HBASE/Solr setup? It would be interesting to see the results. Regarding the 'double wrapping of transactions' (Appendix, Observation 2): I worked on resolving it. Please find the patch attached. was (Author: ashutoshm): [~grahamwallis] I am reading the PDF now. The analysis is comprehensive and I find it an interesting read. Thanks for putting this together! One thing we have noticed is that the version of _Berkeley DB_ we are using has a few problems, mostly around transaction commits. Only yesterday we were contemplating the effort required to move our tests to use embedded HBASE and embedded Solr. We have not reached any conclusion yet. Doing this would help us get around some of the intermittent failures we see in test environments. It will also make our unit & IT tests run in an environment that is close to production. How easy is it for you to duplicate this problem on an HBASE/Solr setup? It would be interesting to see the results. 
> Failures following concurrent updates > - > > Key: ATLAS-2092 > URL: https://issues.apache.org/jira/browse/ATLAS-2092 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Reporter: Graham Wallis > Attachments: Investigations and findings relating to concurrent > updates in Atlas.pdf > > > There is a race condition that causes duplication of schema vertices as a > result of concurrent graph updates. This in turn leads to failure of queries > that specify a type such as an edge label used in an attribute that > references another entity. This problem is known to affect Atlas entity refs > – which create graph edges that use edge label schema vertices. It is likely > that it also affects other types in Atlas. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
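The 'double wrapping of transactions' observation mentioned in the comment can be illustrated with a reentrancy-aware wrapper that commits only at the outermost level instead of effectively opening a nested transaction. This is a hypothetical sketch of the hazard and one mitigation, not the attached patch:

```java
public class TxWrapper {
    // Per-thread nesting depth: an inner inTransaction() call joins the
    // outer "transaction" instead of starting (and committing) its own.
    private static final ThreadLocal<Integer> depth = ThreadLocal.withInitial(() -> 0);
    static int commits = 0; // exposed for the demo only

    static void inTransaction(Runnable work) {
        depth.set(depth.get() + 1);
        try {
            work.run();
        } finally {
            depth.set(depth.get() - 1);
            if (depth.get() == 0) {
                commits++; // commit only when the outermost wrapper unwinds
            }
        }
    }

    public static void main(String[] args) {
        // A nested call path that would be double-wrapped without the depth check.
        inTransaction(() -> inTransaction(() -> { /* inner work */ }));
        assert commits == 1; // one commit despite two wrapper entries
    }
}
```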
[jira] [Created] (ATLAS-2100) Support for Saving Searches
Ashutosh Mestry created ATLAS-2100: -- Summary: Support for Saving Searches Key: ATLAS-2100 URL: https://issues.apache.org/jira/browse/ATLAS-2100 Project: Atlas Issue Type: Improvement Components: atlas-core Affects Versions: trunk Reporter: Ashutosh Mestry Assignee: Ashutosh Mestry Fix For: trunk *Background* The current set of features around search allows users to construct complex queries. These queries, once constructed, cannot be persisted and re-used later. Users have to reconstruct them each time they want to run them. *Solution* The ability to save constructed queries and retrieve them later will benefit the user. *Implementation* A logged-in user should be able to: * Save a constructed query. * View all constructed queries. * Edit an already-constructed query. * Execute an already-constructed query. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
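The four operations listed (save, view, edit, execute) map naturally onto a small per-user store. A hypothetical sketch of the shape of such a store; the class and method names are illustrative, not the Atlas API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SavedSearchStore {
    // user -> (search name -> query text); a real store would persist this.
    private final Map<String, Map<String, String>> byUser = new HashMap<>();

    void save(String user, String name, String query) {
        byUser.computeIfAbsent(user, u -> new HashMap<>()).put(name, query);
    }

    List<String> list(String user) { // "view all the queries constructed"
        return new ArrayList<>(byUser.getOrDefault(user, Map.of()).keySet());
    }

    void edit(String user, String name, String newQuery) { // overwrite in place
        save(user, name, newQuery);
    }

    String queryFor(String user, String name) { // fetched before "execute"
        return byUser.getOrDefault(user, Map.of()).get(name);
    }

    public static void main(String[] args) {
        SavedSearchStore store = new SavedSearchStore();
        store.save("alice", "my-tables", "hive_table where name like 'sales*'");
        store.edit("alice", "my-tables", "hive_table where owner = 'alice'");
        assert store.list("alice").size() == 1;             // edit did not duplicate
        assert store.queryFor("alice", "my-tables").contains("owner");
    }
}
```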
[jira] [Created] (ATLAS-2101) Remove use of Guava Stopwatch from Atlas
Ashutosh Mestry created ATLAS-2101: -- Summary: Remove use of Guava Stopwatch from Atlas Key: ATLAS-2101 URL: https://issues.apache.org/jira/browse/ATLAS-2101 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: trunk Reporter: Ashutosh Mestry Assignee: Ashutosh Mestry Priority: Minor Fix For: trunk *Background* Using _IntelliJ_, attempt to start the Atlas server. Chances are the startup will fail due to an error resolving _com.google.common.base.StopWatch_. *Solution* * Add _StandardIDPool.java_ to the shaded _Titan0_ JAR. * Replace the use of _Stopwatch_ with an alternative implementation. See also: [HBASE-14963|https://issues.apache.org/jira/browse/HBASE-14963] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2101) Remove use of Guava Stopwatch from Atlas
[ https://issues.apache.org/jira/browse/ATLAS-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2101: --- Description: *Background* Using _IntelliJ_, attempt to start the Atlas server. Chances are the startup will fail due to an error resolving _com.google.common.base.StopWatch_. A similar problem is observed in some build environments. *Solution* * Add _StandardIDPool.java_ to the shaded _Titan0_ JAR. * Replace the use of _Stopwatch_ with an alternative implementation. See also: [HBASE-14963|https://issues.apache.org/jira/browse/HBASE-14963] was: *Background* Using _IntelliJ_, attempt to start the Atlas server. Chances are the startup will fail due to an error resolving _com.google.common.base.StopWatch_. *Solution* * Add _StandardIDPool.java_ to the shaded _Titan0_ JAR. * Replace the use of _Stopwatch_ with an alternative implementation. See also: [HBASE-14963|https://issues.apache.org/jira/browse/HBASE-14963] > Remove use of Guava Stopwatch from Atlas > > > Key: ATLAS-2101 > URL: https://issues.apache.org/jira/browse/ATLAS-2101 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: trunk >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry >Priority: Minor > Fix For: trunk > > > *Background* > Using _IntelliJ_, attempt to start the Atlas server. Chances are the startup will > fail due to an error resolving _com.google.common.base.StopWatch_. > A similar problem is observed in some build environments. > *Solution* > * Add _StandardIDPool.java_ to the shaded _Titan0_ JAR. > * Replace the use of _Stopwatch_ with an alternative implementation. > See also: [HBASE-14963|https://issues.apache.org/jira/browse/HBASE-14963] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2101) Remove use of Guava Stopwatch from Atlas
[ https://issues.apache.org/jira/browse/ATLAS-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2101: --- Attachment: StandardIDPool.java > Remove use of Guava Stopwatch from Atlas > > > Key: ATLAS-2101 > URL: https://issues.apache.org/jira/browse/ATLAS-2101 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: trunk >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry >Priority: Minor > Fix For: trunk > > Attachments: StandardIDPool.java > > > *Background* > Using _IntelliJ_, attempt to start the Atlas server. Chances are the startup will > fail due to an error resolving _com.google.common.base.StopWatch_. > A similar problem is observed in some build environments. > *Solution* > * Add _StandardIDPool.java_ to the shaded _Titan0_ JAR. > * Replace the use of _Stopwatch_ with an alternative implementation. > See also: [HBASE-14963|https://issues.apache.org/jira/browse/HBASE-14963] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
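A drop-in replacement for the problematic Guava _Stopwatch_ needs little more than System.nanoTime(). A sketch of one possible 'alternative implementation' (hypothetical, not the attached StandardIDPool.java patch):

```java
public class SimpleStopwatch {
    private long startNanos;
    private long elapsedNanos;
    private boolean running;

    SimpleStopwatch start() {
        startNanos = System.nanoTime(); // monotonic clock, immune to wall-clock jumps
        running = true;
        return this;
    }

    SimpleStopwatch stop() {
        elapsedNanos += System.nanoTime() - startNanos;
        running = false;
        return this;
    }

    long elapsedMillis() {
        long nanos = elapsedNanos + (running ? System.nanoTime() - startNanos : 0);
        return nanos / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleStopwatch sw = new SimpleStopwatch().start();
        Thread.sleep(50);
        sw.stop();
        assert sw.elapsedMillis() >= 40; // sleep guarantees roughly this much elapsed
    }
}
```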
[jira] [Updated] (ATLAS-2100) Support for Saving Searches
[ https://issues.apache.org/jira/browse/ATLAS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Mestry updated ATLAS-2100: --- Attachment: ATLAS-2100-SavedSearch.patch > Support for Saving Searches > --- > > Key: ATLAS-2100 > URL: https://issues.apache.org/jira/browse/ATLAS-2100 > Project: Atlas > Issue Type: Improvement > Components: atlas-core >Affects Versions: trunk >Reporter: Ashutosh Mestry >Assignee: Ashutosh Mestry > Fix For: trunk > > Attachments: ATLAS-2100-SavedSearch.patch > > > *Background* > The current set of features around search allows users to construct complex > queries. These queries, once constructed, cannot be persisted and re-used > later. Users have to reconstruct them each time they want to run them. > *Solution* > The ability to save constructed queries and retrieve them later will benefit > the user. > *Implementation* > A logged-in user should be able to: > * Save a constructed query. > * View all constructed queries. > * Edit an already-constructed query. > * Execute an already-constructed query. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2186) Atlas not responding for long time after startup
[ https://issues.apache.org/jira/browse/ATLAS-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192065#comment-16192065 ] Ashutosh Mestry commented on ATLAS-2186: Here's the rationale for downgrading Jetty to v8.x: - Some customers reported the bug where the Atlas UI does not load; server-side logs show a _SharedBlockingCallback$BlockerTimeoutException_. Based on some research, it appears that the current Jetty version has [Bug 472621|https://bugs.eclipse.org/bugs/show_bug.cgi?id=472621], which was addressed in v9.3. However, v9.3 and some versions of v9.4 have other reported bugs related to SSL behavior. - After discussion with the Ambari team, it appeared that they had encountered a similar bug with Jetty. They thought it was safe to downgrade to v8, which appears to be more stable than the more recent Jetty versions. > Atlas not responding for long time after startup > > > Key: ATLAS-2186 > URL: https://issues.apache.org/jira/browse/ATLAS-2186 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8.2 >Reporter: Ayub Pathan >Assignee: Ashutosh Mestry >Priority: Blocker > Fix For: 0.8.2 > > Attachments: atlas.20171003-122531.out, > ATLAS-2186-Jetty-dowgrade.patch > > > Atlas with the latest bits is not responding to any query; requests time out. > This behavior can be seen for more than ~15 minutes after Atlas startup, and > everything returns to normal after this period. > I suspect one of the threads is looping or hanging, which is resulting in this > state. Atlas logs do not show any suspicious information (checked with > debug logs as well). I took a thread dump of Atlas, attaching the dump file > and report below. I strongly suspect this might be an issue with the newer Jetty > version introduced recently. > Thread dump report: > http://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTcvMTAvMy8tLWF0bGFzLjIwMTcxMDAzLTEyMjUzMS5vdXQtLTEyLTE2LTQ2 -- This message was sent by Atlassian JIRA (v6.4.14#64029)