Re: Questions regarding committing code to Malhar

2015-12-29 Thread Sandeep Deshmukh
You should not check your credentials into the repository. A user who needs
to run this will have to create his/her own credentials and use them with
the application.

If your code is in malhar-contrib, it will automatically be skipped in the
unit-testing cycle.

Regards,
Sandeep

On Wed, Dec 30, 2015 at 1:39 AM, Perumal, Nithin 
wrote:

> I have fixed the demos that were not working the last time I submitted a
> pull request to Malhar.  I also made a number of the improvements asked
> for on my last pull request, but I need to have a couple of questions
> answered before I can resubmit my files:
>
>
> 1. For each additional twitter-demo feature that I added, I also added a
> testing file similar to those that already exist for the twitter-demo.  I
> was wondering whether I should keep the consumer key, consumer secret,
> access token, and access token secret in the resources XML file, as is
> done in the existing resources?
>
>
> 2. I had initially set up my testing files to read the Twitter access
> credentials from *"/home/dtadmin/.dt/dt-site.xml"* (I am developing on
> the sandbox).  This way, the credentials would not be exposed in the XML
> file, and the user would store his keys in this XML file.  Is this way
> acceptable?  Or should I put the credentials in the testing resources XML
> file as it currently is?  If I continue to access credentials from
> *"/home/dtadmin/.dt/dt-site.xml"*
>
>
>
>
>


Re: [malhar-users] Loading the properties file at run time in DT Application.

2015-12-30 Thread Sandeep Deshmukh
For 3, handleIdleTime() is not the right choice for this requirement: this
function call is not guaranteed. You can use beginWindow or endWindow for
such a purpose.
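A minimal plain-Java sketch of that window-based polling (no Apex dependency; the class name and property key are illustrative assumptions): reload the key=value file only when its modification time changes. An operator could call reloadIfChanged() from beginWindow() or endWindow().

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Sketch: reload a key=value properties file only when its timestamp changes.
public class PropertiesPoller {
    private final Path file;
    private long lastModified = -1L;
    private final Properties props = new Properties();

    public PropertiesPoller(Path file) {
        this.file = file;
    }

    /** Returns true if the file changed and was reloaded. */
    public boolean reloadIfChanged() throws IOException {
        long mtime = Files.getLastModifiedTime(file).toMillis();
        if (mtime == lastModified) {
            return false;            // unchanged since the last poll
        }
        try (InputStream in = Files.newInputStream(file)) {
            props.load(in);          // parse key=value pairs
        }
        lastModified = mtime;
        return true;
    }

    public String get(String key, String defaultValue) {
        return props.getProperty(key, defaultValue);
    }
}
```

Checking the modification time first keeps the per-window cost to a single filesystem stat when nothing has changed.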

Regards,
Sandeep

On Wed, Dec 30, 2015 at 6:02 PM, Yogi Devendra 
wrote:

> 1. The Web UI has a facility to manually change the operator properties.
>
> 2. If you need to change them programmatically, you can have a look at the
> REST API documentation available here:
> http://docs.datatorrent.com/dtgateway_api/
>
> This is like 'pushing' the properties whenever there is some change.
>
> 3. If you are looking for continuous polling of some properties file to
> which some external program is writing property values: you can make use
> of handleIdleTime() in your operator code to achieve this.
>
> But, I would recommend option 2 over option 3.
>
> ~ Yogi
>
> On Wed, Dec 30, 2015 at 5:20 PM, PULLARAO KOTA <10fe1a0...@gmail.com>
> wrote:
>
>> Hi, I am not using dtcli. I am using the DataTorrent Console Web UI to
>> upload the jar and run the application. I want to load the properties at
>> the time of launching the application instead of giving each property
>> separately.
>>
>> The properties file will be key=value pairs. Please suggest a method to
>> solve the problem.
>>
>>
>>
>> On Wednesday, December 30, 2015 at 4:32:35 PM UTC+5:30, Yogi Devendra
>> wrote:
>>>
>>> Hi,
>>>
>>> I assume you are referring to operator properties loaded from XML or
>>> JSON configuration.
>>>
>>> 1. dtcli provides a mechanism to connect to the running application
>>> using the following command:
>>>
>>> connect app-id
>>> Connect to an app
>>>
>>> 2. The following command is available, after you connect to the
>>> application, to set operator properties:
>>>
>>> set-operator-property operator-name property-name property-value
>>> Set a property of an operator
>>>
>>> ~ Yogi
>>>
>>> P. S: malhar...@googlegroups.com is now deprecated.
>>>  Kindly use us...@apex.incubator.apache.org mailing list for any
>>> further  questions.
>>>
>>> On Wed, Dec 30, 2015 at 3:04 PM, PULLARAO KOTA <10fe1...@gmail.com>
>>> wrote:
>>>
 Hi,
 I have a DataTorrent application which loads properties from a file and
 uses them in code. Sometimes we have to change the properties at regular
 intervals, and in that case we have to stop and start the application each
 time. Restarting the application each time is a hectic job. Is there any
 method to read the properties at run time, even after we change the file?

 --
 You received this message because you are subscribed to the Google
 Groups "Malhar" group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to malhar-users...@googlegroups.com.
 To post to this group, send email to malhar...@googlegroups.com.
 Visit this group at https://groups.google.com/group/malhar-users.
 For more options, visit https://groups.google.com/d/optout.

>>>


Re: What-if analysis with apex

2016-01-28 Thread Sandeep Deshmukh
Hi Amit,

Your concern is that a change to one cell is going to trigger updates for a
large number of cells, and you are interested in doing this in parallel to
get a real-time response. This can very well be achieved using Apex.

I think we are still not very clear on your use case, and hence what we
have proposed may not match what you are looking for.

We would like to know how you will be representing your dependencies in a
graph. How many such dependency graphs will there be? Do you have one graph
per changed cell, defining its dependent cells? So, for the example you
mentioned, do you put O1's dependent cells into one graph? And then there
is another graph which defines what values are updated if some other cell
O7 is updated.

Once we fully understand your requirements, we should be able to guide you
better.


Regards,
Sandeep

On Thu, Jan 28, 2016 at 2:56 PM, Amit Shah  wrote:

> Ashwin, below are the follow-up queries that I have based on your response.
>
> The store I mentioned is just an abstraction. It can be an in-memory store,
>> or a cache-backed lookup from a database.
>
>
> Yes, I understand the term store, but I didn't follow the need for it.
>
> How does your UI interact with your server today?
>
>
> Our UI is built on AngularJS, so it communicates with the server through
> REST APIs.
>
> You don't have to create a new DAG for each cell you are changing. You can
>> have a single DAG running and send across your query with the cell changes
>> in the schema you define. You can perform all corresponding changes for
>> other cells/table rows in the store operator.
>
>
> I was under the impression that by defining one operator per column index,
> I could take advantage of Apex running individual operators in individual
> JVMs, and hence get parallel writes with real-time or near-real-time
> response. If we have a single static DAG that accepts the cell identifier
> (row id, column index, and table id) as parameters, then we would not be
> able to concurrently update cell values, right?
> If your understanding is different from the flow I explained in my
> previous mail, what do I gain by using Apex?
>
>
> Thanks,
> Amit.
>
>
> On Thu, Jan 28, 2016 at 12:51 AM, Ashwin Chandra Putta <
> ashwinchand...@gmail.com> wrote:
>
>> Amit,
>>
>> The store I mentioned is just an abstraction. It can be in memory store,
>> or a cache backed lookup from a database.
>>
>> For the query/query response, when interacting with a UI - you can send
>> your queries to the query operator and listen for response from the query
>> response operator. Historically we have used json over websockets to
>> interact from browser. How does your UI interact with your server today?
>>
>> You dont have to create a new DAG for each cell you are changing. You can
>> have a single DAG running and send across your query with the cell changes
>> in the schema you define. You can perform all corresponding changes for
>> other cells/table rows in the store operator.
>>
>> If you still want to depend completely on your existing server for
>> loading initial data, then you can load it to a cache in store and do your
>> analysis on that data in memory.
>>
>> Regards,
>> Ashwin.
>>
>> On Wed, Jan 27, 2016 at 7:42 AM, Amol Kekre  wrote:
>>
>>>
>>> Amit,
>>> Here are some answers:
>>> - The logic that you want to run can be coded as a utility, which is
>>> then invoked by any other operator.
>>> - PopulateDAG() is today part of the roll-out of the app, i.e., it is
>>> similar to "compileTime" and not "runTime". You could do runTime, but
>>> then you will need to go through dtcli, and today runTime changes via
>>> dtcli need a lot more coding. A very early version of runTime changes
>>> (based on system metrics) exists, but the ask is for changes based on
>>> application data. That ask is in the roadmap of the module rollout
>>> (phase II?), and others can comment on the roadmap for runTime
>>> populateDAG.
>>> - The outputs of many operators can be streamed as input to one operator
>>> in the following ways:
>>>- Each output having a different schema will mean different input ports
>>> on that operator, as a port schema is fixed. This is fine, but will
>>> clutter the DAG.
>>>- If the schema of these output ports is the same, there is a merge
>>> operator that does that (
>>> https://github.com/apache/incubator-apex-malhar/blob/master/library/src/main/java/com/datatorrent/lib/stream/StreamMerger.java).
>>> You can write one for Nx1 merge by extending the above class.
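As a plain-Java illustration of the Nx1 idea (no Apex dependency; all names below are illustrative, not the StreamMerger API): every input port forwards same-schema tuples to one output consumer, which is what an Nx1 extension of the merge operator does with real ports.

```java
import java.util.function.Consumer;

// Sketch of an N-to-1 merge: each input "port" forwards tuples of the
// same schema T to a single shared output, preserving arrival order.
public class NxMerge<T> {
    private final Consumer<T> output;

    public NxMerge(Consumer<T> output) {
        this.output = output;
    }

    /** Create another input port; any number of ports can be created. */
    public Consumer<T> newInputPort() {
        return output::accept;  // every port forwards to the one output
    }
}
```

Because all ports share one schema T, adding an input is just another call to newInputPort(), mirroring how a fixed-schema Nx1 merge avoids cluttering the DAG with heterogeneous ports.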
>>>
>>> Thks,
>>> Amol
>>>
>>>
>>> On Wed, Jan 27, 2016 at 6:03 AM, Amit Shah  wrote:
>>>
 Thanks Ashwin for the follow-up.
 I am not sure I completely follow the query -> store -> query pattern.
 What does query mean here? Why would we need an in-memory store?
 Trying to list the flow, I came up with the points below:

1. We need to build a DAG after we get to know the cell (table, row
and column index) that is modified by the user.
2. Once we receive user input (i.e. once

Re: What-if analysis with apex

2016-01-28 Thread Sandeep Deshmukh
Thanks, Amit. We have a better understanding of your requirements now.

It is not necessary that each cell will be one operator. Please don't get
biased by that assumption.

Here are a few more queries:
>1. Loading values for unmodified cells
What is the source of these unmodified cells?

> 3. Execute the cells in parallel (if possible)
Which cells are you referring to? The changed cells (e.g., table 1, row 1,
column 1), whose change triggers recalculation of dependent cells, or the
two dependent cells themselves?

Regards
Sandeep
On 28-Jan-2016 8:20 pm, "Amit Shah"  wrote:

> Thanks Sandeep for the follow-up. I have tried responding to your queries.
> Kindly let me know if that gives you an idea of what I am trying to achieve.
>
> how you will be representing your dependencies in a graph
>
>
> Attached is a sample dependency graph. I was assuming each cell would be
> represented as an operator in Apex terms, so that the cells could be
> executed in parallel.
>
> How many such dependency graphs will be there?
>
>
> The total number of graphs would be approximately equal to the number of
> rows that could be modified by the user (considering the worst case). The
> number should be in the 1000s.
>
> Do you have one graph per change of cell defining its dependent cells? So,
>> for the example you mentioned, do you define it as O1 dependent cells into
>> one graph? Then there is another graph which defines what values are
>> updated if some other cell O7 is updated.
>
>
> Yes, approximately one graph per cell. The dependency graph I have tried
> to present in the attached diagram could be executed if any of the cell
> values in tables 1, 2 or 4 are updated. For simplicity, I have picked
> cells from distinct tables.
>
> In my view, once the user sees the tables in the UI, we could create the
> dependency graphs in the background. Once he/she updates a cell value, our
> application would figure out its corresponding dependency graph and start
> its execution by:
> 1. Loading values for unmodified cells
> 2. Determining the cells (or operators) that are to be recalculated. For
> example, if the cell with identifier table 1, row 1, column 1 is updated,
> the application would determine that 2 cell values are to be updated.
> 3. Executing the cells in parallel (if possible)
> 4. Rendering the updated values in real time to the user.
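The fan-out described in steps 2 and 3 can be sketched independently of Apex as a breadth-first walk over the dependency graph. The cell ids and the map layout below are illustrative assumptions, not part of any Apex API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: given a dependency graph (cell -> cells that depend on it),
// find every cell to recalculate when one cell changes.
public class DependentCells {
    public static List<String> cellsToRecalculate(
            Map<String, List<String>> dependents, String changed) {
        List<String> order = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(changed);
        seen.add(changed);
        while (!queue.isEmpty()) {
            String cell = queue.poll();
            for (String d : dependents.getOrDefault(
                    cell, Collections.<String>emptyList())) {
                if (seen.add(d)) {
                    order.add(d);   // recalculate in breadth-first order
                    queue.add(d);
                }
            }
        }
        return order;
    }
}
```

Cells at the same depth in the returned order have no dependency on each other here, so they are the candidates for parallel recalculation.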
>
> Thanks,
> Amit.
>
> On Thu, Jan 28, 2016 at 7:28 PM, Sandeep Deshmukh  > wrote:
>
>> Hi Amit,
>>
>> Your concern is that change of one cell is going to trigger update for
>> large number of cells and you are interested in doing this in parallel to
>> get real-time response. This can be very well achieved using Apex.
>>
>> I think we are still not very clear on your use case and hence what we
>> have proposed may not match what you are looking for.
>>
>> We would like to know how you will be representing your dependencies in a
>> graph. How many such dependency graphs will be there? Do you have one graph
>> per change of cell defining its dependent cells? So, for the example you
>> mentioned, do you define it as O1 dependent cells into one graph? Then
>> there is another graph which defines what values are updated if some other
>> cell O7 is updated.
>>
>> Once we fully understand your requirements, we should be able to guide
>> you better.
>>
>>
>> Regards,
>> Sandeep
>>
>> On Thu, Jan 28, 2016 at 2:56 PM, Amit Shah  wrote:
>>
>>> Ashwin, Below are follow up queries that I have based on your response.
>>>
>>> The store I mentioned is just an abstraction. It can be in memory store,
>>>> or a cache backed lookup from a database.
>>>
>>>
>>> Yes I understand by the term store but I didn't follow the need of it.
>>>
>>> How does your UI interact with your server today?
>>>
>>>
>>> Our UI is built over angularjs so it communicates with the server
>>> through REST api's.
>>>
>>> You dont have to create a new DAG for each cell you are changing. You
>>>> can have a single DAG running and send across your query with the cell
>>>> changes in the schema you define. You can perform all corresponding changes
>>>> for other cells/table rows in the store operator.
>>>
>>>
>>> I was under the impression that by defining one operator per column
>>> index I could take advantage of Apex running individual operators in
>>> individual JVMs, and hence get parallel writes with real-time or
>>> near-real-time response. If we have a single static DAG that accepts
>>> the cell identifier

Three day Apache Hadoop and Apache Apex workshop in Pune (for students only)

2016-02-12 Thread Sandeep Deshmukh
Dear Apex Users,

We are organizing three day Apache Hadoop and Apache Apex workshop (for
students only)  at Pune Institute of Computer Technology (PICT), Pune.

Open for all
Day 1 (2/12/16)
1.30pm to 2.00pm Registration
2.00pm to 2.15pm Apache & Open Source Overview
2.15pm to 5.00pm Introduction to Hadoop


Only for selected participants

Day 2 (2/13/16)
9.00am to 1.00pm Hadoop Hands-on (HDFS & MapReduce)
2.00pm to 5.00pm Apache Apex Basics
Day 3 (2/14/16)
9.00am to 1.00pm Apache Apex Hands-on

We have constraints on the number of participants for the hands-on
sessions, and hence only selected participants will attend the workshop on
day 2 and day 3.

We had the first session (Day 1) today, and 100+ students attended. In
general, the students were very interested in learning about Big Data and
Hadoop technologies.

We will share our experience from the remaining workshop days early next
week.

Regards,
Sandeep


Re: resources @ incubator website

2016-03-07 Thread Sandeep Deshmukh
+1

Regards,
Sandeep

On Tue, Mar 8, 2016 at 9:58 AM, Pradeep Dalvi  wrote:

> +1
>
> On Tue, Mar 8, 2016 at 9:58 AM, Shubham Pathak 
> wrote:
>
>> +1
>>
>> Thanks,
>> Shubham
>>
>> On Tue, Mar 8, 2016 at 9:52 AM, Thomas Weise 
>> wrote:
>>
>>> +1
>>>
>>>
>>> On Mon, Mar 7, 2016 at 8:21 PM, Amol Kekre  wrote:
>>>

 This is a good idea.

 Thks
 Amol

 On Mon, Mar 7, 2016 at 7:05 PM, Sandesh Hegde 
 wrote:

> Hello Team,
>
> How about having a single page which links to all the resources
> available for learning Apex?
>
> For examples:
> 1. Slideshares
> 2. YouTube links
> 3. BrightTalk links
> 4. Blogs ?
> etc etc.
>
> Thanks
> Sandesh
>


>>>
>>
>
>
> --
> Pradeep A. Dalvi
>
> Software Engineer
> DataTorrent (India)
>


Re: Facebook app for Apex

2016-03-07 Thread Sandeep Deshmukh
Dear Sairam,

There is no such app as of now. You are most welcome to build one.

Regards,
Sandeep

On Tue, Mar 8, 2016 at 11:55 AM, Sairam Kannan 
wrote:

> Hi Apex Developer Community,
>   I am planning to build a Facebook application for Apex.
> Is there any Facebook application built for Apex (similar to the one for
> Twitter)? If not, can you suggest some ideas for creating a Facebook
> application for Apex, using Facebook's authentication and building
> operators for different functions, for example analyzing the timeline data
> of users/friends in real time based on their posts, etc. I am open to new
> ideas and suggestions :).
>
> Thanks and Regards,
> Sairam Kannan
> LinkedIn - https://www.linkedin.com/in/sairamkannan
>


Re: AWS EMR: Container is running beyond virtual memory limits

2016-03-11 Thread Sandeep Deshmukh
What is the memory allocated to the container?
On 11-Mar-2016 8:34 pm, "Pradeep A. Dalvi"  wrote:

> We are facing the following error message while starting containers on
> AWS EMR.
>
> Container [pid=8107,containerID=container_1457702160744_0001_01_07] is 
> running beyond virtual memory limits. Current usage: 186.1 MB of 256 MB 
> physical memory used; 2.0 GB of 1.3 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1457702160744_0001_01_07 :
>   |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>   |- 8222 8107 8107 8107 (java) 589 62 2041503744 46944 
> /usr/lib/jvm/java-openjdk/bin/java -Xmx234881024 
> -Ddt.attr.APPLICATION_PATH=hdfs://ip-172-31-9-174.ec2.internal:8020/user/hadoop/datatorrent/apps/application_1457702160744_0001
>  
> -Djava.io.tmpdir=/mnt1/yarn/usercache/hadoop/appcache/application_1457702160744_0001/container_1457702160744_0001_01_07/tmp
>  -Ddt.cid=container_1457702160744_0001_01_07 
> -Dhadoop.root.logger=INFO,RFA 
> -Dhadoop.log.dir=/var/log/hadoop-yarn/containers/application_1457702160744_0001/container_1457702160744_0001_01_07
>  com.datatorrent.stram.engine.StreamingContainer
>   |- 8107 8105 8107 8107 (bash) 1 5 115806208 705 /bin/bash -c 
> /usr/lib/jvm/java-openjdk/bin/java  -Xmx234881024  
> -Ddt.attr.APPLICATION_PATH=hdfs://ip-172-31-9-174.ec2.internal:8020/user/hadoop/datatorrent/apps/application_1457702160744_0001
>  
> -Djava.io.tmpdir=/mnt1/yarn/usercache/hadoop/appcache/application_1457702160744_0001/container_1457702160744_0001_01_07/tmp
>  -Ddt.cid=container_1457702160744_0001_01_07 
> -Dhadoop.root.logger=INFO,RFA 
> -Dhadoop.log.dir=/var/log/hadoop-yarn/containers/application_1457702160744_0001/container_1457702160744_0001_01_07
>  com.datatorrent.stram.engine.StreamingContainer 
> 1>/var/log/hadoop-yarn/containers/application_1457702160744_0001/container_1457702160744_0001_01_07/stdout
>  
> 2>/var/log/hadoop-yarn/containers/application_1457702160744_0001/container_1457702160744_0001_01_07/stderr
>
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
>
>
> We had 1 m3.xlarge MASTER & 2 m3.xlarge CORE instances provisioned. We
> also have tried m4.4xlarge instances. EMR Task configurations can be found
> at
> http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/TaskConfiguration_H2.html
>
> We tried changing the following YARN configurations; however, they did
> not seem to help much.
>
> <property>
>   <name>yarn.nodemanager.resource.memory-mb</name>
>   <value>12288</value>
> </property>
> <property>
>   <name>yarn.scheduler.maximum-allocation-mb</name>
>   <value>4096</value>
> </property>
> <property>
>   <name>yarn.nodemanager.vmem-check-enabled</name>
>   <value>false</value>
> </property>
> <property>
>   <name>yarn.nodemanager.vmem-pmem-ratio</name>
>   <value>50</value>
> </property>
>
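As a sanity check on the numbers in the log above: YARN enforces a virtual-memory ceiling equal to the physical allocation multiplied by yarn.nodemanager.vmem-pmem-ratio, so a 256 MB container needs a much larger ratio (or the vmem check disabled) before a JVM that maps ~2 GB of virtual memory survives. A minimal sketch of the arithmetic (the ratio values are illustrative):

```java
// YARN kills a container when its virtual memory exceeds
// physical allocation (MB) * yarn.nodemanager.vmem-pmem-ratio.
public class VmemCeiling {
    public static long ceilingMb(long physicalMb, double vmemPmemRatio) {
        return (long) (physicalMb * vmemPmemRatio);
    }
}
```

With a ratio around 5 a 256 MB container is capped near the ~1.3 GB limit in the log; raising the ratio to 50 lifts the cap well above the ~2 GB the JVM actually mapped.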
>
>
> Thanks,
> --
> Pradeep A. Dalvi
>


Re: Best practises to specify the Application

2016-04-29 Thread Sandeep Deshmukh
Apex needs Java 1.7 at minimum.
What Java version are you using?

Regards,
Sandeep
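One likely cause, hinted at in the stack trace quoted below: class file major version 52.0 corresponds to Java 8, so the operator was compiled with a newer JDK than the JVM running it. A hedged pom.xml fragment (using the standard maven-compiler-plugin shortcut properties) that pins compilation to Java 7:

```xml
<properties>
  <!-- Compile bytecode the cluster's Java 7 runtime can load. -->
  <maven.compiler.source>1.7</maven.compiler.source>
  <maven.compiler.target>1.7</maven.compiler.target>
</properties>
```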

On Fri, Apr 29, 2016 at 1:50 PM, Ananth Gundabattula <
agundabatt...@gmail.com> wrote:

> I had a look at the dtgateway.log and observed the following stack trace.
> I guess the root cause might be Apex running on a lower version of Java
> while the application code was compiled with a higher version. I shall
> update everyone after the upgrade to see if it fixes the issue.
>
>
> Fatal error encountered
>>
>> 2016-04-29 08:04:45,181 ERROR
>> com.datatorrent.gateway.resources.ws.v2.WSResource: Caught exception
>> com.datatorrent.gateway.y: java.lang.UnsupportedClassVersionError:
>> com/tx/y/z/Operator : Unsupported major.minor version 52.0
>> at java.lang.ClassLoader.defineClass1(Native Method)
>> at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>> at
>> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>> at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>> at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>> at
>> com.datatorrent.stram.webapp.OperatorDiscoverer.addDefaultValue(OperatorDiscoverer.java:297)
>> at
>> com.datatorrent.stram.cli.DTCli$GetJarOperatorClassesCommand.execute(DTCli.java:3010)
>> at
>> com.datatorrent.stram.cli.DTCli$GetAppPackageOperatorsCommand.execute(DTCli.java:3755)
>> at com.datatorrent.stram.cli.DTCli$3.run(DTCli.java:1449)
>>
>
>
> On Fri, Apr 29, 2016 at 5:49 PM, Ananth Gundabattula <
> agundabatt...@gmail.com> wrote:
>
>> Thanks Shubham.
>>
>> Those properties were being set by the parent pom which are inherited by
>> the current pom.xml ( Kind of wanted all apex apps to have same version
>> structure etc) and hence added that property as part of parent pom.
>>
>> I am still stumped as to what is causing this.
>>
>> Regards,
>> Ananth
>>
>>
>>
>> On Fri, Apr 29, 2016 at 4:23 PM, Shubham Pathak 
>> wrote:
>>
>>> Hi Ananth,
>>>
>>> I tried $ mvn archetype:generate  -DarchetypeGroupId=org.apache.apex
>>>  -DarchetypeArtifactId=apex-app-archetype 
>>> -DarchetypeVersion=3.3.0-incubating
>>>  -DgroupId=com.example -Dpackage=com.example.mydtapp -DartifactId=mydtapp
>>>  -Dversion=1.0-SNAPSHOT
>>> and it works for me as well.
>>>
>>> I compared the pom.xml files and noticed that the values for the
>>> following properties were missing:
>>>
>>> ${apache.apex.apppackage.classpath}
>>>
>>> ${apache.apex.engine.version}
>>>
>>> Could you add the following to pom.xml and try again?
>>>
>>> <properties>
>>>   <apache.apex.engine.version>3.3.0-incubating</apache.apex.engine.version>
>>>   <apache.apex.apppackage.classpath>lib/*.jar</apache.apex.apppackage.classpath>
>>> </properties>
>>>
>>>
>>> Thanks,
>>> Shubham
>>>
>>> On Fri, Apr 29, 2016 at 11:12 AM, Ananth Gundabattula <
>>> agundabatt...@gmail.com> wrote:
>>>
 Hello David,

 I reattempted the packaging with the archetype based approach. I guess
 the 3.3.0 archetype did not exist when I attempted it earlier. Thanks for
 pointing it out.

 The default package generated by the archetype command seems to be
 working fine when uploading it, but the moment I add in my code and
 dependencies, it goes back to the issue I am seeing.


 My app package looks like app.zip (I removed all of the application
 class files for brevity's sake). Please note that I deleted all the jars
 under the /lib folder and the contents of the /app folder in the apa
 package.

 I have also attached a copy of my pom.xml file.

 Could you please advise if you see something amiss right away in the
 structure below? It may be noted that I retained the classes of the
 original simple project generated by the archetype, and even they stopped
 showing with the launch button the moment I added in all of the
 application dependencies in maven.

 Regards,
 Ananth

 On Fri, Apr 29, 2016 at 12:01 PM, David Yan 
 wrote:

> Hi Ananth,
>
> I just tried with 3.3.0-incubating with the following and it's
> working:
>
> $ mvn archetype:generate  -DarchetypeGroupId=org.apache.apex
>  -DarchetypeArtifactId=apex-app-archetype
> -DarchetypeVersion=3.3.0-incubating  -DgroupId=com.example
> -Dpackage=com.example.mydtapp -DartifactId=mydtapp  -Dversion=1.0-SNAPSHOT
>
> Let me know whether this works for you or not.
>
> The system uses the MANIFEST.MF file to get the basic meta information
> on the app package and it looks at all the jars in the app directory and
> searches for cl

Re: Containers getting killed by application master

2016-05-17 Thread Sandeep Deshmukh
Dear Ananth,

Could you please check the STRAM logs for any details of these containers?
The first guess would be the container going out of memory.

Regards,
Sandeep

On Wed, May 18, 2016 at 10:05 AM, Ananth Gundabattula <
agundabatt...@gmail.com> wrote:

> Hello All,
>
> I was wondering what would cause a container to be killed by the
> application master?
>
> I see the following in the UI when I click on details :
>
> "
>
> Container killed by the ApplicationMaster.
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
>
> "
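For reference, exit code 143 in the quoted output follows the usual "128 + signal number" convention for processes killed by a signal: 143 - 128 = 15, i.e. SIGTERM, which is what YARN sends when the ApplicationMaster requests a kill. A tiny sketch of that decoding:

```java
// Exit codes above 128 conventionally encode death-by-signal:
// exit code = 128 + signal number (so 143 means SIGTERM, signal 15).
public class ExitCode {
    public static int signalFromExitCode(int exitCode) {
        return exitCode > 128 ? exitCode - 128 : 0;
    }
}
```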
>
> I see some exceptions in the dtgateway.log and am not sure if they are
> related.
>
> I am running Apex 3.3.0 on CDH 5.7 and HA enabled (HA for YARN as well as 
> HDFS is enabled).
>
>
>
>
>