IgniteCache.destroy() taking long time

2019-08-13 Thread Shravya Nethula
Hi,

I have created a cache using the following API:
IgniteCache cache = (IgniteCache) ignite.getOrCreateCache(cacheCfg);

Now when I try to delete the cache using the IgniteCache.destroy() API, it
takes about 12-13 seconds.

Why is it taking so much execution time? Will there be any exchange of cache
information among the nodes whenever a cache is deleted?
Is there any way in which the execution time can be optimized?
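
(For context: IgniteCache.destroy() removes the cache and its data from
every node in the cluster, so some cross-node coordination is expected. A
minimal Java sketch for timing the call; the config path and cache name
below are hypothetical:)

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class DestroyTiming {
        public static void main(String[] args) {
            // Start a node from a hypothetical Spring XML config.
            Ignite ignite = Ignition.start("config/example-ignite.xml");

            CacheConfiguration<Long, String> cfg =
                new CacheConfiguration<>("myCache");
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);

            long start = System.nanoTime();
            // destroy() deletes the cache on all cluster nodes, which
            // involves coordination among them.
            cache.destroy();
            System.out.printf("destroy took %d ms%n",
                (System.nanoTime() - start) / 1_000_000);
        }
    }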

Regards,
Shravya Nethula.

Fwd: The Apache(R) Software Foundation Announces Annual Report for 2019 Fiscal Year

2019-08-13 Thread Denis Magda
>
> 18. Top 5 most active mailing lists (user@ + dev@): Flink, Beam, Lucene,
> *Ignite*, and Kafka;


Community fellows, congrats! Ignite continues to be one of the top ASF
projects among the 300+ in the category above. Thanks to the dev community
for its contributions, and to the user community for selecting Ignite and
helping us improve it over time.

-
Denis
Ignite PMC Chair

-- Forwarded message -
From: Sally Khudairi 
Date: Tue, Aug 13, 2019 at 8:03 PM
Subject: Fwd: The Apache® Software Foundation Announces Annual Report for
2019 Fiscal Year
To: , ASF Operations 
Cc: ASF Marketing & Publicity 


We are live. Thank you, everyone, for your help in getting this completed.

Warm regards,
Sally

- - -
Vice President Marketing & Publicity
Vice President Sponsor Relations
The Apache Software Foundation

Tel +1 617 921 8656 | s...@apache.org

- Original message -
From: Sally Khudairi 
To: Apache Announce List 
Subject: The Apache® Software Foundation Announces Annual Report for 2019
Fiscal Year
Date: Tuesday, August 13, 2019 13:01

[this announcement is available online at  https://s.apache.org/w7bw1 ]

World's largest Open Source foundation’s 300+ freely-available,
enterprise-grade Apache projects power some of the most visible and widely
used applications in computing today.

Wakefield, MA —13 August 2019— The Apache® Software Foundation (ASF), the
all-volunteer developers, stewards, and incubators of more than 350 Open
Source projects and initiatives, announced today the availability of the
annual report for its 2019 fiscal year, which ended 30 April 2019.

Celebrating its 20th Anniversary, the world's largest Open Source
foundation’s "Apache Way" of community-driven development is the process
behind hundreds of freely-available (100% no cost), enterprise-grade Apache
projects that serve as the backbone for some of the most visible and widely
used applications in Artificial Intelligence and Deep Learning, Big Data,
build management, Cloud Computing, content management, DevOps, IoT and Edge
computing, mobile, servers, and Web frameworks, among many other categories.

The ubiquity of Apache software is undeniable, with Apache projects
managing exabytes of data, executing teraflops of operations, and storing
billions of objects in virtually every industry. Apache software is an
integral part of nearly every end user computing device, from laptops to
tablets to phones. Apache software is used in every Internet-connected
country on the planet.

Highlights include:

1. ASF codebase is conservatively valued at least $20B, using the COCOMO 2
model;
2. Continued guardianship of 190M+ lines of code in the Apache repositories;
3. Profit for FY2018-2019: $585,486;
4. Total of 10 Platinum Sponsors, 9 Gold Sponsors, 11 Silver Sponsors, 25
Bronze Sponsors, 6 Platinum Targeted Sponsors, 5 Gold Targeted Sponsors, 3
Silver Targeted Sponsors, and 10 Bronze Targeted Sponsors;
5. 35 new individual ASF Members elected, totalling 766;
6. Exceeded 7,000 code Committers;
7. 202 Top-Level communities overseeing 332 Apache projects and
sub-projects;
8. 17 newly-graduated Top-Level Projects from the Apache Incubator;
9. 47 projects currently undergoing development in the Apache Incubator;
10. Top 5 most active/visited Apache projects: Hadoop, Kafka, Lucene, POI,
ZooKeeper;
11. Top 5 Apache repositories by number of commits: Camel, Hadoop, HBase,
Beam, and Flink;
12. Top 5 Apache repositories by lines of code: NetBeans, OpenOffice, Flex
(combined), Mynewt (combined), and Trafodion;
13. 35M page views per week across apache.org;
14. 9M+ source code downloads from Apache mirrors (excluding convenience
binaries);
15. Web requests received from every Internet-connected country on the
planet;
16. 3,280 Committers changed 71,186,324 lines of code over 222,684 commits;
17. 18,750 authors sent 1,402,267 emails on 570,469 topics across 1,131
mailing lists;
18. Top 5 most active mailing lists (user@ + dev@): Flink, Beam, Lucene,
Ignite, and Kafka;
19. Automated Gitbox across ~1,800 git repositories containing ~75GB of
code and repository history;
20. Each GitHub account monitored for security compliance;
21. GitHub traffic: Top 5 most active Apache sources --clones: Thrift,
Cordova, Arrow, Airflow, and Beam;
22. GitHub traffic: Top 5 most active Apache sources --visits: Spark,
Camel, Flink, Kafka, and Airflow;
23. 24th anniversary of the Apache HTTP Server (20 years under the ASF
umbrella);
24. 770 Individual Contributor License Agreements (CLAs) signed;
25. 28 Corporate Contributor License Agreements signed;
26. 26 Software Grant Agreements signed; and
27. ASF is a mentoring organization in Google Summer of Code for the 14th
consecutive year.


The full report is available online at
https://s.apache.org/FY2019AnnualReport

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350
leading Open Source projects, including Apache HTTP Server —the world's
most popular Web server software.

Re: Ignite Spark Example Question

2019-08-13 Thread sri hari kali charan Tummala
Can I run Ignite and Spark in cluster mode? In the GitHub example, all I
see is local mode. If I use a cloud-hosted Ignite cluster, how would I
install Spark in distributed mode? Does it come with the Ignite cluster?

https://github.com/apache/ignite/blob/1f8cf042f67f523e23f795571f609a9c81726258/examples/src/main/spark/org/apache/ignite/examples/spark/IgniteDataFrameWriteExample.scala#L89
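
(The linked example builds its SparkSession with a local master. A minimal
Java sketch of the same setup pointed at a separately deployed Spark
standalone cluster instead; the master host below is hypothetical:)

    import org.apache.spark.sql.SparkSession;

    public class ClusterSession {
        public static void main(String[] args) {
            // The example uses .master("local") for a single-JVM run;
            // against a standalone cluster only the master URL changes.
            SparkSession spark = SparkSession.builder()
                .appName("ignite-spark-example")
                .master("spark://spark-master:7077")
                .getOrCreate();

            // ... the Ignite DataFrame reads/writes stay the same ...

            spark.stop();
        }
    }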

On Tue, Aug 13, 2019 at 6:53 AM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> As I say, there’s nothing “out of the box” — you’d have to write it
> yourself. Exactly how you architect it would depend on what you’re trying
> to do.
>
> Regards,
> Stephen
>
> On 12 Aug 2019, at 19:59, sri hari kali charan Tummala <
> kali.tumm...@gmail.com> wrote:
>
> Thanks Stephen, last question: do I have to keep looping to find new data
> files in S3 and write them to the cache in real time, or is that already
> built in?
>
> On Mon, Aug 12, 2019 at 5:43 AM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> I don’t think there’s anything “out of the box,” but you could write a
>> custom CacheStore to do that.
>>
>> See here for more details:
>> https://apacheignite.readme.io/docs/3rd-party-store#section-custom-cachestore
>>
>> Regards,
>> Stephen
>>
>> On 9 Aug 2019, at 21:50, sri hari kali charan Tummala <
>> kali.tumm...@gmail.com> wrote:
>>
>> One last question: is there an S3 connector for Ignite which can load S3
>> objects in real time into the Ignite cache and push data updates directly
>> back to S3? I can use Spark as one alternative, but is there another
>> approach?
>>
>> Let's say I want to build an in-memory, near-real-time data lake where
>> files that land in S3 are automatically loaded into Ignite (I can use
>> Spark structured streaming jobs, but is there a direct approach?)
>>
>> On Fri, Aug 9, 2019 at 4:34 PM sri hari kali charan Tummala <
>> kali.tumm...@gmail.com> wrote:
>>
>>> Thank you, I got it. Now I have to change the id values to see the same
>>> data as extra results (this is just for testing). Amazing.
>>>
>>> val df = spark.sql("SELECT monotonically_increasing_id() AS id, name,
>>> department FROM json_person")
>>>
>>> df.write(append)... to ignite
>>>
>>> Thanks
>>> Sri
>>>
>>>
>>> On Fri, Aug 9, 2019 at 6:08 AM Andrei Aleksandrov <
>>> aealexsand...@gmail.com> wrote:
>>>
 Hi,

 Spark provides several *SaveModes* that apply when the table you are
 going to use already exists:

 * *Overwrite* - with this option Spark *re-creates* the existing table
 (or creates a new one) and loads the data into it using the
 IgniteDataStreamer implementation
 * *Append* - with this option Spark *does not re-create* the existing
 table; it creates the table if needed and just loads the data into it

 * *ErrorIfExists* - with this option you get an exception if the table
 you are going to use already exists

 * *Ignore* - with this option nothing is done if the table already
 exists: the save operation does not write the contents of the DataFrame
 and does not change the existing data.

 Regarding your question:

 You should use the *Append* SaveMode for your Spark integration if you
 want to store new data in the cache while keeping the previously stored
 data.

 Note that if you store data under the same primary keys, the data will
 be overwritten in the Ignite table. For example:

 1) Add person {id=1, name=Vlad, age=19} where id is the primary key
 2) Add person {id=1, name=Nikita, age=26} where id is the primary key

 In Ignite you will see only {id=1, name=Nikita, age=26}.

 Also, here you can find a code sample and more information about
 SaveModes:


 https://apacheignite-fs.readme.io/docs/ignite-data-frame#section-saving-dataframes
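
 For illustration, a minimal Java sketch of an Append write through the
 Ignite DataFrame support; the table name, primary-key field, input path,
 and config path are hypothetical:

    import org.apache.ignite.spark.IgniteDataFrameSettings;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;
    import org.apache.spark.sql.SparkSession;

    public class AppendToIgnite {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("append-to-ignite")
                .master("local[*]")
                .getOrCreate();

            Dataset<Row> people = spark.read().json("people.json");

            people.write()
                .format(IgniteDataFrameSettings.FORMAT_IGNITE())
                .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(),
                    "config/example-ignite.xml")
                .option(IgniteDataFrameSettings.OPTION_TABLE(), "person")
                .option(IgniteDataFrameSettings
                    .OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "id")
                // Append keeps existing rows; rows sharing a primary key
                // are overwritten, as described above.
                .mode(SaveMode.Append)
                .save();

            spark.stop();
        }
    }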

 BR,
 Andrei

 On 2019/08/08 17:33:39, sri hari kali charan Tummala
  wrote:
 > Hi All,
 >
 > I am new to the Apache Ignite community; I am testing out Ignite for
 > knowledge's sake. In the example below, the code reads a JSON file and
 > writes to an Ignite in-memory table. Is it overwriting, or can I do
 > append mode? I did try Spark append mode,
 > .mode(org.apache.spark.sql.SaveMode.Append), without stopping the
 > Ignite application (ignite.stop), which keeps the cache alive, and
 > tried to insert data into the cache twice, but I am still getting 4
 > records when I was expecting 8 records. What would be the reason?
 >
 > https://github.com/apache/ignite/blob/1f8cf042f67f523e23f795571f609a9c81726258/examples/src/main/spark/org/apache/ignite/examples/spark/IgniteDataFrameWriteExample.scala#L89
 >
 > --
 > Thanks & Regards
 > Sri Tummala
 >

>>>
>>>
>>> --
>>> Thanks & Regards
>>> Sri Tummala
>>>
>>>
>>
>> --
>> Thanks & Regards
>> Sri Tummala
>>
>>
>>
>>
>
> --
>

Re: Ignite Spark Example Question

2019-08-13 Thread Stephen Darlington
As I say, there’s nothing “out of the box” — you’d have to write it yourself.
Exactly how you architect it would depend on what you’re trying to do.

Regards,
Stephen
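
A minimal sketch of the custom CacheStore suggested earlier in this thread,
assuming the AWS SDK for Java v1 on the classpath; the bucket name and the
String key/value types are hypothetical:

    import javax.cache.Cache;

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import org.apache.ignite.cache.store.CacheStoreAdapter;

    public class S3CacheStore extends CacheStoreAdapter<String, String> {
        // Hypothetical bucket holding one object per cache key.
        private static final String BUCKET = "my-data-lake-bucket";

        private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Read-through: invoked on a cache miss.
        @Override public String load(String key) {
            return s3.getObjectAsString(BUCKET, key);
        }

        // Write-through: invoked when an entry is written to the cache.
        @Override public void write(
            Cache.Entry<? extends String, ? extends String> e) {
            s3.putObject(BUCKET, e.getKey(), e.getValue());
        }

        @Override public void delete(Object key) {
            s3.deleteObject(BUCKET, key.toString());
        }
    }

The store would be plugged in via CacheConfiguration.setCacheStoreFactory()
with setReadThrough(true)/setWriteThrough(true), per the 3rd-party-store
docs linked earlier in the thread. Note this is read/write-through only: it
does not watch S3 for new files, which is the part you would still have to
build yourself.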
