Re: Does ignite provide a Comparator for Sort?

2018-11-01 Thread Dmitriy Setrakyan
Yes, instead of utilizing custom comparators, just use the "order by" clause
in your SQL query.

D.

On Thu, Nov 1, 2018 at 12:43 AM Mikael  wrote:

> Hi!
>
> I don't think so but can't you use an index and an SQL query instead ?
>
> Mikael
>
> On 2018-11-01 at 06:33, Ignite Enthusiast wrote:
>
> I am new to Apache Ignite.  I have used Hazelcast extensively and one of
> the features I really liked about it is the Comparator that it provides on
> the Cache Entries.
>
> Does Apache Ignite have one readily available? If not, is it in the works?
>
>
>
>
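The "order by" approach suggested above can be sketched in plain SQL; the table and columns here are made up for illustration, not from the thread:

```sql
-- Sort server-side with ORDER BY instead of a client-side comparator.
-- Ignite executes the sort on each node and merges the ordered streams.
SELECT id, name, salary
FROM Person
ORDER BY salary DESC, name ASC
LIMIT 100;
```

If the sorted column is indexed, the sort can often be served straight from the index rather than materializing and ordering the whole result set.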


Re: Ignite SQL Queries not getting all data back in ignite 2.4 and 2.6

2018-08-16 Thread Dmitriy Setrakyan
I also want to point out that Ignite has nightly builds, so you can try
those instead of doing your own build.

https://ignite.apache.org/download.cgi#nightly-builds

D.

On Thu, Aug 16, 2018 at 1:38 AM, Vladimir Ozerov 
wrote:

> Hi,
>
> There have been a lot of changes in the product since 2.3 which may affect it.
> The most important change was baseline topology, as already mentioned.
> I am aware of a case where an incorrect result might be returned [1], which is
> already fixed in *master*. Not sure if this is the same issue, but you
> may try to build Ignite from recent master and check whether the problem is
> still there.
>
> Is it possible to create isolated reproducer for this issue?
>
> [1] https://issues.apache.org/jira/browse/IGNITE-8900
>
> On Wed, Aug 15, 2018 at 11:34 PM bintisepaha 
> wrote:
>
>> Thanks for getting back, but we do not use Ignite's native persistence.
>> Did anything else change from 2.3 to 2.4 that could cause this with SQL queries?
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Ignite running on JDK10?

2018-08-15 Thread Dmitriy Setrakyan
I believe JDK9 is supported, but you need to add certain JVM parameters.

Vladimir, can you comment?

D.

On Fri, Aug 10, 2018, 07:31 KJQ  wrote:

> As a note, I downgraded all of the Docker containers to use JDK 9 (9.0.4)
> and
> I still get the same problem running the SpringBoot 2 application.  Running
> in my IDE a test case works perfectly fine.
>
> *Caused by: java.lang.RuntimeException: jdk.internal.misc.JavaNioAccess
> class is unavailable.*
>
> *Caused by: java.lang.IllegalAccessException: class
> org.apache.ignite.internal.util.GridUnsafe cannot access class
> jdk.internal.misc.SharedSecrets (in module java.base) because module
> java.base does not export jdk.internal.misc to unnamed module @78a89eea*
>
>
>
> -
> KJQ
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
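The IllegalAccessException quoted above is the Java 9 module system blocking Ignite's GridUnsafe from reaching jdk.internal.misc. A sketch of the JVM flags that typically resolve this on JDK 9+; treat the exact flag list and the jar name as assumptions and check the Ignite documentation for your version:

```shell
# Open the internal JDK packages Ignite needs on Java 9+.
# my-ignite-app.jar is a placeholder for your application.
java \
  --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \
  --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \
  --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \
  --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED \
  --illegal-access=permit \
  -jar my-ignite-app.jar
```

When launching via ignite.sh, the same flags can be passed through JVM_OPTS instead.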


Re: Toad Connection to Ignite

2018-08-03 Thread Dmitriy Setrakyan
Is it not possible to use TOAD with the standard Ignite JDBC driver? I am
not aware of any issues with that.

On Thu, Aug 2, 2018 at 9:42 AM, ApacheUser 
wrote:

> Thanks Alex,
>
> We have a large pool of developers who use TOAD; we just thought of making
> TOAD connect to Ignite to give them a similar experience. We are using DBeaver
> right now.
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: The Apache Ignite Book

2018-08-03 Thread Dmitriy Setrakyan
Thanks, Shamim. I will give it a read.

On Wed, Aug 1, 2018 at 12:26 AM, srecon  wrote:

> Dear, Users.
>   Yesterday the first portion of our new title, The Apache Ignite Book, was
> published and is available at https://leanpub.com/ignitebook . The full
> table of contents and a sample chapter are also available through Leanpub.
>  The title is an agile-published book, and we will continue to update it to
> cover Apache Ignite version 2.x. Feel free to ask any questions and do not
> hesitate to make comments or suggestions.
>
> Best regards
>   Shamim Ahmed.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cache not rebalanced after one node is restarted

2018-06-11 Thread Dmitriy Setrakyan
Ignite 2.5 has been released and can be downloaded from the Ignite website:
https://ignite.apache.org/download.html

D.

On Wed, May 30, 2018 at 6:34 AM, Stanislav Lukyanov 
wrote:

> Most likely you've run into this bug:
> https://issues.apache.org/jira/browse/IGNITE-8210
>
> It was fixed in 2.5, try updating to that version.
>
> Thanks,
> Stan
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Large durable caches

2018-06-11 Thread Dmitriy Setrakyan
Ignite 2.5 has been released and can be downloaded from the Ignite website:
https://ignite.apache.org/download.html

D.

On Wed, May 30, 2018 at 6:33 AM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Ray,
>
> Which Ignite version are you running? You may be affected by [1], which
> gets worse the larger the data set is. Please wait for the Ignite 2.5
> release, which will be available shortly.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-7638
>
> On Fri, May 18, 2018 at 5:44, Ray wrote:
>
>> I ran into this issue as well.
>> I'm running tests on a six-node Ignite cluster; the data load gets stuck
>> after 1 billion records are ingested.
>> Can someone take a look at this issue please?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Fanout related query

2018-04-01 Thread Dmitriy Setrakyan
On Tue, Mar 27, 2018 at 10:53 PM, Deepesh Malviya 
wrote:

I notice that affinity solution is still going to update millions of items
> but the updates are local instead of cluster-wide. Please let me know if my
> interpretation is wrong.
>

Yes.


> I see Ignite also support Outer joins. Does keeping product cache and item
> cache separate and join at get query time will be efficient?
>

Yes, but only if you use affinity collocation between product and item
caches.
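What affinity collocation between the two caches could look like in DDL; the table and column names are illustrative, and the WITH parameter name follows the Ignite DDL docs, so verify it against your version:

```sql
-- Product's affinity defaults to its primary key (id).
CREATE TABLE Product (
  id BIGINT PRIMARY KEY,
  name VARCHAR
) WITH "template=partitioned";

-- Declaring productId as the affinity key places each Item row on the
-- same node as its matching Product row.
CREATE TABLE Item (
  id BIGINT,
  productId BIGINT,
  price DECIMAL,
  PRIMARY KEY (id, productId)
) WITH "template=partitioned,affinity_key=productId";

-- With collocated data this join executes node-locally, avoiding
-- cross-node data movement at query time.
SELECT p.name, i.price
FROM Product p
JOIN Item i ON i.productId = p.id;
```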


Re: Re:Re: Re:Issue about ignite-sql limit of table quantity

2018-04-01 Thread Dmitriy Setrakyan
Hi Fvyaba,

In order to avoid memory overhead per table, you should create all tables
as part of the same cache group:
https://apacheignite.readme.io/docs/cache-groups

D.
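The cache-group suggestion above can be sketched in DDL; the group name is arbitrary, and the WITH parameter name follows the Ignite DDL docs:

```sql
-- Tables created with the same cache_group share internal per-cache data
-- structures, which reduces the per-table memory overhead.
CREATE TABLE TBL_1 (id BIGINT PRIMARY KEY, uid VARCHAR)
  WITH "cache_group=tbl_group";

CREATE TABLE TBL_2 (id BIGINT PRIMARY KEY, uid VARCHAR)
  WITH "cache_group=tbl_group";
```

The trade-off is that grouped caches share partition files and eviction behavior, so grouping is best for many small, similarly-shaped tables.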

On Mon, Mar 26, 2018 at 7:26 AM, aealexsandrov 
wrote:

> Hi Fvyaba,
>
> I investigated your example. In your code you create a new cache every time
> you create a new table. Every new cache has some memory overhead. The
> following code can help you measure the average allocated memory:
>
> try (IgniteCache<Object, Object> cache = ignite.getOrCreateCache(defaultCacheCfg)) {
>     for (int i = 1; i < 100; i++) {
>         cache.query(new SqlFieldsQuery(String.format(
>             "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id))", i)));
>         System.out.println("Count " + i + " -");
>         for (DataRegionMetrics metrics : ignite.dataRegionMetrics()) {
>             System.out.println(">>> Memory Region Name: " + metrics.getName());
>             System.out.println(">>> Allocation Rate: " + metrics.getAllocationRate());
>             System.out.println(">>> Allocated Size Full: " + metrics.getTotalAllocatedSize());
>             System.out.println(">>> Allocated Size avg: " + metrics.getTotalAllocatedSize() / i);
>             System.out.println(">>> Physical Memory Size: " + metrics.getPhysicalMemorySize());
>         }
>     }
> }
>
> On my machine with default settings I got the following:
>
> >>> Memory Region Name: Default_Region
> >>> Allocation Rate: 3419.9666
> >>> Allocated Size Full: 840491008
> >>> Allocated Size avg: 8489808
> >>> Physical Memory Size: 840491008
>
> So it's about 8 MB per cache (so if you have 3.2 GB, you can create about
> 400 caches). I am not sure whether that is acceptable for you, but you can do the
> following to avoid org.apache.ignite.IgniteCheckedException: Out of memory in data region:
>
> 1) Increase the max value of available off-heap memory:
>
> <property name="dataStorageConfiguration">
>     <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>         <property name="defaultDataRegionConfiguration">
>             <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>                 <property name="maxSize" value="..."/> <!-- HERE -->
>             </bean>
>         </property>
>     </bean>
> </property>
>
> 2) Use persistence (or swap space):
>
> <property name="dataStorageConfiguration">
>     <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>         <property name="defaultDataRegionConfiguration">
>             <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>                 <property name="persistenceEnabled" value="true"/> <!-- THIS ONE -->
>             </bean>
>         </property>
>     </bean>
> </property>
>
> You can read more about it here:
>
> https://apacheignite.readme.io/docs/distributed-persistent-store
> https://apacheignite.readme.io/v1.0/docs/off-heap-memory
>
> Please try the following code:
>
> 1) Add this to your config:
>
> <property name="dataStorageConfiguration">
>     <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>         <property name="dataRegionConfigurations">
>             <list>
>                 <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>                     <property name="name" value="Default_Region"/>
>                     <property name="persistenceEnabled" value="true"/>
>                 </bean>
>             </list>
>         </property>
>     </bean>
> </property>
>
> 2) Run the following:
>
> public class Example {
>     public static void main(String[] args) throws IgniteException {
>         try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
>             ignite.cluster().active(true);
>
>             CacheConfiguration<Object, Object> defaultCacheCfg =
>                 new CacheConfiguration<>("Default_cache").setSqlSchema("PUBLIC");
>             defaultCacheCfg.setDataRegionName("Default_Region");
>
>             try (IgniteCache<Object, Object> cache = ignite.getOrCreateCache(defaultCacheCfg)) {
>                 for (int i = 1; i < 1000; i++) {
>                     // remove the old table just in case
>                     cache.query(new SqlFieldsQuery(String.format("DROP TABLE IF EXISTS TBL_%s", i)));
>                     // create a new table
>                     cache.query(new SqlFieldsQuery(String.format(
>                         "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id))", i)));
>                     System.out.println("Count " + i + " -");
>                     for (DataRegionMetrics metrics : ignite.dataRegionMetrics()) {
>                         System.out.println(">>> Memory Region Name: " + metrics.getName());
>                         System.out.println(">>> Allocation Rate: " + metrics.getAllocationRate());
>                         System.out.println(">>> Allocated Size Full: " + metrics.getTotalAllocatedSize());

Re: Build a cluster with auth

2018-04-01 Thread Dmitriy Setrakyan
Hi,

Ignite is adding basic authentication capability for thin clients in the
upcoming 2.5 release - you will be able to provide user name and password
to connect to the cluster:

https://issues.apache.org/jira/browse/IGNITE-7436

You may already try it in the nightly builds:
https://ci.ignite.apache.org/viewLog.html?buildId=lastSuccessful&buildTypeId=Releases_NightlyRelease_RunApacheIgniteNightlyRelease&tab=artifacts&guest=1

D.
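Once 2.5 is out, connecting a thin client with credentials should look roughly like the sketch below. This is against the Java thin-client API; the address, credentials, and exact setter names are assumptions to verify against the final 2.5 release notes:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientAuthSketch {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800")  // default thin-client port
            .setUserName("ignite")            // credentials validated by the cluster
            .setUserPassword("ignite");       // when authentication is enabled

        // startClient opens a thin-client connection (no cluster topology join).
        try (IgniteClient client = Ignition.startClient(cfg)) {
            System.out.println(client.cacheNames());
        }
    }
}
```

Note that authentication also requires persistence to be enabled on the cluster, since user accounts are stored there.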



On Sun, Mar 25, 2018 at 11:05 PM, Green <15151803...@163.com> wrote:

> Hi
>   Can you show your solution? I am very upset about this.
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite nightly release builds

2018-03-24 Thread Dmitriy Setrakyan
Thanks, Denis. It should be added to the download page; I updated the
ticket.

On Sat, Mar 24, 2018 at 5:48 AM, Denis Magda  wrote:

> Created a JIRA ticket for that:
> https://issues.apache.org/jira/browse/IGNITE-8040
>
> --
> Denis
>
> On Fri, Mar 23, 2018 at 1:27 AM, Dmitriy Setrakyan 
> wrote:
>
> > Awesome! Finally instead of asking our users to build from the master, we
> > can provide a link to the nightly build instead.
> >
> > Denis, can you please add these links to the website?
> >
> > D.
> >
> > On Thu, Mar 22, 2018 at 1:27 PM, Petr Ivanov 
> wrote:
> >
> >> It works, thanks!
> >>
> >>
> >> Here are the updated links for Artifacts and Changes, respectively, with silent
> >> guest login (can be added to bookmarks):
> >> * https://ci.ignite.apache.org/viewLog.html?buildId=lastSuccessful&buildTypeId=Releases_NightlyRelease_RunApacheIgniteNightlyRelease&tab=artifacts&guest=1
> >> * https://ci.ignite.apache.org/viewLog.html?buildId=lastSuccessful&buildTypeId=Releases_NightlyRelease_RunApacheIgniteNightlyRelease&tab=buildChangesDiv&guest=1
> >>
> >>
> >>
> >> > On 22 Mar 2018, at 13:06, Vitaliy Osipov 
> wrote:
> >> >
> >> > 
> >>
> >>
> >
>


Re: Apache Ignite nightly release builds

2018-03-23 Thread Dmitriy Setrakyan
Awesome! Finally, instead of asking our users to build from master, we
can provide a link to the nightly build.

Denis, can you please add these links to the website?

D.

On Thu, Mar 22, 2018 at 1:27 PM, Petr Ivanov  wrote:

> It works, thanks!
>
>
> Here are the updated links for Artifacts and Changes, respectively, with silent
> guest login (can be added to bookmarks):
> * https://ci.ignite.apache.org/viewLog.html?buildId=lastSuccessful&buildTypeId=Releases_NightlyRelease_RunApacheIgniteNightlyRelease&tab=artifacts&guest=1
> * https://ci.ignite.apache.org/viewLog.html?buildId=lastSuccessful&buildTypeId=Releases_NightlyRelease_RunApacheIgniteNightlyRelease&tab=buildChangesDiv&guest=1
>
>
>
> > On 22 Mar 2018, at 13:06, Vitaliy Osipov  wrote:
> >
> > 
>
>


Re: Apache Ignite nightly release builds

2018-03-22 Thread Dmitriy Setrakyan
Why do we need to ask people to log in to get a nightly build? Is there any
way to open it to the public without a login?

On Wed, Mar 21, 2018 at 10:45 PM, Dmitry Pavlov 
wrote:

> Hi Raymond,
>
> You could sign up using a valid email address. Please write to the @dev list
> if the link is still not available.
>
> Sincerely,
> Dmitriy Pavlov
>
> On Wed, Mar 21, 2018 at 22:34, Raymond Wilson wrote:
>
>> The link to build artifacts requires a Team City login. Is there a public
>> access location?
>>
>>
>>
>> *From:* Petr Ivanov [mailto:mr.wei...@gmail.com]
>> *Sent:* Wednesday, March 21, 2018 10:59 PM
>> *To:* dev ; user@ignite.apache.org
>> *Subject:* Re: Apache Ignite nightly release builds
>>
>>
>>
>> OK, I guess I can present renewed Apache Ignite Nightly Releases.
>>
>>
>>
>> Link to artifacts of the latest successful build: https://ci.ignite.apache.org/viewLog.html?buildId=lastSuccessful&buildTypeId=Releases_NightlyRelease_RunApacheIgniteNightlyRelease&tab=artifacts
>>
>> REST API link to the same (for programmatic access): https://ci.ignite.apache.org/app/rest/builds/buildType:(id:Releases_NightlyRelease_RunApacheIgniteNightlyRelease),status:SUCCESS/artifacts
>>
>> Link to changes of the latest successful build: https://ci.ignite.apache.org/viewLog.html?buildId=lastSuccessful&buildTypeId=Releases_NightlyRelease_RunApacheIgniteNightlyRelease&tab=buildChangesDiv
>>
>>
>>
>> Some disclaimers:
>>
>> * currently TeamCity will be used as storage for artifacts;
>>
>> * artifacts will be stored for 2 weeks;
>>
>> * nightly release builds are for DEVELOPMENT or TEST purposes only — use
>> at your own risk (especially in a production environment);
>>
>> * build configuration is still more or less experimental, final tuning
>> will be introduced after some usage.
>>
>>
>>
>> Enjoy!
>>
>>
>>
>>
>>
>> As always — questions and feedback are more than welcome.
>>
>>
>>
>>
>>
>>
>>
>> On 20 Mar 2018, at 15:03, Petr Ivanov  wrote:
>>
>>
>>
>> Not yet.
>> The project is still under development; I will pass the build to the community after
>> settling the corresponding permissions and receiving a QA report.
>>
>> Also — it is time to raise the matter of adding a nightly build link to our
>> documentation (somewhere here [1]).
>> Pavel, could you help?
>>
>>
>> [1] https://ignite.apache.org/download.cgi
>>
>>
>>
>> On 20 Mar 2018, at 13:58, Dmitry Pavlov  wrote:
>>
>> Thank you, Petr,
>>
>> could you share link to run config?
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>> On Tue, Mar 20, 2018 at 12:17, Petr Ivanov wrote:
>>
>>
>> Prepared a prototype build and passed it for some preliminary testing.
>> Currently it will provide the following artifacts:
>> * sources
>> * fabric binary
>> * hadoop binary
>> * maven staging
>> * nuget staging
>>
>> Will keep community informed about progress.
>>
>>
>>
>>
>> On 14 Mar 2018, at 13:13, vveider  wrote:
>>
>> Prepared the corresponding task [1]; will start preparing the build in the
>> near future.
>>
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-7945
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>>
>>
>>
>>
>>
>>
>>
>


Re: AffinityKey Configuration in order to achieve multiple joins across caches

2018-03-21 Thread Dmitriy Setrakyan
On Thu, Mar 15, 2018 at 9:51 PM, StartCoding  wrote:

> Hi Mike,
>
> Thanks for your quick response.
>
> I am afraid denormalizing will not work for me, because I have just given a
> simple example; there are 16 tables which in that case would need to be joined
> into a single entity. Replication was an approach I thought about, and we have
> already considered it for the smaller tables. But there are 7 huge tables
> with 6M+ records each, which would degrade performance if we used
> replicated caches.
>

Saji, in this case, when you cannot select one affinity key for all your
tables, you can try the following:

   1. Think about having multiple caches for the same data with different
   affinity keys.
   2. Instead of doing JOINs all the time, for cases where you cannot
   use the affinity key for collocation, you can try using the distributed compute
   API and process data locally within caches on individual nodes. You can
   then aggregate your results on the client side manually.
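The second suggestion, processing data locally on each node and aggregating on the client, could be sketched as below. The cache name "items" and the filtering logic are hypothetical, not from the thread:

```java
import java.util.Collection;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.lang.IgniteCallable;

public class LocalAggregationSketch {
    static long countMatching(Ignite ignite) {
        // Broadcast a callable to every node; each node scans only its
        // own local partitions of the (hypothetical) "items" cache.
        Collection<Long> partials = ignite.compute().broadcast((IgniteCallable<Long>) () -> {
            Ignite local = Ignition.localIgnite(); // node-local Ignite instance
            IgniteCache<Long, String> cache = local.cache("items");
            long count = 0;
            for (Cache.Entry<Long, String> e :
                    cache.query(new ScanQuery<Long, String>().setLocal(true))) {
                if (e.getValue().contains("x"))
                    count++;
            }
            return count;
        });
        // Aggregate the per-node partial results on the client side.
        return partials.stream().mapToLong(Long::longValue).sum();
    }
}
```

This pattern moves the computation to the data instead of moving data between nodes, which is the point of the advice above.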


Re: Affinity Key column to be always part of the Primary Key

2018-03-20 Thread Dmitriy Setrakyan
On Tue, Mar 20, 2018 at 2:09 PM, Vladimir Ozerov 
wrote:

> Internally, Ignite is a key-value storage. It uses the key to derive the
> partition it belongs to. By default the whole key is used. Alternatively, you can use
> the @AffinityKey annotation in the cache API or the "affinityKey" option in CREATE
> TABLE to specify *part of the key* to be used for affinity calculation.
> The affinity column cannot belong to the value, because in that case a single
> key-value pair could migrate between nodes during updates, and
> IgniteCache.get(K) would not be able to locate the key in the cluster.
>

Vladimir, while it makes sense that the key must be composed of the ID and
the affinity key, I still do not understand why we require that the user declare
them both as PRIMARY KEY. Why do you need to enforce that explicitly? In my
view, this could be done automatically if you see that the table has both
PRIMARY KEY and AFFINITY KEY declared.
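For reference, the cache-API way of declaring the affinity part of the key that Vladimir mentions might look like this; the class and field names are illustrative:

```java
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

// Composite cache key: the whole object is the identity, but only
// productId is hashed to pick the partition (and therefore the node).
public class ItemKey {
    private final long id;

    @AffinityKeyMapped  // this field alone drives partition placement
    private final long productId;

    public ItemKey(long id, long productId) {
        this.id = id;
        this.productId = productId;
    }
}
```

The DDL equivalent is a composite PRIMARY KEY (id, productId) combined with the affinityKey option naming productId, which matches the rule under discussion: the affinity column must be part of the key, never the value.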


Re: Azul Zing JVM with Apache Ignite

2018-03-19 Thread Dmitriy Setrakyan
The main advantage of Azul is support of large on-heap memory space without
garbage collection pauses. In case of Ignite, the primary storage is
off-heap, so garbage collection should not be an issue regardless.

However, if you still need to use the Azul JVM, I would give it a shot. The
only potential issue that I can think of is Ignite's use of the Unsafe class.
If Azul does not support it, you will see that right away.

D.

On Mon, Mar 19, 2018 at 2:17 PM, piyush  wrote:

> Is anybody using Azul's Zing JVM with Ignite ?
> How was the experience ? Does it help in some way as they claim ?
>
> https://www.azul.com/products/zing/
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Fwd: Large durable caches

2018-03-08 Thread Dmitriy Setrakyan
Hi Lawrence,

I believe Alexey Goncharuk was working on improving this scenario. Alexey,
can you provide some of your findings here?

D.

-- Forwarded message --
From: lawrencefinn 
Date: Mon, Mar 5, 2018 at 1:54 PM
Subject: Re: Large durable caches
To: user@ignite.apache.org


BUMP.  Can anyone verify this? If Ignite cannot scale in this manner, that is
fine; I'd just want to know whether what I am seeing makes sense.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Business Intelligence connection to Ignite

2018-03-08 Thread Dmitriy Setrakyan
Hi Steve,

The integration with Tableau was tested and verified:
https://apacheignite-sql.readme.io/docs/tableau

D.

On Mon, Mar 5, 2018 at 12:48 AM, steve.hostettler <
steve.hostett...@gmail.com> wrote:

> Hello,
>
> Is there any best practice/recommendation on how to connect 3rd-party
> business intelligence tools to Ignite?
> For instance, is it possible to connect a BO universe to Ignite?
>
> Thanks for your help
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SSD writes get slower and slower

2018-03-08 Thread Dmitriy Setrakyan
I also think that switching to LOG_ONLY mode should be good enough.
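Switching the WAL mode is a one-line change on the storage configuration. A minimal sketch; the surrounding configuration is omitted and would come from your own setup:

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

public class WalModeSketch {
    static IgniteConfiguration configure() {
        DataStorageConfiguration ds = new DataStorageConfiguration();
        // LOG_ONLY flushes the WAL on checkpoint rather than fsyncing every
        // commit: much faster bulk loading, and still survives a process
        // crash (though not an OS/power failure, unlike the DEFAULT mode).
        ds.setWalMode(WALMode.LOG_ONLY);
        return new IgniteConfiguration().setDataStorageConfiguration(ds);
    }
}
```

A common pattern is to load data in LOG_ONLY and only move to a stricter mode if the durability requirements truly demand it.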

On Mon, Feb 26, 2018 at 6:58 AM, dmitriy.govorukhin <
dmitriy.govoruk...@gmail.com> wrote:

> Hi,
>
> I guess the problem is in "setSwapPath(...)"; it is not the path for
> persistence.
>
> Try to do something like this:
>
> String storePath = "/data/ignite2/swap/";
>
> cfg.setDataStorageConfiguration(
> new DataStorageConfiguration()
> .setWriteThrottlingEnabled(true)
> .setPageSize(4 * 1024)
> .setStoragePath(storePath)
> .setWalPath(storePath + "/wal")
> .setWalArchivePath(storePath + "/archive")
> .setDefaultDataRegionConfiguration(
> new DataRegionConfiguration()
> .setPersistenceEnabled(true)
> .setInitialSize(2L * 1024 * 1024 * 1024)
> .setMaxSize(10L * 1024 * 1024 * 1024)
> )
> );
>
> Are you sure you need such strong guarantees (WALMode.DEFAULT)?
> The WALMode.DEFAULT mode provides the strictest protection, but it
> is the slowest one.
> Use LOG_ONLY mode for data loading. For more info about WAL modes, see:
> https://apacheignite.readme.io/docs/write-ahead-log
>
>
>
>
> On 25.02.2018 12:14, VT wrote:
>
>> Hi Stan,
>>
>> The setting is very simple and straightforward, as follows.
>>
>> DataStorageConfiguration dsCfg = new DataStorageConfiguration();
>> dsCfg.setWalMode(WALMode.DEFAULT);
>> dsCfg.setPageSize(4 * 1024);
>> dsCfg.setWriteThrottlingEnabled(true);
>>
>> ..
>>
>> DataRegionConfiguration regionCfg1 = new DataRegionConfiguration();
>> regionCfg1.setName("region_1");
>> regionCfg1.setInitialSize(2L * 1024 * 1024 * 1024);
>> regionCfg1.setMaxSize(10L * 1024 * 1024 * 1024);
>> regionCfg1.setCheckpointPageBufferSize(2L * 1024 * 1024 * 1024);
>> regionCfg1.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
>> regionCfg1.setPersistenceEnabled(true);
>> regionCfg1.setSwapPath("/data/ignite2/swap/"); //SSD
>>
>> ...
>>
>> cacheCfg.setDataRegionName("region_1");
>> cacheCfg.setName(CacheName);
>> cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizati
>> onMode.FULL_ASYNC);
>> cacheCfg.setCacheMode(CacheMode.PARTITIONED);
>> cacheCfg.setCopyOnRead(false);
>>
>> ...
>>
>> I used DataStreammer very simply, like the following.
>>
>> IgniteDataStreamer<Object, Object> stmr = ignite.dataStreamer(CacheName);
>> stmr.addData(key, value);
>>
>> I have tried multiple settings such as perNodeBufferSize,
>> perNodeParallelOperations. Still very slow. Please help. Thanks!
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


Re: Using 3rd party DB together with native persistence (WAS: GettingInvalid state exception when Persistance is enabled.)

2018-03-08 Thread Dmitriy Setrakyan
To my knowledge, the 2.4 release should have support for both persistence
mechanisms, native and 3rd party, working together. The release is out for
a vote already:
http://apache-ignite-developers.2346864.n4.nabble.com/VOTE-Apache-Ignite-2-4-0-RC1-td27687.html

D.

On Mon, Feb 26, 2018 at 2:43 AM, Humphrey  wrote:

> I think he means that when *write-through* and *read-through* modes are enabled
> on the 3rd-party store, data might be written to or read from only one of those
> persistence storages (not both).
>
> So if you save data "A", it might be stored in the 3rd-party persistence and
> not in the native one. When data "A" is not in the cache, Ignite might try to
> look it up in the native persistence, where it is not available. The same could
> happen with updates: if "A" was updated to "B", it could have changed in the
> 3rd-party store, but when requesting the data again you might in one case get "A"
> and in another case "B", depending on which store it reads the data from.
>
> At least that is what I understand from his concern about consistency between
> both stores.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
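For context, the write-through/read-through modes discussed above are enabled per cache. A minimal sketch using the JDBC blob store that ships with Ignite; the cache name and connection URL are placeholders:

```java
import org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;

public class ThirdPartyStoreSketch {
    static CacheConfiguration<Long, String> configure() {
        // A 3rd-party store backed by a JDBC database (URL is a placeholder).
        CacheJdbcBlobStoreFactory<Long, String> storeFactory =
            new CacheJdbcBlobStoreFactory<>();
        storeFactory.setConnectionUrl("jdbc:h2:mem:test");

        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("myCache");
        cfg.setCacheStoreFactory(storeFactory);
        cfg.setReadThrough(true);   // cache misses are loaded from the store
        cfg.setWriteThrough(true);  // cache updates are propagated to the store
        return cfg;
    }
}
```

The consistency concern in the thread arises exactly when a cache like this is combined with native persistence, since each write then has two durable destinations.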


Re: Ignite 2.4 status

2018-03-08 Thread Dmitriy Setrakyan
Paolo, the release is out for a vote:
http://apache-ignite-developers.2346864.n4.nabble.
com/VOTE-Apache-Ignite-2-4-0-RC1-td27687.html

D.

On Tue, Feb 20, 2018 at 11:55 AM, Paolo Di Tommaso <
paolo.ditomm...@gmail.com> wrote:

> Hi folks,
>
> I was wondering what's the status of Ignite 2.4. Is there any planned
> release date?
>
> The need to support java 9 is becoming a priority.
>
>
> Cheers,
> Paolo
>
>


Re: And again... Failed to get page IO instance (page content is corrupted)

2018-03-08 Thread Dmitriy Setrakyan
Hi Sergey,

The 2.4 release is about to be voted on. You can use RC1 in the meantime:
http://apache-ignite-developers.2346864.n4.nabble.com/VOTE-Apache-Ignite-2-4-0-RC1-td27687.html

D.

On Mon, Feb 19, 2018 at 6:43 AM, Mikhail 
wrote:

> Hi Sergey,
>
> The release of 2.4 should be soon, in a week or two; however, there's no
> firm schedule for Apache releases.
>
> Could you please share a reproducer for the issue? Perhaps you can share a
> storage on which the issue can be reproduced?
>
> Thanks,
> Mike.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: slow query performance against berkley db

2018-02-06 Thread Dmitriy Setrakyan
Hi Rajesh,

Please allow the community some time to test your code.

As far as testing single node vs. distributed, when you have more than one
node, Ignite will split your data set evenly across multiple nodes. This
means that when running the query, it will be executed on each node on
smaller data sets in parallel, which should provide better performance. If
your query does some level of scanning, then the more nodes you add, the
faster it will get.

D.

On Tue, Feb 6, 2018 at 5:02 PM, Rajesh Kishore 
wrote:

> Hi All
> Please help me with some pointers; this is a deciding factor for us in
> further evaluating Ignite. Somehow we are not convinced: with just 0.1 M
> records it is not as responsive as Berkeley DB.
> Let me know the strategy to adopt, and point out where I am going wrong.
>
> Thanks
> Rajesh
>
> On 6 Feb 2018 6:11 p.m., "Rajesh Kishore"  wrote:
>
>> Further to this,
>>
>> I am reframing what I have; please tell me whether my approach is correct.
>>
>> As of now, I am using only one node as a local cache with native persistence
>> on the file system. The system has a small number of records: around *0.1 M* in
>> the main table and 2 M in the supporting table.
>>
>> Using SQL to retrieve the records with a join; the SQL used is
>> ---
>> final String query1 = "SELECT "
>>     + "f.entryID,f.attrName,f.attrValue, "
>>     + "f.attrsType "
>>     + "FROM "
>>     + "( select st.entryID,st.attrName,st.attrValue, st.attrsType from "
>>     + "(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectClass"
>>     + " at1 WHERE "
>>     + " at1.attrValue= ? )  t"
>>     + " INNER JOIN \"Ignite_DSAttributeStore\".IGNITE_DSATTRIBUTESTORE st ON st.entryID = t.entryID "
>>     + " WHERE st.attrKind IN ('u','o') "
>>     + " ) f "
>>     + " INNER JOIN "
>>     + " ( "
>>     + " SELECT entryID from \"dn\".Ignite_DN where parentDN like ? "
>>     + " )  dnt"
>>     + " ON f.entryID = dnt.entryID"
>>     + " order by f.entryID";
>>
>> String queryWithType = query1;
>> QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery(queryWithType)
>>     .setEnforceJoinOrder(true).setArgs("person", "dc=ignite,%"));
>> System.out.println("SUBTREE " + cursor.getAll());
>>
>>
>> ---
>>
>> The corresponding EXPLAIN plan is
>> 
>>
>> [[SELECT
>> F.ENTRYID,
>> F.ATTRNAME,
>> F.ATTRVALUE,
>> F.ATTRSTYPE
>> FROM (
>> SELECT
>> ST.ENTRYID,
>> ST.ATTRNAME,
>> ST.ATTRVALUE,
>> ST.ATTRSTYPE
>> FROM (
>> SELECT
>> AT1.ENTRYID
>> FROM "objectclass".IGNITE_OBJECTCLASS AT1
>> WHERE AT1.ATTRVALUE = ?1
>> ) T
>> INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST
>> ON 1=1
>> WHERE (ST.ATTRKIND IN('u', 'o'))
>> AND (ST.ENTRYID = T.ENTRYID)
>> ) F
>> /* SELECT
>> ST.ENTRYID,
>> ST.ATTRNAME,
>> ST.ATTRVALUE,
>> ST.ATTRSTYPE
>> FROM (
>> SELECT
>> AT1.ENTRYID
>> FROM "objectclass".IGNITE_OBJECTCLASS AT1
>> WHERE AT1.ATTRVALUE = ?1
>> ) T
>> /++ SELECT
>> AT1.ENTRYID
>> FROM "objectclass".IGNITE_OBJECTCLASS AT1
>> /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE =
>> ?1 ++/
>> WHERE AT1.ATTRVALUE = ?1
>>  ++/
>> INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST
>> /++ "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX:
>> ENTRYID = T.ENTRYID ++/
>> ON 1=1
>> WHERE (ST.ATTRKIND IN('u', 'o'))
>> AND (ST.ENTRYID = T.ENTRYID)
>>  */
>> INNER JOIN (
>> SELECT
>> ENTRYID
>> FROM "dn".IGNITE_DN
>> WHERE PARENTDN LIKE ?2
>> ) DNT
>> /* SELECT
>> ENTRYID
>> FROM "dn".IGNITE_DN
>> /++ "dn".EP_DN_IDX: ENTRYID IS ?3 ++/
>> WHERE (ENTRYID IS ?3)
>> AND (PARENTDN LIKE ?2): ENTRYID = F.ENTRYID
>> AND ENTRYID = F.ENTRYID
>>  */
>> ON 1=1
>> WHERE F.ENTRYID = DNT.ENTRYID
>> ORDER BY 1]]
>> -
>>
>> The above query takes *24 sec* to retrieve the records, which we feel
>> defeats the purpose; our application's existing Berkeley DB can retrieve them
>> faster.
>>
>> The questions are:
>> a) I have attached my application models & client code; am I doing
>> something wrong in defining the models and cache configuration? Right now I am
>> not considering a distributed setup, as I have a small number of records. What is
>> recommended?
>> b) What is the memory requirement of Ignite/H2? Is a 16 GB machine not
>> good enough for the records I have as of now?
>> c) does

Re: Subscribe

2018-02-02 Thread Dmitriy Setrakyan
On Fri, Feb 2, 2018 at 2:59 PM, Luqman Ahmad  wrote:

> Please subscribe me.
>

If your message has been delivered to the user@ list, you must already be
subscribed.

D.


Re: Issues with sub query IN clause

2018-02-01 Thread Dmitriy Setrakyan
Rajesh, can you please show your query here together with execution plan?

D.

On Thu, Feb 1, 2018 at 8:36 AM, Rajesh Kishore 
wrote:

> Hi Andrey
> Thanks for your response.
> I am using native Ignite persistence, saving data locally; as of now I
> don't have a distributed cache, only one node.
>
> By looking at the doc, it does not look like affinity key is applicable
> here.
>
> Pls suggest.
>
> Thanks Rajesh
>
> On 1 Feb 2018 6:27 p.m., "Andrey Mashenkov" 
> wrote:
>
>> Hi Rajesh,
>>
>>
>> Possibly, your data is not collocated and the subquery returns fewer results
>> because it executes locally.
>> Try to rewrite the IN into a JOIN and check whether the query with
>> query#setDistributedJoins(true) returns the expected result.
>>
>> It is recommended to:
>> 1. Replace IN with JOIN, due to performance issues [1].
>> 2. Use data collocation [2] if possible, rather than turning on
>> distributed joins.
>>
>> [1] https://apacheignite-sql.readme.io/docs/performance-and-
>> debugging#section-sql-performance-and-usability-considerations
>> [2] https://apacheignite.readme.io/docs/affinity-collocation
>> #section-collocate-data-with-data
>>
>> On Thu, Feb 1, 2018 at 3:44 PM, Rajesh Kishore 
>> wrote:
>>
>>> Hi All,
>>>
>>> As of now, we have less than 1 M records, with attributes split into a
>>> few (3) tables with indexes created.
>>> We are using a combination of a join and an IN clause (subquery) in the SQL
>>> query; for some reason this query does not return any response.
>>> But the moment we remove the IN clause and use just the join, the query
>>> returns the result.
>>> Note that as per EXPLAIN PLAN, the subquery also seems to be using the
>>> defined indexes.
>>>
>>> What are the recommendations for using such queries? Are there any
>>> guidelines? What are we doing wrong here?
>>>
>>> Thanks,
>>> Rajesh
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>
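The IN-to-JOIN rewrite recommended in the thread can be sketched as follows; the table and column names are illustrative, not from the discussion:

```sql
-- Original form: the IN subquery executes locally on each node,
-- so it may miss rows when the data is not collocated.
SELECT p.name
FROM Person p
WHERE p.cityId IN (SELECT c.id FROM City c WHERE c.region = 'EMEA');

-- Rewritten as a JOIN: generally faster in Ignite, and correct
-- cluster-wide when combined with affinity collocation on cityId
-- (or, failing that, with setDistributedJoins(true) on the query).
SELECT p.name
FROM Person p
JOIN City c ON c.id = p.cityId
WHERE c.region = 'EMEA';
```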


Re: Upcoming Apache Ignite events this month

2018-02-01 Thread Dmitriy Setrakyan
Great to see such a busy schedule!

Ignite community is unstoppable :)

D.

On Thu, Feb 1, 2018 at 3:19 PM, Tom Diederich 
wrote:

> Igniters,
>
> The following is a list of upcoming events in February. To view this list
> from the Ignite events page, click here.
>
> *Tokyo*
>
> *February 1:*  *Meetup*: Meet Apache Ignite In-Memory Computing Platform
>
>  Join Roman Shtykh at the Tech it Easy- Tokyo Meetup for an introductory
> talk on Apache Ignite.
>
>  In this talk you will learn about Apache Ignite memory-centric
> distributed database, caching, and processing platform. Roman will explain
> how one can do distributed computing, and use SQL with horizontal
> scalability and high availability of NoSQL systems with Apache Ignite.
>
>  Only six spots left so RSVP now! http://bit.ly/2nygyRI
>
>  *San Francisco Bay Area*
>
>  *February 7*: *Conference talk:* Apache Ignite Service Grid: Foundation
> of Your Microservices-Based Solution
>
>  Denis Magda will be attending DeveloperWeek 2018 in San Francisco to
> deliver presentation that provides a step-by-step guide on how to build a
> fault-tolerant and scalable microservices-based solution using Apache
> Ignite's Service Grid and other components to resolve these aforementioned
> issues.
>
>  Details here: http://bit.ly/2BHwFBr
>
>
>
> *London*
>
>  *February 7:* *Meetup:* Building consistent and highly available
> distributed systems with Apache Ignite
>
>  Akmal Chaudhri will speak at the inaugural gathering of the London
> In-Memory Computing Meetup.
>
>  He'll explain that while it is well known that there is a tradeoff
> between data consistency and high availability, there are many applications
> that require very strong consistency guarantees. Making such applications
> highly available can be a significant challenge. Akmal will explain how to
> overcome these challenges.
>
> This will be an outstanding event with free food and beverages. Space is
> limited, however. RSVP now to reserve your spot (you may also include 2
> guests).
>
> http://bit.ly/2BH893c
>
>
> *Boston*
>
> *February 12: Meetup*: Turbocharge your MySQL queries in-memory with
> Apache Ignite
>
> Fotios Filacouris will be the featured speaker at the Boston MySQL Meetup
> Group.
>
> The abstract of his talk: Apache Ignite is a unique data management
> platform that is built on top of a distributed key-value storage and
> provides full-fledged MySQL support. Attendees will learn how Apache Ignite
> handles auto-loading of a MySQL schema and data from PostgreSQL, supports
> MySQL indexes, supports compound indexes, and various forms of MySQL
> queries including distributed MySQL joins.
>
> Space is limited so RSVP today! http://bit.ly/2DP8W44
>
>
> *Boston*
>
> *February 13: Meetup* -- Java and In-Memory Computing: Apache Ignite
>
>  Fotios Filacouris will speak at the Boston Java Meetup Group
>
> In his talk, Foti will introduce the many components of the open-source
> Apache Ignite. Meetup members, as Java professionals, will learn how to
> solve some of the most demanding scalability and performance challenges.
> He’ll also cover a few typical use cases and work through some code
> examples. Attendees would leave ready to fire up their own database
> deployments!
>
> RSVP here: http://bit.ly/2BJ1nde
>
>
>
> *Sydney, Australia  *
>
> *February 13: Meetup:* Ignite your Cassandra Love Story: Caching
> Cassandra with Apache Ignite
>
> Rachel Pedreschi will be the guest speaker at the Sydney Cassandra Users
> Meetup. In this session attendees will learn how Apache Ignite can
> turbocharge a Cassandra cluster without sacrificing availability
> guarantees. In this talk she'll cover:
>
>
>
>- An overview of the Apache Ignite architecture
>- How to deploy Apache Ignite in minutes on top of Cassandra
>- How companies use this powerful combination to handle extreme OLTP
>workloads
>
>
>  RSVP now to secure your spot: http://bit.ly/2sydneytalk
>
>
>
> * February 14: Webinar:*  Getting Started with Apache® Ignite™ as a
> Distributed Database
>
> Join presenter Valentin Kulichenko in this live webinar featuring Apache
> Ignite native persistence --  a distributed ACID and SQL-compliant store
> that turns Apache Ignite into a full-fledged distributed SQL database.
>
>  In this webinar, Valentin will:
>
>
>
>-  Explain what native persistence is, and how it works
>- Show step-by-step how to set up Apache Ignite with native persistence
>- Explain the best practices for configuration and tuning
>
>
> RSVP now to reserve your spot: http://bit.ly/2E0SWiS
>
>
>
> *Copenhagen*
>
> *February 14: Meetup: *Apache Ignite: the in-memory hammer in your data
> science toolkit
>
> Akmal Chaudhri will be the guest speaker at the Symbion IoT Meetup
> (Copenhagen, Denmark). In this presentation, Akmal will explain some of the
> main components of Apache Ignite, such as the Compute Grid, Data Grid and
> the Machine Learning Grid. Through examples, att

Re: How to make full use of network bandwidth?

2018-01-31 Thread Dmitriy Setrakyan
Hi Michael, were you able to apply the suggestions? It would be nice if you
would share your results with the community.

D.

On Mon, Jan 8, 2018 at 6:42 PM, Michael Jay <841519...@qq.com> wrote:

> Thank you, Alexey. I'll try you advice and let you know the result later.
> Thanks again.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Group By Query is slow : Apache Ignite 2.3.0

2018-01-13 Thread Dmitriy Setrakyan
Hi,

Were you able to resolve the issue? If yes, it would be nice to share it
with the community.

D.

On Thu, Dec 21, 2017 at 12:49 AM, dkarachentsev 
wrote:

> Hi Indranil,
>
> These measurements are not fully correct, for example select count(*) might
> use only index and in select * was not actually invoked, because you need
> to
> run over cursor.
> Also by default query is not parallelized on one node, and scan with
> grouping is going sequentially in one thread.
>
> Try to recheck your results on one node with enabled query parallelism:
> CacheConfiguration.setQueryParallelism(8) [1].
>
> And/or on 4 server nodes with 1 backup. You should get better numbers
> because of spreading query over machines.
>
> [1]
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setQueryParallelism(int)
>
> Thanks!
> -Dmitry
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
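For reference, the suggestions above map to a cache configuration fragment roughly like this (a sketch only — the cache name and key/value types are made up, and the values mirror the numbers mentioned in the thread rather than tuned recommendations):

```java
CacheConfiguration<Long, Record> cfg = new CacheConfiguration<>("recordCache");

// Split each local SQL query into up to 8 worker threads per node.
cfg.setQueryParallelism(8);

// One backup copy, so query load spreads across nodes while data
// survives a single node failure.
cfg.setBackups(1);

IgniteCache<Long, Record> cache = ignite.getOrCreateCache(cfg);
```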


Re: Data load is very slow in ignite 2.3 compare to ignite 1.9

2018-01-13 Thread Dmitriy Setrakyan
Hi Tejas,

Were you able to resolve your issue? If yes, it would be nice to share it
with the community.

D.

On Wed, Dec 20, 2017 at 11:09 PM, Denis Magda  wrote:

> Why are you giving only 5GB of RAM to every node then (referring to your
> data region configuration)? You mentioned that it’s fine to assign 15GB of
> RAM. Does it mean there are other processes running on the server that
> use the rest of the RAM heavily?
>
> To make the troubleshooting of your problem more effective, please
> upload your complete configuration and the code of the preloader that calls
> Ignite data streamer on GitHub and share with us.
>
> —
> Denis
>
> On Dec 20, 2017, at 8:34 PM, Tejashwa Kumar Verma <
> tejashwa.ve...@gmail.com> wrote:
>
> Hi Denis,
>
> I don't know whether I understood your question correctly,
> but I'll still attempt to answer.
>
> For now i have 2 node cluster and both have 48-48 GB RAM available. And
> data is not Preloaded .
>
>
> Thanks & Regards
> Tejas
>
> On Thu, Dec 21, 2017 at 9:55 AM, Denis Magda  wrote:
>
>> Does it mean that you have 3 cluster nodes and all of them are running on
>> a single server? Is data preloaded from a different machine?
>>
>> —
>> Denis
>>
>> On Dec 20, 2017, at 8:09 PM, Tejashwa Kumar Verma <
>> tejashwa.ve...@gmail.com> wrote:
>>
>> HI Alexey,
>>
>> We have enough memory (around 48 GB) on the server, whereas allocation-wise we
>> are assigning/utilizing only 15GB memory.
>>
>>
>> @Denis, I have tried all the configs given in the mentioned link. But it's not
>> helping out.
>>
>>
>> Thanks & regards
>> Tejas
>>
>> On Thu, Dec 21, 2017 at 5:44 AM, Denis Magda  wrote:
>>
>>> Tejas,
>>>
>>> The new memory architecture of Ignite 2.x might require an extra tuning.
>>> I find this doc as a good starting point of the scrutiny:
>>> https://apacheignite.readme.io/docs/durable-memory-tuning
>>>
>>> —
>>> Denis
>>>
>>> On Dec 20, 2017, at 10:43 AM, Tejashwa Kumar Verma <
>>> tejashwa.ve...@gmail.com> wrote:
>>>
>>> Yes, I have same cluster, env and no of nodes.
>>>
>>> I am using DataStreamer to load data.
>>>
>>> Thanks and Regards
>>> Tejas
>>>
>>> On 21 Dec 2017 12:11 am, "Alexey Kukushkin" 
>>> wrote:
>>>
 Tejas, how do you load the cache - are you using DataStreamer or SQL,
 JDBC or put/putAll or something else? Can you confirm - are you saying you
 have same cluster (same number of nodes and hardware) and after the upgrade
 the cache load time increased from 40 to 90 minutes?

>>>
>>>
>>
>>
>
>
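For reference, a data-streamer preload like the one discussed in this thread typically looks roughly like the fragment below (a sketch only — the cache name, record type, and `loadRecord` helper are hypothetical, and the buffer size is illustrative rather than a recommendation):

```java
try (IgniteDataStreamer<Long, Record> streamer = ignite.dataStreamer("recordCache")) {
    // Larger per-node buffers usually help bulk preloading.
    streamer.perNodeBufferSize(1024);

    // Fastest mode: suitable for an initial load where all keys are new.
    streamer.allowOverwrite(false);

    for (long id = 0; id < recordCount; id++)
        streamer.addData(id, loadRecord(id));
} // close() flushes any remaining buffered entries
```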


Re: Can Ignite native persistence used with 3rd party persistence?

2018-01-13 Thread Dmitriy Setrakyan
Cross-sending to dev@.

Alexey,

This issue is marked to be fixed for 2.4 which is planned to be released in
a couple of weeks. Do you think you will be able to close this issue before
the release?

D.


On Mon, Dec 18, 2017 at 9:51 AM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Created the ticket: https://issues.apache.org/jira/browse/IGNITE-7235
>
> 2017-12-15 16:16 GMT+03:00 Alexey Goncharuk :
>
>> Ray,
>>
>> With the current API it is impossible to get a reliable integration of
>> Ignite native persistence with 3rd party persistence. The reason is that
>> first, CacheStore interface does not have methods for 2-phase commit,
>> second, it would require significant changes to the persistence layer
>> itself to make a consistent crash recovery.
>>
>> We could allow setting the cache store interface with write-through from
>> primary nodes, but in this case, it would be a user's responsibility to
>> verify that the cache store is consistent with the Ignite cluster. We will
>> try to enable and document it in ignite 2.4.
>>
>> --AG
>>
>> 2017-12-01 14:13 GMT+03:00 Andrey Mashenkov :
>>
>>> Hi Ray,
>>>
>>>
 One more question here, how can a update or new inserts back-propagate
 to
 Ignite when another application(not ignite) writes to
 persistence(hbase)?
>>>
>>>
>>> It is not supported.
>>>
>>>
>>>
>>> On Fri, Dec 1, 2017 at 12:08 PM, Ray  wrote:
>>>
 http://apache-ignite-users.70518.x6.nabble.com/Two-persistent-data-stores-for-a-single-Ignite-cluster-RDBMS-and-Ignite-native-td18463.html

 Found a similar case here, I think I'll try Slava's suggestions first.

 One more question here, how can a update or new inserts back-propagate
 to
 Ignite when another application(not ignite) writes to
 persistence(hbase)?

 For example, Ignite and hbase both have one entry for now.
 When another application adds an entry to hbase, now hbase has two
 entries.
 Can Ignite be notified and load the newly added entry automatically?

 From the document, it looks like the data can only be propagated from
 Ignite
 to persistence, not the other way around.



 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/

>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>>
>


Fwd: How to speedup activation on a node with persistence

2018-01-13 Thread Dmitriy Setrakyan
Hi Kamil,

Have you been able to resolve your issue? If yes, it would be great if you
could share it with the community.

Thanks,
D.

-- Forwarded message --
From: mcherkasov 
Date: Mon, Dec 18, 2017 at 9:27 AM
Subject: Re: How to speedup activation on a node with persistence
To: user@ignite.apache.org


 Hi Kamil,

if you have magnetic tape storage and store terabytes of data, then 10
minutes might be ok for startup, but I don't think that it's your case.

Could you please share a full log of Ignite for a slow and fast startup?

Thanks,
Mike.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Transaction operations using the Ignite Thin Client Protocol

2018-01-13 Thread Dmitriy Setrakyan
Hi,

Is there a reason why you do not want to use the C++ client that comes with
Ignite?

https://apacheignite-cpp.readme.io/docs/transactions

D.

On Mon, Jan 8, 2018 at 2:12 AM, kotamrajuyashasvi <
kotamrajuyasha...@gmail.com> wrote:

> Hi
>
> I would like to perform Ignite Transaction operations from a C++ program
> using the Ignite Thin Client Protocol. Is it possible to do so ? If this
> feature is not available now, will it be added in future ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Index not getting created

2017-12-28 Thread Dmitriy Setrakyan
Hi Naveen,

Affinity mapping is a critical portion of Ignite data distribution and
cannot be changed. For more information, please refer to this
documentation: https://apacheignite.readme.io/docs/affinity-collocation

D.
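Since affinity mapping is fixed once a cache holds data, collocation has to be declared up front on the key class. A minimal sketch (the class and field names here are hypothetical):

```java
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class PersonKey {
    private long personId;

    // All persons with the same companyId map to the same partition,
    // so they land on the same node and can be joined locally.
    @AffinityKeyMapped
    private long companyId;
}
```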

On Wed, Dec 6, 2017 at 9:20 PM, Naveen  wrote:

> This issue got fixed after clean restart of the cluster and creating the
> caches again.
> I could create the index.
> Do we have any option to set the affinity mapping for the cache which is
> already created and holding data.
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Question on On-Heap Caching

2017-11-08 Thread Dmitriy Setrakyan
Naresh, several questions:

   1. How are you accessing data, with SQL or key-value APIs?
   2. Are you accessing data locally on the server or remotely from a
   client? If remotely, then you might want to enable near caching.

D.
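For point 2, a near cache on the client side can be sketched like this (assumptions: the cache name "myCache", the key/value types, and the 10,000-entry eviction limit are all made up for illustration):

```java
NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();

// Keep up to 10,000 recently used entries in the client's local near cache.
nearCfg.setNearEvictionPolicy(new LruEvictionPolicy<>(10_000));

IgniteCache<Integer, String> cache =
    clientIgnite.getOrCreateNearCache("myCache", nearCfg);
```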

On Thu, Nov 9, 2017 at 3:01 PM, naresh.goty  wrote:

> Thanks Alexey for the info. Actually our application is read-heavy, and we
> are seeing high latencies (based on our perf benchmark) when we are
> measuring the response times during load tests. Based on the one of the
> thread's recommendations
> (http://apache-ignite-users.70518.x6.nabble.com/10X-decrease-in-performance-with-Ignite-2-0-0-td12637.html#a12655),
> we are trying to check if the on-heap cache gives any reduction in latencies. But
> we did not see any noticeable difference in perf using onheap cache
> enabled/disabled. We are using ignite v2.3.
>
> Thanks,
> Naresh
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite-cassandra module issue

2017-11-08 Thread Dmitriy Setrakyan
Hi Michael, do you have any update for the issue?

On Thu, Nov 2, 2017 at 5:14 PM, Michael Cherkasov <
michael.cherka...@gmail.com> wrote:

> Hi Tobias,
>
> Thank you for explaining how to reproduce it, I'll try your instructions. I
> spent several days trying to reproduce the issue,
> but I thought that the reason for it was too high a load, and I didn't stop
> the client during testing.
> I'll check your instructions and try to fix the issue.
>
> Thanks,
> Mike.
>
> 2017-10-25 16:23 GMT+03:00 Tobias Eriksson :
>
>> Hi Andrey et al
>>
>> I believe I now know what the problem is: the Cassandra session is
>> refreshed, but before that a prepared statement is created/used, and so
>> using a new session with an old prepared statement does not work.
>>
>>
>>
>> The way to reproduce is
>>
>> 1)   Start Ignite Server Node
>>
>> 2)   Start client which inserts a batch of 100 elements
>>
>> 3)   End client
>>
>> 4)   Now Ignite Server Node returns the Cassandra Session to the pool
>>
>> 5)   Wait 5+ minutes
>>
>> 6)   Now Ignite Server Node has does a clean-up of the “unused”
>> Cassandra sessions
>>
>> 7)   Start client which inserts a batch of 100 elements
>>
>> 8)   Boom ! The exception starts to happen
>>
>>
>>
>> Reason is
>>
>> 1)   Execute is called for a BATCH
>>
>> 2)   Prepared-statement is reused since there is a cache of those
>>
>> 3)   It is about to do session().execute( batch )
>>
>> 4)   BUT the call to session() results in refreshing the session,
>> and this is where the prepared statements, as the old session knew them, are
>> cleaned up
>>
>> 5)   Now it is looping over 100 times with a NEW session but with an
>> OLD prepared statement
>>
>>
>>
>> This is a bug,
>>
>>
>>
>> -Tobias
>>
>>
>>
>>
>>
>> *From: *Andrey Mashenkov 
>> *Reply-To: *"user@ignite.apache.org" 
>> *Date: *Wednesday, 25 October 2017 at 14:12
>> *To: *"user@ignite.apache.org" 
>> *Subject: *Re: Ignite-cassandra module issue
>>
>>
>>
>> Hi Tobias,
>>
>>
>>
>> What ignite version do you use? May be this was already fixed in latest
>> one?
>>
>> I see related fix inclueded in upcoming 2.3 version.
>>
>>
>>
>> See IGNITE-5897 [1] issue. It is unobvious, but this fix session init\end
>> logic, so session should be closed in proper way.
>>
>>
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-5897
>>
>>
>>
>>
>>
>> On Wed, Oct 25, 2017 at 11:13 AM, Tobias Eriksson <
>> tobias.eriks...@qvantel.com> wrote:
>>
>> Hi
>>  Sorry did not include the context when I replied
>>  Has anyone been able to resolve this problem, cause I have it too on and
>> off
>> In fact it sometimes happens just like that, e.g. I have been running my
>> Ignite client and then stop it, and then it takes a while and run it
>> again,
>> and all by a sudden this error shows up. An that is the first thing that
>> happens, and there is NOT a massive amount of load on Cassandra at that
>> time. But I have also seen it when I hammer Ignite/Cassandra with
>> updates/inserts.
>>
>> This is a deal-breaker for me, I need to understand how to fix this, cause
>> having this in production is not an option.
>>
>> -Tobias
>>
>>
>> Hi!
>> I'm using the cassandra as persistence store for my caches and have one
>> issue by handling a huge data (via IgniteDataStreamer from kafka).
>> Ignite Configuration:
>> final IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
>> igniteConfiguration.setIgniteInstanceName("test");
>> igniteConfiguration.setClientMode(true);
>> igniteConfiguration.setGridLogger(new Slf4jLogger());
>> igniteConfiguration.setMetricsLogFrequency(0);
>> igniteConfiguration.setDiscoverySpi(configureTcpDiscoverySpi());
>> final BinaryConfiguration binaryConfiguration = new BinaryConfiguration();
>> binaryConfiguration.setCompactFooter(false);
>> igniteConfiguration.setBinaryConfiguration(binaryConfiguration);
>> igniteConfiguration.setPeerClassLoadingEnabled(true);
>> final MemoryPolicyConfiguration memoryPolicyConfiguration = new
>> MemoryPolicyConfiguration();
>> memoryPolicyConfiguration.setName("3Gb_Region_Eviction");
>> memoryPolicyConfiguration.setInitialSize(1024L * 1024L * 1024L);
>> memoryPolicyConfiguration.setMaxSize(3072L * 1024L * 1024L);
>>
>> memoryPolicyConfiguration.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
>> final MemoryConfiguration memoryConfiguration = new MemoryConfiguration();
>> memoryConfiguration.setMemoryPolicies(memoryPolicyConfiguration);
>> igniteConfiguration.setMemoryConfiguration(memoryConfiguration);
>>
>> Cache configuration:
>> final CacheConfiguration cacheConfiguration = new
>> CacheConfiguration<>();
>> cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>> cacheConfiguration.setStoreKeepBinary(true);
>> cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
>> cacheConfiguration.setBackups(0

Re: Re: How the Ignite Service performance? When we test the CPU soon be occupied 100%

2017-11-08 Thread Dmitriy Setrakyan
On Wed, Oct 25, 2017 at 9:15 AM, aa...@tophold.com 
wrote:

> Thanks Andrey! It's better now; we are trying to move non-core logic to
> separate instances.
>
> What I learned over the last several months of using Ignite: we should set
> up Ignite as standalone data nodes, and put the application logic in
> separate nodes.
>
> Otherwise it brings too much instability to my application. I'm not sure
> whether this is the best practice?
>

It depends on your use case, but I would say that the majority of Ignite
deployments have stand-alone data nodes, so there is nothing wrong with
what you are suggesting.


Re: Ignite 2.0.0 GridUnsafe unmonitor

2017-10-30 Thread Dmitriy Setrakyan
Denis,

We should definitely print out a thorough warning if HashMap is passed into
a bulk method (instead of SortedMap). However, we should make sure that we
only print that warning once and not every time the API is called.

Can you please file a ticket for 2.4?

D.
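The point about SortedMap can be illustrated with a small plain-Java sketch (the `cache.putAll` call is commented out since it needs a running cluster; the keys and values are made up). A TreeMap iterates keys in ascending order, so every node acquires entry locks in the same deterministic order, which avoids lock-order deadlocks:

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class SortedBatch {
    public static void main(String[] args) {
        // Keys inserted out of order still iterate in ascending order,
        // giving every node the same lock-acquisition sequence.
        SortedMap<Integer, String> batch = new TreeMap<>();
        batch.put(3, "c");
        batch.put(1, "a");
        batch.put(2, "b");

        System.out.println(batch.keySet()); // [1, 2, 3]

        // cache.putAll(batch); // pass the sorted map to the bulk operation
    }
}
```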

On Thu, Oct 26, 2017 at 11:05 AM, Denis Magda  wrote:

> + dev list
>
> Igniters, that’s a relevant point below. Newcomers to Ignite tend to
> stumble on deadlocks simply because the keys are passed in an unordered
> HashMap. Propose to do the following:
> - update bulk operations Java doc.
> - print out a warning if a HashMap is used and it exceeds one element.


> Thoughts?
>
> —
> Denis
>
> > On Oct 21, 2017, at 6:16 PM, dark  wrote:
> >
> > Many people seem to be more likely to send Cache entries in bulk via a
> > HashMap.
> > How do you expose a warning statement by checking, inside putAll, whether
> > the map passed in is a TreeMap?
> >
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>


Re: checkpoint marker is present on disk, but checkpoint record is missed in WAL

2017-10-12 Thread Dmitriy Setrakyan
KR, any chance you can provide a reproducer? It would really help us
properly debug your issue. If not, can we get a copy of your configuration?

On Thu, Oct 12, 2017 at 10:31 AM, KR Kumar  wrote:

> Hi AG,
>
> Thanks for responding to the thread. I have tried with 2.3 and I still face
> the same problem.
>
> Just to further explore, I killed ignite instance with kill -9 and a
> reboot,
> both situations, ignite just hangs during restart.
>
> Thanx and Regards
> KR Kumar
>


Re: Ignite long term support (LTS) version policy?

2017-10-06 Thread Dmitriy Setrakyan
Hi Dop,

I am not sure Apache Ignite community will be able to provide support
beyond what you see on the user@ and dev@ lists today. If you need
something beyond that, I would advise you to contact commercial vendors,
like GridGain.

D.

On Tue, Oct 3, 2017 at 6:58 AM, Dop Sun  wrote:

> Hi,
>
> I’m currently developing an application for my employer, and starting
> beginning of 2017, Ignite started at 1.8, and released 1.9 (Feb), 2.0
> (Apr), 2.1 (Jul) and recently 2.2 (Sep), or about 2 - 3 months a version.
> And I can see good features added to every releases, together with bug
> fixes.
>
> For us, we upgraded from 1.8 and then 2.0, and due to several bugs fixed
> in 2.1 and 2.2, we have upgraded to 2.2. And the bug fixes, for example
> IGNITE-6181, would likely push us to upgrade to 2.3 when it's ready
> before our first production release.
>
> My question is:
>
> - will there be a kind of long term support (LTS) version? By LTS, I mean
> there is a version that will be considered stable, and bug fixes from the next
> several releases would likely be back-ported for a certain period of time.
>
> - if not today, any chance this can be considered in future?
>
> *Please kindly suggest if this should be sending to d...@ignite.apache.org
>  instead.*
>
> Thanks,
> Regards,
> Dop
>


Re: Question about number of total onheap and offheap cache entries.

2017-10-02 Thread Dmitriy Setrakyan
On Tue, Oct 3, 2017 at 4:19 AM, Ray  wrote:

> Hi Alexey
>
> My cache configuration is as follows.
> cacheConfig.setName("DailyAggData")
> cacheConfig.setIndexedTypes(classOf[A], classOf[B])
> cacheConfig.setSqlSchema("PUBLIC")
> aggredCacheConfig.setBackups(2)
> cacheConfig.setQueryParallelism(8)
>
> I didn't explicitly set "onHeapEnabled=true".
> So what will happen if I perform get & sql operations with
> onHeapEnabled=false?
> Will off-heap entries be brought on-heap?
>

Yes, but only to return them to the user. Ignite will not cache the entries
on-heap, and therefore the on-heap entry count should be 0.
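For completeness, on-heap caching is a single configuration flag; a sketch using the cache name from this thread (the value type is hypothetical):

```java
CacheConfiguration<Long, DailyAgg> cacheConfig = new CacheConfiguration<>("DailyAggData");

// Keep deserialized entries on the Java heap in addition to off-heap memory.
cacheConfig.setOnheapCacheEnabled(true);
```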


--
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: INSERT into SELECT from Ignite 1.9 or 2.0

2017-09-21 Thread Dmitriy Setrakyan
To add to Andrey's example, here is how you would use IgniteAtomicSequence
to make IDs unique across the whole distributed cluster:

public static class CustomSQLFunctions {
    @QuerySqlFunction
    public static long nextId(String seqName, long initVal) {
        return Ignition.ignite().atomicSequence("idGen", 0, true).incrementAndGet();
    }
}
* }*


On Thu, Sep 21, 2017 at 5:37 AM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Hi,
>
> As a workaround you can implement custom function [1] for unique number
> generation.
>
> 1. You need to create a class with static functions annotated with
> @QuerySqlFunction.
>
> E.g. for single node grid you can use some AtomicLong static field.
>
>
> public class MyFunctions {
>
> static AtomicLong seq = new AtomicLong();
>
>
> @QuerySqlFunction
> public static long nextID() {
> return seq.getAndIncrement();
> }
> }
>
>
> This class should be added to classpath on all nodes.
>
> 2.Register class with functions.
>
> cacheConfiguration.setSqlFunctionClasses(MyFunctions.class);
>
>
> 3. For a multi-node grid, use IgniteAtomicSequence instead and
> initialize the static variable on grid start, e.g. manually or via
> LifecycleBean [2].
>
> 4. Now you can run a query like "INSERT ... (ID, ...) SELECT nextID(), ..."
>
> [1] https://apacheignite.readme.io/docs/miscellaneous-features#custom-sql-functions
> [2] https://apacheignite.readme.io/docs/ignite-life-cycle#section-lifecyclebean
>
> On Mon, Sep 18, 2017 at 4:17 PM, Alexander Paschenko <
> alexander.a.pasche...@gmail.com> wrote:
>
>> Hello,
>>
>> Andrey, I believe you're wrong. INSERT from SELECT should work. AUTO
>> INCREMENT columns indeed are not supported for now though, it's true.
>>
>> - Alex
>>
>> 2017-09-18 16:09 GMT+03:00 Andrey Mashenkov :
>> > Hi,
>> >
>> > Auto-increment fields are not supported yet. Here is a ticket for this
>> [1]
>> > and you can track it's state.
>> > Moreover, underlying H2 doesn't support SELECT with JOINs nested into
>> > INSERT/UPDATE queries.
>> >
>> > [1] https://issues.apache.org/jira/browse/IGNITE-5625
>> >
>> > On Mon, Sep 18, 2017 at 12:31 PM, acet 
>> wrote:
>> >>
>> >> Hello,
>> >> I would like to insert the result of a select query into a cache in
>> >> ignite.
>> >> Something like:
>> >>
>> >> INSERT INTO "new_cache_name".NewCacheDataType(ID, CUSTOMERID,
>> PRODUCTNAME)
>> >> (SELECT {?}, c.id, p.product_name
>> >> FROM "customers".CUSTOMER as c
>> >> JOIN "products".PRODUCT as p
>> >> ON c.id = p.customer_id)
>> >>
>> >> in the place of the {?} i would like to put in something similar to
>> >> AtomicSequence, however seeing as this will be work done without using
>> the
>> >> client I cannot tell how this is possible.
>> >> Can someone advise if this can be done, and if so, how?
>> >>
>> >> Thanks.
>> >>
>> >>
>> >>
>> >> --
>> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>> >
>> >
>> >
>> >
>> > --
>> > Best regards,
>> > Andrey V. Mashenkov
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>
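Putting the steps above together, registering such a function and calling it from SQL might look like this (a sketch only — the cache name is hypothetical, and the class name refers to the example earlier in this message):

```java
CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");

// Makes nextId() callable from SQL executed against this cache.
ccfg.setSqlFunctionClasses(CustomSQLFunctions.class);

// Then, in SQL:
//   INSERT INTO NewCacheDataType (ID, CUSTOMERID, PRODUCTNAME)
//   SELECT nextId('idGen', 0), c.id, p.product_name
//   FROM ...
```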


Re: work around for problem where ignite query does not include objects added into Cache from within a transaction

2017-09-18 Thread Dmitriy Setrakyan
On Mon, Sep 18, 2017 at 8:01 AM, rick_tem  wrote:

> I see that as a huge problem.  Certainly one of the functions of Ignite is
> to
> be faster than the database, but if it fails to meet all of the
> requirements
> of what a database will do for you, what is the point of using it? Clearly
> a
> database will keep read consistency between transactions.  Most
> applications
> I've worked with require that as well.  If I understand correctly, this
> hole
> makes querying the grid almost useless as I can't count on it being
> consistent.


Rick,
The Ignite community does understand this and is very honest about warning
users about it:
https://apacheignite.readme.io/docs/ignite-facts#section-is-ignite-a-transactional-database-

The plan is to add this feature in 2.4 release, hopefully by the end of the
year.

D.


Re: Re: Fetched result use too much time

2017-09-16 Thread Dmitriy Setrakyan
Lucky,

We would like to see the output of the "EXPLAIN" command for the query that
takes a long time, so we could make suggestions. Can you post it here?

D.

On Fri, Sep 15, 2017 at 11:50 PM, Lucky  wrote:

> Hi, Yakov Zhdanov
> Actually I did not run the H2 console; I ran it like this:
> cache.query(new SqlFieldsQuery("explain select id from assignInfo
> "));
>
> I changed it to this:
> new SqlFieldsQuery("select * from Person p join table(id bigint = ?) i on
> p.id = i.id").setArgs(new Object[]{ new Integer[] {2, 3, 4} }))
>
> but it also takes 82 seconds. That did not change anything.
>
> Any other suggestion?
> Thanks a lot.
> Lucky
>
>
>
>
> At 2017-09-15 22:17:30, "Yakov Zhdanov"  wrote:
>
> Please run explain from Ignite, not from H2 console -
> https://apacheignite.readme.io/docs/sql-performance-and-debugging#using-explain-statement
>
> Here you can find info on proper IN usage in Ignite -
> https://apacheignite.readme.io/docs/sql-performance-and-debugging#sql-performance-and-usability-considerations
>
> Thanks!
> --
> Yakov Zhdanov, Director R&D
> *GridGain Systems*
> www.gridgain.com
>
> 2017-09-15 4:50 GMT+03:00 Lucky :
>
>> Hi
>> I have a table with 25,000,000 records.
>> The SQL is like this:
>> select fdatabasedid from databasedassign where fassingcuid
>> in (3589 ids) group by fdatabasedid having count(fassingcuid) >= 3589
>> It returns 1,500 records.
>> But It took 82 seconds!!!
>>
>>
>> I see the explain like this:
>>
>>
>>  Any suggestions?
>> Thanks.
>> Lucky
>>
>>
>>
>>
>
>
>
>
>


Re: Comparing Strings in SQL statements

2017-09-15 Thread Dmitriy Setrakyan
On Thu, Sep 14, 2017 at 10:02 PM, iostream  wrote:

> I have used Informix DB before. In Informix string comparisons such as -
>
> SELECT * from Person where fName = "ABC";
>
> return rows even if the column value has trailing spaces. The Informix
> engine internally trims strings before comparison. It would be great if a
> similar feature could be added to Ignite because performing TRIM() in every
> create or update scenario will be expensive from an application point of
> view.


TRIM() on insert or update is a much better approach in my view, because
the trimmed string can be properly indexed, and then the comparisons will
use the direct index lookup and perform much better. If the comparison has
to trim on the fly, then it will likely be a full scan, not a direct index
lookup.
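The difference is easy to demonstrate in plain Java (the SQL in the comment is a hypothetical illustration of trimming at write time, not a statement from this thread):

```java
public class TrimExample {
    public static void main(String[] args) {
        String stored = "ABC  "; // value persisted with trailing spaces

        // Exact comparison fails unless the value was normalized on insert:
        System.out.println(stored.equals("ABC"));        // false
        System.out.println(stored.trim().equals("ABC")); // true

        // Normalizing at write time keeps an index on the column usable:
        //   INSERT INTO Person (id, fName) VALUES (?, TRIM(?))
        // instead of trimming inside every comparison at query time.
    }
}
```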

However, to consider this feature for Ignite, I would like to read some
documentation from other databases that describes this behavior. Would be
great if you could provide a link.


> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Comparing Strings in SQL statements

2017-09-14 Thread Dmitriy Setrakyan
On Thu, Sep 14, 2017 at 7:10 PM, iostream  wrote:

>
> 1. Does Ignite TRIM strings internally when doing comparisons?
>

I don't think Ignite trims strings for comparison. You should use TRIM()
function explicitly when inserting or comparing strings.


> 2. Is there a way to configure my cluster to enforce TRIM whenever there
> are SQL statements with String comparisons?
>

I do not think you can enforce trimming without actually using the TRIM()
function.

Do you know other databases that provide the behavior you are asking for?
If yes, please give us a link to this feature and we will consider adding
it to Ignite.


> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re:

2017-09-13 Thread Dmitriy Setrakyan
Hi Chaitanya,

Sorry to see you go :(

Please follow the unsubscribe link here:
https://ignite.apache.org/community/resources.html#mail-lists

D.

On Wed, Sep 13, 2017 at 2:26 PM, chaitanya kulkarni <9...@gmail.com>
wrote:

> Unsubscribe
>


Re: Re: When cache node switch between primary and backup any notification be received?

2017-09-11 Thread Dmitriy Setrakyan
On Mon, Sep 11, 2017 at 6:54 PM, aa...@tophold.com 
wrote:

> Thanks Alexey! What we really want: we deploy a service on each cache node,
> and those services will use data from their local caches.
>
> Clients will call those remote services, but a client should only call the
> service on the primary node; this makes those nodes work in master-slave
> mode automatically.
>

In Ignite, a node is the primary node for a certain partition. A key belongs
to a partition and a partition belongs to a node. A node may be primary for
key1 (partition N) and the same node may be a backup for key2 (partition M).

I think you should simply invoke your service on each node and only check
or iterate through the primary keys stored on that node. You can get the
list of primary partitions by using the
org.apache.ignite.cache.affinity.Affinity API, for example the
Affinity.primaryPartitions(ClusterNode) method.

    int[] primaryPartitions =
        ignite.affinity("cacheName").primaryPartitions(ignite.cluster().localNode());

    for (int primaryPartition : primaryPartitions) {
        // Cursor over local entries for the given partition.
        QueryCursor<Entry<K, V>> cur = cache.query(new ScanQuery<K, V>(primaryPartition));

        for (Entry<K, V> entry : cur) {
            // Do something on local entries.
        }
    }


Does this make sense?


Re: Logging documentation

2017-09-08 Thread Dmitriy Setrakyan
Great!

I think the next task should be to explain expiration vs eviction. I am
seeing too many questions on it as well. At this point, I am also confused
about how it really works.

D.

On Fri, Sep 8, 2017 at 10:18 AM, Denis Magda  wrote:

> Unbelievable! So many years went by and finally we got this documentation
> ready. Thanks a lot Prachi!
>
> —
> Denis
>
> On Sep 8, 2017, at 9:48 AM, Prachi Garg  wrote:
>
> Hello Igniters,
>
> I see a lot of questions about logging in Ignite. Here is the
> documentation: https://apacheignite.readme.io/docs/logging
>
> This should help answer most of your questions :)
>
> -P
>
>
>
>


Re: POJO field having wrapper type, mapped to cassandra table are getting initialized to respective default value of primitive type instead of null if column value is null

2017-09-05 Thread Dmitriy Setrakyan
Cross-sending to user@ as well.

On Tue, Sep 5, 2017 at 10:44 PM, kotamrajuyashasvi <
kotamrajuyasha...@gmail.com> wrote:

> Hi
>
> I'm using ignite with cassandra as persistent store. I have a POJO class
> mapped to cassandra table. I have used
> ignite-cassandra-store/KeyValuePersistenceSettings xml bean to map POJO to
> cassandra table. In the POJO one of the fields is Integer (wrapper class)
> mapped to int column in cassandra table. When I load any row having this
> int
> field as null in cassandra, I'm getting that respective field in POJO as 0,
> which is default value of primitive type int. Same is the case when using
> other wrapper classes. How can I get that field as null when the actual
> column field is null in cassandra, since wrapper object can be null.
>
> I found a work around by using custom class extending CacheStoreAdapter and
> using this class in cache configuration in cacheStoreFactory
> property,instead of using  ignite-cassandra-store. This class overrides
> load,write and delete methods. In load method I connect to cassandra
> database using Datastax driver, and load respective row depending upon the
> key passed as parameter to load, and then create a new POJO whose fields
> are
> set to the fields of row returned from cassandra and return the POJO.
> During
> this process I make a check if the int field that I mentioned above is null
> in cassandra by using Row.isNull method of Datastaxdriver and if its not
> null only then I set POJO field to the field value returned from cassandra,
> else it will remain as null.
>
> Is it a bug in ignite-cassandra-store, where I cannot retain null value of
> cassandra table field for primitive types mapped to wrapper classes in POJO
> in ignite? The reason I have used wrapper class objects is to identify if
> its null in cassandra or not, but there seems no way to differentiate
> between primitive type default value and null when using
> ignite-cassandra-store.
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>
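The primitive-vs-wrapper ambiguity described in this thread can be shown in plain Java, independent of Ignite or Cassandra (the class and field names below are made up for illustration):

```java
// Illustrates why wrapper types are needed to represent a SQL NULL column:
// a primitive field cannot represent "no value" — it silently defaults to 0,
// which is indistinguishable from a real stored 0 — while a wrapper stays null.
public class NullColumnDemo {
    static class PrimitivePojo { int age; }       // null column -> 0 (ambiguous)
    static class WrapperPojo   { Integer age; }   // null column -> null (distinct)

    public static void main(String[] args) {
        PrimitivePojo p = new PrimitivePojo();
        WrapperPojo w = new WrapperPojo();
        System.out.println(p.age);         // 0
        System.out.println(w.age);         // null
        System.out.println(w.age == null); // true — NULL is distinguishable
    }
}
```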


Re: IgniteDataStreamer.addData - behavior for a FULL_SYNC Cache

2017-09-05 Thread Dmitriy Setrakyan
On Tue, Sep 5, 2017 at 10:51 AM, mcherkasov  wrote:

> I think javadoc is the best source for this:
>
>  /**
>  * Flag indicating that Ignite should wait for write or commit replies
> from all nodes.
>  * This behavior guarantees that whenever any of the atomic or
> transactional writes
>  * complete, all other participating nodes which cache the written data
> have been updated.
>  */
>
> so with FULL_SYNC the client node will wait until the data is saved on the
> primary node and the backup nodes.
> If you have a REPLICATED cache, that means you have one primary node and
> all other nodes in the cluster
> store backups, so in your case you lost one backup and that's it. The data
> was saved.
>

I don't think this is exactly true. The client node is calling addData(...)
method and will not wait for anything, IgniteDataStreamer is completely
asynchronous.

However, the primary-key server node will wait for the backup server nodes
to be updated before responding back to the client.

In case of REPLICATED cache, the primary node will wait until all other
nodes are updated, so essentially, all nodes are guaranteed to have the
latest state. If one of the nodes crashes, then other nodes will still have
the state.


>
> You right that now you have cluster consists of only 1 node, but you can
> start a new node or even hundred nodes,
> and data will be replicated to all new nodes.


>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
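The addData()/flush() contract discussed above — the call only buffers and returns immediately, while the actual write happens later — can be sketched in plain Java. This is a toy stand-in, not Ignite's IgniteDataStreamer implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in (NOT IgniteDataStreamer) for the asynchronous streaming
// contract: addData() only buffers and returns immediately; the write
// happens later, when the batch is flushed (or auto-flushed by size/time).
public class StreamerSketch {
    final List<String> buffer = new ArrayList<>(); // entries not yet written
    final List<String> store  = new ArrayList<>(); // stand-in for the cache

    // Asynchronous add: the caller is never blocked on the actual write.
    void addData(String entry) {
        buffer.add(entry);
    }

    // Force the buffered batch out, like IgniteDataStreamer.flush().
    void flush() {
        store.addAll(buffer);
        buffer.clear();
    }

    public static void main(String[] args) {
        StreamerSketch s = new StreamerSketch();
        s.addData("k1=v1");
        s.addData("k2=v2");
        System.out.println(s.store.size()); // 0 — nothing written yet
        s.flush();
        System.out.println(s.store.size()); // 2 — batch written on flush
    }
}
```

In the real streamer the FULL_SYNC guarantee applies on the server side: the primary node acknowledges a batch only after its backups are updated, independently of the client-side buffering shown here.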


Re: Specifying location of persistent storage location

2017-09-04 Thread Dmitriy Setrakyan
On Mon, Sep 4, 2017 at 8:40 PM, Raymond Wilson 
wrote:

> Thanks.
>
>
>
> I get the utility of specifying the network address to bind to; I’m not
> convinced using that to derive the name of the internal data store is a
> good idea! J
>
> For instance, what if you have to move a persistent data store to a
> different server? Or are you saying everybody sets LocalHost to 127.0.0.1
> to ensure the folder name is always essentially localhost?
>

I think what you are asking about is a database backup or a snapshot.
Ignite does not support it out of the box, but you may wish to look at the
3rd party solutions, e.g. the one provided by GridGain -
https://docs.gridgain.com/docs/data-snapshots



>
>
> *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> *Sent:* Tuesday, September 5, 2017 3:09 PM
> *To:* user 
>
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
>
>
>
>
> On Mon, Sep 4, 2017 at 6:07 PM, Raymond Wilson 
> wrote:
>
> Dmitriy,
>
>
>
> I set up an XML file based on the default one and added the two elements
> you noted.
>
>
>
> However, this has brought up an issue in that the XML file and an
> IgniteConfiguration instance can’t both be provided to the Ignition.Start()
> call. So I changed it to use the DiscoverSPI aspect of IgniteConfiguration
> and set LocalAddress to “127.0.0.1” and LocalPort to 47500.
>
>
>
> This did change the name of the persistence folder to be “127_0_0_1_47500”
> as you suggested.
>
>
>
> While this resolves my current issue with the folder name changing, it
> still seems fragile as network configuration aspects of the server Ignite
> is running on have a direct impact on an internal aspect of its
> configuration (ie: the location where to store the persisted data). A DHCP
> IP lease renewal or an internal DNS domain change or an internal IT
> department change to using IPv6 addressing (among other things) could cause
> problems when a node restarts and decides the location of its data is
> different.
>
>
>
> Do you know how GridGain manage this in their enterprise deployments using
> persistence?
>
>
>
> I am glad the issue is resolved. By default, Ignite will bind to all the
> local network interfaces, and if they are provided in different order, it
> may create the situation you witnessed.
>
>
>
> All enterprise users explicitly specify which network address to bind to,
> just like you did. This helps avoid any kind of magic in production.
>
>
>
>
>
>
>
>
>
> Thanks,
> Raymond.
>
>
>
> *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> *Sent:* Tuesday, September 5, 2017 11:41 AM
>
>
> *To:* user 
> *Cc:* Raymond Wilson 
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
>
>
> On Mon, Sep 4, 2017 at 4:28 PM, Raymond Wilson 
> wrote:
>
> Hi,
>
>
>
> It’s possible this could cause change in the folder name, though I do not
> think this is an issue in my case. Below are three different folder names I
> have seen. All use the same port number, but differ in terms of the IPV6
> address (I have also seen variations where the IPv6 address is absent in
> the folder name).
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_
> 50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
>
>
> ,
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_
> 8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_38b4_1_858c_
> f0ab_bc60_54ab_2406_e007_38b4_1_c5d8_af4b_55b2_582a_47500
>
>
>
> I start the nodes in my local setup in a well defined order so I would
> expect the port to be the same. I did once start a second instance by
> mistake and did see the port number incremented in the folder name.
>
>
>
> Are you suggesting the two changes you note below will result in the same
> folder name being chosen every time, unlike above?
>
>
>
>
>
> Yes, exactly. My suggestions will ensure that you explicitly bind to the
> same address every time.
>
>
>
>
>
>
>
>
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> *Sent:* Tuesday, September 5, 2017 11:17 AM
> *To:* user 
> *Cc:* Raymond Wilson 
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
>
>
>
>
> On Mon,

Re: Specifying location of persistent storage location

2017-09-04 Thread Dmitriy Setrakyan
On Mon, Sep 4, 2017 at 6:07 PM, Raymond Wilson 
wrote:

> Dmitriy,
>
>
>
> I set up an XML file based on the default one and added the two elements
> you noted.
>
>
>
> However, this has brought up an issue in that the XML file and an
> IgniteConfiguration instance can’t both be provided to the Ignition.Start()
> call. So I changed it to use the DiscoverSPI aspect of IgniteConfiguration
> and set LocalAddress to “127.0.0.1” and LocalPort to 47500.
>
>
>
> This did change the name of the persistence folder to be “127_0_0_1_47500”
> as you suggested.
>
>
>
> While this resolves my current issue with the folder name changing, it
> still seems fragile as network configuration aspects of the server Ignite
> is running on have a direct impact on an internal aspect of its
> configuration (ie: the location where to store the persisted data). A DHCP
> IP lease renewal or an internal DNS domain change or an internal IT
> department change to using IPv6 addressing (among other things) could cause
> problems when a node restarts and decides the location of its data is
> different.
>
>
>
> Do you know how GridGain manage this in their enterprise deployments using
> persistence?
>

I am glad the issue is resolved. By default, Ignite will bind to all the
local network interfaces, and if they are provided in different order, it
may create the situation you witnessed.

All enterprise users explicitly specify which network address to bind to,
just like you did. This helps avoid any kind of magic in production.




>
>
> Thanks,
> Raymond.
>
>
>
> *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> *Sent:* Tuesday, September 5, 2017 11:41 AM
>
> *To:* user 
> *Cc:* Raymond Wilson 
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
>
>
> On Mon, Sep 4, 2017 at 4:28 PM, Raymond Wilson 
> wrote:
>
> Hi,
>
>
>
> It’s possible this could cause change in the folder name, though I do not
> think this is an issue in my case. Below are three different folder names I
> have seen. All use the same port number, but differ in terms of the IPV6
> address (I have also seen variations where the IPv6 address is absent in
> the folder name).
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_
> 50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
>
>
> ,
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_
> 8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_38b4_1_858c_
> f0ab_bc60_54ab_2406_e007_38b4_1_c5d8_af4b_55b2_582a_47500
>
>
>
> I start the nodes in my local setup in a well defined order so I would
> expect the port to be the same. I did once start a second instance by
> mistake and did see the port number incremented in the folder name.
>
>
>
> Are you suggesting the two changes you note below will result in the same
> folder name being chosen every time, unlike above?
>
>
>
>
>
> Yes, exactly. My suggestions will ensure that you explicitly bind to the
> same address every time.
>
>
>
>
>
>
>
>
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> *Sent:* Tuesday, September 5, 2017 11:17 AM
> *To:* user 
> *Cc:* Raymond Wilson 
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
>
>
>
>
> On Mon, Sep 4, 2017 at 3:37 PM, Raymond Wilson 
> wrote:
>
> Hi,
>
>
>
> I definitely have not had more than one server node running at the same
> time (though there have been more than one client node running on the same
> machine).
>
>
>
> I suspect what is happening is that one or more of the network interfaces
> on the machine can have their address change dynamically. What I thought of
> as a GUID is actually (I think) an IPv6 address attached to one of the
> interfaces. This aspect of the folder name tends to come and go.
>
>
>
> You can see from the folder names below that there are quite a number of
> addresses involved. This seems to be fragile (and I certainly see the name
> of this folder changing frequently), so I think being able to set it to
> something concrete would be a good idea.
>
>
>
>
>
> I think I understand what is happening. Ignite starts off with a default
> port, and then starts incrementing it with every new node started on the
> same host. Perhaps yo

Re: Specifying location of persistent storage location

2017-09-04 Thread Dmitriy Setrakyan
On Mon, Sep 4, 2017 at 4:28 PM, Raymond Wilson 
wrote:

> Hi,
>
>
>
> It’s possible this could cause change in the folder name, though I do not
> think this is an issue in my case. Below are three different folder names I
> have seen. All use the same port number, but differ in terms of the IPV6
> address (I have also seen variations where the IPv6 address is absent in
> the folder name).
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_
> 50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
>
>
> ,
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_
> 8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_
> 121_1_192_168_178_27_192_168_3_1_2406_e007_38b4_1_858c_
> f0ab_bc60_54ab_2406_e007_38b4_1_c5d8_af4b_55b2_582a_47500
>
>
>
> I start the nodes in my local setup in a well defined order so I would
> expect the port to be the same. I did once start a second instance by
> mistake and did see the port number incremented in the folder name.
>
>
>
> Are you suggesting the two changes you note below will result in the same
> folder name being chosen every time, unlike above?
>


Yes, exactly. My suggestions will ensure that you explicitly bind to the
same address every time.





>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
> *Sent:* Tuesday, September 5, 2017 11:17 AM
> *To:* user 
> *Cc:* Raymond Wilson 
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
>
>
>
>
> On Mon, Sep 4, 2017 at 3:37 PM, Raymond Wilson 
> wrote:
>
> Hi,
>
>
>
> I definitely have not had more than one server node running at the same
> time (though there have been more than one client node running on the same
> machine).
>
>
>
> I suspect what is happening is that one or more of the network interfaces
> on the machine can have their address change dynamically. What I thought of
> as a GUID is actually (I think) an IPv6 address attached to one of the
> interfaces. This aspect of the folder name tends to come and go.
>
>
>
> You can see from the folder names below that there are quite a number of
> addresses involved. This seems to be fragile (and I certainly see the name
> of this folder changing frequently), so I think being able to set it to
> something concrete would be a good idea.
>
>
>
>
>
> I think I understand what is happening. Ignite starts off with a default
> port, and then starts incrementing it with every new node started on the
> same host. Perhaps you start server and client nodes in different order
> sometimes which causes server to bind to a different port.
>
>
>
> To make sure that your server node binds to the same port all the time,
> you should try specifying it explicitly in the server node configuration,
> like so (forgive me if this snippet does not compile):
>
>
>
>
>
>
>
> <property name="discoverySpi">
>   <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>     <property name="localPort" value="47500"/>
>   </bean>
> </property>
>
>
>
> Please make sure that the client nodes either don't have any port
> configured, or have a different port configured.
>
>
>
> You should also make sure that Ignite always binds to the desired local
> interface on client and server nodes, by specifying 
> IgniteConfiguration.setLocalHost(...)
> property, or like so in XML:
>
>
>
> <property name="localHost" value="127.0.0.1"/>
>
>
>
> If my theory is correct, Ignite should make sure that the clients and
> servers cannot theoretically bind to the same port. I will double check it
> with the community and file a ticket if needed.
>
>
>


Re: Specifying location of persistent storage location

2017-09-04 Thread Dmitriy Setrakyan
On Mon, Sep 4, 2017 at 3:37 PM, Raymond Wilson 
wrote:

> Hi,
>
>
>
> I definitely have not had more than one server node running at the same
> time (though there have been more than one client node running on the same
> machine).
>
>
>
> I suspect what is happening is that one or more of the network interfaces
> on the machine can have their address change dynamically. What I thought of
> as a GUID is actually (I think) an IPv6 address attached to one of the
> interfaces. This aspect of the folder name tends to come and go.
>
>
>
> You can see from the folder names below that there are quite a number of
> addresses involved. This seems to be fragile (and I certainly see the name
> of this folder changing frequently), so I think being able to set it to
> something concrete would be a good idea.
>
>
>
I think I understand what is happening. Ignite starts off with a default
port, and then starts incrementing it with every new node started on the
same host. Perhaps you sometimes start server and client nodes in a
different order, which causes the server to bind to a different port.

To make sure that your server node binds to the same port all the time, you
should try specifying it explicitly in the server node configuration, like
so (forgive me if this snippet does not compile):


>
>
>
> <property name="discoverySpi">
>   <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>     <property name="localPort" value="47500"/>
>   </bean>
> </property>


Please make sure that the client nodes either don't have any port
configured, or have a different port configured.

You should also make sure that Ignite always binds to the desired local
interface on client and server nodes, by specifying
IgniteConfiguration.setLocalHost(...) property, or like so in XML:

<property name="localHost" value="127.0.0.1"/>


If my theory is correct, Ignite should make sure that the clients and
servers cannot theoretically bind to the same port. I will double check it
with the community and file a ticket if needed.
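The practical effect of binding to a fixed address and port is a stable persistence folder name. The helper below is hypothetical — not an Ignite API — and simply mimics the underscore mapping visible in the folder names quoted in this thread:

```java
// Hypothetical helper (not an Ignite API) showing how the persistence
// folder name relates to the node's bound address and discovery port:
// address/port separators are replaced with underscores.
public class FolderNameSketch {
    static String folderName(String bindAddress, int port) {
        return (bindAddress + "_" + port).replace('.', '_').replace(':', '_');
    }

    public static void main(String[] args) {
        // Matches the "127_0_0_1_47500" folder seen after fixing the bind address.
        System.out.println(folderName("127.0.0.1", 47500)); // 127_0_0_1_47500
    }
}
```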


Re: Specifying location of persistent storage location

2017-09-04 Thread Dmitriy Setrakyan
Hi Raymond,

Sorry for the initial confusion. The consistent ID is the combination of
the local IP and port. You DO NOT need to do anything special to configure
it.

If you had different folders created under the work folder, you probably
had more than one node running at the same time. Can you please make sure
that it was not the case?

D.

On Mon, Sep 4, 2017 at 2:55 PM, Raymond Wilson 
wrote:

> Hi Dmitry,
>
>
>
> I looked at IgniteConfiguration in the C# client, but it does not have
> consistentID in its namespace.
>
>
>
> I pulled the C# client source code and searched in there and was not able
> to find it. Perhaps this is not exposed in the C# client at all?
>
>
>
> If that is that case, how would I configure this?
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Dmitry Pavlov [mailto:dpavlov@gmail.com]
> *Sent:* Tuesday, September 5, 2017 9:24 AM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
> Hi Raymond,
>
>
>
> Node.consistentId by default is the sorted set of local IP addresses and
> ports. This field value survives node restarts.
>
>
>
> At the same time consistent ID may be set using
> IgniteConfiguration.setConsistentId() if you need to specify it manually.
>
> I'm not sure how to write in C# syntax, but I am pretty sure it may be
> configured.
>
>
>
> Sincerely,
>
> Dmitriy Pavlov
>
>
>
> вт, 5 сент. 2017 г. в 0:12, Raymond Wilson :
>
> … also, the documentation for ClusterNode here
> (https://www.gridgain.com/sdk/pe/latest/javadoc/org/apache/ignite/cluster/ClusterNode.html)
> only describes a getter for the consistent ID; I need to be able to set it.
>
>
>
> *From:* Raymond Wilson [mailto:raymond_wil...@trimble.com]
> *Sent:* Tuesday, September 5, 2017 9:06 AM
> *To:* 'user@ignite.apache.org' 
> *Subject:* RE: Specifying location of persistent storage location
>
>
>
> Apologies if this is a silly question, but I’m struggling to see how to
> get at the consistentID member of ClusterNode on the C# client.
>
>
>
> If I look at IClusterNode I only see “Id”, which is the ID that changes
> each restart. Is consistentID a Java client only feature?
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Raymond Wilson [mailto:raymond_wil...@trimble.com
> ]
> *Sent:* Tuesday, September 5, 2017 6:04 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
> Thank you Dmitry!
>
> Sent from my iPhone
>
>
> On 5/09/2017, at 1:12 AM, Dmitry Pavlov  wrote:
>
> Hi Raymond,
>
>
>
> Ignite Persistent Store includes the cluster node's consistentId parameter
> in the folder name. This is required because two nodes may be started on
> the same physical machine.
>
>
>
> Using the same folder each time is ensured by this property:
>
> ClusterNode.consistentId - a consistent, globally unique node ID. Unlike
> ClusterNode.id, this parameter contains a consistent node ID which survives
> node restarts.
>
>
>
> Sincerely,
>
> Dmitriy Pavlov
>
>
>
>
>
> сб, 2 сент. 2017 г. в 23:40, Raymond Wilson :
>
> Hi,
>
>
>
> I’m running a POC looking at the Ignite Persistent Store feature.
>
>
>
> I have added a section to the configuration for the Ignite grid as follows:
>
>
>
> cfg.PersistentStoreConfiguration = new
> PersistentStoreConfiguration()
>
> {
>
> PersistentStorePath = PersistentCacheStoreLocation,
>
> WalArchivePath = Path.Combine(PersistentCacheStoreLocation,
> "WalArchive"),
>
> WalStorePath = Path.Combine(PersistentCacheStoreLocation,
> "WalStore"),
>
> };
>
>
>
> When I run the Ignite grid (a single node running locally) it then creates
> a folder inside the PersistentCacheStoreLocation with a complicated name,
> like this (which looks like a collection of IP addresses and a GUID for
> good measure, and perhaps with a port number added to the end):
>
>
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_
> 1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_50c9
> _6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
>
>
>,
>
>
>
> Within that folder are then placed folders containing the content for each
> cache in the system
>
>
>
> Oddly, if I stop and then restart the grid I sometime get another folder
> with a slightly different complicated name, like this:
>
>
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_
> 1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_8005
> _b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
>
>
> How do I ensure my grid uses the same persistent location each time? There
> doesn’t seem anything obvious in the PersistentStoreConfiguration that
> relates to this, other than the root location of the folder to store
> persisted data.
>
>
>
> Thanks,
> Raymond.
>
>
>
>


Re: UPDATE SQL for nested BinaryObject throws exception.

2017-09-04 Thread Dmitriy Setrakyan
On Mon, Sep 4, 2017 at 7:40 AM, afedotov 
wrote:

> Hi,
>
> Actually, flattening the nested properties with aliases only works one
> level deep for now.
> Looks like it's a bug. I'll file a JIRA ticket for this.
>
>
Alex, maybe one level nesting will be enough for some use cases. Is there
an example in documentation somewhere we can all look at? If not, should we
add such documentation?


>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is the following statement true in all cases for REPLICATED mode ?

2017-09-03 Thread Dmitriy Setrakyan
On Mon, Aug 28, 2017 at 5:23 AM, agura  wrote:

When persistence store is enabled the data pages that can't be stored in
> the memory will be evicted to the persistence store.


Andrey, this is not how Ignite persistence works.

When the persistence is enabled, all the data will be persisted, without
exceptions. If the data size exceeds the pre-configured allocated
memory, then Ignite will start purging the cold pages and will only keep
the hot data in memory.

D.
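The hot/cold page behavior described above can be illustrated with a bounded LRU map in plain Java. This is a toy model, not Ignite's actual page-replacement code — and in Ignite the purged pages remain safely on disk:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of the hot/cold page idea: a bounded access-ordered LRU map
// keeps the most recently touched pages while older ones are purged.
public class HotPageCacheSketch {
    // Touch pages 1..totalPages in order, keeping at most maxPages in memory.
    static List<Integer> hotKeys(int maxPages, int totalPages) {
        Map<Integer, String> hot = new LinkedHashMap<Integer, String>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<Integer, String> e) {
                return size() > maxPages; // purge the least recently used page
            }
        };
        for (int i = 1; i <= totalPages; i++)
            hot.put(i, "page-" + i);      // touching a page makes it "hot"
        return new ArrayList<>(hot.keySet());
    }

    public static void main(String[] args) {
        System.out.println(hotKeys(3, 5)); // [3, 4, 5] — pages 1 and 2 purged
    }
}
```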


Re: Can I use Ignite for my case?

2017-09-03 Thread Dmitriy Setrakyan
James,

I think you will find this documentation useful:

https://apacheignite.readme.io/docs/getting-started

D.

On Sun, Sep 3, 2017 at 8:32 PM, James <2305958...@qq.com> wrote:

> First, I really appreciate a help from Dmitriy.
>
> My company has assigned me to work on a big product using Ignite. A
> one-node server is standard and free to all customers. If customers want
> to use more servers, this product will add modules so the customers can
> just pay for them and use multi-node computing. If I want to use Ignite
> for a one-node server, I want to make sure the performance on one node is
> at least a little better than MySQL.
>
> I just found Ignite this past weekend. I need to finish a prototype this
> week. Ignite is so big. I would appreciate it if you could provide me with
> some answers to the following basic questions.
>
> What is the API for a standard client-server setup? What is the API to
> embed Ignite into my Java application?
>
> I would like to get some database SQL query benchmarks for Ignite vs.
> MySQL. Where can I find a big database dataset?
>
> Thanks,
>
> James.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Retrieving multiple keys with filtering

2017-09-03 Thread Dmitriy Setrakyan
Semyon,

Can you please clarify this. Do we allow concurrent reads while invokeAll
or invoke is executed?

D.

On Tue, Aug 29, 2017 at 11:59 AM, Andrey Kornev 
wrote:

> Ah, yes! Thank you, Semyon! According to invokeAll() javadocs "No mappings
> will be returned for EntryProcessors that return a null value for a key." I
> should read JCache javadocs more carefully next time. :)
>
>
> Still, the processor is invoked while a monitor is held on the cache entry
> being processed, which is of course unnecessary in a read-only case like
> the one we're discussing in this thread...
>
>
> I guess I'm stuck with the Compute-based approach for now. :(
>
> Thanks!
> Andrey
>
> --
> *From:* Semyon Boikov 
> *Sent:* Tuesday, August 29, 2017 6:15 AM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Retrieving multiple keys with filtering
>
> Hi,
>
> If EntryProcessor returns null then null is not added in the result map.
> But I agree that using invokeAll() will have a lot of unnecessary overhead.
> Perhaps we need to add a new getAll method to the API; otherwise the best
> alternative is to use a custom ComputeJob or affinityCall.
>
> Thanks,
> Semyon
>
> On Tue, Aug 29, 2017 at 7:20 AM, Dmitriy Setrakyan 
> wrote:
>
>> Andrey,
>>
>> I am not sure I understand. According to EntryProcessor API [1] you can
>> chose to return nothing.
>>
>> Also, to my knowledge, you can still do parallel reads while executing
>> the EntryProcessor. Perhaps other community members can elaborate on this.
>>
>> [1] https://static.javadoc.io/javax.cache/cache-api/1.0.0/index.html?javax/cache/processor/EntryProcessor.html
>>
>> D.
>>
>>
>> On Mon, Aug 28, 2017 at 8:29 PM, Andrey Kornev 
>> wrote:
>>
>>> Dmitriy,
>>>
>>>
>>> It's good to be back! 😃 Glad to find Ignite community as vibrant
>>> and thriving as ever!
>>>
>>> Speaking of invokeAll(), even if we ignore for a moment the overhead
>>> associated with locking/unlocking a cache entry prior to passing it to the
>>> EntryProcessor as well as the overhead associated with enlisting the
>>> touched entries in a transaction, the bigger problem with using
>>> invokeAll() for filtering is that EntryProcessor must return a value. I'm
>>> not aware of any way to make EntryProcessor drop the entry from the
>>> response. The only options is to use a null (or false) to indicate a
>>> filtered out entry. In my specific case, I'll end up sending back a whole
>>> bunch of nulls in the result map as I expect most of the keys to be
>>> rejected by the filter.
>>>
>>> Overall, invokeAll() is not what one would call an *efficient* (the key
>>> word in my original question) way of filtering.
>>>
>>> Thanks!
>>> Andrey
>>>
>>> --
>>> *From:* Dmitriy Setrakyan 
>>> *Sent:* Saturday, August 26, 2017 8:37 AM
>>> *To:* user
>>>
>>> *Subject:* Re: Retrieving multiple keys with filtering
>>>
>>> Andrey,
>>>
>>> Good to hear from you. Long time no talk.
>>>
>>> I don't think invokeAll has only update semantics. You can definitely
>>> use it just to look at the keys and return a result. Also, as you
>>> mentioned, Ignite compute is a viable option as well.
>>>
>>> The reason that predicates were removed from the get methods is that
>>> the API was becoming unwieldy, and also because JCache does not require it.
>>>
>>> D.
>>>
>>> On Thu, Aug 24, 2017 at 10:50 AM, Andrey Kornev <
>>> andrewkor...@hotmail.com> wrote:
>>>
>>>> Well, I believe invokeAll() has "update" semantics and using it for
>>>> read-only filtering of cache entries is probably not going to be efficient
>>>> or even appropriate.
>>>>
>>>>
>>>> I'm afraid the only viable option I'm left with is to use Ignite's
>>>> Compute feature:
>>>>
>>>> - on the sender, group the keys by affinity.
>>>>
>>>> - send each group along with the filter predicate to their
>>>> affinity nodes using IgniteCompute.
>>>>
>>>> - on each node, use getAll() to fetch the local keys and apply the
>>>> filter.
>>>>
>>>> - on the sender node, collect the results of the compute jobs into a
>>>> map.
>>>>
>>>>
>>&
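The compute-based plan outlined in this thread — group keys by affinity, filter locally on each node, and return only the surviving entries — can be sketched in plain Java. The partition function below is a hash stand-in for Ignite's Affinity API; in a real cluster each per-partition group would travel to its primary node as a compute job, and no nulls come back for rejected keys:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Plain-Java sketch of the filtered getAll pattern (not Ignite code).
public class FilteredGetAllSketch {
    static final int PARTS = 4;

    // Stand-in for Affinity.partition(key): maps each key to a partition.
    static int partition(String key) {
        return Math.abs(key.hashCode() % PARTS);
    }

    static Map<String, Integer> filteredGetAll(Map<String, Integer> cache,
                                               Collection<String> keys,
                                               Predicate<Integer> filter) {
        // 1) Group the requested keys by partition.
        Map<Integer, List<String>> byPart = new HashMap<>();
        for (String k : keys)
            byPart.computeIfAbsent(partition(k), p -> new ArrayList<>()).add(k);

        // 2) "On each node": fetch the group's keys and apply the filter locally.
        Map<String, Integer> result = new HashMap<>();
        for (List<String> group : byPart.values()) {
            for (String k : group) {
                Integer v = cache.get(k);      // local getAll()
                if (v != null && filter.test(v))
                    result.put(k, v);          // 3) rejected keys are simply dropped
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = new HashMap<>();
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        Map<String, Integer> r =
            filteredGetAll(cache, Arrays.asList("a", "b", "c", "x"), v -> v > 1);
        System.out.println(r.size()); // 2 — only b and c pass the filter
    }
}
```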

Re: Ignite sql queries working transactionally

2017-09-03 Thread Dmitriy Setrakyan
Denis, I think you provided an incorrect link to the ticket. Here is the
correct link:

https://issues.apache.org/jira/browse/IGNITE-3478

D.

On Wed, Aug 30, 2017 at 5:50 PM, Denis Magda  wrote:

> Hi,
>
> The docs are still valid - SQL operations are not fully transactional yet
> and, according to JIRA, the work to make this happen is in progress:
> https://ggsystems.atlassian.net/browse/IGN-4666
>
> —
> Denis
>
>
> On Aug 30, 2017, at 12:21 AM, kotamrajuyashasvi <
> kotamrajuyasha...@gmail.com> wrote:
>
> Hi
>
> In my ignite client application I need to perform a set of update/delete
> sql
> query operations transactionally. I observed  that by using ignite
> transactions I was able to achieve this. When ever an update or delete
> query
> is executed with in a transaction, it is locking all resulting rows and
> thus
> preventing other clients modify/delete the same rows using update/delete
> queries. I checked this in the following way.
>
> First I started an Ignite client and began a transaction. Then I executed
> an update query acting upon some rows. Then I made this client sleep for
> a few seconds before committing the transaction. I immediately started
> another client and tried executing a delete query which would act upon the
> same rows (or a few of them) as the update query in the first client. I
> could observe that the second client waits until the first client commits
> and only then executes its delete query.
>
> Rollback functionality also works on update/delete queries. So does it
> mean that Ignite now supports fully transactional SQL queries? It was
> mentioned in many previous posts on this list that Ignite SQL queries are
> not transactional, and the 2.1 docs also state that '*At SQL level
> Ignite supports atomic, but not yet transactional consistency. Ignite
> community plans to implement SQL transactions in version 2.2*.'. What does
> this mean? Also, nowhere do the docs mention using SQL queries in
> transactions.
>
> Can I use the Ignite JDBC thin client with SQL queries and transactions?
>
> I am using ignite version 2.1
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


Re: UPDATE SQL for nested BinaryObject throws exception.

2017-09-03 Thread Dmitriy Setrakyan
Cross sending to dev@

Igniters, up until version 1.9, the nested fields were supported by
flattening the names. Do we still support it? I cannot seem to find
documentation for it.

D.

On Thu, Aug 31, 2017 at 7:12 AM, takumi  wrote:

> This is a part of the real code that I wrote.
>
> -
>   List entities = new ArrayList<>();
>   QueryEntity qe = new QueryEntity(String.class.getName(), "cache");
>   qe.addQueryField("attribute.prop1", Double.class.getName(), "prop3");
>   qe.addQueryField("attribute.prop2", String.class.getName(), "prop4");
>   qe.addQueryField("attribute.prop.prop1", Double.class.getName(),
> "prop5");
>   qe.addQueryField("attribute.prop.prop2", String.class.getName(),
> "prop6");
>
>   BinaryObject bo  =ib.builder("cache").setField("attribute",
> ib.builder("cache.attribute")
>   .setField("prop",
> ib.builder("cache.attribute.prop")
>.setField("prop1", 50.0, Double.class)
>.setField("prop2", "old", String.class))
>   .setField("prop1", 50.0, Double.class)
>   .setField("prop2", "old", String.class)).build();
>
>   cache.put("key1", bo);
>   cache.query(new SqlFieldsQuery("update cache set prop4 = 'new'  where
> prop3 >= 20.0"));//OK
>   cache.query(new SqlFieldsQuery("update cache set prop6 = 'new'  where
> prop5 >= 20.0"));//NG
> -
>
> I can update 'prop4' by SQL, but I do not update 'prop6' by SQL.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Limiting the size of Persistent Store and clearing data on restart

2017-09-03 Thread Dmitriy Setrakyan
On Sat, Sep 2, 2017 at 5:11 AM, userx  wrote:

> Hi all,
>
> Regarding question 2, let me put it this way.
>
> Just as Ignite has an eviction policy for RAM, described at
> https://apacheignite.readme.io/docs/evictions
> is there an eviction policy for a persistent store? Say, for a use case
> wherein I cannot allocate a dedicated 1 or 2 TB of space (as we do for a
> relational DBMS), I would want to define an upper limit of, say, 300 GB,
> and if $IGNITE_HOME/PersistentStore grows beyond that, evict the LRU
> cache entries.
>
>
Evictions for RAM make sense because the data is still on disk and is not
lost. Evictions from disk do not make sense to me, nor am I aware of other
databases that have this feature. I believe it should be handled by users
based on the application requirements.

I think the Ignite community may agree to consider this feature as a
possible enhancement. Can you provide an example of a similar feature you
may have seen in another persistent store or a database?


>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
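
For reference, the RAM-side limit being contrasted with disk in this thread is set per memory policy in the Ignite 2.x configuration (pre-2.3 API). Below is a sketch of a Spring XML fragment; the policy name and the 4 GB size are chosen arbitrarily, and there is no corresponding cap for the on-disk store, as discussed above:

```xml
<!-- Sketch only: caps the in-memory region at 4 GB.
     The persistent store on disk has no such cap. -->
<property name="memoryConfiguration">
  <bean class="org.apache.ignite.configuration.MemoryConfiguration">
    <property name="defaultMemoryPolicyName" value="limited_plc"/>
    <property name="memoryPolicies">
      <list>
        <bean class="org.apache.ignite.configuration.MemoryPolicyConfiguration">
          <property name="name" value="limited_plc"/>
          <!-- maxSize is in bytes -->
          <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
        </bean>
      </list>
    </property>
  </bean>
</property>
```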


Re: Memory is not going down after cache.clean()

2017-09-03 Thread Dmitriy Setrakyan
On Sun, Sep 3, 2017 at 12:35 AM, dkarachentsev 
wrote:

> Hi,
>
> Just to clarify my words a bit. When persistence is enabled, all in-memory
> data is stored on disk with full durability guarantees. But it also allows
> you to store more data in the cache than fits in memory: Ignite just
> evicts stale data pages from RAM, and when they are needed they are
> loaded from disk.
>
> So you always have everything on hard drive and hot data in memory.
>

Let me clarify a bit more. When Ignite persistence is enabled, all the data
gets persisted to disk. If the data size exceeds the pre-configured memory
size, then Ignite will start purging the cold pages and will only keep the
hot data in memory.


Re: Can I use Ignite for my case?

2017-09-03 Thread Dmitriy Setrakyan
Hi James,

My answers are inline...

On Sun, Sep 3, 2017 at 3:41 AM, James <2305958...@qq.com> wrote:

> I am searching for a solution to my case. I just found Apache Ignite
> yesterday. It appears to be a good solution. But I am not sure. I need your
> suggestions.
>
> My data is stored in MySQL. It is TB level. I like to do the following:
> 1. Load all data from MySQL into Ignite.
>

You should utilize IgniteDataStreamer [1] API to load data into Ignite.


> 2. Run SQL queries over all the data to do some "group by" on numeric
> data to calculate sum, avg, max, min, etc.
>

You can do it in Ignite the standard client-server way. However, Ignite is
a distributed system and a much more performant way would be to send
computations to the nodes where the data is and run your logic locally on
those nodes. [2]


> 3. If I need to do some complex math operations, I would like Ignite to
> provide stored procedures so I can run them on the Ignite side.
>

Ignite does not have stored procedures. Instead, you should use collocated
computations. [2]


> 4. I would like to embed Ignite into my Java application.
>

This is easy. Ignite has a very rich Java API and comes with many Java
examples.


> 5. I just want to use one server to achieve the above goal.
>

If you use just one server, then what would be the advantage over MySQL?
Ignite brings the most value when you need to scale-out your application. I
would consider at least 2 servers or more.


[1] https://apacheignite.readme.io/docs/data-streamers
[2] https://apacheignite.readme.io/docs/collocate-compute-and-data
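
An aggregate of the kind described in item 2 looks the same in Ignite SQL as it does in MySQL. A sketch only, with a made-up Person(city, salary) schema:

```sql
-- Hypothetical schema: Person(city VARCHAR, salary DOUBLE)
SELECT city,
       COUNT(*)    AS cnt,
       SUM(salary) AS total,
       AVG(salary) AS average,
       MAX(salary) AS highest,
       MIN(salary) AS lowest
FROM Person
GROUP BY city;
```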


Re: Limiting the size of Persistent Store and clearing data on restart

2017-09-01 Thread Dmitriy Setrakyan
On Tue, Aug 29, 2017 at 7:00 PM, userx  wrote:

> Hi All,
>
> 1) Like the conventional configuration of
> https://apacheignite.readme.io/docs/memory-configuration#
> section-memory-policies
> where in we could limit the size of the memory, how do we define a
> configuration which can limit the size of Persistent Store so that say out
> of a 500GB HDisk, I don't want this Ignite related data to go beyond 300GB
> ?
>

You would have to control your data on disk by yourself. Evicting from disk
is very sensitive and use case specific, so it would be almost impossible
to automate it.


>
> 2) Also, is there a configuration such that when Ignite data grid servers
> are restarted (say some code change either on client or server side),
> whatever was persisted before gets wiped out completely on all the
> participating nodes and only fresh persisted data is there ?
>

How about calling destroyCache(...) on startup?


Re: Limiting the size of Persistent Store and clearing data on restart

2017-09-01 Thread Dmitriy Setrakyan
On Fri, Sep 1, 2017 at 1:41 PM, Denis Mekhanikov 
wrote:

> Regarding your second question: looks like you don't actually need
> persistence. Its purpose is the opposite: to save cache data between
> restarts.
> If you use persistence to store more data than RAM available, then you can
> enable swap space: https://apacheignite.readme.io/v1.9/docs/off-heap-mem
> ory#section-swap-space
>

Denis, starting with 2.1, Ignite does not have the swap space anymore,
since it never worked well and kept hurting the performance. We recommend
now that users do use persistence whenever the data size is bigger than the
available memory size.


Re: Retrieving multiple keys with filtering

2017-08-28 Thread Dmitriy Setrakyan
Andrey,

I am not sure I understand. According to EntryProcessor API [1] you can
choose to return nothing.

Also, to my knowledge, you can still do parallel reads while executing the
EntryProcessor. Perhaps other community members can elaborate on this.

[1]
https://static.javadoc.io/javax.cache/cache-api/1.0.0/index.html?javax/cache/processor/EntryProcessor.html

D.


On Mon, Aug 28, 2017 at 8:29 PM, Andrey Kornev 
wrote:

> Dmitriy,
>
>
> It's good to be back! 😃 Glad to find Ignite community as vibrant
> and thriving as ever!
>
> Speaking of invokeAll(), even if we ignore for a moment the overhead
> associated with locking/unlocking a cache entry prior to passing it to the
> EntryProcessor as well as the overhead associated with enlisting the
> touched entries in a transaction, the bigger problem with using
> invokeAll() for filtering is that EntryProcessor must return a value. I'm
> not aware of any way to make EntryProcessor drop the entry from the
> response. The only options is to use a null (or false) to indicate a
> filtered out entry. In my specific case, I'll end up sending back a whole
> bunch of nulls in the result map as I expect most of the keys to be
> rejected by the filter.
>
> Overall, invokeAll() is not what one would call an *efficient* (the key
> word in my original question) way of filtering.
>
> Thanks!
> Andrey
>
> --
> *From:* Dmitriy Setrakyan 
> *Sent:* Saturday, August 26, 2017 8:37 AM
> *To:* user
>
> *Subject:* Re: Retrieving multiple keys with filtering
>
> Andrey,
>
> Good to hear from you. Long time no talk.
>
> I don't think invokeAll has only update semantics. You can definitely use
> it just to look at the keys and return a result. Also, as you mentioned,
> Ignite compute is a viable option as well.
>
> The reason that predicates were removed from the get methods is because
> the API was becoming unwieldy, and also because JCache does not require it.
>
> D.
>
> On Thu, Aug 24, 2017 at 10:50 AM, Andrey Kornev 
> wrote:
>
>> Well, I believe invokeAll() has "update" semantics and using it for
>> read-only filtering of cache entries is probably not going to be efficient
>> or even appropriate.
>>
>>
>> I'm afraid the only viable option I'm left with is to use Ignite's
>> Compute feature:
>>
>> - on the sender, group the keys by affinity.
>>
>> - send each group along with the filter predicate to their affinity nodes
>> using IgniteCompute.
>>
>> - on each node, use getAll() to fetch the local keys and apply the filter.
>>
>> - on the sender node, collect the results of the compute jobs into a map.
>>
>>
>> It's unfortunate that Ignite dropped that original API. What used to be a
>> single API call is now a non-trivial algorithm and one has to worry about
>> things like what happens if the grid topology changes while the compute
>> jobs are executing, etc.
>>
>> Can anyone think of any other less complex/more robust approach?
>>
>> Thanks
>> Andrey
>>
>> --
>> *From:* slava.koptilin 
>> *Sent:* Thursday, August 24, 2017 9:03 AM
>> *To:* user@ignite.apache.org
>> *Subject:* Re: Retrieving multiple keys with filtering
>>
>> Hi Andrey,
>>
>> Yes, you are right. ScanQuery scans all entries.
>> Perhaps, IgniteCache#invokeAll(keys, cacheEntryProcessor) with custom
>> processor will work for you.
>> https://ignite.apache.org/releases/2.1.0/javadoc/org/apache/
>> ignite/IgniteCache.html#invokeAll(java.util.Set,%20org
>> .apache.ignite.cache.CacheEntryProcessor,%20java.lang.Object...)
>>
>> Thanks!
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Retrieving-multiple-keys-with-filtering-
>> tp16391p16400.html
>> Apache Ignite Users - Retrieving multiple keys with filtering
>> <http://apache-ignite-users.70518.x6.nabble.com/Retrieving-multiple-keys-with-filtering-tp16391p16400.html>
>> apache-ignite-users.70518.x6.nabble.com
>> Retrieving multiple keys with filtering. Hello, I have a list of cache
>> keys (up to a few hundred of them) and a filter predicate. I'd like to
>> efficiently retrieve only those values that pass the...
>>
>>
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
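
The scatter-gather plan sketched in this thread can be illustrated without any Ignite APIs at all. Below, "nodes" are plain in-process maps and the affinity mapping is a bare hash; all names are hypothetical, and a real implementation would use ignite.affinity(...).mapKeysToNodes(...) and IgniteCompute as described above:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class FilteredGetAllSketch {

    // Hypothetical stand-in for ignite.affinity(cache).mapKeysToNodes(...):
    // here a "node" is just an index derived from the key hash.
    static int nodeFor(String key, int nodeCount) {
        return Math.floorMod(key.hashCode(), nodeCount);
    }

    // Scatter-gather: group keys by node, do a local getAll + filter on each
    // node, then merge the per-node results on the caller.
    static Map<String, Integer> filteredGetAll(List<Map<String, Integer>> nodes,
                                               Collection<String> keys,
                                               Predicate<Integer> filter) {
        Map<Integer, List<String>> keysByNode = keys.stream()
            .collect(Collectors.groupingBy(k -> nodeFor(k, nodes.size())));

        Map<String, Integer> result = new HashMap<>();
        keysByNode.forEach((node, localKeys) -> {
            Map<String, Integer> local = nodes.get(node); // the node-local "getAll"
            for (String k : localKeys) {
                Integer v = local.get(k);
                if (v != null && filter.test(v))
                    result.put(k, v); // rejected entries are simply dropped
            }
        });
        return result;
    }

    public static void main(String[] args) {
        List<Map<String, Integer>> nodes = List.of(new HashMap<>(), new HashMap<>());
        // Place each entry on the node the affinity function prescribes.
        Map<String, Integer> data = Map.of("a", 1, "b", 2, "c", 3);
        data.forEach((k, v) -> nodes.get(nodeFor(k, 2)).put(k, v));

        System.out.println(filteredGetAll(nodes, data.keySet(), v -> v >= 2));
    }
}
```

The topology-change concern raised above is exactly what this sketch glosses over: the grouping is only valid while the node list is stable.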


Re: Retrieving multiple keys with filtering

2017-08-26 Thread Dmitriy Setrakyan
Andrey,

Good to hear from you. Long time no talk.

I don't think invokeAll has only update semantics. You can definitely use
it just to look at the keys and return a result. Also, as you mentioned,
Ignite compute is a viable option as well.

The reason that predicates were removed from the get methods is because the
API was becoming unwary, and also because JCache does not require it.

D.

On Thu, Aug 24, 2017 at 10:50 AM, Andrey Kornev 
wrote:

> Well, I believe invokeAll() has "update" semantics and using it for
> read-only filtering of cache entries is probably not going to be efficient
> or even appropriate.
>
>
> I'm afraid the only viable option I'm left with is to use Ignite's Compute
> feature:
>
> - on the sender, group the keys by affinity.
>
> - send each group along with the filter predicate to their affinity nodes
> using IgniteCompute.
>
> - on each node, use getAll() to fetch the local keys and apply the filter.
>
> - on the sender node, collect the results of the compute jobs into a map.
>
>
> It's unfortunate that Ignite dropped that original API. What used to be a
> single API call is now a non-trivial algorithm and one has to worry about
> things like what happens if the grid topology changes while the compute
> jobs are executing, etc.
>
> Can anyone think of any other less complex/more robust approach?
>
> Thanks
> Andrey
>
> --
> *From:* slava.koptilin 
> *Sent:* Thursday, August 24, 2017 9:03 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: Retrieving multiple keys with filtering
>
> Hi Andrey,
>
> Yes, you are right. ScanQuery scans all entries.
> Perhaps, IgniteCache#invokeAll(keys, cacheEntryProcessor) with custom
> processor will work for you.
> https://ignite.apache.org/releases/2.1.0/javadoc/org/
> apache/ignite/IgniteCache.html#invokeAll(java.util.Set,%
> 20org.apache.ignite.cache.CacheEntryProcessor,%20java.lang.Object...)
>
> Thanks!
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Retrieving-multiple-keys-with-
> filtering-tp16391p16400.html
> Apache Ignite Users - Retrieving multiple keys with filtering
> 
> apache-ignite-users.70518.x6.nabble.com
> Retrieving multiple keys with filtering. Hello, I have a list of cache
> keys (up to a few hundred of them) and a filter predicate. I'd like to
> efficiently retrieve only those values that pass the...
>
>
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Testing Ignite Applications Locally

2017-08-26 Thread Dmitriy Setrakyan
Love the idea. Let's add a Testing Ignite Apps page on Readme. Denis, I
don't think we need many snippets, just a few.

As for the Maven archetype, Yakov, is its only purpose to generate a
project so that users can add tests to it?

D.

On Fri, Aug 25, 2017 at 8:51 AM, Denis Magda  wrote:

> Yasha,
>
> Sure, I’ll help from the documentation point but will need raw material
> from you, guys - test snippets, essential configuration parameters, etc.
>
> —
> Denis
>
> On Aug 25, 2017, at 8:30 AM, Yakov Zhdanov  wrote:
>
> Guys,
>
> I want to discuss the subject again. It is pretty evident that having a
> wide set of local unit and simple integration tests most likely helps to
> avoid many failures and bugs when going to a server environment.
>
> I participated in many POCs and I can say for sure: if developers do not
> implement local tests, then their application is broken. This is true for
> the entire industry. Why would anyone think that Ignite and distributed
> systems in general are exceptions here? The complexity added by their
> distributed nature probably makes local tests even more necessary.
>
> So, what Ignite already offers here and what can be done further?
>
> 1. Ignite offers the ability to emulate a cluster, and even many clusters,
> in a single VM. Let's create a page on readme.io explaining how to start
> topologies in a single VM and provide a couple of examples of unit tests
> for cache operations and, for example, queries. Denis Magda, can you help?
> (Yes, we don't have a page explaining how to test Ignite locally!)
>
> 2. Ignite has a large and rich set of tests in its code base. We can
> provide the link on the page at p1.
>
> 3. Let's create a Maven archetype for Ignite, so that executing the command
> [1] will give me an initialized project with valid POMs, sample batch
> scripts, sample Ignite configs, a sample logger configuration, and a test
> sources folder containing several JUnit tests (!!).
>
> [1]  mvn archetype:generate \
>   -DinteractiveMode=false \
>   -DarchetypeGroupId=org.apache.ignite \
>   -DarchetypeArtifactId=ignite-app-archetype \
>   -DgroupId=org.sample \
>   -DartifactId=sampleapp \
>   -Dversion=1.0
>
> Please share your thoughts and we can file tickets to start moving.
>
> --Yakov
>
>
>


Re: In ignite 2.1, persist a particular cache rather than all caches

2017-08-19 Thread Dmitriy Setrakyan
On Thu, Aug 17, 2017 at 4:48 PM, Marco  wrote:

> Hi Val,
> Thank you for the response and it's good to know this feature will be
> announced in next release.
>

You should expect it in September time frame.


Re: Query about running SQL with durable memory

2017-08-08 Thread Dmitriy Setrakyan
On Sat, Aug 5, 2017 at 10:17 PM, iostream  wrote:

> Suppose I have 10 Person entries on the disk, out of which only 5 are
> in-memory. Now if I run a SQL query which is expected to count the number
> of
> entries in Person cache, will the query run only on the disk or RAM or will
> it run on both?
>

The SQL query will simply run over the total data set, which is 10 persons,
but it will obviously process the data that is cached in-memory faster. You
can also speed up SQL queries by indexing the data.


>
> If the query will run on both the disk and RAM, will the count be 10 or 15
> (10 on disk + 5 in RAM)? Does the SQL processor know which entries are
> present in-memory to resolve duplicates?
>
>
If I understand your example correctly, then the super-set of data is 10
Persons (right?). In this case, the total count returned by Ignite will be
10.


>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Query-about-running-SQL-with-
> durable-memory-tp16015.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: GA Grid (Beta): Genetic Algorithm component for Ignite is here!

2017-08-08 Thread Dmitriy Setrakyan
Thanks, Turik, very interesting!

On Mon, Aug 7, 2017 at 8:58 PM, techbysample  wrote:

> Igniters,
>
> Check out the new GA Grid(Beta) project here:
>
> https://github.com/techbysample/gagrid
>
> GA Grid (Beta) is a distributed in-memory Genetic Algorithm (GA) component
> for Apache Ignite.
> A GA is a method of solving optimization problems by simulating the process
> of biological evolution.
>
> GAs are excellent for searching through large and complex data sets for an
> optimal solution.
> Real world applications of GAs include:  automotive design, computer
> gaming,
> robotics, investments,
> traffic/shipment routing and more.
>
>
>  GAGrid_Overview.png>
>
> Best,
> Turik Campbell
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.705
> 18.x6.nabble.com/GA-Grid-Beta-Genetic-Algorithm-component-
> for-Ignite-is-here-tp16041.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Affinity Key field is not identified if binary configuration is used on cache key object

2017-08-05 Thread Dmitriy Setrakyan
On Fri, Aug 4, 2017 at 11:54 AM, kotamrajuyashasvi <
kotamrajuyasha...@gmail.com> wrote:

> Hi
>
> Thanks for the response. When I put cacheKeyConfiguration in ignite
> configuration, the affinity was working. But when I call Cache.Get() in
> client program I'm getting the following error.
>
> "Java exception occurred
> [cls=org.apache.ignite.binary.BinaryObjectException, msg=Binary type has
> different affinity key fields [typeName=PersonPK,
> affKeyFieldName1=customer_ref, affKeyFieldName2=null]]"
>
> I already did LoadCache before running the program.
>

Did you load the cache before you updated the configuration? If yes, it
won't work. You have to fix the config and then start everything from
scratch.
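
For reference, the cacheKeyConfiguration discussed in this thread is declared roughly as follows. This is a sketch; the type and field names are taken from the error message above, and — per the reply — it must be in place before any data is loaded:

```xml
<property name="cacheKeyConfiguration">
  <list>
    <bean class="org.apache.ignite.cache.CacheKeyConfiguration">
      <!-- (typeName, affinityKeyFieldName) -->
      <constructor-arg value="PersonPK"/>
      <constructor-arg value="customer_ref"/>
    </bean>
  </list>
</property>
```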


Re: 10X decrease in performance with Ignite 2.0.0

2017-05-12 Thread Dmitriy Setrakyan
Chris,

After looking at your code, the only slowdown that may have occurred
between 1.9 and 2.0 is the actual cache "get(...)" operation. As you may
already know, Ignite 2.0 has moved data off-heap completely, so we do not
cache data in the deserialized form any more, by default. However, you can
still enable on-heap cache, in which case the data will be cached the same
way as in 1.9.

What is the average size of the object you store in cache? If it is large,
then you have 2 options:

1. Do not deserialize your objects into classes and work directly with
BinaryObject interface.
2. Turn on on-heap cache.

Will this work for you?

D.
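
For reference, option 2 is a single cache setting; a sketch in Spring XML, with the cache name made up:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="myCache"/>
  <!-- Re-enables the deserialized on-heap layer that is off by default in 2.0 -->
  <property name="onheapCacheEnabled" value="true"/>
</bean>
```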

On Fri, May 12, 2017 at 6:53 AM, Chris Berry  wrote:

> Hi,
>
> I hope this helps.
>
> This is the flow. It is very simple.
> Although, the code in the ComputeJob (executor.compute(request, algType,
> correlationId);) is relatively application complex.
>
> I hope this code makes sense.
> I had to take the actual code and expunge all of the actual Domain bits
> from
> it…
>
> But as far as Ignite is concerned, it is mostly boilerplate.
>
> Thanks,
> -- Chris
>
> =
> Invoke:
>
> private List executeTaskOnGrid(AComputeTask AResponse> computeTask,  List uuids) {
>  return
> managedIgnite.getCompute().withTimeout(timeout).execute(computeTask,
> uuids);
> }
>
> ===
> ComputeTask:
>
> public class AComputeTask ComputeTask TResponse>
> extends ComputeTaskAdapter, List> {
>
> private final AExecutorType type;
> private final TRequest rootARequest;
> private final AlgorithmType algType;
> private final String correlationId;
> private IgniteCacheName cacheName;
>
> @IgniteInstanceResource
> private Ignite ignite;
>
> public AComputeTask(AExecutorType type, TRequest request,
> AlgorithmType
> algType,  String correlationId) {
> this.cacheName = IgniteCacheName.ACache;
> this.type = type;
> this.rootARequest = request;
> this.algType = algType;
> this.correlationId = correlationId;
> }
>
> @Nullable
> @Override
> public Map map(List
> subgrid, @Nullable Collection cacheKeys)
> throws IgniteException {
> Map> nodeToKeysMap =
> ignite.affinity(cacheName.name()).mapKeysToNodes(cacheKeys);
> Map jobMap = new HashMap<>();
> for (Map.Entry> mapping :
> nodeToKeysMap.entrySet()) {
> ClusterNode node = mapping.getKey();
> final Collection mappedKeys = mapping.getValue();
>
> if (node != null) {
> ComputeBatchContext context = new
> ComputeBatchContext(node.id(), node.consistentId(), correlationId);
> Map nodeRequestUUIDMap =
> Collections.singletonMap(algType, convertToArray(mapping.getValue()));
> ARequest nodeARequest = new ARequest(rootARequest,
> nodeRequestUUIDMap);
> AComputeJob job = new AComputeJob(type, nodeARequest,
> algType, context);
> jobMap.put(job, node);
> }
> }
> return jobMap;
> }
>
> private UUID[] convertToArray(Collection cacheKeys) {
> return cacheKeys.toArray(new UUID[cacheKeys.size()]);
> }
>
> @Nullable
> @Override
> public List reduce(List results) throws
> IgniteException {
> List responses = new ArrayList<>();
> for (ComputeJobResult res : results) {
> if (res.getException() != null) {
> ARequest  request = ((AComputeJob)
> res.getJob()).getARequest();
>
> // The entire result failed. So return all as errors
> AExecutor executor =
> AExecutorFactory.getAExecutor(type);
> List unitUuids =
> Lists.newArrayList(request.getMappedUUIDs().get(algType));
> List errorResponses =
> executor.createErrorResponses(unitUuids.stream(),
> ErrorCode.UnhandledException);
> responses.addAll(errorResponses);
> } else {
> List perNode = res.getData();
> responses.addAll(perNode);
> }
> }
> return responses;
> }
> }
>
> ==
> ComputeJob
>
> public class AComputeJob extends
> ComputeJobAdapter {
> @Getter
> private final ExecutorType executorType;
> @Getter
> private final TRequest request;
> @Getter
> private final AlgorithmType algType;
> @Getter
> private final String correlationId;
> @Getter
> private final ComputeBatchContext context;
>
> @IgniteInstanceResource
> private Ignite ignite;
> @JobContextResource
> private ComputeJobContext jobContext;
>
> public AComputeJob(ExecutorType executorType, TRequest request,
> AlgorithmType algType, ComputeBatchContext context) {
> this.executorType = executorType;
> this.request = request;
> this.algType = algType;
> this.correlationId 

Re: https://issues.apache.org/jira/browse/IGNITE-3401

2017-05-09 Thread Dmitriy Setrakyan
It is not clear to me what this issue is. Ranjit, can you explain why this
is critical to you?

On Tue, May 9, 2017 at 9:55 AM, Ranjit Sahu  wrote:

> When will that be ?
>
> On Tue, 9 May 2017 at 10:10 PM, Andrey Gura  wrote:
>
>> No, it isn't fixed yet. Should be fixed in Ignite 2.1 I hope.
>>
>> On Tue, May 9, 2017 at 6:52 PM, Ranjit Sahu 
>> wrote:
>> > Hi Team,
>> >
>> > Is this issue fixed? If yes, in which version? Is there any workaround
>> > to avoid this?
>> >
>> > Thanks,
>> > Ranjit
>>
>


Re: Lots of cache creation become slow

2017-05-03 Thread Dmitriy Setrakyan
Cédric,

Can you clarify why not create 1 continuous query and listen to all the
changes for all the keys?

D.

On Thu, Apr 13, 2017 at 8:00 AM, ctranxuan 
wrote:

> Well, actually we were interested in having continuous queries listening
> to multi-tenant caches.
>
> This was the postulate for the architecture of a PoC project. Based on this
> discussion, we are switching to another architecture postulate where we
> have one cache with thousands of continuous queries listening to the
> changes of thousands of keys in the cache (basically 1 continuous query
> per key).
>
> So, at the beginning, we were investigating how many caches / continuous
> queries could be supported by a node. Maybe it's not the right way to
> evaluate this?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Lots-of-cache-creation-become-slow-tp11875p11955.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Master-Worker Pattern Possible In Ignite?

2017-04-22 Thread Dmitriy Setrakyan
This question has already been answered here:
http://stackoverflow.com/questions/43551422/javaspaces-like-patterns-in-ignite/

D.

On Fri, Apr 21, 2017 at 2:11 PM, Sean Winship 
wrote:

> I've used GigaSpaces in the past and I'd like to know if I can use Ignite
> in a similar fashion. Specifically, I need to implement a master-worker
> pattern where one set of process writes objects to the in-memory data grid
> and another set reads those objects, does some processing, and possibly
> writes results back to the grid. One important GigaSpaces/JavaSpaces
> feature I need is leasing. If I write an object to the grid and it isn't
> picked up within a certain time period, it should automatically expire and
> I should get some kind of notification.
>
> Ideally this system should be resilient to failure of one or more nodes.
>
> Is Apache Ignite a good match for this use case?
>
> Thanks,
>
> Sean
>
>


Re: Input data is no significant change in multi-threading

2017-04-20 Thread Dmitriy Setrakyan
On Wed, Apr 19, 2017 at 10:16 PM, woo charles 
wrote:

> When I call addData() on the streamer, will this data be sent to and
> buffered on a server node? Is that correct?
> If so, will this data be buffered on a random server node or only on the
> one the client is directly connected to?
>

addData() will buffer the data on the client side. As a matter of fact,
there are multiple buffers on the client side, with each buffer associated
with some server node.

Ignite will never send the data to a random node. The data is always sent
exactly to the node where it will be cached.

D.
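
The per-node client-side buffering described above can be sketched without Ignite types. Everything below is hypothetical: the affinity function is a bare hash and "sending" is a callback, whereas the real IgniteDataStreamer manages these buffers and their flushing internally:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch of the behavior described above: addData() places each entry into
// a per-node buffer chosen by key affinity, and a buffer is flushed to
// exactly "its" node once it reaches the configured size.
public class StreamerBufferSketch {
    private final int nodeCount;
    private final int perNodeBufferSize;
    private final Map<Integer, List<Map.Entry<String, String>>> buffers = new HashMap<>();
    private final BiConsumer<Integer, List<Map.Entry<String, String>>> sender;

    public StreamerBufferSketch(int nodeCount, int perNodeBufferSize,
                                BiConsumer<Integer, List<Map.Entry<String, String>>> sender) {
        this.nodeCount = nodeCount;
        this.perNodeBufferSize = perNodeBufferSize;
        this.sender = sender;
    }

    int nodeFor(String key) { // stand-in for the affinity function
        return Math.floorMod(key.hashCode(), nodeCount);
    }

    public void addData(String key, String val) {
        int node = nodeFor(key); // never a random node
        List<Map.Entry<String, String>> buf =
            buffers.computeIfAbsent(node, n -> new ArrayList<>());
        buf.add(new AbstractMap.SimpleEntry<>(key, val));
        if (buf.size() >= perNodeBufferSize) { // flush to exactly that node
            sender.accept(node, new ArrayList<>(buf));
            buf.clear();
        }
    }

    public void flush() { // send whatever remains, as a streamer flush would
        buffers.forEach((node, buf) -> {
            if (!buf.isEmpty()) sender.accept(node, new ArrayList<>(buf));
        });
        buffers.clear();
    }
}
```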


Re: Binary objects and cache store

2017-04-17 Thread Dmitriy Setrakyan
On Wed, Feb 8, 2017 at 4:57 PM, Denis Magda  wrote:

> Cross-posting to the dev list.
>
> Igniters, what if we make “storeKeepBinary” = true by default in Ignite
> 2.0? Presently, the user has to tweak the configuration manually.
>
>
Makes sense to me.


Re: NameNode sync using DUAL_ASYNC mode on IGFS

2017-04-03 Thread Dmitriy Setrakyan
Hi Massayuki,

Ignite itself does not have a concept of a NameNode. It goes directly to
the cluster node responsible for storing the data based on a key's hashcode.

The only time when a NameNode would come into a play, is when Hadoop HDFS
is configured as an underlying write-through file system. Basically, it
means that every time data is changed in Ignite, it will also be changed
in the underlying HDFS, either synchronously or asynchronously, based on
the IGFS configuration. In this case, HDFS would contact the NameNode
whenever data is written into it, using its own native protocol,
which has nothing to do with Ignite itself.

Let me know if you have more questions.

D.

On Sun, Apr 2, 2017 at 5:47 AM, Masayuki Takahashi 
wrote:

> Hi,
>
> I am trying to use IGFS on HDFS.
>
> If I set the IGFS mode to DUAL_ASYNC and execute 'hdfs dfs -put ...', when
> is the file info written to the HDFS NameNode?
>
> Also, if I set the IGFS mode to PRIMARY and put a new file, is the file
> info written to the HDFS NameNode?
>
> thanks.
> --
> Masayuki Takahashi
>


Re: 2.0

2017-03-28 Thread Dmitriy Setrakyan
Thanks Denis!

Lea, I just want to clarify that if you manually pick Denis' commit, it
will likely fix your issue, but it will not be an official Ignite release
and will not have undergone the regular testing and QA cycle that all
Ignite releases generally go through.

D.


On Tue, Mar 28, 2017 at 10:19 AM, Denis Magda  wrote:

> Lea,
>
> If you can’t wait for 2.0 release I would suggest you pick my commit,
> merge it to your fork of Ignite 1.9 release and build it from sources.
>
> Does it work for you?
>
> —
> Denis
>
> On Mar 28, 2017, at 1:21 AM, Lea Thurman  wrote:
>
> Thanks Pavel
>
> Would it be worth us reverting to an earlier release? Any idea when it was
> introduced?
>
> Regards
> Lea Thurman.
>
> On 28 March 2017 at 08:46, Pavel Tupitsyn  wrote:
>
>> According to the dev list thread (http://apache-ignite-develope
>> rs.2346864.n4.nabble.com/Apache-Ignite-2-0-Release-td15690.html),
>> you can expect 2.0 by the end of the April.
>>
>>
>> On Tue, Mar 28, 2017 at 10:41 AM, Lea Thurman 
>> wrote:
>>
>>> Hi all,
>>>
>>> We have upgraded to 1.9 and noticed the following issue:
>>>
>>> https://issues.apache.org/jira/browse/IGNITE-4858
>>>
>>> I understand this is to be fixed in 2.0.
>>>
>>> Is there any indicated when this is planned to be released?
>>>
>>> Regards
>>> Lea Thurman
>>>
>>> --
>>> *Lea Thurman*
>>> OneSoon Limited
>>> Manchester Business Park
>>> 3000 Aviator Way
>>> Manchester M22 5TG
>>>
>>> mob:   +44 (0) 7545 828 526 <+44+(0)+7545+828+526>
>>> tel:  +44 (0) 333 666 7366
>>> email:  lea.thur...@adalyser.com 
>>> web:www.adalyser.com
>>>
>>> *Adalyser* is a registered trademark and trading name of OneSoon Limited
>>> *OneSoon* is registered in England and Wales Company Number 04746025
>>>
>>
>>
>
>
>
>
>


Re: create table via JDBC

2017-03-16 Thread Dmitriy Setrakyan
DDL commands are not supported in Ignite yet. However, in Ignite the table
will be created automatically if you define a class with @QuerySqlField
annotations or define a QueryEntity in the configuration, as described here:

https://apacheignite.readme.io/docs/indexes

Starting with Ignite 2.0, planned in April, Ignite will support CREATE/DROP
INDEX commands. Further, it is planned that towards June/July Ignite will
have full DDL support, including CREATE/ALTER/DROP TABLE commands.

D.

On Thu, Mar 16, 2017 at 11:46 AM, Ivan Zeng  wrote:

> Hi,
>
> I am new to Ignite.  Could you tell me the right way to create a
> cache, load data into cache, and then query the cache via JDBC?
>
> I wrote the following code to create a table via JDBC.
>
>
> Class.forName("org.apache.ignite.IgniteJdbcDriver");
> con = DriverManager.getConnection (connectionURL)
> String create_sql = "CREATE TABLE Person " +
>   "(_key INTEGER PRIMARY KEY, " +
>   " name VARCHAR(255), " +
>   " age INTEGER);";
> Statement cstmt = con.createStatement();
> cstmt.executeQuery(create_sql);
>
>
> But i got this error.
>
> java.sql.SQLException: Failed to query Ignite.
> at org.apache.ignite.internal.jdbc2.JdbcStatement.executeQuery(
> JdbcStatement.java:131)
> at IgniteJDBC.main(IgniteJDBC.java:26)
> Caused by: javax.cache.CacheException: Unsupported SQL statement:
> CREATE TABLE Person (_key INTEGER PRIMARY KEY,  name VARCHAR(255),
> age INTEGER)
>
> Thanks so much
> Ivan
>


Re: Same Affinity For Same Key On All Caches

2017-02-23 Thread Dmitriy Setrakyan
If you use the same (or default) configuration for the affinity, then the
same key in different caches will always end up on the same node. This is
guaranteed.
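
In API terms this can be checked directly; a sketch, not verified against a live cluster (cache names are illustrative, and it assumes both caches use the default affinity settings):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterNode;

public class ColocationCheck {
    // With the same (default) affinity configuration, both caches map
    // the same key to the same primary node.
    static void checkColocation(Ignite ignite, Object key) {
        ClusterNode n1 = ignite.affinity("cacheA").mapKeyToNode(key);
        ClusterNode n2 = ignite.affinity("cacheB").mapKeyToNode(key);
        assert n1.id().equals(n2.id()) : "same key should map to the same node";
    }
}
```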

D.

On Thu, Feb 23, 2017 at 8:09 AM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Val,
>
> Yes, with same affinity function entries with same key should be saved in
> same nodes.
> As far as I know, primary node is assinged automatically by Ignite. And I'm
> not sure that
> there is a guarantee that 2 entries from different caches with same key
> will have same primary and backup nodes.
> So, get operation for 1-st key can be local while get() for 2-nd key will
> be remote.
>
>
> On Thu, Feb 23, 2017 at 6:49 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Actually, this should work this way out of the box, as long as the same
> > affinity function is configured for all caches (that's true for default
> > settings).
> >
> > Andrey, am I missing something?
> >
> > -Val
> >
> > On Thu, Feb 23, 2017 at 7:02 AM, Andrey Mashenkov <
> > andrey.mashen...@gmail.com> wrote:
> >
> > > Hi Alper,
> > >
> > > You can implement you own affinityFunction to achieve this.
> > > In AF you should implement 2 mappings: key to partition and partition
> to
> > > node.
> > >
> > > First mapping looks trivial, but second doesn't.
> > > Even if you will lucky to do it, there is no way to choose what node
> wil
> > be
> > > primary and what will be backup for a partition,
> > > that can be an issue.
> > >
> > >
> > > On Thu, Feb 23, 2017 at 10:44 AM, Alper Tekinalp 
> wrote:
> > >
> > > > Hi all.
> > > >
> > > > Is it possible to configures affinities in a way that partition for
> > same
> > > > key will be on same node? So calling
> > > > ignite.affinity(CACHE).mapKeyToNode(KEY).id() with same key for any
> > > cache
> > > > will return same node id. Is that possible with a configuration etc.?
> > > >
> > > > --
> > > > Alper Tekinalp
> > > >
> > > > Software Developer
> > > > Evam Streaming Analytics
> > > >
> > > > Atatürk Mah. Turgut Özal Bulv.
> > > > Gardenya 5 Plaza K:6 Ataşehir
> > > > 34758 İSTANBUL
> > > >
> > > > Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
> > > > www.evam.com.tr
> > > > 
> > > >
> > >
> > >
> > >
> > > --
> > > Best regards,
> > > Andrey V. Mashenkov
> > >
> >
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: Webinar: Automatic Integration with Relational Database

2017-02-14 Thread Dmitriy Setrakyan
Here is the link to the webinar:
https://www.gridgain.com/resources/webinars/apacher-ignitetm-web-console-automating-rdbms-integration

I am already signed up. Looking forward to it!

D.

On Tue, Feb 14, 2017 at 3:22 PM, Denis Magda  wrote:

> Igniters,
>
> Feel free to join my next webinar planned for tomorrow - Wednesday,
> February 15, 2017, 11:00am PT / 2:00pm ET.
>
> I’m going to talk less and show more. In general, this will be a live
> demonstration of the following:
> - automatic cluster configuration using a scheme of an existing MySQL
> database.
> - cluster startup and data preloading using the configuration and project
> generated by Web Console.
> - data querying and modification using SQL Grid capabilities from Web
> Console.
> - demonstration on how write-through/read-through modes work in practice.
> - walk-through of Web Console tabs and features.
>
> —
> Denis


Re: ApacheCon CFP closing soon (11 February)

2017-01-20 Thread Dmitriy Setrakyan
I have submitted for a couple of talks myself.

I would like to encourage everyone in Ignite community, especially if you
don't mind visiting Miami, to submit a proposal. The topic may include
Ignite architecture, projects based on top of Ignite, as well as real
production use cases and deployments.

Here is the link to submit speaking proposals:
http://events.linuxfoundation.org/events/apache-big-data-north-america/program/cfp

D.

On Thu, Jan 19, 2017 at 4:58 PM, Denis Magda  wrote:

> + user list
>
>
> > On Jan 19, 2017, at 3:07 PM, Denis Magda  wrote:
> >
> > Igniters,
> >
> > Is there anyone of you who want to attend this conference as a speaker
> presenting Apache Ignite in some form or sharing your working experience
> with it?
> >
> > —
> > Denis
> >
> >> Begin forwarded message:
> >>
> >> From: Rich Bowen 
> >> Subject: ApacheCon CFP closing soon (11 February)
> >> Date: January 18, 2017 at 8:45:41 AM PST
> >> To: comdev 
> >> Reply-To: d...@ignite.apache.org
> >> Reply-To: comdev 
> >>
> >> Hello, fellow Apache enthusiast. Thanks for your participation, and
> >> interest in, the projects of the Apache Software Foundation.
> >>
> >> I wanted to remind you that the Call For Papers (CFP) for ApacheCon
> >> North America, and Apache: Big Data North America, closes in less than a
> >> month. If you've been putting it off because there was lots of time
> >> left, it's time to dig for that inspiration and get those talk
> proposals in.
> >>
> >> It's also time to discuss with your developer and user community whether
> >> there's a track of talks that you might want to propose, so that you
> >> have more complete coverage of your project than a talk or two.
> >>
> >> We're looking for talks directly, and indirectly, related to projects at
> >> the Apache Software Foundation. These can be anything from in-depth
> >> technical discussions of the projects you work with, to talks about
> >> community, documentation, legal issues, marketing, and so on. We're also
> >> very interested in talks about projects and services built on top of
> >> Apache projects, and case studies of how you use Apache projects to
> >> solve real-world problems.
> >>
> >> We are particularly interested in presentations from Apache projects
> >> either in the Incubator, or recently graduated. ApacheCon is where
> >> people come to find out what technology they'll be using this time next
> >> year.
> >>
> >> Important URLs are:
> >>
> >> To submit a talk for Apache: Big Data -
> >> http://events.linuxfoundation.org/events/apache-big-data-north-america/program/cfp
> >> To submit a talk for ApacheCon -
> >> http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp
> >>
> >> To register for Apache: Big Data -
> >> http://events.linuxfoundation.org/events/apache-big-data-north-america/attend/register-
> >> To register for ApacheCon -
> >> http://events.linuxfoundation.org/events/apachecon-north-america/attend/register-
> >>
> >> Early Bird registration rates end March 12th, but if you're a committer
> >> on an Apache project, you get the low committer rate, which is less than
> >> half of the early bird rate!
> >>
> >> For further updated about ApacheCon, follow us on Twitter, @ApacheCon,
> >> or drop by our IRC channel, #apachecon on the Freenode IRC network. Or
> >> contact me - rbo...@apache.org - with any questions or concerns.
> >>
> >> Thanks!
> >>
> >> Rich Bowen, VP Conferences, Apache Software Foundation
> >>
> >> --
> >> (You've received this email because you're on a dev@ or users@ mailing
> >> list of an Apache Software Foundation project. For subscription and
> >> unsubscription information, consult the headers of this email message,
> >> as this varies from one list to another.)
> >
>
>


Re: Ignite Shutdown Hook

2016-12-20 Thread Dmitriy Setrakyan
+ user list
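
For reference, the usual fix is to stop the node explicitly when the application is done, e.g. with try-with-resources (a sketch; the config file name is illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class GracefulShutdown {
    public static void main(String[] args) {
        // Ignite starts non-daemon threads, so the JVM does not exit
        // until the node is stopped; close() stops it gracefully.
        try (Ignite ignite = Ignition.start("ignite-config.xml")) {
            // ... application work ...
        }
    }
}
```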

On Tue, Dec 20, 2016 at 12:30 PM, hemanta  wrote:

> Hi,
>
> I am starting Ignite as standalone program embedded in my java application
> via Spring configuration. It works great however my application stucks and
> never finishes. I have added some shutdown hooks but they are never called.
> I think this is because Ignite is running in main thread and stopping jvm
> to
> exit.
>
> I want jvm to stop Ignite and exit gracefully but this is not what I am
> seeing. Any suggestions? Thank you.
>
>
>
> --
> View this message in context: http://apache-ignite-developers.2346864.n4.nabble.com/Ignite-Shutdown-Hook-tp13198.html
> Sent from the Apache Ignite Developers mailing list archive at Nabble.com.
>


Re: LOOK THORUGH THIS ERROR

2016-08-12 Thread Dmitriy Setrakyan
Val, do we have this documented?


> On Aug 11, 2016, at 10:18 AM, vkulichenko  
> wrote:
> 
> Ravi,
> 
> You have to use the same version of ignite-hibernate as the ignite-core (the
> latest is 1.7). The ignite-hibernate module is not deployed to Maven central
> anymore due to licensing restrictions, but you can use the repo provided by
> GridGain [1]. Another option is to build this module from sources by
> yourself.
> 
> [1] www.gridgainsystems.com/nexus/content/repositories/external
> 
> -Val
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/LOOK-THORUGH-THIS-ERROR-tp6977p6995.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Hazelcast vs Ignite vs Geode

2016-07-27 Thread Dmitriy Setrakyan
On Wed, Jul 27, 2016 at 9:58 AM, Ralph Goers 
wrote:

> That is not strictly correct.  You can do objective performance testing of
> Ignite and compare that to other solutions, where it is allowed (which it
> typically would be for other open source solutions). When you do this, it
> should be done in a way that helps users understand how to best configure
> Ignite. For example, show how the results differ with different
> configuration values and in different use cases.
>

If that's the case, I would run the benchmarks again with a completely clean
Ignite configuration in GitHub and a proper description of it. I really
miss the benchmark results personally; they added a lot of clarity about
how to run and configure the project for performance.


>
> Ralph
>
> On Jul 21, 2016, at 12:55 PM, Dmitriy Setrakyan 
> wrote:
>
> I would like to add that Ignite community as part of the Apache Software
> Foundation cannot publish any competitive benchmarks or product
> comparisons. This is the reason why we generally provide links to external
> resources for questions like this.
>
> On Wed, Jul 20, 2016 at 4:42 AM, Denis Magda  wrote:
>
>> Hi,
>>
>> Please properly subscribe to the user list so that we can see your
>> questions
>> as soon as possible and provide answers on them quicker. All you need to
>> do
>> is send an email to user-subscr...@ignite.apache.org and follow simple
>> instructions in the reply.
>>
>>
>> Reddy wrote
>> > Hi All,
>> >
>> > We want to know the best features of Apache ignite(GridGain) when
>> compared
>> > to other In memory databases(Hazelcast,Geode) with respective of
>> > performnace,reads,writes,backup,storage and keys metadata etc.
>> > Why I am asking this question that you people have solid understanding
>> of
>> > ignite and already might  have compared with other IMDBs.
>> >
>> > Pls let me know if you have any metrics also.
>>
>> I would suggest you referring to GridGain's products comparison page [1]
>> and
>> benchmarks page [2] which have extensive information on what you're
>> looking
>> for.
>>
>> [1] http://www.gridgain.com/resources/product-comparisons
>> [2] http://www.gridgain.com/resources/benchmarks
>>
>> --
>> Denis
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/Hazelcast-vs-Ignite-vs-Geode-tp6345p6419.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>


Re: Breaking improvements for the Ignite C++.

2016-07-22 Thread Dmitriy Setrakyan
I am not a C++ expert, but can you please explain why you would like to
change all methods on the BinaryType to static? Is it the same way in Java?

On Fri, Jul 22, 2016 at 9:00 AM, Igor Sapego  wrote:

> Hello Igniters and Ignite users,
>
> As there is going to be Ignite 2.0 release soon, It is a good opportunity
> for us to improve Ignite C++ API without the need to maintain backward
> compatibility. Let's collect and discuss all the proposal for the changes
> in API here.
>
> If you've had any proposal on how to improve C++ API but that could break
> backward compatibility, now you can propose that for us to discuss and
> probably include it in Ignite 2.0. So, go ahead and post your proposals
> here.
>
> Let's create tasks for accepted proposals as subtasks for the task [1] so
> they all could be easy to track.
>
> [1] - https://issues.apache.org/jira/browse/IGNITE-3559
>
> Best Regards,
> Igor
>


Re: Hazelcast vs Ignite vs Geode

2016-07-21 Thread Dmitriy Setrakyan
I would like to add that Ignite community as part of the Apache Software
Foundation cannot publish any competitive benchmarks or product
comparisons. This is the reason why we generally provide links to external
resources for questions like this.

On Wed, Jul 20, 2016 at 4:42 AM, Denis Magda  wrote:

> Hi,
>
> Please properly subscribe to the user list so that we can see your
> questions
> as soon as possible and provide answers on them quicker. All you need to do
> is send an email to user-subscr...@ignite.apache.org and follow simple
> instructions in the reply.
>
>
> Reddy wrote
> > Hi All,
> >
> > We want to know the best features of Apache ignite(GridGain) when
> compared
> > to other In memory databases(Hazelcast,Geode) with respective of
> > performnace,reads,writes,backup,storage and keys metadata etc.
> > Why I am asking this question that you people have solid understanding of
> > ignite and already might  have compared with other IMDBs.
> >
> > Pls let me know if you have any metrics also.
>
> I would suggest you referring to GridGain's products comparison page [1]
> and
> benchmarks page [2] which have extensive information on what you're looking
> for.
>
> [1] http://www.gridgain.com/resources/product-comparisons
> [2] http://www.gridgain.com/resources/benchmarks
>
> --
> Denis
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Hazelcast-vs-Ignite-vs-Geode-tp6345p6419.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Any plans to enhance support for subselects on partitioned caches?

2016-07-18 Thread Dmitriy Setrakyan
On Mon, Jul 18, 2016 at 10:44 PM, Sergi Vladykin 
wrote:

> Subquery in FROM clause should work with distributed joins enabled.
> Subquery expressions (in SELECT, WHERE, etc...) must always be collocated.
>

Thanks, Sergi! This definitely helps. Is it going to be possible to support
non-collocated joins in sub-queries in future releases? What are the
challenges there?
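
For anyone trying the workaround Sergi describes, the flag is set per query in recent Ignite versions (a sketch; the SQL and cache are illustrative):

```java
import org.apache.ignite.cache.query.SqlFieldsQuery;

// Subquery in FROM with distributed (non-collocated) joins enabled:
SqlFieldsQuery qry = new SqlFieldsQuery(
        "select p.name from Person p " +
        "join (select id from Org) o on p.orgId = o.id")
    .setDistributedJoins(true);

cache.query(qry).getAll();
```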

>
>
> Sergi
>
> On Mon, Jul 18, 2016 at 7:09 PM, Cristi C  wrote:
>
>> Thanks for your reply, Alexei.
>>
>> So, considering users will be able to use the distributed join workaround,
>> you're not planning on making any enhancements regarding the distributed
>> subselect in the near future, correct?
>>
>> Thanks,
>>Cristi
>>
>>
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/Any-plans-to-enhance-support-for-subselects-on-partitioned-caches-tp6344p6350.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Call for Speakers: Apache Ignite at ApacheCon: Big Data Spain 2016

2016-07-14 Thread Dmitriy Setrakyan
Hello Igniters,

ApacheCon Europe is coming November, 2016. I just submitted 3 talks on
Apache Ignite:

- Apache Ignite - Path to Converged Data Platform
- Shared Memory Layer and Faster SQL for Spark Applications
- Apache Ignite - JCache and Beyond

If you are a user or a developer of Apache Ignite, I would like to
encourage you to submit a presentation. You only need to submit a short
abstract at this point, no need to have the whole presentation ready.

Looking forward to seeing you there.

http://events.linuxfoundation.org/events/apache-big-data-europe

D.

-- Forwarded message --
From: Linux Foundation Events 
Date: Fri, Jul 8, 2016 at 9:02 PM
Subject: Call for Speakers: ApacheCon + Apache: Big Data Spain 2016
To: dsetrak...@apache.org




*It’s time to submit your talk for ApacheCon + Apache: Big Data in Spain!
Submit by September 9th.*

*Submit an ApacheCon Proposal*

*Submit an Apache: Big Data Proposal*



ApacheCon Europe will bring the open source community together to learn
about and collaborate on the technologies and projects driving the
future of open source, web technologies and cloud computing. This is the
place to share your knowledge, ideas, best practices and creativity with
the rest of the Apache community. Check out the list of suggested topics
on the event site.


Apache: Big Data Europe will gather together the Apache projects, people
and technologies working in Big Data, ubiquitous computing, machine
learning, natural language processing, geospatial and data engineering
and science to educate, collaborate and connect in a completely
project-neutral environment. It is the only event that brings together
the full suite of Big Data open source projects. Apache: Big Data is
your opportunity to share your knowledge, ideas, best practices and
creativity with the thought leaders, innovators, executives and those in
the trenches pushing the envelope of Big Data technologies. Check out
the list of suggested topics on the event site.

Don’t miss your chance to present in Spain at the official conferences of
The Apache Software Foundation! Submit your proposal now. The deadline to
submit proposals is September 9th.




Thank You to Our Apache: Big Data Europe Sponsors: Cloudera and
Hortonworks (Platinum), Criteo Labs (Silver), RusBiTech (Community
Partner).

Thank You to Our ApacheCon Europe Sponsors: SUSE (Gold), RusBiTech
(Community Partner).


Re: How about adding kryo or protostuff as an optional marshaller?

2016-07-14 Thread Dmitriy Setrakyan
I highly doubt these marshallers will be more compact than Ignite binary
marshaller. Have you tested it?

On Thu, Jul 14, 2016 at 4:01 PM, Lin  wrote:

> Hi all,
>
> I would like to find a more compacted marshaller to save the network
> bandwidth in Ignite clusters.
> From the benchmark result https://github.com/eishay/jvm-serializers/wiki
> , It looks like the protostuff and kryo works better than other serializers.
>
> Is it a good idea to use them as an optional marshaller? How to do it?
> Hope for your suggestions.
>
>
> Best regards,
>
> Lin.
>


Re: Does ignite support UPDATE/DELETE sql

2016-07-13 Thread Dmitriy Setrakyan
We are currently working on adding insert/update/delete commands to Ignite.
Here is the ticket you can follow:

https://issues.apache.org/jira/browse/IGNITE-2294
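
Until DML lands, an in-place update equivalent to `UPDATE ... WHERE _key = ?` can be done with an entry processor (a sketch; the Person type and its setter are illustrative):

```java
// Equivalent of: UPDATE Person SET age = 42 WHERE _key = ?
cache.invoke(key, (entry, args) -> {
    Person p = entry.getValue();
    p.setAge(42);
    entry.setValue(p); // write the modified value back so Ignite stores it
    return null;
});
```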

Thanks,
D.

On Thu, Jul 14, 2016 at 6:58 AM, Denis Magda  wrote:

> Hi,
>
> Please properly subscribe to the Ignite's user list. Refer to this page for
> details - https://ignite.apache.org/community/resources.html#ask
>
> See my answers inline.
>
>
> zhaojun08 wrote
> > HI ALL,
> >
> > I am new to ignite, and I have a few questions to confirm.
> >
> > 1. I want to use ignite to store RDBMS in MEM. Table in RDBMS have many
> > rows, does ignite store every row as a Java object, like the "Person"
> > object in docs? And is it the only way to store a row in Ignite?
> /
> > Yes, Apache Ignite is an in-memory key-value store meaning that for every
> > key there should be a corresponding value. A key-value tuple will
> > correspond to a row from your RDBMS store.
> /
> >
> > 2. I notice that "Person" class implements Serializable, does it mean
> > every row record stores in ignite in Serialization format? If so, will
> the
> > Serialization degrade the select performance, and the reason for
> > Serialization?
> /
> > Objects are stored in a serialized form in memory. However it doesn't
> mean
> > that JDK serialization techniques are used to prepare an object for
> > storage. In fact Ignite uses its own BinaryMarshaller (serializer) that
> > has good performance characteristics -
> > https://apacheignite.readme.io/docs/binary-marshaller
> /
> >
> > 3. I have store RDBMS in Ignite, can I update specific row record using
> > UPDATE/DELETE sql statement to alter the table?
> /
> > This kind of queries is not supported right know. You have to update
> > caches with methods like cache.put, cache.putAll, cache.invoke, etc.
> /
> >
> /
> > If you need to pre-load data from RDBMS then you can rely on  one of the
> > pre-loading strategies -
> https://apacheignite.readme.io/docs/data-loading.
> > This topic should be useful for you as well -
> > https://apacheignite.readme.io/docs/persistent-store
> /
> >
> > --
> > Denis
> >
> > Many Thanks!
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Does-ignite-support-UPDATE-DELETE-sql-tp6290p6292.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: How to troubleshoot a slow client node get()

2016-07-13 Thread Dmitriy Setrakyan
On Wed, Jul 13, 2016 at 11:23 AM, tracel  wrote:

> thanks dsetrakyan,
>
> Why use System.out.println()?
>

I want to make sure that there is no overhead associated with log.info().
Can you check?


> I have added the System.out.println(), and keep the log.info() just for
> comparison:
>
> log.info("### Before get()");
> System.out.println("##~ Before get()");
> Vendor vendor = cache.get(vendorCode);
> System.out.println("##~ After  get()");
> log.info("### After  get()");
>
>
>
> The System.out was captured by log4j like this so they also marked as
> [INFO]
> log4j.appender.stdout.Threshold=INFO
> log4j.appender.stdout.Target=System.out
>
>
>
> Here's the log output:
>
> 16:17:06,861 [ INFO] CacheService:150 - ### Before get()
> 16:17:06,862 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,921 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,921 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,922 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,922 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,933 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,934 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,934 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,935 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,940 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,941 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,941 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,941 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,956 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,956 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,957 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,957 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,965 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,965 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,965 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,966 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,969 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,969 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,970 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,970 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,975 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,976 [ INFO] CacheService:156 - ### After  get()
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/How-to-troubleshoot-a-slow-client-node-get-tp6250p6258.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: How to troubleshoot a slow client node get()

2016-07-13 Thread Dmitriy Setrakyan
Can you replace log.info() with System.out.println() in your test?
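
Independently of the logger, the call can be timed directly so logging overhead is ruled out; a generic sketch (the cache call is simulated here by any Supplier):

```java
import java.util.function.Supplier;

public class Timed {
    // Runs the supplier and prints how long the call itself took, so the
    // measurement cannot be skewed by logger flushing or formatting.
    static <T> T timed(String label, Supplier<T> call) {
        long start = System.nanoTime();
        T result = call.get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(label + " took " + elapsedMs + " ms");
        return result;
    }

    public static void main(String[] args) {
        // Stand-in for: Vendor vendor = cache.get(vendorCode);
        String v = timed("cache.get", () -> "vendor-42");
        System.out.println(v);
    }
}
```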

On Wed, Jul 13, 2016 at 10:21 AM, tracel  wrote:

> I have an Ignite (1.5.0.final) cache client node started in a Tomcat
> 8.0.32,
> the client node connects to a server node started on the same machine.
>
> Sometimes a get() needs some 5 seconds, while most of the other get() calls
> take almost no time.
> I wonder where the 5 seconds are spent and how I can troubleshoot it?
>
> I am trying but still cannot reproduce the symptom with another
> application,
> I will keep trying but hopefully I can get someone to shed some light here.
>
> Here is the log output, only the get() at 11:40:13 and 11:56:10 were taking
> longer time:
>
> 11:40:13,333 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,503 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,505 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,528 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,529 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,538 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,538 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,558 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,558 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,567 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,567 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,575 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,576 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,595 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,595 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,603 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,786 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,795 [ INFO] CacheService:152 - ### After  get()
> 11:56:10,142 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,208 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,208 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,214 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,214 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,228 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,229 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,243 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,244 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,247 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,247 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,250 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,250 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,255 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,256 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,258 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,280 [ INFO] CacheService:150 - ### Before get()
>
>
> The original code is quite complicated so I put a simplified version here:
>
> private IgniteCache<String, Vendor> cache;
>
> public Vendor getVendor(String vendorCode) {
> log.info("### Before get()");
> Vendor vendor = cache.get(vendorCode);
> log.info("### After  get()");
>
> if (vendor == null) {
> vendor = findVendorFromDB(vendorCode);
> }
>
> return vendor;
> }
>
>
> Thanks in advance!
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/How-to-troubleshoot-a-slow-client-node-get-tp6250.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Cache Partitioned Mode

2016-07-08 Thread Dmitriy Setrakyan
On Fri, Jul 8, 2016 at 1:07 PM, vkulichenko 
wrote:

> daniels,
>
> The node where the entry is stored is defined by affinity function which is
> designed to provide even distribution across nodes. Since the function
> doesn't know your keys in advance, it provides statistically better
> distribution on growing dataset. In other words, if you put only two random
> keys, they can easily go to the same node, but if you put a million of
> random keys, they will be split very close to half a million per node.
>

In my experience, a few thousands of keys should be good.

>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Cache-Partitioned-Mode-tp6172p6184.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
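
Val's point about statistically even distribution can be sanity-checked with a plain hash split (a toy model only; Ignite's real RendezvousAffinityFunction is more elaborate):

```java
public class DistributionDemo {
    // Buckets stand in for nodes; keys are split by hash, the same way
    // an affinity function assigns partitions to nodes.
    static int[] split(int keys, int nodes) {
        int[] counts = new int[nodes];
        for (int i = 0; i < keys; i++)
            counts[Math.abs(("key-" + i).hashCode() % nodes)]++;
        return counts;
    }

    public static void main(String[] args) {
        int[] c = split(1_000_000, 2);
        // With a million keys, each "node" ends up holding close to half.
        System.out.println("node0=" + c[0] + " node1=" + c[1]);
    }
}
```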


Re: kick off a discussion

2016-07-08 Thread Dmitriy Setrakyan
Thanks Sasha!

Resending to the dev list.

D.

On Fri, Jul 8, 2016 at 2:02 PM, Alexandre Boudnik 
wrote:

> Apache Ignite is a great platform, but it lacks certain capabilities
> which are common in the RDBMS world, such as:
> - Consistent on-line backup for data on entire cluster (or for
> specified set of caches)
> - Hierarchal snapshots for specified set caches
> - Transaction log
> - Restore cluster state as of certain point in time
> - Rolling forward from snapshot with ability to filter/modify transactions
> - Asynchronous replication based either on log shipment or snapshot
> shipment
> -- Between clusters
> -- Continues data export to let’s say RDMS
> It is also a necessity to reduce cold start time for huge clusters
> with strict SLAs.
>
> I'll put some implementation ideas in JIRA later on. I believe that
> this list is far from being complete, but I want the community to
> discuss these abovementioned use cases.
>
> --Sasha
>


Re: Starting H2 Debug Console On Remote Server

2016-07-05 Thread Dmitriy Setrakyan
You should also check out the web console. The management tab only
works if you build from master.
https://console.gridgain.com/
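
For readers who have not enabled the H2 console yet: it is switched on with a JVM system property on the server node (property name as used in Ignite 1.x; verify against your version, e.g. pass it via the -J prefix of ignite.sh):

```
-DIGNITE_H2_DEBUG_CONSOLE=true
```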

On Tue, Jul 5, 2016 at 5:38 AM, Denis Magda  wrote:

> Do you really need to track this all the time?
>
> GridGain’s Visor UI [1] supports this kind of functionality. However it’s
> workable with GridGain builds only.
>
> [1] https://gridgain.readme.io/docs/visor-management-console
>
> —
> Denis
>
>
> On Jul 5, 2016, at 3:11 PM, pragmaticbigdata  wrote:
>
> It I think expires in 30 mins.
> Currently I access h2 debug console on my local machine through chrome. I
> use it currently to verify if the data is partitioned correctly based on
> the
> affinity key I have configured.
>
> Once the session expires, the only option left is restart Ignite and
> execute
> the data load again which is kind of time consuming.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Starting-H2-Debug-Console-On-Remote-Server-tp6063p6106.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>
>


Re: Web Console Beta 2 release

2016-06-29 Thread Dmitriy Setrakyan
Just checked it out, looks awesome! Very nice and easy way to configure,
manage, and query Ignite clusters.

On Tue, Jun 28, 2016 at 1:41 AM, Alexey Kuznetsov 
wrote:

> Igniters!
>
> I'd like to announce that we just pushed Ignite Web Console Beta 2 to
> master branch and deployed new version at https://console.gridgain.com
>
> NOTE: You may need to refresh page (F5 or Ctrl+R) in order to reload Web
> Console.
>
> What's new:
>
>- Implemented Monitoring of grid and caches (please note, you will
>need grid started from latest nightly build of master branch).
>-  Improved Demo mode (you may test SQL and Monitoring in Demo mode).
>-  Added a lot of properties to grid configuration.
>-  Improved validation of configuration properties.
>-  Improved XML and Java code generation.
>-  Fixed a lot of bugs and usability issues.
>
> Feedback and suggestions are welcome!
>
> What's next:
>
>- Migrate build to Webpack from jspm.
>- Frontend and backend tests.
>- .NET configuration and code generation.
>- Logs view and logs search.
>- Many new features are coming...
>
>
> Stay tuned!
> --
> Alexey Kuznetsov
>

