Problems after enabling Ignite Persistence

2018-09-18 Thread Lokesh Sharma
I enabled Ignite Persistence. Following is my configuration:

```
// New data region with persistence enabled
DataStorageConfiguration storageCfgPersistence = new DataStorageConfiguration();
DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setName("Persistence");
regionCfg.setInitialSize(100L * 1024 * 1024);
regionCfg.setMaxSize(500L * 1024 * 1024);
regionCfg.setPersistenceEnabled(true);
storageCfgPersistence.setDataRegionConfigurations(regionCfg);
cfg.setDataStorageConfiguration(storageCfgPersistence);
```

I am facing 2 problems after enabling Ignite persistence:

1. The application boots up fine with no errors, but it is stuck at that
point. It doesn't perform any of the actions it is supposed to perform. I
know this because normally, as soon as Ignite completes booting up, my
application logs start to print on the console, but with persistence on,
nothing is printed. Following is the output when the node boots up:

2018-09-19 10:53:58.957  WARN 6032 --- [-worker-#42%cm%]
>> o.a.i.i.p.c.p.file.FilePageStoreManager  : Persistence store directory is
>> in the temp directory and may be cleaned.To avoid this set "IGNITE_HOME"
>> environment variable properly or change location of persistence directories
>> in data storage configuration (see DataStorageConfiguration#walPath,
>> DataStorageConfiguration#walArchivePath,
>> DataStorageConfiguration#storagePath properties). Current persistence store
>> directory is: [/tmp]
>
> [10:53:59] Performance suggestions for grid 'cm' (fix if possible)
>
> [10:53:59] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
>
> [10:53:59]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
>> options)
>
> [10:53:59]   ^-- Specify JVM heap max size (add '-Xmx<size>[g|G|m|M|k|K]'
>> to JVM options)
>
> [10:53:59]   ^-- Set max direct memory size if getting 'OOME: Direct
>> buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM
>> options)
>
> [10:53:59]   ^-- Disable processing of calls to System.gc() (add
>> '-XX:+DisableExplicitGC' to JVM options)
>
> [10:53:59] Refer to this page for more performance suggestions:
>> https://apacheignite.readme.io/docs/jvm-and-system-tuning
>
> [10:53:59]
>
> [10:53:59] To start Console Management & Monitoring run
>> ignitevisorcmd.{sh|bat}
>
> [10:53:59]
>
> [10:53:59] Ignite node started OK (id=085939f2, instance name=cm)
>
> [10:53:59] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8,
>> offheap=2.8GB, heap=2.6GB]
>
> [10:53:59]   ^-- Node [id=085939F2-56B0-42C7-BC17-A7BBC8F000F0,
>> clusterState=INACTIVE]
>
> [10:53:59]   ^-- Baseline [id=0, size=1, online=1, offline=0]
>
> [10:53:59]   ^-- All baseline nodes are online, will start auto-activation
>
> [10:53:59] Data Regions Configured:
>
>
>
As you can see, the baseline topology is configured correctly and the
cluster is activated.

2. When I try to kill the application using Ctrl+C, it doesn't work. It
used to work before I enabled Ignite Persistence.

Any clue what might be wrong here?
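For context (not part of the original post): the WARN line in the log points at the `/tmp` storage directory, and the `clusterState=INACTIVE` line hints at cluster activation. A minimal hedged sketch of what those messages are asking for — the directory paths below are placeholders, not taken from the thread:

```java
// Sketch only: explicit persistence directories (to avoid the /tmp warning)
// plus first-time manual activation. All paths here are assumed examples.
IgniteConfiguration cfg = new IgniteConfiguration();

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setStoragePath("/opt/ignite/persistence");    // assumed location
storageCfg.setWalPath("/opt/ignite/wal");                // assumed location
storageCfg.setWalArchivePath("/opt/ignite/wal-archive"); // assumed location

DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setName("Persistence");
regionCfg.setPersistenceEnabled(true);
storageCfg.setDataRegionConfigurations(regionCfg);
cfg.setDataStorageConfiguration(storageCfg);

Ignite ignite = Ignition.start(cfg);

// With persistence enabled, a brand-new cluster starts INACTIVE and must be
// activated once; later restarts auto-activate from the baseline topology.
ignite.cluster().active(true);
```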


Re: How much heap to allocate

2018-09-18 Thread Ray
Hi Mikhail,

Can you explain how lazy loading works when I use the SQL "select *
from table" to query a big table that doesn't fit in on-heap memory?
Does Ignite send part of the result set to the client?
If Ignite still sends the whole result set back to the client, how can lazy
loading avoid loading the full result set on heap?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


ttl-cleanup-worker got "Critical system error detected"

2018-09-18 Thread Mạnh Tâm Nguyễn
Hi all,

Sometimes I get this exception, and the service crashes after that. I think
Ignite hits an exception while removing expired records. I have a few caches
with expiry configured, but only this cache gets the exception.

Does anyone know how to sort out this issue?

Here are exception details and configuration:

StackTrace:
```
[11:27:17,451][SEVERE][ttl-cleanup-worker-#39%TravelInventory%][] Critical 
system error detected. Will be handled accordingly to configured handler 
[hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler, 
failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class 
o.a.i.IgniteException: Runtime failure on search row: Row@6a4eb016[ key: 
8bbcac6f-3219-48e3-a1cf-cae8be90752e, val: GoQuoEngine.Data.Domain.PackageQuery 
[idHash=793518907, hash=1245385215  ]]]
class org.apache.ignite.IgniteException: Runtime failure on search row: 
Row@6a4eb016[ key: 8bbcac6f-3219-48e3-a1cf-cae8be90752e, val: 
GoQuoEngine.Data.Domain.PackageQuery [idHash=793518907, hash=1245385215  ]]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1800)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removex(BPlusTree.java:1595)
at 
org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.removex(H2TreeIndex.java:289)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.remove(GridH2Table.java:522)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:703)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2518)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:457)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1456)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1426)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:377)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3679)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.onExpired(GridCacheMapEntry.java:3409)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.onTtlExpired(GridCacheMapEntry.java:3341)
at 
org.apache.ignite.internal.processors.cache.GridCacheTtlManager$1.applyx(GridCacheTtlManager.java:60)
at 
org.apache.ignite.internal.processors.cache.GridCacheTtlManager$1.applyx(GridCacheTtlManager.java:51)
at 
org.apache.ignite.internal.util.lang.IgniteInClosure2X.apply(IgniteInClosure2X.java:38)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1049)
at 
org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
at 
org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManager$CleanupWorker.body(GridCacheSharedTtlCleanupManager.java:137)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
```
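For context — not from the original message — a cache with expiry of the kind described above is typically configured roughly like this; the cache name and TTL below are placeholders:

```java
// Hedged sketch: a cache whose entries expire after creation. Expired
// entries are later removed by the ttl-cleanup-worker thread seen in the
// stack trace above.
CacheConfiguration<String, Object> ccfg = new CacheConfiguration<>("PackageQuery");
ccfg.setExpiryPolicyFactory(
    CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 30))); // assumed TTL
```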

Configuration:

```XML
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util.xsd">

    <!-- The Ignite bean definitions were stripped when this message was
         archived; only the discovery address list survived:
         127.0.0.1:47500..47510 -->

</beans>
```


Re: SQL SELECT with AffinityKeyMapped - no results

2018-09-18 Thread kcheng.mvp
I ran into the same issue. If we cannot use @AffinityKeyMapped without a
workaround, can I use AffinityKey as the documentation shows?


Object personKey1 = new AffinityKey<>("myPersonId1", "myCompanyId");
Object personKey2 = new AffinityKey<>("myPersonId2", "myCompanyId");

Person p1 = new Person(personKey1, ...);
Person p2 = new Person(personKey2, ...);

// Both the company and the person objects will be cached on the same node.
comCache.put("myCompanyId", new Company(..));
perCache.put(personKey1, p1);
perCache.put(personKey2, p2);



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteUtils enables strictPostRedirect in a static block

2018-09-18 Thread xero
Hi,
Does anyone have information about this?

Thanks.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Message grid failure due to userVersion setting

2018-09-18 Thread Dave Harvey
Thanks Ilya,

As I understand this a bit more, it seems like IGNITE-7905 is really the same
basic flaw of userVersion not working as documented. The IGNITE-7905
reproduction is simply to set a non-zero userVersion in an ignite.xml
(https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/DeploymentMode.html),
and then:

// Connect to the cluster.
Ignite ignite = Ignition.start();

// Activate the cluster. Automatic topology initialization occurs
// only if you manually activate the cluster for the very first time.
ignite.cluster().active(true);


The activation then throws an exception on the server, because the server
already has the same built-in Ignite class.

As I understand the documentation, since the built-in Ignite class is not
excluded, it should not even consider peer class loading because the class
exists locally. It should just use the local class.
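For reference, a hedged sketch (not from the thread) of the kind of setup under discussion; the userVersion itself is declared in META-INF/ignite.xml on the application classpath, not through this API:

```java
// Sketch: peer class loading with CONTINUOUS deployment mode, the
// configuration this thread is about.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPeerClassLoadingEnabled(true);
cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
Ignite ignite = Ignition.start(cfg);
```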

-DH

On Tue, Sep 18, 2018 at 9:00 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> I'm not familiar with these areas very much, but if you had a reproducer
> project I could take a look.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> пн, 17 сент. 2018 г. в 19:32, Dave Harvey :
>
>> I probably did not explain this clearly.  When sending a message from
>> server to client using the message grid, from a context unrelated to any
>> client call, the server, as you would expect, uses its installed libraries
>> and userVersion 0. For some reason, when the client receives this
>> message, it requires that the user version match its current user version.
>>
>> The use case is we have a stable set of libraries on the server, and the
>> server wants to send a topic based message to the client, using only the
>> type "String".   Unrelated to this, the client is using the compute grid,
>> where P2P is used, but that is interfering with basic functionality.
>> This, IGNITE-7905,
>> and the paucity of results when I google for "ignite userVersion" make
>> it clear that shooting down classes in CONTINUOUS mode with userVersion is
>> not completely thought through.  We certainly never want to set a
>> userVersion on the servers.
>>
>> The documentation for P2P says:
>> "
>>
>>    1. Ignite will check if the class is available on the local classpath
>>    (i.e. if it was loaded at system startup), and if it was, it will be
>>    returned. No class loading from a peer node will take place in this case."
>>
>> Clearly, java.lang.String is on the local classpath. So it seems like
>> a user version mismatch should not be a reason to reject a class that is on
>> the local classpath.
>>
>> On Mon, Sep 17, 2018 at 11:01 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> I think that Ignite cannot unload old version of code, unless it is
>>> loaded with something like URI deployment module.
>>> Version checking is there but server can't get rid of old code if it's
>>> on classpath.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> пн, 17 сент. 2018 г. в 16:47, Dave Harvey :
>>>
 We have a client that uses the compute grid and message grid, as well
 as the discovery API.  It communicates with a server plugin.   The cluster
 is configured for CONTINUOUS peer class loading.  In order to force the
 proper code to be loaded for the compute tasks, we change the user version,
 e.g., to 2.

 If the server sends the client a message on the message grid, using
 java.lang.String, the client fails because the user version sent for
 java.lang.String is 0, but the client insists on 2.

 How is this supposed to work?   Our expectation was that the message
 grid should not be affected by peer class loading settings.




 *Disclaimer*

 The information contained in this communication from the sender is
 confidential. It is intended solely for use by the recipient and others
 authorized to receive it. If you are not the recipient, you are hereby
 notified that any disclosure, copying, distribution or taking action in
 relation of the contents of this information is strictly prohibited and may
 be unlawful.

 This email has been scanned for viruses and malware, and may have been
 automatically archived by *Mimecast Ltd*, an innovator in Software as
 a Service (SaaS) for business. Providing a *safer* and *more useful*
 place for your human generated data. Specializing in; Security, archiving
 and compliance. To find out more Click Here
 .

>>>
>>

How does lazy load work internally in Ignite?

2018-09-18 Thread Ray
From this document
https://apacheignite-sql.readme.io/docs/performance-and-debugging#section-result-set-lazy-load,
it mentions that setting the lazy flag on a query can avoid prolonged GC
pauses and even OutOfMemoryError.

But it confuses me: how does lazy loading work internally in Ignite?
Does Ignite send part of the result set to the client?
If Ignite still sends the whole result set back to the client, how can lazy
loading avoid loading the full result set on heap?
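For context, the flag the linked page describes is set on the query object; a hedged sketch (the `cache` handle is assumed, not from the thread):

```java
// Sketch: lazy result-set loading. With setLazy(true) the server fetches and
// ships result pages on demand instead of materializing the whole result set.
SqlFieldsQuery qry = new SqlFieldsQuery("select * from table").setLazy(true);

try (FieldsQueryCursor<List<?>> cursor = cache.query(qry)) {
    for (List<?> row : cursor) {
        // process one row at a time
    }
}
```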



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Configurations precedence and consistency across the cluster

2018-09-18 Thread akurbanov
Hello,

What is the use case you are trying to achieve? What does "to sync
cache configs" mean?

IgniteCache.withX returns a cache with a given CacheOperationContext
that allows you to override several usage patterns from client to client;
the cache itself may be created with XML configuration as well as via the
Java API.

Have you tried launching nodes with these configs? Both cases will throw an
exception on startup of a node that tries to join with a cache
configuration different from the already started one.

You may refer to
org.apache.ignite.internal.processors.cache.ClusterCachesInfo#checkCache to
familiarize yourself with the configuration checks that are performed on
node startup in case you have preconfigured caches in XML.
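A hedged sketch of the IgniteCache.withX pattern mentioned above (cache name and TTL are placeholders, not from the thread):

```java
// Sketch: overriding the expiry policy per client via a decorated cache
// view, without touching the shared cache configuration.
IgniteCache<Integer, String> cache = ignite.cache("myCache"); // assumed name

IgniteCache<Integer, String> shortLived = cache.withExpiryPolicy(
    new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 30)));

shortLived.put(1, "temporary"); // expires ~30s after creation
// puts through the plain `cache` view keep the configured defaults
```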

Regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


.net decimal being stored as Other in ignite.

2018-09-18 Thread wt
I have the following class:

[QuerySqlField]
public int vd { get; set; }
[QuerySqlField]
public long sharesinindex { get; set; }
[QuerySqlField]
public string name { get; set; }
[QuerySqlField]
public string isin { get; set; }
[QuerySqlField]
public string sedol { get; set; }
[QuerySqlField]
public string ric { get; set; }
[QuerySqlField]
public decimal close { get; set; }
[QuerySqlField]
public decimal rate { get; set; }

When I configure Ignite with this class in .NET and start the server, it
correctly sets all the other fields, just not the decimals. The
documentation states:

DECIMAL
Possible values: Data type with fixed precision and scale.

Mapped to:

Java/JDBC: java.math.BigDecimal
.NET/C#: decimal
C/C++: ignite::Decimal
ODBC: SQL_DECIMAL


Why is Ignite not mapping this correctly? (screenshot: tables.png)

Here is the config:

var cfg = new IgniteConfiguration
{
    DiscoverySpi = new TcpDiscoverySpi
    {
        IpFinder = new TcpDiscoveryStaticIpFinder
        {
            Endpoints = new[] { "127.0.0.1:47500..47509" }
        },
        SocketTimeout = TimeSpan.FromSeconds(0.3)
    },
    CacheConfiguration = new[]
    {
        new CacheConfiguration("IndexComposition")
        {
            SqlSchema = "IndexComposition",
            CacheMode = CacheMode.Replicated,
            QueryEntities = new[]
            {
                new QueryEntity(typeof(int), typeof(IndexComposition))
            }
        }
    }
};




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IGNITE-8386 question (composite pKeys)

2018-09-18 Thread eugene miretsky
So how should we work around it now? Just create a new index for
(customer_id, date)?

Cheers,
Eugene

On Mon, Sep 17, 2018 at 10:52 AM Stanislav Lukyanov 
wrote:

> Hi,
>
>
>
> The thing is that the PK index is currently created roughly as
>
> CREATE INDEX T(_key)
>
> and not
>
> CREATE INDEX T(customer_id, date).
>
>
>
> You can’t use the _key column in the WHERE clause directly, so the query
> optimizer can’t use the index.
>
>
>
> After the IGNITE-8386 is fixed the index will be created as a multi-column
> index, and will behave the way you expect (e.g. it will be used instead of
> the affinity key index).
>
>
>
> Stan
>
>
>
> *From: *eugene miretsky 
> *Sent: *12 сентября 2018 г. 23:45
> *To: *user@ignite.apache.org
> *Subject: *IGNITE-8386 question (composite pKeys)
>
>
>
> Hi,
>
>
>
> A question regarding
> https://issues.apache.org/jira/browse/IGNITE-8386?focusedCommentId=16511394&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16511394
>
>
>
> It states that a pkey index with a composite pKey is "effectively useless".
> Could you please explain why is that? We have a pKey that we are using as
> an index.
>
>
>
> Also, on our pKey is (customer_id, date) and affinity column is
> customer_id. I have noticed that most queries use AFFINITY_KEY index.
> Looking at the source code, AFFINITY_KEY index should not even be created
> since the first field of the pKey  is the affinity key. Any idea what may
> be happening?
>
>
>
> Cheers,
>
> Eugene
>
>
>


Re: Query 3x slower with index

2018-09-18 Thread Ilya Kasnacheev
Hello!

I can see you are trying to use _key_PK as an index. If your primary key is
composite, it won't work properly for you. I recommend creating an explicit
(category_id, customer_id) index.
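Such an index can be created with SQL DDL; a hedged sketch (table and column names are taken from the thread, the index name and `cache` handle are assumed):

```java
// Sketch: create the explicit composite index suggested above.
cache.query(new SqlFieldsQuery(
    "CREATE INDEX IF NOT EXISTS idx_cat_cust ON GATABLE3 (category_id, customer_id)"
)).getAll();
```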

Regards,
-- 
Ilya Kasnacheev


вт, 18 сент. 2018 г. в 17:47, eugene miretsky :

> Hi Ilya,
>
> The different query result was my mistake - one of the category_ids was
> a duplicate, so in the query that used the join, it counted rows for that
> category twice. My apologies.
>
> However, we are still having an issue with query time, and the index not
> being applied to category_id. Would appreciate if you could take a look.
>
> Cheers,
> Eugene
>
> On Mon, Sep 17, 2018 at 9:15 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Why don't you diff the results of those two queries, tell us what the
>> difference is?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> пн, 17 сент. 2018 г. в 16:08, eugene miretsky > >:
>>
>>> Hello,
>>>
>>> Just wanted to see if anybody had time to look into this.
>>>
>>> Cheers,
>>> Eugene
>>>
>>> On Wed, Sep 12, 2018 at 6:29 PM eugene miretsky <
>>> eugene.miret...@gmail.com> wrote:
>>>
 Thanks!

 Tried joining with an inlined table instead of IN as per the second
 suggestion, and it didn't quite work.

 Query1:

- Select COUNT(*) FROM( Select customer_id from GATABLE3  use
Index( ) where category_id in (9005, 175930, 175930, 
 175940,175945,101450,
6453) group by customer_id having SUM(product_views_app) > 2 OR
SUM(product_clicks_app) > 1 )
- exec time = 17s
- *Result: 3105868*
- Same exec time if using AFFINITY_KEY index or "_key_PK_hash or
customer_id index
- Using an index on category_id increases the query time 33s

 Query2:

- Select COUNT(*) FROM( Select customer_id from GATABLE3 ga  use
index (PUBLIC."_key_PK") inner join table(category_id int = (9005, 
 175930,
175930, 175940,175945,101450, 6453)) cats on cats.category_id =
ga.category_id   group by customer_id having SUM(product_views_app) > 2 
 OR
SUM(product_clicks_app) > 1 )
- exec time = 38s
- *Result: 3113921*
- Same exec time if using AFFINITY_KEY index or "_key_PK_hash or
customer_id index or category_id index
- Using an index on category_id doesn't change the run time

 Query plans are attached.

 3 questions:

1. Why is the result different for the 2 queries - this is quite
concerning.
2. Why is the 2nd query taking longer
3. Why  category_id index doesn't work in case of query 2.


 On Wed, Sep 5, 2018 at 8:31 AM Ilya Kasnacheev <
 ilya.kasnach...@gmail.com> wrote:

> Hello!
>
> I don't think that we're able to use index with IN () clauses. Please
> convert it into OR clauses.
>
> Please see
> https://apacheignite-sql.readme.io/docs/performance-and-debugging#section-sql-performance-and-usability-considerations
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> пн, 3 сент. 2018 г. в 12:46, Andrey Mashenkov <
> andrey.mashen...@gmail.com>:
>
>> Hi
>>
>> Actually, the first query uses the index on the affinity key, which looks
>> more efficient than the index on the category_id column.
>> The first query can process groups one by one and stream partial
>> results from the map phase to the reduce phase, as it uses a sorted index
>> lookup, while the second query must process the full dataset on the map
>> phase before passing it on for reducing.
>>
>> Try to use composite index (customer_id, category_id).
>>
>> Also, SqlQueryFields.setCollocated(true) flag can help Ignite to
>> build more efficient plan when group by on collocated column is used.
>>
>> On Sun, Sep 2, 2018 at 2:02 AM eugene miretsky <
>> eugene.miret...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> Schema:
>>>
>>>-
>>>
>>>PUBLIC.GATABLE2.CUSTOMER_ID
>>>
>>>PUBLIC.GATABLE2.DT
>>>
>>>PUBLIC.GATABLE2.CATEGORY_ID
>>>
>>>PUBLIC.GATABLE2.VERTICAL_ID
>>>
>>>PUBLIC.GATABLE2.SERVICE
>>>
>>>PUBLIC.GATABLE2.PRODUCT_VIEWS_APP
>>>
>>>PUBLIC.GATABLE2.PRODUCT_CLICKS_APP
>>>
>>>PUBLIC.GATABLE2.PRODUCT_VIEWS_WEB
>>>
>>>PUBLIC.GATABLE2.PRODUCT_CLICKS_WEB
>>>
>>>PUBLIC.GATABLE2.PDP_SESSIONS_APP
>>>
>>>PUBLIC.GATABLE2.PDP_SESSIONS_WEB
>>>- pkey = customer_id,dt
>>>- affinityKey = customer
>>>
>>> Query:
>>>
>>>- select COUNT(*) FROM( Select customer_id from GATABLE2 where
>>>category_id in (175925, 101450, 9005, 175930, 175930, 
>>> 175940,175945,101450,
>>>6453) group by customer_id having SUM(product_views_app) > 2 OR
>>>SUM(product_clicks_app) > 1 )
>>>
>>> The table has 600M rows.
>> At first, the query took 1m; when we added an index on category_id the
>> query started taking 3m.

Re: Query 3x slower with index

2018-09-18 Thread eugene miretsky
Hi Ilya,

The different query result was my mistake - one of the category_ids was
a duplicate, so in the query that used the join, it counted rows for that
category twice. My apologies.

However, we are still having an issue with query time, and the index not
being applied to category_id. Would appreciate if you could take a look.

Cheers,
Eugene

On Mon, Sep 17, 2018 at 9:15 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> Why don't you diff the results of those two queries, tell us what the
> difference is?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> пн, 17 сент. 2018 г. в 16:08, eugene miretsky :
>
>> Hello,
>>
>> Just wanted to see if anybody had time to look into this.
>>
>> Cheers,
>> Eugene
>>
>> On Wed, Sep 12, 2018 at 6:29 PM eugene miretsky <
>> eugene.miret...@gmail.com> wrote:
>>
>>> Thanks!
>>>
>>> Tried joining with an inlined table instead of IN as per the second
>>> suggestion, and it didn't quite work.
>>>
>>> Query1:
>>>
>>>- Select COUNT(*) FROM( Select customer_id from GATABLE3  use Index(
>>>) where category_id in (9005, 175930, 175930, 175940,175945,101450, 6453)
>>>group by customer_id having SUM(product_views_app) > 2 OR
>>>SUM(product_clicks_app) > 1 )
>>>- exec time = 17s
>>>- *Result: 3105868*
>>>- Same exec time if using AFFINITY_KEY index or "_key_PK_hash or
>>>customer_id index
>>>- Using an index on category_id increases the query time 33s
>>>
>>> Query2:
>>>
>>>- Select COUNT(*) FROM( Select customer_id from GATABLE3 ga  use
>>>index (PUBLIC."_key_PK") inner join table(category_id int = (9005, 
>>> 175930,
>>>175930, 175940,175945,101450, 6453)) cats on cats.category_id =
>>>ga.category_id   group by customer_id having SUM(product_views_app) > 2 
>>> OR
>>>SUM(product_clicks_app) > 1 )
>>>- exec time = 38s
>>>- *Result: 3113921*
>>>- Same exec time if using AFFINITY_KEY index or "_key_PK_hash or
>>>customer_id index or category_id index
>>>- Using an index on category_id doesn't change the run time
>>>
>>> Query plans are attached.
>>>
>>> 3 questions:
>>>
>>>1. Why is the result different for the 2 queries - this is quite
>>>concerning.
>>>2. Why is the 2nd query taking longer
>>>3. Why  category_id index doesn't work in case of query 2.
>>>
>>>
>>> On Wed, Sep 5, 2018 at 8:31 AM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
 Hello!

 I don't think that we're able to use index with IN () clauses. Please
 convert it into OR clauses.

 Please see
 https://apacheignite-sql.readme.io/docs/performance-and-debugging#section-sql-performance-and-usability-considerations

 Regards,
 --
 Ilya Kasnacheev


 пн, 3 сент. 2018 г. в 12:46, Andrey Mashenkov <
 andrey.mashen...@gmail.com>:

> Hi
>
> Actually, the first query uses the index on the affinity key, which looks
> more efficient than the index on the category_id column.
> The first query can process groups one by one and stream partial
> results from the map phase to the reduce phase, as it uses a sorted index
> lookup, while the second query must process the full dataset on the map
> phase before passing it on for reducing.
>
> Try to use composite index (customer_id, category_id).
>
> Also, SqlQueryFields.setCollocated(true) flag can help Ignite to build
> more efficient plan when group by on collocated column is used.
>
> On Sun, Sep 2, 2018 at 2:02 AM eugene miretsky <
> eugene.miret...@gmail.com> wrote:
>
>> Hello,
>>
>> Schema:
>>
>>-
>>
>>PUBLIC.GATABLE2.CUSTOMER_ID
>>
>>PUBLIC.GATABLE2.DT
>>
>>PUBLIC.GATABLE2.CATEGORY_ID
>>
>>PUBLIC.GATABLE2.VERTICAL_ID
>>
>>PUBLIC.GATABLE2.SERVICE
>>
>>PUBLIC.GATABLE2.PRODUCT_VIEWS_APP
>>
>>PUBLIC.GATABLE2.PRODUCT_CLICKS_APP
>>
>>PUBLIC.GATABLE2.PRODUCT_VIEWS_WEB
>>
>>PUBLIC.GATABLE2.PRODUCT_CLICKS_WEB
>>
>>PUBLIC.GATABLE2.PDP_SESSIONS_APP
>>
>>PUBLIC.GATABLE2.PDP_SESSIONS_WEB
>>- pkey = customer_id,dt
>>- affinityKey = customer
>>
>> Query:
>>
>>- select COUNT(*) FROM( Select customer_id from GATABLE2 where
>>category_id in (175925, 101450, 9005, 175930, 175930, 
>> 175940,175945,101450,
>>6453) group by customer_id having SUM(product_views_app) > 2 OR
>>SUM(product_clicks_app) > 1 )
>>
>> The table has 600M rows.
>> At first, the query took 1m; when we added an index on category_id
>> the query started taking 3m.
>>
>> The SQL execution plan for both queries is attached.
>>
>> We are using a single x1.16xlarge insntace with query parallelism
>> set to 32
>>
>> Cheers,
>> Eugene
>>
>>
>
> --
> Best regards,
> Andrey V. Mashenkov
>



Re: How much heap to allocate

2018-09-18 Thread eugene miretsky
My understanding is that lazy loading doesn't work with group_by.

On Tue, Sep 18, 2018 at 10:11 AM Mikhail 
wrote:

> Hi Eugene,
>
> >For #2: wouldn't H2 need to bring the data into the heap to make the
> queries?
> > Or at least some of the date to do the group_by and sum operation?
>
> Yes, Ignite will bring data from off-heap to heap; if the data set is
> too big for heap memory, you need to set the lazy flag for your query:
>
> https://apacheignite-sql.readme.io/docs/performance-and-debugging#result-set-lazy-load
>
> Thanks,
> Mike.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Configurations precedence and consistency across the cluster

2018-09-18 Thread eugene miretsky
Thanks!

A few clarifications:
1) "The first configuration with given cache name will be applied to all
nodes" - what do you mean by the first configuration? The configuration of
the first node that was started? Is there a gossip/consensus protocol that
syncs the cache configs across the cluster?
2) We are using an XML configuration file instead of IgniteCache.withX;
will it work similarly?

For example, I have the following configuration files on the client and
server node:
1) What settings will be applied when I create a SQL table with
template test_template from the client?
2) What will happen if I start another client with different settings?

*Client:*

(The client's cache template XML was stripped when the message was archived.)
*Server*

(The server's cache template XML was stripped when the message was archived.)

On Tue, Sep 18, 2018 at 10:01 AM akurbanov  wrote:

> Hello Eugene,
>
> 1. Dynamic cache configuration changes are not supported, except properties
> that may be overridden with IgniteCache.withX.
> 2. The first configuration with given cache name will be applied to all
> nodes. You can use the same IgniteCache.withX to put with different expiry
> policies per each client. Also you can configure different near cache
> configurations that will take effect only for operations on client where it
> was configured.
>
> Regards,
> Anton
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How much heap to allocate

2018-09-18 Thread Mikhail
Hi Eugene,

>For #2: wouldn't H2 need to bring the data into the heap to make the
queries?
> Or at least some of the date to do the group_by and sum operation? 

Yes, Ignite will bring data from off-heap to heap; if the data set is
too big for heap memory, you need to set the lazy flag for your query:
https://apacheignite-sql.readme.io/docs/performance-and-debugging#result-set-lazy-load

Thanks,
Mike.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Configurations precedence and consistency across the cluster

2018-09-18 Thread akurbanov
Hello Eugene,

1. Dynamic cache configuration changes are not supported, except properties
that may be overridden with IgniteCache.withX.
2. The first configuration with given cache name will be applied to all
nodes. You can use the same IgniteCache.withX to put with different expiry
policies per each client. Also you can configure different near cache
configurations that will take effect only for operations on client where it
was configured.

Regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error while loading data to cache

2018-09-18 Thread Skollur
Your suggestion is working. Thank you.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Too Many open files (native persistence bin files)

2018-09-18 Thread Evgenii Zhuravlev
Hi,

How many caches do you have?

Evgenii

вт, 18 сент. 2018 г. в 15:38, kvenkatramtreddy :

> One correction: lsof shows information for threads as well, which is
> why it is showing (number of threads) * (file descriptors).
>
> My current ulimit is 32635 and open files are around 6000, but we are
> still getting "too many open files".
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ScanQuery throwing Exception for java Thin client while peerclassloading is enabled

2018-09-18 Thread Ilya Kasnacheev
Hello!

Have you tried cache.withKeepBinary().query(yourScanQuery)? This will
avoid demarshalling of binary objects in the predicate.
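A hedged sketch of that suggestion (the thin-client cache handle is assumed, not from the thread):

```java
// Sketch: scan through a keep-binary view so values stay as BinaryObject and
// the server never needs the value class (IgniteValueClass) on its classpath.
ClientCache<String, BinaryObject> binCache = cache.withKeepBinary();

try (QueryCursor<Cache.Entry<String, BinaryObject>> cursor =
         binCache.query(new ScanQuery<>())) {
    for (Cache.Entry<String, BinaryObject> e : cursor) {
        String val1 = e.getValue().field("val1"); // read a field without deserializing
    }
}
```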

Regards,
-- 
Ilya Kasnacheev


вт, 18 сент. 2018 г. в 13:29, Saby :

> Hi Ilya,
> I tested it with the nightly build of 17/09/2018 (ver.
> 2.7.0.20180917#19700101-sha1:DEV), but I am getting the same exception. I
> get the following exception while trying to fetch data from the cache
> using a ScanQuery from the Java thin client, while SqlQuery gives the
> correct result. ScanQuery expects the cached class's definition on the
> server classpath.
>
> The definition of the cached class is:
>
> public class IgniteValueClass implements Binarylizable, Serializable {
> private static final long serialVersionUID = 50283244369719L;
> @QuerySqlField(index = true)
> String id;
> @QuerySqlField(index = true)
> String val1;
>
> @QuerySqlField
> String val2;
>
> public String getId() {
> return id;
> }
>
> public void setId(String id) {
> this.id = id;
> }
>
> public String getVal1() {
> return val1;
> }
>
> public void setVal1(String val1) {
> this.val1 = val1;
> }
>
> public String getVal2() {
> return val2;
> }
>
> public void setVal2(String val2) {
> this.val2 = val2;
> }
>
> @Override
> public boolean equals(Object obj) {
> if (obj == null)
> return false;
> IgniteValueClass newObj = (IgniteValueClass) obj;
> if (newObj == this)
> return true;
> return check(id, newObj.id) && check(val1, newObj.val1) &&
> check(val2,
> newObj.val2);
> }
>
> private boolean check(String v1, String v2) {
> if (v1 == v2 || (v1 != null && v1.equals(v2)))
> return true;
> return false;
> }
>
> @Override
> public void writeBinary(BinaryWriter writer) throws
> BinaryObjectException {
> writer.writeString("id", id);
> writer.writeString("val1", val1);
> writer.writeString("val2", val2);
>
> }
>
> @Override
> public void readBinary(BinaryReader reader) throws
> BinaryObjectException {
> id = reader.readString("id");
> val1 = reader.readString("val1");
> val2 = reader.readString("val2");
>
> }
>
> @Override
> public String toString() {
> return "ID:" + id + " Val1:" + val1 + " Val2:" + val2;
> }
> }
>
> The server is throwing this exception:
>
> [13:05:12,104][SEVERE][client-connector-#75][ClientListenerNioListener]
> Failed to process client request
>
> [req=o.a.i.i.processors.platform.client.cache.ClientCacheScanQueryRequest@490c852e
> ]
> class org.apache.ignite.binary.BinaryInvalidTypeException:
> xxx.imdg.ignite.test.IgniteValueClass
> at
>
> org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:707)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1757)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> at
>
> org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:798)
> at
>
> org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:143)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:177)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:39)
> at
>
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.advance(GridCacheQueryManager.java:3082)
> at
>
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.onHasNext(GridCacheQueryManager.java:2984)
> at
>
> org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
> at
>
> org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
> at
>
> org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryCursor.writePage(ClientCacheQueryCursor.java:77)
> at
>
> org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryResponse.encode(ClientCacheQueryResponse.java:50)
> at
>
> org.apache.ignite.internal.processors.platform.client.ClientMessageParser.encode(ClientMessageParser.java:387)
> at
>
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:172)
> at
>
> o

Re: Message grid failure due to userVersion setting

2018-09-18 Thread Ilya Kasnacheev
Hello!

I'm not very familiar with these areas, but if you have a reproducer
project I could take a look.

Regards,
-- 
Ilya Kasnacheev


Mon, 17 Sep 2018, 19:32, Dave Harvey:

> I probably did not explain this clearly. When sending a message from
> server to client using the message grid, from a context unrelated to any
> client call, the server, as you would expect, uses its installed libraries
> and userVersion 0. For some reason, when the client receives this message,
> it requires that the user version match its current user version.
>
> The use case is that we have a stable set of libraries on the server, and
> the server wants to send a topic-based message to the client, using only
> the type "String". Unrelated to this, the client is using the compute
> grid, where P2P is used, but that is interfering with basic functionality.
> This, IGNITE-7905, and the paucity of results when I google for "ignite
> userVersion" make it clear that shooting down classes in CONTINUOUS mode
> with userVersion is not completely thought through. We certainly never
> want to set a userVersion on the servers.
>
> The documentation for P2P says:
> "
>
>1. Ignite will check if class is available on local classpath (i.e. if
>it was loaded at system startup), and if it was, it will be returned. No
>class loading from a peer node will take place in this case."
>
> Clearly, java.lang.String is on the local classpath. So it seems like a
> user version mismatch should not be a reason to reject a class that is on
> the local classpath.
>
> On Mon, Sep 17, 2018 at 11:01 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> I think that Ignite cannot unload old version of code, unless it is
>> loaded with something like URI deployment module.
>> Version checking is there but server can't get rid of old code if it's on
>> classpath.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> пн, 17 сент. 2018 г. в 16:47, Dave Harvey :
>>
>>> We have a client that uses the compute grid and message grid, as well as
>>> the discovery API.  It communicates with a server plugin.   The cluster is
>>> configured for CONTINUOUS peer class loading.  In order to force the proper
>>> code to be loaded for the compute tasks, we change the user version, e.g.,
>>> to 2.
>>>
>>> If the server sends the client a message on the message grid, using
>>> java.lang.String, the client fails because the user version sent for
>>> java.lang.String is 0, but the client insists on 2.
>>>
>>> How is this supposed to work?   Our expectation was that the message
>>> grid should not be affected by peer class loading settings.
>>>
>>>
>>>
>>>
>>> *Disclaimer*
>>>
>>> The information contained in this communication from the sender is
>>> confidential. It is intended solely for use by the recipient and others
>>> authorized to receive it. If you are not the recipient, you are hereby
>>> notified that any disclosure, copying, distribution or taking action in
>>> relation of the contents of this information is strictly prohibited and may
>>> be unlawful.
>>>
>>> This email has been scanned for viruses and malware, and may have been
>>> automatically archived by *Mimecast Ltd*, an innovator in Software as a
>>> Service (SaaS) for business. Providing a *safer* and *more useful*
>>> place for your human generated data. Specializing in; Security, archiving
>>> and compliance. To find out more Click Here
>>> .
>>>
>>
>
>


Re: Too Many open files (native persistence bin files)

2018-09-18 Thread kvenkatramtreddy
One correction: lsof shows information for threads as well, which is why
it reports the number of threads multiplied by the number of file
descriptors.

My current ulimit is 32635 and open files are around 6000, but we are
still getting "too many open files" errors.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to use Affinity Function to Map a set of Keys to a particular node in a cluster?

2018-09-18 Thread Mikhail
Hi

You can gather your data by @AffinityKeyMapped into one partition, but you
cannot force a partition to be stored on a particular node.
So you can use the same @AffinityKeyMapped value for all data related to a
particular city, and this means all data about and/or related to this city
will be on one particular node. However, this also means that data related
to London and Zurich might end up in the same partition.
Anyway, if your list of cities is >1000, the distribution will be fair
enough. In the case of a small set of cities, you can try to implement your
own affinity function, but this isn't an easy task and can take significant
time.
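To make the collocation concrete, here is a sketch of a key class using @AffinityKeyMapped (the class and field names are hypothetical):

```java
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class SensorKey {
    /** Unique part of the key. */
    private final String sensorId;

    /** Collocation field: every key with the same city value is mapped to
     *  the same partition, and therefore to the same primary node. */
    @AffinityKeyMapped
    private final String city;

    public SensorKey(String sensorId, String city) {
        this.sensorId = sensorId;
        this.city = city;
    }
}
```

A real key class would also need proper equals()/hashCode() semantics; this only illustrates where the annotation goes.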

Thanks,
Mike.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error while loading data to cache

2018-09-18 Thread Ilya Kasnacheev
Hello!

Alternatively, you can set the `storeKeepBinary` cache configuration
setting to false.
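If changing the cache configuration is not an option, the predicate itself can handle binary objects. A sketch follows (the cache name and the `Customer` id field come from the question; everything else is an assumption):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.lang.IgniteBiPredicate;

public class LoadCacheSketch {
    public static void load(Ignite ignite) {
        IgniteCache<Object, Object> cache = ignite.cache("CustomerCache");

        cache.loadCache(new IgniteBiPredicate<Object, Object>() {
            @Override public boolean apply(Object key, Object val) {
                // With storeKeepBinary enabled the store hands the predicate
                // BinaryObjects, so read the field through the binary API
                // instead of casting to Customer.
                if (val instanceof BinaryObject)
                    return ((BinaryObject) val).<Integer>field("id") == 11;

                return false; // unexpected value type
            }
        });
    }
}
```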

Regards,
-- 
Ilya Kasnacheev


Mon, 17 Sep 2018, 19:59, Skollur:

> I am trying to load data into the cache using the code below and seeing
> an error.
>
> ignite.cache("CustomerCache").loadCache(new IgniteBiPredicate<Object, Object>() {
> @Override
> public boolean apply(Object key, Object value) {
> return ((Customer) value).getId() == 11;
> }
> });
>
> ERROR:-
>
> Caused by: java.lang.ClassCastException:
> org.apache.ignite.internal.binary.BinaryObjectImpl cannot be cast to
> com.idb.cache.model.Customer
> at
> com.idb.cache.load.LoadIdbIgniteCaches$1.apply(LoadIdbIgniteCaches.java:36)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.loadEntry(GridDhtCacheAdapter.java:639)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Hibernate spring XML file to connect ignite database

2018-09-18 Thread Ilya Kasnacheev
Hello!

You need to add ignite-core-VERSION.jar to the classpath.

If that fails, specify org.apache.ignite.IgniteJdbcThinDriver as driver
class explicitly.
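A sketch of both suggestions combined (it assumes ignite-core is on the classpath and a node with the client connector is listening on the default port of 127.0.0.1):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThinJdbcCheck {
    public static void main(String[] args) throws Exception {
        // Register the driver explicitly, in case automatic
        // java.sql.Driver discovery does not pick it up.
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        try (Connection conn =
                 DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println(rs.getInt(1)); // 1 when the connection works
        }
    }
}
```

If `Class.forName` throws ClassNotFoundException, the jar is missing from the classpath, which is the usual cause of "No suitable driver found".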

Regards,
-- 
Ilya Kasnacheev


Tue, 18 Sep 2018, 13:48, Malashree:

> Hibernate Spring XML file to connect to the Apache Ignite database using
> the JDBC driver.
>
> When I connect for the first time from the Maven application, the
> connection is lost with the error:
> No suitable driver found for jdbc:ignite:thin://127.0.0.1/
> Please help me resolve these issues.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: what is the Apache Ignite Dialect

2018-09-18 Thread Ilya Kasnacheev
Hello!

Please take a look at this SO post (top answer):
https://stackoverflow.com/questions/2085368/difference-between-database-drivers-and-database-dialects

Basically, it's a set of do's and don'ts for Hibernate to mind when
querying Ignite.

Regards,
-- 
Ilya Kasnacheev


Tue, 18 Sep 2018, 13:51, Malashree:

> What is the Apache Ignite Hibernate dialect to be used in a Java Spring
> MVC application?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Too Many open files (native persistence bin files)

2018-09-18 Thread kvenkatramtreddy
Hi Team,

We have hit a production issue using Ignite. According to lsof (file
descriptors), there are around 1 million open files for the given process,
all used by Ignite.

I am running Ignite embedded in my web application, with native
persistence enabled and 3 nodes in replicated cache mode.

I am using ignite-direct-io as well.

We are running our web app on WebSphere Liberty Profile.

When I run: lsof -p  | grep
cache-turnAroundProcesses/part-31.bin | wc -l

the result is 189, i.e. one .bin file is open around 189 times.


openfiles_bin.txt
  

Thanks & Regards,
Venkat









--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


what is the Apache Ignite Dialect

2018-09-18 Thread Malashree
What is the Apache Ignite Hibernate dialect to be used in a Java Spring
MVC application?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Hibernate spring XML file to connect ignite database

2018-09-18 Thread Malashree
Hibernate Spring XML file to connect to the Apache Ignite database using
the JDBC driver.

When I connect for the first time from the Maven application, the
connection is lost with the error:
No suitable driver found for jdbc:ignite:thin://127.0.0.1/
Please help me resolve these issues.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ScanQuery throwing Exception for java Thin client while peerclassloading is enabled

2018-09-18 Thread Saby
Hi Ilya,
I tested it with the nightly build of 17/09/2018 (ver.
2.7.0.20180917#19700101-sha1:DEV), but I am getting the same exception
while trying to fetch data from the cache using ScanQuery from the Java
thin client, while SqlQuery gives the correct result. ScanQuery expects
the cached class's definition on the server classpath.

The definition of the cached class is-

public class IgniteValueClass implements Binarylizable, Serializable {
private static final long serialVersionUID = 50283244369719L;
@QuerySqlField(index = true)
String id;
@QuerySqlField(index = true)
String val1;

@QuerySqlField
String val2;

public String getId() {
return id;
}

public void setId(String id) {
this.id = id;
}

public String getVal1() {
return val1;
}

public void setVal1(String val1) {
this.val1 = val1;
}

public String getVal2() {
return val2;
}

public void setVal2(String val2) {
this.val2 = val2;
}

@Override
public boolean equals(Object obj) {
if (obj == null)
return false;
IgniteValueClass newObj = (IgniteValueClass) obj;
if (newObj == this)
return true;
return check(id, newObj.id) && check(val1, newObj.val1) && 
check(val2,
newObj.val2);
}

private boolean check(String v1, String v2) {
if (v1 == v2 || (v1 != null && v1.equals(v2)))
return true;
return false;
}

@Override
public void writeBinary(BinaryWriter writer) throws 
BinaryObjectException {
writer.writeString("id", id);
writer.writeString("val1", val1);
writer.writeString("val2", val2);

}

@Override
public void readBinary(BinaryReader reader) throws 
BinaryObjectException {
id = reader.readString("id");
val1 = reader.readString("val1");
val2 = reader.readString("val2");

}

@Override
public String toString() {
return "ID:" + id + " Val1:" + val1 + " Val2:" + val2;
}
}

The server is throwing this exception:

[13:05:12,104][SEVERE][client-connector-#75][ClientListenerNioListener]
Failed to process client request
[req=o.a.i.i.processors.platform.client.cache.ClientCacheScanQueryRequest@490c852e]
class org.apache.ignite.binary.BinaryInvalidTypeException:
xxx.imdg.ignite.test.IgniteValueClass
at
org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:707)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1757)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
at
org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:798)
at
org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:143)
at
org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:177)
at
org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:39)
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.advance(GridCacheQueryManager.java:3082)
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.onHasNext(GridCacheQueryManager.java:2984)
at
org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
at
org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
at
org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryCursor.writePage(ClientCacheQueryCursor.java:77)
at
org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryResponse.encode(ClientCacheQueryResponse.java:50)
at
org.apache.ignite.internal.processors.platform.client.ClientMessageParser.encode(ClientMessageParser.java:387)
at
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:172)
at
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:45)
at
org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at
org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97

Re: Performance of SQL query by partial primary key

2018-09-18 Thread Юрий
Hi Ray,

You are right: currently a composite PK can't be used as an index, and the
performance of your query is poor due to a full scan.
After IGNITE-8386 is merged, the same query will start using the PK index.

Tue, 18 Sep 2018, 8:16, Ray:

> To answer my own question here, basically the index created on PK now is
> useless according to this ticket.
> https://issues.apache.org/jira/browse/IGNITE-8386
>
> Ignite will perform a whole table scan when trying to execute the SQL query
> I posted above unless an index identical to PK is created manually.
>
> Please correct me if I understand it wrong.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Live with a smile! :D


Is there a way to use Ignite optimization and Spark optimization together when using Spark Dataframe API?

2018-09-18 Thread Ray
Currently, OPTION_DISABLE_SPARK_SQL_OPTIMIZATION option can only be set on
spark session level.
It means I can only have Ignite optimization or Spark optimization for one
Spark job.

Let's say I want to load data into spark memory with pushdown filters using
Ignite optimization.
For example, I want to load one day's data using this sql "select * from
tableA where date = '2018-09-01'".
With Ignite optimization, this sql is executed on Ignite and the where
clause filter is applied on Ignite.
But with Spark optimization, all the data in this table will be loaded into
Spark memory and do filter later.

Then I want to join the filtered tableA with a filtered tableB, which is
also loaded from Ignite.
But I want to use Spark's join feature to do the join, because both
filtered tables contain millions of rows and Ignite is not optimized for
joins.
How can I do that?
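For reference, a sketch of how the session-level switch is set today through the ignite-spark integration (the option and format constants come from IgniteDataFrameSettings; the app name, config file path, and table name are placeholders). Because the option sits on the SparkSession, it cannot vary between two reads in the same session:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import org.apache.ignite.spark.IgniteDataFrameSettings;

public class IgniteSparkReadSketch {
    public static void main(String[] args) {
        // The optimization switch lives on the SparkSession, so it applies
        // to every Ignite dataframe read performed through this session.
        SparkSession spark = SparkSession.builder()
            .appName("ignite-read")
            .master("local")
            .config(IgniteDataFrameSettings.OPTION_DISABLE_SPARK_SQL_OPTIMIZATION(), true)
            .getOrCreate();

        Dataset<Row> tableA = spark.read()
            .format(IgniteDataFrameSettings.FORMAT_IGNITE())
            .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), "ignite-config.xml")
            .option(IgniteDataFrameSettings.OPTION_TABLE(), "tableA")
            .load()
            // Pushed down to Ignite only when Ignite optimization is enabled;
            // otherwise Spark loads the table and filters in memory.
            .filter("date = '2018-09-01'");
    }
}
```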



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to start an Ignite cluster with fixed number of servers?

2018-09-18 Thread Denis Mekhanikov
Ray,

You can use SSL authentication to prevent nodes that don't have a
corresponding certificate from connecting to the cluster:
https://apacheignite.readme.io/docs/ssltls
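A minimal sketch of that setup (keystore paths and passwords are placeholders): with a trust store that contains only your own CA, a node without a matching certificate cannot complete the TLS handshake and so cannot join.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.ssl.SslContextFactory;

public class SslNodeSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Each legitimate server node gets a certificate trusted via
        // trust.jks; unknown nodes are rejected during the handshake.
        SslContextFactory ssl = new SslContextFactory();
        ssl.setKeyStoreFilePath("/path/to/server.jks");
        ssl.setKeyStorePassword("changeit".toCharArray());
        ssl.setTrustStoreFilePath("/path/to/trust.jks");
        ssl.setTrustStorePassword("changeit".toCharArray());

        cfg.setSslContextFactory(ssl);

        Ignite ignite = Ignition.start(cfg);
    }
}
```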

Denis

Tue, 18 Sep 2018, 7:49, Ray:

> Let's say I want to start an Ignite cluster of three server nodes with
> fixed IP addresses and prevent other servers from joining the cluster as
> Ignite server nodes. How can I do that?
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>