Re: Cache pre-loading question

2017-10-09 Thread franck102
Hi Alex, what do you mean by custom? 

I am indeed using a CacheJdbcPojoStore; it has the method I need:
loadCache(final IgniteBiInClosure clo, @Nullable Object... args).

However, I can't obtain a reference to the store from my Ignite instance.

I could build an instance of the store outside Ignite, but that means (at
best) maintaining its configuration separately from the rest of the Ignite
config...

What I'd like to do instead is: 


... however that loadCache method exists on the cache store, not on the
cache, and there is no IgniteCache.getStore API :(
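
For reference, a minimal sketch of the closest public alternative, assuming a cache
named "myCache" backed by the CacheJdbcPojoStore above: IgniteCache.loadCache(IgniteBiPredicate,
Object...) forwards its arguments to the configured store's loadCache on every node
that holds the cache.

import org.apache.ignite.IgniteCache;

IgniteCache<Object, Object> cache = ignite.cache("myCache");
// Runs CacheJdbcPojoStore.loadCache(clo, args) on all nodes holding the cache; the
// predicate (null here) only filters which loaded entries are kept, and with no args
// the store loads all configured types with its default queries.
cache.loadCache(null);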

Franck 




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SQL query with string where clause value evaluated as hex string

2017-10-09 Thread chuck.wi...@gm.com
I believe I found my own answer here. After adding the referenced Hibernate
jar to my project, Ignite SQL queries against VIN were no longer interpreted
as hexadecimal strings.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Lock on cache keys during node failures

2017-10-09 Thread naresh.goty
Hi All,

We would like to understand the lock behavior on cache items, based on the
code snippet below:

1. What happens if the key "fake" in cache1 is lost (due to a node failure,
partition data loss, etc.) after the lock is acquired and before it is
released?
2. What happens if another thread (t2) tries to acquire the lock on the key
"fake" in cache1 after the lock was acquired by thread t1, the key was lost,
and before the lock was released by t1?
3. What are the resiliency guarantees for a lock on a cache key?


IgniteCache cache1 = ignite.getOrCreateCache(getConfig("cache1"));

// Obtain a distributed lock handle for the key, acquire it before the try block,
// and always release it in finally.
Lock lock = cache1.lock("fake");
lock.lock();
try {
   // do something while holding the lock on "fake"
} finally {
   lock.unlock();
}

Thanks,
Naresh   





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


BackupFilter for the RendezvousAffinityFunction Questions

2017-10-09 Thread Chris Berry
Hi,

I have 2 availability zones (AZs), and an Ignite Grid that spans them.
I have implemented a BackupFilter for the RendezvousAffinityFunction, which
attempts to keep the Primary and Backups balanced.

In other words, if I have 1 Primary and 3 Backups (for a PARTITIONED cache)
across 16 Nodes (8 per AZ), then I will have 4 copies of the data – with 2
copies in each AZ.

This way I can lose an entire AZ – for maintenance or whatever – and be able
to withstand it.
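
For reference, a minimal sketch of the kind of AZ-aware filter described above,
assuming 3 backups and that every node is started with a user attribute named
"AVAILABILITY_ZONE" (an illustrative name), using the setAffinityBackupFilter
variant that sees all owners already chosen for a partition:

import java.util.List;
import java.util.Objects;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("myCache");
cacheCfg.setBackups(3);

RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
// The filter is called per backup candidate; the second argument holds the owners
// (primary plus earlier backups) already chosen for the partition.
aff.setAffinityBackupFilter((ClusterNode candidate, List<ClusterNode> chosen) -> {
    Object az = candidate.attribute("AVAILABILITY_ZONE");
    long sameAz = chosen.stream()
        .filter(n -> Objects.equals(n.attribute("AVAILABILITY_ZONE"), az))
        .count();
    return sameAz < 2; // allow at most 2 of the 4 copies in any single AZ
});
cacheCfg.setAffinityFunction(aff);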

My questions:

1) By messing with the RendezvousAffinityFunction, am I messing with the
Cache Affinity?? (I believe not?)
We have many caches – and they all use the same cache keys (the same set of
UUIDs – imagine a User Id), which ensures that all data affiliated with a
particular UUID lives on the same Node and is thus collocated in the Compute
Grid.
This is essential to our system’s performance, and we want to be certain that
we are not affecting it by implementing the BackupFilter.

2) How can I visualize the distribution of cache data in the Primary &
Backups??
I’d love to determine if all of this is working as expected.
Even being able to dump this to a log would be helpful.
Better would be something similar to how Elasticsearch can show you the
Shard distribution across Nodes.
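
For question 2, a small sketch that dumps per-node primary/backup partition counts
through the public Affinity API (the cache name and the AZ attribute are illustrative):

import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

Affinity<Object> aff = ignite.affinity("myCache");
for (ClusterNode node : ignite.cluster().forServers().nodes()) {
    int primaries = aff.primaryPartitions(node).length;
    int backups = aff.backupPartitions(node).length;
    System.out.printf("node=%s az=%s primaries=%d backups=%d%n",
        node.consistentId(), node.attribute("AVAILABILITY_ZONE"), primaries, backups);
}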

Thanks,
-- Chris 




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


SQL query with string where clause value evaluated as hex string

2017-10-09 Thread chuck.wi...@gm.com
I am trying to use the vehicle identification number (VIN) as a field in an
Apache Ignite cache. I am using version 2.2. This string is 17 characters long
and contains alphanumeric characters outside of the hexadecimal digit range. I
am getting errors similar to this thread, where the string in a where clause
is evaluated as a hexadecimal string and fails to return any results. I am
having to concatenate a string that is clearly *not* hexadecimal to get any
values to return.

Caused by: org.h2.jdbc.JdbcSQLException: Hexadecimal string with odd number
of characters:

examples of working queries:
select * from "vin_raw".VEHICLETIMESERIESDATA vs where 'VIN' || vin = 'VIN'
|| '1G1BB5SM1H7130073';
SELECT _key, _val FROM "vin_raw".VEHICLETIMESERIESDATA where 'vin' || vin =
'vin1G1BB5SM1H7130073' and name = 'Foo';
SELECT _key, _val FROM "vin_raw".VEHICLETIMESERIESDATA where  name = 'Foo';
SELECT * FROM "vin_raw".VEHICLETIMESERIESDATA;

examples that fail:
select count(*) from "vin_raw".VEHICLETIMESERIESDATA vs where vin =
'1G1BB5SM1H7130073';
SELECT _key, _val FROM "vin_raw".VEHICLETIMESERIESDATA where vin =
'1G1BB5SM1H7130073' and name = 'Foo';

Table fields: vin, name, lastModifiedTimeMillis

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data (that is inserted via JDBC) is not persisted after cluster restart

2017-10-09 Thread blackfield
The config file: IgniteConfig.xml

No source code is provided, as the issue is reproducible from a SQL tool such
as DBeaver.

As for the logs, I have not configured ignite-log4j.xml (it is still the
default). Do you want me to configure it differently? Also, I believe the
default log dir is $IGNITE_HOME/work/logs?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance of persistent store too low when bulk-loading

2017-10-09 Thread Ray
I also have the same problem here.
I'm using ignite-spark's savePairs method (which uses an IgniteDataStreamer
underneath) to ingest 550 million entries into Ignite, and with the persistent
store enabled the ingestion speed slowed down after the first few minutes and
now seems stuck.
My setup is 4 nodes with 16GB heap size and 32GB off heap size.
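
For comparison, a rough plain-Java sketch of a streamer-based ingestion path similar
to what savePairs does (cache name, key/value types and the config file are made up
for illustration):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

try (Ignite ignite = Ignition.start("client-config.xml");
     IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
    streamer.allowOverwrite(false);         // fastest mode; existing keys are skipped
    streamer.perNodeParallelOperations(8);  // parallel batches per node
    for (long i = 0; i < 550_000_000L; i++)
        streamer.addData(i, "value-" + i);
}   // close() flushes any remaining buffered entries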

Here's the dstat output for about a minute while the ingestion is stuck.
----total-cpu-usage---- ------memory-usage----- -dsk/total- ---paging-- ---swap--- --filesystem-
usr sys idl wai hiq siq| used  buff  cach  free| read  writ|  in   out | used  free| files inodes
  1   0  99   0   0   0|89.9G 3970M  157G  507G|  30k  967k|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    12M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0  9548k|   0     0 |   0     0 | 8176   453k
  0   0  96   3   0   0|89.9G 3970M  157G  507G|   0    18M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    17M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    17M|   0     0 |   0     0 | 8176   453k
  0   0  96   3   0   0|89.9G 3970M  157G  507G|   0    16M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    15M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    17M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    16M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    18M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    17M|   0     0 |   0     0 | 8176   453k
  2   1  94   3   0   0|89.9G 3970M  157G  507G|   0    15M|   0     0 |   0     0 | 8176   453k
  1   0  96   3   0   0|89.9G 3970M  157G  507G|   0    24M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    25M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    20M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    26M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    16M|   0     0 |   0     0 | 8176   453k
  0   0  96   3   0   0|89.9G 3970M  157G  507G|   0    10M|   0     0 |   0     0 | 8176   453k
  1   0  96   3   0   0|89.9G 3970M  157G  507G|   0    24M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    20M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    25M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    31M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    23M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    21M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    21M|   0     0 |   0     0 | 8176   453k
  0   0  96   3   0   0|89.9G 3970M  157G  507G|   0    27M|   0     0 |   0     0 | 8176   453k
  0   0  96   3   0   0|89.9G 3970M  157G  507G|   0    24M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    27M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    16M|   0     0 |   0     0 | 8176   453k
  0   0  96   3   0   0|89.9G 3970M  157G  507G|   0    20M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    28M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    22M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    24M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    27M|   0     0 |   0     0 | 8176   453k
  1   0  96   3   0   0|89.9G 3970M  157G  507G|   0    27M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    24M|   0     0 |   0     0 | 8176   453k
  1   0  96   3   0   0|89.9G 3970M  157G  507G|   0    29M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    27M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    23M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    17M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    62M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    16M|   0     0 |   0     0 | 8176   453k
  0   0  96   3   0   0|89.9G 3970M  157G  507G|   0    17M|   0     0 |   0     0 | 8176   453k
  0   0  97   3   0   0|89.9G 3970M  157G  507G|   0    20M|   0 

Re: (Cross Platform) C# wrapper for Java CacheStoreAdapter for database persistence

2017-10-09 Thread JP
Could you read this topic?
http://apache-ignite-users.70518.x6.nabble.com/Persistence-store-MSSQL-using-cross-platform-c-Ignite-Client-windows-and-java-Ignite-Server-linux-ve-td17238.html

Actually, I am looking to use a Java CacheStore in C#, not a .NET CacheStore.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache pre-loading question

2017-10-09 Thread afedotov
Hi Franck,

Yes, you would need to implement the same mapping logic in the client that
populates the cache via a data streamer.
A custom CacheStore or CacheJdbcPojoStore is an easier approach when there is
an underlying DB, especially if you also need read-through/write-through
semantics.
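
A minimal configuration sketch of that approach (the table, field names, data source
bean and the Person value class are made up for illustration):

import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.configuration.CacheConfiguration;

CacheJdbcPojoStoreFactory<Long, Person> storeFactory = new CacheJdbcPojoStoreFactory<>();
storeFactory.setDataSourceBean("myDataSource"); // javax.sql.DataSource bean from the Spring config

JdbcType personType = new JdbcType();
personType.setCacheName("personCache");
personType.setDatabaseSchema("PUBLIC");
personType.setDatabaseTable("PERSON");
personType.setKeyType("java.lang.Long");
personType.setKeyFields(new JdbcTypeField(java.sql.Types.BIGINT, "ID", Long.class, "id"));
personType.setValueType("com.example.Person");
personType.setValueFields(new JdbcTypeField(java.sql.Types.VARCHAR, "NAME", String.class, "name"));
storeFactory.setTypes(personType);

CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");
cfg.setCacheStoreFactory(storeFactory);
cfg.setReadThrough(true);   // cache misses go to the DB through the store
cfg.setWriteThrough(true);  // puts are written to the DB through the store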

Kind regards,
Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: (Cross Platform) C# wrapper for Java CacheStoreAdapter for database persistence

2017-10-09 Thread afedotov
Hi,

Please take a look at the end of the following documentation section.

Kind regards,
Alex




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


(Cross Platform) C# wrapper for Java CacheStoreAdapter for database persistence

2017-10-09 Thread JP
Hi,

I am working on persisting data to the database from the .NET Ignite
client (Windows) through the Ignite server (Ubuntu).

I have created a CacheJdbcPersonStore in Java and deployed it on the Ignite
server (Ubuntu).
https://apacheignite.readme.io/docs/3rd-party-store#section-cachestore-example
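
For context, a trimmed Java-side sketch of such a store (not the exact code from the
docs; the Person class, the table and the connection handling are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class CacheJdbcPersonStore extends CacheStoreAdapter<Long, Person> {
    @Override public Person load(Long key) {
        try (Connection conn = openConnection();
             PreparedStatement st = conn.prepareStatement("SELECT id, name FROM PERSON WHERE id = ?")) {
            st.setLong(1, key);
            try (ResultSet rs = st.executeQuery()) {
                return rs.next() ? new Person(rs.getLong(1), rs.getString(2)) : null;
            }
        }
        catch (Exception e) {
            throw new CacheLoaderException("Failed to load key: " + key, e);
        }
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends Person> entry) {
        // MERGE/UPSERT the entry into the PERSON table (omitted for brevity).
    }

    @Override public void delete(Object key) {
        // DELETE FROM PERSON WHERE id = ? (omitted for brevity).
    }

    private Connection openConnection() throws SQLException {
        // Obtain a JDBC connection; in the docs example this comes from DriverManager.
        return DriverManager.getConnection("jdbc:..."); // illustrative URL
    }
}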

From the .NET Ignite client, I need to configure the CacheStoreFactory (Java
cache store) with write-through and read-through enabled.

So, how do I wrap the Java CacheJdbcPersonStore class in C#?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Possible dead lock when number of jobs exceeds thread pool

2017-10-09 Thread afedotov
Hi Raymond,

Yes, bringing the C# client in sync with the Java version is in the plans.
Of course, the C# client will always lag a bit behind the Java client in terms
of features.

Kind regards,
Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


checkpoint marker is present on disk, but checkpoint record is missed in WAL

2017-10-09 Thread KR Kumar
Hi guys - I am using Ignite persistence with an 8-node cluster, currently in
dev/PoC stages. I get the following exception when I try to restart a node
after I killed the process with "kill". I have a shutdown hook in the code in
which I shut down Ignite with G.stop(false). I read in a blog that when you
stop Ignite with cancel=false, it will checkpoint the data and then stop the
cluster, and there should not be any issues with restart. Any help is greatly
appreciated.
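
For reference, a minimal sketch of that shutdown hook using the public Ignition API
instead of the internal G class (the config path is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

Ignite ignite = Ignition.start("server-config.xml");

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    // cancel = false: wait for in-progress operations to complete instead of cancelling them.
    Ignition.stop(ignite.name(), false);
}));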

Invocation of init method failed; nested exception is class
org.apache.ignite.IgniteCheckedException: Failed to restore memory state
(checkpoint marker is present on disk, but checkpoint record is missed in
WAL) [cpStatus=CheckpointStatus [cpStartTs=1507546382988,
cpStartId=abeb760a-0388-4ad5-8473-62ed9c7bc0f3, startPtr=FileWALPointer
[idx=6, fileOffset=33982453, len=2380345, forceFlush=false],
cpEndId=c257dd1f-c350-4b0d-aefc-cad6d2c2082b, endPtr=FileWALPointer [idx=4,
fileOffset=38761373, len=1586221, forceFlush=false]], lastRead=null]
06:55:09.341 [main] WARN 
org.springframework.context.support.ClassPathXmlApplicationContext -
Exception encountered during context initialization - cancelling refresh
attempt: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name 'igniteContainer' defined in class path resource
[mihi-gridworker-s.xml]: Invocation of init method failed; nested exception
is class org.apache.ignite.IgniteCheckedException: Failed to restore memory
state (checkpoint marker is present on disk, but checkpoint record is missed
in WAL) [cpStatus=CheckpointStatus [cpStartTs=1507546382988,
cpStartId=abeb760a-0388-4ad5-8473-62ed9c7bc0f3, startPtr=FileWALPointer
[idx=6, fileOffset=33982453, len=2380345, forceFlush=false],
cpEndId=c257dd1f-c350-4b0d-aefc-cad6d2c2082b, endPtr=FileWALPointer [idx=4,
fileOffset=38761373, len=1586221, forceFlush=false]], lastRead=null]
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1628)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
at
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
at
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:866)
at
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:542)
at
org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139)
at
org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:83)
at
com.pointillist.gridworker.agent.MihiGridWorker.start(MihiGridWorker.java:32)
at com.pointillist.gridworker.MihiWorker.main(MihiWorker.java:20)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to restore
memory state (checkpoint marker is present on disk, but checkpoint record is
missed in WAL) [cpStatus=CheckpointStatus [cpStartTs=1507546382988,
cpStartId=abeb760a-0388-4ad5-8473-62ed9c7bc0f3, startPtr=FileWALPointer
[idx=6, fileOffset=33982453, len=2380345, forceFlush=false],
cpEndId=c257dd1f-c350-4b0d-aefc-cad6d2c2082b, endPtr=FileWALPointer [idx=4,
fileOffset=38761373, len=1586221, forceFlush=false]], lastRead=null]
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreMemory(GridCacheDatabaseSharedManager.java:1433)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readCheckpointAndRestoreMemory(GridCacheDatabaseSharedManager.java:539)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:616)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1901)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)


I appreciate your help.

Thanks and Regards,
KR Kumar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Why SQL_PUBLIC is appending to Cache name while using JDBC thin driver

2017-10-09 Thread Vladimir Ozerov
Hi Austin,

All caches in Ignite must have unique names. This is why we add a unique
prefix containing the schema name.

On Sat, Oct 7, 2017 at 1:17 PM, austin solomon wrote:

> Hi,
>
> I am using Ignite version 2.2.0, and I have created a table using
> IgniteJdbcThinDriver.
>
> When I checked the cache in Ignite Visor I'm seeing SQL_PUBLIC_{TABLE-NAME}
> is appended.
> Is there a way to get rid of this?
>
> I want to remove the SQL_PUBLIC from the cache name.
>
> Thanks,
> Austin
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Cache pre-loading question

2017-10-09 Thread franck102
Hi all, I am trying to figure out the best approach for pre-loading cache
entries from a SQL DB on startup.

I have read the recommendations to use a client node and a data streamer to
pre-load cache data on startup.

One issue I see with that approach, however, compared with cache.loadCache(),
is that I can't seem to easily reuse the CacheStore.loadCache implementation -
is there any way to do so?

My (JDBC) cache store configuration contains logic to map the DB schema to
the cache schema; it seems that I would need to replicate that logic in the
loader client in order to call streamer.addData()?
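
A minimal sketch of the two paths being compared (cache name, key/value types and
the DB-reading helper are illustrative):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;

// Option 1: delegate to the configured CacheStore (e.g. CacheJdbcPojoStore).
// IgniteCache.loadCache() runs CacheStore.loadCache() on every node holding the
// cache, so the store's DB-to-cache mapping is reused as-is.
IgniteCache<Long, Person> cache = ignite.cache("personCache");
cache.loadCache(null); // null = no filter; optional args are passed through to the store

// Option 2: a client node pushing entries through a data streamer.
// Here the DB-to-POJO mapping has to be re-implemented by the loader itself.
try (IgniteDataStreamer<Long, Person> streamer = ignite.dataStreamer("personCache")) {
    for (Person p : readAllPersonsFromDb()) // hypothetical JDBC helper
        streamer.addData(p.getId(), p);
}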

Thanks!
Franck



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Race in Service Grid deployment?

2017-10-09 Thread Artёm Basov
Hi Denis!

Thank you for the explanation. If this is the case, then I don't think the user
should actually see that this is happening (the exception is handled and a
retry happens?), since the user can't change anything to avoid such messages.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/