Re: Ignite 2.16 entry processors sometimes execute twice

2024-07-12 Thread Вячеслав Коптилин
Hi Raymond,

Besides the answer Pavel provided, please take into account that an entry
processor implementation should avoid generating random values for the
entry being updated.
For example, mutableEntry.setValue(rand.nextInt()) might lead to data
inconsistency.

 * An instance of entry processor must be stateless as it may be
 * invoked multiple times on primary and backup nodes in the cache.
 * It is guaranteed that the value passed to the entry processor
 * will be always the same.

If you need random values, please consider generating them before the
invocation and passing them through additional parameters as follows:
Integer randomValue = rand.nextInt();
cache.invoke(key, entryProcessor, randomValue);
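
A minimal sketch of that pattern (the ignite instance, cache and key names
are assumptions): the random value is produced once, outside the processor,
so a repeated execution on a primary or backup node sees the same argument:

import java.util.concurrent.ThreadLocalRandom;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

IgniteCache<String, Integer> cache = ignite.cache("myCache");
int randomValue = ThreadLocalRandom.current().nextInt();

cache.invoke("someKey", (CacheEntryProcessor<String, Integer, Void>) (entry, args) -> {
    // Deterministic given its arguments, so re-execution is safe.
    entry.setValue((Integer) args[0]);
    return null;
}, randomValue);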

Thanks,
Slava.


Wed, Jul 10, 2024 at 19:59, Raymond Liu :

> I've only tested the real deal with 2.16, but I just ran the sample repo
> test with 2.14, and the output is the same. I can test with even earlier
> versions if you'd like.
>
> On Wed, Jul 10, 2024 at 4:08 AM Stephen Darlington 
> wrote:
>
>> Do you see the same behaviour with older versions of Ignite, or is this
>> unique to 2.16?
>>
>> On Tue, 9 Jul 2024 at 21:34, Raymond Liu  wrote:
>>
>>> Hi all,
>>>
>>> We're encountering an issue where entry processors execute twice.
>>> Executing twice is a problem for us because, for easier optimization, we
>>> would like our entry processors *not* to be idempotent.
>>>
>>> Here is a sample self-contained JUnit test on GitHub which demonstrates
>>> this issue:
>>> https://github.com/Philosobyte/ignite-duplicate-processing-test/blob/main/src/test/java/com/philosobyte/igniteduplicateprocessingtest/DuplicateProcessingTest.java
>>>
>>>
>>> (in case that link doesn't work, my GitHub username is Philosobyte and
>>> the project is called "ignite-duplicate-processing-test")
>>>
>>> When the test is run, it will log two executions instead of just one.
>>>
>>> To rule out the entry processor executing on both a primary and backup
>>> partition, I set the number of backups to 0. I've also set atomicityMode to
>>> ATOMIC.
>>>
>>> Does anyone have any ideas about why this might happen?
>>>
>>> Thank you,
>>> Raymond
>>>
>>


Re: Ignite TryGet - cache data not found intermittently

2024-06-12 Thread Вячеслав Коптилин
Hello Charlin,

As I wrote, the first option is the `full sync` mode:
CacheConfiguration.WriteSynchronizationMode =
CacheWriteSynchronizationMode.FullSync [1]
The second one is disabling reading from backups:
CacheConfiguration.ReadFromBackup = false [2]

[1]
https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Cache.Configuration.CacheConfiguration.html#Apache_Ignite_Core_Cache_Configuration_CacheConfiguration_WriteSynchronizationMode
[2]
https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Cache.Configuration.CacheConfiguration.html#Apache_Ignite_Core_Cache_Configuration_CacheConfiguration_ReadFromBackup

Thanks,
S.


Wed, Jun 12, 2024 at 12:58, Charlin S :

> Hi Slava Koptilin,
>
> Thanks for your email.
>
> The cache configuration used at the time of cache creation in C# code is
> below. Please suggest if any configuration changes are required at the
> cache or grid level:
> CacheConfiguration.CopyOnRead=false
> CacheConfiguration.EagerTtl=true
> CacheConfiguration.CacheMode = CacheMode.Partitioned
> CacheConfiguration.Backups = 1
>
> *Client node xml bean*
>
> [The Spring XML bean definition was mangled in the archive. Recoverable
> details: a beans declaration using the Spring beans/util schemas, an
> Ignite configuration bean, a TcpDiscoverySpi with a TcpDiscoveryVmIpFinder
> listing addresses 1.0.0.1:55500 and 1.0.0.2:55500, and a
> TcpCommunicationSpi.]
>
> *Server node xml bean*
>
> [The Spring XML bean definition was mangled in the archive. Recoverable
> details: a beans declaration, an abstract CacheConfiguration template bean
> with an expiry policy factory created via factoryOf, a TcpDiscoverySpi
> with a TcpDiscoveryVmIpFinder listing addresses 1.0.0.1:55500 and
> 1.0.0.2:55500, CacheKeyConfiguration entries for TestModel1 through
> TestModel5, and a DataStorageConfiguration whose DataRegionConfiguration
> is named "Common_Dynamic_Data_Region" and uses pageEvictionMode
> RANDOM_2_LRU.]
>

Re: Ignite TryGet - cache data not found intermittently

2024-06-12 Thread Вячеслав Коптилин
Hi Charlin,

I mean that it might be "well-known" behavior if you use `primary sync`
mode and the `readFromBackup` property equals `true` (which is `true` by
default).

The first option, to overcome this problem, is using `full sync` mode. In
that case, the update request will wait for the write to complete on all
participating nodes (primary and backups).
The second option that can be used here is to use 'primary sync' and set
the 'CacheConfiguration#readFromBackup' flag to false. Ignite will then
always send the read request to the primary node and get the value from there.
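
For reference, here are both options via the Java cache configuration (a
sketch; the .NET properties from the question map to these one-to-one):

import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<String, Object> ccfg = new CacheConfiguration<>("cacheModel");

// Option 1: wait for writes to complete on the primary and all backups.
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

// Option 2: keep PRIMARY_SYNC but always read from the primary node.
// ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
// ccfg.setReadFromBackup(false);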

Thanks,
S.

Mon, Jun 10, 2024 at 14:22, Вячеслав Коптилин :

> Hello Charlin,
>
> Could you share your cache configuration? Specifically, what values are
> used for `readFromBackup` and `writeSynchronizationMode`.
>
> Thanks,
> S.
>
> Wed, Jun 5, 2024 at 15:49, Charlin S :
>
>> Hi All,
>> I am unable to fetch data from the cache by reading by key,
>> intermittently (very rarely).
>>
>> Ignite version: 2.10
>> Cache mode: Partition
>> Client : C# with Ignite thick client
>>
>> Scenario:
>> My C# application received a request for cache data insertion @ 09:09:35,
>> and the insertion was successfully initiated on the application side.
>> Thereafter, @ 09:10:21, the C# application received a request to read the
>> cached data for the same key, and Ignite TryGet could not fetch the data.
>> Note: We are able to get cache data by the same key after some time.
>>
>> Cache creation code
>> var IgniteCache= IIgnite.GetCache("cacheModel")
>> .WithExpiryPolicy(new ExpiryPolicy(
>>  TimeSpan.FromMinutes(60),
>>  TimeSpan.FromMinutes(60),
>>  TimeSpan.FromMinutes(60)
>>  ));
>>
>> Cache data insertion code
>> IgniteCache.Put(cacheKey, (T)data);
>>
>> Cache data reading code
>>   IgniteCache.TryGet(Key, out var value);
>>
>> Thanks & Regards,
>> Charlin
>>
>>
>>
>>


Re: Ignite TryGet - cache data not found intermittently

2024-06-10 Thread Вячеслав Коптилин
Hello Charlin,

Could you share your cache configuration? Specifically, what values are
used for `readFromBackup` and `writeSynchronizationMode`.

Thanks,
S.

Wed, Jun 5, 2024 at 15:49, Charlin S :

> Hi All,
> I am unable to fetch data from the cache by reading by key,
> intermittently (very rarely).
>
> Ignite version: 2.10
> Cache mode: Partition
> Client : C# with Ignite thick client
>
> Scenario:
> My C# application received a request for cache data insertion @ 09:09:35,
> and the insertion was successfully initiated on the application side.
> Thereafter, @ 09:10:21, the C# application received a request to read the
> cached data for the same key, and Ignite TryGet could not fetch the data.
> Note: We are able to get cache data by the same key after some time.
>
> Cache creation code
> var IgniteCache= IIgnite.GetCache("cacheModel")
> .WithExpiryPolicy(new ExpiryPolicy(
>  TimeSpan.FromMinutes(60),
>  TimeSpan.FromMinutes(60),
>  TimeSpan.FromMinutes(60)
>  ));
>
> Cache data insertion code
> IgniteCache.Put(cacheKey, (T)data);
>
> Cache data reading code
>   IgniteCache.TryGet(Key, out var value);
>
> Thanks & Regards,
> Charlin
>
>
>
>


Re: Does SQL Query support TouchedExpiryPolicy?

2024-01-12 Thread Вячеслав Коптилин
Hello,

Unfortunately, SQL queries do not support TouchedExpiryPolicy. You need to
use the key-value API to benefit from it.
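
For example (a sketch; the cache name and timeout are assumptions), the
touch semantics only apply to key-value operations:

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.Duration;
import javax.cache.expiry.TouchedExpiryPolicy;
import org.apache.ignite.IgniteCache;

IgniteCache<Integer, String> cache = ignite.<Integer, String>cache("myCache")
        .withExpiryPolicy(new TouchedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)));

cache.put(1, "value");   // creation touches the entry
String v = cache.get(1); // key-value access resets the 5-minute expiry;
                         // a SQL SELECT would not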

Thanks,
S.

Fri, Dec 15, 2023 at 04:56, 38797715 <38797...@qq.com>:

> From the javadoc: "An ExpiryPolicy that defines the expiry Duration of a
> Cache Entry based on when it was last touched. A touch includes creation,
> update or access."
>
> So, do SQL queries support this expiration policy?
>


Re: Will ScanQuery scan intermediate data for pessimistic transactions?

2024-01-12 Thread Вячеслав Коптилин
Hello,

Yes, scan queries do not honor transaction guarantees.
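
For a transactionally consistent read, one commonly used alternative (a
sketch, not from this thread; the ignite instance, cache, key set and raw
types are assumptions) is the key-value API inside a pessimistic
transaction, which acquires key locks, unlike a ScanQuery:

import java.util.Map;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.PESSIMISTIC,
        TransactionIsolation.REPEATABLE_READ)) {
    Map snapshot = cache.getAll(keysOfInterest); // locked, consistent view
    tx.commit();
}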

Thanks,
S.

Wed, Dec 27, 2023 at 14:43, 38797715 <38797...@qq.com>:

> Hi,
>
> The demo is as follows (and attached). It seems that ScanQuery has read
> the intermediate data of the transaction.
>
> Execute the following code:
> @Slf4j
> public class QueryTest {
>     public static void main(String[] args) {
>         Ignite ignite = Ignition.start();
>         testRange(ignite);
>     }
>
>     public static void testRange(Ignite ignite) {
>         String cacheName = "positionCache";
>         CacheConfiguration cfg = new CacheConfiguration<>(cacheName);
>         cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>         final IgniteCache cache = ignite.createCache(cfg);
>         CountDownLatch writeCountDownLatch = new CountDownLatch(1);
>         CountDownLatch readCountDownLatch = new CountDownLatch(1);
>         boolean writeThreadStopFlag = false;
>         boolean readThreadStopFlag = false;
>         final Long combiId = 123L;
>
>         Task writeTask = new Task(ignite, writeCountDownLatch) {
>             @Override
>             protected void invoke() {
>                 int count = 1;
>                 while (!writeThreadStopFlag && count <= 100) {
>                     try (Transaction transaction = ignite.transactions().txStart(
>                             TransactionConcurrency.PESSIMISTIC,
>                             TransactionIsolation.REPEATABLE_READ)) {
>                         for (int i = 0; i < 1000; i++) {
>                             Position position = new Position(combiId, 1 + i, "01",
>                                     BigDecimal.valueOf(1000 + count));
>                             cache.put(position.getKey(), position);
>                         }
>                         transaction.commit();
>                         log.info("{} write success", count);
>                     } catch (Exception e) {
>                         log.error("write error", e);
>                     }
>                     count++;
>                 }
>                 countDownLatch.countDown();
>             }
>         };
>         Task.start(writeTask);
>
>         ScanQuery scanQuery = new ScanQuery<>((k, v) -> combiId.equals(v.getCombiId()));
>
>         Task readTask = new Task(ignite, readCountDownLatch) {
>             @Override
>             protected void invoke() {
>                 int count = 1;
>                 int tryTimes = 300;
>                 int unmatchCount = 0;
>                 while (!readThreadStopFlag && count <= tryTimes) {
>                     List positionDetails = new ArrayList<>();
>                     IgniteCache cache = ignite.cache(cacheName);
>                     try (QueryCursor cursor = cache.query(scanQuery)) {
>                         List list = cursor.getAll();
>                         list.forEach(x -> positionDetails.add(x.getValue()));
>                     }
>                     if (positionDetails.size() > 0) {
>                         BigDecimal currentQuantity = positionDetails.get(0).getQuantity();
>                         List unmatchDetails = positionDetails.stream()
>                                 .filter(x -> !currentQuantity.equals(x.getQuantity()))
>                                 .collect(Collectors.toList());
>                         if (unmatchDetails.isEmpty()) {
>                             log.info("{} query success", count);
>                         } else {
>                             log.error("{} query, validate error, Quantity [{}, {}] inconsistent, inconsistent size: {}",
>                                     count, currentQuantity,
>                                     unmatchDetails.stream().map(x -> x.getQuantity()).distinct()
>                                             .collect(Collectors.toList()),
>                                     unmatchDetails.size());
>                             unmatchCount++;
>                         }
>                     }
>                     try {
>                         if (writeCountDownLatch.await(50, TimeUnit.MILLISECONDS)) {
>                             break;
>                         }
>                     } catch (InterruptedException e) {
>                         log.error("query error", e);
>                     }
>                     count++;
>                 }
>                 countDownLatch.countDown();
>             }
>         };
>         Task.start(readTask);
>     }
> }
>
>
>


Re: incompatible class error

2023-05-24 Thread Вячеслав Коптилин
Hello,

In general, you need to provide the right version of JCache API on all
nodes.
Please take a look at https://issues.apache.org/jira/browse/IGNITE-11986
(The issue mentioned in the ticket looks very similar to your case)
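
If it is unclear which JCache jar wins on the classpath, a quick check (a
sketch; it just prints the jar that actually provides the class) is:

import javax.cache.configuration.MutableConfiguration;

public class JCacheJarCheck {
    public static void main(String[] args) {
        // Prints the location of the jar providing the JCache API class, which
        // shows whether geronimo-jcache or the standard cache-api is loaded.
        System.out.println(MutableConfiguration.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}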

Thanks,
S.


Mon, May 15, 2023 at 21:00, Jiang Jacky :

> Hello, can someone help me about this?
>
> Thank you.
>
>
>
> *From: *Jiang Jacky 
> *Date: *Saturday, May 13, 2023 at 1:58 AM
> *To: *user@ignite.apache.org 
> *Subject: *incompatible class error
>
> Hello,
>
> I am running GridGain Ignite with Spark.
>
> I am using Spark 2.4.5 and Hadoop 3.1.1.
>
> My GridGain cluster is version 2.7.0.
>
> However, I am stuck at the below error:
>
>
>
> Caused by: java.io.InvalidClassException:
> javax.cache.configuration.MutableConfiguration; local class incompatible:
> stream classdesc serialVersionUID = 201405, local class serialVersionUID =
> 201306200821
>
>
>
>
>
> After investigation, I found it is caused by the jar from the YARN lib:
> *geronimo-jcache_1.0_spec-1.0-alpha-1.jar*
>
> I cannot delete the jar, because it is part of the YARN cluster.
>
>
>
> So, anyone has the experience about how to fix the issue, please let me
> know.
>
>
>
> Thank you.
>
>
>
> Jacky
>


Re: JmxExporter error while starting ignite node

2023-03-15 Thread Вячеслав Коптилин
Hello Abhishek,

You don't need to register JmxSystemViewExporterSpi explicitly via
configuration. This exporter is implicitly enabled/registered on Ignite
node startup. That is the reason you are getting: IgniteCheckedException:
Duplicate SPI name (need to explicitly configure 'setName()' property):
JmxSystemViewExporterSpi
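
A minimal sketch of the resulting configuration (only the metric exporter
is set explicitly; the system view exporter is left to the implicit default):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.metric.jmx.JmxMetricExporterSpi;

// JmxSystemViewExporterSpi is added implicitly on node startup, so adding
// it again via configuration causes the "Duplicate SPI name" error.
JmxMetricExporterSpi metricExporter = new JmxMetricExporterSpi();
metricExporter.setExportFilter(registry -> false);

IgniteConfiguration cfg = new IgniteConfiguration()
        .setMetricExporterSpi(metricExporter);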

> SEVERE: JMX scrape failed: java.lang.IllegalArgumentException: Not an
> Attribute: javax.management.openmbean.TabularDataSupport

Could you please provide a full stack trace?

Thanks,
S.

Wed, Mar 15, 2023 at 08:58, Abhishek Ubhe :

> Hello,
>
> Getting the below error while starting an Ignite node. Please check and
> suggest any changes required.
>
> *After upgrading JMX to 18, I'm getting the following error while starting
> the ignite node:*
>
> SEVERE: JMX scrape failed: java.lang.IllegalArgumentException: Not an 
> Attribute: javax.management.openmbean.TabularDataSupport
>
>
> *Then, referring to the internet, I made the changes below:*
>
> SystemViewExporterSpi systemViewExporter = new JmxSystemViewExporterSpi();
> systemViewExporter.setExportFilter(i -> false);
>
> MetricExporterSpi metricExporter = new JmxMetricExporterSpi();
> metricExporter.setExportFilter(i -> false);
>
> new IgniteConfiguration()
>     .setSystemViewExporterSpi(systemViewExporter)
>     .setMetricExporterSpi(metricExporter);
>
>
>
> *Now I am getting below error :*
>
> 06:24:27.570 ERROR org.apache.ignite.internal.IgniteKernal - Failed to start 
> manager: GridManagerAdapter [enabled=true, 
> name=o.a.i.i.managers.systemview.GridSystemViewManager]
> org.apache.ignite.IgniteCheckedException: Duplicate SPI name (need to 
> explicitly configure 'setName()' property): JmxSystemViewExporterSpi
>   at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:268)
>  ~[ignite-core-2.14.0.jar:2.14.0]
>   at 
> org.apache.ignite.internal.managers.systemview.GridSystemViewManager.start(GridSystemViewManager.java:71)
>  ~[ignite-core-2.14.0.jar:2.14.0]
>   at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1766) 
> ~[ignite-core-2.14.0.jar:2.14.0]
>   at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:998) 
> ~[ignite-core-2.14.0.jar:2.14.0]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1757)
>  ~[ignite-core-2.14.0.jar:2.14.0]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1679)
>  ~[ignite-core-2.14.0.jar:2.14.0]
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1121) 
> ~[ignite-core-2.14.0.jar:2.14.0]
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:657) 
> ~[ignite-core-2.14.0.jar:2.14.0]
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:579) 
> ~[ignite-core-2.14.0.jar:2.14.0]
>   at org.apache.ignite.Ignition.start(Ignition.java:328) 
> ~[ignite-core-2.14.0.jar:2.14.0]
>   at 
> com.clouds.ignite.server.startup.IgniteClusterNode.ignite(IgniteClusterNode.java:51)
>  ~[clouds-ignite-core-2.4.94_jmx.jar:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_171]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_171]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_171]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_171]
>   at 
> org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
>  ~[spring-beans-5.2.2.RELEASE.jar:5.2.2.RELEASE]
>
>
> --
> *Regards,*
> *Abhishek Ubhe*
>
>


Re: REST API TIMESTAMP

2022-10-04 Thread Вячеслав Коптилин
Hello,

I'll just duplicate the answer from
https://stackoverflow.com/questions/73703680/apache-ignite-rest-api-timestamp-format
(Thanks to Igor Belyakov!)

The issue is related to the default locale provider change in Java 9+; see
JEP 252 <https://openjdk.org/jeps/252>.

As a workaround, you can set the following option to enable behavior
compatible with Java 8: -Djava.locale.providers=COMPAT

Thanks,
S.


Thu, Sep 1, 2022 at 10:28, Dren Butković :

>
> Example of timestamp data; the difference is the "," after the year.
>
> Ignite 2.7.6, Java 8  -> "Sep 18, 2019 12:57:35 PM"
> Ignite 2.13, Java 11  -> "Aug 31, 2022, 12:43:44 PM"
>
>
> On Thu, Sep 1, 2022 at 9:03 AM Dren Butković 
> wrote:
>
>> Hi,
>>
>> I have upgraded Ignite 2.7.6 on Java 8 to Ignite 2.13 on Java 11.
>> In the REST API response the timestamp format has changed.
>> Locale and all other ENV variables on the host are equal.
>>
>> Is there a possibility to define the format of the timestamp output in
>> the Ignite configuration?
>>
>> Best regards
>>
>> Dren Butković
>>
>


Re: partitionLossPolicy confused

2022-09-30 Thread Вячеслав Коптилин
Hello,

In general, there are two possible ways to handle lost partitions for a
cluster that uses Ignite Native Persistence:

1. - Return all failed nodes to the baseline topology.
   - Call resetLostPartitions.

2. - Stop all remaining nodes in the cluster.
   - Start all nodes in the cluster (including the previously failed nodes)
     and activate the cluster.

It's important to return all failed nodes to the topology before calling
resetLostPartitions, otherwise the cluster could end up having stale data.

If some owners cannot be returned to the topology for some reason, they
should be excluded from the baseline before attempting to reset the lost
partition state, or a ClusterTopologyCheckedException will be thrown
with a message "Cannot reset lost partitions because no baseline nodes are
online [cache=someCache, partition=someLostPart]", indicating that safe
recovery is not possible.

In your particular case, the cache does not have backups and returning a
node that holds a lost partition should not lead to data inconsistencies.
This particular case can be detected and automatically "resolved". I will
file a jira ticket in order to address this improvement.
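
For reference, once all owners are back in the baseline, resetting the
lost-partition state looks like this (a sketch; the ignite instance is
assumed, and the cache name is taken from the example below):

import java.util.Collections;

ignite.resetLostPartitions(Collections.singleton("City"));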

Thanks,
Slava.

Mon, Sep 26, 2022 at 16:51, 38797715 <38797...@qq.com>:

> hello,
>
> Start two nodes with native persistent enabled, and then activate it.
>
> create a table with no backups, sql like follows:
>
> CREATE TABLE City (
>   ID INT,
>   Name VARCHAR,
>   CountryCode CHAR(3),
>   District VARCHAR,
>   Population INT,
>   PRIMARY KEY (ID, CountryCode)
> ) WITH "template=partitioned, affinityKey=CountryCode, CACHE_NAME=City,
> KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";
>
> INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES
> (1,'Kabul','AFG','Kabol',178);
> INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES
> (2,'Qandahar','AFG','Qandahar',237500);
>
> then execute SELECT COUNT(*) FROM city;
>
> normal.
>
> then kill one node.
>
> then execute SELECT COUNT(*) FROM city;
>
> Failed to execute query because cache partition has been lostPart
> [cacheName=City, part=0]
>
> this alse normal.
>
> Next, start the node that was shut down before.
>
> then execute SELECT COUNT(*) FROM city;
>
> Failed to execute query because cache partition has been lostPart
> [cacheName=City, part=0]
>
> At this time, all partitions have been recovered, and all baseline nodes
> are ONLINE. Why is this error still reported? It is very confusing.
> Executing the reset_lost_partitions operation at this time seems
> redundant. Are there any special considerations here?
>
> If instead we restart the whole cluster at this point and then execute
> SELECT COUNT(*) FROM city; it is normal. This state is the same as the
> previous state, but the behavior is different.
>
>
>
>
>
>
>


Re: Node crashed with error "Getting affinity for too old topology version that is already out of history"

2022-04-08 Thread Вячеслав Коптилин
Hello Marcus,

It looks like a bug. I will check and file a jira issue.

> I suppose Ignite should be fault tolerant and outage of some nodes should
not cause shutdown of other nodes.
You are absolutely right.

Thanks,
S.


Wed, Apr 6, 2022 at 10:49, Lo, Marcus :

> Hi Ignite team,
>
>
>
> Can you please advise if there are anything that we can check on the
> below? Thanks.
>
>
>
> Regards,
>
> Marcus
>
>
>
> *From:* Lo, Marcus [ICG-IT]
> *Sent:* Wednesday, March 30, 2022 11:55 AM
> *To:* user
> *Subject:* Node crashed with error "Getting affinity for too old topology
> version that is already out of history"
>
>
>
> Hi,
>
>
>
> We are using Ignite 2.10.0 and have 5 nodes (with consistentId/hostname -
> lrdeqprmap01p, lrdeqprmap02p, lrdeqprmap03p, lcgeqprmap03c, lcgeqprmap04c)
> in the cluster, and at one time 2 of the nodes (lcgeqprmap03c,
> lcgeqprmap04c) were out due to power outage. Somehow another node
> lrdeqprmap03p shut down shortly after that, with the following error:
>
>
>
> 2022-03-29 14:32:01.996+0100 ERROR
> [query-#194160%Prism%]  : Critical
> system error detected. Will be handled accordingly to configured handler
> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
> o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is
> corrupted [pages(groupId, pageId)=[], cacheId=388652627,
> cacheName=LIMIT_DASHBOARD_SNAPSHOT, indexName=_key_PK, msg=Runtime failure
> on bounds: [lower=null, upper=null
>
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
> B+Tree is corrupted [pages(groupId, pageId)=[], cacheId=388652627,
> cacheName=LIMIT_DASHBOARD_SNAPSHOT, indexName=_key_PK, msg=Runtime failure
> on bounds: [lower=null, upper=null]]
>
> at
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.corruptedTreeException(H2Tree.java:977)
> ~[ignite-indexing-2.10.0.jar:2.10.0]
>
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:1133)
> ~[ignite-core-2.10.0.jar:2.10.0]
>
> at
> org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.find(H2TreeIndex.java:415)
> ~[ignite-indexing-2.10.0.jar:2.10.0]
>
> …
>
> Caused by:
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTreeRuntimeException:
> java.lang.IllegalStateException: Getting affinity for too old topology
> version that is already out of history [locNode=TcpDiscoveryNode
> [id=e21d561d-314a-4240-a379-23f139870717, consistentId=lrdeqprmap03p,
> addrs=ArrayList [127.0.0.1, 169.182.110.133], sockAddrs=HashSet [/
> 127.0.0.1:47500, lrdeqprmap03p.eur.nsroot.net/169.182.110.133:47500],
> discPort=47500, order=7, intOrder=7, lastExchangeTime=1648560721652,
> loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
> grp=LimitDashboardSnapshotCache, topVer=AffinityTopologyVersion [topVer=17,
> minorTopVer=28], lastAffChangeTopVer=AffinityTopologyVersion [topVer=17,
> minorTopVer=28], head=AffinityTopologyVersion [topVer=19, minorTopVer=0],
> history=[AffinityTopologyVersion [topVer=18, minorTopVer=0],
> AffinityTopologyVersion [topVer=19, minorTopVer=0]]]
>
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findLowerUnbounded(BPlusTree.java:1079)
> ~[ignite-core-2.10.0.jar:2.10.0]
>
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:1118)
> ~[ignite-core-2.10.0.jar:2.10.0]
>
> ... 23 more
>
> Caused by: java.lang.IllegalStateException: Getting affinity for too old
> topology version that is already out of history [locNode=TcpDiscoveryNode
> [id=e21d561d-314a-4240-a379-23f139870717, consistentId=lrdeqprmap03p,
> addrs=ArrayList [127.0.0.1, 169.182.110.133], sockAddrs=HashSet [/
> 127.0.0.1:47500, lrdeqprmap03p.eur.nsroot.net/169.182.110.133:47500],
> discPort=47500, order=7, intOrder=7, lastExchangeTime=1648560721652,
> loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
> grp=LimitDashboardSnapshotCache, topVer=AffinityTopologyVersion [topVer=17,
> minorTopVer=28], lastAffChangeTopVer=AffinityTopologyVersion [topVer=17,
> minorTopVer=28], head=AffinityTopologyVersion [topVer=19, minorTopVer=0],
> history=[AffinityTopologyVersion [topVer=18, minorTopVer=0],
> AffinityTopologyVersion [topVer=19, minorTopVer=0]]]
>
> at
> org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:831)
> ~[ignite-core-2.10.0.jar:2.10.0]
>
> at
> org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:778)
> ~[ignite-core-2.10.0.jar:2.10.0]
>
>

Re: Blocked system-critical due to a listener?

2022-04-06 Thread Вячеслав Коптилин
Hello Joan,

> What I've done is launch an async worker from the listener to do that
operation.
Yes, that is the right approach. Event listeners can be called from Ignite
internal threads (which can be critical ones).
I think it was done this way for performance reasons: at least, it avoids
the cost of transferring every event from a critical thread to another
thread/executor. Unfortunately, the user has to take care of this "feature"
themselves.
So, it is recommended to avoid any cache operations from the event listener
(you need to use your own separate thread for this).
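
A minimal sketch of that recommendation (the cache name and key choice are
assumptions): the listener only hands the event off, and the cache
operation runs on a separate thread:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

ExecutorService worker = Executors.newSingleThreadExecutor();

ignite.events().localListen((IgnitePredicate<DiscoveryEvent>) evt -> {
    // Do NOT touch caches here: this may run on a system-critical thread.
    worker.submit(() -> ignite.cache("myCache").remove(evt.eventNode().id()));
    return true; // keep the listener registered
}, EventType.EVT_NODE_LEFT);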

Thanks,
Slava.

Sun, Apr 3, 2022 at 10:21, Joan Pujol :

> Hi Ilya,
>
> Yes, that was exactly the case.
> What I've done is launch an async worker from the listener to do that
> operation.
>
> Cheers,
>
> On Sat, 2 Apr 2022 at 22:30, Ilya Shishkov  wrote:
>
>> I didn't read your post carefully, sorry.
>> > The problem seems to be a listener that we register with
>> > ignite.events(forLocal).remoteListen() that deletes/puts some cache
>> > entries during NODE_LEFT events.
>> entries during NODE_LEFT events.
>>
>> Are you listening to NODE_LEFT events and then trigger some cache actions
>> (put / get)?
>>
>>
>>
>> Sat, Apr 2, 2022 at 22:34, Ilya Shishkov :
>>
>>> Hi,
>>>
>>> Could you, please, attach logs from failed nodes?
>>>
>>> Wed, Mar 23, 2022 at 14:07, Joan Pujol :
>>>
 Hi,

 We had a cluster problem while two of the four server nodes left the
 cluster and started again in a cluster running Ignite 2.10
 The problem seems to be that Partition Exchange didn't finish

 In particular, the problem seems to be in one of the remaining nodes
 that has a Blocked system-critical error on disco-event-worker that is
 locked for more than 19s.
 I checked GC logs and there isn't any relevant pause when the block
 occurs

 Some relevant logs:

 remaining node:
 >2022-03-22 13:33:42.439 WARN  [services-deployment-worker-#95]
 o.a.i.i.p.s.ServiceDeploymentManager - Failed to wait service
 deployment process or timeout had been reached, timeout=1,
 taskDepId=ServiceDeploymentProcessId [topVer=AffinityTopologyVersion
 [topVer=9, minorTopVer=0], reqId=null]
 >2022-03-22 13:33:42.687 WARN  [exchange-worker-#54]
 o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture - Unable to await
 partitions release latch within timeout. Some nodes have not sent
 acknowledgement for latch completion. It's possible due to unfinishined
 atomic updates, transactions or not released explicit locks on that nodes.
 Please check logs for errors on nodes with ids reported in latch
 `pendingAcks` collection [latch=ServerLatch [permits=1,
 pendingAcks=HashSet [6d63e1bd-9de4-4341-81bb-c3a293ebe2eb],
 super=CompletableLatch [id=CompletableLatchUid [id=exchange,
 topVer=AffinityTopologyVersion [topVer=9, minorTopVer=0]
 >2022-03-22 13:33:51.681 ERROR [tcp-disco-msg-worker-[5bc334df 0:0:0:0:
 0:0:0:1%lo:47500]-#2-#45] o.a.i.i.u.typedef.G - Blocked
 system-critical thread has been detected. This can lead to cluster-wide
 undefined behaviour [workerName=disco-event-worker,
 threadName=disco-event-worker-#50, blockedFor=19s]
 >2022-03-22 13:33:51.701 WARN  [tcp-disco-msg-worker-[5bc334df 0:0:0:0:
 0:0:0:1%lo:47500]-#2-#45] o.a.ignite.Ignite - Possible failure
 suppressed accordingly to a configured handler [hnd=AbstractFailureHandler
 [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED,
 SYSTEM_CRITICAL_OPERATION_TIMEOUT]], failureCtx=FailureContext
 [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException:
 GridWorker [name=disco-event-worker, igniteInstanceName=null, finished=
 false, heartbeatTs=1647956012230]]]
 org.apache.ignite.IgniteException: GridWorker
 [name=disco-event-worker, igniteInstanceName=null, finished=false,
 heartbeatTs=1647956012230]
   at java.base@11.0.9/jdk.internal.misc.Unsafe.park(Native Method)
   at java.base@11.0.9
 /java.util.concurrent.locks.LockSupport.park(LockSupport.java:323)
   at
 org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
   at
 org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
   at
 org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.remove0(GridDhtAtomicCache.java:746)
 >2022-03-22 13:33:51.701 WARN  [tcp-disco-msg-worker-[5bc334df 0:0:0:0:
 0:0:0:1%lo:47500]-#2-#45] o.a.i.i.p.f.FailureProcessor - No deadlocked
 threads detected.
 ...
 A thread dump where the blocked thread thread dump is:
 Thread [name="disco-event-worker-#50", id=306, state=WAITING, blockCnt=
 1, waitCnt=543567]
 at java.base@11.0.9/jdk.internal.misc.Unsafe.park(Native
 Method)
 at java.base@11.0.9
 /java.util.concurrent.locks.LockSupport.park(LockSupport.java:323)
 at
 o.a.i.i.util.future.GridFutureAdap

Re: getOrCreateCache hangs when cacheStore uses spring resources

2021-06-01 Thread Вячеслав Коптилин
Hello,

Could you please share log files and a full thread dump from the node?

Thanks,
S.

Thu, May 27, 2021 at 14:20, Orange :

> Calling ignite.getOrCreateCache(cacheConfig) results in the thread getting
> blocked. I've noticed this does not happen when the cacheStore does not use
> Spring resources. Ignite is being started with IgniteSpring.start(config,
> ctx).
>
> There are no other server errors.
>
> I've provided the code and the error below.
>
> [2021-05-27 12:11:15.381] - 5368 SEVERE
> [tcp-disco-msg-worker-[crd]-#2%SERVER-NAME%-#42%SERVER-NAME%] ---
> org.apache.ignite.internal.util.typedef.G: Blocked system-critical thread
> has been detected. This can lead to cluster-wide undefined behaviour
> [workerName=partition-exchanger,
> threadName=exchange-worker-#46%SERVER-NAME%, blockedFor=1288s]
> [2021-05-27 12:11:15.381] - 5368 WARNING
> [tcp-disco-msg-worker-[crd]-#2%SERVER-NAME%-#42%SERVER-NAME%] ---
> org.apache.ignite.internal.util.typedef.G: Thread
> [name="exchange-worker-#46%SERVER-NAME%", id=87, state=BLOCKED, blockCnt=1,
> waitCnt=4]
> Lock [object=java.util.concurrent.ConcurrentHashMap@307677e2,
> ownerName=main, ownerId=1]
>
> [2021-05-27 12:11:15.382] - 5368 WARNING
> [tcp-disco-msg-worker-[crd]-#2%SERVER-NAME%-#42%SERVER-NAME%] --- :
> Possible
> failure suppressed accordingly to a configured handler
> [hnd=NoOpFailureHandler [super=AbstractFailureHandler
> [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED,
> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
> [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker
> [name=partition-exchanger, igniteInstanceName=SERVER-NAME, finished=false,
> heartbeatTs=1622112586746]]]
> class org.apache.ignite.IgniteException: GridWorker
> [name=partition-exchanger, igniteInstanceName=SERVER-NAME, finished=false,
> heartbeatTs=1622112586746]
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$3.apply(IgnitionEx.java:1806)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$3.apply(IgnitionEx.java:1801)
> at
>
> org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:234)
> at
>
> org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.lambda$new$1(ServerImpl.java:2970)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:8057)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:3086)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7995)
> at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:58)
>
> [2021-05-27 12:11:15.382] - 5368 WARNING
> [tcp-disco-msg-worker-[crd]-#2%SERVER-NAME%-#42%SERVER-NAME%] ---
> org.apache.ignite.internal.processors.cache.CacheDiagnosticManager: Page
> locks dump:
>
> Thread=[name=exchange-worker-#46%SERVER-NAME%, id=87], state=BLOCKED
> Locked pages = []
> Locked pages log: name=exchange-worker-#46%SERVER-NAME%
> time=(1622113875382,
> 2021-05-27 12:11:15.382)
>
>
> @Bean
> public CacheConfiguration cacheConfig() {
>     CacheConfiguration cacheCfg = new CacheConfiguration("cache-name");
>     cacheCfg.setCacheMode(CacheMode.REPLICATED);
>     cacheCfg.setReadThrough(true);
>     cacheCfg.setCacheStoreFactory(
>         new FactoryBuilder.SingletonFactory<>(new TestCacheStore()));
>     return cacheCfg;
> }
>
>
> public class TestCacheStore extends CacheStoreAdapter
>         implements Serializable {
>
>     private static final Logger log = getLogger(TestCacheStore.class);
>
>     @SpringResource(resourceName = "serverConfig")
>     private transient ServerConfig serverConfig;
>
>     public TestCacheStore() {
>         log.info("test");
>     }
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: data rebalancing and partition map exchange with persistence

2021-02-04 Thread Вячеслав Коптилин
Hi Alan,

I am sorry for the typo in your name, it was done unintentionally.

Thanks,
S.

Thu, Feb 4, 2021 at 19:10, Вячеслав Коптилин :

> Hello Allan,
>
> > Does data rebalancing occur when a node leaves or joins, or only when
> you manually change the baseline topology (assuming automatic baseline
> adjustment is disabled)? Again, this is on a cluster with persistence
> enabled.
> Yes, this can happen when a node joins the cluster, for instance.
> Let's consider the following scenario: you shut down a node that is a part
> of the current baseline topology, and so, this node cannot apply updates.
> After a while, this node is restarted and returns to the cluster. In
> this case, rebalancing can be triggered in order to transfer those updates.
>
> > 2. Sometimes I look at the partition counts of a cache across all the
> nodes using
> Arrays.stream(ignite.affinity(cacheName).primaryPartitions(severNode) and I
> see 0 partition
> > After a while it returns to a balanced state. What's going on here?
> Well, when a partition needs to be rebalanced from one node (the supplier)
> to another one (the demander), we create a partition on the demander in a
> MOVING state (this means a backup that applies updates but cannot be used
> for reads).
> When this partition is fully rebalanced, it is switched to the OWNING
> state, and the next PME (Late Affinity Assignment) may mark this partition
> as a primary.
>
> > 3. Is there a way to manually invoke the partition map exchange process?
> I don't think so.
>
> > 4. Sometimes I see 'partition lost' errors. If i am using persistence
> and all the baseline nodes are online and connected, is it safe to assume
> no data has been lost and just call cache.resetLostPartitions(myCaches)?
> If I am not mistaken, the answer is yes. Please take a look at
> https://ignite.apache.org/docs/latest/configuring-caches/partition-loss-policy#recovering-from-a-partition-loss
>
> Thanks,
> S.
>
> Fri, Jan 29, 2021 at 15:56, Alan Ward :
>
>> I'm using Ignite 2.9.1, a 5 node cluster with persistence enabled,
>> partitioned caches with 1 backup.
>>
>> I'm a bit confused about the difference between data rebalancing and
>> partition map exchange in this context.
>>
>> 1. Does data rebalancing occur when a node leaves or joins, or only when
>> you manually change the baseline topology (assuming automatic baseline
>> adjustment is disabled)? Again, this is on a cluster with persistence
>> enabled.
>>
>> 2. Sometimes I look at the partition counts of a cache across all the
>> nodes using
>> Arrays.stream(ignite.affinity(cacheName).primaryPartitions(severNode) and I
>> see 0 partitions on one or even two nodes for some of the caches. After a
>> while it returns to a balanced state. What's going on here? Is this data
>> rebalancing at work, or is this the result of the partition map exchange
>> process determining that one node is/was down and thus switching to use the
>> backup partitions?
>>
>> 3. Is there a way to manually invoke the partition map exchange process?
>> I figured it would happen on cluster restart, but even after restarting the
>> cluster and seeing all baseline nodes connect I still observe the partition
>> imbalance. It often takes hours for this to resolve.
>>
>> 4. Sometimes I see 'partition lost' errors. If i am using persistence and
>> all the baseline nodes are online and connected, is it safe to assume no
>> data has been lost and just call cache.resetLostPartitions(myCaches)? Is
>> there a way calling that method would lead to data loss with persistence
>> enabled?
>>
>> thanks for your help!
>>
>


Re: data rebalancing and partition map exchange with persistence

2021-02-04 Thread Вячеслав Коптилин
Hello Allan,

> Does data rebalancing occur when a node leaves or joins, or only when you
manually change the baseline topology (assuming automatic baseline
adjustment is disabled)? Again, this is on a cluster with persistence
enabled.
Yes, this can happen when a node joins the cluster, for instance.
Let's consider the following scenario: you shut down a node that is a part
of the current baseline topology, and so, this node cannot apply updates.
After a while, this node is restarted and returns to the cluster. In this
case, rebalancing can be triggered in order to transfer those updates.

> 2. Sometimes I look at the partition counts of a cache across all the
nodes using
Arrays.stream(ignite.affinity(cacheName).primaryPartitions(severNode) and I
see 0 partition
> After a while it returns to a balanced state. What's going on here?
Well, when a partition needs to be rebalanced from one node (the supplier) to
another one (the demander), we create a partition on the demander in a MOVING
state (this means a backup that applies updates but cannot be used for
reads).
When this partition is fully rebalanced, it is switched to the OWNING state,
and the next PME (Late Affinity Assignment) may mark this partition as a primary.
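
For reference, the per-node check mentioned in the question below can be
written like this (a sketch; the ignite instance and cache name are
assumptions):

import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

Affinity<Object> aff = ignite.affinity("myCache");

for (ClusterNode node : ignite.cluster().forServers().nodes())
    System.out.println(node.consistentId() + " -> "
            + aff.primaryPartitions(node).length + " primary partitions");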

> 3. Is there a way to manually invoke the partition map exchange process?
I don't think so.

> 4. Sometimes I see 'partition lost' errors. If i am using persistence and
all the baseline nodes are online and connected, is it safe to assume no
data has been lost and just call cache.resetLostPartitions(myCaches)?
If I am not mistaken, the answer is yes. Please take a look at
https://ignite.apache.org/docs/latest/configuring-caches/partition-loss-policy#recovering-from-a-partition-loss

Thanks,
S.

Fri, Jan 29, 2021 at 15:56, Alan Ward :

> I'm using Ignite 2.9.1, a 5 node cluster with persistence enabled,
> partitioned caches with 1 backup.
>
> I'm a bit confused about the difference between data rebalancing and
> partition map exchange in this context.
>
> 1. Does data rebalancing occur when a node leaves or joins, or only when
> you manually change the baseline topology (assuming automatic baseline
> adjustment is disabled)? Again, this is on a cluster with persistence
> enabled.
>
> 2. Sometimes I look at the partition counts of a cache across all the
> nodes using
> Arrays.stream(ignite.affinity(cacheName).primaryPartitions(severNode) and I
> see 0 partitions on one or even two nodes for some of the caches. After a
> while it returns to a balanced state. What's going on here? Is this data
> rebalancing at work, or is this the result of the partition map exchange
> process determining that one node is/was down and thus switching to use the
> backup partitions?
>
> 3. Is there a way to manually invoke the partition map exchange process? I
> figured it would happen on cluster restart, but even after restarting the
> cluster and seeing all baseline nodes connect I still observe the partition
> imbalance. It often takes hours for this to resolve.
>
> 4. Sometimes I see 'partition lost' errors. If i am using persistence and
> all the baseline nodes are online and connected, is it safe to assume no
> data has been lost and just call cache.resetLostPartitions(myCaches)? Is
> there a way calling that method would lead to data loss with persistence
> enabled?
>
> thanks for your help!
>


Re: PME and affinity guarantees

2021-02-01 Thread Вячеслав Коптилин
Hello Ivan,

Well, EVT_CACHE_REBALANCE_PART_LOADED is triggered when the corresponding
partition is fully rebalanced and moved into the "OWNING" state,
and so, invoking Affinity#isPrimaryOrBackup(ignite.cluster().localNode(),
key) on this node should return "true".
To be more precise, this partition is treated as a "backup" until a new PME
(which is triggered by a Late Affinity Assignment) that can switch the
partition to "primary".

Thanks,
S.

Fri, Jan 29, 2021 at 00:49, :

> Dear Igniters,
>
> Could someone please clarify if it is guaranteed that the Affinity
> (GridCacheAffinityImpl) will have the most up-to-date information about
> partitions distribution and the following scenario is impossible:
>
>
>   1.  We have a registered listener of EVT_CACHE_REBALANCE_PART_LOADED and
> EVT_CACHE_REBALANCE_PART_UNLOADED events
>   2.  We have a ContinuousQuery that keeps track of all the new cache
> entries (cluster-wide)
>   3.  The events listener gets the Partition Loaded Event
>   4.  The cache listener receives a new cache entry that belongs to the
> loaded partition and invokes
> Affinity#isPrimaryOrBackup(ignite.cluster().localNode(), key) and gets the
> FALSE response
>
> In other words, is it guaranteed that the information about partitions
> distribution will be adjusted strictly before the first Partition Loaded
> Event will be distributed to listeners?
>
> Best regards,
> Ivan
>


Re: Apache Ignite Clientside NearCache and Serverside Eviction blocking cluster

2020-12-15 Thread Вячеслав Коптилин
Hello Prasad,

Sorry for the late reply.

I checked your reproducer and it looks good to me.
And yes, this behavior looks like a bug. I filed a Jira ticket in order to
address this issue: https://issues.apache.org/jira/browse/IGNITE-13858
You can enable Ignite native persistence in order to overcome this issue.
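
A minimal sketch of that workaround (enabling persistence on the default
data region; note that a persistent cluster must be activated explicitly,
and the ClusterState API assumes a recent Ignite version):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

IgniteConfiguration cfg = new IgniteConfiguration()
        .setDataStorageConfiguration(storageCfg);

Ignite ignite = Ignition.start(cfg);
ignite.cluster().state(ClusterState.ACTIVE);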

Thanks,
Slava.

Fri, Dec 4, 2020 at 08:58, pvprsd :

> Hi Slava,
>
> Did you get a chance to look into the issue? Please let us know if there
> are configuration problems. We need to have both features in our current
> implementation for better performance.
>
> Thanks,
> Prasad
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite Clientside NearCache and Serverside Eviction blocking cluster

2020-11-25 Thread Вячеслав Коптилин
Hello Prasad,

I will take a look and file a ticket if needed.

Thanks,
S.

Mon, Nov 23, 2020 at 19:23, pvprsd :

> Hi,
>
> Did anyone get a chance to look into this issue? Is this a supported
> configuration for Ignite? As these two features (client-side NearCache and
> server-side eviction) are very common, I would expect many projects to be
> using this combination.
>
> Can this be reported as a defect for ignite, if there is no configuration
> fix for this problem?
>
> Many thanks in advance.
>
> Thanks,
> Prasad
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite client node hangs while IgniteAtomicLong is created

2020-08-11 Thread Вячеслав Коптилин
Hello Ilya,

I had a look at your logs and thread dumps. This is definitely a bug.
The data structure processor is not properly initialized when a client
node connects to a cluster that is changing its own state (a state
transition from inactive to active).
I have created a ticket in order to address this issue
https://issues.apache.org/jira/browse/IGNITE-13348

Thanks,
S.

Tue, Aug 11, 2020 at 05:00, Ilya Roublev :

> Hello, Ilya,
>
> In the post above, one week ago, I attached the necessary thread dumps.
> Could you please say whether you have sufficient information to
> investigate the problem with the hanging of IgniteAtomicLong? I think the
> issue is not all that harmless: it concerns the latest Ignite 2.8.1, and
> fixing it may IMHO be important for the community (I think the cause is in
> the initialization of ignite-sys-atomic-cache simultaneously on several
> nodes, but I may certainly be mistaken). Unfortunately, I have seen no
> reaction on this for a week. Could you please give at least a hint that
> the problem is under investigation and there is the slightest chance that
> it can be resolved? Or is it better to work out some workarounds?
>
> Thank you very much in advance for any response.
>
> My best regards,
> Ilya
>
>
> ilya.kasnacheev wrote
> > Hello!
> >
> > Can you collect thread dumps from all nodes once you get them hanging?
> >
> > Can you throw together a reproducer project?
> >
> > Regards,
> > --
> > Ilya Kasnacheev
> >
> >
> > Tue, Aug 4, 2020 at 12:51, Ilya Roublev <iroublev@...>:
> >
> >> We are developing a Jira cloud app using Apache Ignite both as data
> >> storage and as a job scheduler. This is done via a standard Ignite
> >> client node. But we need to use Atlassian Connect Spring Boot to be
> >> able to communicate with Jira. In short, all is done exactly as in our
> >> article Boosting Jira Cloud app development with Apache Ignite
> >> <https://medium.com/alliedium/boosting-jira-cloud-app-development-with-apache-ignite-7eebc7bb3d48>.
> >> At first we used the simple Ignite JDBC driver just for Atlassian
> >> Connect Spring Boot, along with a separate Ignite client node for our
> >> own purposes. But this turned out to be very unstable when deployed in
> >> our local Kubernetes cluster (built via Kubespray) due to constant
> >> exceptions
> >>
> >> java.net.SocketException: Connection reset
> >>
> >> occurring from time to time (in fact, this revealed itself only in our
> >> local cluster; in AWS EKS all worked fine). To make all this more
> >> stable, we tried to use the Ignite JDBC Client driver exactly as
> >> described in the article mentioned above. Thus, our backend now uses
> >> two Ignite client nodes per single JVM: the first one for JDBC used by
> >> Atlassian Connect Spring Boot, the second one for our own purposes.
> >> This solution turned out to be good enough, because our app now works
> >> very stably both in our local cluster and in AWS EKS. But when we
> >> deploy our app in Docker for testing and development purposes, our
> >> Ignite client nodes hang from time to time. After some investigation,
> >> we were able to see that this occurs exactly at the instant when an
> >> object of IgniteAtomicLong is created. Below are logs both for
> >> successful initialization of our app and for the case when the Ignite
> >> client nodes hanged.
> >>
> >> Logs when all is ok:
> >> ignite-appclientnode-successful.log
> >> <http://apache-ignite-users.70518.x6.nabble.com/file/t2262/ignite-appclientnode-successful.log>
> >> ignite-jdbcclientnode-successful.log
> >> <http://apache-ignite-users.70518.x6.nabble.com/file/t2262/ignite-jdbcclientnode-successful.log>
> >>
> >> Logs when both client nodes hang:
> >> ignite-appclientnode-failed.log
> >> <http://apache-ignite-users.70518.x6.nabble.com/file/t2262/ignite-appclientnode-failed.log>
> >> ignite-jdbcclientnode-failed.log
> >> <http://apache-ignite-users.70518.x6.nabble.com/file/t2262/ignite-jdbcclientnode-failed.log>
> >>
> >> Some analysis and questions: From the logs one can see that the caches
> >> default, tenants, atlassian_host_audit, SQL_PUBLIC_ATLASSIAN_HOST are
> >> manipulated. In fact, default is given in the client configuration:
> >> client.xml
> >> <http://apache-ignite-users.70518.x6.nabble.com/file/t2262/client.xml>.
> >> The cache SQL_PUBLIC_ATLASSIAN_HOST contains the atlassian_host table
> >> mentioned in Boosting Jira Cloud app development with Apache Ignite
> >> <https://medium.com/alliedium/boosting-jira-cloud-app-development-with-apache-ignite-7eebc7bb3d48>
> >> and is created in advance, even before the app starts. Further,
> >> atlassian_host_audit is a copy of atlassian_host; in any case it is not
> >> yet created when the app hangs. As for the other entities processed by
> >> Ignite, they are created by the following code:
> >>

Re: Data is lost during rebalance

2019-11-07 Thread Вячеслав Коптилин
Hi Alex,

I will take a look at log files.

Thanks,
S.

Wed, Nov 6, 2019 at 17:51, novacean.alex :

> This is the test I am performing:
>
> 1. I have an Ignite cluster of 3 server nodes running in Kubernetes (the
> cluster is created using a StatefulSet).
> 2. Once the cluster is up, I use a Deployment to run 5 pods, each with a
> client node that performs put operations on my cache.
> 3. Once I see "Page-based evictions started. Consider increasing
> 'maxSize' on Data Region configuration: default" in the logs on all 3
> server nodes, I delete the deployment, thus stopping the put operations.
> 4. Once the deployment is deleted, I restart node-2 from the cluster and,
> once it is up again, I wait for it to finish rebalancing.
> 5. The result of the test so far is that entries are missing from node-2
> after the rebalance is complete.
>
> Log Files before the restart:
> node-0.log
> node-1.log
> node-2.log
>
> Log Files after the restart:
> node-0.log
> node-1.log
> node-2.log
>
> I want to use the cluster to store web sessions, and I have ~400 requests
> per second of cache read and put operations.
> Would disabling page eviction for the default data region end up causing
> OOM exceptions?
> If not, I will try to disable it as you suggested and perform more tests.
>
> Thanks,
> Alex.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Data is lost during rebalance

2019-11-06 Thread Вячеслав Коптилин
We are definitely missing something obvious. )))

1. Let's check log files for the following message: "Page-based evictions
started. Consider increasing 'maxSize' on Data Region configuration:"
2. Please try to disable page eviction for the default data region.
3. Could you please describe your test scenario in detail and attach log
files from all nodes.

Thanks,
S.

Wed, Nov 6, 2019 at 15:48, novacean.alex :

> Hello Slava,
>
> Apparently I celebrated too early. The first test I performed after I used
> the 'backup' cache property in the config was indeed a success, but that
> test was performed with a half-full cache. When performing the test with
> the cache full, the results are the same as previously: entries are missing
> after rebalance.
>
> The code snippet you provided returned: REPLICATED. And in Visor I also
> had this, which I missed sharing earlier:
>
> Thank you,
> Alex Novacean.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Data is lost during rebalance

2019-11-06 Thread Вячеслав Коптилин
Hi Alex,

Oh... I missed the fact that your cache is replicated. In that case, you
don't need to specify the number of backups.
Could you please check that the cache mode is replicated? You can try the
following code snippet:

System.out.println(ignite.cache("session-cache").getConfiguration(CacheConfiguration.class).getCacheMode());

Thanks,

S.


Wed, Nov 6, 2019 at 14:53, novacean.alex :

> Hello Slava,
>
> Thank you very much for the answer. It worked! Now every time one Ignite
> node gets restarted, it rebalances the exact number of keys.
>
> I was aware of the "backup" cache property, but I think I misunderstood its
> usage. As my cache is REPLICATED, and the documentation says that "In
> Ignite, replicated caches are implemented in a way similar to partitioned
> caches where every key has a primary copy and is also backed up on all
> other nodes in the cluster.", I thought that implied I already had
> the backups.
>
> Thank you again,
> Alex.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Data is lost during rebalance

2019-11-06 Thread Вячеслав Коптилин
Hello Alex,

You need to specify the number of backups for your cache. For instance:

<bean class="org.apache.ignite.configuration.CacheConfiguration">
    ...
    <property name="backups" value="1"/>
    ...
</bean>

Please take a look at the page for details:
https://apacheignite.readme.io/docs/primary-and-backup-copies

Thanks,
S.

Wed, Nov 6, 2019 at 12:20, novacean.alex :

> Hello,
>
> I am a new user of Ignite and I'm trying to get a cluster with 3 server
> nodes up and running in Kubernetes. Everything works perfectly until one
> Ignite node gets restarted. During the rebalance process I noticed that
> ~20,000 entries are lost. This happens with each restart. If two Ignite
> nodes are restarted at the same time, at the end of the rebalance process
> ~40,000 entries are lost.
>
> Before the restart:
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2660/misskeystest.png>
>
> After restart and rebalance:
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2660/misskeuresults.png>
>
> After the rebalance process is done, @n2 is missing 18705 entries.
>
> This is the config file I am using:
>
> [The Spring XML configuration was mangled in the archive. Recoverable
> details: an IgniteConfiguration bean with a consistentId taken from
> ${IGNITE_OVERRIDE_CONSISTENT_ID}, a DataStorageConfiguration whose
> DataRegionConfiguration uses pageEvictionMode RANDOM_2_LRU, a
> CacheConfiguration with writeSynchronizationMode PRIMARY_SYNC and an
> LruEvictionPolicy, a TcpCommunicationSpi, and a TcpDiscoverySpi using
> TcpDiscoveryKubernetesIpFinder.]
>
> I haven't found any other issue related to this, so my guess is that it
> must be a configuration problem.
> Any help would be greatly appreciated.
>
> Thank you,
> Alex.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite 2.7.0 : server node: null pointer exception

2019-07-26 Thread Вячеслав Коптилин
Hi Mahesh,

Yes, it should be done on server nodes.

Thanks,
S.

Thu, Jul 25, 2019 at 14:28, Mahesh Renduchintala <mahesh.renduchint...@aline-consulting.com>:

> IGNITE_DISCOVERY_HISTORY_SIZE=700
>
> Does this go on the server side or the thick client side?
>
>


Re: Ignite 2.7.0 : server node: null pointer exception

2019-07-25 Thread Вячеслав Коптилин
Hi Mahesh,

It definitely looks like a bug. I have created this ticket in order to
track the issue  https://issues.apache.org/jira/browse/IGNITE-12013
As a temporary workaround, I would propose increasing discovery history size
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteSystemProperties.html#IGNITE_DISCOVERY_HISTORY_SIZE
The default value is 500, so let's try to use 700, for instance.
(You need to pass it to your Java process as
-DIGNITE_DISCOVERY_HISTORY_SIZE=700.)
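
If the node is started from Java code, a minimal sketch of the same thing
(the config path is just a placeholder):

// Must be set before Ignition.start(); equivalent to passing
// -DIGNITE_DISCOVERY_HISTORY_SIZE=700 on the JVM command line.
System.setProperty(IgniteSystemProperties.IGNITE_DISCOVERY_HISTORY_SIZE, "700");
Ignite ignite = Ignition.start("config/example-ignite.xml");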

Thanks,
S.

Thu, Jul 25, 2019 at 08:12, Mahesh Renduchintala <
mahesh.renduchint...@aline-consulting.com>:

> The clients come in and get disconnected from the cluster for many reasons
> - some intentionally and some due to poor network.
> We can't have Ignite nodes crashing with a null pointer exception.
>
>
>


Re: Apache Ignite expired data remains in memory with TTL enabled and after thread execution

2019-07-22 Thread Вячеслав Коптилин
Hello,

It seems this is a known issue
https://issues.apache.org/jira/browse/IGNITE-11438 which was recently fixed
and will be available in AI 2.8.

Thanks,
S.

Mon, Jul 22, 2019 at 15:02, ales :

> Hello,
>
> Our company is currently facing an issue with Ignite expired entries that
> remain in the cache.
>
> Here is the setup to reproduce:
> Create an Ignite client (with client mode false) and put some data (10k
> entries/values) into it with a very small expiration time (~20s) and TTL
> enabled. Each time the cleanup thread runs it removes the entries that
> expired, but after a few attempts the thread no longer removes all the
> expired entries; some of them stay in memory and are not removed by the
> thread execution. That means we have expired data in memory, and it's
> something we want to avoid.
>
> Can you please confirm whether this is a real issue or just a misuse or
> misconfiguration of our setup?
>
> Thanks for your feedback.
>
> Test
> I've tried in three different setups: full local mode (embedded server) on
> MacOS, remote server using one node in Docker, and also remote cluster
> using
> 3 nodes in kubernetes.
>
> To reproduce
> Git repo: https://github.com/panes/ignite-sample
>
> Run MyIgniteLoadRunnerTest.run() to reproduce the issue described above.
>
> (Global setup: writing 10k entries of 64 octets each with TTL 10s.)
>
> Thank you.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite node crash: IndexOutOfBoundsException

2019-07-20 Thread Вячеслав Коптилин
Hi Mahesh,

It looks like a known issue
https://issues.apache.org/jira/browse/IGNITE-8552

Anyway, could you please share your cache configuration?

Thanks,
S


Sat, Jul 20, 2019 at 06:41, Mahesh Renduchintala <
mahesh.renduchint...@aline-consulting.com>:

> Hi
>
>
> We have an IndexOutOfBoundsException and ignite JVM stopped.
>
> Can you please check if it is a known bug?
>
>
> regards
>
> mahesh
>


Re: unsubscribe

2019-07-15 Thread Вячеслав Коптилин
Hello,

To unsubscribe from the user mailing list send an e-mail to
user-unsubscribe@ignite.apache.org
If you have a mailing client, follow the unsubscribe link here:
https://ignite.apache.org/community/resources.html#mail-lists

Thanks,
S.

Fri, Jul 12, 2019 at 23:26, Majid Salimi :

> unsubscribe
>
> --
>
>
>
>
> Regards,
> Majid Salimi Beni
> M.Sc. Student of Computer Engineering,
> Department of Computer Science and Engineering & IT
> Shiraz University
>


Re: Memory leak.Ignite runs slower and slower after a period of time.

2019-04-14 Thread Вячеслав Коптилин
Hi,

> Does it mean the cursor is still not closed in Ignite 2.6.0?
Yes, it seems so.

Thanks,
S.

Thu, Apr 11, 2019 at 15:06, Justin Ji :

> Emmm, does it mean the cursor is still not closed in Ignite 2.6.0?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Memory leak.Ignite runs slower and slower after a period of time.

2019-04-11 Thread Вячеслав Коптилин
Hi,

> BTW, I cannot find the class you provided (Ignite 2.6.0); did you spell it
> correctly?
Oops :) That class was added recently. Please take a look at
https://issues.apache.org/jira/browse/IGNITE-10827
The fix is already available on the master branch

Thanks,
S.

Thu, Apr 11, 2019 at 14:10, Justin Ji :

> Slava -
>
> Thank you again for your reply, I will close the cursor explicitly.
>
> BTW, I cannot find the class you provided (Ignite 2.6.0); did you spell it
> correctly?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Memory leak.Ignite runs slower and slower after a period of time.

2019-04-11 Thread Вячеслав Коптилин
Hello,

> I would like to know whether the resource will be closed after I iterate
> over the cursor.
Well, the answer is yes, the cursor will be closed (the underlying iterator
is wrapped by
org.apache.ignite.internal.processors.cache.AutoClosableCursorIterator).
Please take into account that this behavior is an implementation detail and
it can change.
So, I think the best choice would be to explicitly close the cursor using
close()/try-with-resources statement.
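
For example (assuming a cache and a SQL string similar to yours):

try (FieldsQueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery(sql))) {
    for (List<?> row : cursor) {
        // process the row
    }
} // the cursor is closed here even if the iteration throws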

Thanks,
S.

Thu, Apr 11, 2019 at 11:55, Justin Ji :

> One more question.
>
> According to the org.apache.ignite.cache.query.QueryCursor.getAll API, I
> know that the resources will be closed automatically once all results are
> fetched.
>
>
> I would like to know whether resources will be closed after I iterate over
> the cursor.
>
> The reason why I ask is that in my test case I cannot reproduce this issue
> if I iterate over all the records returned by the SQL query.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Memory leak.Ignite runs slower and slower after a period of time.

2019-04-10 Thread Вячеслав Коптилин
Hello,

At first glance, your code does not close FieldsQueryCursor instances.
Could you explicitly close the cursors via close() or use a
try-with-resources statement?

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/FieldsQueryCursor.html
[2]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/QueryCursor.html#getAll--

Thanks,
S.

Wed, Apr 10, 2019 at 10:15, BinaryTree :

> When my Ignite clients have been running for a while, they become slower
> and slower, as can be seen in our GC logs:
>
> 2019-04-10T06:42:47.885+: 62271.788: [Full GC (Ergonomics) [PSYoungGen: 
> 1494016K->1494005K(1797120K)] [ParOldGen: 2097006K->2097006K(2097152K)] 
> 3591022K->3591012K(3894272K), [Metaspace: 103757K->103757K(1144832K)], 
> 9.9864029 secs] [Times: user=19.85 sys=0.00, real=9.98 secs]
> 2019-04-10T06:42:57.874+: 62281.777: [Full GC (Ergonomics) [PSYoungGen: 
> 1494015K->1494012K(1797120K)] [ParOldGen: 2097006K->2097006K(2097152K)] 
> 3591022K->3591019K(3894272K), [Metaspace: 103757K->103757K(1144832K)], 
> 9.9982344 secs] [Times: user=19.87 sys=0.00, real=9.99 secs]
> 2019-04-10T06:43:07.874+: 62291.778: [Full GC (Ergonomics) [PSYoungGen: 
> 1494016K->1494014K(1797120K)] [ParOldGen: 2097006K->2097006K(2097152K)] 
> 3591022K->3591020K(3894272K), [Metaspace: 103757K->103757K(1144832K)], 
> 10.0803891 secs] [Times: user=19.93 sys=0.00, real=10.08 secs]
>
> According to the outputs, I am sure that some objects have not been
> recycled, so I dumped the heap and analyzed it in the Eclipse Memory
> Analyzer. [The analyzer screenshot did not survive in the archive.]
>
> From that report, I guess some bug or inappropriate usage is causing
> GridReduceQueryExecutor not to be recycled, but I don't know what the
> specific reason is, so I hope that you can give me some advice.
>
> The code segments show how I execute a query:
>
> public List<DpCache> query(String key, String value) {
>     List<DpCache> list = Lists.newArrayList();
>     String fields = "id, gmtCreate, gmtModified, devId, dpId, code, name, customName, mode, type, value, rawValue, time, status, uuid";
>     String sql = "select " + fields + " from " + IgniteTableKey.T_DATA_POINT_NEW + " where " + key + "='" + value + "'";
>     FieldsQueryCursor<List<?>> cursor = newIgniteCache.query(new SqlFieldsQuery(sql));
>     for (List<?> objects : cursor) {
>         DpCache cache = convertToDpCache(objects);
>         list.add(cache);
>     }
>     return list;
> }
>
> public DpCache queryOne(String devId, Integer dpId) {
>     DpCache cache = null;
>     String fields = "id, gmtCreate, gmtModified, devId, dpId, code, name, customName, mode, type, value, rawValue, time, status, uuid";
>     String sql = "select " + fields + " from " + IgniteTableKey.T_DATA_POINT_NEW + " where devId=? and dpId=?";
>
>     SqlFieldsQuery query = new SqlFieldsQuery(sql);
>     query.setArgs(devId, dpId);
>     FieldsQueryCursor<List<?>> cursor = newIgniteCache.query(query);
>     Iterator<List<?>> iterator = cursor.iterator();
>     if (iterator.hasNext()) {
>         cache = convertToDpCache(iterator.next());
>     }
>     return cache;
> }
>
> public boolean hasRecord(String devId, Integer dpId) {
>     String sql = "select 1 from t_data_point_new where devId=? and dpId=?";
>     SqlFieldsQuery query = new SqlFieldsQuery(sql);
>     query.setArgs(devId, dpId);
>
>     FieldsQueryCursor<List<?>> cursor = newIgniteCache.query(query);
>
>     Iterator<List<?>> iterator = cursor.iterator();
>     return iterator.hasNext();
> }
>
> public void invokeAllAsync(Map map) {
>     Map processorMap = Maps.newHashMap();
>     for (Map.Entry entry : map.entrySet()) {
>         processorMap.put(entry.getKey(), new DataPointEntryProcessor(entry.getValue()));
>     }
>     newIgniteCache.invokeAllAsync(processorMap);
> }
>
> Anyone who can give me advice will be appreciated.
>
> Looking forward to your reply.
>


Re: no data by JDBC or Cache query!

2019-03-14 Thread Вячеслав Коптилин
Hi,

Please make sure that your types are registered as follows:
CacheConfiguration<String, Person> ccfg = new
CacheConfiguration<>("PersonCache");
ccfg.setIndexedTypes(String.class, Person.class);
...
IgniteCache<String, Person> personCache =
igniteClient.getOrCreateCache(ccfg);

additional details can be found here:
https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-registering-indexed-types

Thanks,
S.

Thu, Mar 14, 2019 at 10:47, Aurora <2565003...@qq.com>:

> Ignite 2.7.0
> I loaded data with the following code, but I don't get any data back when
> executing a JDBC or cache query, although I can see the data via "cache
> -scan -c=@c4" in the IgniteVisorCmd.sh command window. Why?
>
> IgniteCache<String, Person> personCache =
> igniteClient.cache("PersonCache");
> Person person = new Person();
> person.setXXX(...);
> personCache.put(person.getId(), person);
>
>
> IgniteClient:
>
> public class IgniteClient {
> private final static Logger logger = LoggerFactory
> .getLogger(IgniteClient.class);
>
> private static Ignite client = null;
>
> static {
> logger.info("Initialize Ignite Client");
> try {
>
> // start client ignite
> IgniteConfiguration config = new IgniteConfiguration();
>
> //config.setClientMode(true);
> // config.setGridName(CLIENT_GRID);
> config.setPeerClassLoadingEnabled(true);
>
> // Configure discovery SPI.
> TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
> TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
>
>
> ipFinder.setAddresses(Arrays.asList("192.168.100.105:47500..47509"));
>
> discoSpi.setIpFinder(ipFinder);
>
> config.setDiscoverySpi(discoSpi);
>
> client = Ignition.start(config);
>
> } catch (IgniteException e) {
> e.printStackTrace();
> }
>
> }
>
> public static Ignite getClient() {
> return client;
> }
>
> }
>
> thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cassandra writeBehind configuration

2019-02-12 Thread Вячеслав Коптилин
Hello,

It looks like a known issue
https://issues.apache.org/jira/browse/IGNITE-8788. Unfortunately, it is not
resolved yet.
I would suggest the following workaround:

1. create your own CacheStoreFactory

public class CustomCassandraCacheStoreFactory<K, V> extends
CassandraCacheStoreFactory<K, V> {
    private final String persistenceSettingsXml;

    public CustomCassandraCacheStoreFactory(String persistenceSettingsXml) {
        this.persistenceSettingsXml = persistenceSettingsXml;
    }

    @Override public CassandraCacheStore<K, V> create() {
        setPersistenceSettings(new
KeyValuePersistenceSettings(persistenceSettingsXml));
        return super.create();
    }
}



2. configure Ignite cache as follows

CacheConfiguration configuration = new CacheConfiguration<>();

configuration.setName("test-cache");
configuration.setCacheMode(CacheMode.PARTITIONED);
...

String persistenceSettingsXml = new
ClassPathResource("cassandra.xml").getPath();
CassandraCacheStoreFactory cacheStoreFactory = new
CustomCassandraCacheStoreFactory(persistenceSettingsXml);
cacheStoreFactory.setDataSource(dataSource);

configuration.setCacheStoreFactory(cacheStoreFactory);
configuration.setWriteThrough(true);
configuration.setWriteBehindEnabled(true);
...

I assume that `cassandra.xml` is available on all nodes.

Thanks,
S.


Mon, Feb 11, 2019 at 20:15, Chris Genrich :

> I’ve created a small project to demonstrate issues I’ve had with Cassandra
> write behind configuration: https://github.com/cgenrich/ignite-cassandra
>
>
>
> My intention is to use a partitioned ignite cache that will persist
> asynchronously to Cassandra.  When using a single node cluster, the data is
> sent to Cassandra as expected.
>
>
>
> I’d appreciate any suggested changes to the configuration:
>
>
>
> DataSource ds = new DataSource();
> ds.setPort(9042);
> ds.setContactPoints("localhost");
> ds.setReconnectionPolicy(new ConstantReconnectionPolicy(1000));
> ds.setAuthProvider(new PlainTextAuthProvider(null, null));
> ds.setReadConsistency("ONE");
> ds.setWriteConsistency("ONE");
> ds.setLoadBalancingPolicy(new TokenAwarePolicy(new RoundRobinPolicy()));
>
> Ignition.getOrStart(localCluster(new IgniteConfiguration()))
>     .getOrCreateCache(new CacheConfiguration<>(WRITEBEHIND)
>         .setCacheMode(CacheMode.PARTITIONED)
>         .setExpiryPolicyFactory(TouchedExpiryPolicy.factoryOf(Duration.ONE_HOUR))
>         .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
>         .setBackups(1)
>         .setReadThrough(false).setWriteThrough(true).setWriteBehindEnabled(true)
>         .setWriteBehindFlushFrequency(100).setCopyOnRead(false)
>         .setCacheStoreFactory(new CassandraCacheStoreFactory<>()
>             .setDataSource(ds)
>             .setPersistenceSettings(new KeyValuePersistenceSettings(
>                 new ClassPathResource("cassandra.xml")))));
>
>
>
> [The cassandra.xml persistence settings were mangled by the list archive;
> the recoverable parts are a key persistence descriptor with
> strategy="POJO" and a value persistence descriptor for class
> "com.example.ignite.cassandra.CompositeValue", also with strategy="POJO".]
>
>
>
>
>
> With larger clusters I’ve encountered issues such as the following errors:
>
>
>
> 2019-02-11 09:59:44 ERROR GridDhtPartitionsExchangeFuture:161 - Failed to
> initialize cache(s) (will try to rollback). GridDhtPartitionExchangeId
> [topVer=AffinityTopologyVersion [topVer=8, minorTopVer=1],
> discoEvt=DiscoveryCustomEvent [customMsg=DynamicCacheChangeBatch
> [id=1bbff7dd861-4da0b1ed-6b7a-418f-9d65-d57552e74a30,
> reqs=[DynamicCacheChangeRequest [cacheName=writebehind, hasCfg=true,
> nodeId=813dd53e-5e05-4e4d-9f63-5a41634937c0, clientStartOnly=false,
> stop=false, destroy=false, disabledAfterStartfalse]],
> exchangeActions=ExchangeActions [startCaches=[writebehind],
> stopCaches=null, startGrps=[writebehind], stopGrps=[], resetParts=null,
> stateChangeRequest=null], startCaches=false],
> affTopVer=AffinityTopologyVersion [topVer=8, minorTopVer=1],
> super=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=813dd53e-5e05-4e4d-9f63-5a41634937c0, addrs=[0:0:0:0:0:0:0:1%lo0,
> 10.68.228.62, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo0:5021, /
> 127.0.0.1:5021], discPort=5021, order=1, intOrder=1,
> lastExchangeTime=1549904383332, loc=false,
> ver=2.7.0#20181130-sha1:256ae401, isClient=false], topVer=8,
> nodeId8=eca7beb4, msg=null, type=DISCOVERY_CUSTOM_EVT,
> tstamp=1549904384518]], nodeId=813dd53e, evt=DISCOVERY_CUSTOM_EVT]
>
> java.lang.NullPointerException
>
>  

Re: Default Cache template

2019-01-31 Thread Вячеслав Коптилин
Thanks for catching that. I have already proposed a fix for the XML example.
I hope the doc will be updated soon.

Regards,
S.

Wed, Jan 30, 2019 at 22:30, mahesh76private :

> It works.
>
> it isn't in your documentation though.
> https://apacheignite.readme.io/docs/cache-template
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Default Cache template

2019-01-30 Thread Вячеслав Коптилин
Hello,

You have to add '*' to the cache name as follows:

<property name="cacheConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="SQLTABLE_TEMPLATE*"/>
            ...
        </bean>
    </list>
</property>

Thanks,
S.

Tue, Jan 29, 2019 at 11:07, mahesh76private :

> Hi,
>
> I added the below to the node config.xml file. However, SQL table creation
> from the client side keeps complaining that the "SQLTABLE_TEMPLATE"
> template is not found.
>
> <property name="cacheConfiguration">
>     <list>
>         <bean class="org.apache.ignite.configuration.CacheConfiguration">
>             <property name="name" value="SQLTABLE_TEMPLATE"/>
>             ...
>         </bean>
>     </list>
> </property>
>
> The only way this works is from Java code, when I use the
> CacheConfiguration.addCacheConfiguration and register the template with the
> cluster.
>
> My need is to set the template in the node config XML and have it
> automatically register the template, so that there is no need to
> explicitly set it.
>
> Please let me know, if I am doing something wrong.
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Limiting the number of files in WalArchive folder

2018-12-09 Thread Вячеслав Коптилин
Hello,

The size of WAL Archive is controlled by the
'DataStorageConfiguration#walHistorySize' property [1], which is 20
checkpoints by default [2].
As of Apache Ignite 2.7, you can specify the size of wal archive in bytes
via 'DataStorageConfiguration#setMaxWalArchiveSize()' method [3].

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/DataStorageConfiguration.html#setWalHistorySize-int-
[2] https://apacheignite.readme.io/docs/persistence-checkpointing
[3]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/DataStorageConfiguration.html#setMaxWalArchiveSize-long-
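
For example, a minimal sketch of the 2.7 approach (the 4 GB limit is just
an illustrative value):

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
// Cap the WAL archive at roughly 4 GB.
storageCfg.setMaxWalArchiveSize(4L * 1024 * 1024 * 1024);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);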

Thanks,
S.

Sat, Dec 8, 2018 at 08:21, aMark :

> Hi,
>
> We are using Ignite 2.6 as a persistent store in partitioned mode with a
> 6-node cluster running, each node on a different machine.
>
> Our cluster has ~50G of data in the Ignite persistent store, but the
> WalArchive folder contains lots of files and its size is around ~200G.
> This uncontrollable growth has created a space issue on the drive and
> eventually leaves the cluster in a hung state.
>
> I understand that WalArchive is needed to recover data in case of a node
> crash, so we don't want to disable the WalArchive altogether.
>
> Is there a way to limit the number of files in WalArchive so that the disk
> does not run out of space?
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Do I need CacheStoreFactory class and hibernate configuration on Remote Node?

2018-09-17 Thread Вячеслав Коптилин
Hello,

> Do I need CacheStoreFactory class and hibernate configuration on Remote
Node?
Yes, CacheStore classes must be in the classpath on all nodes (including
the client nodes)
For example, you can put them to '$IGNITE_HOME/libs' folder (a subfolder
can be created for convenience as well).

> Note that: I have tried peer class loading false and true, but nothing
changed.
Long story short, 'CacheStore' does not support the zero deployment feature
(that feature applies to Ignite Compute, in general).
Please take a look at this page:
https://apacheignite.readme.io/docs/zero-deployment

Thanks,
S.


Mon, Sep 17, 2018 at 21:57, monstereo :

> Simply hibernate example:
>
> One node contains HibernateCacheStore and other configuration
> (hibernate.cfg.xml vs...)
> I have created that node and read database and write to cache (It works as
> it is) (cache name is "hibernateCache")
>
> Now I have created another node which has default configuration. Then in
> the
> main app just contains::
>
> public static void printCache(IgniteCache igniteCache) {
>     System.out.println("Print all the caches");
>     Iterator<Cache.Entry> cacheIterator = igniteCache.iterator();
>     while (cacheIterator.hasNext()) {
>         Cache.Entry cacheEntry = cacheIterator.next();
>         System.out.println("Key: " + cacheEntry.getKey() + ", value: " + cacheEntry.getValue());
>     }
> }
>
> public static void main(String[] args) {
>     Ignite node = Ignition.start(defaultxmlpath);
>     IgniteCache cache = node.getOrCreateCache("hibernateCache");
>     System.out.println(cache.size());
>     printCache(cache);
> }
>
> Even when I work in local mode (and the server node count becomes 2 when I
> run the other node), I cannot see the cache size on the screen and the
> cache is not printed. It just waits!
>
>
> Note that: I have tried peer class loading false and true, but nothing
> changed.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Cluster: Cache Misses happening for existing keys

2018-09-13 Thread Вячеслав Коптилин
Hi,

You can try with using EntryProcessor. Please take a look at the following
example:

public class EntryUpdater implements EntryProcessor<String, String, String> {
    @Override
    public String process(MutableEntry<String, String> entry,
        Object... arguments) throws EntryProcessorException {
        if (!entry.exists()) {
            // The cache does not contain a mapping for the given key.
            String newValue = getNewValue();

            entry.setValue(newValue);
        }
        else {
            // The entry already exists, just return the existing value.
        }

        return entry.getValue();
    }
}

private static String processKey(String key) {
    return cache.invoke(key, new EntryUpdater());
}

[1] 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#invoke-K-org.apache.ignite.cache.CacheEntryProcessor-java.lang.Object...-

Thanks,

S.


Thu, Sep 13, 2018 at 15:06, HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com>:

> Thank you for the prompt response.
>
>
>
> I agree there’s a race condition here. But the problem with the suggestion
> is that “getNewValue()” will get called even for cache hits. getNewValue
> eventually calls some backend legacy API, and the caching layer was set up
> to reduce the load on it.
>
>
>
> From: Вячеслав Коптилин [mailto:slava.kopti...@gmail.com]
> Sent: Thursday, September 13, 2018 4:08 AM
> To: user@ignite.apache.org
> Subject: Re: Ignite Cluster: Cache Misses happening for existing keys
>
>
>
>
>
> Hello,
>
>
>
> Your code contains the following obvious race:
>
> String value = cache.get(key);
> if (value == null) {
>     cache.put(key, getNewValue());
>     LOG.info("Cache put key={} ", key);
> }
>
> It's easy to imagine that two threads managed to execute the get() method
> and got a null value before the put() operation was called.
>
> Please try to modify the code as follows:
>
> if (cache.putIfAbsent(key, getNewValue())) {
>
> LOG.info("Cache put key={} ", key);
>
> }
>
> else {
>
> LOG.info("Cache hit key={}", key);
>
> }
>
>
>
> [1] 
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#putIfAbsent-K-V-
>
>
>
> Thanks,
>
> S.
>
>
>
Thu, Sep 13, 2018 at 6:29, HEWA WIDANA GAMAGE, SUBASH <
> subash.hewawidanagam...@fmr.com>:
>
> Hi all,
>
> We’re observing this in a 3 node server cluster(3 separate JVMs, in 3
> separate VMs within same datacenter, node to node latency is 2-3
> milliseconds within the network).
>
>
>
> The following code is wrapped inside an HTTP API, and that API is being
> called by 5000 users ramping up (60 seconds) continuously, hitting the 3
> nodes in a round-robin manner.
>
>
>
> With this, for the same cache key, I could see more than one “Cache put
> key=” log appearing within a 15-minute window (actually I am getting these
> duplicate put logs after 2-3 minutes of the load test).
>
>
>
> For the SAME cache key, there cannot be more than one put within 15 minutes.
> Based on cache size, it’s well below the eviction size, and since it’s well
> within the expiry window, it looks to me like some timing issue when
> replicating the cache between the nodes.
>
>
>
> The time between same-key cache put logs is usually about 8-12 seconds. Am
> I doing something wrong here? Is there any way we can make a cache.put
> operation complete synchronously only upon full node replication (not
> quite sure whether that would help, though)?
>
>
>
> Version: Ignite 1.9
>
>
>
> Code to create the cache:
>
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setDiscoverySpi(getDiscoverySpi()); // static ip list on tcp discovery
> cfg.setClientMode(false);
> cfg.setIncludeEventTypes(EventType.EVT_NODE_SEGMENTED, EventType.EVT_NODE_FAILED);
> Ignite ignite = Ignition.start(cfg);
>
> ignite.events().localListen(event -> {
>     LOG.info("Cache event received: {} ", event);
>     return true;
> }, EventType.EVT_NODE_SEGMENTED, EventType.EVT_NODE_FAILED);
>
> CacheConfiguration cc = new CacheConfiguration<>();
> cc.setName("mycache1");
> cc.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 15)));
> cc.setCacheMode(CacheMode.REPLICATED);
> LruEvictionPolicy evictionPolicy = new LruEvictionPolicy();
> evictionPolicy.setMaxMemorySize(500 * 1024 * 1024);
> cc.setEvictionPolicy(evictionPolicy);

Re: Ignite Cluster: Cache Misses happening for existing keys

2018-09-13 Thread Вячеслав Коптилин
Hello,

Your code contains the following obvious race:

String value = cache.get(key);
if (value == null) {
    cache.put(key, getNewValue());
    LOG.info("Cache put key={} ", key);
}

It's easy to imagine that two threads managed to execute the get() method
and got a null value before the put() operation was called.

Please try to modify the code as follows:

if (cache.putIfAbsent(key, getNewValue())) {

LOG.info("Cache put key={} ", key);

}

else {

LOG.info("Cache hit key={}", key);

}


[1] 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#putIfAbsent-K-V-


Thanks,

S.


Thu, Sep 13, 2018 at 6:29, HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com>:

> Hi all,
>
> We’re observing this in a 3 node server cluster(3 separate JVMs, in 3
> separate VMs within same datacenter, node to node latency is 2-3
> milliseconds within the network).
>
>
>
> The following code is wrapped inside an HTTP API, and that API is being
> called by 5000 users ramping up (60 seconds) continuously, hitting the 3
> nodes in a round-robin manner.
>
>
>
> With this, for the same cache key, I could see more than one “Cache put
> key=” log appearing within a 15-minute window (actually I am getting these
> duplicate put logs after 2-3 minutes of the load test).
>
>
>
> For the SAME cache key, there cannot be more than one put within 15 minutes.
> Based on cache size, it’s well below the eviction size, and since it’s well
> within the expiry window, it looks to me like some timing issue when
> replicating the cache between the nodes.
>
>
>
> The time between same-key cache put logs is usually about 8-12 seconds. Am
> I doing something wrong here? Is there any way we can make a cache.put
> operation complete synchronously only upon full node replication (not
> quite sure whether that would help, though)?
>
>
>
> Version: Ignite 1.9
>
>
>
> Code to create the cache:
>
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setDiscoverySpi(getDiscoverySpi()); // static ip list on tcp discovery
> cfg.setClientMode(false);
> cfg.setIncludeEventTypes(EventType.EVT_NODE_SEGMENTED, EventType.EVT_NODE_FAILED);
> Ignite ignite = Ignition.start(cfg);
>
> ignite.events().localListen(event -> {
>     LOG.info("Cache event received: {} ", event);
>     return true;
> }, EventType.EVT_NODE_SEGMENTED, EventType.EVT_NODE_FAILED);
>
> CacheConfiguration cc = new CacheConfiguration<>();
> cc.setName("mycache1");
> cc.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 15)));
> cc.setCacheMode(CacheMode.REPLICATED);
> LruEvictionPolicy evictionPolicy = new LruEvictionPolicy();
> evictionPolicy.setMaxMemorySize(500 * 1024 * 1024);
> cc.setEvictionPolicy(evictionPolicy);
>
> IgniteCache cache = ignite.getOrCreateCache(cc);
>
> Code for cache operations (the following method can be accessed by
> multiple threads at the same time):
>
> private static String processKey(String key) {
>     String value = cache.get(key);
>     if (value == null) {
>         cache.put(key, getNewValue());
>         LOG.info("Cache put key={} ", key);
>     } else {
>         LOG.info("Cache hit key={}", key);
>     }
>     return value;
> }
>
>
>
>
>
>
>


Re: Partition map exchange in detail

2018-09-12 Thread Вячеслав Коптилин
Hello Eugene,

I hope you meant PME (partitions map exchange) instead of NPE :)

> What constitutes a transaction in this context?
If I am not mistaken, it is about Ignite transactions.
Please take a look at this page
https://apacheignite.readme.io/docs/transactions

> Does it mean that if the cluster constantly receives transaction
requests, NPE will never happen?
> Or will all transactions that were received after the NPE request wait
for the NPE to complete?
Transactions that were initiated after the PME request will wait until the
PME is completed.

Thanks,
S.

Wed, Sep 12, 2018 at 22:51, eugene miretsky :

> Make sense
> I think the actual issue that was affecting me is
> https://issues.apache.org/jira/browse/IGNITE-9562. (which IEP-25 should
> solve).
>
> Final 2 questions:
> 1) If all NPE waits for all pending transactions
>   a) What constitutes a transaction in this context? (any query, a SQL
> transaction, etc)
>   b) Does it mean that if the cluster constantly receives transaction
> requests, NPE will never happen? (Or will all transactions that were
> received after the NPE request wait for the NPE to complete?)
> 2) Any other advice on how to avoid NPE? (transaction timeouts, graceful
> shutdown/restart of nodes, etc)
>
> Cheers,
> Eugene
>
>
>
>
>
> On Wed, Sep 12, 2018 at 12:18 PM Pavel Kovalenko 
> wrote:
>
>> Eugene,
>>
>> In the case where Zookeeper Discovery is enabled and there is a
>> communication problem between some nodes, a subset of the problem nodes
>> will be automatically killed to reach a cluster state where every node can
>> communicate with the others without problems. So, you're absolutely right,
>> dead nodes will be removed from the cluster and will not participate in PME.
>> IEP-25 is trying to solve a more general problem related only to PME.
>> Network problems are only part of what can happen during PME. A node
>> may break down before it even tries to send a message because of unexpected
>> exceptions (e.g. NullPointerException, runtime or assertion errors). In general, IEP-25
>> tries to defend us from any kind of unexpected problems to make sure that
>> PME will not be blocked in that case and the cluster will continue to live.
>>
>>
>> Wed, Sep 12, 2018 at 18:53, eugene miretsky :
>>
>>> Hi Pavel,
>>>
>>> The issue we are discussing is PME failing because one node cannot
>>> communicate to another node, that's what IEP-25 is trying to solve. But in
>>> that case (where one node is either down, or there is a communication
>>> problem between two nodes) I would expect the split brain resolver to kick
>>> in, and shut down one of the nodes. I would also expect the dead node to be
>>> removed from the cluster, and no longer take part in PME.
>>>
>>>
>>>
>>> On Wed, Sep 12, 2018 at 11:25 AM Pavel Kovalenko 
>>> wrote:
>>>
 Hi Eugene,

 Sorry, but I didn't catch the meaning of your question about Zookeeper
 Discovery. Could you please re-phrase it?

Wed, Sep 12, 2018 at 17:54, Ilya Lantukh :

> Pavel K., can you please answer about Zookeeper discovery?
>
> On Wed, Sep 12, 2018 at 5:49 PM, eugene miretsky <
> eugene.miret...@gmail.com> wrote:
>
>> Thanks for the patience with my questions - just trying to understand
>> the system better.
>>
>> 3) I was referring to
>> https://apacheignite.readme.io/docs/zookeeper-discovery#section-failures-and-split-brain-handling.
>> How come it doesn't get the node to shut down?
>> 4) Are there any docs/JIRAs that explain how counters are used, and
>> why they are required in the state?
>>
>> Cheers,
>> Eugene
>>
>>
>> On Wed, Sep 12, 2018 at 10:04 AM Ilya Lantukh 
>> wrote:
>>
>>> 3) Such mechanics will be implemented in IEP-25 (linked above).
>>> 4) Partition map states include update counters, which are
>>> incremented on every cache update and play important role in new state
>>> calculation. So, technically, every cache operation can lead to 
>>> partition
>>> map change, and for obvious reasons we can't route them through
>>> coordinator. Ignite is a more complex system than Akka or Kafka and such
>>> simple solutions won't work here (in general case). However, it is true
>>> that PME could be simplified or completely avoid for certain cases and 
>>> the
>>> community is currently working on such optimizations (
>>> https://issues.apache.org/jira/browse/IGNITE-9558 for example).
>>>
>>> On Wed, Sep 12, 2018 at 9:08 AM, eugene miretsky <
>>> eugene.miret...@gmail.com> wrote:
>>>
 2b) I had a few situations where the cluster went into a state
 where PME constantly failed, and could never recover. I think the root
 cause was that a transaction got stuck and didn't timeout/rollback.  I 
 will
 try to reproduce it again and get back to you
 3) If a node is down, I would expect it to get detected and the
 node to get removed fro

Re: How to create tables with JDBC, read with ODBC?

2018-09-06 Thread Вячеслав Коптилин
Hi,

> I have tried various things on the Java side to make the Public schema
explicit, such as this:
If I'm not mistaken, the schema can be specified as a parameter of the ODBC
connection string.
Please take a look at this page:
https://apacheignite-sql.readme.io/docs/connection-string-and-dsn#section-connection-string-format
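
For example, a DSN-less connection string might look like this (assuming
the default client connector port 10800):

DRIVER={Apache Ignite};ADDRESS=localhost:10800;SCHEMA=PUBLIC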

Also, you can find an example here:
https://github.com/apache/ignite/blob/master/modules/platforms/cpp/examples/odbc-example/src/odbc_example.cpp

Thanks,
S.

Thu, Sep 6, 2018 at 16:59, limabean :

> Scenario:
> 64-bit ODBC driver cannot read data created from the Java Thin driver.
>
> Ignite 2.6.
> Running a single node server on Centos to test this.
>
> First:
> Using Intellij to remotely run the sample code from the Ignite Getting
> started page here on SQL:
> First Ignite SQL Application
> https://apacheignite.readme.io/docs/getting-started
>
> This all works fine.  Tables created, data inserted, data read.  All as
> expected.
>
> Next:
> Using the ODBC 64-bit driver from Windows 10 to connect to the still
> running
> Ignite server to read the same tables (City, Person).   This does not work.
>
> The ODBC driver appears to be able to get meta data - it gets the table
> names from the PUBLIC schema and it understands the fields / field counts
> in
> each table.  However, the ODBC driver is unable to perform any select
> operations on the tables. See the following stack trace as an example of
> the
> errors I am seeing:
>
>
> [13:30:19]   ^-- default [initSize=256.0 MiB, maxSize=6.3 GiB,
> persistenceEnabled=false]
> [13:33:09,424][SEVERE][client-connector-#45][OdbcRequestHandler] Failed to
> execute SQL query [reqId=0, req=OdbcQueryExecuteRequest [schema=PUBLIC,
> sqlQry=SELECT COUNT(*) FROM ""PUBLIC"".CITY, timeout=0, args=[]]]
> class org.apache.ignite.internal.processors.query.IgniteSQLException:
> Failed
> to parse query. Table  not found; SQL statement:
> SELECT COUNT(*) FROM ""PUBLIC"".CITY [42102-195]
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatementAndCaches(IgniteH2Indexing.java:2026)
>
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.parseAndSplit(IgniteH2Indexing.java:1796)
>
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1652)
>
>
> ---
>
> I have tried various things on the Java side to make the Public schema
> explicit, such as this:
> conn = DriverManager.getConnection("jdbc:ignite:thin://10.60.1.101/PUBLIC");
>
> // conn.setSchema("PUBLIC");
>
> but this does not help with the ODBC problem.  The Java stuff still works
> fine.  Select statements in Java can be written like this and they still
> work:
>
>  stmt.executeQuery("SELECT p.name, c.name " +
>  " FROM PUBLIC.Person p, City c " +
>  " WHERE p.city_id = c.id"))
>
>
> Any advice on how this should be done (sample code?) is much appreciated.
> Thank you.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: configuration for some persistent and some non-persistent caches

2018-09-05 Thread Вячеслав Коптилин
I meant the `datastore` cache, of course :)

Thanks,
S.

Wed, Sep 5, 2018 at 19:04, Вячеслав Коптилин :

> Hello,
>
> I just tried the configuration you provided, and it works as expected.
> I mean that only the `datasource` cache persists its own data.
> Please make sure you clean up the working directory before testing.
>
> Thanks,
> S.
>
>
Sun, Sep 2, 2018 at 11:50, joseheitor :
>
>> Hi Denis,
>>
>> I am struggling to get this properly configured for a persistent cache
>> (DATASTORE) and a non-persistent cache (SESSIONCACHE). Here is my config
>> ...
>> (what am I doing wrong?):
>>
>> [The configuration was mangled by the list archive. The recoverable
>> parts are: an IgniteConfiguration bean containing a
>> DataStorageConfiguration with a DataRegionConfiguration, plus two
>> CacheConfiguration beans, one of which sets dataRegionName to
>> "Persistent_Region".]
>> ...
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: configuration for some persistent and some non-persistent caches

2018-09-05 Thread Вячеслав Коптилин
Hello,

I just tried the configuration you provided, and it works as expected.
I mean that only the `datasource` cache persists its own data.
Please make sure you clean up the working directory before testing.

Thanks,
S.


Sun, Sep 2, 2018 at 11:50, joseheitor :

> Hi Denis,
>
> I am struggling to get this properly configured for a persistent cache
> (DATASTORE) and a non-persistent cache (SESSIONCACHE). Here is my config
> ...
> (what am I doing wrong?):
>
> [The configuration was mangled by the list archive. The recoverable parts
> are: an IgniteConfiguration bean containing a DataStorageConfiguration
> with a DataRegionConfiguration, plus two CacheConfiguration beans, one of
> which sets dataRegionName to "Persistent_Region".]
> ...
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: error starting Ignite -Failed to activate node components

2018-09-03 Thread Вячеслав Коптилин
Hello,

The root cause of the issue is that your Spring configuration is not quite
correct.
It seems that the `dataSource` property [1] refers to the
`dsSQLServer_Sustainanalytics` which cannot be found:

Caused by: class org.apache.ignite.IgniteCheckedException: Spring bean
with the provided name doesn't exist,
beanName=dsSQLServer_Sustainanalytics]
at

org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadBeanFromAppContext(IgniteSpringHelperImpl.java:220)

Please make sure that `dsSQLServer_Sustainanalytics` bean exists and it is
properly configured.

[1]
https://apacheignite.readme.io/docs/3rd-party-store#section-cachejdbcpojostore
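
A minimal sketch of such a bean definition (the driver class, URL, and
credentials here are illustrative assumptions, not taken from your setup):

<bean id="dsSQLServer_Sustainanalytics"
      class="com.microsoft.sqlserver.jdbc.SQLServerDataSource">
    <property name="URL"
              value="jdbc:sqlserver://localhost:1433;databaseName=Sustainanalytics"/>
    <property name="user" value="ignite"/>
    <property name="password" value="secret"/>
</bean>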

Thanks,
S.

Mon, Sep 3, 2018 at 15:27, wt :

>
>  
> Ignition.start("C:\\ignite\\examples\\config\\persistentstore\\example-persistent-store.xml");
>
>
> "C:\Program Files\Java\jdk1.8.0_151\bin\java" [the rest of the JVM command
> line, a very long classpath listing, is elided; it is truncated in the
> archive]

Re: How to check if key exists in DataStreamer buffer so that it can be flushed?

2018-08-29 Thread Вячеслав Коптилин
Hi Dave,

> The DataStreamer is unordered
Yes, that is absolutely correct.

If I understand correctly, the initial use case is the following:
- there is an initial payload that is streamed into the cluster via
data streamer.
- during that operation, new updates arrive and corresponding keys
should be updated.

So, it seems you can just do cache.put(key, newValue) (I assume that
`allowOverwrite` == false).

> What we do is have a version # that we store in the value, and the
StreamReceiver ignores earlier versions.
In any way, your approach looks reasonable to me.

Thanks,
S.

Wed, Aug 29, 2018 at 16:30, Dave Harvey :

> The DataStreamer is unordered.   If you have  duplicate keys with
> different values, and you don't flush or take other action, then you will
> get an arbitrary result.   AllowOverwrite is not a solution.
>
> Adding to the streamer returns a Future, and all of those futures are
> notified when the buffer is committed.
>
> You can keep a map by key of those futures if the source is a single
> client, and delay subsequent updates until the first completes.   You could
> discard more than one duplicate.
>
> What we do is have a version # that we store in the value, and the
> StreamReceiver ignores earlier versions.
> -DH
>
> On Wed, Aug 29, 2018 at 8:08 AM, Вячеслав Коптилин <
> slava.kopti...@gmail.com> wrote:
>
>> Hello,
>>
>> I don't think there is a way to do that check. Moreover, it seems to me
>> that it would be useless in any case.
>> The thing that allows you to achieve the desired behavior is the
>> `allowOverwrite` flag [1].
>> By default, the data streamer will not overwrite existing data, which
>> means that if it encounters an entry that is already in the cache, it will
>> skip it.
>> So, you can just leave `allowOverwrite` as `false` (which is the default
>> value) and put the updated values into the cache.
>>
>> [1]
>> https://apacheignite.readme.io/docs/data-streamers#section-allow-overwrite
>>
>> Thanks,
>> S.
>>
>> Wed, Aug 29, 2018 at 8:32, the_palakkaran :
>>
>>> Hi,
>>>
>>> I have a data streamer to load data into a cache. While loading I might
>>> need to update the value of a particular key in the cache, so I need to
>>> check if it is already there in the streamer buffer. If so, I need to
>>> either update the value against that key in the buffer or flush the data
>>> in the streamer and then update. Is there a way to do this?
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>
>
>
> *Disclaimer*
>
> The information contained in this communication from the sender is
> confidential. It is intended solely for use by the recipient and others
> authorized to receive it. If you are not the recipient, you are hereby
> notified that any disclosure, copying, distribution or taking action in
> relation of the contents of this information is strictly prohibited and may
> be unlawful.
>
> This email has been scanned for viruses and malware, and may have been
> automatically archived by *Mimecast Ltd*, an innovator in Software as a
> Service (SaaS) for business. Providing a *safer* and *more useful* place
> for your human generated data. Specializing in; Security, archiving and
> compliance. To find out more Click Here
> <http://www.mimecast.com/products/>.
>


Re: How to check if key exists in DataStreamer buffer so that it can be flushed?

2018-08-29 Thread Вячеслав Коптилин
Hello,

I don't think there is a way to do that check. Moreover, it seems to me
that it would be useless in any case.
The thing that allows you to achieve the desired behavior is the
`allowOverwrite` flag [1].
By default, the data streamer will not overwrite existing data, which means
that if it encounters an entry that is already in the cache, it will skip
it.
So, you can just leave `allowOverwrite` as `false` (which is the default
value) and put the updated values into the cache.

[1]
https://apacheignite.readme.io/docs/data-streamers#section-allow-overwrite
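
A minimal sketch (the cache name and key/value types are illustrative):

IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
    streamer.allowOverwrite(false); // the default: existing keys are skipped
    streamer.addData(1, "initial value");
}

// Updates that must win go straight to the cache:
cache.put(1, "updated value");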

Thanks,
S.

Wed, Aug 29, 2018 at 8:32, the_palakkaran :

> Hi,
>
> I have a data streamer to load data into a cache. While loading I might
> need to update the value of a particular key in the cache, so I need to
> check if it is already there in the streamer buffer. If so, I need to
> either update the value against that key in the buffer or flush the data
> in the streamer and then update. Is there a way to do this?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How many threads does ignite.executorService() have.

2018-08-28 Thread Вячеслав Коптилин
Hello,

Usually, the size of a thread pool is max(8, total number of CPUs).
The details about configuring particular thread pools can be found here:
https://apacheignite.readme.io/docs/thread-pools
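
If I am not mistaken, tasks submitted through ignite.executorService() are
executed in the public pool on each node, so a minimal sketch of overriding
its size would be:

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPublicThreadPoolSize(16); // default is max(8, number of CPU cores)
Ignite ignite = Ignition.start(cfg);

ExecutorService exec = ignite.executorService();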

Thanks,
S.

Tue, Aug 28, 2018 at 15:49, begineer :

> Hi guys,
> I have one simple question about the Ignite executor service.
> I know the Java executor service accepts the number of threads to run, but
> there is no option to pass a thread count to the Ignite executor service.
>
> So how does it decide how many threads to start?
>
> many thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: While enabling JCache, JCacheMetrics is throwing NullPointerException in getCacheManager with spring boot

2018-08-17 Thread Вячеслав Коптилин
Hello,

As I mentioned above, this exception indicates that the cache with the
given name already exists in the cluster.

Thanks,
S.

Fri, Aug 17, 2018, 10:52 daya airody :

> Hi Slava,
>
> The other reply is about issues with caching Spring-proxied objects.
>
> I am also seeing the "Cache already exists" issue. Could you please throw
> some light on this?
>
> Thanks in advance,
> --daya--
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: sql query log

2018-08-16 Thread Вячеслав Коптилин
Hello,

If I am not mistaken, there is no such capability out of the box.
You can try to use the IgniteConfiguration#longQueryWarningTimeout property
as a workaround.

For example, you can set this property to 1 ms. In this case, every SQL
query that takes more than 1 ms will be printed in the log as follows:
[2018-01-04 06:32:38,230][WARN ] [IgniteH2Indexing] Query execution is
too long [time=3518 ms, sql='your SQL query and its execution plan will be
printed here']

I hope the workaround will be helpful for debugging purposes.
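
A minimal sketch of the workaround:

IgniteConfiguration cfg = new IgniteConfiguration();
// Every query running longer than 1 ms will be reported in the log.
cfg.setLongQueryWarningTimeout(1);
Ignite ignite = Ignition.start(cfg);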

Thanks,
S.


Thu, Aug 16, 2018 at 18:22, Som Som <2av10...@gmail.com>:

> Hi,
>
> How can I log all the SQL queries on the server node side?
>


Re: While enabling JCache, JCacheMetrics is throwing NullPointerException in getCacheManager with spring boot

2018-08-15 Thread Вячеслав Коптилин
Hi,

looks like your question is already answered here
http://apache-ignite-users.70518.x6.nabble.com/values-retrieved-from-the-cache-are-wrapped-with-JdkDynamicAopProxy-while-using-springboot-and-JCache-tp23258p23426.html

Thanks,
S.

Mon, Aug 13, 2018 at 15:14, daya airody :

> Hi slava,
>
> I have uploaded my code at below link:
> https://github.com/daya-airody/ignite-caching
>
> You need to uncomment the lines below in application.properties before
> running startup.sh:
>
> spring.cache.jcache.config=classpath:example-cache.xml
> spring.cache.cache-names=users,cannedReports
>
> When you run startup.sh, it throws the error below:
>
> org.apache.ignite.cache.CacheExistsException: Failed to start cache (a
> cache
> with the same name is already started): users
>
> Please review my code and help me debug this issue.
> Thanks in advance,
> --daya--
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: distributed-ddl extended-parameters section showing 404 page not found

2018-08-15 Thread Вячеслав Коптилин
Hello Huang,

Both of them should work. In any case, I would suggest using 'AFFINITY_KEY'
instead of 'AFFINITYKEY'.
You can find the details here:
https://issues.apache.org/jira/browse/IGNITE-6270

By the way, I will try to update the docs.

Thanks,
S.


Wed, Aug 15, 2018 at 14:19, Huang Meilong :

> thank you slava,
>
>
> it says that to specify an affinity key name we can use AFFINITI_KEY in
> the Parameters section,
>
> "
>
>- AFFINITY_KEY=<column name> - specifies an affinity key
><https://apacheignite.readme.io/docs/affinity-collocation> name which
>is a column of the PRIMARY KEY constraint.
>-
>
> "
>
>
> but in the example section, it uses affinityKey
>
>
>
>- SQL <https://apacheignite-sql.readme.io/docs/create-table>
>
> Copy
>
> CREATE TABLE IF NOT EXISTS Person (
>   id int,
>   city_id int,
>   name varchar,
>   age int,
>   company varchar,
>   PRIMARY KEY (id, city_id)
> ) WITH "template=partitioned,backups=1,affinitykey=city_id, key_type=PersonKe
>
>
>
> Which one works?
> --
> From: Вячеслав Коптилин 
> Sent: August 15, 2018 16:14:34
> To: user@ignite.apache.org
> Subject: Re: distributed-ddl extended-parameters section showing 404 page
> not found
>
> Hello,
>
> Yep, the link is broken, unfortunately.
> It seems it should be the following
> https://apacheignite-sql.readme.io/docs/create-table#section-parameters
>
> Thanks,
> S.
>
Wed, Aug 15, 2018 at 10:17, Huang Meilong :
>
> I found it here: https://apacheignite-sql.readme.io/docs/getting-started
>
>
> """
>
> to set other cache configurations for the table, you should use the
> template parameter and provide the name of the cache configuration
> previously registered(via XML or code). See extended parameters
> <https://apacheignite-sql.readme.io/docs/distributed-ddl#section-extended-parameters>
>  section
> for more details.
>
> """
> --
> From: dkarachentsev 
> Sent: August 15, 2018 14:48:56
> To: user@ignite.apache.org
> Subject: Re: distributed-ddl extended-parameters section showing 404 page
> not found
>
> Hi,
>
> Where did you find it? It might be a broken link.
>
> Thanks!
> -Dmitry
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>


Re: distributed-ddl extended-parameters section showing 404 page not found

2018-08-15 Thread Вячеслав Коптилин
Hello,

Yep, the link is broken, unfortunately.
It seems it should be the following
https://apacheignite-sql.readme.io/docs/create-table#section-parameters

Thanks,
S.

Wed, Aug 15, 2018 at 10:17, Huang Meilong :

> I found it here: https://apacheignite-sql.readme.io/docs/getting-started
>
>
> """
>
> to set other cache configurations for the table, you should use the
> template parameter and provide the name of the cache configuration
> previously registered(via XML or code). See extended parameters
> 
>  section
> for more details.
>
> """
> --
> From: dkarachentsev 
> Sent: August 15, 2018 14:48:56
> To: user@ignite.apache.org
> Subject: Re: distributed-ddl extended-parameters section showing 404 page
> not found
>
> Hi,
>
> Where did you find it? It might be a broken link.
>
> Thanks!
> -Dmitry
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite with POJO persistency in SQLServer

2018-08-10 Thread Вячеслав Коптилин
Hello,

Perhaps, I'm missing something but it seems that LONGVARBINARY is not
supported by Apache Ignite.
The full list of available types can be found here:
https://apacheignite-sql.readme.io/docs/data-types

Thanks,
S.

Wed, Aug 8, 2018 at 18:52, michal23849 :

> Hi All,
>
> I tried mapping the fields in a number of different combinations based on
> the above, but every time I fail with the SQLServerException: The
> conversion from UNKNOWN to UNKNOWN is unsupported.
>
> The mappings I used in the following structure included:
> <bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
>     <property name="databaseFieldType">
>         <util:constant static-field="java.sql.Types.LONGVARBINARY"/>
>     </property>
>     <property name="databaseFieldName" value="firstCode"/>
>     <property name="javaFieldType" value="java.lang.Byte[]"/>
>     <property name="javaFieldName" value="firstCode"/>
> </bean>
>  
>
> I also checked other combinations of:
> javaFieldTypes:
> my.package.ListingCode
> byte[]
> java.lang.Byte[]
> java.sql.Blob
> Object
>
> to JdbcTypes (java.sql.Types.):
> LONGVARBINARY
> VARBINARY
>
> Based on the SQLServer JDBC driver documentation and Ignite's, all of this
> should be supported. Could you please shed some more light on how the
> object is passed to the driver and how best it should be mapped in XML?
>
> Thank you
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: While enabling JCache, JCacheMetrics is throwing NullPointerException in getCacheManager with spring boot

2018-08-07 Thread Вячеслав Коптилин
Hi,

You can create a cache via CacheManager instance using Ignite
CacheConfiguration or MutableConfiguration:

public class ExampleNodeStartup {
    public static void main(String[] args) throws Exception {
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager manager = provider.getCacheManager();

        Cache<Object, Object> c1 = manager.createCache("my-test-cache-1", new
MutableConfiguration<>());

        CacheConfiguration<Object, Object> cfg = new
CacheConfiguration<>("my-test-cache-2");
        cfg.setCacheMode(CacheMode.REPLICATED);
        cfg.setBackups(2);

        Cache<Object, Object> c2 = manager.createCache(cfg.getName(), cfg);
    }
}

Moreover, you can provide your own Ignite configuration as well:

public class ExampleNodeStartup {
    public static void main(String[] args) throws Exception {
        URI igniteCfg =
            URI.create("file:///projects/ignite/examples/config/example-cache.xml");

        CachingProvider provider = Caching.getCachingProvider();
        CacheManager manager = provider.getCacheManager(igniteCfg,
            ExampleNodeStartup.class.getClassLoader());

        // 'default' is a reserved word in Java, so use another variable name
        Cache<Object, Object> dfltCache = manager.getCache("default");
    }
}

All these examples work as expected, without any errors/exceptions.

So, I would suggest creating a small project that reproduces the issue you
mentioned and uploading it to GitHub
so that the community can take a look at it.

Thanks.

Tue, 7 Aug 2018 at 20:45, daya airody :

> Hi slava,
>
> thanks for your comments. I am creating the cache directly using JCache API
> below:
> ---
> @Bean
> public JCacheManagerCustomizer cacheManagerCustomizer() {
>     return cm -> {
>         Configuration cacheConfiguration = createCacheConfiguration();
>
>         if (cm.getCache("users") == null)
>             cm.createCache("users", cacheConfiguration);
>         if (cm.getCache("cannedReports") == null)
>             cm.createCache("cannedReports", createCustomCacheConfiguration());
>     };
> }
> --
> Even though I have created these caches using JCache API ( and not through
> Ignite API), when I restart my application, cache.getCacheManager() is
> returning null within JCacheMetrics constructor. This is a blocker.
>
> Any help is appreciated.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Failed to add node to topology because it has the same hash code for partitioned affinity as one of existing nodes

2018-08-07 Thread Вячеслав Коптилин
Hello,

> Hi It is having unique consistentId across cluster.
> Node1 ConsistentId (Server 1): 127.0.0.1:48500..48520 192.168.0.4:48500..48520
192.168.0.5:48500..48520
> Node2 ConsistentId (Server 1): 127.0.0.1:48500..48520 192.168.0.4:48500..48520
192.168.0.5:48500..48520
> Node 3(server2):   127.0.0.1:48500..48520 192.168.0.4:48500..48520
192.168.0.5:48500..48520

Hmm, all these strings are absolutely the same :), and it looks like that
is the root cause of the issue.

Thanks.


Tue, 7 Aug 2018 at 13:41, kvenkatramtreddy :

> Hi It is having unique consistentId across cluster. All nodes running for
> some time, it is happening after some time. Please see the discoverySpi and
> consistentId details below. *Node1 ConsistentId (Server 1):*
> 127.0.0.1:48500..48520 192.168.0.4:48500..48520 192.168.0.5:48500..48520 Node2
> ConsistentId (Server 1): 127.0.0.1:48500..48520 192.168.0.4:48500..48520
> 192.168.0.5:48500..48520 Node 3(server2): 127.0.0.1:48500..48520
> 192.168.0.4:48500..48520 192.168.0.5:48500..48520
> --
> Sent from the Apache Ignite Users mailing list archive
>  at Nabble.com.
>


Re: Failed to add node to topology because it has the same hash code for partitioned affinity as one of existing nodes

2018-08-07 Thread Вячеслав Коптилин
Hello,

It seems that there are nodes that have the same value of consistentId
property.
Please try to set IgniteConfiguration.setConsistentId to a unique value
cluster-wide.
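
For example, a minimal sketch (the ID value below is just a placeholder; use
your own value, unique per node):

    IgniteConfiguration cfg = new IgniteConfiguration();
    // the consistent ID must be unique for every node in the cluster
    cfg.setConsistentId("node-1");
    Ignition.start(cfg);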

Thanks.

Tue, 7 Aug 2018 at 7:27, kvenkatramtreddy :

> Hi Team,
>
> I have configured my all caches replicated mode with native persistence
> enable and running it on 3 nodes. 2 nodes runs on same server and another
> node run on different server.
>
> I have configured unique consistentId for each node and unique IGNITE_HOME.
> I also configured
> <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
>   <property name="..." value="true"/>
> </bean>
>
> Receiving the Failed to add node to topology error after some time. Please
> could you help us to resolve this issue.
>
> igniteSamehashError.txt
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1700/igniteSamehashError.txt>
>
>
> Thanks & Regards,
> Venkat
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite.NET how to cancel tasks

2018-08-07 Thread Вячеслав Коптилин
Hello,

Have you checked Ignite log files? Do they contain anything suspicious?
I just checked TeamCity and it seems that CancellationTest (that I
mentioned above) is OK.

Thanks,
S.


Tue, 7 Aug 2018 at 9:47, Maksym Ieremenko :

> Hi Slava,
>
>
>
> >> > using (var ignite = Ignition.Start())
>
> >> Is it possible that Ignite node was closed before the cancellation
> request was processed by an instance of SimpleJob? Could you please check
> that fact?
>
> No.
>
>
>
> I double checked: the main thread hangs on
>
> cts.Cancel(); // CancellationTokenSource.Cancel()
>
>
>
> so, the next lines of code will never be reached, stack:
>
>
>
>
> Apache.Ignite.Core.dll!Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.CallLongMethod
>
> [Managed to Native Transition]
>
> Apache.Ignite.Core.dll!Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.CallLongMethod(Apache.Ignite.Core.Impl.Unmanaged.Jni.GlobalRef
> obj, System.IntPtr methodId, long* argsPtr)
>
> Apache.Ignite.Core.dll!Apache.Ignite.Core.Impl.Unmanaged.UnmanagedUtils.TargetInLongOutLong(Apache.Ignite.Core.Impl.Unmanaged.Jni.GlobalRef
> target, int opType, long memPtr)
>
> Apache.Ignite.Core.dll!Apache.Ignite.Core.Impl.PlatformJniTarget.InLongOutLong(int
> type, long val)
>
>
> Apache.Ignite.Core.dll!Apache.Ignite.Core.Impl.Common.Future.OnTokenCancel()
>
> mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
> mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
> mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state)
>
> mscorlib.dll!System.Threading.CancellationCallbackInfo.ExecuteCallback()
>
> mscorlib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool
> throwOnFirstException = false)
>
> mscorlib.dll!System.Threading.CancellationTokenSource.NotifyCancellation(bool
> throwOnFirstException)
>
> mscorlib.dll!System.Threading.CancellationTokenSource.Cancel()
>
> CancellationDemo.exe!CancellationDemo.Program.RunAsync() Line 49
>
> [Resuming Async Method]
>
> mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
> mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
>
> mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run()
>
>
> mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.OutputAsyncCausalityEvents.AnonymousMethod__0()
>
>
> mscorlib.dll!System.Runtime.CompilerServices.TaskAwaiter.OutputWaitEtwEvents.AnonymousMethod__0()
>
> mscorlib.dll!System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(System.Action
> action, bool allowInlining, ref System.Threading.Tasks.Task currentTask =
> null)
>
> mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations()
>
> mscorlib.dll!System.Threading.Tasks.Task.TrySetResult(System.Threading.Tasks.VoidTaskResult
> result)
>
> mscorlib.dll!System.Threading.Tasks.Task.DelayPromise.Complete()
>
> mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
> mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
> mscorlib.dll!System.Threading.TimerQueueTimer.CallCallback()
>
> mscorlib.dll!System.Threading.TimerQueueTimer.Fire()
>
> mscorlib.dll!System.Threading.TimerQueue.FireNextTimers()
>
>
>
> Best regards,
>
> Max
>


Re: While enabling JCache, JCacheMetrics is throwing NullPointerException in getCacheManager with spring boot

2018-08-06 Thread Вячеслав Коптилин
Hello,

>  org.apache.ignite.cache.CacheExistsException: Failed to start cache (a
cache with the same name is already started): users
This exception means that the cache is already created in the cluster.
Please consider using Ignite#getOrCreateCache() method instead of
Ignite#createCache() [1].
This method gets an existing cache with the given name or creates
a new one.
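
For example, a minimal sketch (the `ignite` instance, cache name, and value
type are placeholders):

    // returns the existing "users" cache, or creates it if it does not exist yet
    IgniteCache<String, Object> users = ignite.getOrCreateCache("users");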

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/Ignite.html#getOrCreateCache-org.apache.ignite.configuration.CacheConfiguration-

By the way, please take a look at this page:
https://apacheignite-mix.readme.io/docs/spring-caching
this page provides information about Spring Caching.

perhaps, the following GitHub project will be useful as well
https://github.com/Romeh/spring-boot-ignite

Thanks!


Re: Ignite.NET how to cancel tasks

2018-08-06 Thread Вячеслав Коптилин
Hi Max,

> using (var ignite = Ignition.Start())
Is it possible that Ignite node was closed before the cancellation request
was processed by an instance of SimpleJob? Could you please check that fact?
Perhaps you need to modify your code as follows:

var task = ignite.GetCompute().ExecuteAsync(new SimpleTask(), 3, cts.Token);
await Task.Delay(TimeSpan.FromSeconds(1)).ConfigureAwait(false);
cts.Cancel();

// check that cancellation flag is raised.
Console.WriteLine("status: {0}", task.IsCanceled);

> // allow the cluster to handle the cancellation request
Thread.Sleep(TimeSpan.FromSeconds(10));

Thanks,
S.

Sun, 5 Aug 2018 at 19:15, Maksym Ieremenko :

> Hello Slava,
>
> Unfortunately demo does not work for me: IComputeJob.Cancel is never
> called.
> In my case an execution of job may take few minutes and I want to receive
> an notification about cancellation.
>
> Please check the following code, SimpleJob will never finish, because
> IComputeJob.Cancel is never called:
>
> using (var ignite = Ignition.Start())
> using (var cts = new CancellationTokenSource())
> {
> var task = ignite.GetCompute().ExecuteAsync(new
> SimpleTask(), 3, cts.Token);
>
> await
> Task.Delay(TimeSpan.FromSeconds(1)).ConfigureAwait(false);
>
> cts.Cancel();
>
> await task.ConfigureAwait(false);
> }
>
> public class SimpleTask : IComputeTask<int, int, int>
> {
>     public IDictionary<IComputeJob<int>, IClusterNode>
>         Map(IList<IClusterNode> subGrid, int jobsPerGridCount)
>     {
>         return Enumerable
>             .Range(1, jobsPerGridCount)
>             .SelectMany(i => subGrid)
>             .ToDictionary(i => (IComputeJob<int>)new SimpleJob(), i => i);
>     }
>
>     public ComputeJobResultPolicy OnResult(IComputeJobResult<int> res,
>         IList<IComputeJobResult<int>> rcvd)
>     {
>         return ComputeJobResultPolicy.Wait;
>     }
>
>     public int Reduce(IList<IComputeJobResult<int>> results)
>     {
>         return 1;
>     }
> }
>
> public class SimpleJob : IComputeJob<int>
> {
> private bool _isCancelled;
>
> public int Execute()
> {
> Console.WriteLine("execute task");
>
> while (!Volatile.Read(ref _isCancelled))
> {
> Thread.Sleep(TimeSpan.FromSeconds(1));
> }
>
> return 1;
> }
>
> public void Cancel()
> {
> // never happens !!!
> Console.WriteLine("cancel task");
> Volatile.Write(ref _isCancelled, true);
> }
> }
>
> Thanks,
> Max
>
> -Original Message-
> From: slava.koptilin [mailto:slava.kopti...@gmail.com]
> Sent: Donnerstag, 2. August 2018 17:56
> To: user@ignite.apache.org
> Subject: Re: Ignite.NET how to cancel tasks
>
> Hello Maksym,
>
> It seems that you need to call Cancel() method.
> something like as follows:
>
> var cts = new CancellationTokenSource(); var task =
> Compute.ExecuteJavaTaskAsync(ComputeApiTest.BroadcastTask, null,
> cts.Token); cts.Cancel();
>
> Please take a look at this example:
>
> https://github.com/apache/ignite/blob/master/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Compute/CancellationTest.cs
>
> Thanks,
> Slava.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: two data region with two nodes

2018-08-03 Thread Вячеслав Коптилин
Hello,

> There are two apps(more apps with different roles) with different ignite
configs,
> first App :set default region with persistence *enable*
> second app :set default region with persistence *disable  *

By the way, I don't think that it is a good idea to configure the same data
region using different values of the `persistenceEnabled` flag.
I hope that such a check will be added in the near future.
https://issues.apache.org/jira/browse/IGNITE-8951
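
For reference, a minimal sketch of where that flag lives (the region name is
just an example):

    DataRegionConfiguration regionCfg = new DataRegionConfiguration();
    regionCfg.setName("default");
    // this value should be the same on every node that defines the region
    regionCfg.setPersistenceEnabled(true);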

Thanks,
S.

Wed, 1 Aug 2018 at 19:51, wangsan :

> Thank you for your answer,
> Maybe I use region config the wrong way.
> There are two apps(more apps with different roles) with different ignite
> configs,
> first App
> :set default region with persistence enable
> :set cache a,with nodefilter in first apps,default region
> second app
> :set default region with persistence disable
> :just access cache a with query
>
> start first app instance 1,second app instance 2,and first app instance 3.
> then close 1,
> then restart 1, and the deadlock will happen.
>
> FYI, I use IgniteLock in the first app when processing Ignite discovery events
> such
> as the join event. When a new node joins, apps 1 and 3 will receive the join message via a
> local event listener, but I just want one node to process the message, so I
> use it like this:
>
> if (globalLock.tryLock()) {
> LOGGER.info("--  hold global lock ");
> try {
> switch (event.type()) {
> case EventType.EVT_NODE_JOINED:
> joinListener.onMessage(clusterNode);
> break;
> case EventType.EVT_NODE_FAILED:
> case EventType.EVT_NODE_LEFT:
> leftListener.onMessage(clusterNode);
> break;
> default:
> LOGGER.info("ignore discovery event: {}",
> event);
> break;
> }
> } finally {
> LOGGER.debug("--  process event done ");
> // don't unlock until node left
> // globalLock.unlock();
> }
> }
>
> The node which holds the globalLock will never unlock unless it leaves. Is this
> the right way to use the lock?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Remote Session with Ignite & Cassandra persistence

2018-08-02 Thread Вячеслав Коптилин
Hello,

First of all, I would recommend switching to TcpDiscoveryVmIpFinder [1]
instead of TcpDiscoveryMulticastIpFinder,
and disable IPv6 stack via specifying JVM property
-Djava.net.preferIPv4Stack=true.

[1]
https://apacheignite.readme.io/docs/tcpip-discovery#section-static-ip-finder
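
A minimal sketch of that setup (the address below is a placeholder for your
server host):

    IgniteConfiguration cfg = new IgniteConfiguration();
    TcpDiscoverySpi spi = new TcpDiscoverySpi();
    TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
    // list the known server addresses explicitly instead of relying on multicast
    ipFinder.setAddresses(Arrays.asList("111.111.111.111:47500..47509"));
    spi.setIpFinder(ipFinder);
    cfg.setDiscoverySpi(spi);
    Ignition.start(cfg);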

Thanks,
S.

Thu, 2 Aug 2018 at 16:31, okiesong :

> Hi, how can we use Ignite to start multiple ignite sessions on a remote
> server? I tried using TcpDiscoverySpi and TcpCommunicationSpi to resolve
> this problem, but this was not working. I basically used a similar setting
> as below from ignite website. And I have already added jar under ignite/lib
> folder for Cassandra persistence.
>
> TcpDiscoverySpi spi = new TcpDiscoverySpi();
>
> TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
>
> ipFinder.setMulticastGroup("228.10.10.157");
>
> spi.setIpFinder(ipFinder);
>
> IgniteConfiguration cfg = new IgniteConfiguration();
>
> // Override default discovery SPI.
> cfg.setDiscoverySpi(spi);
>
> // Start Ignite node.
> Ignition.start(cfg);
>
>
> This is the error I am getting atm. I am basically ran ignite.sh from
> remote
> server first then, run my application from localhost that points to this
> remote server. but the below error displayed on a remote server.
>
> More details can be found from
>
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cluster-with-remote-server-Cassandra-Persistence-tt22886.html
>
> [11:36:25] Topology snapshot [ver=9, servers=1, clients=0, CPUs=16,
> offheap=13.0GB, heap=1.0GB]
> [11:36:25]   ^-- Node [id=C1BAABB3-413C-4867-9F8A-CC21CA818E80,
> clusterState=ACTIVE]
> [11:36:25] Data Regions Configured:
> [11:36:25]   ^-- default [initSize=256.0 MiB, maxSize=12.6 GiB,
> persistenceEnabled=false]
> [11:36:25,574][SEVERE][exchange-worker-#62][GridCacheProcessor] Failed to
> register MBean for cache group: null
> javax.management.InstanceAlreadyExistsException:
> org.apache:clsLdr=764c12b6,group="Cache groups",name="ignite-cass-delta"
> at
> com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>
> at
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>
> at
> org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4573)
>
> at
> org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4544)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheGroup(GridCacheProcessor.java:2054)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1938)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.startReceivedCaches(GridCacheProcessor.java:1864)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:667)
>
> at
>
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
>
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:748)
> [11:36:25,580][SEVERE][exchange-worker-#62][GridDhtPartitionsExchangeFuture]
>
> Failed to reinitialize local partitions (preloading will be stopped):
> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=8,
> minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=c8f39e6b-9e13-47c7-8f6c-570b38afa962, addrs=[0:0:0:0:0:0:0:1,
> 10.252.198.106, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1:47500,
> /127.0.0.1:47500, /10.252.198.106:47500], discPort=47500, order=8,
> intOrder=5, lastExchangeTime=1532619375511, loc=false,
> ver=2.5.0#20180523-sha1:86e110c7, isClient=false], topVer=8,
> nodeId8=c1baabb3, msg=Node joined: TcpDiscoveryNode
> [id=c8f39e6b-9e13-47c7-8f6c-570b38afa962, addrs=[0:0:0:0:0:0:0:1,
> 10.252.198.106, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1:47500,
> /127.0.0.1:47500, /10.252.198.106:47500], discPort=47500, order=8,
> intOrder=5, lastExchangeTime=1532619375511, loc=false,
> ver=2.5.0#20180523-sha1:86e110c7, isClient=false], type=NODE_JOINED,
> tstamp=1532619385566], nodeId=c8f39e6b, evt=NODE_JOINED]
> class org.apache.ignite.Ig

Re: Ignite Cluster with remote server + Cassandra Persistence

2018-08-02 Thread Вячеслав Коптилин
Hello,

The first problem is related to a bug in the implementation of `PojoField`
class.
This is a known issue https://issues.apache.org/jira/browse/IGNITE-8788
Unfortunately, that bug is not resolved yet.
In any way, the workaround that I provided above should do a trick.

The second issue is `java.lang.IllegalStateException: Affinity for topology
version is not initialized`
and it seems that this exception does not relate to the original issue with
CassandraCacheStoreFactory.
I would suggest the following:
 - disable IPv6 on all nodes via JVM option -Djava.net.preferIPv4Stack=true
for example
 - enable debug logging via JVM option -DIGNITE_QUIET=false or "-v" to
ignite.{sh|bat}
 - try to reproduce the issue, collect all log files from all nodes and
attach these files to a new message
   I think it will be the best approach to get help from the community.

Thanks,
S.

Thu, 2 Aug 2018 at 16:49, okiesong :

> I also found this while trying to resolve this problem which is still
> unresolved.
>
> https://issues.apache.org/jira/browse/IGNITE-5998
>
> Will this be impacted?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite backups on collocated data

2018-07-27 Thread Вячеслав Коптилин
Hello,

Yes, backup copies will also be collocated.

Thanks,
S.

Fri, 27 Jul 2018 at 15:11, kotamrajuyashasvi :

> Hi
>
> Suppose I have two ignite caches and these two caches are collocated based
> on a field using AffinityKemapped. Now if I configure same number of
> backups
> for both the caches, do the backup copies also get collocated i.e backed up
> on same nodes ?
>
> For ex: one row of cache1 and one row from cache2 are collocated i.e will
> be
> in same node/partition. Now will the backup copies of these rows also
> guaranteed to be present on same nodes/partitions ? Or can be on different
> nodes/partitions.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Cluster with remote server + Cassandra Persistence

2018-07-26 Thread Вячеслав Коптилин
Hello,

> Just wondering, can I know why we are creating a custom
CassandraCacheStoreFactory?
It seems that the problem is PojoField#accessor field which is declared as
`transient` and so this field is not serialized/deserialized. Therefore, it
can lead to NullPointerException. The `PojoField` class is a part of
`KeyPersistenceSettings`.
It looks like a bug. I will try to debug that issue deeper and file a
ticket in order to track that.
So, the suggested workaround is trying to avoid that issue.

> java.lang.IllegalStateException: Affinity for topology version is not
initialized
Well, that exception does not relate to the original issue.
Please make sure that there are no other running nodes in the cluster.
Let's try from the scratch - stop all nodes, clean up working directories
and update your classes on all participating nodes before testing the
workaround.

Thanks,
S.


Thu, 26 Jul 2018 at 18:49, okiesong :

> Hi, first of all, thanks once again, I have tried your approach (after
> uploading new jar to lib folder as a new class was introduced), but I am
> getting a below error. FYI, "ignite-cass-delta" is the name of the default
> cache name I am using. Thanks again!
>
> [11:36:25] Topology snapshot [ver=9, servers=1, clients=0, CPUs=16,
> offheap=13.0GB, heap=1.0GB]
> [11:36:25]   ^-- Node [id=C1BAABB3-413C-4867-9F8A-CC21CA818E80,
> clusterState=ACTIVE]
> [11:36:25] Data Regions Configured:
> [11:36:25]   ^-- default [initSize=256.0 MiB, maxSize=12.6 GiB,
> persistenceEnabled=false]
> [11:36:25,574][SEVERE][exchange-worker-#62][GridCacheProcessor] Failed to
> register MBean for cache group: null
> javax.management.InstanceAlreadyExistsException:
> org.apache:clsLdr=764c12b6,group="Cache groups",name="ignite-cass-delta"
> at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
> at
>
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
> at
>
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
> at
>
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
> at
>
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
> at
>
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at
>
> org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4573)
> at
>
> org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4544)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheGroup(GridCacheProcessor.java:2054)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1938)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.startReceivedCaches(GridCacheProcessor.java:1864)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:667)
> at
>
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
> at
>
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:748)
>
> [11:36:25,580][SEVERE][exchange-worker-#62][GridDhtPartitionsExchangeFuture]
> Failed to reinitialize local partitions (preloading will be stopped):
> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=8,
> minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=c8f39e6b-9e13-47c7-8f6c-570b38afa962, addrs=[0:0:0:0:0:0:0:1,
> 10.252.198.106, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1:47500,
> /127.0.0.1:47500, /10.252.198.106:47500], discPort=47500, order=8,
> intOrder=5, lastExchangeTime=1532619375511, loc=false,
> ver=2.5.0#20180523-sha1:86e110c7, isClient=false], topVer=8,
> nodeId8=c1baabb3, msg=Node joined: TcpDiscoveryNode
> [id=c8f39e6b-9e13-47c7-8f6c-570b38afa962, addrs=[0:0:0:0:0:0:0:1,
> 10.252.198.106, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1:47500,
> /127.0.0.1:47500, /10.252.198.106:47500], discPort=47500, order=8,
> intOrder=5, lastExchangeTime=1532619375511, loc=false,
> ver=2.5.0#20180523-sha1:86e110c7, isClient=false], type=NODE_JOINED,
> tstamp=1532619385566], nodeId=c8f39e6b, evt=NODE_JOINED]
> class org.apache.ignite.IgniteCheckedException: Failed to register MBean
> for
> component:
>
> org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl@7a80ba57
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.registerMbean(GridCacheProc

Re: Ignite Cluster with remote server + Cassandra Persistence

2018-07-26 Thread Вячеслав Коптилин
Hello,

It seems I found the root cause of the issue. Could you please try the
following workaround?

1. create your own CacheStoreFactory

public class CustomCassandraCacheStoreFactory<K, V> extends CassandraCacheStoreFactory<K, V> {
    private final String persistenceSettingsXml;

    public CustomCassandraCacheStoreFactory(String persistenceSettingsXml) {
        this.persistenceSettingsXml = persistenceSettingsXml;
    }

    @Override public CassandraCacheStore<K, V> create() {
        setPersistenceSettings(new KeyValuePersistenceSettings(persistenceSettingsXml));
        return super.create();
    }
}


2. configure Ignite cache as follows

CacheConfiguration<String, POJOExample> configuration = new CacheConfiguration<>();

configuration.setName("test-cache");
configuration.setIndexedTypes(String.class, POJOExample.class);

DataSource dataSource = new DataSource();
dataSource.setContactPoints("111.111.111.111"); // your remote server addresses go here
RoundRobinPolicy robinPolicy = new RoundRobinPolicy();
dataSource.setLoadBalancingPolicy(robinPolicy);
dataSource.setReadConsistency("ONE");
dataSource.setWriteConsistency("ONE");

String persistenceSettingsXml = FileUtils.readFileToString(
    new File(persistenceSettingsConfig), "utf-8");
CassandraCacheStoreFactory<String, POJOExample> cacheStoreFactory =
    new CustomCassandraCacheStoreFactory<>(persistenceSettingsXml);
cacheStoreFactory.setDataSource(dataSource);

configuration.setCacheStoreFactory(cacheStoreFactory);
configuration.setWriteThrough(true);
configuration.setReadThrough(true);
configuration.setWriteBehindEnabled(true);


Thanks!

Wed, 25 Jul 2018 at 21:19, okiesong :

> Hi, so I have provided my code structures, just in case my POJOExample.java
> looks as below; this is used for key-value setting for persistence setting
> for Cassandra. Thanks once again!
>
> POJOExample.java
> public class POJOExample implements Serializable{
>
>
> @QuerySqlField(index=true, descending = true,
> orderedGroups={@QuerySqlField.Group(name = "sample_idx", order = 0)})
> private String id;
>
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(name = "sample_idx",
> order = 1)})
> private String name;
>
>
> public ClientMatchRecord(String name, String id) {
> this.bu1_id = bu1_id;
> this.bu1_name = bu1_name;
>
> }
>
> public String getid() {
> return id;
> }
>
> public void setid(String id) {
> this.id = id;
> }
>
> public String getname() {
> return name;
> }
>
> public void setname(String name) {
> this.name = name;
> }
>
>
> }
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Cluster with remote server + Cassandra Persistence

2018-07-24 Thread Вячеслав Коптилин
Hello,

It does not seem to me that the issue is related to the thread you
mentioned.
I bet the root cause of the problem is a misconfiguration. At least, the
exception message tells me that CassandraCacheStore class cannot be
initialized for some reason:
java.lang.NullPointerException
at
org.apache.ignite.cache.store.cassandra.persistence.PojoField.calculatedField(PojoField.java:155)
at
org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.prepareLoadStatements(PersistenceController.java:313)
at
org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.<init>(PersistenceController.java:85)
at
org.apache.ignite.cache.store.cassandra.CassandraCacheStore.<init>(CassandraCacheStore.java:106)
at
org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory.create(CassandraCacheStoreFactory.java:59)
at
org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory.create(CassandraCacheStoreFactory.java:34)

Could you please share ignite configuration and persistence settings you
are using?

Thanks,
S.

Tue, 24 Jul 2018 at 23:28, okiesong :

> How can I use Ignite cluster with remote servers?
>
> Currently, I am getting a below error. FYI, I have already configured my
> TcpDiscoveryVmIpFinder and TcpCommunicationSpi correctly, but for some
> reason, it is pointing at 10.209.236.58 which is from localhost, and thus
> the server is not able to connect with those ip address. FYI I have already
> included jar file under ignite's lib folder which also contains Pojo class
> for Cassandra Persistency Settings.
>
> I have already read
>
> http://apache-ignite-users.70518.x6.nabble.com/Remote-client-tt19574.html#a19591
>
> which is the exact problem I was having, but the thread was never answered
> properly.
>
>
> [14:43:58,405][SEVERE][exchange-worker-#62][GridDhtPartitionsExchangeFuture]
> Failed to reinitialize local partitions (preloading will be stopped):
> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2,
> minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=b2b939cb-35c1-4aab-8085-309253f6e196, addrs=[0:0:0:0:0:0:0:1,
> 10.209.236.58, 127.0.0.1], sockAddrs=[/10.209.236.58:47500,
> /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=2,
> intOrder=2, lastExchangeTime=1532457828204, loc=false,
> ver=2.5.0#20180523-sha1:86e110c7, isClient=false], topVer=2,
> nodeId8=b0f4894f, msg=Node joined: TcpDiscoveryNode
> [id=b2b939cb-35c1-4aab-8085-309253f6e196, addrs=[0:0:0:0:0:0:0:1,
> 10.209.236.58, 127.0.0.1], sockAddrs=[/10.209.236.58:47500,
> /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=2,
> intOrder=2, lastExchangeTime=1532457828204, loc=false,
> ver=2.5.0#20180523-sha1:86e110c7, isClient=false], type=NODE_JOINED,
> tstamp=1532457838379], nodeId=b2b939cb, evt=NODE_JOINED]
> java.lang.NullPointerException
> at
> org.apache.ignite.cache.store.cassandra.persistence.PojoField.calculatedField(PojoField.java:155)
>
> at
> org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.prepareLoadStatements(PersistenceController.java:313)
>
> at
> org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.<init>(PersistenceController.java:85)
>
> at
> org.apache.ignite.cache.store.cassandra.CassandraCacheStore.<init>(CassandraCacheStore.java:106)
>
> at
> org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory.create(CassandraCacheStoreFactory.java:59)
>
> at
> org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory.create(CassandraCacheStoreFactory.java:34)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1437)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1945)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.startReceivedCaches(GridCacheProcessor.java:1864)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:667)
>
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
>
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
>
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:748)
>
> [14:43:58,411][SEVERE][exchange-worker-#62][GridCachePartitionExchangeManager]
> Failed to wait for completion of partition map exchange (preloading will
> not
> start): GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent
> [evtNode=TcpDiscoveryNode [id=b2b939cb-35c1-4aab-8085-309253f6e196,
> addrs=[0:0:0:0:0:0:0:1, 10.209.236

Re: LRU policy, on heap - off heap size ?

2018-07-19 Thread Вячеслав Коптилин
Hello,

Yep, it is expected behavior. You are using an on-heap eviction policy, and
this policy removes cache entries from the Java heap only. The entries
stored in the off-heap region of memory are not affected.
You can find a comprehensive description here:
https://apacheignite.readme.io/docs/evictions#section-java-heap-cache
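
In other words, a complete sketch of a configuration like yours (Integer keys
are assumed here; Person is taken from your snippet):

    CacheConfiguration<Integer, Person> ccfg = new CacheConfiguration<>("persons");
    // enable the on-heap layer on top of the off-heap storage
    ccfg.setOnheapCacheEnabled(true);
    // keep at most 2 entries on the Java heap; all 3 entries still live off-heap
    ccfg.setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(2));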

Thanks.


Thu, 19 Jul 2018 at 11:12, monstereo :

> Here is the my lru configuration
>
> ...
> ccfg.setOnheapCacheEnabled(true);
> ccfg.setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(2));
> ...
>
> When I want to load 3 entries into the cache, the Ignite console shows me
> <http://apache-ignite-users.70518.x6.nabble.com/file/t1901/ignite-forum-1.png>
>
>
> 2 on heap and 3 off heap
>
> Why is it 2 on heap, 1 off heap? Is it an Ignite feature?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Running Server Node in UNIX

2018-07-17 Thread Вячеслав Коптилин
Hello,

please try the following:
java -cp
/opt/lib/*:/path-to-your-working-directory-where-class-files-are-stored/
com.cache.init.ServerNodeCodeStartup

I think that these links will be very helpful for you:
https://stackoverflow.com/questions/219585/including-all-the-jars-in-a-directory-within-the-java-classpath
https://docs.oracle.com/javase/10/tools/java.htm#GUID-3B1CE181-CD30-4178-9602-230B800D4FAE__BABDJJFI

Thanks.


Tue, 17 Jul 2018 at 18:35, Skollur :

> Running the below command in UNIX and getting an error. It runs fine in Windows
> in Eclipse. Any help on this?
>
> 1. java -cp /opt/lib/*.jar:/opt/com/test/cache/*.*:/opt/com/test/config/*.* ServerNodeCodeStartup
>    Error: Could not find or load main class ServerNodeCodeStartup
>
> 2. java -cp /opt/lib/*.jar:/opt/com/test/cache/*.*:/opt/com/test/config/*.* com.test.cache.ServerNodeCodeStartup
>    Error: Could not find or load main class ServerNodeCodeStartup
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Tracing all SQL Queries

2018-07-16 Thread Вячеслав Коптилин
Hi,

Yes, it can be specified in your Spring configuration file as follows:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="longQueryWarningTimeout" value="500"/>
    ...
</bean>
Thanks,
Slava.


Mon, 16 Jul 2018 at 19:39, ApacheUser :

> Hi Slava,
> Sorry to get into this thread, I have a similar problem controlling
> long-running
> SQLs. I want to time out SQLs running more than 500ms.
> Is there any way to set setLongQueryWarningTimeout() in a CONFIG file?
>
> Appreciate your response.
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite Bianry Cache store Sql Read Through Feature

2018-07-12 Thread Вячеслав Коптилин
Hi,

I could not find anything related to that in Jira. So, it seems there are
no plans to implement this feature.

Thanks,
S.

Thu, 12 Jul 2018 at 20:50, debashissinha :

> Hi Salva,
>
> Many thanks for your advice. So it seems there is no real-time way to do
> that. However, I was also wondering whether the data streamer can be of any
> help.
> Just wanted to know if any kind of feature is planned for a future release.
>
> Thanks in advance
> Debashis
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Tracing all SQL Queries

2018-07-12 Thread Вячеслав Коптилин
Hello Dave,

I am afraid that there is no such possibility out of the box.
The simple workaround that I can imagine is using the
IgniteConfiguration#setLongQueryWarningTimeout() method and setting it to 1 ms,
for example.
In that case, every SQL request that takes more than 1 ms will be printed
in the log as follows:
[2018-01-04 06:32:38,230][WARN ] [IgniteH2Indexing] Query execution is too
long [time=3518 ms, sql='the execution plan of your SQL query will be
printed here']
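
In code, that looks like this (a minimal sketch):

    IgniteConfiguration cfg = new IgniteConfiguration();
    // warn about every query that takes longer than 1 ms
    cfg.setLongQueryWarningTimeout(1);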

I think you can modify the code of IgniteH2Indexing#executeSqlQueryWithTimer
for debugging purposes, and always print out the execution plan.

Thanks,
S.


Thu, 12 Jul 2018 at 16:07, Dave Harvey :

> Is there a simple way inside Ignite to get a log of all SQL Queries against
> the cluster, either in the debug logs or elsewhere? This is not an easy
> question to phrase in a way that Google will find a useful answer.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Need to Use Both Native and Cache store Persistence

2018-07-12 Thread Вячеслав Коптилин
Hello,

I think the NodeFilter was designed exactly for that case. Please take a
look at this method: CacheConfiguration#setNodeFilter() [1]

So, it can be used as follows:
 - for example, you can add a node attribute - "your-custom-attribute" - that
can be used to determine whether a cache should reside on the given node or
not.
 - create a cache with corresponding node filter as follows:

CacheConfiguration persConfig = new CacheConfiguration("persistent-cache")
    .setDataRegionName("persistent-region")
    .setNodeFilter(new IgnitePredicate<ClusterNode>() {
        @Override public boolean apply(ClusterNode node) {
            // check the node's attribute and return {@code true} if your
            // cache should reside on the given node.
            Boolean persistentNodeAttr = node.attribute("your-custom-attribute");
            return persistentNodeAttr != null && Boolean.TRUE.equals(persistentNodeAttr);
        }
    });

IgniteCache cache = ignite.getOrCreateCache(persConfig);

CacheConfiguration nonPersConfig = new CacheConfiguration("non-persistent-cache")
    .setDataRegionName("non-persistent-region")
    .setNodeFilter(new IgnitePredicate<ClusterNode>() {
        @Override public boolean apply(ClusterNode node) {
            // check the node's attribute and return {@code true} if your
            // cache should reside on the given node.
            Boolean persistentNodeAttr = node.attribute("your-custom-attribute");
            return persistentNodeAttr != null && Boolean.FALSE.equals(persistentNodeAttr);
        }
    })
    .setCacheStoreFactory(...);

I hope that the main idea is clear.

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setNodeFilter-org.apache.ignite.lang.IgnitePredicate-
[2]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setUserAttributes-java.util.Map-
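
The attribute itself can be assigned at node startup, e.g. (a sketch; the
attribute name is arbitrary):

    IgniteConfiguration cfg = new IgniteConfiguration();
    // mark this node as one that should host the persistent cache
    cfg.setUserAttributes(Collections.singletonMap("your-custom-attribute", Boolean.TRUE));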

Thanks,
S.

Thu, 12 Jul 2018 at 14:17, siva :

> Hi,
>
> we need to use both Native and Cache store persistence.
>
>
> Is it possible to divide nodes so that only particular nodes store
> cache store data and other nodes need to store native persistence data.
>
> According to Baseline Topology
>
> Start the node normally(native persistnce disabled for this node). At this
> point:
>
> The cluster remains active.
> The new node joins the cluster, but it is not added to the baseline
> topology.
> The new node can be used to store data of caches/tables who do not use
> Ignite persistence.
> The new node cannot hold data of caches/tables who persist data in Ignite
> persistence.
> The new node can be used from computation standpoint (Ignite compute grid).
>
> I have followed the above, but the cache store data is not being stored on the newly
> joined node. It is being added to the baseline topology and giving a warning:
>
> Both Ignite native persistence and CacheStore are configured for cache
> 'cacheStoresivacache'. This configuration does not guarantee strict
> consistency between CacheStore and Ignite data storage upon restarts.
> Consult documentation for more details.
>
>
>
>
> Thanks
> siva
>
>
>
>
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite with Spring Boot Data JPA

2018-07-10 Thread Вячеслав Коптилин
Hello,

I don't think I clearly understand the issue you faced.
Please provide more details about the problem so that the community can
help you.
Could you create a small reproducer project and share it on GitHub?
By the way, the log file from the server node will be helpful as well.

Thanks,
S.

Tue, 10 Jul 2018 at 22:40, bitanxen :

> Hi, I have a running Ignite cluster. My Spring Boot Maven
> project
> depends on spring-data-jpa, and the JDBC datasource is
> configured with Ignite. But it's giving me an error that the Hibernate
> entity manager bean is not initialised because it didn't find
> hibernate.dialect.
>
> Is there any working configuration for Ignite JDBC with Spring Data
> JPA?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: apache ignite atomicLong.incrementAndGet() is causing starvation in striped pool

2018-07-09 Thread Вячеслав Коптилин
Hello Vadym,

The root cause of the behavior is that you are trying to use
`IgniteAtomicLong` instance within `ContinuousQuery` local listener.
In general, you should avoid using cache operations in that listener
(`IgniteAtomicLong` uses IgniteCache under the hood),
because this callback is invoked synchronously from a sensitive part of the
implementation and therefore may lead to starvation and/or deadlocks.

In order to resolve this issue, you can use the @IgniteAsyncCallback
annotation, which allows executing callback methods in the async-callback
thread pool.
Please try the following approach:

@IgniteAsyncCallback
public class MyLocalListener implements
    CacheEntryUpdatedListener<Integer, String> {
    @Override public void onUpdated(
        Iterable<CacheEntryEvent<? extends Integer, ? extends String>>
        events) throws CacheEntryListenerException {

        ...
    }
}

continuousQuery.setLocalListener(new MyLocalListener());

Thanks,
Slava.


Mon, 9 Jul 2018 at 22:19, Vadym Vasiuk :

> Hi All,
>
> I have Ignite cluster which consists of two nodes. Each node after start
> creates a continuous query which calls atomicLong.incrementAndGet() in
> "onUpdated" method:
>
> continuousQuery.setLocalListener(new
> CacheEntryUpdatedListener<Integer, String>() {
>     @Override public void onUpdated(
>         Iterable<CacheEntryEvent<? extends Integer, ? extends String>> evts) {
>         for (CacheEntryEvent<? extends Integer, ? extends String> e : evts) {
>             System.out.println("Incremented value: " +
>                 atomicLong.incrementAndGet());
>         }
>     }
> });
>
> This continuous query is listening on all events on "test" cache.
> I start both nodes and then start a client application which inserts 5
> entries into "test" cache. For the first two entries (which client inserts)
> I see below outputs only on one node:
> Incremented value: 1
> Incremented value: 2
>
> And after that I get below warn messages in logs on the node where
> "Incremented value" was printed (on the second node I see no messages) :
>
> 2018-07-09 21:56:57.993  WARN 1876 --- [eout-worker-#23]
> o.apache.ignite.internal.util.typedef.G  : >>> Possible starvation in
> striped pool.
> Thread name: sys-stripe-0-#1
> Queue: []
> Deadlock: false
> Completed: 1
> Thread [name="sys-stripe-0-#1", id=14, state=WAITING, blockCnt=0,
> waitCnt=5]
> at sun.misc.Unsafe.park(Native Method)
> at
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at
> o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
> at
> o.a.i.i.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
> at
> o.a.i.i.processors.cache.GridCacheAdapter$25.op(GridCacheAdapter.java:2492)
> at
> o.a.i.i.processors.cache.GridCacheAdapter$25.op(GridCacheAdapter.java:2478)
> at
> o.a.i.i.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4088)
> at
> o.a.i.i.processors.cache.GridCacheAdapter.invoke0(GridCacheAdapter.java:2478)
> at
> o.a.i.i.processors.cache.GridCacheAdapter.invoke(GridCacheAdapter.java:2456)
> at
> o.a.i.i.processors.cache.GridCacheProxyImpl.invoke(GridCacheProxyImpl.java:588)
> at
> o.a.i.i.processors.datastructures.GridCacheAtomicLongImpl.incrementAndGet(GridCacheAtomicLongImpl.java:98)
> at com.ignite.config.ServerConfig$2.onUpdated(ServerConfig.java:80)
> at
> o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryHandler.onEntryUpdate(CacheContinuousQueryHandler.java:835)
> at
> o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryHandler.access$700(CacheContinuousQueryHandler.java:82)
> at
> o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryHandler$1$1.apply(CacheContinuousQueryHandler.java:420)
> at
> o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryHandler$1$1.apply(CacheContinuousQueryHandler.java:415)
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.onDone(GridDhtAtomicAbstractUpdateFuture.java:556)
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.onDone(GridDhtAtomicAbstractUpdateFuture.java:61)
> at
> o.a.i.i.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:440)
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.registerResponse(GridDhtAtomicAbstractUpdateFuture.java:367)
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.onDhtResponse(GridDhtAtomicAbstractUpdateFuture.java:522)
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processDhtAtomicUpdateResponse(GridDhtAtomicCache.java:3472)
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$700(GridDhtAtomicCache.java:130)
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$8.apply(GridDhtAtomicCache.java:323)
> at
> o.a.i.i.processors.cache.distributed.dht

Re: Can we start an ignite node by passing ignite configuations in json file in case on Dot net Core

2018-07-09 Thread Вячеслав Коптилин
Hello,

You can use Spring XML in order to configure Ignite instance.
Spring config file can be provided via Ignition.Start(string) method and
IgniteConfiguration.SpringConfigUrl property.
Please take a look at this page:
https://apacheignite-net.readme.io/docs/configuration#section-spring-xml

Thanks!

Mon, 9 Jul 2018 at 11:15, Mahesh Talreja :

> Hi Team,
>  I am developing a project on .NET Core. Since .NET Core no longer
> supports app.config files, is there a way we can start an Ignite
> node by passing Ignite configuration in a JSON file? If yes, can you please
> share an example?
>
>
>
> --
> *Thanking you,*
> *Regards,*
> *Mahesh Talreja*
> *Mob: +91 9769564242*
>


Re: Apache Ignite with Caching : How do I sure that cache(s) has all datas?

2018-07-08 Thread Вячеслав Коптилин
Hi,

CacheWriteSynchronizationMode defines the behavior of cache during
modification operations. I mean IgniteCache#put(), #remove() etc.
For example, in case of 'PRIMARY_SYNC', a thread that executes modification
operation will wait for completion of operation on primary node only,
backups will be updated asynchronously. If 'FULL_SYNC' is used than a
thread will wait for responses from all participated nodes (from primary
node and all backups).

hope this helps.

Best regards,
Slava.

Sun, 8 Jul 2018 at 17:24, monstereo :

> then may ask you what is difference between
>
> https://apacheignite.readme.io/docs/primary-and-backup-copies#section-synchronous-and-asynchronous-backups
> this and rebalance mode
>
> thanks,
>
>
> slava.koptilin wrote
> > Hello,
> >
> > I think rebalancing makes sense for all types of caches. It does not
> > matter
> > what type of cache you use.
> > Long story short, a replicated cache is a partitioned cache with the
> > number
> > of backups equals to the number of nodes minus 1.
> >
> > Let's assume that you ingested all data in the cluster, and after that,
> > added a new node.
> > In that case, data will be transferred/copied to a new node during
> > rebalancing.
> >
> > Perhaps, EVT_CACHE_REBALANCE_STARTED and EVT_CACHE_REBALANCE_STOPPED will
> > be useful to track rebalancing.
> >
> > Thanks,
> > Slava.
> >
> > вс, 8 июл. 2018 г. в 15:47, Amir Akhmedov <
>
> > amir.akhmedov@
>
> > >:
> >
> >> Rebalancing is a process when a node joins or leaves (in case backup is
> >> turned on) a cluster, data will be rebalanced within a cluster to make a
> >> fair distribution. It's applicable only for partitioned caches. But you
> >> have replicated cache and it's out of your case.
> >>
> >> Thanks,
> >> Amir
> >>
> >> On Jul 8, 2018 4:02 AM, "monstereo" <
>
> > mehmetozanguven@
>
> > > wrote:
> >>
> >> thank you for your comment,
> >> I also found that there is a data rebalance in ignite.
> >> What do you think about this? Which one should I use?
> >>
> >> Here is the data rebalance link  here
> >> ;
> >>
> >> For SYNCH mode in rebalance says that :::
> >>
> >> "Synchronous rebalancing mode. Distributed caches will not start until
> >> all
> >> necessary data is loaded from other available grid nodes. This means
> that
> >> any call to cache public API will be blocked until rebalancing is
> >> finished."
> >>
> >>
> >>
> >>
> >>
> >>
> >> --
> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >>
> >>
> >>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Not getting the column access in Ignite Cache created and loaded from Oracle

2018-07-08 Thread Вячеслав Коптилин
Hello,

What method do you use to create SQL tables in Ignite?
 - DDL command - CREATE TABLE
https://apacheignite-sql.readme.io/docs/create-table
 - Java API. If that is your case, then I guess the queryable fields are not
properly specified.
https://apacheignite-sql.readme.io/docs/schema-and-indexes
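
For the Java API case, the fields of the value class must be annotated and the
type registered on the cache configuration, e.g. (a sketch; the User class and
its fields are placeholders for your own model):

    public class User implements Serializable {
        @QuerySqlField(index = true)
        private String userId;

        @QuerySqlField
        private String userName;
    }

    // register the queryable type so its fields become SQL columns
    cacheCfg.setIndexedTypes(String.class, User.class);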

Thanks!

Sun, 8 Jul 2018 at 16:10, bitanxen :

> Hi,
>
> I am doing a POC to ingest data from Oracle into an Ignite cluster and fetch the
> data from Ignite in another application. When I created the model and
> cache,
> I specified the key as String and the value as a custom object. Data loaded into the
> cluster, but when I query "SELECT * FROM TB_USER" I am getting only two
> columns, i.e. _KEY and _VAL. I am trying to get all the columns from
> TB_USER. What configuration is required for this?
>
> 
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite with Caching : How do I sure that cache(s) has all datas?

2018-07-08 Thread Вячеслав Коптилин
Hello,

I think rebalancing makes sense for all types of caches. It does not matter
what type of cache you use.
Long story short, a replicated cache is a partitioned cache with the number
of backups equal to the number of nodes minus 1.

Let's assume that you ingested all data in the cluster, and after that,
added a new node.
In that case, data will be transferred/copied to a new node during
rebalancing.

Perhaps, EVT_CACHE_REBALANCE_STARTED and EVT_CACHE_REBALANCE_STOPPED will
be useful to track rebalancing.
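
For example, a minimal sketch (note that these event types must first be
enabled via IgniteConfiguration#setIncludeEventTypes):

    ignite.events().localListen(evt -> {
        System.out.println("Rebalance event: " + evt.name());
        return true; // keep listening
    }, EventType.EVT_CACHE_REBALANCE_STARTED, EventType.EVT_CACHE_REBALANCE_STOPPED);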

Thanks,
Slava.

Sun, 8 Jul 2018 at 15:47, Amir Akhmedov :

> Rebalancing is a process when a node joins or leaves (in case backup is
> turned on) a cluster, data will be rebalanced within a cluster to make a
> fair distribution. It's applicable only for partitioned caches. But you
> have replicated cache and it's out of your case.
>
> Thanks,
> Amir
>
> On Jul 8, 2018 4:02 AM, "monstereo"  wrote:
>
> thank you for your comment,
> I also found that there is a data rebalance in ignite.
> What do you think about this? Which one should I use?
>
> Here is the data rebalance link  here
> 
>
> For SYNCH mode in rebalance says that :::
>
> "Synchronous rebalancing mode. Distributed caches will not start until all
> necessary data is loaded from other available grid nodes. This means that
> any call to cache public API will be blocked until rebalancing is
> finished."
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


Re: Cache configuration in case of Client mode

2018-07-05 Thread Вячеслав Коптилин
Oh, I see now that we are talking about Factory :)
Yes, it looks like a usability issue or even a bug. The implementation
should inject dependencies into that factory in the same way it does for
CacheStoreFactory.
As a workaround, you can try the following:

private CacheConfiguration ipContainerIPV4CacheCfg() {
    CacheConfiguration ipContainerIpV4CacheCfg =
        new CacheConfiguration(CacheName.IP_CONTAINER_IPV4_CACHE.name());
    Factory<IpContainerIpV4CacheLoader> storeFactory =
        FactoryBuilder.factoryOf(IpContainerIpV4CacheLoader.class);
    ipContainerIpV4CacheCfg.setCacheStoreFactory(storeFactory);
    ipContainerIpV4CacheCfg.setCacheStoreSessionListenerFactories(
        new Factory<CacheStoreSessionListener>() {
            @Override public CacheStoreSessionListener create() {
                return new TestCacheStoreSessionListener();
            }
        });

    ...
}

public class TestCacheStoreSessionListener extends CacheJdbcStoreSessionListener {
    @SpringApplicationContextResource
    public void setupDataSourceFromSpringContext(Object appCtx) {
        ApplicationContext appContext = (ApplicationContext) appCtx;
        setDataSource((DataSource) appContext.getBean("dataSource"));
    }
}

Best regards,
Slava.

Thu, 5 Jul 2018 at 18:44, Prasad Bhalerao :

>
> I tried to debug the ignite code, GridResourceProcessor.inject is not
>> being executed for injecting resources into CacheStoreSessionListener. It
>> is being called for Cachestores.
>>
>> Can you please advise?
>>
>> Thanks,
>> Prasad
>>
>> On Thu, Jul 5, 2018 at 8:14 PM Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com> wrote:
>>
>>> I had used SpringApplicationContextResource  annotation. But it did not
>>> inject the application context in it. So I decided to use spring managed
>>> bean and it worked but now it is creating the problem which is described in
>>> this mail chain.
>>>
>>> On Thu, Jul 5, 2018 at 8:11 PM Prasad Bhalerao <
>>> prasadbhalerao1...@gmail.com> wrote:
>>>
>>>> I had used SpringApplicationContextResource  annotation. But it did not
>>>> inject the application context in it. So I decided to use spring managed
>>>> bean and it worked.
>>>>
>>>> Thanks,
>>>> Prasad
>>>>
>>>> On Thu, Jul 5, 2018 at 7:49 PM Вячеслав Коптилин <
>>>> slava.kopti...@gmail.com> wrote:
>>>>
>>>>> It seems that you need to use @SpringApplicationContextResource
>>>>> instead of @Autowired.
>>>>> Could you please check that?
>>>>>
>>>>> Thanks.
>>>>>
>>>>> чт, 5 июл. 2018 г. в 17:01, Prasad Bhalerao <
>>>>> prasadbhalerao1...@gmail.com>:
>>>>>
>>>>>>
>>>>>>
>>>>>> import java.io.Serializable;
>>>>>> import javax.cache.configuration.Factory;
>>>>>> import javax.sql.DataSource;
>>>>>> import org.apache.ignite.IgniteException;
>>>>>> import org.apache.ignite.cache.store.CacheStoreSessionListener;
>>>>>> import org.apache.ignite.cache.store.jdbc.CacheJdbcStoreSessionListener;
>>>>>> import org.springframework.beans.factory.annotation.Autowired;
>>>>>> import org.springframework.context.ApplicationContext;
>>>>>>
>>>>>> public class CacheStoreSessionListenerFactory implements
>>>>>> Factory<CacheStoreSessionListener>, Serializable {
>>>>>>
>>>>>>   private static final long serialVersionUID = 6142932447545510244L;
>>>>>>
>>>>>>   private String className;
>>>>>>
>>>>>>   @Autowired
>>>>>>   private transient ApplicationContext appCtx;
>>>>>>
>>>>>>
>>>>>>   public CacheStoreSessionListenerFactory(
>>>>>>       Class<? extends CacheStoreSessionListener> clazz) {
>>>>>> this.className = clazz.getName();
>>>>>>   }
>>>>>>
>>>>>>   @Override
>>>>>>   public CacheStoreSessionListener create() {
>>>>>>
>>>>>> if (appCtx == null) {
>>>>>>   throw new IgniteException("Spring application context resource is 
>>>>>> not injected.");
>>>>>> }
>>>>>> CacheJdbcStoreSessionListener lsnr = new 
>>>>>> CacheJdbcStoreSessionListener();
>>>>>> lsnr.setDataSource((DataSource) appCtx.getBean("dataSource"));
>>>>>> return lsnr;
>>>>>>
>>>>>>   }
>>>>>>
>>>>>> }
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Jul 5, 2018 at 7:24 PM slava.koptilin <
>>>>>> slava.kopti...@gmail.com> wrote:
>>>>>>
>>>>>>> Well, the exception is thrown by your class:
>>>>>>> org.apache.ignite.IgniteException: Spring application context
>>>>>>> resource is
>>>>>>> not injected.
>>>>>>>  at
>>>>>>>
>>>>>>> *com.qualys.agms.grid.cache.loader.factory.CacheStoreSessionListenerFactory*.create(CacheStoreSessionListenerFactory.java:30)
>>>>>>>  at
>>>>>>>
>>>>>>> com.qualys.agms.grid.cache.loader.factory.CacheStoreSessionListenerFactory.create(CacheStoreSessionListenerFactory.java:12)
>>>>>>>
>>>>>>> Is it possible to share this class as well?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>>>>>
>>>>>>


Re: Cache configuration in case of Client mode

2018-07-05 Thread Вячеслав Коптилин
It seems that you need to use @SpringApplicationContextResource instead of
@Autowired.
Could you please check that?

Thanks.

чт, 5 июл. 2018 г. в 17:01, Prasad Bhalerao :

>
>
> import java.io.Serializable;
> import javax.cache.configuration.Factory;
> import javax.sql.DataSource;
> import org.apache.ignite.IgniteException;
> import org.apache.ignite.cache.store.CacheStoreSessionListener;
> import org.apache.ignite.cache.store.jdbc.CacheJdbcStoreSessionListener;
> import org.springframework.beans.factory.annotation.Autowired;
> import org.springframework.context.ApplicationContext;
>
> public class CacheStoreSessionListenerFactory implements
> Factory<CacheStoreSessionListener>, Serializable {
>
>   private static final long serialVersionUID = 6142932447545510244L;
>
>   private String className;
>
>   @Autowired
>   private transient ApplicationContext appCtx;
>
>
>   public CacheStoreSessionListenerFactory(
>   Class<? extends CacheStoreSessionListener> clazz) {
> this.className = clazz.getName();
>   }
>
>   @Override
>   public CacheStoreSessionListener create() {
>
> if (appCtx == null) {
>   throw new IgniteException("Spring application context resource is not 
> injected.");
> }
> CacheJdbcStoreSessionListener lsnr = new CacheJdbcStoreSessionListener();
> lsnr.setDataSource((DataSource) appCtx.getBean("dataSource"));
> return lsnr;
>
>   }
>
> }
>
>
>
> On Thu, Jul 5, 2018 at 7:24 PM slava.koptilin 
> wrote:
>
>> Well, the exception is thrown by your class:
>> org.apache.ignite.IgniteException: Spring application context resource is
>> not injected.
>>  at
>>
>> *com.qualys.agms.grid.cache.loader.factory.CacheStoreSessionListenerFactory*.create(CacheStoreSessionListenerFactory.java:30)
>>  at
>>
>> com.qualys.agms.grid.cache.loader.factory.CacheStoreSessionListenerFactory.create(CacheStoreSessionListenerFactory.java:12)
>>
>> Is it possible to share this class as well?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: NPE When joining grid

2018-06-27 Thread Вячеслав Коптилин
Hi Bryan,

DiscoverySpi was slightly reworked in AI 2.5. Please take a look at this
enhancement proposal:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-15%3A+Discovery+SPI+by+ZooKeeper
And it looks like the issue you mentioned was resolved as part of
https://issues.apache.org/jira/browse/IGNITE-7222
So, I would suggest upgrading to AI 2.5.
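If you want to try the new ZooKeeper-based discovery after upgrading, here is
a minimal sketch (the connection string and root path are placeholders, and
the ignite-zookeeper module has to be on the classpath):

ZookeeperDiscoverySpi zkSpi = new ZookeeperDiscoverySpi();
zkSpi.setZkConnectionString("zk-host-1:2181,zk-host-2:2181");
zkSpi.setZkRootPath("/ignite-discovery");

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(zkSpi);

Ignite ignite = Ignition.start(cfg);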

Thanks,
Slava.


ср, 27 июн. 2018 г. в 21:42, Bryan Rosander :

> Hey all,
>
> I was wondering if anyone else has seen NPEs while joining a grid w/
> Ignite 2.4.0 (a quick search didn't show anything in Jira)
>
> This is happening in our K8s cluster where the grid is rolled for every CI
> deploy.
>
> 2018-06-27 18:00:59,869 INFO  [exchange-worker-#42]
> o.a.ignite.internal.exchange.time - Finished exchange init
> [topVer=AffinityTopologyVersion [topVer=264, minorTopVer=0], crd=false]
> 2018-06-27 18:01:00,072 INFO  [sys-#44]
> o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture - Received full message,
> will finish exchange [node=7dabcf2e-f30f-43e7-8364-32760205c3f1,
> resVer=AffinityTopologyVersion [topVer=265, minorTopVer=0]]
> 2018-06-27 18:01:00,075 ERROR [sys-#44]
> o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture - Failed to notify
> listener:
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$5@be0660
> java.lang.NullPointerException: null
> at
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$13.applyx(CacheAffinitySharedManager.java:1335)
> at
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$13.applyx(CacheAffinitySharedManager.java:1327)
> at
> org.apache.ignite.internal.util.lang.IgniteInClosureX.apply(IgniteInClosureX.java:38)
> at
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.forAllCacheGroups(CacheAffinitySharedManager.java:1115)
> at
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onLocalJoin(CacheAffinitySharedManager.java:1327)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.processFullMessage(GridDhtPartitionsExchangeFuture.java:2941)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.access$1400(GridDhtPartitionsExchangeFuture.java:124)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$5.apply(GridDhtPartitionsExchangeFuture.java:2684)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$5.apply(GridDhtPartitionsExchangeFuture.java:2672)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:383)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:353)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onReceiveFullMessage(GridDhtPartitionsExchangeFuture.java:2672)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.processFullPartitionUpdate(GridCachePartitionExchangeManager.java:1481)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.access$1100(GridCachePartitionExchangeManager.java:133)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$3.onMessage(GridCachePartitionExchangeManager.java:339)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$3.onMessage(GridCachePartitionExchangeManager.java:337)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:2689)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:2668)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
> a

Re:

2018-06-27 Thread Вячеслав Коптилин
Hello,

I think ContinuousQuery will be a good choice for your use case [1].

[1] https://apacheignite.readme.io/docs/continuous-queries
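For instance, a minimal sketch (MyEvent, relatedCache and sendToKinesis are
placeholders for your own types and logic):

ContinuousQuery<Long, MyEvent> qry = new ContinuousQuery<>();

qry.setLocalListener(evts -> {
    for (CacheEntryEvent<? extends Long, ? extends MyEvent> e : evts) {
        MyEvent inserted = e.getValue();

        // Inspect the inserted object and look up the related one.
        if (relatedCache.get(inserted.getRelatedId()) == null)
            sendToKinesis(inserted); // not found -> send a message to Kinesis
    }
});

// Keep the returned cursor; closing it stops the notifications.
QueryCursor<Cache.Entry<Long, MyEvent>> cur = eventCache.query(qry);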

Thanks!

ср, 27 июн. 2018 г. в 18:51, James Dodson :

> Hello.
>
> I am creating a Spring Boot application that gets objects from Kinesis and
> uses ignite-spring-data to insert those objects into an Ignite cluster.
>
> As objects are inserted, I want to take some action - specifically,
> inspect the object being inserted, query Ignite for a related object and if
> not found, send a message to Kinesis.
>
> It seems I could possibly use CacheStoreAdapter, a continuous query, or a
> cache event listener.
>
> I'm wondering which of these (or something else?) is suggested for this
> use case?
>
> Thanks!
>


Re: ClassCastException When Using CacheEntryProcessor in StreamVisitor

2018-06-20 Thread Вячеслав Коптилин
Hi,

I was able to reproduce the issue you mentioned.
I will debug that use-case and create a JIRA ticket in order to track this.

As a workaround, please try to replace the following line:

    personCache.withKeepBinary().invoke(id, new CacheEntryProcessor<...>() { ... });

with:

    IgniteCache<Long, BinaryObject> personCacheBinary =
        Ignition.ignite().cache("person-cache-name").withKeepBinary();

    personCacheBinary.invoke(id, new CacheEntryProcessor<Long, BinaryObject, Object>() { ... });
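For reference, a minimal self-contained sketch of this workaround inside a
StreamVisitor (the cache name, key type and updated field follow your example
and may need adjusting):

IgniteDataStreamer<Long, Person> stmr = ignite.dataStreamer("person-cache-name");

stmr.receiver(StreamVisitor.from((cache, entry) -> {
    // Re-acquire the cache with keepBinary inside the visitor instead of
    // calling withKeepBinary() on the injected cache instance.
    IgniteCache<Long, BinaryObject> binCache =
        Ignition.ignite().cache("person-cache-name").withKeepBinary();

    binCache.invoke(entry.getKey(), new CacheEntryProcessor<Long, BinaryObject, Void>() {
        @Override public Void process(MutableEntry<Long, BinaryObject> e, Object... args) {
            // Update only the fields you need via the binary builder.
            BinaryObjectBuilder bldr = e.getValue().toBuilder();
            bldr.setField("salary", 2000.0d);
            e.setValue(bldr.build());

            return null;
        }
    });
}));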

Thanks,
Slava.


пт, 8 июн. 2018 г. в 18:28, Cong Guo :

> Static classes do not work either. Has anyone used CacheEntryProcessor in
> StreamVisitor? Does Ignite allow that?
>
> My requirement is to update certain fields of the Value in cache <Key,
> Value> based on a data stream. I want to update only several fields in a
> large object. Do I have to get the whole Value object and then put it back?
>
>
>
>
> Thanks,
>
> Cong
>
>
>
> *From:* Cong Guo
> *Sent:* 2018年6月6日 9:50
> *To:* user@ignite.apache.org
> *Subject:* RE: ClassCastException When Using CacheEntryProcessor in
> StreamVisitor
>
>
>
> Hi,
>
>
>
> I put the same jar on the two nodes. The codes are the same. Why does
> lambda not work here? Thank you.
>
>
>
>
>
> *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com
> ]
> *Sent:* 2018年6月6日 9:11
> *To:* user@ignite.apache.org
> *Subject:* Re: ClassCastException When Using CacheEntryProcessor in
> StreamVisitor
>
>
>
> Hi,
>
>
>
> Is it possible you change lambdas code between calls? Or may be classes
> are differs on nodes?
>
> Try to replace lambdas with static classes in your code. Will it work for
> you?
>
>
>
> On Tue, Jun 5, 2018 at 10:28 PM, Cong Guo  wrote:
>
> Hi,
>
>
>
> The stacktrace is as follows. Do I use the CacheEntryProcessor in the
> right way? May I have an example about how to use CacheEntryProcessor in
> StreamVisitor, please? Thank you!
>
>
>
> javax.cache.processor.EntryProcessorException:
> java.lang.ClassCastException: com.huawei.clusterexperiment.model.Person
> cannot be cast to org.apache.ignite.binary.BinaryObject
>
> at
> org.apache.ignite.internal.processors.cache.CacheInvokeResult.get(CacheInvokeResult.java:102)
>
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1361)
>
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1405)
>
> at
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.invoke(GatewayProtectedCacheProxy.java:1362)
>
> at
> com.huawei.clusterexperiment.Client.lambda$streamUpdate$531c8d2f$1(Client.java:337)
>
> at
> org.apache.ignite.stream.StreamVisitor$1.apply(StreamVisitor.java:50)
>
> at
> org.apache.ignite.stream.StreamVisitor$1.apply(StreamVisitor.java:48)
>
> at
> org.apache.ignite.stream.StreamVisitor.receive(StreamVisitor.java:38)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:137)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:397)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:302)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:59)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:89)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
>
> at
> org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:505)
>
> at java.lang.Thread.run(Thread.java:745)
>
> Caused by: java.lang.ClassCastException:
> com.huawei.clusterexperiment.model.Person cannot be cast to
> org.apache.ignite.binary.BinaryObject
>
> at com.huawei.clusterexperiment.Client$2.process(Client.java:340)
>
> at
> org.apache.ignite.internal.processors.cache.EntryProcessorResourceInjectorProxy.process(EntryProcessorResourceInjectorProxy.java:68)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onEntriesLocked(GridDhtTxPrepareFuture.java:421)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1231)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:671)
>
> at
> org.apache.ignite.internal.p

Re: A bug in SQL "CREATE TABLE" and its underlying Ignite cache

2018-06-20 Thread Вячеслав Коптилин
Hi,

> How should I use BinaryObject in the CREATE TABLE statement?
If you want to use binary objects, then there is no need to specify
'VALUE_TYPE'.

Please use the following code:

String createTableSQL =
    "CREATE TABLE Persons (id LONG, orgId LONG, firstName VARCHAR, " +
    "lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) " +
    "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, " +
    "WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, CACHE_NAME=" + PERSON_CACHE_NAME + "\"";

Thanks!

вт, 19 июн. 2018 г. в 16:43, Cong Guo :

> Hi,
>
>
>
> How should I use BinaryObject in the CREATE TABLE statement? I try using
> BinaryObject.class.getName() as the value_type, but get the following
> exception:
>
>
>
> class org.apache.ignite.IgniteCheckedException: Failed to initialize
> property 'ORGID' of type 'java.lang.Long' for key class 'class
> java.lang.Long' and value class 'interface
> org.apache.ignite.binary.BinaryObject'. Make sure that one of these classes
> contains respective getter method or field.
>
>
>
> I want to use BinaryObject here for some flexibility.
>
>
>
> Thanks,
>
> Cong
>
>
>
> *From:* Вячеслав Коптилин [mailto:slava.kopti...@gmail.com]
> *Sent:* 2018年6月18日 18:10
> *To:* user@ignite.apache.org
> *Subject:* Re: A bug in SQL "CREATE TABLE" and its underlying Ignite cache
>
>
>
> Hello,
>
>
>
> It seems that the root cause of the issue is wrong values of 'KEY_TYPE'
> and 'VALUE_TYPE' parameters.
>
> In your case, there is no need to specify 'KEY_TYPE' at all, and
> 'VALUE_TYPE' should be Person.class.getName(), I think.
>
>
>
> Please try the following:
>
> String createTableSQL =
>     "CREATE TABLE Persons (id LONG, orgId LONG, firstName VARCHAR, " +
>     "lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) " +
>     "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, " +
>     "WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, CACHE_NAME=" + PERSON_CACHE_NAME +
>     ", VALUE_TYPE=" + Person.class.getName() + "\"";
>
>
>
> Best regards,
>
> Slava.
>
>
>
> пн, 18 июн. 2018 г. в 21:50, Cong Guo :
>
> Hi,
>
>
>
> I need to use both SQL and non-SQL APIs (key-value) on a single cache. I
> follow the document in:
>
> https://apacheignite-sql.readme.io/docs/create-table
>
>
>
> I use “CREATE TABLE” to create the table and its underlying cache. I can
> use both SQL “INSERT” and put to add data to the cache. However, when I run
> a SqlFieldsQuery, only the row added by SQL “INSERT” can be seen. The
> Ignite version is 2.4.0.
>
>
>
> You can reproduce the bug using the following code:
>
>
>
> CacheConfiguration<Integer, Integer> dummyCfg = new CacheConfiguration<>("DUMMY");
> dummyCfg.setSqlSchema("PUBLIC");
>
> try (IgniteCache<Integer, Integer> dummyCache = ignite.getOrCreateCache(dummyCfg)) {
>     String createTableSQL = "CREATE TABLE Persons (id LONG, orgId LONG, firstName " +
>         "VARCHAR, lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) " +
>         "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, " +
>         "CACHE_NAME=" + PERSON_CACHE_NAME + ", KEY_TYPE=String, VALUE_TYPE=BinaryObject\"";
>
>     dummyCache.query(new SqlFieldsQuery(createTableSQL)).getAll();
>
>     SqlFieldsQuery firstInsert = new SqlFieldsQuery("INSERT INTO Persons (id, orgId, " +
>         "firstName, lastname, resume, salary) VALUES (?,?,?,?,?,?)");
>     firstInsert.setArgs(1L, 1L, "John", "Smith", "PhD", 1.0d);
>     dummyCache.query(firstInsert).getAll();
>
>     try (IgniteCache<String, Person> personCache = ignite.cache(PERSON_CACHE_NAME)) {
>         Person p2 = new Person(2L, 1L, "Hello", "World", "Master", 1000.0d);
>         personCache.put("Hello", p2);
>
>         IgniteCache<String, BinaryObject> binaryCache =
>             personCache.<String, BinaryObject>withKeepBinary();
>         System.out.println("Size of the cache is: " + binaryCache.size(CachePeekMode.ALL));
>
>         binaryCache.query(new ScanQuery<>(null)).forEach(entry -> System.out.println(entry.getKey()));
>
>         System.out.println("Select results: ");
>         SqlFieldsQuery qry = new SqlFieldsQuery("select * from Persons");
>         QueryCursor<List<?>> answers = personCache.query(qry);
>         List<List<?>> personList = answers.getAll();
>         for (List<?> row : personList) {
>             String fn = (String) row.get(2);
>             System.out.println(fn);
>         }
>     }
> }
>
>
>
>
>
> The output is:
>
>
>
> Size of the cache is: 2
>
> Hello
>
> String [idHash=213193302, hash=-900113201, FIRSTNAME=John]
>
> Select results:
>
> John
>
>
>
> The bug is that the SqlFieldsQuery cannot see the data added by “put”.
>
>


Re: A bug in SQL "CREATE TABLE" and its underlying Ignite cache

2018-06-18 Thread Вячеслав Коптилин
Hello,

It seems that the root cause of the issue is wrong values of 'KEY_TYPE' and
'VALUE_TYPE' parameters.
In your case, there is no need to specify 'KEY_TYPE' at all, and
'VALUE_TYPE' should be Person.class.getName(), I think.

Please try the following:
String createTableSQL =
    "CREATE TABLE Persons (id LONG, orgId LONG, firstName VARCHAR, " +
    "lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) " +
    "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, " +
    "WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, CACHE_NAME=" + PERSON_CACHE_NAME +
    ", VALUE_TYPE=" + Person.class.getName() + "\"";
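Once the table is created this way, key-value writes and SQL reads should see
the same data (a sketch, following the names from your example):

IgniteCache<String, Person> personCache = ignite.cache(PERSON_CACHE_NAME);
personCache.put("Hello", new Person(2L, 1L, "Hello", "World", "Master", 1000.0d));

// Both the SQL-inserted row and the one added via put() should be returned:
personCache.query(new SqlFieldsQuery("select firstName from Persons")).getAll();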

Best regards,
Slava.

пн, 18 июн. 2018 г. в 21:50, Cong Guo :

> Hi,
>
>
>
> I need to use both SQL and non-SQL APIs (key-value) on a single cache. I
> follow the document in:
>
> https://apacheignite-sql.readme.io/docs/create-table
>
>
>
> I use “CREATE TABLE” to create the table and its underlying cache. I can
> use both SQL “INSERT” and put to add data to the cache. However, when I run
> a SqlFieldsQuery, only the row added by SQL “INSERT” can be seen. The
> Ignite version is 2.4.0.
>
>
>
> You can reproduce the bug using the following code:
>
>
>
> CacheConfiguration<Integer, Integer> dummyCfg = new CacheConfiguration<>("DUMMY");
> dummyCfg.setSqlSchema("PUBLIC");
>
> try (IgniteCache<Integer, Integer> dummyCache = ignite.getOrCreateCache(dummyCfg)) {
>     String createTableSQL = "CREATE TABLE Persons (id LONG, orgId LONG, firstName " +
>         "VARCHAR, lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) " +
>         "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, " +
>         "CACHE_NAME=" + PERSON_CACHE_NAME + ", KEY_TYPE=String, VALUE_TYPE=BinaryObject\"";
>
>     dummyCache.query(new SqlFieldsQuery(createTableSQL)).getAll();
>
>     SqlFieldsQuery firstInsert = new SqlFieldsQuery("INSERT INTO Persons (id, orgId, " +
>         "firstName, lastname, resume, salary) VALUES (?,?,?,?,?,?)");
>     firstInsert.setArgs(1L, 1L, "John", "Smith", "PhD", 1.0d);
>     dummyCache.query(firstInsert).getAll();
>
>     try (IgniteCache<String, Person> personCache = ignite.cache(PERSON_CACHE_NAME)) {
>         Person p2 = new Person(2L, 1L, "Hello", "World", "Master", 1000.0d);
>         personCache.put("Hello", p2);
>
>         IgniteCache<String, BinaryObject> binaryCache =
>             personCache.<String, BinaryObject>withKeepBinary();
>         System.out.println("Size of the cache is: " + binaryCache.size(CachePeekMode.ALL));
>
>         binaryCache.query(new ScanQuery<>(null)).forEach(entry -> System.out.println(entry.getKey()));
>
>         System.out.println("Select results: ");
>         SqlFieldsQuery qry = new SqlFieldsQuery("select * from Persons");
>         QueryCursor<List<?>> answers = personCache.query(qry);
>         List<List<?>> personList = answers.getAll();
>         for (List<?> row : personList) {
>             String fn = (String) row.get(2);
>             System.out.println(fn);
>         }
>     }
> }
>
>
>
>
>
> The output is:
>
>
>
> Size of the cache is: 2
>
> Hello
>
> String [idHash=213193302, hash=-900113201, FIRSTNAME=John]
>
> Select results:
>
> John
>
>
>
> The bug is that the SqlFieldsQuery cannot see the data added by “put”.
>


Re: SQL cannot find data of new class definition

2018-06-18 Thread Вячеслав Коптилин
Hello,

>  I use BinaryObject in the first place because the document says
BinaryObject “enables you to add and remove fields from objects of the same
type”
Yes, you can dynamically add fields to a BinaryObject using
BinaryObjectBuilder, but the fields that you want to query have to be
specified on node startup, for example through QueryEntity.
Please take a look at this page:
https://apacheignite.readme.io/v2.5/docs/indexes#queryentity-based-configuration

I would suggest specifying the new field via QueryEntity in the XML
configuration file and restarting your cluster. I hope it helps.
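For example, a sketch of the Java equivalent, extending the
createPersonQueryEntity() method from your code with the new field (assumed
here to be "addOn" of type int):

QueryEntity personEntity = new QueryEntity(Long.class.getName(), Person.class.getName());

LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("id", Long.class.getName());
fields.put("orgId", Long.class.getName());
fields.put("firstName", String.class.getName());
fields.put("lastName", String.class.getName());
fields.put("resume", String.class.getName());
fields.put("salary", Double.class.getName());
fields.put("addOn", Integer.class.getName()); // the newly added field
personEntity.setFields(fields);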

Thanks!

пн, 18 июн. 2018 г. в 16:47, Cong Guo :

> Hi,
>
>
>
> Does anyone have experience using both Cache and SQL interfaces at the
> same time? How do you solve the possible upgrade? Is my problem a bug for
> BinaryObject? Should I debug the ignite source code?
>
>
>
> *From:* Cong Guo
> *Sent:* 2018年6月15日 10:12
> *To:* 'user@ignite.apache.org' 
> *Subject:* RE: SQL cannot find data of new class definition
>
>
>
> I run the SQL query only after the cache size has changed. The new data
> should be already in the cache when I run the query.
>
>
>
>
>
> *From:* Cong Guo
> *Sent:* 2018年6月15日 10:01
> *To:* user@ignite.apache.org
> *Subject:* RE: SQL cannot find data of new class definition
>
>
>
> Hi,
>
>
>
> Thank you for the reply. In my original test, I do not create a table
> using SQL. I just create a cache. I think a table using the value class
> name is created implicitely. I add the new field/column using ALTER TABLE
> before I put new data into the cache, but I still cannot find the data of
> the new class in the table with the class name.
>
>
>
> It is easy to reproduce my original test. I use the Person class from
> ignite example.
>
>
>
> In the old code:
>
>
>
> CacheConfiguration<Long, Person> personCacheCfg = new CacheConfiguration<>(PERSON_CACHE_NAME);
> personCacheCfg.setCacheMode(CacheMode.REPLICATED);
> personCacheCfg.setQueryEntities(Arrays.asList(createPersonQueryEntity()));
>
> try (IgniteCache<Long, Person> personCache = ignite.getOrCreateCache(personCacheCfg)) {
>     // add some data here
>     Person p1 = new Person(…);
>     personCache.put(1L, p1);
>     // keep the node running and run the SQL query
> }
>
> private static QueryEntity createPersonQueryEntity() {
>     QueryEntity personEntity = new QueryEntity();
>     personEntity.setValueType(Person.class.getName());
>     personEntity.setKeyType(Long.class.getName());
>
>     LinkedHashMap<String, String> fields = new LinkedHashMap<>();
>     fields.put("id", Long.class.getName());
>     fields.put("orgId", Long.class.getName());
>     fields.put("firstName", String.class.getName());
>     fields.put("lastName", String.class.getName());
>     fields.put("resume", String.class.getName());
>     fields.put("salary", Double.class.getName());
>     personEntity.setFields(fields);
>
>     personEntity.setIndexes(Arrays.asList(
>         new QueryIndex("id"),
>         new QueryIndex("orgId")
>     ));
>
>     return personEntity;
> }
>
>
>
> The SQL query is:
>
> IgniteCache<Long, BinaryObject> binaryCache = personCache.withKeepBinary();
>
> SqlFieldsQuery qry = new SqlFieldsQuery("select salary from Person");
> QueryCursor<List<?>> answers = binaryCache.query(qry);
> List<List<?>> salaryList = answers.getAll();
>
> for (List<?> row : salaryList) {
>     Double salary = (Double) row.get(0);
>     System.out.println(salary);
> }
>
>
>
> In the new code:
>
>
>
> I add a member to the Person class which is “private int addOn”.
>
>
>
> try (IgniteCache<Long, Person> personCache = ignite.cache(PERSON_CACHE_NAME)) {
>     // add the new data and then check the cache size
>     Person p2 = new Person(…);
>     personCache.put(2L, p2);
>
>     System.out.println("Size of the cache is: " + personCache.size(CachePeekMode.ALL));
> }
>
>
>
> I can only get the data of the old class P1 using the SQL query, but there
> is no error.
>
>
>
> I use BinaryObject in the first place because the document says
> BinaryObject “enables you to add and remove fields from objects of the
> same type”
>
>
>
> https://apacheignite.readme.io/docs/binary-marshaller
>
>
>
> I can get the data of different class definitions using get(key), but I
> also need the SQL fields query.
>
>
>
> IgniteCache<Long, BinaryObject> binaryCache = personCache.<Long, BinaryObject>withKeepBinary();
>
> BinaryObject bObj = binaryCache.get(1L);
>
> System.out.println(bObj.type().field("firstName").value(bObj) + 

Re: unsubscribe

2018-05-29 Thread Вячеслав Коптилин
Hello,

To unsubscribe from the user mailing list, send a letter to
user-unsubscr...@ignite.apache.org with the word "Unsubscribe" (without
quotes) as the subject.

If you have a mailing client, follow an unsubscribe link here:
https://ignite.apache.org/community/resources.html#mail-lists

Thanks,
S.



2018-05-29 16:14 GMT+03:00 Rusher,Gary :

>
> --
> E-MAIL CONFIDENTIALITY NOTICE: The information transmitted in this e-mail
> and in any replies and forwards are for the sole use of the above
> individual(s) or entities and may contain proprietary, privileged and/or
> highly confidential information. Any unauthorized dissemination, review,
> distribution or copying of these communications is strictly prohibited. If
> this e-mail has been transmitted to you in error, please notify and return
> the original message to the sender immediately at the above listed address.
> Thank you for your cooperation.
>


Re: What is the expected ETA for the 2.5.0 release...

2018-05-28 Thread Вячеслав Коптилин
Hello Paul,

Voting has already started. Please take a look at this topic:
http://apache-ignite-developers.2346864.n4.nabble.com/VOTE-Apache-Ignite-2-5-0-RC1-tc30923.html
So, I assume AI 2.5 will be released soon.

Best regards,
S.

2018-05-28 17:10 GMT+03:00 Paul Anderson :

> ... hours, days or weeks?
>
> sorry to be a pain
>


Re: Fwd: Re: Ignite cache query

2018-04-25 Thread Вячеслав Коптилин
Hello Shaleen,

It looks like you want the SQL engine to create table columns based on a
hashmap with a dynamic structure (you can add and remove keys from the map at
any time).
So, every time you add or remove a key from the map for a particular entry,
the SQL engine would have to add or remove the corresponding column
automatically. I don't think that is possible.

I see only one way that can be used here: a user-defined SQL function [1]:
public class Pojo {
    @QuerySqlField
    private Map<String, String> map;

    @QuerySqlFunction
    public static String get(Map<String, String> field, String id) {
        // Note: the method must be static, so it reads the passed-in
        // column value rather than the instance field.
        return field.get(id);
    }

    ...
}

In that case, you can use this custom function as follows: select * from
Pojo where get(map,?)=?
Though you will not be able to use indexes this way.
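Note that the class declaring the @QuerySqlFunction method has to be
registered in the cache configuration; after that the function can be used in
queries as usual. A sketch (the cache name and the query arguments are
placeholders):

CacheConfiguration<Integer, Pojo> ccfg = new CacheConfiguration<>("pojoCache");
ccfg.setSqlFunctionClasses(Pojo.class);
IgniteCache<Integer, Pojo> cache = ignite.getOrCreateCache(ccfg);

SqlFieldsQuery qry = new SqlFieldsQuery(
    "select * from Pojo where get(map, ?) = ? order by get(map, ?)");
qry.setArgs("someKey", "someValue", "someKey");
cache.query(qry).getAll();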

[1] https://apacheignite-sql.readme.io/docs/custom-sql-functions

Thanks.

2018-04-24 9:00 GMT+03:00 Shaleen Sharma :

> Hi All
>
> Is there anything anyone has to say about the below issue?
>
> Thanks
> Shaleen
>
> *From:* Shaleen Sharma 
> *Sent:* Monday, April 23, 2018 4:04:13 PM
> *To:* user@ignite.apache.org; user@ignite.apache.org
> *Subject:* Re: Ignite cache query
>
> Thanks for your reply.
>
> Forgot to mention earlier that I am already using @QuerySqlField on each
> field of my Pojo class so all the fields are appearing as table columns but
> the hashmap doesn't. So how do i use @QuerySqlField on a hashmap so that
> each key of hashmap also behaves as a table column which I can use to do an
> "order by" in my sql.
>
> Thanks
> Shaleen
>
>
>
> From: begineer
> Sent: Monday, 23 April, 3:53 pm
> Subject: Re: Ignite cache query
> To: user@ignite.apache.org
>
>
> e.g. public class Student { @QuerySqlField(index=true) int id;
> @QuerySqlField String name ; @QuerySqlField LocalDateTime dob;
> @QuerySqlField LocalDate dos; @QuerySqlField Map> map = new HashMap config
> = new CacheConfiguration cache = ignite.getOrCreateCache(config);
> DateTimeFormatter formatter = DateTimeFormatter.ofPattern("-MM-dd
> H:mm"); LocalDateTime time1= LocalDateTime.now(); String s =
> time1.format(formatter); LocalDateTime time = LocalDateTime.parse(s,
> formatter); Map> map = new HashMap query = new SqlQuery list =
> cache.query(query).getAll().stream().map(Cache.Entry::
> getValue).collect(Collectors.toList()); -- Sent from:
> http://apache-ignite-users.70518.x6.nabble.com/
>
>
>
>


Re: Ignite configuration for starting in client mode

2018-04-24 Thread Вячеслав Коптилин
Hello Prasad,

I think it is not necessary to set up cache configurations and/or the data
storage configuration on the client node unless you use local caches.
The most important things that you have to configure are
IgniteConfiguration#clientMode(true) [1] and Discovery SPI [2].
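A minimal sketch in Java (the server address is a placeholder):

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("server-host:47500..47509"));

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discoSpi);

Ignite client = Ignition.start(cfg);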

[1]
https://apacheignite.readme.io/docs/clients-vs-servers#section-configuring-clients-and-servers
[2] https://apacheignite.readme.io/docs/cluster-config

Thanks!

2018-04-24 10:40 GMT+03:00 Prasad Bhalerao :

> Hi,
>
> I want to start ignite in tomcat container but in client mode.
>
> Is it necessary to set cache configuration CacheConfiguration and
> DataStorageConfiguration in IgniteConfiguration while starting the in
> client mode?
>
> Thanks,
> Prasad
>


Re: Cache loading from the web console

2018-04-11 Thread Вячеслав Коптилин
Hello,

Are you looking for the ability to do it (i.e. execute
IgniteCache#loadCache()) via the web console?
I don't think there is such a capability at the moment.
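As an alternative, the load can be triggered from code (a sketch; the cache
name is a placeholder, and the actual loading is done by the configured
CacheStore):

IgniteCache<Long, Person> cache = ignite.cache("person-cache");
cache.loadCache(null); // null filter -> load everything the store provides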

Thanks,
Slava.


2018-04-09 15:19 GMT+03:00 demanfear :

> I know about the loadCache() method; it's interesting to know how else
> data can be loaded besides it.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: stop sending messages pls

2018-04-03 Thread Вячеслав Коптилин
test message

2018-04-02 10:12 GMT+03:00 andriy.kasat...@kyivstar.net <
andriy.kasat...@kyivstar.net>:

> My email is andriy.kasat...@kyivstar.net. Can you please unsubscribe this
> mail.
>


Re: stop sending messages pls

2018-04-02 Thread Вячеслав Коптилин
Hello,

To unsubscribe from the user mailing list, send a letter to
user-unsubscr...@ignite.apache.org with the word "Unsubscribe" (without
quotes) as the subject.

If you have a mailing client, follow an unsubscribe link here:
https://ignite.apache.org/community/resources.html#mail-lists

Thanks,
S.

2018-04-02 10:12 GMT+03:00 andriy.kasat...@kyivstar.net <
andriy.kasat...@kyivstar.net>:

> My email is andriy.kasat...@kyivstar.net. Can you please unsubscribe this
> mail.
>


Re: Ignite 2.3.4 and Spring Boot 2.0.0.RELEASE

2018-03-26 Thread Вячеслав Коптилин
Hello,

If I am not mistaken, Apache Ignite is certified with spring-data
1.13.1.RELEASE.
I am not sure there is any ongoing activity related to supporting the latest
changes in the Spring Boot library.
Anyway, you can start a discussion on the dev list about that, and feel free
to contribute to the project!

Thanks!

2018-03-26 12:15 GMT+03:00 Coke :

> Hi,
> When declaring Spring Boot 2 in my POM.xml I get this error:
>
> Error:(14, 8) java: name clash: deleteAll(java.lang.Iterable T>) in org.springframework.data.repository.CrudRepository
> and deleteAll(java.lang.Iterable) in org.apache.ignite.springdata.
> repository.IgniteRepository
> have the same erasure, yet neither overrides the other
>
> But if a change it to 1.5.10.RELEASE, the error goes away. Sound like some
> functions are not being overridden correctly with the new version of Spring
> Boot. Did anyone notice this?
> Thanks,
>
> best regards .
>
>
>
>
> Carr. Fuencarral, 14
> -16.
> 28108 Alcobendas, Madrid
> Phone.: +34 91 173 2350
>
>
>
> *This message and any attached documents are confidential.  They contain
> information that cannot be divulged.  If you have received this message by
> error, please eliminate it from your system and notify the sender by
> resending it to the email from which it came from. Do not copy or divulge
> the message or its contents to other parties. Although we have taken
> reasonable steps to ensure that this communication (including attachments)
> are free from computer virus, you are advised to take your own steps to
> ensure that they are actually virus free.**Thank You.*
>
>
>
> *Este mensaje y los ficheros anexos son confidenciales. Los mismos
> contienen información reservada que no puede ser difundida. Si usted ha
> recibido este correo por error, tenga la amabilidad de eliminarlo de su
> sistema y avisar al remitente mediante reenvío a su dirección de correo
> electrónico. No deberá copiar el mensaje ni divulgar su contenido a ninguna
> persona. Aunque hemos tomado, razonablemente,todas las precauciones para
> asegurar que este e-mail (incluidos los archivos adjuntos) están libres de
> virus, le reomendamos tome sus propias precauciones para asegurar que este
> e-mail está libre de virus. Gracias.*
>


Re: Cancelling a Continuous Query

2018-03-25 Thread Вячеслав Коптилин
Hello Andre,

As mentioned in the javadocs [1][2], you have to call the QueryCursor#close()
method in order to stop receiving updates.

// Create a new continuous query.
ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

// Execute the query and get a cursor that iterates through the initial data.
QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry);
...
// Stop receiving updates.
cur.close();
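Since the cursor is AutoCloseable, the same can be done with
try-with-resources:

try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
    // updates are delivered while the cursor is open
}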

[1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/ContinuousQuery.html
[2] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/QueryCursor.html


Thanks!


2018-03-25 20:47 GMT+03:00 au.fp2018 :

> Hello All,
>
> What is the correct way to cancel a Continuous Query, and make sure all the
> resources are used by the query are freed?
>
> I looked in the documentation and the examples, I didn't see any explicit
> reference to cancelling a running continuous query.
>
> Thanks,
> Andre
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Can we enabled persistence per cache level in the java code instead of setting persistence in xml configuration file for entire cluster?

2018-03-23 Thread Вячеслав Коптилин
If your question is about enabling or disabling persistence for a data
region on the fly, then the answer is no.
That property can be configured via an XML file or the Java API. You can find
a comprehensive description here:
https://apacheignite.readme.io/docs/distributed-persistent-store
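In other words, persistence is configured per data region at startup, and a
cache opts in by pointing at a persistent region. A minimal sketch (region and
cache names are placeholders):

DataRegionConfiguration persistedRegion = new DataRegionConfiguration();
persistedRegion.setName("persisted-region");
persistedRegion.setPersistenceEnabled(true);

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setDataRegionConfigurations(persistedRegion);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);

CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("persistedCache");
cacheCfg.setDataRegionName("persisted-region"); // only this cache is persisted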

Thanks!

2018-03-23 17:30 GMT+03:00 Teja :

> Currently, Persistence is not enabled in Ignite cluster. Say if i want to
> enable it for a particular ignite cache while loading data into it, is it
> possible to enable the persistence in java code only for that cache,
> instead
> of doing configuration changes in the default configuration xml and restart
> it?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: peerclassloading property to true while we are loading data with persistence property set to true?

2018-03-23 Thread Вячеслав Коптилин
Perhaps, this page will be useful as well
https://apacheignite.readme.io/docs/cluster-activation

Thanks!

2018-03-23 17:37 GMT+03:00 Вячеслав Коптилин :

> Hi Teja,
>
> >  Is it mandatory to set peerclassloading property to true while we are
> loading data with persistence property set to true?
> Peer class loading property is not mandatory. This feature relates to
> Compute Grid https://apacheignite.readme.io/docs/compute-grid
> Please check this page for the details: https://apacheignite.
> readme.io/docs/zero-deployment
>
> >  should we need to activate the cluster as shown below
> Yes, you have to activate the cluster first, as follows:
> ignite.active(true);
>
> Thanks!
>
> 2018-03-23 17:25 GMT+03:00 Teja :
>
>> Is it mandatory to set peerclassloading property to true while we are
>> loading
>> data with persistence property set to true?
>>
>> Every time when we are loading the data to ignite cache with persistence
>> enabled, should we need to activate the cluster as shown below?
>> ignite.active(true);
>>
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


Re: peerclassloading property to true while we are loading data with persistence property set to true?

2018-03-23 Thread Вячеслав Коптилин
Hi Teja,

>  Is it mandatory to set peerclassloading property to true while we are
loading data with persistence property set to true?
Peer class loading property is not mandatory. This feature relates to
Compute Grid https://apacheignite.readme.io/docs/compute-grid
Please check this page for the details:
https://apacheignite.readme.io/docs/zero-deployment

>  should we need to activate the cluster as shown below
Yes, you have to activate the cluster first, as follows:
ignite.active(true);
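A minimal sketch combining both settings (assuming persistence is enabled in
the default data region):

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPeerClassLoadingEnabled(true); // optional, only needed for zero deployment of compute tasks

cfg.setDataStorageConfiguration(new DataStorageConfiguration()
    .setDefaultDataRegionConfiguration(
        new DataRegionConfiguration().setPersistenceEnabled(true)));

Ignite ignite = Ignition.start(cfg);
ignite.active(true); // activate once the persistent cluster is up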

Thanks!

2018-03-23 17:25 GMT+03:00 Teja :

> Is it mandatory to set peerclassloading property to true while we are
> loading
> data with persistence property set to true?
>
> Every time when we are loading the data to ignite cache with persistence
> enabled, should we need to activate the cluster as shown below?
> ignite.active(true);
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite on Kubernetes: Question on external jars

2018-02-23 Thread Вячеслав Коптилин
Hello,

It seems to me that you have to restart your cluster.
The Java classpath is not something that can be changed dynamically at runtime.

Thanks!

2018-02-23 12:14 GMT+03:00 vbm :

> Hi,
>
> I have a java program which is used to write to ignite cache and then into
> the persistant db.
> I have  created a client jar. I am trying to use this to interact with
> ignite servers on kubernetes.
> I have followed this document to deploy Ignite on Kubernetes:
> https://apacheignite.readme.io/docs/kubernetes-deployment
>
> As per my understanding, the client program jar needs to be present on the
> libs folder of all servers.
> I think this can be set using EXTERNAL_LIBS option. But in my case, ignite
> servers are already started.
>
> Do i need to restart now, for the changes to take affect ?
> Can I just place it in libs folder and servers will pick up the latest jars
> ?
>
>
> Regards,
> Vishwas
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>

