Re: Lost partitions automatically reset

2020-01-27 Thread j_recuerda
akorensh wrote
>   The issue you described is a bit different from the original topic.
> This one deals with an incorrect lostPartitions() count.

Sorry about that. As I mentioned, I am seeing two different behaviors, hence
the mix-up. I am trying to reproduce the original issue, the one I am
experiencing in my project, in a toy project, but I have not been able to yet.
I can't figure out why.

Thank you.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: java.lang.NullPointerException while preloadEntry

2020-01-27 Thread userx
Hi Ilya,

Apologies for the delay. Please find the logs attached (Ignite.zip).


Let me know if any other information is required.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Does this Config Reveal any Problem?

2020-01-27 Thread v-shaal
*Code:*

public static void main(String[] args) throws InterruptedException {
    // Ignition.setClientMode(true);

    try (Ignite ignite = Ignition.start("ignite.xml")) {
        System.out.println(">>> Cache query example started.");

        CacheConfiguration kafkaCache = new CacheConfiguration<>(UA_Cache);
        kafkaCache.setCacheMode(CacheMode.PARTITIONED);
        kafkaCache.setIndexedTypes(AffinityKey.class, AllEventsAttributes.class);
        ignite.getOrCreateCache(kafkaCache);

        KafkaStreamer kafkaStreamer = new KafkaStreamer<>();

        IgniteDataStreamer stmr = Ignition.ignite().dataStreamer(UA_Cache);
        stmr.allowOverwrite(true);
        kafkaStreamer.setIgnite(ignite);
        kafkaStreamer.setStreamer(stmr);

        // set the topic
        List topics = new ArrayList();
        topics.add("allEvents");
        kafkaStreamer.setTopic(topics);

        // set the number of threads to process Kafka streams
        kafkaStreamer.setThreads(20);

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("group.id", "allEvents");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        kafkaStreamer.setConsumerConfig(props);

        final CountDownLatch latch = new CountDownLatch(40);

        kafkaStreamer.setMultipleTupleExtractor(record -> {
            Map entries = new HashMap<>();
            try {
                ObjectMapper mapper = new ObjectMapper();
                AllEvents allEvents =
                    mapper.readValue(record.value().toString(), AllEvents.class);

                if (!allEvents.UserId.equals("0") && !allEvents.UserId.isEmpty()) {
                    AllEventsAttributes allEventsAttributes = new AllEventsAttributes(
                        allEvents.UserId, allEvents.RecUpdatedAt, allEvents.RecUpdatedAt);
                    entries.put(allEventsAttributes.UserId, allEventsAttributes);
                }
            }
            catch (Exception ex) {
                System.out.println("Unexpected error." + ex);
            }
            return entries;
        });

        kafkaStreamer.start();
        System.out.println("Kafka streamer started!");
        latch.await();
    }
}




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Does this Config Reveal any Problem?

2020-01-27 Thread Evgenii Zhuravlev
Hi,
These parameters look strange to me:

[configuration parameters stripped by the mailing list archive]

Why have you set them?

Can you share the logs and the code that you use?

Thanks,
Evgenii


Mon, Jan 27, 2020 at 01:53, v-shaal wrote:

> I am working with the Kafka streamer, which starts out putting about 10k
> records/sec, but after around 1 million records are in the cache it slows
> down to 2000 rec/sec, then 500, then 100. I cannot figure out what is going
> wrong, whether it is related to data pages, threads, or something else.
>
> *Following are the logs :*
>
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=367b9688, uptime=00:50:00.246]
> ^-- H/N/C [hosts=5, nodes=6, CPUs=80]
> ^-- CPU [cur=4.6%, avg=23.52%, GC=0%]
> ^-- PageMemory [pages=12052]
> ^-- Heap [used=3222MB, free=77.04%, comm=4085MB]
> ^-- Off-heap [used=47MB, free=99.93%, comm=10576MB]
> ^--   sysMemPlc region [used=0MB, free=99.21%, comm=40MB]
> ^--   default region [used=0MB, free=100%, comm=256MB]
> ^--   500MB_Region region [used=46MB, free=99.91%, comm=10240MB]
> ^--   TxLog region [used=0MB, free=100%, comm=40MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=6, qSize=0]
> [2020-01-27 09:48:19,368][INFO ][grid-timeout-worker-#39][IgniteKernal]
> FreeList [name=null, buckets=256, dataPages=8527, reusePages=0]
>
> *Cache Config
> *
> [cache configuration XML mostly stripped by the list archive; the surviving
> fragments reference org.apache.ignite.configuration.CacheConfiguration and
> org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction]
>
> *Ignite and Data region config*
>
> [Ignite and data region XML mostly stripped by the list archive; the
> surviving fragments reference
> org.apache.ignite.configuration.DataStorageConfiguration and
> org.apache.ignite.configuration.DataRegionConfiguration]
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Server side cache configuration only

2020-01-27 Thread Mikael

Hi!

1) You do not need to have the cache configuration on the client side; I do
that all the time.


2) I don't know.

Mikael

On 2020-01-27 at 21:42, Sabar Banks wrote:

Hello Ignite Community,

My questions are:

1) Is it possible to only define cache configurations on the server side,
via xml, and avoid defining caches on the client side?

2) Is it possible to only have data bean classes listed in a cache config,
to ONLY exist on the server side physically? I am trying to only have the
bean definitions in one place: server.

Let me know

Thanks.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Server side cache configuration only

2020-01-27 Thread Sabar Banks
Hello Ignite Community, 

My questions are:

1) Is it possible to only define cache configurations on the server side,
via xml, and avoid defining caches on the client side? 

2) Is it possible to only have data bean classes listed in a cache config,
to ONLY exist on the server side physically? I am trying to only have the
bean definitions in one place: server.

Let me know

Thanks.  




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Persistent Data Only Available after Repeatedly Restarting Pod in k8s

2020-01-27 Thread Denis Magda
Are you using any ephemeral storage for your Ignite pods? If you've assigned
a type of volume that doesn't guarantee data is kept across restarts, that
might be the root cause.

Check these instructions for reference:
https://www.gridgain.com/docs/latest/installation-guide/kubernetes/amazon-eks-deployment

-
Denis


On Thu, Jan 16, 2020 at 11:19 AM kellan  wrote:

> I'm running Ignite 2.7.6 on Kubernetes and have noticed that my persistent
> data often isn't available to me after restarting my Ignite pod. Sometimes
> I'll have to restart the pod 1 or more times before I can access any of my
> data.
>
> What could be causing this?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Lost partitions automatically reset

2020-01-27 Thread akorensh
Hi,
  Thanks for the reproducer project. 
  The issue you described is a bit different from the original topic. This
one deals with an incorrect lostPartitions() count.

(Original issue:
I have a cluster of 3 nodes with persistence enabled. I have a distributed
cache with backups = 1 where I put some data.
After I shut down NODE-2 and NODE-3, some partitions' state becomes LOST.
Then I run NODE-2 again and call cache.lostPartitions(), which returns the
array with all the lost partitions. This call is made from NODE-2 when it is
run again. At this point, I would expect those partitions not to be lost,
since all the data is available again.)

Reproducer issue:
  - Run 5 nodes (NodeStartup.kt).
  - Run Client.kt, which inserts some data into an IgniteCache.
  - Shut down 2 out of the 5 nodes.
  - Run Client.kt again.
  * Calling lostPartitions() returns an empty list; I would expect it to
return some partitions, since backups are set to one and two nodes were
turned off.
   

  We were able to reproduce the issue with the incorrect lostPartitions(),
and are planning a fix.

Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


[MEETUP] Seattle Meetup in March

2020-01-27 Thread Kseniya Romanova
Hi Igniters! GridGain wants to start IMC meetup in Seattle[1] and the first
session can be as soon as in March.

Do we have here someone from Seattle? It would be cool if you want to
present a talk about Apache Ignite - please use this IMC CFP form[2]. Or
maybe your company can host the session? Or you know a good place to make
an event in?

Any advice would be greatly appreciated!

Regards,
Kseniya

[1] https://www.meetup.com/seattle-imc-meetup/
[2]
https://docs.google.com/forms/d/e/1FAIpQLSdK-zqm8YWlPgpYHZjhJtmH_v3ZR3XKIAmkvvS1ZbwnPt1MWA/viewform


Re: Data Load to Ignite cache is very slow from Oracle Table

2020-01-27 Thread Ilya Kasnacheev
Hello!

I can see that you only define the data source locally. It needs to be
defined on all server nodes participating in the cache load.

Please take a look at https://apacheignite-mix.readme.io/docs/examples

Regards,
-- 
Ilya Kasnacheev
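
One common way to define the data source on all server nodes is to declare it
as a bean in the Spring XML file that every server node starts with, and
reference it by name from the cache store factory. A rough sketch; the bean
name, URL, and credentials below are placeholders:

```xml
<!-- Data source bean, visible to the cache store on every server node. -->
<bean id="oracleDataSource" class="oracle.jdbc.pool.OracleDataSource">
    <property name="URL" value="jdbc:oracle:thin:@//db-host:1521/SERVICE"/>
    <property name="user" value="username"/>
    <property name="password" value="pswd"/>
</bean>

<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="ProdCache"/>
    <property name="cacheStoreFactory">
        <bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
            <!-- Reference the data source bean by name. -->
            <property name="dataSourceBean" value="oracleDataSource"/>
            <property name="dialect">
                <bean class="org.apache.ignite.cache.store.jdbc.dialect.OracleDialect"/>
            </property>
        </bean>
    </property>
</bean>
```

With this in the shared configuration, loadCache() can run on every node that
owns partitions, instead of only where the data source happens to be defined.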


Mon, Jan 27, 2020 at 17:45, nithin91 <
nithinbharadwaj.govindar...@franklintempleton.com>:

> Hi Belyakov,
>
> Thank you so much. This is very helpful.
>
> I am facing the following error when using this approach:
>
> Failed to start component: class org.apache.ignite.IgniteException: Failed
> to initialize cache store (data source is not provided).
>
> Below is the code used for the implementation. I have configured the data
> source property correctly, but I am not sure why this error pops up. Can
> you please help me with this?
>
> package ignite.example.ignite_read;
>
> import java.sql.SQLException;
> import java.util.ArrayList;
> import java.util.HashSet;
> import java.util.LinkedHashMap;
> import java.util.Set;
>
> //import javax.activation.DataSource;
> import javax.cache.configuration.Factory;
> import javax.cache.integration.CacheLoaderException;
> import javax.sql.DataSource;
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.IgniteException;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.CacheAtomicityMode;
> import org.apache.ignite.cache.CacheMode;
> import org.apache.ignite.cache.QueryEntity;
> import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
> import org.apache.ignite.cache.store.jdbc.JdbcType;
> import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
> import org.apache.ignite.cache.store.jdbc.dialect.OracleDialect;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
> import
> org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
>
> import oracle.jdbc.pool.OracleDataSource;
>
> public class IgniteCacheload {
>
> @SuppressWarnings("unchecked")
> public static void main(String[] args) throws IgniteException,
> SQLException
> {
>
>
>
>
>  IgniteConfiguration config = new IgniteConfiguration();
>  /* ... config code ... */
>
>  try (Ignite ignite = Ignition.start(config)){
>
> CacheConfiguration prdCacheCfg = new
> CacheConfiguration<>();
>
> prdCacheCfg.setName("ProdCache");
> prdCacheCfg.setCacheMode(CacheMode.PARTITIONED);
> prdCacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>
> //personCacheCfg.setReadThrough(true);
> //personCacheCfg.setWriteThrough(true);
>
> CacheJdbcPojoStoreFactory factory =
> new
> CacheJdbcPojoStoreFactory<>();
>
>
> factory.setDialect(new OracleDialect());
> //factory.setDataSource(dsdetails());
> factory.setDataSourceFactory(dsdetails());
>
> JdbcType productType = new JdbcType();
> productType.setCacheName("ProdCache");
> productType.setKeyType(ProductKey.class);
> productType.setValueType(Products.class);
> // Specify the schema if applicable
>
> productType.setDatabaseTable("table");
>
> productType.setKeyFields(new
> JdbcTypeField(java.sql.Types.VARCHAR, "fid",
> ProductKey.class, "FID"));
> productType.setValueFields(new
> JdbcTypeField(java.sql.Types.VARCHAR,
> "scode", Products.class, "scode"));
>
>
> factory.setTypes(productType);
>
> prdCacheCfg.setCacheStoreFactory(factory);
>
>
> config.setCacheConfiguration(prdCacheCfg);
>
>
>
> IgniteCache cache =
> ignite.getOrCreateCache(prdCacheCfg);
>
>   cache.clear();
>
>   // Load cache on all data nodes with default SQL statement.
>   System.out.println(">>> Load ALL data to cache from DB...");
>   cache.loadCache(null);
>
>
>   System.out.println(">>> Loaded cache entries: " +
> cache.size());
>
> }
>  catch (Exception e) {
> throw new CacheLoaderException("Failed to load the cache"+
> e.getMessage());
>
> }
>
>
> }
>
> public static Factory dsdetails() throws SQLException{
> //public static DataSource dsdetails() throws SQLException{
> OracleDataSource oraDataSrc = new OracleDataSource();
> oraDataSrc.setURL("url");
> oraDataSrc.setUser("username");
> oraDataSrc.setPassword("pswd");
> //return oraDataSrc;
> return (Factory)oraDataSrc;
>
> }
>
> }
>
>
>
>
>
>
> --
> Sent from: 

Re: Lost partitions automatically reset

2020-01-27 Thread Igor Belyakov
Hi,

I've tried to run the provided example with partitionLossPolicy changed to
"READ_WRITE_SAFE", as described in the initial message, and received the
following results:

1. After shutting down 2 nodes (out of 5), I see lost partitions on the
client:
JRH: LostData = [6, 32, 35, 41, 66, 83, 112, 115, 134, 136, 137, 171, 188,
195, 227, 231, 233, 243, 265, 273, 277, 289, 298, 300, 306, 314, 328, 347,
366, 371, 382, 383, 391, 394, 401, 410, 413, 417, 420, 426, 433, 461, 475,
484, 494, 496, 527, 537, 542, 547, 550, 570, 584, 599, 604, 608, 612, 616,
639, 653, 655, 660, 661, 693, 695, 701, 707, 711, 715, 717, 731, 752, 764,
776, 782, 789, 810, 817, 818, 834, 847, 849, 854, 856, 862, 879, 893, 897,
909, 921, 924, 926, 932, 938, 955, 969, 970, 974, 978, 979, 980, 994, 1007,
1013, 1021]
And "Failed to map keys for cache (all partition nodes left the grid)." is
thrown, since we don't have partitions for those keys.

2. After restarting 1 of the shut-down nodes, no more lost partitions are
found:
JRH: LostData = []
Populating the cache...
Done: 1000
Done: 2000
Done: 3000
Done: 4000
Done: 5000
Done: 6000
Done: 7000
Done: 8000
Done: 9000
LOST PARTITION = []

It seems the policy works correctly. Did you clean up your work directory
between test runs?
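
For completeness, the checks discussed above look roughly like this through
the public API. This is a sketch, assuming READ_WRITE_SAFE and a cache named
"myCache" (the cache name and config file are placeholders); under that
policy, the LOST state persists until it is explicitly reset once the owning
nodes are back:

```java
import java.util.Collection;
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class LostPartitionsCheck {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("client.xml")) {
            IgniteCache<Integer, String> cache = ignite.cache("myCache");

            // Partitions currently marked LOST for this cache.
            Collection<Integer> lost = cache.lostPartitions();
            System.out.println("Lost partitions: " + lost);

            // Clear the LOST state once all data is available again.
            if (!lost.isEmpty())
                ignite.resetLostPartitions(Collections.singleton("myCache"));
        }
    }
}
```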

On Mon, Jan 27, 2020 at 2:59 PM j_recuerda 
wrote:

> I have two different scenarios or behaviors, and neither is working as I
> would expect. One is what I am experiencing in my project (the whole
> project) and the other is what I am experiencing in a toy project created
> to try to reproduce it. In both cases, I am using Ignite 2.7.6.
>
> This is the code I am using for the toy project ( Github:Code
> <
> https://github.com/jrecuerda/IgnitePlayground/tree/master/PartitionLossPolicy>
>
> ). It is written in Kotlin but I think it is quite simple so it should be
> understandable even if you don't know Kotlin.
>
> Steps to reproduce:
>   - Run 5 nodes (NodeStartup.kt).
>   - Run Client.kt, which activates the cluster and inserts some data into
> an IgniteCache.
>   - Shut down 2 out of the 5 nodes.
>   - Run Client.kt again.
>   * Calling lostPartitions() returns an empty list; I would expect it
> to return some partitions, since backups are set to one and two nodes were
> turned off.
>   * When trying to put data into the cache, even when the
> partitionLossPolicy is set to IGNORE, it throws:
> Exception in thread "main"
> org.apache.ignite.cache.CacheServerNotFoundException: Failed to map keys
> for
> cache (all partition nodes left the grid).
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1321)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1758)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1108)
> at
>
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:820)
> at jrh.ClientKt.insertData(Client.kt:30)
> at jrh.ClientKt.main(Client.kt:42)
> at jrh.ClientKt.main(Client.kt)
> Caused by: class
> org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException:
> Failed to map keys for cache (all partition nodes left the grid).
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapSingleUpdate(GridNearAtomicSingleUpdateFuture.java:562)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:454)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:443)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1153)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:611)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2449)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2426)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1105)
> ... 4 more
>
> Thank you!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Data Load to Ignite cache is very slow from Oracle Table

2020-01-27 Thread nithin91
Hi Belyakov,

Thank you so much. This is very helpful.

I am facing the following error when using this approach:

Failed to start component: class org.apache.ignite.IgniteException: Failed
to initialize cache store (data source is not provided).

Below is the code used for the implementation. I have configured the data
source property correctly, but I am not sure why this error pops up. Can you
please help me with this?

package ignite.example.ignite_read;

import java.sql.SQLException;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Set;

//import javax.activation.DataSource;
import javax.cache.configuration.Factory;
import javax.cache.integration.CacheLoaderException;
import javax.sql.DataSource;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteException;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.cache.store.jdbc.dialect.OracleDialect;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

import oracle.jdbc.pool.OracleDataSource;

public class IgniteCacheload {

@SuppressWarnings("unchecked")
public static void main(String[] args) throws IgniteException, 
SQLException
{




 IgniteConfiguration config = new IgniteConfiguration();
 /* ... config code ... */
 try (Ignite ignite = Ignition.start(config)){

CacheConfiguration prdCacheCfg = new
CacheConfiguration<>();

prdCacheCfg.setName("ProdCache");
prdCacheCfg.setCacheMode(CacheMode.PARTITIONED);
prdCacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

//personCacheCfg.setReadThrough(true);
//personCacheCfg.setWriteThrough(true);

CacheJdbcPojoStoreFactory factory = new
CacheJdbcPojoStoreFactory<>();


factory.setDialect(new OracleDialect());
//factory.setDataSource(dsdetails());
factory.setDataSourceFactory(dsdetails());

JdbcType productType = new JdbcType();
productType.setCacheName("ProdCache");
productType.setKeyType(ProductKey.class);
productType.setValueType(Products.class);
// Specify the schema if applicable

productType.setDatabaseTable("table");

productType.setKeyFields(new 
JdbcTypeField(java.sql.Types.VARCHAR, "fid",
ProductKey.class, "FID"));
productType.setValueFields(new 
JdbcTypeField(java.sql.Types.VARCHAR,
"scode", Products.class, "scode"));


factory.setTypes(productType);

prdCacheCfg.setCacheStoreFactory(factory);


config.setCacheConfiguration(prdCacheCfg);



IgniteCache cache =
ignite.getOrCreateCache(prdCacheCfg);

  cache.clear();

  // Load cache on all data nodes with default SQL statement.
  System.out.println(">>> Load ALL data to cache from DB...");
  cache.loadCache(null);
  

  System.out.println(">>> Loaded cache entries: " +
cache.size());

}
 catch (Exception e) {
throw new CacheLoaderException("Failed to load the cache"+
e.getMessage());

}


}

public static Factory dsdetails() throws SQLException{
//public static DataSource dsdetails() throws SQLException{
OracleDataSource oraDataSrc = new OracleDataSource();
oraDataSrc.setURL("url");
oraDataSrc.setUser("username");
oraDataSrc.setPassword("pswd");
//return oraDataSrc;
return (Factory)oraDataSrc;

}

}






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to terminate long running transactions in 2.4.0?

2020-01-27 Thread Ilya Kasnacheev
Hello!

I think that killing the originator nodes of these transactions should
eventually cause them to terminate, unless there's a hard VM-level deadlock.

Regards,
-- 
Ilya Kasnacheev
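
As a related note, the txnTimeout mentioned in the question can be set
cluster-wide through TransactionConfiguration; a configuration sketch (the
30-second value is an arbitrary example):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="transactionConfiguration">
        <bean class="org.apache.ignite.configuration.TransactionConfiguration">
            <!-- Default timeout, in ms, for transactions that don't set one. -->
            <property name="defaultTxTimeout" value="30000"/>
        </bean>
    </property>
</bean>
```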


Sat, Jan 25, 2020 at 01:57, src wrote:

> Cluster details: 3 servers, 4 clients.
>
> Ignite version: 2.4.0
>
> Number of caches: 4
>
> I am observing a lot of "Found long running transactions/cache futures"
> messages with state=ROLLED_BACK for transactions on a particular cache,
> which is causing PME to fail to start when any server node is booted. It
> looks like txnTimeout was not set for transactions started on this
> particular cache. I added the txnTimeout, which will prevent any further
> "long running transactions", but how do I eliminate the current "long
> running transactions"? Rebooting the client instances did not remove these
> transactions.
>
> 1) Is there a way to terminate these transactions?
>
> 2) If I delete the particular cache will these transactions be terminated?
>
> Thanks!
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Launched Ignite meetups and redesigned events pages

2020-01-27 Thread Alexey Zinoviev
Count me in!

Mon, Jan 27, 2020 at 15:12, Dmitriy Pavlov wrote:

> Hi Denis,
>
> yes, sorry for the late reply; I just double-checked that I can access the
> answers. Additionally, Ksenia R. has access to the proposals.
>
> Anyone from the PMC who would like to volunteer as the PMC Representative
> for Apache Ignite local meetups is always welcome (according to
> https://www.apache.org/foundation/marks/events - the ASF events branding
> policy).
>
> Moreover, not-yet-PMC volunteers who would help with talk preparation are
> also welcome. I strongly believe we should not be too formal here.
>
> Sincerely,
> Dmitriy Pavlov
>
>
>
> Thu, Jan 9, 2020 at 23:47, Denis Magda wrote:
>
> > Dmitry,
> >
> > We've also added the reference to the form on the event's webpage. Btw,
> > could you remind me who will be receiving the proposals - you, I and some
> > other folks or all @dev list?
> >
> > -
> > Denis
> >
> >
> > On Thu, Jan 9, 2020 at 1:53 AM Dmitriy Pavlov 
> wrote:
> >
> >> Hi Igniters, Mauricio, Ignacio, and Denis,
> >>
> >> Thank you all for updating these pages.
> >>
> >> I would like to stress just one thing:
> >> should you have in mind some topic you can run talk about, please feel
> >> absolutely free to fill submission proposal.
> >>
> >> To submit proposal you need to fill in this form
> >>
> >>
> https://docs.google.com/forms/d/e/1FAIpQLSdiY7movHKvyWg3gOVedHgukJJnNiaejSO_X838vBseL9VmiQ/viewform
> >> .
> >>
> >> Sincerely,
> >> Dmitriy Pavlov
> >>
> >> Tue, Jan 7, 2020 at 02:48, Denis Magda wrote:
> >>
> >> > Igniters,
> >> >
> >> > I've just merged changes contributed by Mauricio and Ignacio, who
> helped
> >> > us to prepare professional webpages for Ignite meetup groups around
> the
> >> > world [1] and for Ignite events [2]. Now you can easily enroll in a
> >> meetup
> >> > group nearby or sign up for an upcoming event to be hosted one of our
> >> > experts.
> >> >
> >> > Please check them out and share your feedback. In the meantime, three
> of
> >> > us are going to carry on with optimizations and changes making the
> >> website
> >> > more useful for developers as well as more searchable/discoverable
> from
> >> > various search engines.
> >> >
> >> > [1] https://ignite.apache.org/meetup-groups.html
> >> > [2] https://ignite.apache.org/events.html
> >> >
> >> > -
> >> > Denis
> >> >
> >>
> >
>


Re: Launched Ignite meetups and redesigned events pages

2020-01-27 Thread Dmitriy Pavlov
Hi Denis,

yes, sorry for the late reply; I just double-checked that I can access the
answers. Additionally, Ksenia R. has access to the proposals.

Anyone from the PMC who would like to volunteer as the PMC Representative
for Apache Ignite local meetups is always welcome (according to
https://www.apache.org/foundation/marks/events - the ASF events branding
policy).

Moreover, not-yet-PMC volunteers who would help with talk preparation are
also welcome. I strongly believe we should not be too formal here.

Sincerely,
Dmitriy Pavlov



Thu, Jan 9, 2020 at 23:47, Denis Magda wrote:

> Dmitry,
>
> We've also added the reference to the form on the event's webpage. Btw,
> could you remind me who will be receiving the proposals - you, I and some
> other folks or all @dev list?
>
> -
> Denis
>
>
> On Thu, Jan 9, 2020 at 1:53 AM Dmitriy Pavlov  wrote:
>
>> Hi Igniters, Mauricio, Ignacio, and Denis,
>>
>> Thank you all for updating these pages.
>>
>> I would like to stress just one thing:
>> should you have in mind some topic you can run talk about, please feel
>> absolutely free to fill submission proposal.
>>
>> To submit proposal you need to fill in this form
>>
>> https://docs.google.com/forms/d/e/1FAIpQLSdiY7movHKvyWg3gOVedHgukJJnNiaejSO_X838vBseL9VmiQ/viewform
>> .
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>> Tue, Jan 7, 2020 at 02:48, Denis Magda wrote:
>>
>> > Igniters,
>> >
>> > I've just merged changes contributed by Mauricio and Ignacio, who helped
>> > us to prepare professional webpages for Ignite meetup groups around the
>> > world [1] and for Ignite events [2]. Now you can easily enroll in a
>> meetup
>> > group nearby or sign up for an upcoming event to be hosted one of our
>> > experts.
>> >
>> > Please check them out and share your feedback. In the meantime, three of
>> > us are going to carry on with optimizations and changes making the
>> website
>> > more useful for developers as well as more searchable/discoverable from
>> > various search engines.
>> >
>> > [1] https://ignite.apache.org/meetup-groups.html
>> > [2] https://ignite.apache.org/events.html
>> >
>> > -
>> > Denis
>> >
>>
>


Re: Lost partitions automatically reset

2020-01-27 Thread j_recuerda
I have two different scenarios or behaviors, and neither is working as I
would expect. One is what I am experiencing in my project (the whole project)
and the other is what I am experiencing in a toy project created to try to
reproduce it. In both cases, I am using Ignite 2.7.6.

This is the code I am using for the toy project (GitHub:
https://github.com/jrecuerda/IgnitePlayground/tree/master/PartitionLossPolicy
). It is written in Kotlin, but I think it is quite simple, so it should be
understandable even if you don't know Kotlin.

Steps to reproduce:
  - Run 5 nodes (NodeStartup.kt).
  - Run Client.kt, which activates the cluster and inserts some data into an
IgniteCache.
  - Shut down 2 out of the 5 nodes.
  - Run Client.kt again.
  * Calling lostPartitions() returns an empty list; I would expect it
to return some partitions, since backups are set to one and two nodes were
turned off.
  * When trying to put data into the cache, even when the
partitionLossPolicy is set to IGNORE, it throws:
Exception in thread "main"
org.apache.ignite.cache.CacheServerNotFoundException: Failed to map keys for
cache (all partition nodes left the grid).
at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1321)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1758)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1108)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:820)
at jrh.ClientKt.insertData(Client.kt:30)
at jrh.ClientKt.main(Client.kt:42)
at jrh.ClientKt.main(Client.kt)
Caused by: class
org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException:
Failed to map keys for cache (all partition nodes left the grid).
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapSingleUpdate(GridNearAtomicSingleUpdateFuture.java:562)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:454)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:443)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1153)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:611)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2449)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2426)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1105)
... 4 more

Thank you!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data Load to Ignite cache is very slow from Oracle Table

2020-01-27 Thread Igor Belyakov
Hi,

An example of CacheJdbcPojoStore configuration via code is available here
(check the "Java Configuration" tab):
https://www.gridgain.com/docs/latest/developers-guide/persistence/external-storage#cachejdbcpojostore


Regards,
Igor Belyakov

On Mon, Jan 27, 2020 at 12:44 PM nithin91 <
nithinbharadwaj.govindar...@franklintempleton.com> wrote:

> Hi Mikael,
>
> Thanks for your quick response.
>
> I have gone through the documentation regarding the use of the
> IgniteCache.loadCache method.
>
> Documentation Link:
> https://apacheignite.readme.io/docs/3rd-party-store#section-loadcache-
>
> The documentation mentions enabling the JDBC POJO store manually in the
> Ignite XML configuration file (or via code).
>
> Can you please provide a reference link on how to enable the JDBC POJO
> store via code?
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Does this Config Reveal any Problem?

2020-01-27 Thread v-shaal
I am working with the Kafka streamer, which starts out putting about 10k
records/sec, but after around 1 million records are in the cache it slows
down to 2000 rec/sec, then 500, then 100. I cannot figure out what is going
wrong, whether it is related to data pages, threads, or something else.

*Following are the logs :* 

Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=367b9688, uptime=00:50:00.246]
^-- H/N/C [hosts=5, nodes=6, CPUs=80]
^-- CPU [cur=4.6%, avg=23.52%, GC=0%]
^-- PageMemory [pages=12052]
^-- Heap [used=3222MB, free=77.04%, comm=4085MB]
^-- Off-heap [used=47MB, free=99.93%, comm=10576MB]
^--   sysMemPlc region [used=0MB, free=99.21%, comm=40MB]
^--   default region [used=0MB, free=100%, comm=256MB]
^--   500MB_Region region [used=46MB, free=99.91%, comm=10240MB]
^--   TxLog region [used=0MB, free=100%, comm=40MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=6, qSize=0]
[2020-01-27 09:48:19,368][INFO ][grid-timeout-worker-#39][IgniteKernal]
FreeList [name=null, buckets=256, dataPages=8527, reusePages=0]

*Cache Config 
*
[cache configuration XML stripped by the mailing list archive]
*Ignite and Data region config*

[Ignite and data region configuration XML stripped by the mailing list
archive]

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data Load to Ignite cache is very slow from Oracle Table

2020-01-27 Thread nithin91
Hi Mikael,

Thanks for your quick response.

I have gone through the documentation regarding the use of the
IgniteCache.loadCache method.

Documentation Link:
https://apacheignite.readme.io/docs/3rd-party-store#section-loadcache-

The documentation mentions enabling the JDBC POJO store manually in the
Ignite XML configuration file (or via code).

Can you please provide a reference link on how to enable the JDBC POJO store
via code?








--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data Load to Ignite cache is very slow from Oracle Table

2020-01-27 Thread Mikael

Hi!

If you use put() to insert the data, it's not the fastest way; putAll(),
IgniteCache.loadCache(), or a data streamer is usually much faster. It
depends a little on how you use your data: a streamer is fast, but you can't
expect all data to be available until you close or flush the streamer. There
are many examples in the documentation.
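
A minimal data streamer sketch for reference; the cache name "ProdCache" and
the Long/String key-value types are placeholders. addData() batches entries
per node, which is why it usually beats per-entry put():

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerLoad {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("ignite.xml");
             IgniteDataStreamer<Long, String> streamer =
                 ignite.dataStreamer("ProdCache")) {
            // Overwrite is off by default, which gives the best throughput.
            streamer.allowOverwrite(false);

            for (long i = 0; i < 100_000; i++)
                streamer.addData(i, "row-" + i);

            // Entries are not guaranteed to be in the cache until the
            // streamer is flushed or closed.
            streamer.flush();
        }
    }
}
```

When reading rows from a JDBC ResultSet, the loop body would call
streamer.addData(key, value) for each row instead of cache.put().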


Mikael

On 2020-01-27 at 09:36, nithin91 wrote:

Hi

I am trying to load data from an Oracle table into an Ignite cache using the
cache store loadCache method.

The following logic is implemented in the loadCache method:

1. A JDBC connection is used to connect to the Oracle table, and the data is
available in a ResultSet cursor.
2. A while loop iterates over the ResultSet and inserts the data into the
cache.

Is there any other way to insert the data from an Oracle table into an
Ignite cache? If possible, please share sample code.







 










--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data Load to Ignite cache is very slow from Oracle Table

2020-01-27 Thread nithin91
It's taking almost 1 hour to load 0.1 million records using the ResultSet
cursor.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Data Load to Ignite cache is very slow from Oracle Table

2020-01-27 Thread nithin91
Hi 

I am trying to load data from an Oracle table into an Ignite cache using the
cache store loadCache method.

The following logic is implemented in the loadCache method:

1. A JDBC connection is used to connect to the Oracle table, and the data is
available in a ResultSet cursor.
2. A while loop iterates over the ResultSet and inserts the data into the
cache.

Is there any other way to insert the data from an Oracle table into an
Ignite cache? If possible, please share sample code.





   











--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/