Replicated In-memory synchronized cache with Oracle SE

2017-11-20 Thread pragmaticbigdata
Hi, I need to build a replicated cache in the app server's JVM that is synchronized with database changes made *by external applications*. Another requirement is to not change our current code base, which writes any data changes directly to the database (Oracle SE). We would like to make min

Re: Understanding faster batch inserts/updates with apache ignite 1.9

2017-03-15 Thread pragmaticbigdata
Thanks for the reply. > DML also implies query parsing, mapping to Java objects and other > preparation steps before the IgniteDataStreamer API is called. Thus the > performance difference. Does this mean the data streamer API would be faster for pre-loading or bulk data as compared to the streaming mo

Re: Understanding faster batch inserts/updates with apache ignite 1.9

2017-03-14 Thread pragmaticbigdata
Thanks for sharing the link. I have a couple of questions on the streaming mode support. 1. How soon are the cache entries added via the JDBC driver streaming mode available to other requests, like queries? Is it similar to how the IgniteDataStreamer API behaves, i.e. the cache entries are available as

Understanding faster batch inserts/updates with apache ignite 1.9

2017-03-10 Thread pragmaticbigdata
The latest release of Ignite introduces a streaming mode that enables DML execution in specific scenarios, such as *batch inserts and updates or data preloading*. I tried following the JIRA issue

Re: IGFS Questions

2017-01-27 Thread pragmaticbigdata
Thanks for the replies. I have a few follow-up questions. > Yes, as long as you have a Hadoop-compliant implementation of the S3 file system > (e.g. org.apache.hadoop.fs.s3.S3FileSystem). I will spend some time understanding what this means, but by "Hadoop-compliant implementation" are you hinting th

Kafka as a Persistent Store

2017-01-24 Thread pragmaticbigdata
Can I configure Kafka as a persistent store for my apache ignite cache? If so, I assume it could be configured in both write-through and write-behind modes. Kindly share some insights on the configuration of such an implementation. Thanks. -- View this message in context: http://apache-ignite
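Ignite has no Kafka cache store out of the box; you would implement the CacheStore interface yourself and wire it into the cache configuration. A hedged Spring XML sketch of such wiring in write-through plus write-behind mode, where `com.example.KafkaCacheStore` is a hypothetical custom implementation:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="kafkaBackedCache"/>
    <!-- write every cache update through to the store ... -->
    <property name="writeThrough" value="true"/>
    <!-- ... but batched and asynchronously, i.e. write-behind -->
    <property name="writeBehindEnabled" value="true"/>
    <!-- hypothetical store that publishes updates to a Kafka topic -->
    <property name="cacheStoreFactory">
        <bean class="javax.cache.configuration.FactoryBuilder"
              factory-method="factoryOf">
            <constructor-arg value="com.example.KafkaCacheStore"/>
        </bean>
    </property>
</bean>
```

One caveat worth considering: Kafka works well as a write-only sink, but read-through (loading a missing key from the store) maps poorly onto it, since a topic is not a key-value lookup structure.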

IGFS Questions

2017-01-24 Thread pragmaticbigdata
I have some questions about deploying IGFS as a cache layer, given that Ignite can be deployed both as a key-value store and as a file system. 1. How does IGFS behave when deployed in standalone mode? I wanted to confirm that there is no durability in this mode. Assuming I persist a parquet file on I

Re: Index Maintenance During Transactions

2017-01-19 Thread pragmaticbigdata
Ok. Just so that we are on the same page, the transaction duration would include the amount of time to perform the data updates plus the time to rebuild/update the indexes, right?

Re: Consistency Guarantees & Smart Updates with Spark Integration

2017-01-17 Thread pragmaticbigdata
1.a. So basically Ignite cannot help much here. Would wrapping the save in an IgniteTransaction help? When a Spark node crashes I can roll back the transaction so that the data in the data grid stays consistent. This also means I should be using the IgniteTransaction API for all the operations performed

Re: Index Maintenance During Transactions

2017-01-17 Thread pragmaticbigdata
Ok. So this means that the index maintenance is done in the background. Is there some kind of flag that stops queries executed after the transaction commit, but before the indexes are completely rebuilt, from using those indexes? If not, would the queries that use these indexes give wrong results? Ass

Merging with RDBMS transaction

2017-01-17 Thread pragmaticbigdata
I would like to confirm my understanding of CacheStoreSession. 1. Assuming my persistence store is Oracle, using CacheStoreSession & the out-of-the-box CacheJdbcStoreSessionListener

Data Rebalancing & Transactions

2017-01-17 Thread pragmaticbigdata
I have a few questions about data rebalancing. 1. With synchronous data rebalancing, I understand that the newly added node will not be available for any cache operations till the data is completely rebalanced across the cluster. This means that other nodes, including the one from which the data is being tra

Consistency Guarantees & Smart Updates with Spark Integration

2017-01-17 Thread pragmaticbigdata
I have a couple of questions about the Ignite-Spark integration. 1. What consistency guarantees does Ignite provide when saving an RDD to the data grid? E.g. assuming the Spark RDD holds 1 million records and I call the sharedRdd.savePairs() API: a. What happens if the Spark worker crashes aft

Index Maintenance During Transactions

2017-01-16 Thread pragmaticbigdata
When updating a partitioned/replicated cache in a transaction, does Ignite update the indexes as part of the transaction, or is that done after the transaction commits? In either case, is this configurable? Thanks!

Re: Re-partitioning when partition key changes

2017-01-12 Thread pragmaticbigdata
Ok. That clears up my understanding of how partitioning works. Thanks

Re: Re-partitioning when partition key changes

2017-01-11 Thread pragmaticbigdata
We have a dynamic model, hence we plan to use the BinaryObject API. Our cache looks like IgniteCache. Out of the attributes that could be part of the cache key, we plan to define the partition key through an AffinityKeyMapper implementation. In the affinityKey(key) method we wou

Re-partitioning when partition key changes

2017-01-11 Thread pragmaticbigdata
Does Ignite (1.8) support re-partitioning of data at runtime? We have a use case where the partition key (a group of attributes) could change based on user input. This should be reflected in the data grid too. Does Ignite support this capability? If it does, how is it implemented? Is there a specific API

Re: Apache Spark & Ignite Integration

2016-12-12 Thread pragmaticbigdata
Sure. 1. The first diagram is for understanding the data visibility aspect of the spark integration. Given that a cache exists on the ignite node, spark tries to create a data frame from the IgniteRDD and perform an action (df.show()) on it. Concurrently if there are changes made to the cache (eit

Re: Apache Spark & Ignite Integration

2016-12-05 Thread pragmaticbigdata
I have tried translating my understanding into these two images. Kindly let me know if the diagrams correctly depict the Ignite-Spark integration in terms of data visibility and persistence.

Re: Apache Spark & Ignite Integration

2016-11-21 Thread pragmaticbigdata
Thanks for the follow up. > Data will be in sync because it's stored in Ignite cache. IgniteRDD uses > Ignite API to update it and you can do this as well in your code. > > There is no copy of the data maintained in Spark, it's always stored in > Ignite caches. Spark runs Ignite client(s) that

Re: Apache Spark & Ignite Integration

2016-11-21 Thread pragmaticbigdata
> use Ignite to update the data in transactional manner and Spark for > analytics. Yes, but the data would not be in sync when both (updates and analytics) are done concurrently, right? I will have to discard the Spark rdd/dataset/dataframe every time the data is updated in Ignite through the Igni

Re: Apache Spark & Ignite Integration

2016-11-18 Thread pragmaticbigdata
a. No, I have not executed any tests. I am doing a theoretical study first to understand the memory footprint and data movement between the Spark & Ignite nodes. c. So basically there is no use case when working with Spark (for data processing) and Ignite (for in-memory data storage) that can benef

Re: Apache Spark & Ignite Integration

2016-11-17 Thread pragmaticbigdata
Appreciate your follow-ups. a. "Data is stored in Ignite and Spark will fetch data for a particular partition when you execute something." Does IgniteRDD (i.e. Spark) fetch the data to the closest Spark node, which probably resides on the same server? One of the earlier responses mentions that this

Re: Apache Spark & Ignite Integration

2016-11-16 Thread pragmaticbigdata
Ok. a. From your comments I understand that there is only one copy of the data, which resides on the Ignite cluster. The data is not copied to the Spark nodes while executing the lineage graph consisting of transformations & actions. If my understanding is correct, what happens when a transformation

Re: Apache Spark & Ignite Integration

2016-11-15 Thread pragmaticbigdata
Thanks for sharing the JIRA ticket. Do you have inputs on the additional questions I asked about the shared RDD implementation? They aren't related to the dataframe/dataset support. Looking forward to your thoughts.

Re: Apache Spark & Ignite Integration

2016-11-14 Thread pragmaticbigdata
Ok. Is there a JIRA task that I can track for the dataframes and datasets support? I do have a couple of follow-up questions to understand the memory representation of the shared RDD support that Ignite brings with the Spark integration. 1. Could you detail how shared RDDs are implemented wh

Apache Spark & Ignite Integration

2016-10-27 Thread pragmaticbigdata
I am trying out the integration of Ignite with Spark and I have a few questions related to how we integrate them and the advantages that we could get from the integration. 1. How can I convert the IgniteRDD (fetched using the IgniteContext) to a Spark dataset? 2. Once converted into Spark dataset,

Re: Implementing Security Plugin

2016-08-19 Thread pragmaticbigdata
On further troubleshooting, I came across the DiscoverySpiNodeAuthenticator interface. It seems that the authenticateNode() method is getting called on the server node whenever a node (client/server) joins the cluster. 1. Is this (DiscoverySpiNodeAuthenticator) the only interface we need to implem

Implementing Security Plugin

2016-08-18 Thread pragmaticbigdata
I am using apache ignite version 1.6 and trying to implement a security plugin by following the post (http://smartkey.co.uk/development/securing-an-apache-ignite-cluster/). Since the plugin API has changed after the blog post, I am unable to activate the plugin and configure only an authenticated a

Re: Node authentication using security credentials

2016-08-11 Thread pragmaticbigdata
When you mention "Ignite has all the hooks in the code", I think you are referring to the plugin support that Ignite provides and the classes under the package "org.apache.ignite.plugin.security". What could be the reasons for having a security layer, given that Ignite would be deployed on a private

Understanding Grid Gain Multi-tenancy

2016-08-11 Thread pragmaticbigdata
Going through the GridGain docs, I am trying to understand how the multi-tenancy feature works. With this enterprise feature one could specify permissions at the cache level. The feature guarantees that a tenant will never be able to read/updat

Node authentication using security credentials

2016-08-08 Thread pragmaticbigdata
It seems ignite (version 1.6) provides a way to specify security credentials but I do not see a way to set it in the IgniteConfiguration. Is it meant to b

Multi-threaded transactions

2016-07-27 Thread pragmaticbigdata
I am using Ignite 1.6. I am trying to execute two transactions in parallel, expecting one of them to fail since the same cache keys are updated. I found that the second transaction doesn't fail and the commit succeeds. Below is the code I am trying out; let me know what is missing. After startin
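For context: in Ignite a concurrent transaction over the same keys is only rejected under OPTIMISTIC concurrency with SERIALIZABLE isolation; under the default pessimistic mode the second transaction simply blocks on the key lock and then commits. A plain-Java sketch (deliberately not the Ignite API) of why optimistic commit validation fails the second writer:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticSketch {
    public static void main(String[] args) {
        AtomicInteger entry = new AtomicInteger(10); // the shared cache entry
        int readA = entry.get(); // "transaction" A reads 10
        int readB = entry.get(); // "transaction" B reads 10 concurrently
        // Commit = validate the entry still holds the value we read, then write.
        boolean commitA = entry.compareAndSet(readA, readA + 1); // succeeds
        boolean commitB = entry.compareAndSet(readB, readB + 5); // fails: A changed it
        System.out.println(commitA + " " + commitB);
    }
}
```

Prints `true false`: the second commit is rejected because its read snapshot is stale, which is the behavior the poster expected from Ignite.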

Re: Adding a binary object to two caches fails with FULL_SYNC write mode configured for the replicated cache

2016-07-18 Thread pragmaticbigdata
Please see my comments below > Currently you have to make a copy of BinaryObject for each cache operation > because it's not immutable and internally caches some information for > performance reasons. Isn't the BinaryObject not bound

Adding a binary object to two caches fails with FULL_SYNC write mode configured for the replicated cache

2016-07-18 Thread pragmaticbigdata
I am using ignite version 1.6. In my use case I have two caches with the below configuration CacheConfiguration cfg1 = new CacheConfiguration<>("Cache 1"); cfg1.setCacheMode(CacheMode.PARTITIONED); cfg1.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); IgniteCach

Re: Iterating through a BinaryObject cache fails

2016-07-14 Thread pragmaticbigdata
Cool, thanks, that helped. I assume that without that setting, the cache store on the server node was trying to de-serialize the cache key BinaryObject and hence it failed. I will implement the CacheStore interface to optimize the bulk loads and updates to the cache.

Re: Iterating through a BinaryObject cache fails

2016-07-14 Thread pragmaticbigdata
Here's a gist to share the relevant code - https://gist.github.com/shahamit/80b1c63e9c9c89b3ff32578a77a54b54. testLoad() is called by the client node.

Re: Iterating through a BinaryObject cache fails

2016-07-14 Thread pragmaticbigdata
In my use case, I am working directly with BinaryObjects, creating them through BinaryObjectBuilder. In my last test case where I get this exception, I am performing the below steps on the *client* node: 1. Start a server node (no caches exist) 2. Start a client node and a. create the c

Re: Iterating through a BinaryObject cache fails

2016-07-13 Thread pragmaticbigdata
Just bringing this discussion back up again since I have been experiencing the exception mentioned on this thread again. Caused by: class org.apache.ignite.IgniteCheckedException: Class definition was not found at marshaller cache and local file. [id=-1556878003, file=E:\ApacheIgnite\apache-ign

Re: Understanding data store and partitioning

2016-07-13 Thread pragmaticbigdata
Thanks for the follow-up. > Cache is a key-value store at first. Yes, you can use the ID of the record (from > the database) as the cache key. Let's take the example of a Person table in an RDBMS database: Person Id | Name | Address | Age | Date Of Birth. Assuming "Person Id" is our cache key, to implement

Re: Understanding data store and partitioning

2016-07-13 Thread pragmaticbigdata
> You can find partition number using: affinity.partition(key) My question was: to get the partition id we need the cache key, but when doing the initial load into Ignite we don't have the cache key. Does that mean we cannot have optimized (i.e. partition-aware) data loading? > Yes

Re: Cache EntryProcessor.process is getting invoked twice when cache.invoke() method is called within Transaction, in atomic mode its invoked once.

2016-07-13 Thread pragmaticbigdata
I am trying to understand this behavior of the entry processor. I could see that the entry processor is called on all server nodes including the backup nodes and the client node. The entry processor is executed on the client node when it is invoked from a transaction irrespective of the isolation l

Understanding data store and partitioning

2016-07-13 Thread pragmaticbigdata
Following the documentation on data loading, I have some questions with regard to Ignite version 1.6. 1. How does Ignite derive the partition id from the cache key? What is the relation between the partition id and the affinity key? 2. Partitio
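Conceptually (a simplification, not Ignite's actual RendezvousAffinityFunction code), the partition id is a hash of the affinity key folded into a fixed partition count; when no separate affinity key is configured, the whole cache key plays that role:

```java
public class PartitionSketch {
    // Simplified model: hash the affinity key into one of N partitions.
    static int partition(Object affinityKey, int partitions) {
        int h = affinityKey.hashCode();
        return (h & Integer.MAX_VALUE) % partitions; // non-negative modulo
    }

    public static void main(String[] args) {
        int p = partition("Person-42", 1024); // 1024 is Ignite's default partition count
        System.out.println(p >= 0 && p < 1024);
    }
}
```

Keys that share the same affinity key hash to the same partition, which is what makes partition-aware loading and collocated joins possible.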

Handling external updates to persistence store

2016-07-07 Thread pragmaticbigdata
We are using Ignite version 1.6. Our cache is configured as write-through for data consistency reasons. What are the different ways to update the Ignite cache when the RDBMS store is updated by other applications? One option would be to write a hook that calls the data load APIs of Ignite (CacheStor

Cache Repartitioning

2016-07-07 Thread pragmaticbigdata
We have an IgniteCache. In our use case we implement the AffinityKeyMapper to specify the field(s) from the BinaryObject that should be part of the affinity key. As I understand it, Ignite ignores the cache configuration that is passed in ignite.getOrCreateCache(config) for an existing cach

Re: Starting H2 Debug Console On Remote Server

2016-07-05 Thread pragmaticbigdata
I think it expires in 30 mins. Currently I access the H2 debug console on my local machine through Chrome. I use it to verify that the data is partitioned correctly based on the affinity key I have configured. Once the session expires, the only option left is to restart Ignite and execute the d

Re: Error while loading data into cache with BinaryObject as key field

2016-07-05 Thread pragmaticbigdata
Any comments here?

Re: Starting H2 Debug Console On Remote Server

2016-07-05 Thread pragmaticbigdata
I don't think the debug console session timeout problem would be resolved by this fix. Are you sure? I started Ignite on my local machine and I am able to access the H2 debug console, but the UI session times out after a certain duration.

Re: Starting H2 Debug Console On Remote Server

2016-07-04 Thread pragmaticbigdata
Ok, I understand: I need to start the Ignite server instance on my local machine to view the H2 debug console. I have been experiencing a web session timeout on the H2 debug console, after which I see the H2 login screen. If I specify the db name in the JDBC URL as the random id that was earlier gene

Re: Iterating through a BinaryObject cache fails

2016-07-04 Thread pragmaticbigdata
Ok. Thanks for sharing the internals. By specifying the withKeepBinary flag, I was able to iterate through the cache and add or drop fields from a BinaryObject at runtime. This was the original purpose of iterating through the cache. The changes (of adding and/or dropping a field) made to the cac

Re: Error while loading data into cache with BinaryObject as key field

2016-07-04 Thread pragmaticbigdata
1. The strange part is that even when the binary objects are created with different type names (different for the key BinaryObject and for the value BinaryObject), I am able to query the field that is part of the key BinaryObject by specifying the value type. 2. By "binary type descriptor" I understand that

Starting H2 Debug Console On Remote Server

2016-07-03 Thread pragmaticbigdata
I am running Ignite 1.6 on Oracle Linux Server Release 6.4 (RHEL). I set the -J-DIGNITE_H2_DEBUG_CONSOLE=true parameter while starting Ignite. From the logs I can see the below warnings: [22:54:11,223][WARNING][main][IgniteH2Indexing] Serialization of Java objects in H2 was enabled. [22:54:11,440]

Re: Error while loading data into cache with BinaryObject as key field

2016-07-03 Thread pragmaticbigdata
Thanks for pinpointing the issue. The issue got resolved after setting a custom key type, and the data got loaded into the cache. 1. The strange part is that even if I set different key and value types, the below query executes correctly: SqlQuery query = new SqlQuery<>(table.getCacheValueType

Re: Iterating through a BinaryObject cache fails

2016-07-03 Thread pragmaticbigdata
Ok. The test case worked after I specified withKeepBinary() when fetching the cache instance before iterating through it. I have the following questions to make sure I understand the internals of how it works. 1. Should I specify withKeepBinary() when creating the cache inst
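A way to picture withKeepBinary(): the cache view either hands back the stored binary form untouched, or runs a deserializer that needs the value's class on the local node. A plain-Java sketch (hypothetical names, not the Ignite API) of why iteration fails without it when the class is absent:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class KeepBinarySketch {
    // A cache "scan": either return raw stored entries, or deserialize each one.
    static List<String> iterate(List<String> stored, boolean keepBinary,
                                Function<String, Object> deserializer) {
        List<String> out = new ArrayList<>();
        for (String raw : stored) {
            if (keepBinary)
                out.add(raw);                              // no class needed
            else
                out.add(deserializer.apply(raw).toString()); // needs the value's class
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> stored = List.of("bin:A", "bin:B");
        // Models a node that does not have the value class on its classpath.
        Function<String, Object> missingClass = raw -> {
            throw new IllegalStateException("class not found");
        };
        System.out.println(iterate(stored, true, missingClass));
        try {
            iterate(stored, false, missingClass);
        } catch (IllegalStateException e) {
            System.out.println("fails without keepBinary");
        }
    }
}
```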

Re: Error while loading data into cache with BinaryObject as key field

2016-07-01 Thread pragmaticbigdata
On further tries, the data load program again started failing. The behavior is not consistent and I do not follow why and when it fails. To ease the understanding of the issue I have created a gist that includes the data load program. The program tries to load data from a csv file using

Re: Iterating through a BinaryObject cache fails

2016-07-01 Thread pragmaticbigdata
No, it's not related. The other thread is about a cache loading issue with an Ignite cache. This issue occurs while iterating through a cache that is preloaded with data. The exception is similar, though. I am not sure if it's something to do with BinaryObject. I have shared the data loading code

Re: Error while loading data into cache with BinaryObject as key field

2016-07-01 Thread pragmaticbigdata
On further troubleshooting I realized that the error appeared only when I have different typeNames used for the key and value objects added to the cache. BinaryObjectBuilder keyBuilder = ignite.binary().builder(table.getCacheKeyType()); BinaryObjectBuilder valueBuilder = i

BinaryObject Restrictions

2016-07-01 Thread pragmaticbigdata
From the documentation I understood that BinaryMarshaller is the default marshaller used for serializing and storing objects in the cache. So, all the Java POJO objects added to the cache are converted to the binary format (which is represented by the BinaryObject class) and deserialized whenever w

Iterating through a BinaryObject cache fails

2016-07-01 Thread pragmaticbigdata
I am using Ignite version 1.6 and I have a replicated cache that is pre-loaded. In order to try out the dynamic structure change ability of BinaryObjects, I tried iterating through the cache with different approaches. All of them fail with an error: Caused by: class org.apache.ignite.IgniteCheckedEx

Re: Error while loading data into cache with BinaryObject as key field

2016-06-30 Thread pragmaticbigdata
> What was returned as the type name before? As I see from your code, you used some getter to retrieve the name. I have an IgniteCache. The type names before were the strings "My_Object_ValueType" and "My_Object_KeyType". I didn't follow why it doesn't work with these type names.

Re: Error while loading data into cache with BinaryObject as key field

2016-06-30 Thread pragmaticbigdata
I realized that the error disappears if I hard-code the typeName while constructing the BinaryObjectBuilder: ignite.binary().builder("ConstantString"); A few questions based on this: 1. What is the significance of the typeName, apart from the fact that it is used while querying? 2. What are th

Error while loading data into cache with BinaryObject as key field

2016-06-30 Thread pragmaticbigdata
I am using Ignite version 1.6 and getting an error while loading data from a csv file through a DataStreamer into an IgniteCache. The error is: Failed to execute compound future reducer: GridCompoundFuture [rdc=null, initFlag=1, lsnrCalls=0, done=false, cancelled=false, err=null, futs=[true]] class org

Re: Ignite client reads old metadata even after cache is destroyed and recreated

2016-06-23 Thread pragmaticbigdata
Ok, thanks for your inputs. I understand binary objects better than before. A few follow-up questions. About the second point: > 2. It seems ignite starts reading the metadata in the background even > before the > Ignite instance is created. Where does it read the metadata from? > Metadata is acc

Ignite client reads old metadata even after cache is destroyed and recreated

2016-06-22 Thread pragmaticbigdata
I think I came across a bug while working with binary objects in ignite (version 1.6). The steps are 1. Create a cache with BinaryObject as the value. Add a field and a String value in it. 2. Destroy the cache using ignite.destroyCache(c) 3. Create the same cache but this time add the same field w
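The likely mechanism (a conceptual sketch, not Ignite internals): binary type metadata is registered cluster-wide by type name and is not dropped when a cache is destroyed, so re-adding the same field with a different type conflicts with the remembered schema:

```java
import java.util.HashMap;
import java.util.Map;

public class MetadataSketch {
    // Models a cluster-wide registry of field name -> field type that
    // survives cache destroy in this sketch, as binary metadata does.
    static final Map<String, String> fieldTypes = new HashMap<>();

    static void registerField(String field, String type) {
        String old = fieldTypes.putIfAbsent(field, type);
        if (old != null && !old.equals(type))
            throw new IllegalStateException(
                "Field '" + field + "' already registered as " + old);
    }

    public static void main(String[] args) {
        registerField("age", "String");   // step 1: field added as String
        // step 2: destroyCache() would NOT clear fieldTypes in this model
        try {
            registerField("age", "int");  // step 3: same field, new type
        } catch (IllegalStateException e) {
            System.out.println("conflict");
        }
    }
}
```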

Re: Slow Transaction Performance

2016-06-13 Thread pragmaticbigdata
Thanks for the replies. > do you execute transactions in parallel? Usually if keys that are used in > transactions are not intersected you can start several threads and execute > transactions from them simultaneously. The timings I posted are to update 11k entries in a cache that was pre-loaded with

Re: Simulating Graph Dependencies With Ignite

2016-06-10 Thread pragmaticbigdata
Thanks Alexei for your inputs. 1. How does the EntryProcessor detect which node the data resides on, given the key? I question it because I have a PARTITIONED cache for which I haven't set any affinity function; it is partitioning the cache based on the hash function of the cached

Re: Slow Transaction Performance

2016-06-09 Thread pragmaticbigdata
I forgot to mention that I have set backups to 0 in order to benchmark the best possible performance, so setting PRIMARY_SYNC does not have any effect. Do you have any test cases of your own that result in better numbers?

Slow Transaction Performance

2016-06-09 Thread pragmaticbigdata
I am executing my tests with apache ignite 1.6. With a 5 node cluster (servers=5, clients=0, CPUs=16, heap=17.0GB), I create a partitioned cache that is preloaded with 1 million entries (IgniteCache). Updating 11k records in this partitioned cache is taking between 1.6 secs while it takes 17 secs

Re: Simulating Graph Dependencies With Ignite

2016-06-08 Thread pragmaticbigdata
1. The code attempts to fetch the cache entry, update it and return an attribute of that cache entry. Assuming it would be faster to perform this operation on the node where the data resides, I was trying out affinity collocation. Kindly correct me if my assumption is wrong. 2. I added the if chec

Re: Simulating Graph Dependencies With Ignite

2016-06-07 Thread pragmaticbigdata
I tuned the application by batching the cache updates and making the query use the index. I wasn't able to make the affinity calls work. Alexei, can you please provide your inputs on the affinity code?

Re: Self Join Query As An Alternative To IN clause

2016-06-06 Thread pragmaticbigdata
Great. The query works now and it is 50% faster than the IN clause query. Could you explain why passing the object array directly didn't work, and why an array nested inside an array did?
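The array-in-array trick comes from Java varargs rather than from Ignite itself: SqlFieldsQuery.setArgs(Object... args) treats a bare Object[] as the whole argument list, so each element tries to bind to its own '?', while wrapping it in another array makes the inner array a single argument. A self-contained sketch of the mechanics:

```java
public class ArgsSketch {
    // Mimics a varargs binder like SqlFieldsQuery.setArgs(Object... args).
    static int boundParams(Object... args) {
        return args.length;
    }

    public static void main(String[] args) {
        Object[] ids = {1, 2, 3};
        System.out.println(boundParams(ids));               // spread: three parameters
        System.out.println(boundParams(new Object[]{ids})); // nested: one array parameter
    }
}
```

Prints 3 and then 1: the nested form is what lets the whole id array bind to a single `table(?)` style parameter.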

Re: How to connect/monitor ignite server through jmx client

2016-06-06 Thread pragmaticbigdata
Great. Works now. Thanks!

Re: How to connect/monitor ignite server through jmx client

2016-06-06 Thread pragmaticbigdata
How do you start ignite in verbose mode programmatically? I am starting ignite with Ignition.start("conf.xml");

Re: How to connect/monitor ignite server through jmx client

2016-06-06 Thread pragmaticbigdata
echo %IGNITE_JMX_PORT% gives the port that I have set. I haven't set it through the registry. I have set it from the UI (Advanced Settings -> Environment Variables -> Add).

Re: How to connect/monitor ignite server through jmx client

2016-06-06 Thread pragmaticbigdata
Hi Denis, setting the environment variable worked when starting Ignite from the command line on a Linux machine, but when I set the environment variable on my Windows desktop and start Ignite programmatically, it doesn't work. What are the alternatives?
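A likely explanation: ignite.sh is what translates IGNITE_JMX_PORT into the standard JMX agent system properties, so a programmatic Ignition.start() never runs that logic; the alternative is to pass the JMX flags to your own JVM launch (run configuration, launcher script, etc.). A sketch of the flags involved (standard JDK properties; the port value is just an example):

```java
public class JmxFlagSketch {
    // Builds the standard JDK JMX-agent flags that ignite.sh would derive
    // from IGNITE_JMX_PORT. These must be passed at JVM startup; setting
    // them after the JVM is already running has no effect.
    static String jmxFlags(String port) {
        return "-Dcom.sun.management.jmxremote"
             + " -Dcom.sun.management.jmxremote.port=" + port
             + " -Dcom.sun.management.jmxremote.authenticate=false"
             + " -Dcom.sun.management.jmxremote.ssl=false";
    }

    public static void main(String[] args) {
        System.out.println(jmxFlags("49112"));
    }
}
```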

Self Join Query As An Alternative To IN clause

2016-06-06 Thread pragmaticbigdata
I am using Apache Ignite 1.6. Executing an IN clause query on a cache containing 1 million entries took around 1.5 seconds. As a performance optimization suggested here, I tried out a join clause query, but query binding fails. SqlFieldsQuer

Re: Runtime error at IgniteSpiThread

2016-06-05 Thread pragmaticbigdata
I am facing a similar exception while starting apache ignite. I am using apache ignite 1.6. The exception is a bit different this time [09:27:59] Topology snapshot [ver=269, servers=1, clients=0, CPUs=2, heap=2.0GB] [09:27:59,957][SEVERE][exchange-worker-#46%null%][GridCachePartitionExchangeManage

Re: Loading cache with DataStreamer

2016-06-05 Thread pragmaticbigdata
Got it. Thanks for detailing it out.

Ignite Startup Failures

2016-06-04 Thread pragmaticbigdata
I am using Ignite 1.6 and trying to build up a cluster of multiple servers. While starting different nodes on my Linux VMs through the ignite.sh conf.xml command, I get an NPE. The complete console and server logs are shared here. The configuration file used to st

Re: Loading cache with DataStreamer

2016-06-04 Thread pragmaticbigdata
Thanks for the replies. Calling flush() and later close() made sure all the entries were added to the cache. I think this step should be added to the examples

Re: Loading cache with DataStreamer

2016-06-04 Thread pragmaticbigdata
I tried that, but instead of loading 10 entries it loaded only 99330. Doesn't the data streamer do a parallel load through multiple threads and hence return a future object that we need to wait on? Thanks!
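Yes, the streamer loads asynchronously: addData() only buffers entries, and they are guaranteed to reach the cache only after flush() or close(). A plain-Java model (not the real streamer, which also partitions and parallelizes the load) of that buffering:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class StreamerSketch {
    // Conceptual model of IgniteDataStreamer: addData() buffers locally;
    // entries reach the cache when flush() (or close()) runs.
    private final Map<Integer, String> cache = new HashMap<>();
    private final Map<Integer, String> buffer = new LinkedHashMap<>();

    void addData(int key, String val) { buffer.put(key, val); }

    void flush() {                    // what streamer.flush()/close() guarantees
        cache.putAll(buffer);
        buffer.clear();
    }

    int cacheSize() { return cache.size(); }

    public static void main(String[] args) {
        StreamerSketch s = new StreamerSketch();
        for (int i = 0; i < 100; i++) s.addData(i, "v" + i);
        System.out.println(s.cacheSize()); // 0: everything still buffered
        s.flush();
        System.out.println(s.cacheSize()); // 100: visible after flush
    }
}
```

This is why a count taken before flush()/close() comes up short: the missing entries are still sitting in per-node buffers.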

Loading cache with DataStreamer

2016-06-04 Thread pragmaticbigdata
I am using Apache Ignite version 1.6. Assuming a data streamer is a better approach, performance-wise, for bulk loading data, I am trying to execute the below code, which hangs: IgniteDataStreamer streamer = ignite.dataStreamer(cacheName); List futures = Lists.newArrayList(); fo

How to connect/monitor ignite server through jmx client

2016-06-04 Thread pragmaticbigdata
I am using apache ignite 1.6 in my tests. I am running the test by programmatically starting ignite through Ignition.start(conf.xml). Other nodes that join the cluster are started using the ./ignite.sh command. I tried starting the server node by passing the jmx configuration parameters in the com

Re: Simulating Graph Dependencies With Ignite

2016-06-03 Thread pragmaticbigdata
Alexei, what do you think about the object size and the affinity code? Thanks, Amit.

Re: Simulating Graph Dependencies With Ignite

2016-06-02 Thread pragmaticbigdata
1. I will need to instrument and calculate the size of the ProductDetail object. It has 7 Double attributes along with a String, ~100-150 bytes. 2. Below is the code that fails: Map> resultMap = productCache.invokeAll(keyList.stream() .map(String::valueOf).collect(Collectors.toSet

Re: Simulating Graph Dependencies With Ignite

2016-06-02 Thread pragmaticbigdata
Thanks Alexei for the responses. 1. Ok, I will try out the GC settings and off-heap memory usage. I have an IgniteCache where ProductDetails is my custom model. I have implemented custom logic using directed acyclic graphs. 2. I tried executing it with cache.invokeAll. The first run faile

Re: ClassNotFoundException with affinity run

2016-06-02 Thread pragmaticbigdata
Ok, great. That explains why the app worked after the jar deployment. Thanks. I wonder if you have any inputs on the performance issues I am experiencing with updating an Ignite cache; it is tracked in a different discussion thread.

Re: Simulating Graph Dependencies With Ignite

2016-06-01 Thread pragmaticbigdata
Hi Alexei, I was able to implement this custom logic based on your guidance below, thanks for that. I do experience a couple of performance issues. 1. With a cache having 1 million entries, updating 11k entities 5 times (cached entities are updated multiple times in the application) took 1 min.

Re: ClassNotFoundException with affinity run

2016-06-01 Thread pragmaticbigdata
I was able to make the application work by deploying my application jar on all the nodes in the cluster (adding it to the lib directory). I didn't follow why peerClassLoading didn't work. Any idea?

Re: ClassNotFoundException with affinity run

2016-06-01 Thread pragmaticbigdata
Note that peerClassLoading is enabled on all the server nodes. Shouldn't this cause all the nodes to dynamically load the custom class?

ClassNotFoundException with affinity run

2016-06-01 Thread pragmaticbigdata
I am trying to execute a compute task on the node where the data is located, following the example shown here - https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheAffinityExample.java#L84 I get a java.lang.ClassNotFoundException: com.ignite
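Peer class loading must be enabled explicitly, and with the same setting on every node, servers and clients alike; a mismatch can keep a node from loading the task class at all. A minimal Spring XML sketch:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- must have the same value on ALL nodes, servers and clients -->
    <property name="peerClassLoadingEnabled" value="true"/>
</bean>
```

Note that peer class loading covers classes shipped with compute tasks; classes a server needs for other purposes (for example a CacheStore implementation or data deserialization) generally still have to be deployed in each node's lib directory, which matches the jar-deployment workaround described elsewhere in this thread.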

Simulating Graph Dependencies With Ignite

2016-05-27 Thread pragmaticbigdata
Hello, I have started exploring Apache Ignite by following the introductory videos. It looks quite promising, and I wanted to understand if it would be well suited for the use case I describe below. If so, I would be glad to hear how I could approach it. The use case: we are trying to im