Exception when Ignite server starts up and also when a new node joins the cluster

2017-06-13 Thread jaipal
I am getting the following exception when starting Ignite in server mode (2.0);
after that, Ignite stops all the caches.

Exception in thread "main" class org.apache.ignite.IgniteException: Attempted to release write lock while not holding it [lock=7eff50271580, state=00022639]
    at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:949)
    at org.apache.ignite.Ignition.start(Ignition.java:325)
    at com.gvc.impl.IgniteEDSBootstrap.main(IgniteEDSBootstrap.java:31)
Caused by: class org.apache.ignite.IgniteCheckedException: Attempted to release write lock while not holding it [lock=7eff50271580, state=00022639]
    at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7242)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:258)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:231)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:158)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:150)
    at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.onKernalStart0(GridCachePartitionExchangeManager.java:463)
    at org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter.onKernalStart(GridCacheSharedManagerAdapter.java:108)
    at org.apache.ignite.internal.processors.cache.GridCacheProcessor.onKernalStart(GridCacheProcessor.java:911)
    at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1013)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1895)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1647)
    at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1075)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:595)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:519)
    at org.apache.ignite.Ignition.start(Ignition.java:322)
    ... 1 more
Caused by: java.lang.IllegalMonitorStateException: Attempted to release write lock while not holding it [lock=7eff50271580, state=00022639]
    at org.apache.ignite.internal.util.OffheapReadWriteLock.writeUnlock(OffheapReadWriteLock.java:259)
    at org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.writeUnlock(PageMemoryNoStoreImpl.java:495)
    at org.apache.ignite.internal.processors.cache.database.tree.util.PageHandler.writeUnlock(PageHandler.java:379)
    at org.apache.ignite.internal.processors.cache.database.tree.util.PageHandler.writePage(PageHandler.java:288)
    at org.apache.ignite.internal.processors.cache.database.tree.util.PageHandler.initPage(PageHandler.java:225)
    at org.apache.ignite.internal.processors.cache.database.DataStructure.init(DataStructure.java:328)
    at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.initTree(BPlusTree.java:796)
    at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.initTree(BPlusTree.java:781)
    at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataTree.<init>(IgniteCacheOffheapManagerImpl.java:1423)
    at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.createCacheDataStore0(IgniteCacheOffheapManagerImpl.java:728)
    at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.createCacheDataStore(IgniteCacheOffheapManagerImpl.java:706)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.<init>(GridDhtLocalPartition.java:163)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.createPartition(GridDhtPartitionTopologyImpl.java:718)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.initPartitions0(GridDhtPartitionTopologyImpl.java:405)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.beforeExchange(GridDhtPartitionTopologyImpl.java:569)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:844)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:573)
    at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1806)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
    at java.lang.Thread.run(Thread.java:744)


Also, when one of the Ignite server nodes joins the cluster, it throws a
"Failed to wait for partition map" error.

Re: Off-Heap On-Heap in Ignite-2.0.0

2017-06-13 Thread Megha Mittal
Denis,

Thanks for clearing that up. That might be the reason for the memory
difference.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Off-Heap-On-Heap-in-Ignite-2-0-0-tp13548p13683.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: IGFS Question

2017-06-13 Thread Ishan Jain
By starting another Ignite node I mean an Ignite node with a different
config. As seen in the Ignite data fabric example that streams words from a
file, we also have to use some event types.

On Wed, Jun 14, 2017 at 10:38 AM, Ishan Jain  wrote:

> Okay i start using file system api which ignite file system has. I start
> an ignite node and start loading data from it.
> Now i have to put data into normal ignite cache for using sql queries on
> it. Ignite node has been already started from the program in IGFS mode. How
> will i create a normal ignite cache which can be accessed remotely.
> Shouldn't i have to start another ignite node from the same program?This is
> really confusing. Please revert
>
> On Fri, Jun 9, 2017 at 11:00 AM, Yermek 
> wrote:
>
>> Yes, of course, you're right. I used a little wrong phrase, only by using
>> FileSystem API
>>
>> vkulichenko wrote
>> > IGFS cache can't and should not be accessed directly. Use FileSystem API
>> > for this.
>> >
>> > -Val
>>
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/IGFS-Question-tp13447p13547.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: IGFS Question

2017-06-13 Thread Ishan Jain
Okay, I start using the file system API that the Ignite file system provides.
I start an Ignite node and start loading data from it.
Now I have to put the data into a normal Ignite cache to run SQL queries on
it. The Ignite node has already been started from the program in IGFS mode.
How will I create a normal Ignite cache that can be accessed remotely?
Shouldn't I have to start another Ignite node from the same program? This is
really confusing. Please revert.

On Fri, Jun 9, 2017 at 11:00 AM, Yermek  wrote:

> Yes, of course, you're right. I used a little wrong phrase, only by using
> FileSystem API
>
> vkulichenko wrote
> > IGFS cache can't and should not be accessed directly. Use FileSystem API
> > for this.
> >
> > -Val
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/IGFS-Question-tp13447p13547.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


odd query plan with joins?

2017-06-13 Thread djardine
Hi,
I'm attempting to run a query that joins 2 large tables (one with 6M rows,
another with 80M rows). In the query plan I see a join "on 1=1", and
separately I see the filters for my join under the "where" clause. I'm not
sure if this is standard output in the query plan, or if it's doing a
ridiculously expensive join that combines every possible permutation between
the tables and then filters the result down. The query basically runs
forever, never returning, and eventually kills the server node it's running
on (maybe OOM?). I've tried this on both PARTITIONED and REPLICATED
clusters. The "property_id" fields are indexed.

The query is:

SELECT p.property_id,
       sum(cm.days_r)::real / (sum(cm.days_r) + sum(cm.days_a)) AS occ_rate,
       p.bedrooms, cm.period, p.room_type
FROM calendar_metric AS cm
JOIN "PropertyCache".property AS p ON p.property_id = cm.property_id
WHERE cm.days_r > 0
  AND p.bedrooms IS NOT NULL
  AND p.room_type = 'Entire home/apt'
  AND cm.period BETWEEN '2016-1-1' AND '2016-8-1'
  AND p.city_id = 59053
GROUP BY cm.period, p.room_type, p.bedrooms, p.property_id

The query plan shows:

SELECT
P__Z1.PROPERTY_ID AS __C0_0,
SUM(CM__Z0.DAYS_R) AS __C0_1,
P__Z1.BEDROOMS AS __C0_2,
CM__Z0.PERIOD AS __C0_3,
P__Z1.ROOM_TYPE AS __C0_4,
SUM(CM__Z0.DAYS_R) AS __C0_5,
SUM(CM__Z0.DAYS_A) AS __C0_6
FROM CalendarMetricCache.CALENDAR_METRIC CM__Z0
/* CalendarMetricCache.CALENDAR_METRIC_PERIOD_DAYS_R_IDX: PERIOD >= DATE
'2016-01-01'
AND PERIOD <= DATE '2016-08-01'
AND DAYS_R > 0
 */
/* WHERE (CM__Z0.DAYS_R > 0)
AND ((CM__Z0.PERIOD >= DATE '2016-01-01')
AND (CM__Z0.PERIOD <= DATE '2016-08-01'))
*/
INNER JOIN PropertyCache.PROPERTY P__Z1
/* PropertyCache.PROPERTY_CITY_ID_IDX: CITY_ID = 59053 */
ON 1=1
WHERE (P__Z1.PROPERTY_ID = CM__Z0.PROPERTY_ID)
AND ((P__Z1.CITY_ID = 59053)
AND (((CM__Z0.PERIOD >= DATE '2016-01-01')
AND (CM__Z0.PERIOD <= DATE '2016-08-01'))
AND ((P__Z1.ROOM_TYPE = 'Entire home/apt')
AND ((CM__Z0.DAYS_R > 0)
AND (P__Z1.BEDROOMS IS NOT NULL)
GROUP BY CM__Z0.PERIOD, P__Z1.ROOM_TYPE, P__Z1.BEDROOMS, P__Z1.PROPERTY_ID

SELECT
__C0_0 AS PROPERTY_ID,
(CAST(SUM(__C0_1) AS REAL) / (SUM(__C0_5) + SUM(__C0_6))) AS OCC_RATE,
__C0_2 AS BEDROOMS,
__C0_3 AS PERIOD,
__C0_4 AS ROOM_TYPE
FROM PUBLIC.__T0
/* CalendarMetricCache.merge_scan */
GROUP BY __C0_3, __C0_4, __C0_2, __C0_0



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/odd-query-plan-with-joins-tp13680.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Write behind using Grid Gain

2017-06-13 Thread Raymond Wilson
Denis,



Ah! Looks very interesting. Thanks for the pointer. :)



Raymond.



*From:* Denis Magda [mailto:dma...@apache.org]
*Sent:* Wednesday, June 14, 2017 9:41 AM
*To:* user@ignite.apache.org
*Subject:* Re: Write behind using Grid Gain



Raymond,



Then Ignite Persistent Store is exactly for your use case. Please refer to
this discussion on the dev list:

http://apache-ignite-developers.2346864.n4.nabble.com/GridGain-Donates-Persistent-Distributed-Store-To-ASF-Apache-Ignite-td16788.html#a16838

Also it was covered a bit in that webinar:

https://www.youtube.com/watch?v=bDrGueQ16UQ

The store should be released by the community in the near future.



—

Denis



On Jun 13, 2017, at 2:02 PM, Raymond Wilson 
wrote:



Hi Pavel,



It’s a little complicated. The system is essentially a DB in its own right;
actually it’s an IMDG a bit like Ignite, but developed 8 years ago to
fulfill a need we had.



Today, I am looking to modernize that system and rather than continuing to
build and maintain all the core ‘infrastructure’ features of an IMDG such
as clustering, messaging, enterprise caching etc, I am looking to see how
well Ignite fits by running a Proof of Concept project. It turns out it
fits quite well, largely because the architectural structure of both
systems (ie: IMDG) is well aligned in terms of the problems being solved.



The primary gap between the legacy system and IMDG is that IMDG does not
support persistence. The legacy system has a distributed cache that stores
objects that are aggregate collections (10’s of thousand’s) of relatively
simple spatial data records that are operated on by the clustered compute
engine. Sometimes billions of records need to be processed to satisfy a
single query. Your standard run of the mill SQL DB finds these sorts of
queries hard.



I suppose you could use another DB (MS-SQL, AWS RDS, etc.) to store those
aggregate blobs, but it seems like a bit of a misuse case when what I'm
really after is a persistence/storage layer. :)



Thanks,

Raymond.






Re: Write behind using Grid Gain

2017-06-13 Thread Denis Magda
Raymond,

Then Ignite Persistent Store is exactly for your use case. Please refer to this 
discussion on the dev list:
http://apache-ignite-developers.2346864.n4.nabble.com/GridGain-Donates-Persistent-Distributed-Store-To-ASF-Apache-Ignite-td16788.html#a16838

Also it was covered a bit in that webinar:
https://www.youtube.com/watch?v=bDrGueQ16UQ

The store should be released by the community in the near future.

—
Denis

> On Jun 13, 2017, at 2:02 PM, Raymond Wilson  
> wrote:
> 
> Hi Pavel,
>  
> It’s a little complicated. The system is essentially a DB in its own right; 
> actually it’s an IMDG a bit like Ignite, but developed 8 years ago to fulfill 
> a need we had. 
>  
> Today, I am looking to modernize that system and rather than continuing to 
> build and maintain all the core ‘infrastructure’ features of an IMDG such as 
> clustering, messaging, enterprise caching etc, I am looking to see how well 
> Ignite fits by running a Proof of Concept project. It turns out it fits quite 
> well, largely because the architectural structure of both systems (ie: IMDG) 
> is well aligned in terms of the problems being solved.
>  
> The primary gap between the legacy system and IMDG is that IMDG does not 
> support persistence. The legacy system has a distributed cache that stores 
> objects that are aggregate collections (10’s of thousand’s) of relatively 
> simple spatial data records that are operated on by the clustered compute 
> engine. Sometimes billions of records need to be processed to satisfy a 
> single query. Your standard run of the mill SQL DB finds these sorts of 
> queries hard.
>  
> I suppose you could use another DB (MS-SQL, AWS RDS, etc.) to store those 
> aggregate blobs, but it seems like a bit of a misuse case when what I'm 
> really after is a persistence/storage layer. :)
>  
> Thanks,
> Raymond.
RE: Write behind using Grid Gain

2017-06-13 Thread Raymond Wilson
Hi Pavel,



It’s a little complicated. The system is essentially a DB in its own right;
actually it’s an IMDG a bit like Ignite, but developed 8 years ago to
fulfill a need we had.



Today, I am looking to modernize that system and rather than continuing to
build and maintain all the core ‘infrastructure’ features of an IMDG such
as clustering, messaging, enterprise caching etc, I am looking to see how
well Ignite fits by running a Proof of Concept project. It turns out it
fits quite well, largely because the architectural structure of both
systems (ie: IMDG) is well aligned in terms of the problems being solved.



The primary gap between the legacy system and IMDG is that IMDG does not
support persistence. The legacy system has a distributed cache that stores
objects that are aggregate collections (10’s of thousand’s) of relatively
simple spatial data records that are operated on by the clustered compute
engine. Sometimes billions of records need to be processed to satisfy a
single query. Your standard run of the mill SQL DB finds these sorts of
queries hard.



I suppose you could use another DB (MS-SQL, AWS RDS, etc.) to store those
aggregate blobs, but it seems like a bit of a misuse case when what I'm
really after is a persistence/storage layer. :)



Thanks,

Raymond.





*From:* Pavel Tupitsyn [mailto:ptupit...@apache.org]
*Sent:* Wednesday, June 14, 2017 1:59 AM
*To:* user@ignite.apache.org
*Subject:* Re: Write behind using Grid Gain



Hi Raymond,



I think your use case fits well into traditional Ignite model of
write-through cache store with backing database.

Why do you want to avoid a DB? Do you plan to store data on disk directly
as a set of files?



Pavel



On Mon, Jun 12, 2017 at 2:14 AM, Raymond Wilson 
wrote:

Hi Pavel,



Thanks for the blog – it explains it quite well.



I have a slightly different use case where groups of records within a much
larger data set are clustered together for efficiency (ie: each of the
cached items in the Ignite grid cache has significant internal structure).
You can think of them as a large number of smallish files (a few Kb to a
few Mb), but file systems don’t like lots of small files.



I have a legacy implementation that houses these small files within a
single larger file, but wanted to know if there was a clean way of
supporting the same structure using the Ignite read/write through support,
perhaps with another system providing relatively transparent persistency
semantics but which does not use a DB to store the data.



Thanks,

Raymond.



*From:* Pavel Tupitsyn [mailto:ptupit...@apache.org]
*Sent:* Saturday, May 27, 2017 5:03 AM
*To:* user@ignite.apache.org


*Subject:* Re: Write behind using Grid Gain



I've decided to write a blog post, since this topic seems to be in demand:

https://ptupitsyn.github.io/Ado-Net-Cache-Store/



Code:

https://github.com/ptupitsyn/ignite-net-examples/tree/master/AdoNetCacheStore



Let me know if this helps!



On Fri, May 26, 2017 at 3:50 PM, Chetan D  wrote:

Thank you Pavel.



waiting for your response.



On Fri, May 26, 2017 at 6:03 PM, Pavel Tupitsyn 
wrote:

To give everyone context, this is not about GridGain, but about Apache
Ignite.

The blog post in question is
https://ptupitsyn.github.io/Entity-Framework-Cache-Store/



Chetan, I'll prepare an example with Ignite 2.0 / ado.net and post it some
time later.



Pavel



On Fri, May 26, 2017 at 2:32 PM, Chetan D  wrote:

++ User List



any help much appreciated.



Thanks And Regards

Chetan D

-- Forwarded message --
From: *Pavel Tupitsyn* 
Date: Fri, May 26, 2017 at 4:38 PM
Subject: Re: Write behind using Grid Gain
To: Chetan D 

Hi Chetan, can you please write this to our user list,
user@ignite.apache.org?

So that entire community can participate.



Thanks,

Pavel



On Fri, May 26, 2017 at 1:35 PM, Chetan D  wrote:

Hi Pavel Tupitsyn,



I have posted a comment in your blog as well (entity framework as ignite
.net store) regarding write behind using ignite.



I have been working on a project where i need to implement distributed
caching and i have been asked to look into grid gain.



This is is the first time i am working on caching and this is entirely new
topic for me.



The example which you have shared i was able to understand a little and the
sad part is even entity framework also i have never worked on.



It would be helpful if you can share me a simple example using ado.net
implementing read through, write through and write behind even a simple
table helps me understand the concept.
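Since a minimal read/write-through and write-behind illustration is asked for above, here is a plain-Java sketch of the write-behind pattern with an in-memory map standing in for the database. This is a conceptual illustration, not Ignite's API (in Ignite you would implement a `CacheStore` and enable write-behind on the cache configuration); all class and method names below are made up for the example.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Conceptual write-behind cache: puts update memory immediately and are
// flushed to the backing "store" asynchronously by a background task.
public class WriteBehindCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, String> store; // stands in for the database
    private final BlockingQueue<String> dirty = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService flusher =
        Executors.newSingleThreadScheduledExecutor();

    public WriteBehindCache(Map<String, String> store, long flushPeriodMs) {
        this.store = store;
        // Background flusher: this is what makes the writes "behind".
        flusher.scheduleAtFixedRate(this::flush, flushPeriodMs, flushPeriodMs,
            TimeUnit.MILLISECONDS);
    }

    /** Write-behind: the cache is updated now, the store later. */
    public void put(String key, String val) {
        cache.put(key, val);
        dirty.offer(key);
    }

    /** Read-through: fall back to the store on a cache miss. */
    public String get(String key) {
        return cache.computeIfAbsent(key, store::get);
    }

    /** Drains the dirty queue and writes the current values to the store. */
    public synchronized void flush() {
        for (String key; (key = dirty.poll()) != null; )
            store.put(key, cache.get(key));
    }

    /** Stops the flusher and performs one final synchronous flush. */
    public void close() {
        flusher.shutdown();
        flush();
    }

    public static void main(String[] args) {
        Map<String, String> db = new ConcurrentHashMap<>();
        WriteBehindCache c = new WriteBehindCache(db, 200);
        c.put("k1", "v1");
        System.out.println("cached: " + c.get("k1"));
        c.close();
        System.out.println("after close, stored: " + db.get("k1"));
    }
}
```

In Ignite itself the equivalent knobs live on the cache configuration (write-behind enabled flag, flush frequency, batch size), and the store implementation receives the batched writes; the sketch above only shows the shape of the pattern.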


Re: Grid/Cluster unique UUID possible with IgniteUuid?

2017-06-13 Thread Muthu
Hi Nikolai,

I looked at the code for this method earlier (reproduced below). The UUID
is generated once (via VM_ID) per cluster-node JVM, and the atomic long is
likewise local to the cluster-node JVM (unlike *igniteAtomicSequence*). Do
you think it's still okay to use it? I thought we at least need
*igniteAtomicSequence* for cluster-level uniqueness...

/** VM ID. */
public static final UUID VM_ID = UUID.randomUUID();

public static IgniteUuid randomUuid() {
    return new IgniteUuid(VM_ID, cntGen.incrementAndGet());
}

Regards,
Muthu
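The concern above can be checked with a plain-Java model of the same scheme (illustrative names, not Ignite classes): uniqueness across the cluster rests on every JVM drawing its own random 128-bit VM_ID, so the counter only needs to be JVM-local; a distributed sequence is needed only if a probabilistic guarantee is unacceptable.

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

// Plain-Java model of the IgniteUuid.randomUuid() scheme: a random
// per-JVM id plus a JVM-local counter. Two nodes collide only if their
// random UUIDs collide, which is negligibly unlikely; within one JVM
// the counter guarantees uniqueness outright.
public class VmScopedId {
    /** Generated once per JVM at class load, like Ignite's VM_ID. */
    public static final UUID VM_ID = UUID.randomUUID();

    /** JVM-local counter, like Ignite's cntGen. */
    private static final AtomicLong CNT = new AtomicLong();

    /** Returns an id unique within this JVM and, w.h.p., across JVMs. */
    public static String next() {
        return VM_ID + "-" + Long.toHexString(CNT.incrementAndGet());
    }

    public static void main(String[] args) {
        System.out.println(VmScopedId.next());
        System.out.println(VmScopedId.next());
    }
}
```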

On Tue, Jun 13, 2017 at 6:32 AM, Nikolai Tikhonov 
wrote:

> Muthu,
>
> Look at Ignite Uuid#randomUuid() method. I think it will provide needed
> guarantees for your case.
>
> On Mon, Jun 12, 2017 at 9:53 PM, Muthu  wrote:
>
>> Thanks Nikolai..this is what i am doing...not sure if this is too
>> much..what do you think..the goal is to make sure that a UUID is unique
>> across the entire application (the problem is each node that is part of the
>> cluster would be doing this for different entities that it owns)
>>
>> ...
>> ...
>> System.out.println(" in ObjectCacheMgrService.insertDepartment 
>> for dept : " + dept);
>> long t1 = System.currentTimeMillis();
>> *String uUID = new IgniteUuid(UUID.randomUUID(),
>> igniteAtomicSequence.incrementAndGet()).toString();*
>> long t2 = System.currentTimeMillis();
>> System.out.println("Time for UUID generation (millis) : " + (t2 - t1));
>> *dept.setId(uUID);*
>> * deptCache.getAndPut(uUID, dept);*
>> System.out.println(" in ObjectCacheMgrService.insertDepartment :
>> department  inserted successfully : " + dept);
>> ...
>> ...
>>
>> Regards,
>> Muthu
>>
>> On Mon, Jun 12, 2017 at 3:24 AM, Nikolai Tikhonov 
>> wrote:
>>
>>> Muthu,
>>>
>>> Yes, you can use IgniteUuid as a unique ID generator. What you use
>>> depends on your requirements: IgniteAtomicSequence takes one long and
>>> IgniteUuid takes 3 longs, but getting a new sequence range is a
>>> distributed operation. You need to decide which is more critical for you.
>>>
>>> On Fri, Jun 9, 2017 at 8:46 PM, Muthu  wrote:
>>>

 Missed adding this one...i know there is support for ID generation with
 IgniteAtomicSequence @ https://apacheignite.readme.io/docs/id-generator

 The question is which one should i use...i want to use this to generate
 unique ids for entities that are to be cached & persisted..

 Regards,
 Muthu


 On Fri, Jun 9, 2017 at 10:27 AM, Muthu 
 wrote:

> Hi Folks,
>
> Is it possible to generate a Grid/Cluster unique UUID using
> IgniteUuid. I looked at the source code & static factory method 
> *randomUuid
> *().
> It looks like it generates one with with a java.util.UUID (generated with
> its randomUUID) & an AutomicLong's incrementAndGet
>
> Can i safely assume that given that it uses a combination of UUID &
> long on the individual VMs that are part of the Grid/Cluster it will be
> unique or is there a better way?
>
> Regards,
> Muthu
>


>>>
>>
>


Re: NoClassDefFoundError org/h2/server/Service

2017-06-13 Thread murphyRic
Hi,
I subscribed to the mailing list now. Thank you.

I did not change anything related to the project; I was just trying to execute
it as-is.

I wanted to run the IgniteNodeStartup and I get this error.
I mentioned the h2 jar to note that the specific version I was using is the
latest, as suggested in some of the answers to the same kind of error on this
forum.
Here are the specific jar dependencies [ignite-spark 2.0.0 -->
ignite-indexing 2.0.0 --> h2 1.4.195] used in the project, which is why I
expect h2 to be on the classpath.

Please let me know if something is wrong here and suggest how I can
correct it.

Thanks




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/NoClassDefFoundError-org-h2-server-Service-tp13636p13661.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Apache Ignite, Memory Class Storage, GPUs, and all that

2017-06-13 Thread Denis Magda
Hello Steve,

Starting with Apache Ignite 2.0, the project is no longer considered an
in-memory-only technology.

The new virtual memory architecture that sits at the core of the platform
allows considering Ignite a memory-first (memory-optimized) computational
platform that distributes data and workloads across a cluster of machines,
storing them both in RAM and on disk.

The off-heap is the primary storage for your data. As for secondary storage,
you can use SSDs, Flash, Intel 3D XPoint, etc. I think GPUs can be integrated
with the virtual memory as well going forward.

Overall, with Ignite you can now gain in-memory performance with cluster-wide
durability on disk. This is one of the main things to keep in mind.

—
Denis

> On Jun 13, 2017, at 6:18 AM, steve.hostettler  
> wrote:
> 
> Hello,
> 
> Having to explain the choice of Ignite internally, I wonder what is the
> "official" position of Apache Ignite towards Storage Class Memory and using
> GPUs.
> 
> On the SCM story, I guess it is just another way of allocating/freeing
> memory in a kind of off-heap mode but on disk.
> 
> On the GPUs story, I guess is whether or not a library like jcuda can be
> used inside Ignite transparently.
> 
> Could you elaborate on these topics? I would like to have more details
> if possible.
> 
> 
> Best Regards
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Apache-Ignite-Memory-Class-Storage-GPUs-and-all-that-tp13646.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: Off-Heap On-Heap in Ignite-2.0.0

2017-06-13 Thread Denis Magda
Megha,

The objects are stored in deserialized form both in the heap (AI 1.x) and 
off-heap (AI 2.0). The difference is that when an object is in the Java heap we 
need to create an extra wrapper object around it so that it can be used by an 
application running on top of the JVM. Plus, there may be some extra costs to 
keeping data in the Java heap.

—
Denis

> On Jun 12, 2017, at 10:20 PM, Megha Mittal  wrote:
> 
> Hi Denis, thanks for answering. That completely cleared out my doubt. 
> 
> I would like to know one more thing. I was testing with ~10,000,000 (10
> million) records, each of ~1400 bytes. When I was trying to keep everything
> on-heap, I had to allocate around 15GB on-heap to store all these records.
> But when I switched off on-heap and kept everything off-heap, the same
> number of records fit in a 10 GB off-heap memory region. Does it imply that
> when Ignite saves on-heap it's in deserialized form, hence the exact memory
> (no. of records * size of one record) is required, but when it keeps data
> off-heap it is stored in serialized form, hence less memory than expected
> is required? Or is there some other reason for this memory requirement
> difference?
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Off-Heap-On-Heap-in-Ignite-2-0-0-tp13548p13637.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
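A quick back-of-envelope check of the numbers quoted above (a sketch only; the ~1400-byte record size is the poster's estimate, and real JVM overheads vary): 10 million records of ~1400 bytes are roughly 13 GiB of raw payload, so the ~15 GB on-heap figure leaves on the order of 2 GB for object headers, references, and alignment, while the ~10 GB off-heap figure would mean the actual per-record footprint in page memory is below the 1400-byte estimate.

```java
// Back-of-envelope arithmetic for the memory figures in this thread.
// The 1400-byte record size is the poster's estimate; real per-record
// footprints differ between the Java heap (headers, references,
// alignment) and Ignite's off-heap page format.
public class HeapEstimate {
    /** Raw payload in GiB for `records` records of `bytesPerRecord` bytes. */
    public static double rawPayloadGiB(long records, long bytesPerRecord) {
        return records * (double) bytesPerRecord / (1L << 30);
    }

    public static void main(String[] args) {
        // ~13.0 GiB of raw payload for 10M records of 1400 bytes each.
        System.out.printf("raw payload: %.1f GiB%n",
            rawPayloadGiB(10_000_000L, 1400L));
    }
}
```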



Re: ODBC driver issue

2017-06-13 Thread Riccardo Iacomini
Thank you for the reply Igor,

the error just changed into:

*pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib
> 'Apache Ignite' : file not found (0) (SQLDriverConnect)")*



The Ignite driver seems to be installed. Here's my /etc/odbcinst.ini:

[Apache Ignite]
> Description=Apache Ignite
> Driver=/usr/local/lib/libignite-odbc.so
> Setup=/usr/local/lib/libignite-odbc.so
> DriverODBCVer=03.00
> FileUsage=0
> UsageCount=3
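For what it's worth, once the driver is registered in /etc/odbcinst.ini as above, a DSN can also be defined in /etc/odbc.ini so clients connect by DSN name instead of spelling out the driver. The entry below is a hypothetical sketch: the DSN name is made up, and the Address/Cache values are just the ones used in this thread, not a verified configuration.

```ini
[IgniteDSN]
; Example DSN referencing the [Apache Ignite] driver registered in odbcinst.ini
Description = Apache Ignite example DSN
Driver      = Apache Ignite
Address     = localhost:10800
Cache       = cache1
```

With such an entry, the pyodbc call would become something like `pyodbc.connect('DSN=IgniteDSN')`.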




Riccardo Iacomini


*RDSLab*

On Tue, Jun 13, 2017 at 4:31 PM, Igor Sapego  wrote:

> Hi,
>
> Try adding /usr/local/lib/ to LD_LIBRARY_PATH evn variable.
>
> Best Regards,
> Igor
>
> On Tue, Jun 13, 2017 at 4:54 PM, Riccardo Iacomini <
> riccardo.iacom...@rdslab.com> wrote:
>
>> Hello,
>> I am trying to access Ignite 2.0 using the ODBC driver. I've followed the
>> guide and tried to access Ignite via Python using the pyodbc module:
>>
>>
>>> *import pyodbc**ignite = pyodbc.connect('DRIVER={Apache
>>> Ignite};ADDRESS=localhost:10800;CACHE=cache1')*
>>
>>
>> however I get:
>>
>> pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open
>>> lib '/usr/local/lib/libignite-odbc.so' : file not found (0)
>>> (SQLDriverConnect)")
>>
>>
>> The library is in fact located at that path; this is the directory listing of
>> */usr/local/lib:*
>>
>> drwxr-xr-x  5 root root 4096 giu 13 15:26 ./
>>> drwxr-xr-x 12 root root 4096 nov 22  2016 ../
>>> lrwxrwxrwx  1 root root   37 giu 13 15:24
>>> libignite-binary-2.0.0.19668.so.0 -> libignite-binary-2.0.0.19668.s
>>> o.0.0.0*
>>> -rwxr-xr-x  1 root root  1777000 giu 13 15:24
>>> libignite-binary-2.0.0.19668.so.0.0.0*
>>> -rw-r--r--  1 root root  4617182 giu 13 15:24 libignite-binary.a
>>> -rwxr-xr-x  1 root root 1089 giu 13 15:24
>>> libignite-binary.la*
>>> lrwxrwxrwx  1 root root   37 giu 13 15:24
>>> libignite-binary.so -> libignite-binary-2.0.0.19668.so.0.0.0*
>>> lrwxrwxrwx  1 root root   37 giu 13 15:24
>>> libignite-common-2.0.0.19668.so.0 -> libignite-common-2.0.0.19668.s
>>> o.0.0.0*
>>> -rwxr-xr-x  1 root root   648856 giu 13 15:24
>>> libignite-common-2.0.0.19668.so.0.0.0*
>>> -rw-r--r--  1 root root  1493756 giu 13 15:24 libignite-common.a
>>> -rwxr-xr-x  1 root root 1054 giu 13 15:24
>>> libignite-common.la*
>>> lrwxrwxrwx  1 root root   37 giu 13 15:24
>>> libignite-common.so -> libignite-common-2.0.0.19668.so.0.0.0*
>>> lrwxrwxrwx  1 root root   35 giu 13 15:24
>>> libignite-odbc-2.0.0.19668.so.0 -> libignite-odbc-2.0.0.19668.so.0.0.0*
>>> -rwxr-xr-x  1 root root  6581160 giu 13 15:24
>>> libignite-odbc-2.0.0.19668.so.0.0.0*
>>> -rw-r--r--  1 root root 18307190 giu 13 15:24 libignite-odbc.a
>>> -rwxr-xr-x  1 root root 1121 giu 13 15:24 libignite-odbc.la*
>>> lrwxrwxrwx  1 root root   35 giu 13 15:24 libignite-odbc.so
>>> -> libignite-odbc-2.0.0.19668.so.0.0.0*
>>> -rw-rw-r--  1 riccardo riccardo 46518354 feb 20 12:53 libntl.a
>>> -rwxr-xr-x  1 root root  966 giu 13 15:26 libodbccr.la*
>>> lrwxrwxrwx  1 root root   18 giu 13 15:26 libodbccr.so ->
>>> libodbccr.so.2.0.0*
>>> lrwxrwxrwx  1 root root   18 giu 13 15:26 libodbccr.so.2 ->
>>> libodbccr.so.2.0.0*
>>> -rwxr-xr-x  1 root root   507264 giu 13 15:26 libodbccr.so.2.0.0*
>>> -rwxr-xr-x  1 root root 1015 giu 13 15:26 libodbcinst.la*
>>> lrwxrwxrwx  1 root root   20 giu 13 15:26 libodbcinst.so ->
>>> libodbcinst.so.2.0.0*
>>> lrwxrwxrwx  1 root root   20 giu 13 15:26 libodbcinst.so.2
>>> -> libodbcinst.so.2.0.0*
>>> -rwxr-xr-x  1 root root   463392 giu 13 15:26
>>> libodbcinst.so.2.0.0*
>>> -rwxr-xr-x  1 root root  991 giu 13 15:26 libodbc.la*
>>> lrwxrwxrwx  1 root root   16 giu 13 15:26 libodbc.so ->
>>> libodbc.so.2.0.0*
>>> lrwxrwxrwx  1 root root   16 giu 13 15:26 libodbc.so.2 ->
>>> libodbc.so.2.0.0*
>>> -rwxr-xr-x  1 root root  2274544 giu 13 15:26 libodbc.so.2.0.0*
>>> drwxrwsr-x  4 root staff4096 nov 29  2016 python2.7/
>>> drwxrwsr-x  3 root staff4096 ott 21  2015 python3.5/
>>> drwxr-xr-x  3 root root 4096 giu 12 23:38 site_ruby/
>>
>>
>> Any suggestion on how to proceed?
>>
>> Thank you
>>
>> Riccardo Iacomini
>>
>>
>> *RDSLab*
>>
>
>


Re: SQL Query against BigDecimal value field

2017-06-13 Thread Igor Sapego
Hi,

We need a little bit more info. What are the expected and actual values?
Do JDBC, ODBC, and the API show the same value?

Best Regards,
Igor

On Sat, Jun 10, 2017 at 1:19 AM, pingzing  wrote:

> Anyone with experience? A simple query against a BigDecimal value field
> doesn't work.
> Also, querying from DBeaver via JDBC or Tableau via ODBC doesn't show the
> correct number. Am I missing something? I have no problem after switching
> the field to DOUBLE.
>
> SELECT SUM(SomeBigDecimalValueField) FROM SomeCache
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/SQL-Query-against-BigDecimal-value-field-tp13584.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: ODBC driver issue

2017-06-13 Thread Igor Sapego
Hi,

Try adding /usr/local/lib/ to the LD_LIBRARY_PATH env variable.

Best Regards,
Igor
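The suggestion above amounts to the following sketch; /usr/local/lib is the install path reported in this thread — adjust it if your prefix differs.

```shell
# Prepend the driver's directory to LD_LIBRARY_PATH so the dynamic linker
# (and the unixODBC driver manager) can resolve libignite-odbc.so and its
# dependencies. The :+ expansion avoids a trailing colon if the variable
# was previously empty.
export LD_LIBRARY_PATH="/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

To make this persistent, add the export line to ~/.profile or to the environment of whatever service hosts the ODBC application.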

On Tue, Jun 13, 2017 at 4:54 PM, Riccardo Iacomini <
riccardo.iacom...@rdslab.com> wrote:

> Hello,
> I am trying to access Ignite 2.0 using the ODBC driver. I've followed the
> guide, and tried to
> access Ignite via Python using the pyodbc module:
>
>
>> import pyodbc
>> ignite = pyodbc.connect('DRIVER={Apache Ignite};ADDRESS=localhost:10800;CACHE=cache1')
>
>
> however I get:
>
> pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib
>> '/usr/local/lib/libignite-odbc.so' : file not found (0)
>> (SQLDriverConnect)")
>
>
> The library is in fact located at that path; this is the directory listing of
> */usr/local/lib:*
>
> drwxr-xr-x  5 root root 4096 giu 13 15:26 ./
>> drwxr-xr-x 12 root root 4096 nov 22  2016 ../
>> lrwxrwxrwx  1 root root   37 giu 13 15:24
>> libignite-binary-2.0.0.19668.so.0 -> libignite-binary-2.0.0.19668.
>> so.0.0.0*
>> -rwxr-xr-x  1 root root  1777000 giu 13 15:24
>> libignite-binary-2.0.0.19668.so.0.0.0*
>> -rw-r--r--  1 root root  4617182 giu 13 15:24 libignite-binary.a
>> -rwxr-xr-x  1 root root 1089 giu 13 15:24 libignite-binary.la
>> *
>> lrwxrwxrwx  1 root root   37 giu 13 15:24 libignite-binary.so
>> -> libignite-binary-2.0.0.19668.so.0.0.0*
>> lrwxrwxrwx  1 root root   37 giu 13 15:24
>> libignite-common-2.0.0.19668.so.0 -> libignite-common-2.0.0.19668.
>> so.0.0.0*
>> -rwxr-xr-x  1 root root   648856 giu 13 15:24
>> libignite-common-2.0.0.19668.so.0.0.0*
>> -rw-r--r--  1 root root  1493756 giu 13 15:24 libignite-common.a
>> -rwxr-xr-x  1 root root 1054 giu 13 15:24 libignite-common.la
>> *
>> lrwxrwxrwx  1 root root   37 giu 13 15:24 libignite-common.so
>> -> libignite-common-2.0.0.19668.so.0.0.0*
>> lrwxrwxrwx  1 root root   35 giu 13 15:24
>> libignite-odbc-2.0.0.19668.so.0 -> libignite-odbc-2.0.0.19668.so.0.0.0*
>> -rwxr-xr-x  1 root root  6581160 giu 13 15:24
>> libignite-odbc-2.0.0.19668.so.0.0.0*
>> -rw-r--r--  1 root root 18307190 giu 13 15:24 libignite-odbc.a
>> -rwxr-xr-x  1 root root 1121 giu 13 15:24 libignite-odbc.la*
>> lrwxrwxrwx  1 root root   35 giu 13 15:24 libignite-odbc.so
>> -> libignite-odbc-2.0.0.19668.so.0.0.0*
>> -rw-rw-r--  1 riccardo riccardo 46518354 feb 20 12:53 libntl.a
>> -rwxr-xr-x  1 root root  966 giu 13 15:26 libodbccr.la*
>> lrwxrwxrwx  1 root root   18 giu 13 15:26 libodbccr.so ->
>> libodbccr.so.2.0.0*
>> lrwxrwxrwx  1 root root   18 giu 13 15:26 libodbccr.so.2 ->
>> libodbccr.so.2.0.0*
>> -rwxr-xr-x  1 root root   507264 giu 13 15:26 libodbccr.so.2.0.0*
>> -rwxr-xr-x  1 root root 1015 giu 13 15:26 libodbcinst.la*
>> lrwxrwxrwx  1 root root   20 giu 13 15:26 libodbcinst.so ->
>> libodbcinst.so.2.0.0*
>> lrwxrwxrwx  1 root root   20 giu 13 15:26 libodbcinst.so.2 ->
>> libodbcinst.so.2.0.0*
>> -rwxr-xr-x  1 root root   463392 giu 13 15:26
>> libodbcinst.so.2.0.0*
>> -rwxr-xr-x  1 root root  991 giu 13 15:26 libodbc.la*
>> lrwxrwxrwx  1 root root   16 giu 13 15:26 libodbc.so ->
>> libodbc.so.2.0.0*
>> lrwxrwxrwx  1 root root   16 giu 13 15:26 libodbc.so.2 ->
>> libodbc.so.2.0.0*
>> -rwxr-xr-x  1 root root  2274544 giu 13 15:26 libodbc.so.2.0.0*
>> drwxrwsr-x  4 root staff4096 nov 29  2016 python2.7/
>> drwxrwsr-x  3 root staff4096 ott 21  2015 python3.5/
>> drwxr-xr-x  3 root root 4096 giu 12 23:38 site_ruby/
>
>
> Any suggestion on how to proceed?
>
> Thank you
>
> Riccardo Iacomini
>
>
> *RDSLab*
>


Re: Ignite REST SERVICE

2017-06-13 Thread ezhuravlev
Hi,

Explicit configuration is not required; the connector starts up automatically
and listens on port 8080.

Which HTTP code do you get when you access it via one of the methods from the
API? For example: curl http://:8080/ignite?cmd=version.

Also, try accessing the REST API directly from the server; for example,
connect to it via ssh and run the curl command.
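The check above can be sketched as follows; HOST is a placeholder (localhost here, since the point is to run it on the server itself and rule out firewall or binding issues), and the curl line is commented out so the sketch is self-contained.

```shell
# Probe the Ignite REST connector's version endpoint. Running this over ssh
# on the server itself bypasses any external firewall.
HOST="localhost"
URL="http://${HOST}:8080/ignite?cmd=version"
echo "$URL"
# curl -sS -w '\nHTTP %{http_code}\n' "$URL"   # uncomment where Ignite runs
```

A 200 response with a JSON body confirms the connector is up; a connection refusal points at the connector not listening, while a timeout from outside but success locally points at the network.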

Evgenii



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-REST-SERVICE-tp13555p13653.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Write behind using Grid Gain

2017-06-13 Thread Pavel Tupitsyn
Hi Raymond,

I think your use case fits well into the traditional Ignite model of a
write-through cache store with a backing database.
Why do you want to avoid a DB? Do you plan to store data on disk directly
as a set of files?

Pavel
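For readers new to the pattern this thread keeps referring to, here is a minimal conceptual sketch of write-behind semantics: writes land in memory immediately and are persisted to the backing store asynchronously in batches. This is an illustration only, not the Ignite CacheStore API; the class and method names are invented.

```python
import queue
import threading
import time

class WriteBehindCache:
    """Toy write-behind cache: put() updates the in-memory map at once,
    while a background worker flushes batched writes to the store."""

    def __init__(self, persist, flush_interval=0.05):
        self._map = {}
        self._queue = queue.Queue()
        self._persist = persist            # backing-store writer, e.g. a DB upsert
        self._interval = flush_interval
        self._stop = threading.Event()
        self._worker = threading.Thread(target=self._flush_loop, daemon=True)
        self._worker.start()

    def put(self, key, value):
        self._map[key] = value             # fast in-memory write
        self._queue.put((key, value))      # persistence deferred to the worker

    def get(self, key):
        return self._map.get(key)

    def _flush_loop(self):
        while not self._stop.is_set():
            batch = []
            try:
                batch.append(self._queue.get(timeout=self._interval))
                while True:                # drain whatever else is queued
                    batch.append(self._queue.get_nowait())
            except queue.Empty:
                pass
            if batch:
                self._persist(batch)       # one batched call to the store

    def close(self):
        self._stop.set()
        self._worker.join()

store = []
cache = WriteBehindCache(store.extend)
cache.put("dept:1", {"name": "engineering"})
time.sleep(0.3)                            # give the flusher time to run
print(cache.get("dept:1"), store)
cache.close()
```

The trade-off this makes visible: reads and writes never wait on the store, but data queued and not yet flushed is lost on a crash — which is exactly why Ignite's write-behind mode is paired with configurable flush sizes and intervals.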

On Mon, Jun 12, 2017 at 2:14 AM, Raymond Wilson 
wrote:

> Hi Pavel,
>
>
>
> Thanks for the blog – it explains it quite well.
>
>
>
> I have a slightly different use case where groups of records within a much
> larger data set are clustered together for efficiency (ie: each of the
> cached items in the Ignite grid cache has significant internal structure).
> You can think of them as a large number of smallish files (a few Kb to a
> few Mb), but file systems don’t like lots of small files.
>
>
>
> I have a legacy implementation that houses these small files within a
> single larger file, but wanted to know if there was a clean way of
> supporting the same structure using the Ignite read/write through support,
> perhaps with another system providing relatively transparent persistency
> semantics but which does not use a DB to store the data.
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Pavel Tupitsyn [mailto:ptupit...@apache.org]
> *Sent:* Saturday, May 27, 2017 5:03 AM
> *To:* user@ignite.apache.org
>
> *Subject:* Re: Write behind using Grid Gain
>
>
>
> I've decided to write a blog post, since this topic seems to be in demand:
>
> https://ptupitsyn.github.io/Ado-Net-Cache-Store/
>
>
>
> Code:
>
> https://github.com/ptupitsyn/ignite-net-examples/tree/
> master/AdoNetCacheStore
>
>
>
> Let me know if this helps!
>
>
>
> On Fri, May 26, 2017 at 3:50 PM, Chetan D  wrote:
>
> Thank you Pavel.
>
>
>
> waiting for your response.
>
>
>
> On Fri, May 26, 2017 at 6:03 PM, Pavel Tupitsyn 
> wrote:
>
> To give everyone context, this is not about GridGain, but about Apache
> Ignite.
>
> The blog post in question is https://ptupitsyn.github.
> io/Entity-Framework-Cache-Store/
>
>
>
> Chetan, I'll prepare an example with Ignite 2.0 / ado.net and post it
> some time later.
>
>
>
> Pavel
>
>
>
> On Fri, May 26, 2017 at 2:32 PM, Chetan D  wrote:
>
> ++ User List
>
>
>
> any help much appreciated.
>
>
>
> Thanks And Regards
>
> Chetan D
>
> -- Forwarded message --
> From: *Pavel Tupitsyn* 
> Date: Fri, May 26, 2017 at 4:38 PM
> Subject: Re: Write behind using Grid Gain
> To: Chetan D 
>
> Hi Chetan, can you please write this to our user list,
> user@ignite.apache.org?
>
> So that entire community can participate.
>
>
>
> Thanks,
>
> Pavel
>
>
>
> On Fri, May 26, 2017 at 1:35 PM, Chetan D  wrote:
>
> Hi Pavel Tupitsyn,
>
>
>
> I have posted a comment in your blog as well (entity framework as ignite
> .net store) regarding write behind using ignite.
>
>
>
> I have been working on a project where I need to implement distributed
> caching, and I have been asked to look into GridGain.
>
>
>
> This is the first time I am working on caching, and it is an entirely new
> topic for me.
>
>
>
> I was able to understand the example you shared a little; the sad part is
> that I have never worked with Entity Framework either.
>
>
>
> It would be helpful if you could share a simple example using ado.net
> implementing read-through, write-through, and write-behind; even a simple
> table would help me understand the concept.


Re: NoClassDefFoundError org/h2/server/Service

2017-06-13 Thread ezhuravlev
Hi,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply. 

This example works for me without problems. Did you change something in it?

Why did you mention the h2 jar version? Did you add the h2 jar to the classpath? Why?

Regards,
Evgenii



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/NoClassDefFoundError-org-h2-server-Service-tp13636p13650.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Automatic Persistence : Unable to bring up cache with web console generated models ("org.apache.ignite.IgniteCheckedException: Failed to register query type" exception is thrown)

2017-06-13 Thread Alexey Kuznetsov
Muthu,

Please create separate threads on the user list for the problems you
mentioned: code generation (if needed), Spring transactions, JDBC.

I'm getting lost in this "big" number of text lines.

-- 
Alexey Kuznetsov


Re: swift store as secondary file system

2017-06-13 Thread Nikolai Tikhonov
I got it! If you implement it yourself, don't be shy about sharing your
experience with the community. ;)

On Mon, Jun 12, 2017 at 7:23 PM, Antonio Si  wrote:

> Thanks Nikolai. I am wondering if anyone has done something similar.
>
> Thanks.
>
> Antonio.
>
> On Mon, Jun 12, 2017 at 3:30 AM, Nikolai Tikhonov 
> wrote:
>
>> Hi, Antonio!
>>
>> You can implement your own CacheStore which will propagate data to
>> Swift. Or do you mean some other integration with this product?
>>
>> On Sat, Jun 10, 2017 at 9:04 AM, Antonio Si  wrote:
>>
>>> Hi Alexey,
>>>
>>> I meant a swift object storage: https://wiki.openstack.org/wiki/Swift
>>>
>>> Thanks.
>>>
>>> Antonio.
>>>
>>>
>>>
>>> On Fri, Jun 9, 2017 at 6:38 PM, Alexey Kuznetsov 
>>> wrote:
>>>
 Hi, Antonio!

 What is a "swift store"?
 Could you give a link?

 On Sat, Jun 10, 2017 at 7:32 AM, Antonio Si 
 wrote:

> Hi,
>
> Is there a secondary file system implementation for a swift store?
>
> Thanks.
>
> Antonio.
>



 --
 Alexey Kuznetsov

>>>
>>>
>>
>


Re: Grid/Cluster unique UUID possible with IgniteUuid?

2017-06-13 Thread Nikolai Tikhonov
Muthu,

Look at the IgniteUuid#randomUuid() method. I think it will provide the
guarantees needed for your case.

On Mon, Jun 12, 2017 at 9:53 PM, Muthu  wrote:

> Thanks Nikolai..this is what i am doing...not sure if this is too
> much..what do you think..the goal is to make sure that a UUID is unique
> across the entire application (the problem is each node that is part of the
> cluster would be doing this for different entities that it owns)
>
> ...
> ...
> System.out.println(" in ObjectCacheMgrService.insertDepartment 
> for dept : " + dept);
> long t1 = System.currentTimeMillis();
> String uUID = new IgniteUuid(UUID.randomUUID(),
> igniteAtomicSequence.incrementAndGet()).toString();
> long t2 = System.currentTimeMillis();
> System.out.println("Time for UUID generation (millis) : " + (t2 - t1));
> dept.setId(uUID);
> deptCache.getAndPut(uUID, dept);
> System.out.println(" in ObjectCacheMgrService.insertDepartment :
> department inserted successfully : " + dept);
> ...
> ...
>
> Regards,
> Muthu
>
> On Mon, Jun 12, 2017 at 3:24 AM, Nikolai Tikhonov 
> wrote:
>
>> Muthu,
>>
>> Yes, you can use IgniteUuid as a unique ID generator. Which one to use
>> depends on your requirements: IgniteAtomicSequence takes one long while
>> IgniteUuid takes three longs, but getting a new sequence range is a
>> distributed operation. You need to decide which is more critical for you.
>>
>> On Fri, Jun 9, 2017 at 8:46 PM, Muthu  wrote:
>>
>>>
>>> Missed adding this one... I know there is support for ID generation with
>>> IgniteAtomicSequence @ https://apacheignite.readme.io/docs/id-generator
>>>
>>> The question is which one should I use... I want to use this to generate
>>> unique IDs for entities that are to be cached & persisted.
>>>
>>> Regards,
>>> Muthu
>>>
>>>
>>> On Fri, Jun 9, 2017 at 10:27 AM, Muthu 
>>> wrote:
>>>
 Hi Folks,

 Is it possible to generate a Grid/Cluster-unique UUID using IgniteUuid?
 I looked at the source code & the static factory method randomUuid().
 It looks like it generates one with a java.util.UUID (generated with its
 randomUUID) & an AtomicLong's incrementAndGet.

 Can I safely assume that, given that it uses a combination of UUID & long
 on the individual VMs that are part of the Grid/Cluster, it will be
 unique, or is there a better way?

 Regards,
 Muthu

>>>
>>>
>>
>
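The scheme discussed in this thread — a random UUID combined with a per-VM incrementing counter — can be sketched language-independently. This Python illustration is conceptual only; the names are invented, not Ignite API.

```python
import itertools
import uuid

# Stands in for the AtomicLong / IgniteAtomicSequence used in the thread.
_local_seq = itertools.count(1)

def grid_unique_id() -> str:
    # Even in the astronomically unlikely event that two nodes drew the
    # same random UUID, the appended local counter still keeps every id
    # generated on one VM distinct from the others it generates.
    return f"{uuid.uuid4()}-{next(_local_seq)}"

ids = {grid_unique_id() for _ in range(10_000)}
print(len(ids))  # 10000 distinct ids
```

The counter alone guarantees local uniqueness and the UUID part makes cross-node collisions negligible, which is the same reasoning behind IgniteUuid#randomUuid().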


Apache Ignite, Memory Class Storage, GPUs, and all that

2017-06-13 Thread steve.hostettler
Hello,

Having to explain the choice of Ignite internally, I wonder what the
"official" position of Apache Ignite is towards Storage Class Memory (SCM)
and using GPUs.

On the SCM story, I guess it is just another way of allocating/freeing
memory in a kind of off-heap mode, but on disk.

On the GPU story, the question is whether or not a library like jcuda can
be used inside Ignite transparently.

Could you elaborate on these topics? I would like to have more details
if possible.


Best Regards



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Apache-Ignite-Memory-Class-Storage-GPUs-and-all-that-tp13646.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Failed to wait for initial partition map exchange

2017-06-13 Thread Vladimir
INFO  2017-06-13 20:36:22 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[OS: Linux
4.1.12-61.1.18.el7uek.x86_64 amd64]]
INFO  2017-06-13 20:36:22 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[PID: 7608]]
[20:36:22] VM information: Java(TM) SE Runtime Environment 1.8.0_102-b14
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.102-b14
INFO  2017-06-13 20:36:22 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[Language runtime: Java
Platform API Specification ver. 1.8]]
INFO  2017-06-13 20:36:22 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[VM information: Java(TM) SE
Runtime Environment 1.8.0_102-b14 Oracle Corporation Java HotSpot(TM) 64-Bit
Server VM 25.102-b14]]
INFO  2017-06-13 20:36:22 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[VM total memory: 2.1GB]]
INFO  2017-06-13 20:36:22 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[Remote Management [restart:
off, REST: on, JMX (remote: off)]]]
INFO  2017-06-13 20:36:22 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[System cache's MemoryPolicy
size is configured to 40 MB. Use MemoryConfiguration.systemCacheMemorySize
property to change the setting.]]
INFO  2017-06-13 20:36:22 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[Configured caches [in
'default' memoryPolicy: ['ignite-sys-cache', 'ignite-atomics-sys-cache'
INFO  2017-06-13 20:36:22 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[Local node user attribute
[IgSupport_LogicClusterGroups=com.bpcbt.common.support.ignite.beans.IgLogicClusterGroups@0]]]
WARN  2017-06-13 20:36:22 [pub-#14%svip%]
org.apache.ignite.internal.GridDiagnostic - [[Initial heap size is 154MB
(should be no less than 512MB, use -Xms512m -Xmx512m).]]
[20:36:22] Initial heap size is 154MB (should be no less than 512MB, use
-Xms512m -Xmx512m).
[20:36:23] Configured plugins:
INFO  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.internal.processors.plugin.IgnitePluginProcessor -
[[Configured plugins:]]
[20:36:23]   ^-- None
INFO  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.internal.processors.plugin.IgnitePluginProcessor - [[  ^--
None]]
[20:36:23] 
INFO  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.internal.processors.plugin.IgnitePluginProcessor - [[]]
INFO  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi - [[Successfully
bound communication NIO server to TCP port [port=9343, locHost=/127.0.0.1,
selectorsCnt=4, selectorSpins=0, pairedConn=false]]]
WARN  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi - [[Message
queue limit is set to 0 which may lead to potential OOMEs when running cache
operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth
on sender and receiver sides.]]
[20:36:23] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
WARN  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.spi.checkpoint.noop.NoopCheckpointSpi - [[Checkpoints are
disabled (to enable configure any GridCheckpointSpi implementation)]]
WARN  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.internal.managers.collision.GridCollisionManager -
[[Collision resolution is disabled (all jobs will be activated upon
arrival).]]
[20:36:23] Security status [authentication=off, tls/ssl=off]
INFO  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[Security status
[authentication=off, tls/ssl=off]]]
INFO  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestProtocol
- [[Command protocol successfully started [name=TCP binary,
host=0.0.0.0/0.0.0.0, port=11214]]]
INFO  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[Non-loopback local IPs:
192.168.122.1, 192.168.209.65]]
INFO  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.internal.IgniteKernal%svip - [[Enabled local MACs:
0021F6321229, 5254000A6937]]
INFO  2017-06-13 20:36:23 [localhost-startStop-1]
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - [[Successfully bound
to TCP port [port=9463, localHost=/127.0.0.1,
locNodeId=7fc5ea30-913d-4ff5-932b-62d81c6027db]]]
INFO  2017-06-13 20:36:24 [localhost-startStop-1]
org.apache.ignite.internal.processors.cache.GridCacheProcessor - [[Started
cache [name=ignite-sys-cache, memoryPolicyName=sysMemPlc, mode=REPLICATED]]]
INFO  2017-06-13 20:36:24 [localhost-startStop-1]
org.apache.ignite.internal.processors.cache.GridCacheProcessor - [[Started
cache [name=ignite-atomics-sys-cache, memoryPolicyName=sysMemPlc,
mode=REPLICATED]]]
INFO  2017-06-13 

Re: Data Analysis and visualization

2017-06-13 Thread Ishan Jain
The size would be very large, as stock prices would be streamed every hour.

On Tue, Jun 13, 2017 at 12:05 PM, Jörn Franke  wrote:

> What is the size of the data?
> For me it looks more like ORC or Parquet would be enough.
>
> I do not see any specific in-memory requirements here.
>
> On 12. Jun 2017, at 09:59, Ishan Jain  wrote:
>
> I need to just get the price of a stock which is stored in hdfs with
> timestamp and make a graph with the prices of that stock over time.
>
> On Mon, Jun 12, 2017 at 1:03 PM, Jörn Franke  wrote:
>
>> First you need the user requirements - without them answering your
>> questions will be difficult
>>
>> > On 12. Jun 2017, at 07:08, ishan-jain  wrote:
>> >
>> > I am new to Big Data; I've just been working with it for a month.
>> > I have HDFS data of stock prices. I need to perform data analysis (maybe
>> > some ML) and visualizations (graphs and charts). For that I need
>> > MapReduce functions. Which approach should I use?
>> > 1. Stream data from IGFS into an Ignite cache and work on it?
>> > 2. Use Hive with Tez and LLAP. (Should I use it with Ignite or
>> > independently, directly on HDFS? No info available on the net.)
>> > 3. Use Presto. (Which is the better variant, Hive or Presto?)
>> > 4. Some other fast way with IGFS if possible.
>> > 5. Also, which open source tools should I use to accomplish this?
>> > Any help would be appreciated.
>> >
>> >
>> >
>> > --
>> > View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Data-Analysis-and-visualization-tp13614.html
>> > Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: How to operate with cache<*Key, *> or cache<AffinityKey<...>,*>?

2017-06-13 Thread Artёm Basov
Hi, Valentine.

Thanks for your response! It's a shame I overlooked DML.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-operate-with-cache-Key-or-cache-AffinityKey-tp13561p13642.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Data Analysis and visualization

2017-06-13 Thread Jörn Franke
What is the size of the data? 
For me it looks more like ORC or Parquet would be enough.

I do not see any specific in-memory requirements here.

> On 12. Jun 2017, at 09:59, Ishan Jain  wrote:
> 
> I need to just get the price of a stock which is stored in hdfs with 
> timestamp and make a graph with the prices of that stock over time.
> 
>> On Mon, Jun 12, 2017 at 1:03 PM, Jörn Franke  wrote:
>> First you need the user requirements - without them answering your questions 
>> will be difficult
>> 
>> > On 12. Jun 2017, at 07:08, ishan-jain  wrote:
>> >
>> > I am new to Big Data; I've just been working with it for a month.
>> > I have HDFS data of stock prices. I need to perform data analysis (maybe
>> > some ML) and visualizations (graphs and charts). For that I need
>> > MapReduce functions. Which approach should I use?
>> > 1. Stream data from IGFS into an Ignite cache and work on it?
>> > 2. Use Hive with Tez and LLAP. (Should I use it with Ignite or
>> > independently, directly on HDFS? No info available on the net.)
>> > 3. Use Presto. (Which is the better variant, Hive or Presto?)
>> > 4. Some other fast way with IGFS if possible.
>> > 5. Also, which open source tools should I use to accomplish this?
>> > Any help would be appreciated.
>> >
>> >
>> >
>> > --
>> > View this message in context: 
>> > http://apache-ignite-users.70518.x6.nabble.com/Data-Analysis-and-visualization-tp13614.html
>> > Sent from the Apache Ignite Users mailing list archive at Nabble.com.
> 


Re: System Parameters to improve CPU utilization

2017-06-13 Thread rishi007bansod
Hi,
  We are loading data into memory in parallel using multiple Ignite
instances (from Kafka to Ignite) on a single node. While caching, CPU
utilization does not go above 70%. How can we improve this?

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/System-Parameters-to-improve-CPU-utilization-tp13562p13640.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Data Analysis and visualization

2017-06-13 Thread ishan-jain
I basically need remote SQL query access from tools like Tableau or
Zeppelin, plus fast MapReduce functions.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-Analysis-and-visualization-tp13614p13639.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.